Path: eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Bart <bc@freeuk.com>
Newsgroups: comp.lang.c
Subject: Re: else ladders practice
Date: Mon, 25 Nov 2024 20:19:04 +0000
Organization: A noiseless patient Spider
Lines: 53
Message-ID: <vi2m3o$2vspa$1@dont-email.me>
References: <3deb64c5b0ee344acd9fbaea1002baf7302c1e8f@i2pn2.org>
 <vg37nr$3bo0c$1@dont-email.me> <vg3b98$3cc8q$1@dont-email.me>
 <vg5351$3pada$1@dont-email.me> <vg62vg$3uv02$1@dont-email.me>
 <vgd3ro$2pvl4$1@paganini.bofh.team> <vgdc4q$1ikja$1@dont-email.me>
 <vgdt36$2r682$2@paganini.bofh.team> <vge8un$1o57r$3@dont-email.me>
 <vgpi5h$6s5t$1@paganini.bofh.team> <vgtsli$1690f$1@dont-email.me>
 <vhgr1v$2ovnd$1@paganini.bofh.team> <vhic66$1thk0$1@dont-email.me>
 <vhins8$1vuvp$1@dont-email.me> <vhj7nc$2svjh$1@paganini.bofh.team>
 <vhje8l$2412p$1@dont-email.me> <86y117qhc8.fsf@linuxsc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 25 Nov 2024 21:19:04 +0100 (CET)
Injection-Info: dont-email.me; posting-host="c80a36c81a7479915291e305ba49d1d4";
 logging-data="3142442"; mail-complaints-to="abuse@eternal-september.org";
 posting-account="U2FsdGVkX19CdEM+GGkSaUekZbuNGsNe"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:pU449tonpnk3b0aoFuklXLn+MDY=
In-Reply-To: <86y117qhc8.fsf@linuxsc.com>
Content-Language: en-GB

On 25/11/2024 18:49, Tim Rentsch wrote:
> Bart <bc@freeuk.com> writes:
>
>> It's funny how nobody seems to care about the speed of compilers
>> (which can vary by 100:1), but for the generated programs, the 2:1
>> speedup you might get by optimising it is vital!
>
> I think most people would rather take this path (these times
> are actual measured times of a recently written program):
>
>    compile time:      1 second
>    program run time:  ~7 hours
>
> than this path (extrapolated using the ratios mentioned above):
>
>    compile time:      0.01 second
>    program run time:  ~14 hours

I'm trying to think of a computationally intensive app that would run
non-stop for several hours without interaction.

If you dig back through the thread, you will see that I am not against
compiling with optimisations for production code. But for very frequent
routine builds I want compilation as fast as possible.

For a task like your example, you would spend some time testing on
shorter runs and finding the best algorithm. Once you feel it's the
best, /then/ you can think about getting it optimised. At that point it
hardly matters how long the compile takes, if the program is going to
run for hours anyway.

I thought of one artificial example: a C program to display the
Fibonacci sequence for 1 to 100, calling the naive recursive function
for each fib(i). I compiled it with gcc -O3 and set it going.

While it was doing that, I set up the same test in my interpreted
language. It was much slower, obviously. So I added memoisation. Now it
showed all 100 values instantly (the C version, meanwhile, was in the
low 50s).

I noticed, however, that it overflowed the 64-bit range at around
fib(93) (as the C version would do eventually). So I tweaked my 'slow'
version to use bignum values. Then I tweaked it again to show the first
10,000 values. At this point, the optimised C was still in the mid 50s.

The point is, for a task like this, you do as much as you can to bring
down the runtime yourself; the right algorithmic choices can reduce it
by a magnitude or two. Adding -O3 at the end is a nice bonus speedup,
but that's all it is.