From: David Brown
Newsgroups: comp.lang.c
Subject: Re: Baby X is bor nagain
Date: Mon, 24 Jun 2024 17:09:25 +0200

On 24/06/2024 16:00, bart wrote:
> On 24/06/2024 14:09, Michael S wrote:
>> On Fri, 21 Jun 2024 22:47:46 +0100
>> bart wrote:
>>
>>> On 21/06/2024 14:34, David Brown wrote:
>>>> On 21/06/2024 12:42, bart wrote:
>>>>> On 21/06/2024 10:46, David Brown wrote:
>>>>>>
>>>>>> I understand your viewpoint and motivation.  But my own
>>>>>> experience is mostly different.
>>>>>>
>>>>>> First, to get it out of the way, there's the speed of
>>>>>> compilation.  While heavy optimisation (-O3) can take noticeably
>>>>>> longer, I never see -O0 as being in any noticeable way faster for
>>>>>> compilation than -O1 or even -O2.
>>>>>
>>>>> Absolute time or relative?
>>>>
>>>> Both.
>>>>
>>>>> For me, optimised options with gcc always take longer:
>>>>
>>>> Of course.  But I said it was not noticeable - it does not make
>>>> enough difference in speed for it to be worth choosing.
>>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll        # from cold
>>>>>    TM: 3.85
>>>>
>>>> Cold build times are irrelevant to development - when you are
>>>> working on a project, all the source files and all your compiler
>>>> files are in the PC's cache.
>>>>
>>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll
>>>>>    TM: 0.31
>>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll -O2
>>>>>    TM: 0.83
>>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll -O3
>>>>>    TM: 0.93
>>>>>    C:\c>dir bignum.dll
>>>>>    21/06/2024  11:14            35,840 bignum.dll
>>>>
>>>> Any build time under a second is as good as instant.
>>>>
>>>> I tested on a real project, not a single file.  It has 158 C files
>>>> and about 220 header files.  And I ran it on my old PC, without any
>>>> "tricks" that you dislike so much, doing full clean re-builds.  The
>>>> files are actually all compiled twice, building two variants of the
>>>> binary.
>>>>
>>>> With -O2, it took 34.3 seconds to build.  With -O1, it took 33.4
>>>> seconds.  With -O0, it took 30.8 seconds.
>>>>
>>>> So that is a 15% difference for full builds.  In practice, of
>>>> course, full rebuilds are rarely needed, and most builds after
>>>> changes to the source are within a second or so.
>>>
>>> Then there's something very peculiar about your codebase.
>>>
>>
>> To me it looks more likely that your codebase is very unusual, rather
>> than David's.
>>
>> In order to get meaningful measurements, I took an embedded project
>> that is significantly bigger than average by my standards.  Here are
>> the times for a full parallel rebuild (make -j5) on a relatively old
>> computer (4-core Xeon E3-1271 v3).
>>
>> Option   time(s)   -g time   text size
>> -O0      13.1      13.3      631648
>> -Os      13.6      14.1      424016
>> -O1      13.5      13.7      455728
>> -O2      14.0      14.1      450056
>> -O3      14.0      14.6      525380
>>
>> The difference in time between the different -O settings in my
>> measurements is even smaller than that reported by David Brown.  That
>> can be attributed to an older compiler (gcc 4.1.2).  Another
>> difference is that this compiler runs under Cygwin, which is
>> significantly slower than both native Linux and native Windows.  That
>> causes relatively higher make overhead and a longer link.
>
> I don't know why Cygwin would make much difference; the native code is
> still running on the same processor.
>

Cygwin, especially older Cygwin, is very slow for all file access and
all process control, because it tries to emulate POSIX as closely as
possible on an OS that has only a fraction of the necessary features.
gcc is not a monolithic tool - it is a driver that controls multiple
processes and accesses a fairly large number of files.  So Cygwin-based
gcc builds will spend a considerable amount of time on this sort of
thing rather than on actual processor-bound compiler work.  I am
confident that Michael would find a mingw/mingw64-based build
significantly faster, since that has a far thinner (almost transparent)
emulation layer.  And it would be a good deal faster again under Linux
on the same hardware, as that has more efficient file handling.

(I'm not suggesting Michael change for this project - for serious
embedded work, repeatable builds and consistency of toolchains are
generally far more important than build times.  But I presume he'll use
newer and better tools for new projects.)

> However, is there any way of isolating the compilation time (turning
> .c files into .o files) from 'make' and the linker?

Why would anyone want to do that?  At times, it can be useful to do
partial builds, but compilation alone is not particularly useful.
> Failing that, can you compile just one module in isolation (.c to .o)
> with -O0 and -O2, or is that not possible?
>
> Those throughputs don't look that impressive for a parallel build on
> what sounds like a high-spec machine.

How can you possibly judge that when you have no idea how big the
project is?

>
>> If I had "native" tools then all the times would likely be shorter by
>> a few seconds, and the difference between -O0 and -O3 would be close
>> to 10%.
>
> So two people are now saying that all the many dozens of extra passes
> and the extra analysis that gcc -O2/-O3 has to do, compared with the
> basic front-end work that every toy compiler needs to do (and does
> quickly), only slows it down by 10%.
>
> I really don't believe it.  And you should understand that it doesn't
> add up.
>

That's not what people have said.  They have said that /build/ times
for /real/ projects, measured in real time, are not enough faster with
optimisation disabled to justify turning off optimisation and losing
the features you get with a strong optimising compiler.  No one denies
that "gcc -O0" is faster than "gcc -O3" for individual compiles, or
that the percentage difference will vary and sometimes be large.  But
that's not the point.

People who do C development for a living do not measure the quality of
their tools by the speed of compiling random junk they found on the
internet, to see which compiler saves them half a second.  Factors that
are important when considering a compiler can include, in no particular
order and not all relevant to all developers:

* Does it support the target devices I need?
* Does it support the languages and language standards I want?
* Does it have the extensions I want to use?
* How good are its error messages at leading me to problems in the code?
* How good are its static checks and warnings?
* How efficient are the results?
* Is it compatible with the libraries and SDKs I want to use?
* Is it commonly used by others - colleagues, customers, suppliers?
* Is it supported by the suppliers of my microcontrollers, OS, etc.?
* Can I easily run it on multiple machines?
* Can I back it up and run it on systems in the future?
* Can I get hold of specific old versions of the tools?  Can I
  reasonably expect the tools to be available for a long time in the
  future?
* What are the policies for bug reporting and bug fixing in the
  toolchain?

========== REMAINDER OF ARTICLE TRUNCATED ==========