Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: bart <bc@freeuk.com>
Newsgroups: comp.lang.c
Subject: Re: Baby X is bor nagain
Date: Mon, 24 Jun 2024 15:00:26 +0100
Organization: A noiseless patient Spider
Lines: 129
Message-ID: <v5bu5r$va3a$1@dont-email.me>
References: <v494f9$von8$1@dont-email.me>
 <v49seg$14cva$1@raubtier-asyl.eternal-september.org>
 <v49t6f$14i1o$1@dont-email.me>
 <v4bcbj$1gqlo$1@raubtier-asyl.eternal-september.org>
 <v4bh56$1hibd$1@dont-email.me> <v4c0mg$1kjmk$1@dont-email.me>
 <v4c8s4$1lki1$4@dont-email.me> <20240613002933.000075c5@yahoo.com>
 <v4emki$28d1b$1@dont-email.me> <20240613174354.00005498@yahoo.com>
 <v4okn9$flpo$2@dont-email.me> <v4p37r$k32n$1@dont-email.me>
 <v4pei3$m5th$2@dont-email.me> <v4plsk$nn9o$2@dont-email.me>
 <v4pnq6$o4fs$1@dont-email.me> <v4q245$si2n$1@dont-email.me>
 <v4q2rl$sqk3$1@dont-email.me> <v52308$2nli8$3@dont-email.me>
 <v53i4s$33k73$2@dont-email.me> <v53lf7$34huc$1@dont-email.me>
 <v53vh6$368vf$1@dont-email.me> <v54se1$3bqsk$1@dont-email.me>
 <20240624160941.0000646a@yahoo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 24 Jun 2024 16:00:27 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="b9ae78d5a8a2c8822d3975b378ef6e69";
	logging-data="1026154"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+I6UfwRK8owhby0y82vwmi"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:2iHsAavQIMR8ATlsjS8rn4xVW+Y=
In-Reply-To: <20240624160941.0000646a@yahoo.com>
Content-Language: en-GB
Bytes: 6699

On 24/06/2024 14:09, Michael S wrote:
> On Fri, 21 Jun 2024 22:47:46 +0100
> bart <bc@freeuk.com> wrote:
> 
>> On 21/06/2024 14:34, David Brown wrote:
>>> On 21/06/2024 12:42, bart wrote:
>>>> On 21/06/2024 10:46, David Brown wrote:
>>>>>
>>>>>
>>>>> I understand your viewpoint and motivation.  But my own
>>>>> experience is mostly different.
>>>>>
>>>>> First, to get it out of the way, there's the speed of
>>>>> compilation. While heavy optimisation (-O3) can take noticeably
>>>>> longer, I never see -O0 as being in any noticeable way faster for
>>>>> compilation than -O1 or even -O2.
>>>>
>>>> Absolute time or relative?
>>>
>>> Both.
>>>    
>>>> For me, optimised options with gcc always take longer:
>>>
>>> Of course.  But I said it was not noticeable - it does not make
>>> enough difference in speed for it to be worth choosing.
>>>    
>>>>   
>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll        # from cold
>>>>    TM: 3.85
>>>
>>> Cold build times are irrelevant to development - when you are
>>> working on a project, all the source files and all your compiler
>>> files are in the PC's cache.
>>>
>>>    
>>>>   
>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll
>>>>    TM: 0.31
>>>>   
>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll -O2
>>>>    TM: 0.83
>>>>   
>>>>    C:\c>tm gcc bignum.c -shared -s -obignum.dll -O3
>>>>    TM: 0.93
>>>>   
>>>>    C:\c>dir bignum.dll
>>>>    21/06/2024  11:14            35,840 bignum.dll
>>>
>>> Any build time under a second is as good as instant.
>>>
>>> I tested on a real project, not a single file.  It has 158 C files
>>> and about 220 header files.  And I ran it on my old PC, without any
>>> "tricks" that you dislike so much, doing full clean re-builds.  The
>>> files are actually all compiled twice, building two variants of the
>>> binary.
>>>
>>> With -O2, it took 34.3 seconds to build.  With -O1, it took 33.4
>>> seconds.  With -O0, it took 30.8 seconds.
>>>
>>> So that is a 15% difference for full builds.  In practice, of
>>> course, full rebuilds are rarely needed, and most builds after
>>> changes to the source are within a second or so.
>>
>> Then there's something very peculiar about your codebase.
>>
> 
> 
> To me it looks more likely that your codebase is very unusual rather
> than David's
> 
> In order to get meaningful measurements I took embedded project that
> is significantly bigger than average by my standards. Here are times of
> full parallel rebuild (make -j5) on relatively old computer (4-core Xeon
> E3-1271 v3).
> 
> Option  time(s)  time(s) with -g  .text size (bytes)
> -O0     13.1     13.3             631648
> -Os     13.6     14.1             424016
> -O1     13.5     13.7             455728
> -O2     14.0     14.1             450056
> -O3     14.0     14.6             525380
> 
> The difference in time between different -O settings in my measurements
> is even smaller than reported by David Brown. That can be attributed to
> older compiler (gcc 4.1.2). Another difference is that this compiler
> works under cygwin, which is significantly slower both than native
> Linux and than native Windows. That causes relatively higher make
> overhead and longer link.

I don't know why Cygwin would make much difference; the native code is 
still running on the same processor.

However, is there any way of isolating the compilation time (turning .c 
files into .o files) from 'make' and the linker? Failing that, can you 
compile just one module in isolation (.c to .o) with -O0 and -O2, or is 
that not possible?
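
For example, something along these lines, picking one representative 
module (the file name is just a placeholder, and 'tm' stands for 
whatever timing wrapper you normally use):

   tm gcc -c -O0 somemodule.c -o somemodule.o
   tm gcc -c -O2 somemodule.c -o somemodule.o

That would time only the .c to .o step, with no make overhead and no 
link.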

Those throughputs don't look that impressive for a parallel build on 
what sounds like a high-spec machine.

Your processor has a CPU-mark double that of mine, which has only two 
cores, of which I'm using one.

Building a 34-module project with .text size of 300KB, with either gcc 
10 or 14, using -O0, takes about 8 seconds, or 37KB/second.

Your figures show about 50KB/second. You say you use gcc 4, but an older 
gcc is more likely to be faster in compilation speed than a newer one.
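
(Taking the -O0 row of your table: 631648 bytes of .text in 13.1 
seconds is roughly 48KB/second, against my 300KB in 8 seconds, which is 
roughly 37KB/second.)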

It does sound like something outside of gcc itself.

For the same project, on the same slow machine, Tiny C's throughput is 
1.3MB/second, while my non-C compiler, on other projects, manages 
5-10MB/second, still counting only .text segments. That is 100 times 
faster than your timings, while generating code that is as good as 
gcc's -O0.
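
(5MB/second against roughly 50KB/second is a factor of about 100; at 
10MB/second it would be nearer 200.)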

So IT IS NOT WINDOWS ITSELF THAT IS SLOW.


> If I had "native" tools then all the times would likely be shorter by 
> a few seconds, and the difference between -O0 and -O3 would be close 
> to 10%.

So two people are now saying that all the many dozens of extra passes 
and extra analysis that gcc -O2/-O3 has to do, compared with the basic 
front-end work that every toy compiler needs to do (and does quickly), 
only slow it down by 10%.

I really don't believe it. And you should understand that it doesn't add up.
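
(If you want to see where the time actually goes, gcc can report the 
time spent in each internal pass, e.g. something like:

   gcc -c -O2 -ftime-report somemodule.c

with the module name again just a placeholder. That would show directly 
how much the extra -O2 passes cost on your project.)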