Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: David Brown <david.brown@hesbynett.no>
Newsgroups: comp.lang.c
Subject: Re: question about linker
Date: Thu, 5 Dec 2024 16:46:53 +0100
Organization: A noiseless patient Spider
Lines: 172
Message-ID: <vishtd$1mnq1$1@dont-email.me>
References: <vi54e9$3ie0o$1@dont-email.me> <viifv8$2opi7$1@dont-email.me>
 <vik28b$390eg$1@dont-email.me> <vik8tc$3ang9$1@dont-email.me>
 <vikjff$3dgvc$1@dont-email.me> <viku00$3gamg$1@dont-email.me>
 <vil0qc$3fqqa$3@dont-email.me> <vil82t$3ie9o$2@dont-email.me>
 <vila9j$3j4dg$1@dont-email.me> <vin4su$49a6$1@dont-email.me>
 <vin95m$5da6$1@dont-email.me> <vinh3h$7ppb$1@dont-email.me>
 <vinjf8$8jur$1@dont-email.me> <vip5rf$p44n$1@dont-email.me>
 <viprao$umjj$1@dont-email.me> <viqfk9$13esp$1@dont-email.me>
 <vir5kp$3hjd9$1@paganini.bofh.team>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Thu, 05 Dec 2024 16:46:53 +0100 (CET)
Injection-Info: dont-email.me; posting-host="8984cd6544269285b834ffed477d8070";
	logging-data="1793857"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19sxLm7Z6+xOIOKp+y3GSRVbHyr3+N4XP4="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Cancel-Lock: sha1:mfsb0Ol2c0m5J2ZuvdFVIa9F0lw=
In-Reply-To: <vir5kp$3hjd9$1@paganini.bofh.team>
Content-Language: en-GB
Bytes: 9813

On 05/12/2024 04:11, Waldek Hebisch wrote:
> David Brown <david.brown@hesbynett.no> wrote:
>> On 04/12/2024 16:09, Bart wrote:
>>> On 04/12/2024 09:02, David Brown wrote:
>>>> On 03/12/2024 19:42, Bart wrote:
>>
>>> Yesterday you tried to give the misleading impression that compiling a
>>> substantial 200Kloc project only took 1-3 seconds with gcc.
>>>
>>
>> No, I did not.  I said my builds of that project typically take 1-3
>> seconds.  I believe I was quite clear on the matter.
> 
> Without the word "make" it was not clear if you meant a full build (say,
> after checkout from a repository).  Frequently people talk about re-making
> when they mean running make after a small edit, and reserve "build"
> for a full build.  So it was not clear whether you were claiming to have
> a compile farm with a few hundred cores (so you can compile all files in
> parallel).

I talk about "building" a project when I build the project - produce the 
relevant output files (typically executables of some sort, appropriately 
post-processed).

If I wanted to say a "full clean build", I'd say that.  If I wanted to 
include the time taken to check out the code from a repository, or to 
download and install the toolchain, or install the OS on the host PC, 
I'd say that.

When I am working with code, I edit some files.  Then I build the 
project.  Sometimes I simply want to do the build - to check for static 
errors, to see the size of code and data (as my targets are usually 
resource limited), etc.  Sometimes I want to download it to the target 
and test it or debug it.

/Why/ would anyone want to do a full clean build of their project? 
There are a number of good reasons - such as checking that everything 
works on a different host computer, or after updating tools, or because 
you have a somewhat limited build setup that can't handle changing 
header usage automatically.  Fair enough - but that's not a typical 
build, at least not for me.

(The fact that I use "make" - far and away the most common build tool - 
is irrelevant.  Much the same would apply with any other build tool.)
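
As an illustration only (file names here are hypothetical, not taken from 
the actual project), a minimal GNU make setup of the kind described above 
looks something like this - gcc's -MMD/-MP options write a .d dependency 
file per object, so editing a header recompiles exactly the sources that 
include it and nothing else:

SRCS   := $(wildcard *.c)
OBJS   := $(SRCS:.c=.o)
DEPS   := $(OBJS:.o=.d)
CC     := gcc
CFLAGS := -O2 -Wall -MMD -MP    # -MMD/-MP emit header dependency files

prog: $(OBJS)
	$(CC) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

-include $(DEPS)                # pull in the generated .d files, if any

.PHONY: clean
clean:
	rm -f prog $(OBJS) $(DEPS)

(Recipe lines are tab-indented.)  With something like that in place, a 
normal build after an edit recompiles only the files that actually changed.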

Bart is not interested in how much time it takes for people to build 
their projects.  He is interested in "proving" that his tools are 
superior because he can run gcc much more slowly than his tools.  He 
wants to be able to justify his primitive builds by showing that his 
tools are so fast that he doesn't need build tools or project 
management.  (He is wrong on all that, of course - build tools and 
project management are not just about choosing when you need to compile a 
file.)  Thus he also insists on single-threaded builds - compiling one 
file at a time, so that he can come up with huge times for running gcc 
on lots of files.  This is, of course, madness - multi-core machines 
have been the norm for a couple of decades.  My cheap work PC has 14 
cores and 20 threads - I'd be insane to compile one file at a time 
instead of 20 files at a time.
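
In concrete terms, the scenarios being compared come down to commands 
roughly like these (illustrative only; -j20 simply matches the 20 hardware 
threads mentioned above):

   time make -j20                    # normal build after editing a few files
   make clean && time make -j20      # full parallel rebuild
   make clean && time make -j1       # full rebuild, one file at a time

Only the last of these corresponds to the single-threaded measurements 
being argued about.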

> 
>> If I do a full, clean re-compile of the code, it takes about 12 seconds
>> or so.  But only a fool would do that for their normal builds.  Are you
>> such a fool?  I haven't suggested you are - it's up to you to say if
>> that's how you normally build projects.
>>
>> If I do a full, clean re-compile /sequentially/, rather than with
>> parallel jobs, it would be perhaps 160 seconds.  But only a fool would
>> do that.
> 
> Well, when I download a project from the internet, the first (and
> frequently the only) compilation is a full build.

That is not development, which is the topic here.

> And if the build fails, IME it is much harder to find the problem from
> the log of a parallel build.  So I frequently run full builds
> sequentially.  Of course, I find something to do while the computer is
> busy (300 seconds of computer time spent on a full build is not worth an
> extra 30 seconds spent finding the trouble in a parallel log - and for
> bigger things _both_ times grow, so the conclusion is the same).
> 

That does not sound to me like a particularly efficient way of doing 
things.  However, it is presumably not something you are doing countless 
times while working.  And even then, the build time is only a small 
proportion of the time you spend finding the project, reading its 
documentation, and other related tasks.  It's a one-off cost.
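
For what it is worth, GNU make 4.0 and later can keep parallel build logs 
readable by grouping each recipe's output, which removes much of the reason 
for falling back to sequential full builds:

   make -j8 --output-sync=target 2>&1 | tee build.log

With --output-sync=target (short form -Otarget), make buffers the output of 
each target's recipe and prints it in one block, so error messages from 
different files are not interleaved.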

>>> I gave some timings that showed gcc-O0 taking 50 times longer than tcc,
>>> and 150 times longer with -O2.
>>>
>>> That is the real picture. Maybe your machine is faster than mine, but I
>>> doubt it is 100 times faster. (If you don't like my benchmark, then
>>> provide another in portable C.)
>>>
>>> All this just so you can crap all over the benefits of small, faster,
>>> simpler tools.
>>
>> Your small, fast, simple tools are - as I have said countless times -
>> utterly useless to me.  Perhaps you find them useful, but I have never
>> known any other C programmer who would choose such tools for anything
>> but very niche use-cases.
>>
>> The real picture is that real developers can use real tools in ways that
>> they find convenient.  If you can't do that, it's your fault.  (I don't
>> even believe it is true that you can't do it - you actively /choose/ not
>> to.)
>>
>> And since compile speed is a non-issue for C compilers under most
>> circumstances, compiler size is /definitely/ a non-issue, and
>> "simplicity" in this case is just another word for "lacking useful
>> features", there are no benefits to your tools.
> 
> I somewhat disagree.  You probably represent the opinion of the majority
> of developers.  But that leads to uncontrolled runaway complexity and
> bloat.

You misunderstand.

The speed of C compilation is a non-issue for almost all C programmers. 
A reason for that is that it /is/ an issue to some - in particular, to 
compiler developers.  The gcc developers care about the speed of gcc - 
and because of that, I don't have to care.  (To be accurate, they care 
mostly about the speed of C++ compilation, because that is often a lot 
more relevant to developers.  C often just gets the benefits as a 
side-effect.)

> You clearly see the need to have fast and reasonably small code
> on your targets.  But there are also machines like the Raspberry Pi,
> where normal tools, including compilers, can be quite helpful.
> But such machines may have rather tight "disc" space, and CPU
> use corresponds to power consumption, which preferably should be
> low.  So there is some interest in, and benefit from, smaller, more
> efficient tools.

Raspberry Pis have no problem at all running native gcc.

It is certainly true that not all programs need to be small and fast. 
In fact, the great majority of programs do not need to be small and 
fast.  Thus for the great majority of programs, C is the wrong language 
to use.  You want something with a better balance of developer 
efficiency, features and run-time safety.

There can, of course, be circumstances where C is the right language 
even though code efficiency is irrelevant - but for most code for which 
C is the best choice, efficiency of the results is important.

> 
> OTOH, people do not want to drop all features.  And concerning
> gcc, AFAIK it is actually a compromise, for good reasons.  Some
> other projects are slow and bloated apparently for no good
> reason.  Some time ago I found a text about the Netscape mail
> index file.  The author (IIRC Jamie Zawinski) explained how
> its features ensured small size and fast loading.  But in
> later development it was replaced by some generic DB-like
> solution, leading to a huge slowdown and much higher space
> use (apparently the new developers were not willing to spend
> a little time learning how the old code worked).  And similar
> examples are quite common.
> 
> And concerning compiler size, I do not know if the GCC/clang
> developers care.  But clearly the Debian developers care:
> they use shared libraries, split debug info into separate
> packages, and do similar things to reduce size.
> 

The more a program is used, the more important its efficiency is.  Yes, 
gcc and clang/llvm developers care about speed.  (They don't care much 
about disk space.  Few users are bothered about $0.10 worth of disk space.)

========== REMAINDER OF ARTICLE TRUNCATED ==========