Path: ...!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Bart <bc@freeuk.com>
Newsgroups: comp.lang.c
Subject: Re: else ladders practice
Date: Wed, 20 Nov 2024 20:17:39 +0000
Organization: A noiseless patient Spider
Lines: 151
Message-ID: <vhlg53$8lff$1@dont-email.me>
References: <3deb64c5b0ee344acd9fbaea1002baf7302c1e8f@i2pn2.org>
 <vg37nr$3bo0c$1@dont-email.me> <vg3b98$3cc8q$1@dont-email.me>
 <vg5351$3pada$1@dont-email.me> <vg62vg$3uv02$1@dont-email.me>
 <vgd3ro$2pvl4$1@paganini.bofh.team> <vgdc4q$1ikja$1@dont-email.me>
 <vgdt36$2r682$2@paganini.bofh.team> <vge8un$1o57r$3@dont-email.me>
 <vgpi5h$6s5t$1@paganini.bofh.team> <vgtsli$1690f$1@dont-email.me>
 <vhgr1v$2ovnd$1@paganini.bofh.team> <vhic66$1thk0$1@dont-email.me>
 <vhins8$1vuvp$1@dont-email.me> <vhj7nc$2svjh$1@paganini.bofh.team>
 <vhje8l$2412p$1@dont-email.me> <vhl1up$5vdg$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 20 Nov 2024 21:17:40 +0100 (CET)
Injection-Info: dont-email.me; posting-host="44c3c689fff86cf7feb39046d1b84d39";
	logging-data="284143"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/DEwduta/sm3ODrnJpoIru"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:6gdGcuwtzELH2y0dvCabylVXw2o=
Content-Language: en-GB
In-Reply-To: <vhl1up$5vdg$1@dont-email.me>
Bytes: 7934

On 20/11/2024 16:15, David Brown wrote:
> On 20/11/2024 02:33, Bart wrote:

>> It's funny how nobody seems to care about the speed of compilers 
>> (which can vary by 100:1), but for the generated programs, the 2:1 
>> speedup you might get by optimising it is vital!
> 
> To understand this, you need to understand the benefits of a program 
> running quickly.

As I said, people are preoccupied with speed for programs in general, but 
when it comes to compilers, it apparently doesn't apply! Clearly you are 
implying that those benefits don't matter when the program is a compiler.

>  Let's look at the main ones:

<snip>

OK. I guess you missed the bits here and in another post, where I 
suggested that enabling optimisation is fine for production builds.

For the routine builds that I do hundreds of times a day, where test runs 
are generally very short, I don't want to hang about waiting for a 
compiler that takes 30 times longer than necessary for no good reason.
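
To be concrete, the kind of split I mean (shown with gcc flags purely for 
illustration, not anything you suggested) is something like:

   gcc -O0 -g prog.c -o prog         # routine edit-compile-test cycle
   gcc -O2 -DNDEBUG prog.c -o prog   # occasional production build

Only the second of those needs to be slow.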


> There is usually a point where a program is "fast enough" - going faster 
> makes no difference.  No one is ever going to care if a compilation 
> takes 1 second or 0.1 seconds, for example.

If you look at all the interactions people have with technology, with 
GUI apps, even with mechanical things, a one-second latency is generally 
disastrous.

A one-second delay between pressing a key and seeing a character appear 
on a display, or getting any other feedback, would drive most people up 
the wall. But 0.1 seconds is perfectly fine.


> It doesn't take much thought to realise that for most developers, the 
> speed of their compiler is not actually a major concern in comparison to 
> the speed of other programs.

Most developers are stuck with what there is. Naturally they will make 
the best of it, usually by finding 100 ways, or 100 reasons, to avoid 
running the compiler.

> While writing code, and testing and debugging it, a given build might 
> only be run a few times, and compile speed is a bit more relevant. 
> Generally, however, most programs are run far more often, and for far 
> longer, than their compilation time.

Developing code is the critical bit.

Even when a test run takes a bit longer because you need to set things 
up, once you do change something and run it again, you don't want any 
pointless delay.

Neither do you want to waste /your/ time pandering to a compiler's 
slowness by writing makefiles and defining dependencies, or even by 
splitting things up into tiny modules. I don't want to care about any of 
that. Here's my bunch of source files; just build the damn thing, and do 
it now!

> And as usual, you miss out the fact that toy compilers - like yours, or 
> TinyC - miss all the other features developers want from their tools.  I 
> want debugging information, static error checking, good diagnostics, 
> support for modern language versions (that's primarily C++ rather than 
> C), useful extensions, compact code, correct code generation, and most 
> importantly of all, support for the target devices I want.

Sure. But you're no doubt aware that most scripting languages include a 
compilation stage in which source code is translated to bytecode. 
(CPython, for instance, compiles modules to bytecode transparently on 
import.)

I guess you're OK with that stage being as fast as possible, so that 
there is no noticeable delay. But presumably all those features you list 
go out of the window there too, yet nobody seems to care in that case.

My whole-program compilers (even my C one now) can run programs straight 
from source code, just like a scripting language.
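
(TinyC, which you mention yourself, offers the same workflow:

   tcc -run prog.c

compiles the program in memory and runs it immediately, C used as a 
scripting language.)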

So a fast, mechanical compiler that does little checking is good in one 
case, but not in another (specifically, anything created by Bart).



>  I wouldn't 
> care if your compiler can run at a billion lines per second and gcc took 
> an hour to compile - I still wouldn't be interested in your compiler 
> because it does not generate code for the devices I use.  Even if it 
> did, it would be useless to me, because I can trust the code gcc 
> generates and I cannot trust the code your tool generates.

Suppose I had a large C source file, mechanically generated by a compiler 
from another language, so that it was already fully verified.

It took a fraction of a second to generate, and all that's needed now is 
a mechanical translation to native code. In that case you can keep your 
compiler that takes an hour to do analyses I don't need; I'll take the 
million-lines-per-second one. (A billion lines per second is not viable; 
a million is.)
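
For a flavour of what such generated code looks like, here is a made-up 
fragment (not actual output from my compiler, just typical of flattened, 
transpiled C): explicit temporaries and gotos, with all the checking 
already done in the source language. There is nothing here for an 
optimiser to spend an hour discovering.

   #include <stdint.h>

   /* hypothetical machine-generated C: flat code, already verified
      by the front-end language; it only needs translating */
   int64_t count_up(int64_t x, int64_t y) {
       int64_t t1;
       t1 = 0;
   L1: if (x >= y) goto L2;
       t1 = t1 + 1;
       x = x + 1;
       goto L1;
   L2: return t1;
   }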


>  And even if 
> your tool did everything else I need, and you could convince me that it 
> is something a professional could rely on, I'd still use gcc for the 
> better quality generated code, because that translates to money saved 
> for my customers.

Where have I said you should use my compiler? I'm simply making a case 
for the existence of very fast, baseline tools that do the minimum 
necessary, with as little effort and as small a footprint as possible.

Here's an interesting test: I took sql.c (a 250Kloc sqlite3 test program) 
and compiled it first to NASM-compatible assembly, and then to my own 
assembly syntax.

I compiled the latter with my assembler, and it took 1/6th of a second 
(for some 0.3M lines).

How long do you think NASM took? It was nearly 8 minutes. Or a blazing 
5 minutes if you use -O0 (do only one pass).
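
To put those figures in perspective: 0.3M lines in 1/6th of a second is 
about 1.8 million lines per second. NASM's 8 minutes (480 seconds) for 
the same input works out at roughly 600 lines per second, and the 
5-minute -O0 run at about 1000, so somewhere around three orders of 
magnitude slower.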

No doubt you will argue that NASM is superior to my product, although I'm 
not sure how much deep analysis you can do on assembly code. And you will 
castigate me for giving it over-large inputs. However, that is the task 
that needs to be done here.

NASM clearly has a bug here, but if I hadn't mentioned it, I'd have liked 
to see how sycophantic you would have been towards that product, just to 
be able to belittle mine.

The NASM bug only starts to become obvious above 20Kloc or so. I wonder 
how many more subtle bugs exist in big products that result in 
significantly slower performance, but are not picked up because people 
like you /don't care/. You will just buy a faster machine or chop your 
application up into even smaller bits.

> 
>>
>> BTW why don't you use a cross-compiler? That's what David Brown would 
>> say.
>>
> 
> That is almost certainly what he normally does.  It can still be fun to 
> play around with things like TinyC, even if it is of no practical use 
> for the real development.