Path: ...!3.eu.feeder.erje.net!2.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Keith Thompson <Keith.S.Thompson+u@gmail.com>
Newsgroups: comp.lang.c
Subject: Re: C23 thoughts and opinions
Date: Sat, 25 May 2024 15:58:53 -0700
Organization: None to speak of
Lines: 160
Message-ID: <87ikz11osy.fsf@nosuchdomain.example.com>
References: <v2l828$18v7f$1@dont-email.me>
	<00297443-2fee-48d4-81a0-9ff6ae6481e4@gmail.com>
	<v2lji1$1bbcp$1@dont-email.me>
	<87msoh5uh6.fsf@nosuchdomain.example.com>
	<f08d2c9f-5c2e-495d-b0bd-3f71bd301432@gmail.com>
	<v2nbp4$1o9h6$1@dont-email.me> <v2ng4n$1p3o2$1@dont-email.me>
	<87y18047jk.fsf@nosuchdomain.example.com>
	<87msoe1xxo.fsf@nosuchdomain.example.com>
	<v2sh19$2rle2$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain
Injection-Date: Sun, 26 May 2024 00:58:57 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="f86e0e741270de1a6fdb7e3596a53ea0";
	logging-data="3240795"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19BMNJfJUupSJPO1Qqh8yFF"
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.2 (gnu/linux)
Cancel-Lock: sha1:7wV87J324X0r0LApZS6/RdWwCKA=
	sha1:AuVRHdp6OP5lLkPIXGXrDqctRSM=
Bytes: 7717

David Brown <david.brown@hesbynett.no> writes:
> On 25/05/2024 03:29, Keith Thompson wrote:
>> Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
>>> David Brown <david.brown@hesbynett.no> writes:
>>>> On 23/05/2024 14:11, bart wrote:
>>> [...]
>>>>> 'embed' was discussed a few months ago. I disagreed with the poor
>>>>> way it was to be implemented: 'embed' notionally generates a list of
>>>>> comma-separated numbers as tokens, where you have to take care of
>>>>> any trailing zero yourself if needed. It would also be hopelessly
>>>>> inefficient if actually implemented like that.
>>>>
>>>> Fortunately, it is /not/ actually implemented like that - it is only
>>>> implemented "as if" it were like that.  Real prototype implementations
>>>> (for gcc and clang - I don't know about other tools) are extremely
>>>> efficient at handling #embed.  And the comma-separated numbers can be
>>>> more flexible in less common use-cases.
>>> [...]
>>>
>>> I'm aware of a proposed implementation for clang:
>>>
>>> https://github.com/llvm/llvm-project/pull/68620
>>> https://github.com/ThePhD/llvm-project
>>>
>>> I'm currently cloning the git repo, with the aim of building it so I can
>>> try it out and test some corner cases.  It will take a while.
>>>
>>> I'm not aware of any prototype implementation for gcc.  If you are, I'd
>>> be very interested in trying it out.
>>>
>>> (And thanks for starting this thread!)
>> I've built this from source, and it mostly works.  I haven't seen it
>> do any optimization; the `#embed` directive expands to a sequence of
>> comma-separated integer constants.
>> Which means that this:
>> #include <stdio.h>
>> int main(void) {
>>      struct foo {
>>          unsigned char a;
>>          unsigned short b;
>>          unsigned int c;
>>          double d;
>>      };
>>      struct foo obj = {
>> #embed "foo.dat"
>>      };
>>      printf("a=%d b=%d c=%d d=%f\n", obj.a, obj.b, obj.c, obj.d);
>> }
>> given "foo.dat" containing bytes with values 1, 2, 3, and 4,
>> produces
>> this output:
>> a=1 b=2 c=3 d=4.000000
>
> That is what you would expect by the way #embed is specified.  You
> would not expect to see any "optimisation", since optimisations should
> not change the results (apart from choosing between alternative
> valid results).
>
> Where you will see the optimisation difference is between :
>
> 	const int xs[] = {
> #embed "x.dat"
> 	};
>
> and
>
> 	const int xs[] = {
> #include "x.csv"
> 	};
>
>
> where "x.dat" is a large binary file, and "x.csv" is the same data as
> comma-separated values.  The #embed version will compile very much 
> faster, using far less memory.  /That/ is the optimisation.

Why would it compile faster?  #embed expands to something similar to
CSV, which still has to be parsed.

Reference: <https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf>
6.10.4.

The first one will probably initialize each int element of xs to a
single byte value extracted from x.dat.  Is that what you intended?
#embed works best with arrays of unsigned char.
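
For instance (a small sketch; "four.dat" is a made-up file holding
the four byte values 1, 2, 3, 4):

    /* four.dat contains the four bytes 1, 2, 3, 4 */
    const int xs[] = {
    #embed "four.dat"
    };
    /* equivalent to: const int xs[] = { 1, 2, 3, 4 };
       four ints, each holding one byte's value, not one int
       assembled from four bytes */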

If you mean that the #embed will expand to something other than the
sequence of integer constants, how does it know to do that in this
context?

If you have a binary file containing a sequence of int values, you can
use #embed to initialize an unsigned char array that's aliased with or
copied to the int array.
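
Something like this, for the "copied to" approach (a sketch;
"ints.dat" is a made-up file of raw int representations, and the
result is meaningful only if it was written with this implementation's
int representation):

    #include <string.h>

    /* "ints.dat": hypothetical file of raw int representations */
    static const unsigned char raw[] = {
    #embed "ints.dat"
    };
    static int values[sizeof raw / sizeof(int)];

    static void load_values(void)
    {
        /* Reinterpret the embedded bytes as ints; valid only if the
           file matches this implementation's int size, endianness,
           and representation. */
        memcpy(values, raw, sizeof values);
    }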

The *embed element width* is typically going to be CHAR_BIT bits by
default.  It can only be changed by an *implementation-defined* embed
parameter.  It seems odd that there's no standard way to specify the
element width.

It seems even more odd that the default embed element width is
implementation-defined rather than required to be CHAR_BIT.
A conforming implementation could set the embed element width to,
say, 4*CHAR_BIT and then not provide an implementation-defined embed
parameter to specify a different width, making #embed unusable for
unsigned char arrays.  (N3220 is a draft, not the final C23 standard,
but I haven't heard about any changes in this area.)
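
If an implementation does provide such a parameter, I'd expect it to
be spelled something like this ("vendor::element_width" is made up for
illustration; N3220 doesn't define any such parameter):

    const unsigned int words[] = {
    /* vendor::element_width is hypothetical, not defined by N3220 */
    #embed "x.dat" vendor::element_width(32)
    };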

The kind of optimization I was thinking about was having #embed, in some
cases, expand to something other than the specified sequence of
comma-separated integer constants.  Such an optimization would be
intended to improve compile-time speed and memory usage, not run-time
performance.

With a straightforward implementation, the preprocessor has to generate
a sequence of integer constants as text, and then later compiler phases
have to parse that text sequence and generate the corresponding code.

Given:

    const unsigned char data[4] = {
    #embed "four_bytes.dat"
    };
    
That 4-byte data file is translated to something like "1, 2, 3, 4", then
converted into a stream of tokens, then those tokens are parsed, then,
given the context, the original 4-byte sequence is written into the
generated object file.
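
Running the preprocessor alone on that (e.g. "clang -E" with the
prototype) shows the first step as ordinary text, something like:

    /* preprocessor output, assuming four_bytes.dat holds 1, 2, 3, 4 */
    const unsigned char data[4] = {
    1, 2, 3, 4
    };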

For a very large file, that could be a significant burden.  (I don't
have any numbers on that.)

An optimized version might have the preprocessor generate some
compiler-specific binary output, say something like "@rawdata N"
followed by N bytes of raw data.  Later compiler phases recognize the
"@rawdata" construct and directly dump the data into the object file in
the right place.  Making #embed generate @rawdata is only part of the
solution; the compiler has to implement @rawdata in a way that allows it
to be used inside an initializer, or perhaps in any other appropriate
context.
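
Concretely, for the four-byte example above, the intermediate form
might look something like this (again, "@rawdata" is my invented
notation, not something any existing compiler accepts; the details
would be whatever is convenient for the compiler):

    const unsigned char data[4] = {
    @rawdata 4 <followed by the 4 raw bytes in some internal encoding>
    };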

This could be substantially more efficient for something like:

    static const unsigned char data[] = {
    #embed "bigfile.dat"
    };

Of course it wouldn't handle my test case above.  But #embed can take
parameters, so it could generate the standard sequence by default and
"@rawdata" if you ask for it.

I don't know whether this kind of optimization is worthwhile, i.e.,
whether the straightforward implementation really imposes significant
compile-time performance penalties that @rawdata or equivalent can
solve.  I also don't know whether existing implementations will
implement this kind of optimization (so far they haven't implemented
#embed at all).

-- 
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
void Void(void) { Void(); } /* The recursive call of the void */