
Path: ...!news.nobody.at!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: David Brown <david.brown@hesbynett.no>
Newsgroups: comp.lang.c
Subject: Re: A Famous Security Bug
Date: Fri, 22 Mar 2024 21:26:22 +0100
Organization: A noiseless patient Spider
Lines: 478
Message-ID: <utkphe$34l73$1@dont-email.me>
References: <bug-20240320191736@ram.dialup.fu-berlin.de>
 <20240320114218.151@kylheku.com> <uthirj$29aoc$1@dont-email.me>
 <20240321092738.111@kylheku.com> <87a5mr1ffp.fsf@nosuchdomain.example.com>
 <20240322083648.539@kylheku.com> <utkftr$32ahu$1@dont-email.me>
 <20240322105321.365@kylheku.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Fri, 22 Mar 2024 20:26:22 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ce13a9b3027441d1758690182a14dc57";
	logging-data="3298531"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19YogSSpriwIG/wIjTATmWqMCioW5EChDU="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:g21nXuuKnSDcyIaTokkOU7Su+r0=
In-Reply-To: <20240322105321.365@kylheku.com>
Content-Language: en-GB
Bytes: 22351

On 22/03/2024 19:55, Kaz Kylheku wrote:
> On 2024-03-22, David Brown <david.brown@hesbynett.no> wrote:
>> You should read the footnotes to 5.1.1.2 "Translation phases".
>> Footnotes are not normative, but they are helpful in explaining the
>> meaning of the text.  They note that compilers don't have to follow the
>> details of the translation phases, and that source files, translation
>> units, and translated translation units don't have to have one-to-one
>> correspondences.
> 
> Yes, I'm aware of that. For instance preprocessing can all be jumbled
> into one process. But it has to produce that result.
> 
> Even if translation phases 7 and 8 are combined, the semantic analysis
> of the individual translation unit has to appear to be settled before
> linkage. So for instance a translation unit could incrementally emerge
> from the semantic analysis steps, and those parts of it already analyzed
> (phase 7) could start to be linked to other translation units (phase 8).
> 

Again, you are inferring far too much here.  The standard is /not/ 
limiting in the way you suggest.

Compilers can make use of all sorts of additional information.  They 
have always been able to do so.  They can use extra information provided 
by compiler extensions - such as gcc attributes.  They can use 
information from profiling to optimise based on real-world usage.  They 
can analyse source code files and use that analysis for optimisation 
(and hopefully also static error checking).


Consider this:

A compiler can happily analyse each source code file in all kinds of 
ways, completely independently of what the C standards describe (or 
perhaps, by happy coincidence, using the same types of pre-processing 
and interpretation).  This analysis can be stored in files or some other 
storage place.  Do you agree that this is allowed, or do you think the C 
standards somehow ban it?  Note that we are calling this "analysis" - 
not C compilation.

Now the compiler starts the "real" compilation, passing through the 
translation phases one by one.  When it gets to phase 7, it reads all 
this stored analysis information.  (Nothing in the standards says the 
compiler can't pull in extra information - it is quite normal, for 
example, to pull in code snippets as part of the compilation process.) 
For each translation unit, it produces two outputs (in one "fat" object 
file) - one part is a relatively dumb translation that does not make use 
of the analysis, the other uses the analysis information to generate 
more optimal code.  Both parts make up the "translator output" for the 
translation unit.  Again, can you point to anything in the C standards 
that would forbid this?

Then we come to phase 8.  The compiler (or linker) reads all the 
"translator output" files needed for the complete program.  It checks 
that it has the same set of input files as were used during the 
pre-compilation analysis.  If they are all the same, then the analysis 
information about the different units is valid, and thus the 
optimisations using that extra information are valid.  The "dumb 
translation" versions can be used as a fallback if the analysis was not 
valid - otherwise they are thrown out, and the more optimised versions 
are linked together.

There is nothing in the description of the translation phases that 
hinders this.  All the compiler has to do is ensure that the final 
program - not any individual translation units - has correct observable 
behaviour.


I would also refer you to section 1 of the C standards - "Scope".  In 
particular, note that "This document does /not/ specify the mechanism by 
which C programs are transformed for use by a data-processing system". 
(Emphasis mine.)  The workings of the compiler are not part of the standard.


> I'm just saying that certain information leakage is clearly permitted,
> regardless of how the phases are integrated.
> 
>> The standard also does not say what the output of "translation" is - it
>> does not have to be assembly or machine code.  It can happily be an
>> internal format, as used by gcc and clang/llvm.  It does not define what
>> "linking" is, or how the translated translation units are "collected
>> into a program image" - combining the partially compiled units,
>> optimising, and then generating a program image is well within that
>> definition.
>>
>>> (That can be inferred
>>> from the rules which forbid semantic analysis across translation
>>> units, only linkage.)
>>
>> The rules do not forbid semantic analysis across translation units -
>> they merely do not /require/ it.  You are making an inference without
>> any justification that I can see.
> 
> Translation phase 7 is clearly about a single translation unit in
> isolation:
> 
> "The resulting tokens are syntactically and semantically analyzed
>   and translated as a translation unit."
> 
> Not: "as a combination of multiple translation units".

The point is that many things are local to a translation unit, such as 
statics, type definitions, and so on.  These are valid within the 
translation unit (within their scope, of course), and independent of 
identically named items in other translation units.  It is about 
defining a kind of "unit of compilation" for the language semantics - it 
is /not/ restricting the behaviour of a compiler.

LTO does not change the language semantics in any way.  The language 
semantics determine the observable behaviour of the program, and we have 
already established that this must be unchanged.  Generated instructions 
for a target are not part of the language semantics.


> 
> 5.1.1.1 clearly refers to "[t]he separate translation units of a
> program".

It does so all in terms of what a compiler /may/ do.

And there is never any specification of the result of a "translation". 
It can happily be byte-code, or internal toolchain-specific formats.

> 
> LTO pretends that the program is still divided into the same translation
> units, while mingling them together in ways contrary to all those
> chapter 5 descriptions.

No.

> 
> The conforming way to obtain LTO is to actually combine multiple
> preprocessing translation units into one.
> 

You could do that if you like (after manipulating things to handle 
statics, type definitions, etc.).

And you would then find that if "foo()" in "foo.c" called "bar()" in 
"bar.c", the call to "bar()" might be inlined, or omitted, or otherwise 
optimised, just as it could be if they were both defined in the same 
translation unit.

The result would be the same kind of object code as you get with LTO - 
one in which the observable behaviour is as expected, but you might get 
different details in the generated code.

I don't know why you would think that this kind of combination of units 
is conforming, but LTO is not.  It's all the same thing in principle - 
the only difference is that real-world implementations of LTO are 
designed to be scalable, do as much as possible in parallel, and avoid 
re-doing work for files that don't change.

Some link-time optimisation or "whole program optimisation" toolchains 
are aimed at small code bases (such as might fit into a small 
microcontroller) and combine all the source code together then handle it 
all at once.  Again, the principles and the semantics are not any 
different from gcc LTO - it's just a different way of splitting up the work.

>>> That's why we can have a real world security issue caused by zeroing
>>> being optimized away.
>>
>> No, it is not.  We have real-world security issues for all sorts of
>> reasons, including people mistakenly thinking they can force particular
>> types of code generation by calling functions in different source files.
> 
> In fact, that code generation is forced, when people do not use LTO,
> which is not enabled by default.
> 

No, it is not.

The C standards don't talk about LTO, or whether or not it is enabled, 
or what is "default", or even what kind of code generation you get.

========== REMAINDER OF ARTICLE TRUNCATED ==========