Path: ...!weretis.net!feeder9.news.weretis.net!news.nk.ca!rocksolid2!i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Reverse engineering of Intel branch predictors
Date: Fri, 8 Nov 2024 18:48:47 +0000
Organization: Rocksolid Light
Message-ID: <2df7a7f589d13b4b712555d80a562de0@www.novabbs.org>
References: <vfbfn0$256vo$1@dont-email.me> <vg38o4$1mcfe$1@paganini.bofh.team> <jwvbjytwl4z.fsf-monnier+comp.arch@gnu.org> <vglj93$3mgpb$1@paganini.bofh.team>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="1561080"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="o5SwNDfMfYu6Mv4wwLiW6e/jbA93UAdzFodw5PEa6eU";
User-Agent: Rocksolid Light
X-Rslight-Posting-User: cb29269328a20fe5719ed6a1c397e21f651bda71
X-Spam-Checker-Version: SpamAssassin 4.0.0
X-Rslight-Site: $2y$10$2gGoy.2/cxvQjNHo8D2Q1udIE.mGrGDcO1LvMZS9LdgWTPydrYsdG
Bytes: 3569
Lines: 60

On Fri, 8 Nov 2024 17:54:45 +0000, Waldek Hebisch wrote:

> Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> In the case of the branch predictor itself it means delaying feedback
>>> by some number of clocks, which looks like a minor cost.
>>
>> You can still make your next predictions based on "architectural state
>> + pending predictions" if the pending predictions themselves only
>> depend ultimately on the architectural state.
>>
>>> OTOH delaying fetches from speculatively fetched addresses will
>>> increase latency on the critical path, possibly leading to a
>>> significant slowdown.
>>
>> I think you can similarly perform eagerly the fetches from speculatively
>> fetched addresses but only if you can ensure that these will leave no
>> trace if the speculation happens to fail.
>
> It looks extremely hard, if not impossible.

What kind of front end µArchitecture are you assuming that makes
this hard (at all) ??

Seems to me that if there is an instruction buffer and you load the
speculative instructions into it, you can speculatively execute them
and throw them away if they were not supposed to execute. All you
have to avoid is filling the I-cache if you were not supposed to have
fetched them.

Thus, not hard at all.
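
To make the shape of that concrete, here is a rough sketch in C of the
bookkeeping I have in mind. Everything in it (the buffer size, the field
names, the icache_fill() helper) is invented for illustration; it is not
any particular core's front end:

#include <stdbool.h>
#include <stdint.h>

#define IB_ENTRIES 16           /* arbitrary */

struct ib_entry {
    uint64_t fetch_pc;          /* address the line was fetched from      */
    uint8_t  bytes[64];         /* fetched instruction bytes              */
    bool     speculative;       /* still shadowed by an unresolved branch */
    bool     valid;
};

static struct ib_entry ibuf[IB_ENTRIES];

/* assumed helper: install a line in the I-cache */
extern void icache_fill(uint64_t pc, const uint8_t *bytes);

/* Branch resolved the right way: the entry is now architectural and
   may be allowed to fill the I-cache. */
static void ib_commit(int i)
{
    ibuf[i].speculative = false;
    icache_fill(ibuf[i].fetch_pc, ibuf[i].bytes);
}

/* Mis-speculation: throw the entry away; the I-cache was never
   touched, so the wrong-path fetch leaves no trace there. */
static void ib_squash(int i)
{
    ibuf[i].valid = false;
}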

>> So whether and how you can do it depends on the definition of "leave no
>> trace".  E.g. Mitch argues you can do it if you can refrain from putting
>> that info into the normal cache (where it would have to displace
>> something else, thus leaving a trace) and instead have to keep it in
>> what we could call a "speculative cache" but would likely be just some
>> sort of load buffer.
>
> That alone is clearly insufficient.

Agreed, insufficient all by itself, but when combined...
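
Same flavor of sketch for the load side, reading the "speculative
cache" above as a small side buffer (again, the names and sizes are
invented here, not taken from any real design): a speculative load
parks its data in the buffer and only gets to allocate into the real
data cache, and hence displace a line, once the speculation it sits
under has resolved in its favor.

#include <stdbool.h>
#include <stdint.h>

#define SLB_ENTRIES 8           /* arbitrary */

struct slb_entry {
    uint64_t addr;
    uint8_t  data[64];
    bool     valid;
};

static struct slb_entry slb[SLB_ENTRIES];

/* assumed helper: normal allocation into the data cache */
extern void dcache_allocate(uint64_t addr, const uint8_t *data);

static void slb_resolve(int i, bool path_was_correct)
{
    if (path_was_correct)
        dcache_allocate(slb[i].addr, slb[i].data); /* only now may it displace a line */
    slb[i].valid = false;                          /* committed or squashed, the entry is freed */
}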

>> If "leave no trace" includes not slowing down other concurrent memory

It does not.

>> accesses (e.g. from other CPUs), it might require some kind of
>> priority scheme.
>
> First, one needs to ensure that the CPU performing a speculative
> fetch will not slow down due to, say, resource contention.  If you
> put some arbitrary limit like one or two speculative fetches in

Here, you use the word fetch as if it were a LD instruction. Is
that what you intended ?? {{I reserve Fetch for instruction fetches
only}}

> flight, that is likely to be detectable by the attacker and may
> leak information.  If you want several ("arbitrarily many") speculative
> fetches without slowing down normal execution, that would mean a highly
> overprovisioned machine.