Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: "Paul A. Clayton" <paaronclayton@gmail.com>
Newsgroups: comp.arch
Subject: Re: Privilege Levels Below User
Date: Sun, 20 Oct 2024 20:42:32 -0400
Organization: A noiseless patient Spider
Lines: 55
Message-ID: <vf8rrr$1jtma$1@dont-email.me>
References: <jai66jd4ih4ejmek0abnl4gvg5td4obsqg@4ax.com>
 <h0ib6j576v8o37qu1ojrsmeb5o88f29upe@4ax.com>
 <2024Jun9.185245@mips.complang.tuwien.ac.at>
 <38ob6jl9sl3ceb0qugaf26cbv8lk7hmdil@4ax.com>
 <2024Jun10.091648@mips.complang.tuwien.ac.at>
 <o32f6jlq2qpi9s1u8giq521vv40uqrkiod@4ax.com>
 <3a691dbdc80ebcc98d69c3a234f4135b@www.novabbs.org>
 <k58h6jlvp9rl13br6v1t24t47t4t2brfiv@4ax.com>
 <5a27391589243e11b610b14c3015ec09@www.novabbs.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 22 Oct 2024 20:45:16 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="9430b49026ba67077ed425bf502d8afe";
	logging-data="1701578"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX191c8Kgf8RKdvKLa+/XmlC4af/bPUJ/fmI="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.0
Cancel-Lock: sha1:1/30hYAPEeFYMSxR8Lcdx1UzMcM=
In-Reply-To: <5a27391589243e11b610b14c3015ec09@www.novabbs.org>
Bytes: 3968

THREAD NECROMANCY

On 6/11/24 5:18 PM, MitchAlsup1 wrote:
[snip]
> I doubt that RowHammer still works when refreshes are interspersed
> between accesses--RowHammer generally works because the events are
> not protected by refreshes--the DRC sees the right ROW open and
> simple streams at the open bank.

If one refreshes the two adjacent rows to avoid data disruption,
those refreshes are themselves activations adjacent to two further
rows, so it seems one would have to be a little cautious about
excessively frequent targeted refreshes.
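
To make that caution concrete, here is a rough C sketch of a
counter-based targeted-refresh scheme. The threshold, row count,
and helper functions are invented for the example (not taken from
any real controller); the point is only that a refresh of a victim
row is itself an activation as seen by that row's own neighbors
and has to be charged as one.

#include <stdint.h>

#define NUM_ROWS          65536u
#define HAMMER_THRESHOLD  50000u  /* invented; real values vary */

static uint32_t act_count[NUM_ROWS];

/* Stubs standing in for the actual DRAM command issue. */
static void issue_activate(uint32_t row) { (void)row; }
static void issue_refresh(uint32_t row)  { (void)row; }

void on_activate(uint32_t row)
{
    issue_activate(row);
    if (++act_count[row] < HAMMER_THRESHOLD)
        return;
    act_count[row] = 0;

    /* Refresh both physical neighbors (victims) of the hot row. */
    uint32_t victims[2] = { row - 1u, row + 1u };
    for (int i = 0; i < 2; i++) {
        uint32_t v = victims[i];
        if (v >= NUM_ROWS)      /* edge rows have one neighbor */
            continue;
        issue_refresh(v);
        /* The targeted refresh opens row v, so it counts as an
         * activation against v's own neighbors; charge it too. */
        act_count[v]++;
    }
}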

> Also note, there are no instructions in My 66000 that force a cache
> to DRAM whereas there are instructions that can force a cache line
> into L3. 

How does a system suspend to DRAM if it cannot force a writeback
of all dirty lines to memory? I am *guessing* this would not use a
special instruction but rather a configuration of power management
that causes hardware/firmware to clean the caches.
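
A purely hypothetical sketch of that guess, with invented register
names and addresses (nothing here comes from the My 66000
documentation):

#include <stdint.h>

#define PM_BASE               0xFFFF0000u  /* invented MMIO base */
#define PM_CLEAN_ALL_CACHES   (PM_BASE + 0x10u)
#define PM_CLEAN_DONE         (PM_BASE + 0x14u)
#define PM_ENTER_SELF_REFRESH (PM_BASE + 0x20u)

static inline void mmio_write32(uintptr_t a, uint32_t v)
{
    *(volatile uint32_t *)a = v;
}

static inline uint32_t mmio_read32(uintptr_t a)
{
    return *(volatile uint32_t *)a;
}

void suspend_to_ram(void)
{
    /* Ask power management to write back every dirty line in the
     * cache hierarchy; no per-line instruction is involved. */
    mmio_write32(PM_CLEAN_ALL_CACHES, 1);
    while (mmio_read32(PM_CLEAN_DONE) == 0)
        ;  /* spin until the hardware/firmware clean completes */

    /* Memory is now consistent; put DRAM into self-refresh. */
    mmio_write32(PM_ENTER_SELF_REFRESH, 1);
}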

Writing back specific data to persistent memory might also
motivate cache-block cleaning operations. Perhaps one could
implement such cleaning by copying from a cacheable mapping to a
non-cacheable (I/O?) mapping of the same memory? (I simply
remember that Intel added instructions to write cache lines to
persistent memory.)
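
(The Intel instructions I was thinking of are CLWB and CLFLUSHOPT.
A minimal use of CLWB through the compiler intrinsic might look
like the following; the 64-byte line size is the usual x86 value
and the buffer/length are just the example's parameters.)

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64u

/* Write back [buf, buf+len) toward (persistent) memory, line by
 * line, then order the write-backs before any later stores.
 * Requires CLWB support (e.g. compile with gcc -mclwb). */
void flush_to_pmem(const void *buf, size_t len)
{
    uintptr_t p   = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1u);
    uintptr_t end = (uintptr_t)buf + len;

    for (; p < end; p += CACHE_LINE)
        _mm_clwb((void *)p);  /* write back, need not evict */

    _mm_sfence();             /* fence the write-backs */
}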

> L3 is the buffer to DRAM. Nothing gets to DRAM without
> going through L3 and nothing comes out of DRM that is not also
> buffer by L3. So, if 96 cores simultaneously read a line residing in
> DRAM, DRAM is read once and 95 cores are serviced through L3. So,
> you can't RowHammer based on reading DRAM, either.

If 128 cores read distinct cache lines from the same DRAM row
(e.g., an 8 KiB row holds 128 64-byte lines) quickly enough to
hammer the adjacent rows but not quickly enough to get row-buffer
(page-open) hits, this would seem to require relatively frequent
refreshes of those adjacent rows.

Since the L3/memory controller could see that a DRAM row was
unusually active, it could increase prefetching while the row was
open and/or queue the accesses longer, so that the hammering
frequency would be reduced and page-open hits would be more
common.
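
A rough sketch of the "queue the accesses longer" part, with all
constants and structures invented for illustration:

#include <stdbool.h>
#include <stdint.h>

#define HOT_ACTS_PER_WINDOW 1024u  /* invented hot-row threshold   */
#define BATCH_DELAY_NS       200u  /* invented extra queueing time */

struct row_stats {
    uint32_t acts_this_window;  /* activations in current window */
};

struct request {
    uint32_t row;
    uint64_t earliest_issue_ns; /* scheduler must not issue earlier */
};

/* Delay requests to an unusually active row a little so that
 * concurrent requests batch into one open-row access, lowering
 * the activation rate seen by the neighboring rows. */
void schedule_request(struct request *req,
                      const struct row_stats *stats,
                      uint64_t now_ns)
{
    bool hot = stats[req->row].acts_this_window >= HOT_ACTS_PER_WINDOW;
    req->earliest_issue_ns = hot ? now_ns + BATCH_DELAY_NS : now_ns;
}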

The statement that L3 would avoid RowHammer by providing the same
cache line to all requesters seems a bit too simple.

Your design may very well handle all the problematic cases,
perhaps even with minimal performance penalties for inadvertent
hammering, and with logging/notification for questionable
activity, just as is done for error correction (and as has been
proposed for detected race conditions). I just know that these are
hard problems.