From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Privilege Levels Below User
Date: Tue, 22 Oct 2024 21:08:24 +0000
Organization: Rocksolid Light
Message-ID: <bad6f23d8c2ef5be62c4b14134911f40@www.novabbs.org>
References: <jai66jd4ih4ejmek0abnl4gvg5td4obsqg@4ax.com>
 <h0ib6j576v8o37qu1ojrsmeb5o88f29upe@4ax.com>
 <2024Jun9.185245@mips.complang.tuwien.ac.at>
 <38ob6jl9sl3ceb0qugaf26cbv8lk7hmdil@4ax.com>
 <2024Jun10.091648@mips.complang.tuwien.ac.at>
 <o32f6jlq2qpi9s1u8giq521vv40uqrkiod@4ax.com>
 <3a691dbdc80ebcc98d69c3a234f4135b@www.novabbs.org>
 <k58h6jlvp9rl13br6v1t24t47t4t2brfiv@4ax.com>
 <5a27391589243e11b610b14c3015ec09@www.novabbs.org>
 <vf8rrr$1jtma$1@dont-email.me>

On Mon, 21 Oct 2024 0:42:32 +0000, Paul A. Clayton wrote:

> THREAD NECROMANCY
>
> On 6/11/24 5:18 PM, MitchAlsup1 wrote:
> [snip]
>> I doubt that RowHammer still works when refreshes are interspersed
>> between accesses--RowHammer generally works because the events are
>> not protected by refreshes--the DRC sees the right ROW open and
>> simply streams at the open bank.
>
> If one refreshes the two adjacent rows to avoid data disruption,
> those refreshes would be adjacent reads to two other rows, so it
> seems one would have to be a little cautious about excessively
> frequent refreshes.
>
>> Also note, there are no instructions in My 66000 that force a cache
>> line to DRAM, whereas there are instructions that can force a cache
>> line into L3.
>
> How does a system suspend to DRAM if it cannot force a writeback
> of all dirty lines to memory?

In GENERAL, you do not want to give this capability to applications,
nor use it willy-nilly.

> I am *guessing* this would not use a
> special instruction but rather configuration of power management
> that would cause hardware/firmware to clean the cache.

There is a sideband command from any master (anywhere) that causes
L3 to be dumped to DRAM over the next refresh interval. It is not
an instruction, and the TLB has to cooperate. A device may initiate
"suspend to DRAM" as well as a CPU (or any other bus master).

> Writing back specific data to persistent memory might also
> motivate cache block cleaning operations. Perhaps one could
> implement such by copying from a cacheable mapping to a
> non-cacheable (I/O?) mapping? (I simply remember that Intel added
> instructions to write cache lines to persistent memory.)
>
>> L3 is the buffer to DRAM. Nothing gets to DRAM without
>> going through L3, and nothing comes out of DRAM that is not also
>> buffered by L3. So, if 96 cores simultaneously read a line residing in
>> DRAM, DRAM is read once and 95 cores are serviced through L3. So,
>> you can't RowHammer based on reading DRAM, either.
>
> If 128 cores read distinct cache lines from the same page quickly
> enough to hammer the adjacent pages but not quickly enough to get
> DRAM page open hits, this would seem to require relatively
> frequent refreshes of adjacent DRAM rows.
DDR5 has a 64 GB/s transfer rate.
128 cache lines (64 B each) is 8192 bytes,
so one pass over them takes about 1/8 of a microsecond (~128 ns).
A DDR5 refresh interval is 3.9 µs:
https://www.micron.com/content/dam/micron/global/public/products/white-paper/ddr5-new-features-white-paper.pdf#:~:text=REFRESH%20commands%20are%20issued%20at%20an%20average%20periodic,of%20295ns%20for%20a%2016Gb%20DDR5%20SDRAM%20device.
Hammering has to be sustained across a great many such intervals, so
one has refreshes in the described situation. (The first sketch at
the end of this post works the numbers.)

> Since the L3/memory controller could see that the DRAM row was
> unusually active, it could increase prefetching while the DRAM
> row was open and/or queue the accesses longer so that the
> hammering frequency was reduced and page open hits would be more
> common.

A DRAM row stays active; commands just CAS out more data. That is,
there is no Row Hammering--the word line remains asserted and the
sense amplifiers hold the captured data while CASes are used to
strobe out more data {subject to refresh}.

> The simple statement that L3 would avoid RowHammer by providing
> the same cache line to all requesters seemed a bit too simple.

You need to investigate the difference between RAS and CAS for DRAMs.
(The second sketch at the end of this post counts one against the
other for exactly this case.)

> Your design may very well handle all the problematic cases,
> perhaps even with minimal performance penalties for inadvertent
> hammering and logging/notification for questionable activity just
> like for error correction (and has been proposed for detected race
> conditions).

I just know that these are hard problems.
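
A back-of-the-envelope sketch (Python) of the numbers above. The
64 GB/s rate and the 3.9 µs tREFI are the figures quoted in the post;
the 32 ms refresh window is my assumption of the usual DDR5 tREFW,
not something stated here, and the variable names are mine.

# One pass over 128 distinct 64 B lines vs. the DDR5 refresh cadence.
BANDWIDTH  = 64e9      # bytes/second, quoted DDR5 transfer rate
LINE_BYTES = 64        # one cache line
LINES      = 128       # one distinct line per core in the scenario
TREFI      = 3.9e-6    # average periodic refresh interval, seconds
TREFW      = 32e-3     # assumed refresh window over which every row is refreshed

pass_bytes = LINES * LINE_BYTES            # 8192 bytes
pass_time  = pass_bytes / BANDWIDTH        # ~128 ns, i.e. ~1/8 us

print(f"one pass              : {pass_bytes} B in {pass_time*1e9:.0f} ns")
print(f"passes per tREFI      : {TREFI / pass_time:.0f}")
print(f"REF commands per tREFW: {TREFW / TREFI:.0f}")

A single pass is only ~128 ns, but hammering has to keep going for a
large fraction of the refresh window, during which thousands of
REFRESH commands are issued.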
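
And a minimal sketch of the RAS/CAS point: a toy open-page bank model
of my own (not the actual DRC or any real controller) that counts row
activations separately from column reads. Reading 128 different lines
that all sit in one open row costs one ACT--the only command RowHammer
cares about--and 128 CASes.

# Toy open-page bank: counts ACT (RAS) vs. CAS commands.
class ToyBank:
    def __init__(self):
        self.open_row = None
        self.activations = 0    # ACT commands: these are what hammer
        self.column_reads = 0   # CAS commands: word line is not re-asserted

    def read(self, row, col):
        if self.open_row != row:   # row miss: open the new row (one more ACT)
            self.open_row = row
            self.activations += 1
        self.column_reads += 1     # row already open: just strobe the column out

bank = ToyBank()
for line in range(128):            # 128 distinct lines, all in the same row
    bank.read(row=0, col=line)

print(bank.activations, "activation(s),", bank.column_reads, "column reads")
# -> 1 activation(s), 128 column reads

The sense amplifiers hold the row's data for the whole burst, so
adding more readers adds CASes, not ACTs.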