Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: "Paul A. Clayton" <paaronclayton@gmail.com>
Newsgroups: comp.arch
Subject: Re: Another security vulnerability
Date: Thu, 28 Mar 2024 21:51:51 -0400
Organization: A noiseless patient Spider
Lines: 53
Message-ID: <uu56rq$3u2ve$1@dont-email.me>
References: <utpoi2$b6to$1@dont-email.me>
 <2024Mar25.082534@mips.complang.tuwien.ac.at>
 <20240326192941.0000314a@yahoo.com> <uu0kt1$2nr9j$1@dont-email.me>
 <VpVMN.731075$p%Mb.618266@fx15.iad>
 <2024Mar27.191411@mips.complang.tuwien.ac.at>
 <HH_MN.732789$p%Mb.8039@fx15.iad>
 <5fc6ea8088c0afe8618d2862cbacebab@www.novabbs.org>
 <TfhNN.110764$_a1e.90012@fx16.iad>
 <14b25c0880216e54fe36d28c96e8428c@www.novabbs.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 29 Mar 2024 01:51:55 +0100 (CET)
Injection-Info: dont-email.me; posting-host="e986263148564954c0f3cab57a0b9286";
	logging-data="4131822"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/BkL8KUkPJ9xKIlo2MazbNmK9QtzQD5KE="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.0
Cancel-Lock: sha1:nS7d1IxPti8/R0Ef28rE7ozVlBY=
In-Reply-To: <14b25c0880216e54fe36d28c96e8428c@www.novabbs.org>
Bytes: 3908

On 3/28/24 3:59 PM, MitchAlsup1 wrote:
> EricP wrote:
[snip]
>> (d) any new pointers are virtual address that have to start back at
>>      the Load Store Queue for VA translation and forwarding testing
>>      after applying (a),(b) and (c) above.
> 
> This is the tidbit that prevents doing prefetches at/in the DRAM controller.
> The address so fetched needs translation !! And this requires dragging
> stuff over to DRC that is not normally done.

With multiple memory channels having independent memory
controllers (a reasonable design, I suspect), a memory controller
may have to forward a prefetch request to another memory
controller anyway. If the prefetch already has to take a trip on
the on-chip network, a "minor side trip" for translation might
not be horrible (though it seems distasteful to me).
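
As a rough sketch of what I mean (the line size, channel count,
and interleaving function below are purely my assumptions for
illustration, in C):

/* With physical addresses interleaved across channels at
   cache-line granularity, the controller that fetched a pointer
   is often not the home of the prefetch target, so the request
   crosses the on-chip network regardless. */
#include <stdint.h>

#define LINE_BITS    6    /* assumed 64-byte lines  */
#define NUM_CHANNELS 4    /* assumed channel count  */

static unsigned home_channel(uint64_t paddr)
{
    return (unsigned)((paddr >> LINE_BITS) % NUM_CHANNELS);
}

/* A pointer fetched by channel 2 whose (translated) target maps
   to channel 0 has to hop anyway; translation only lengthens a
   trip that was already being taken. */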

With the Mill performing translation at last-level cache miss,
such prefetching may be more natural, *but* distributing both the
virtual address translation and the memory controllers seems
challenging when one wants to minimize hops.

[snip]
>> BUT, as PIUMA proposes, we also allow the memory subsystem to
>> read and write individual aligned 8-byte values from DRAM,
>> rather than whole cache lines, so we only move the actual
>> 8-byte values we need.
> 
> Busses on cores are reaching the stage where an entire cache line
> is transferred in 1-cycle. With such busses, why define anything 
> smaller than a cache line ?? {other than uncacheable accesses}

The Intel research chip was special-purpose, targeting
cache-unfriendly code. Reading 64 bytes when, 99% of the time, 56
of those bytes would go unused is rather wasteful (and having
more memory channels helps under high thread counts).
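
As a back-of-the-envelope check on that waste (a trivial C
calculation using the numbers above):

#include <stdio.h>

int main(void)
{
    const double line_bytes   = 64.0;  /* one cache line           */
    const double useful_bytes = 8.0;   /* one aligned 8-byte value */

    /* Only 12.5% of the transferred bytes are wanted; the other
       87.5% of DRAM and on-chip bandwidth carries data that such
       cache-unfriendly code simply discards. */
    printf("utilization = %.1f%%\n",
           100.0 * useful_bytes / line_bytes);
    return 0;
}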

However, even for a "general purpose" processor, "word"-granular
atomic operations could justify not having all data transfers be
cache-line sized. (Such operations are rare compared with cache
line loads from memory or other caches, but a design might have
narrower connections for coherence, interrupts, etc. that could
be used for small data communication.)
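
A sketch of what I have in mind (the message format is entirely
made up, not any real protocol):

#include <stdint.h>

/* The same narrow slot used for coherence and interrupt traffic
   could carry a word-granular atomic instead of a full line. */
struct small_msg {
    uint8_t  kind;    /* e.g. COHERENCE_ACK, INTERRUPT, ATOMIC_RMW */
    uint8_t  op;      /* e.g. fetch-and-add, compare-and-swap      */
    uint64_t addr;    /* aligned 8-byte target address             */
    uint64_t operand; /* the single word moved, not 64 bytes       */
};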

In-cache compression might also nudge the tradeoffs. Being able to
have higher effective bandwidth when data is transmitted in a
compressed form might be useful. "Lossy compression", where the
recipient does not care about much of the data, would allow
compression even when the data itself is not compressible. For
contiguous useful data, this is comparable to a smaller cache
line.
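
One way to picture such a "lossy" transfer (my own sketch, not a
real protocol): the requester states which 8-byte words of the
line it actually cares about, and the sender may omit the rest.

#include <stdint.h>

struct line_request {
    uint64_t line_addr;  /* aligned 64-byte line address         */
    uint8_t  word_mask;  /* bit i set => 8-byte word i is wanted */
};

/* word_mask == 0x01 behaves like an 8-byte "line"; word_mask ==
   0xFF is an ordinary full-line fill. */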