Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Memory ordering
Date: Mon, 29 Jul 2024 17:49:19 +0000
Organization: Rocksolid Light
Message-ID: <f8869fa1aadb85896d237179d46b20f8@www.novabbs.org>
References: <b5d4a172469485e9799de44f5f120c73@www.novabbs.org> <v7ubd4$2e8dr$1@dont-email.me> <v7uc71$2ec3f$1@dont-email.me> <2024Jul26.190007@mips.complang.tuwien.ac.at> <2032da2f7a4c7c8c50d28cacfa26c9c7@www.novabbs.org> <2024Jul29.152110@mips.complang.tuwien.ac.at>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="787175"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="65wTazMNTleAJDh/pRqmKE7ADni/0wesT78+pyiDW8A";
User-Agent: Rocksolid Light
X-Rslight-Posting-User: ac58ceb75ea22753186dae54d967fed894c3dce8
X-Rslight-Site: $2y$10$b1OEqLpD9E3rOLQaAjsmGeI7.3N8qhQf3G9j/Tv3JeVwo5AiL2bfy
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 5670
Lines: 91

On Mon, 29 Jul 2024 13:21:10 +0000, Anton Ertl wrote:

> mitchalsup@aol.com (MitchAlsup1) writes:
>>On Fri, 26 Jul 2024 17:00:07 +0000, Anton Ertl wrote:
>>> Similarly, I expect that hardware that is designed for good TSO or
>>> sequential consistency performance will run faster on code written for
>>> this model than code written for weakly consistent hardware will run
>>> on that hardware.
>>
>>According to Lamport, only the ATOMIC stuff needs sequential
>>consistency. So it is entirely possible to have a causally consistent
>>processor that switches to sequential consistency when doing ATOMIC
>>stuff, gaining performance when not doing ATOMIC stuff and gaining
>>programmability when doing it.
>
> That's not what I have in mind.  What I have in mind is hardware that,
> e.g., speculatively performs loads, predicting that no other core will
> store there with an earlier time stamp.  But if another core actually
> performs such a store, the usual misprediction handling happens and
> the code starting from that mispredicted load is reexecuted.  So as
> long as two cores do not access the same memory, they can run at full
> speed, and there is only slowdown if there is actual (not potential)
> communication between the cores.

OK...
>
> A problem with that approach is that this requires enough reorder
> buffering (or something equivalent, there may be something cheaper for
> this particular problem) to cover at least the shared-cache latency
> (usually L3, more with multiple sockets).

The depth of the execution window may be too small to cover the time it
takes to send the required information around and have this core
recognize that it is out-of-order wrt memory.

>>>                    That's because software written for weakly
>>> consistent hardware often has to insert barriers or atomic operations
>>> just in case, and these operations are slow on hardware optimized for
>>> weak consistency.
>>
>>The operations themselves are not slow.
>
> Citation needed.

A MEMBAR dropped into the pipeline, when nothing is speculative, takes
no more time than an integer ADD. Only when there is speculation in
flight does it have to take time, waiting for that speculation to
resolve.

>>> By contrast, one can design hardware for strong ordering such that the
>>> slowness occurs only in those cases when actual (not potential)
>>> communication between the cores happens, i.e., much less frequently.
>>
>>How would you do this for a 256-way banked memory system of the
>>NEC SX ?? I.E., the processor is not in charge of memory order--
>>the memory system is.
>
> Memory consistency is defined wrt what several processors do.  Some
> processor performs some reads and writes and another performs some
> read and writes, and memory consistency defines what a processor sees
> about what the other does, and what ends up in main memory.  But as
> long as the processors, their caches, and their interconnect gets the
> memory ordering right, the main memory is just the backing store that
> eventually gets a consistent result of what the other components did.
> So it does not matter whether the main memory has one bank or 256.

NEC SX is a multi-processor vector machine with the property that
addresses are spewed out as fast as AGEN can generate them. These
addresses are routed to banks based on bus segment and can arrive OoO
wrt the order in which they were generated.

So two processors accessing the same memory using vector LDs will
see a single vector access having multiple memory orderings: P[0]V[0]
ordered before P[1]V[0], but P[1]V[1] ordered before P[0]V[1], ...

> One interesting aspect is that for supercomputers I generally think
> that they have not yet been struck by the software crisis:
> Supercomputer hardware is more expensive than supercomputer software.
> So I expect that supercomputer hardware designers tend to throw
> complexity over the wall to the software people, and in many cases
> they do (the Cell broadband engine offers many examples of that).
> However, "some ... Fujitsu [ARM] CPUs run with TSO at all times"
> <https://lwn.net/Articles/970907/>; that sounds like the A64FX, a
> processor designed for supercomputing.  So apparently in this case the
> hardware designers actually accepted the hardware and design
> complexity cost of TSO and gave a better model to software, even in
> hardware designed for a supercomputer.

That may be true, but I always saw it as "supercomputers run
applications for which they have the source code."
>
> - anton