From: BGB <cr88192@gmail.com>
Newsgroups: comp.arch
Subject: Re: Arguments for a sane ISA 6-years later
Date: Mon, 29 Jul 2024 02:25:04 -0500
Organization: A noiseless patient Spider
Message-ID: <v87g4i$cvih$1@dont-email.me>
References: <b5d4a172469485e9799de44f5f120c73@www.novabbs.org>
 <v7ubd4$2e8dr$1@dont-email.me> <v7uc71$2ec3f$1@dont-email.me>
 <2024Jul26.190007@mips.complang.tuwien.ac.at> <v872h5$alfu$2@dont-email.me>
In-Reply-To: <v872h5$alfu$2@dont-email.me>

On 7/28/2024 10:32 PM, Chris M. Thomasson wrote:
> On 7/26/2024 10:00 AM, Anton Ertl wrote:
>> "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
>>> On 7/25/2024 1:09 PM, BGB wrote:
>>>> At least with a weak model, software knows that if it doesn't go
>>>> through the rituals, the memory will be stale.
>>
>> There is no guarantee of staleness, only a lack of stronger ordering
>> guarantees.
>>
>>> The weak model is ideal for me. I know how to program for it
>>
>> And the fact that this model is so hard to use that few others know
>> how to program for it makes it ideal for you.
>>
>>> and it's more efficient
>>
>> That depends on the hardware.
>>
>> Yes, the Alpha 21164 with its imprecise exceptions was "more
>> efficient" than other hardware for a while; then the Pentium Pro came
>> along and gave us precise exceptions and more efficiency.
>> And eventually the Alpha people learned the trick, too, and the 21264
>> provided precise exceptions (although they did not admit this) and
>> more efficiency.
>>
>> Similarly, I expect that hardware designed for good TSO or sequential
>> consistency performance will run code written for this model faster
>> than weakly consistent hardware will run code written for the weak
>> model. That's because software written for weakly consistent
>> hardware often has to insert barriers or atomic operations just in
>> case, and these operations are slow on hardware optimized for weak
>> consistency.
>>
>> By contrast, one can design hardware for strong ordering such that
>> the slowness occurs only in those cases when actual (not potential)
>> communication between the cores happens, i.e., much less frequently.
>>
>>> and sometimes use cases do not care if they encounter "stale" data.
>>
>> Great. Unless these "sometimes" cases are more frequent than the
>> cases where you perform some atomic operation or barrier because of
>> potential, but not actual, communication between cores, the weak
>> model is still slower than a well-implemented strong model.
>
> A strong model? You mean I don't have to use any memory barriers at
> all? Tell that to SPARC in RMO mode... How strong? Even x86 requires
> a membar when a store followed by a load to another location shall be
> respected wrt order: Store-Load, #StoreLoad over on SPARC. ;^)
>
> If you can force everything to be #StoreLoad (*) and make it faster
> than a handcrafted algo on a very weak memory system, well, hats off!
> I thought it was easier for a HW guy to implement weak consistency?
> At the cost of the increased complexity wrt programming the sucker!
> ;^)
>
> Programming for a weak model isn't that hard...
Well, unless the program is built around a "naive lock-free" strategy
(where the threads manipulate members in a data structure or similar,
and assume that the other threads will see the updates in a more-or-less
consistent way). Though, one does have the issue that one can't just use
cheap spinlocks.

> (*) Not just #StoreLoad for full consistency; you would need:
>
> MEMBAR #StoreLoad | #LoadStore | #StoreStore | #LoadLoad
>
> right?

FWIW: I did figure out a way to "affordably" implement register banking.

So, first attempt, naive strategy:
  Simply increased register array sizes from 64 to 256.
Result (on the XC7A100T):
  LUT cost went from 90% to 120%;
  LUTRAM cost went from 22% to 70%.

Clearly, this wasn't going to work...

Note that when anything goes over 100%, the graph turns red, and the
"Implementation" stage is going to fail (and that last 20% isn't going
to just disappear...).

So, I ended up needing a more complex strategy:
  Expand register tags to 4 bits, also encoding the current bank;
  Add a banked register array, 256x 64 bits;
  If a register port isn't in the correct bank:
    Set parameters to signal that it needs to be swapped out;
    Stall the pipeline;
    Store the old register value to the array, while also fetching the
      register value from the array;
  If a swap operation has fetched a register value from the array,
  store it into the main register array:
    Currently by overriding the Lane 1 write port;
    TBD: may move to the Lane 3 port.

Still need to work out the specifics of the mechanism for moving to/from
the banked registers (it may not be encoded directly with the existing
register numbering, so it will likely need to be indirect).

Also there is currently likely to be some wonk, and this will make
interrupts and system calls faster at the expense of likely making
task switches slightly slower.
Basically, rather than being able to load/store the registers directly,
one is now going to need to load/store them and also MOV them via
temporary registers (adding some extra clock cycles).

Also, a to-be-addressed issue: to actually use the mechanism, one would
need different logic in the interrupt handlers (and context switching).

As designed though, the mechanism itself is backwards compatible with my
existing behavior (unless actually told to use the bank swapping,
everything will behave as it did before).

But I am still having doubts as to whether or not this makes sense.

I had thought RISC-V's Privileged spec had defined per-mode bank
switching, but I can't seem to find any mention of this when I went back
to look at it now. It appears instead that they were actually using a
similar "save and restore everything on each interrupt" strategy to what
BJX2 had been using thus far.

The most obvious difference was that apparently they allowed interrupts
to be layered across modes:
  User Mode interrupts go to Supervisor Mode;
  Supervisor Mode interrupts go to Machine Mode.

Contrast this with the BJX2 core, which only has an equivalent of
Machine Mode interrupts (and treats User and Supervisor Mode as
basically the same, differing mostly in that Supervisor Mode has access
to privileged instructions).

....