Path: ...!news.nobody.at!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>
Newsgroups: comp.arch
Subject: Re: Arguments for a sane ISA 6-years later
Date: Mon, 29 Jul 2024 11:55:08 -0700
Organization: A noiseless patient Spider
Lines: 193
Message-ID: <v88oid$jkah$1@dont-email.me>
References: <b5d4a172469485e9799de44f5f120c73@www.novabbs.org>
 <v7ubd4$2e8dr$1@dont-email.me> <v7uc71$2ec3f$1@dont-email.me>
 <2024Jul26.190007@mips.complang.tuwien.ac.at> <v872h5$alfu$2@dont-email.me>
 <v87g4i$cvih$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 29 Jul 2024 20:55:09 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="e7ff1aa5790f368a9ad3a21131725cb1";
	logging-data="643409"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/TCrR6Umgka4EwYrTIrzEcKO1PufBtiEY="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:Sas8RK35ApxIVqBE7i1M6BtgITg=
In-Reply-To: <v87g4i$cvih$1@dont-email.me>
Content-Language: en-US
Bytes: 9433

On 7/29/2024 12:25 AM, BGB wrote:
> On 7/28/2024 10:32 PM, Chris M. Thomasson wrote:
>> On 7/26/2024 10:00 AM, Anton Ertl wrote:
>>> "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
>>>> On 7/25/2024 1:09 PM, BGB wrote:
>>>>> At least with a weak model, software knows that if it doesn't go 
>>>>> through
>>>>> the rituals, the memory will be stale.
>>>
>>> There is no guarantee of staleness, only a lack of stronger ordering
>>> guarantees.
>>>
>>>> The weak model is ideal for me. I know how to program for it
>>>
>>> And the fact that this model is so hard to use that few others know
>>> how to program for it makes it ideal for you.
>>>
>>>> and it's more efficient
>>>
>>> That depends on the hardware.
>>>
>>> Yes, the Alpha 21164 with its imprecise exceptions was "more
>>> efficient" than other hardware for a while, then the Pentium Pro came
>>> along and gave us precise exceptions and more efficiency.  And
>>> eventually the Alpha people learned the trick, too, and 21264 provided
>>> precise exceptions (although they did not admit this) and more
>>> efficiency.
>>>
>>> Similarly, I expect that hardware that is designed for good TSO or
>>> sequential consistency performance will run faster on code written for
>>> this model than code written for weakly consistent hardware will run
>>> on that hardware.  That's because software written for weakly
>>> consistent hardware often has to insert barriers or atomic operations
>>> just in case, and these operations are slow on hardware optimized for
>>> weak consistency.
>>>
>>> By contrast, one can design hardware for strong ordering such that the
>>> slowness occurs only in those cases when actual (not potential)
>>> communication between the cores happens, i.e., much less frequently.
>>>
>>>> and sometimes use cases do not care if they encounter "stale" data.
>>>
>>> Great.  Unless these "sometimes" cases are more often than the cases
>>> where you perform some atomic operation or barrier because of
>>> potential, but not actual communication between cores, the weak model
>>> is still slower than a well-implemented strong model.
>>
>> A strong model? You mean I don't have to use any memory barriers at 
>> all? Tell that to SPARC in RMO mode... How strong? Even the x86 
>> requires a membar when a store followed by a load to another location 
>> must be respected wrt order. Store-Load. #StoreLoad over on SPARC. ;^)
>>
>> If you can force everything to be #StoreLoad (*) and make it faster 
>> than a handcrafted algo on a very weak memory system, well, hats off! 
>> I thought it was easier for a HW guy to implement weak consistency? At 
>> the cost of the increased complexity wrt programming the sucker! ;^)
>>
> 
> Programming for a weak model isn't that hard...
> 
> Well, unless the program is built around a "naive lock free" strategy 
> (where the threads manipulate members in a data-structure or similar and 
> assume that the other threads will see the updates in a more-or-less 
> consistent way).

Lock-free and wait-free algorithms are very nice. Yes, they can be 
fairly hard to get right, but it can certainly be done: stable, 
working, and 100% correctly ordered. The good ones are hard to beat 
with purely lock-based logic. Try to beat RCU with a read-write lock? 
I have some interesting algorithms that work like a charm.
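
To make that concrete, a minimal C11 sketch (my own illustration; 
Config and g_cfg are made-up names, and real RCU, e.g. liburcu or the 
Linux kernel, adds grace-period machinery for safe reclamation that 
is omitted here). The rwlock reader does two atomic RMWs on shared 
lock state; the RCU-style reader is a single acquire load and never 
writes shared state at all:

#include <pthread.h>
#include <stdatomic.h>

typedef struct { int a, b; } Config;

static pthread_rwlock_t g_lock = PTHREAD_RWLOCK_INITIALIZER;
static Config g_cfg_locked;          /* protected by g_lock */
static _Atomic(Config *) g_cfg;      /* RCU-style published pointer */

/* rwlock read side: two atomic RMWs on the shared lock word */
int read_with_rwlock(void)
{
    pthread_rwlock_rdlock(&g_lock);
    int v = g_cfg_locked.a + g_cfg_locked.b;
    pthread_rwlock_unlock(&g_lock);
    return v;
}

/* RCU-style read side: one acquire load, no shared stores, so
   readers never contend with each other */
int read_rcu_style(void)
{
    Config *c = atomic_load_explicit(&g_cfg, memory_order_acquire);
    return c ? c->a + c->b : 0;
}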


> Though, one does have the issue that one can't just use cheap spinlocks.

One note... Spinlocks work just fine in a very weak memory model. You 
just need the right memory barrier logic... For instance, SPARC in 
RMO mode requires a #LoadStore | #LoadLoad membar _after_ the atomic 
operation that actually takes the lock, and a release membar 
#LoadStore | #StoreStore _before_ the atomic operation that releases 
it. Take note that #StoreLoad is _not_ required for a spinlock or a 
mutex in this context...
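
For reference, a minimal C11 sketch of that acquire/release shape (my 
own illustration, not any particular implementation): the acquire on 
the locking exchange plays the role of the #LoadStore | #LoadLoad 
membar after the atomic op, and the release on the unlocking store 
plays the role of the #LoadStore | #StoreStore membar before it. No 
#StoreLoad anywhere:

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock;

static spinlock g_spin = { ATOMIC_FLAG_INIT };

static void spin_lock(spinlock *s)
{
    /* acquire: later loads/stores cannot hoist above the lock */
    while (atomic_flag_test_and_set_explicit(&s->locked,
                                             memory_order_acquire))
        ;  /* spin */
}

static void spin_unlock(spinlock *s)
{
    /* release: earlier loads/stores cannot sink below the unlock */
    atomic_flag_clear_explicit(&s->locked, memory_order_release);
}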

However... There is "special" mutex logic that actually requires a 
#StoreLoad! Peterson's algorithm, for example. Iirc, it needs a 
#StoreLoad because its correctness depends on a store followed by a 
load to another location being ordered. This is a bit different from 
other locking algorithms...
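
For illustration, a classic two-thread Peterson sketch in C11 (names 
are mine). The seq_cst defaults on the stores and loads supply that 
#StoreLoad: each thread stores to its own flag and must then observe 
the _other_ thread's flag, so the store may not be reordered past the 
load:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool flag[2];   /* "I want in", one per thread */
static atomic_int  turn;       /* whose turn it is to yield */

static void peterson_lock(int me)      /* me is 0 or 1 */
{
    int other = 1 - me;
    atomic_store(&flag[me], true);     /* seq_cst store */
    atomic_store(&turn, other);        /* defer to the other thread */
    /* seq_cst loads; #StoreLoad is needed between the stores above
       and these loads for mutual exclusion to hold */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;  /* spin */
}

static void peterson_unlock(int me)
{
    atomic_store(&flag[me], false);
}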

Then there are more "exotic" methods, such as so-called asymmetric 
mutexes. They can have fast paths and slow paths, so to speak. It's 
almost getting into the realm of RCU here... The fast path can be 
memory-barrier free; the slow path makes things consistent with the 
use of so-called "remote" memory barriers. It's funny that Windows 
actually has one:

https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-flushprocesswritebuffers

;^)

The slow path is meant to be taken infrequently, hence the term 
asymmetric. On par with read/write logic... :^)
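
A hypothetical sketch of the shape of such a scheme (the reader slots 
and all names are made up; FlushProcessWriteBuffers is the real 
Windows API, and compiler-ordering details are glossed over here). 
The fast path is a plain store with no membar at all; the rare slow 
path fires the "remote" barrier on every core before inspecting the 
readers:

#include <windows.h>

#define MAX_READERS 64
static volatile LONG g_in_read[MAX_READERS]; /* one slot per reader */

void reader_enter(int slot)     /* fast path: no memory barrier */
{
    g_in_read[slot] = 1;
}

void reader_exit(int slot)      /* fast path: no memory barrier */
{
    g_in_read[slot] = 0;
}

void writer_wait_for_readers(void)  /* slow path, infrequent */
{
    /* forces a full barrier on every other processor, making the
       readers' plain stores visible before we inspect them */
    FlushProcessWriteBuffers();
    for (int i = 0; i < MAX_READERS; ++i)
        while (g_in_read[i])
            SwitchToThread();   /* wait out in-flight readers */
}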

Should have some more time to respond to the rest of your post tonight 
or tomorrow. I am a bit busy right now.

> 
> 
>>
>> (*) Not just #StoreLoad for full consistency, you would need :
>>
>> MEMBAR #StoreLoad | #LoadStore | #StoreStore | #LoadLoad
>>
>> right?
> 
> 
> FWIW:
> I did figure out a way to "affordably" implement register banking.
> 
> 
> So, first attempt, naive strategy:
>    Simply increased register array sizes from 64 to 256;
>    Result (on XC7A100T):
>      LUT cost went from 90% to 120%;
>      LUTRAM cost went from 22% to 70%.
>      Clearly, this wasn't going to work...
> 
> Note that, when anything goes over 100%, the graph turns red, and the 
> "Implementation" stage is going to fail (and that last 20% isn't going 
> to just disappear...).
> 
> 
> So, ended up needing a more complex strategy:
>    Expand register tags to 4 bits, also encoding the current bank;
>    Add a banked register array, 256x 64-bits;
>    If a register port isn't in the correct bank:
>      Set parameters to signal that it needs to be swapped out;
>      Stall the pipeline.
>    Store the old register value to the array,
>      while also fetching the register value from the array.
>    If a swap-operation has fetched a register value from the array,
>      store it into the main register array
>        Currently by overriding the Lane 1 write port.
>        TBD: May move to the Lane 3 port.
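
Just to restate the mechanism quoted above in another form, a rough C 
model for illustration only (all the names here are made up, and the 
real thing is of course HDL on the FPGA, not C):

#include <stdint.h>

#define NUM_ARCH_REGS 64
#define NUM_BANKS      4     /* 256 = 4 banks x 64 regs */

typedef struct {
    uint64_t main[NUM_ARCH_REGS];   /* active register file */
    uint8_t  tag [NUM_ARCH_REGS];   /* bank currently held per slot */
    uint64_t banked[NUM_BANKS * NUM_ARCH_REGS]; /* 256x 64-bit array */
    unsigned stall_cycles;          /* models the pipeline stall */
} RegFile;

/* Access register r in bank 'bank'; if the slot holds another bank's
   copy, spill it to the banked array and fetch the right one. */
static uint64_t reg_read(RegFile *rf, unsigned r, unsigned bank)
{
    if (rf->tag[r] != bank) {
        rf->banked[rf->tag[r] * NUM_ARCH_REGS + r] = rf->main[r];
        rf->main[r] = rf->banked[bank * NUM_ARCH_REGS + r];
        rf->tag[r]  = bank;
        rf->stall_cycles++;         /* the real pipeline stalls here */
    }
    return rf->main[r];
}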
> 
> 
> 
> Still need to work out the specifics of the mechanism for moving to/from 
> the banked registers (may not be encoded directly with the existing 
> register numbering; so will likely need to be indirect).
> 
> Also there is currently likely to be wonk, and this will make interrupts 
> and system calls faster at the expense of likely making task-switches 
> slightly slower.
> 
> Basically, since now rather than being able to load/store the registers 
> directly, one is going to need to load/store them and also MOV them via 
> temporary registers (adding some extra clock cycles).
> 
> 
> Also, a to-be addressed issue:
> To actually use the mechanism, would need different logic for the 
> interrupt handlers (and context switching).
> 
> As designed though, the mechanism itself is backwards compatible with my 
> existing behavior (unless actually told to use the bank swapping, 
> everything will behave as it did before).
> 
> 
> 
> But, I am still having doubts as to whether or not this makes sense.
> 
> I had thought RISC-V's Privileged spec had defined per-mode bank 
> switching, but I can't seem to find any mention of this when I went back 
> to look at it now.
> 
> It appears instead that they were actually using a similar "save and 
========== REMAINDER OF ARTICLE TRUNCATED ==========