
Path: ...!weretis.net!feeder6.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: 88xxx or PPC
Date: Sat, 9 Mar 2024 04:14:35 +0000
Organization: Rocksolid Light
Message-ID: <ebbef6dff0079e70dd333726d5c963bd@www.novabbs.org>
References: <uigus7$1pteb$1@dont-email.me> <ac55c75a923144f72d204c801ff7f984@www.novabbs.org> <20240303165533.00004104@yahoo.com> <2024Mar3.173345@mips.complang.tuwien.ac.at> <20240303203052.00007c61@yahoo.com> <2024Mar3.232237@mips.complang.tuwien.ac.at> <20240304171457.000067ea@yahoo.com> <2024Mar4.191835@mips.complang.tuwien.ac.at> <20240305001833.000027a9@yahoo.com> <0c2e37386287e8a0303191dc7b989c76@www.novabbs.org> <us5t1c$36voh$1@dont-email.me> <df173cbc4fb74394f9d03f285f9381f3@www.novabbs.org> <3hGFN.115182$m4d.77183@fx43.iad> <0cc87b9d559c4f79b9b2d7663fa3ccbf@www.novabbs.org> <us8l5d$6ae9$1@dont-email.me> <bbecdccd4319e935fd2a50f97664d6ea@www.novabbs.org> <usgid8$20vho$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="1360030"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="PGd4t4cXnWwgUWG9VtTiCsm47oOWbHLcTr4rYoM0Edo";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0
X-Rslight-Site: $2y$10$9q8ZrcUEgeH7oYXbKWXydeEDIhhoPM7bg5bVfAqZP2.4T2Ez8Ul8y
X-Rslight-Posting-User: ac58ceb75ea22753186dae54d967fed894c3dce8
Bytes: 5447
Lines: 90

Paul A. Clayton wrote:

> On 3/6/24 2:53 PM, MitchAlsup1 wrote:
>> Paul A. Clayton wrote:
>> 
>>> On 3/5/24 10:44 AM, MitchAlsup1 wrote:
>>>> Scott Lurndal wrote:
>>> [snip]
>>>>> If memory access ever becomes as fast a register access,
>>>>> all bets will be off...
>>>>
>>>> It won't, and never has.
>> 
>>> There seem to be three aspects that lead to this conclusion: the
>>> storage technology, the indexing method (including alignment and
>>> extension), and the method of determining presence ("tagging").
>> 
>> Porting. SRAMs are single ported, Register files are multiported.

> Is this really a fundamental distinction? 

Yes, actually it is. 

>                                           If one uses SRAM to mean
> merely Static (not-refreshed) RAM, then register files are also
> SRAM. If one uses SRAM to mean classic 6-transistor SRAM cells,
> then the 8-transistor cells used in one of Intel's Atom L1 caches
> would not be SRAM.

Would it surprise you to know that in order to make such a dual-ported
SRAM cell "process tolerant"* the SRAM cell has to be at least
as big as if there were 2 independent SRAM cells? That is: if you
want a 2-ported SRAM, use 2 SRAM instances, read them independently,
but write both at the same time with the same value.

(*) under some process variations, the SRAM cell will lose data if
both read ports are used simultaneously--UNLESS the gain of the
central inverter-pair is increased. For cells with more than 2 ports
you reach a point where the cell cannot be written at some corners
of the process space (strong N-channels with weak P-channels).
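The replication trick above can be sketched in a few lines of Python.
This is only a behavioral model of the idea--two single-ported banks,
written together, read independently--not any real circuit; the class
and method names are invented for illustration.

```python
# Behavioral model: a "2-read-ported SRAM" built from two
# single-ported SRAM instances, per the replication scheme above.

class TwoPortSRAM:
    def __init__(self, depth):
        # two physically separate single-ported banks
        self.bank0 = [0] * depth
        self.bank1 = [0] * depth

    def write(self, addr, value):
        # single write port: both banks written with the same value
        self.bank0[addr] = value
        self.bank1[addr] = value

    def read2(self, addr_a, addr_b):
        # two independent read ports, one per bank -- no cell is ever
        # read by both ports at once, so there is no dual-read disturb
        return self.bank0[addr_a], self.bank1[addr_b]
```

The cost is 2x the cells, which is the point: the "true" dual-ported
cell ends up no smaller anyway once it is made process tolerant.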

Transistor level design of Register Files is similarly fraught with
peril.

At some point, the number of select lines and the number of bus-
wires is big enough that you CAN hide the register file under the 
wires. Transistor count goes up as 2+2+2×ports, while wire count
goes up as 2+selects×2×ports.
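Taking those two growth formulas at face value, a quick bit of
arithmetic shows why the wires come to dominate (this is just the
stated rule of thumb evaluated numerically, with "selects" as a free
parameter):

```python
# Per-cell growth, per the formulas above (taken at face value).

def transistors(ports):
    # 2+2 for the cross-coupled inverter pair, plus 2 pass devices/port
    return 2 + 2 + 2 * ports

def wires(ports, selects=1):
    # 2 rails plus select wiring scaling with port count
    return 2 + selects * 2 * ports

# At 1 port this recovers the classic 6T cell; by ~10 ports the
# wire track count, not the transistors, sets the cell footprint.
for p in (1, 4, 10):
    print(p, transistors(p), wires(p, selects=2))
```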

> The storage technology is not strictly bound to its use.

In the abstract this is true enough. 
In practice it is not.

> (Obviously, high area/power per bit storage is biased to smaller
> capacity and higher latency storage is biased to infrequent access
> or prefetchable/thoughput uses.)

> [snip]
>>> For general memory addressing, there is a more complex address
>>> generation, the size of the operand will be variable (alignment
>>> and extension — word-addressed memory would avoid this overhead☺),
>> 
>> Register access is by fixed bit pattern in the instruction;
>> Memory access is by performing arithmetic on operands to get the 
>> address.

> As noted later, memory accesses can also be indexed by a fixed bit
> pattern in the instruction. Determining whether a register ID bit
> field is actually used may well require less decoding than
> determining if an operation is a load based on stack pointer or
> global pointer with an immediate offset, but the difference would
> not seem to be that great. The offset size would probably also
> have to be checked — the special cache would be unlikely to 
> support all offsets.

> Predecoding on insertion into the instruction cache could cache
> this usage information.

You cannot predecode if the instructions are not of fixed size (or
if you do not add predecode bits à la Athlon, Opteron).
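With a fixed instruction size the point is easy to see: instruction
boundaries are known at cache-fill time, so a property bit can be
computed per slot. A minimal sketch, with an invented opcode layout
(the 0x03 "load" major opcode and 7-bit opcode field are assumptions
for illustration, not any particular ISA):

```python
# Predecode-on-fill sketch for a fixed 32-bit instruction width.
# One predecode bit per word: "is this instruction a load?"

LOAD_OPCODE = 0x03  # hypothetical load major opcode in bits [6:0]

def predecode_line(words):
    """Mark each 32-bit word in an I-cache line as load / not-load."""
    return [(w & 0x7F) == LOAD_OPCODE for w in words]
```

With variable-length instructions this loop cannot even be written,
because the word boundaries are not known until decode--hence the
extra predecode bits Athlon/Opteron store alongside each byte.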

> [snip]
>>> Register read, address generation, and tag comparison overheads
>>> can be removed for offset addressing by using the base pointer as
>>> the "tag" and the offset as the index. (e.g., "Knapsack: A Zero-
>>> Cycle Memory Hierarchy Component", Todd M. Austin et al., 1993;
>>> "Signature Buffer: Bridging Performance Gap between Registers and
>>> Caches", Lu Peng et al., 2004) "Internal fragmentation" of
>>> utilization increases the cost of such storage relative to the
>>> benefit and offset addressing constrains generality.
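The base-pointer-as-tag idea Paul describes can be modeled in a few
lines: index a tiny buffer directly with the offset and tag each slot
with the identity of the base register, so neither address arithmetic
nor a full-address tag compare sits on the hit path. This is only a
behavioral sketch of the concept (sizes and names invented), not the
actual Knapsack or Signature Buffer design:

```python
# Sketch: tiny zero-cycle buffer tagged by base-register identity,
# indexed by the immediate offset (small offsets only).

class KnapsackBuffer:
    def __init__(self, num_offsets=16):
        self.num_offsets = num_offsets
        self.tag = [None] * num_offsets   # which base register owns each slot
        self.data = [0] * num_offsets

    def store(self, base_reg, offset, value):
        if offset < self.num_offsets:     # offsets beyond the window bypass
            self.tag[offset] = base_reg
            self.data[offset] = value

    def load(self, base_reg, offset):
        # hit only if the slot was written through the same base register
        if offset < self.num_offsets and self.tag[offset] == base_reg:
            return self.data[offset]
        return None                       # miss: fall through to the cache
```

The "internal fragmentation" cost is visible in the model: every base
register competes for the same small offset-indexed window, and only
offset addressing works at all.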