Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Brett <ggtgp@yahoo.com>
Newsgroups: comp.arch
Subject: Re: is Vax adressing sane today
Date: Fri, 6 Sep 2024 18:19:02 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 56
Message-ID: <vbfh2l$toif$1@dont-email.me>
References: <vbd6b9$g147$1@dont-email.me>
 <2024Sep6.073801@mips.complang.tuwien.ac.at>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 06 Sep 2024 20:19:02 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="d0bdee15b136cee82fc25d403b815fae";
	logging-data="975439"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/YYCGnx4o2PluhqOcbfXoL"
User-Agent: NewsTap/5.5 (iPad)
Cancel-Lock: sha1:P6fDsyi5iz+Q98LwT9FGSUPZAdk=
	sha1:vI0Up6XFHIoPoj5FuPGBymDn59Y=
Bytes: 3509

Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
> Brett <ggtgp@yahoo.com> writes:
>> But Vax allows all three arguments to be in memory with different pointers.
>> 
>> Is this sane, just a natural progression if you allow memory operands?
> 
> In combination with supporting unaligned accesses (but excluding
> indirect addressing), it means that an instruction can access 6 pages,
> and so the TLB (and/or TLB loader) has to be designed to support that.
> Likewise, the OS has to be designed to load all 6 pages into physical
> RAM without evicting one of these pages again.  So this kind of
> architecture increases the design complexity.  And I don't see a
> benefit from this design.

The memory system is pipelined; once you have loaded the first of the three
values, you do not care whether that cache line is evicted while you load
the second.

Caches are 16-way set associative today; one does not worry about cache
line evictions, it just works.

>> Heads and tails encoding could actually do this reasonably, and the code
>> density would actually be better than most competitors.
> 
> Would it?  Please present empirical data.  Certainly people claim that
> instruction sets with one-memory-address load-and-op and
> read-modify-write instructions have better code density, but when you
> look at the data, there are load-store instruction sets with better
> code density (and by quite a lot).  From
> <2024Aug21.184537@mips.complang.tuwien.ac.at>:
> 
>    bash     grep      gzip
>   595204   107636    46744 armhf    16 regs load/store     32-bit
>   599832   101102    46898 riscv64  32 regs load/store     64-bit
>   796501   144926    57729 amd64    16 regs ld-op ld-op-st 64-bit
>   829776   134784    56868 arm64    32 regs load/store     64-bit
>   853892   152068    61124 i386      8 regs ld-op ld-op-st 32-bit
>   891128   158544    68500 armel    16 regs load/store     32-bit
>   892688   168816    64664 s390x    16 regs ld-op ld-op-st 64-bit
>  1020720   170736    71088 mips64el 32 regs load/store     64-bit
>  1168104   194900    83332 ppc64el  32 regs load/store     64-bit
> 
> What is "heads and tails encoding"?

128-bit or larger packets, with the fixed-size opcodes at the front and the
variable-sized data and offsets packed in from the end. You get
variable-length-instruction density with easier, faster wide decoding. And
using memory operands gives you another density bonus on top.

The downside is that it makes your one-wide and two-wide implementations
bigger.
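
For illustration only, a rough sketch of one possible heads-and-tails
layout (my own hypothetical format, not any shipping ISA): fixed 16-bit
heads fill a 128-bit packet from byte 0 upward, and variable-length tails
are packed backward from byte 15, so a wide decoder finds every opcode at a
fixed position and only tail extraction depends on the variable lengths.

/* Hypothetical heads-and-tails packet, little-endian host assumed. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PACKET_BYTES 16                  /* 128-bit packet */

struct packet {
    uint8_t bytes[PACKET_BYTES];
};

/* head i is a fixed 16-bit field at a fixed offset from the front */
static uint16_t get_head(const struct packet *p, int i)
{
    uint16_t h;
    memcpy(&h, &p->bytes[2 * i], sizeof h);
    return h;
}

/* tails pack back-to-front; tail_end is the running byte offset from
   the end of the packet, advanced by each tail's length */
static uint64_t get_tail(const struct packet *p, int tail_end, int len)
{
    uint64_t v = 0;
    memcpy(&v, &p->bytes[PACKET_BYTES - tail_end - len], len);
    return v;
}

int main(void)
{
    struct packet p = {{0}};
    p.bytes[0]  = 0x34; p.bytes[1]  = 0x12;   /* head 0 = 0x1234 */
    p.bytes[14] = 0x78; p.bytes[15] = 0x56;   /* 2-byte tail at the end */
    printf("head 0: %04x  tail 0: %04llx\n",
           get_head(&p, 0),
           (unsigned long long)get_tail(&p, 0, 2));
    return 0;
}

The fixed head positions are what buy the easy wide decode; the tails are
what keep the variable-length density.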

> - anton