From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
Date: Sat, 1 Mar 2025 22:30:32 +0000
Organization: Rocksolid Light
References: <8qJwP.64143$2zn8.32753@fx15.iad>

On Sat, 1 Mar 2025 19:40:55 +0000, EricP wrote:

> Lawrence D'Oliveiro wrote:
>> Found this paper
>>
>> at Gordon Bell’s website. Talking about the VAX, which was designed as
>> the ultimate “kitchen-sink” architecture, with every conceivable
>> feature to make it easy for compilers (and humans) to generate code,
>> he explains:
>>
>>     The VAX was designed to run programs using the same amount of
>>     memory as they occupied in a PDP-11. The VAX-11/780 memory range
>>     was 256 Kbytes to 2 Mbytes. Thus, the pressure on the design was
>>     to have very efficient encoding of programs. Very efficient
>>     encoding of programs was achieved by having a large number of
>>     instructions, including those for decimal arithmetic, string
>>     handling, queue manipulation, and procedure calls. In essence, any
>>     frequent operation, such as the instruction address calculations,
>>     was put into the instruction set. VAX became known as the
>>     ultimate, Complex (Complete) Instruction Set Computer. The Intel
>>     x86 architecture followed a similar evolution through various
>>     address sizes and architectural fads.
>>
>> The VAX project started roughly around the time the first RISC
>> concepts were being researched. Could the VAX have been designed as a
>> RISC architecture to begin with? Because not doing so meant that, just
>> over a decade later, RISC architectures took over the “real computer”
>> market and wiped the floor with DEC’s flagship architecture,
>> performance-wise.
>>
>> The answer was no, the VAX could not have been done as a RISC
>> architecture. RISC wasn’t actually price-performance competitive until
>> the latter 1980s:
>>
>>     RISC didn’t cross over CISC until 1985. This occurred with the
>>     availability of large SRAMs that could be used for caches. It
>>     should be noted at the time the VAX-11/780 was introduced, DRAMs
>>     were 4 Kbits and the 8 Kbyte cache used 1 Kbit SRAMs. Memory
>>     sizes continued to improve following Moore’s Law, but it wasn’t
>>     till 1985 that Reduced Instruction Set Computers could be built
>>     in a cost-effective fashion using SRAM caches. In essence RISC
>>     traded off cache memories built from SRAMs for the considerably
>>     faster, and less expensive Read Only Memories that held the more
>>     complex instructions of VAX (Bell, 1986).
>
> If you look at the VAX 8800 or NVAX uArch you see that even in 1990 it
> was still taking multiple clocks to serially decode each instruction,
> and that basically stalls away any benefits a pipeline might have given.
>
> If they had just put in *the things they actually use*
> (as shown by DEC's own instruction usage stats from 1982),
> and left out all the things that they rarely or never use,
> it would have had 50 or so opcodes instead of 305,
> at most one operand that addressed memory on arithmetic and logic opcodes,
> with 3 address modes (register, register address, register offset address)
> instead of 0 to 5 variable-length operands with 13 address modes each
> (most combinations of which are either silly, redundant, or illegal).
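The serial-decode problem EricP describes can be made concrete. In the VAX encoding, each operand specifier carries its addressing mode in the top nibble of its first byte, and the specifier's total length depends on that mode, so specifier N+1 (and hence the next instruction) cannot be located until specifier N has been fully parsed. A rough C model of the length walk follows; the mode table is simplified (in particular, immediate mode's length really depends on the operand data type, which this sketch ignores):

```c
#include <stdint.h>

/* Simplified sketch of VAX operand-specifier parsing.  Each specifier's
 * length depends on its mode nibble, so the decoder must walk the
 * specifiers one after another -- the serial-decode bottleneck. */
static int specifier_len(const uint8_t *p)
{
    unsigned mode = p[0] >> 4;
    switch (mode) {
    case 0: case 1: case 2: case 3:     /* short literal                  */
    case 5:                             /* register                       */
    case 6:                             /* register deferred              */
    case 7:                             /* autodecrement                  */
    case 8: case 9:                     /* autoincrement (simplified:     */
        return 1;                       /*   immediate mode ignored)      */
    case 4:                             /* index: prefixes a base specifier */
        return 1 + specifier_len(p + 1);
    case 10: case 11:                   /* byte displacement (+deferred)  */
        return 2;
    case 12: case 13:                   /* word displacement (+deferred)  */
        return 3;
    default:                            /* longword displacement (+deferred) */
        return 5;
    }
}

/* Length of an instruction with 'nops' operands: an inherently
 * sequential computation, unlike a fixed 32-bit RISC word. */
static int insn_len(const uint8_t *opnd, int nops)
{
    int len = 1;                        /* one-byte opcode (simplified)   */
    for (int i = 0; i < nops; i++) {
        int l = specifier_len(opnd);
        len += l;
        opnd += l;
    }
    return len;
}
```

Note that even this toy version has a loop-carried dependence through `opnd`; a hardware decoder faces the same chain, which is why the 8800 and NVAX still burned multiple clocks per instruction.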
Except for the 1 memory operand per instruction, the above paragraph
accurately describes the My 66000 ISA.

> Then they would have been able to parse instructions in one clock,
> which makes pipelining a possible consideration,
> and simplifies the uArch so now it can all fit on one chip,
> which allows it to compete with RISC.

If VAX had stuck with PDP-11 address modes and simply added the {Byte,
Half, Word, Double} accesses, it would have been a lot easier to pipeline.

> The reason it was designed the way it was, was because DEC had
> microcode and microprogramming on the brain.

As did most of academia at the time.

> In this 1975 paper Bell and Strecker say it over and over and over.
> They were looking at the CPU design as one large parsing machine
> and not as a set of parallel hardware tasks.

Orthogonality, Regularity, Expressibility, ...

> This was their mental mindset just before they started the VAX design:
>
> What Have We Learned From PDP11, Bell Strecker, 1975
> https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
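For contrast, consider the restricted format EricP proposes above (3 address modes, at most one memory operand). With the mode field at a fixed position, instruction length becomes a pure function of one field and every instruction boundary falls out in a single step. A sketch with an entirely invented field layout, purely to illustrate the point (opcode byte, mode/register byte, optional 16-bit offset):

```c
#include <stdint.h>

/* Hypothetical fixed-layout encoding: 1 opcode byte, then a mode/register
 * byte whose top two bits select the address mode.  Only register-offset
 * mode appends a 16-bit offset, so length depends on a single
 * fixed-position field -- decodable in one clock, no serial walk. */
enum mode { M_REG = 0, M_REGADDR = 1, M_REGOFF = 2 };

static int insn_len_fixed(const uint8_t *p)
{
    unsigned mode = p[1] >> 6;            /* mode field, fixed position */
    return (mode == M_REGOFF) ? 4 : 2;    /* +2 bytes for the offset    */
}
```

Because the length computation needs only the second byte, a wide fetch buffer can mark several instruction boundaries in parallel, which is exactly what the VAX specifier chain prevents.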