From: Terje Mathisen
Newsgroups: comp.arch
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Date: Mon, 24 Jun 2024 07:34:05 +0200

Lynn Wheeler wrote:
>
> John Levine writes:
>> Not really. VS1 was basically MFT running in a single virtual address
>> space. The early versions of VS2 were SVS, MVT running in a single
>> virtual address space, and then MVS, where each job got its own
>> address space. As Lynn has often explained, OS chewed up so much of
>> the address space that they needed MVS to make enough room for
>> programs to keep doing useful work.
>
> ... also SVS single 16mbyte virtual address space (sort of like running
> MVT in a CP67 16mbyte virtual machine) to "protect" regions from each
> other still used the 360 4-bit storage protection key ... so capped at
> 15 concurrent regions ... but systems were getting faster much faster
> than disks were ... so needed increasing numbers of concurrently
> executing regions ... so went to MVS ... gave each region its own
> virtual address space (to keep them isolated/protected from each
> other). But MVS was becoming increasingly bloated, both in real storage
> and in the amount it took out of each region's virtual address space
> ... so needed more than 16mbyte real storage as well as more than
> 16mbyte virtual storage.
>
> trivia: I was pontificating in the 70s about the mismatch between the
> increase in system throughput (memory & CPU) and the increase in disk
> throughput. In the early 80s I wrote a tome showing that the relative
> system throughput of disk had declined by an order of magnitude since
> the 360 was announced in the 60s (systems increased 40-50 times, disks
> increased 3-5 times). A disk division executive took exception and
> assigned the division performance group to refute my claims. After a
> couple of weeks, they basically came back and said that I had slightly
> understated the problem.
>
> They then respun the analysis as a (mainframe user group) SHARE
> presentation on how to configure disks for increased system throughput
> (16 Aug 1984, SHARE 63, B874).
>
> more recently there have been some references that cache-miss memory
> access latency, when measured in processor cycles, is comparable to
> 60s disk access latency, when measured in 60s processor cycles
> (memory is the new disk).

Not only is RAM the new disk, but the last-level cache is the new RAM,
and you could argue that the L1 cache plays the role of a vector
computer's register array. The result-forwarding network is the new
register array.
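To put a rough number on the "memory is the new disk" point, here is a
minimal C sketch (the buffer size, the single-cycle shuffle, and the
timing approach are all illustrative assumptions, not tied to any
particular machine): it chases dependent pointers through a buffer well
past any last-level cache, then streams over the same buffer. The
dependent loads serialize on the miss latency (hundreds of cycles per
access on current hardware), while the streaming pass gets prefetched;
that is exactly the seek-then-stream economics of tape and spinning
rust.

/* Minimal sketch: dependent (pointer-chasing) loads vs sequential
 * streaming over a buffer much larger than the last-level cache.
 * Sizes are illustrative assumptions, not measurements. Uses POSIX
 * clock_gettime(); build with e.g. cc -O2 chase.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024 / sizeof(size_t)) /* ~64 MB, beyond most LLCs */

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    size_t *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;

    /* Sattolo's algorithm: a single-cycle random permutation, so the
     * chase below visits every element exactly once and cannot get
     * stuck in a short, cache-resident loop. (rand()'s modulo bias
     * is fine for a sketch.) */
    for (size_t i = 0; i < N; i++) buf[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    double t0 = seconds();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = buf[p];    /* serialized misses */
    double chase = seconds() - t0;

    t0 = seconds();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += buf[i]; /* prefetch-friendly */
    double stream = seconds() - t0;

    /* Print p and sum so the loops cannot be optimized away. */
    printf("chase:  %.1f ns/access (p=%zu)\n",  chase  * 1e9 / N, p);
    printf("stream: %.2f ns/access (sum=%zu)\n", stream * 1e9 / N, sum);
    return 0;
}

On typical current hardware I would expect the two per-access numbers
to differ by roughly two orders of magnitude: the chase pays a full
"seek" per element, while the stream amortizes it away.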
Yeah, the comparison does break down a bit at the last step, but going
in the opposite direction, disk (of the spinning-rust variety) is an
almost perfect match for 60s tape: getting to any particular spot takes
a long time, so once you are there you had better do a lot of
sequential access!

Terje

-- 
- "almost all programming can be viewed as an exercise in caching"