From: Lynn Wheeler <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: ARM is sort of channeling the IBM 360
Date: Thu, 20 Jun 2024 14:28:04 -1000
Organization: Wheeler&Wheeler
Message-ID: <87plsb87hn.fsf@localhost>
References: <s7r87j1c3u6mim0db3ccbdvknvtjr4anu3@4ax.com> <v51tcr$26io$1@gal.iecc.com>

John Levine <johnl@taugh.com> writes:
> It's not that close. S/360 had a single key in the PSW that it matched
> against all of a program's storage references while this has the tag in
> a pointer, so it's more like a capability.
>
> The x86 protection keys are more like S/360. There's a key for each
> virtual page and a PKRU register that has to match.

360s: each 2kbytes of storage had a 4bit storage protect key ... the
executing PSW's 4bit key was matched against the storage protect key.
zero in the PSW key was reserved for system, allowing access to all
storage ... non-zero keys allowed for (isolating) up to 15 separate
concurrently executing (MVT) regions.

a little over a decade ago I was asked to track down the decision to
add virtual memory to all 370s. basically MVT storage management was so
bad that region storage requirements had to be specified four times
larger than used ... limiting the number of concurrently executing
regions to less than the number needed to keep a 1mbyte 370/165 busy
and justified. Going to a single 16mbyte virtual memory (VS2/SVS)
allowed increasing the number of concurrent regions by a factor of four
(up to 15) with little or no paging (sort of like running MVT in a CP67
16mbyte virtual machine). The biggest bit of code was creating a copy
of passed channel (I/O) programs, substituting real addresses for
virtual addresses (Ludlow borrowed "CCWTRANS" from CP67, crafting it
into MVT EXCP/SVC0).
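a rough sketch (in C, toy model only: the CCW layout is simplified and
the page-table helper and names are made up for illustration) of what a
CCWTRANS-style copy does ... walk the caller's channel program, copy
each CCW, and substitute real for virtual data addresses:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  typedef struct {                /* simplified format-0 CCW */
      uint8_t  op;                /* channel command code */
      uint32_t addr;              /* data address (24-bit in real life) */
      uint8_t  flags;             /* chaining flags */
      uint16_t count;             /* byte count */
  } ccw_t;

  #define CCW_CC 0x40             /* command chain: another CCW follows */

  /* toy page table: 16mbyte virtual address space, 4kbyte pages */
  static uint32_t page_frame[4096];   /* virtual page no -> real frame no */

  static uint32_t virt_to_real(uint32_t vaddr)
  {
      return (page_frame[vaddr >> 12] << 12) | (vaddr & 0xFFF);
  }

  /* copy the passed channel program, substituting real data addresses;
     returns the shadow copy that actually gets handed to the channel */
  static ccw_t *ccwtrans(const ccw_t *vccw)
  {
      size_t n = 1;                    /* count CCWs to end of chain */
      while (vccw[n - 1].flags & CCW_CC)
          n++;

      ccw_t *rccw = malloc(n * sizeof *rccw);
      for (size_t i = 0; i < n; i++) {
          rccw[i] = vccw[i];                          /* copy the CCW */
          rccw[i].addr = virt_to_real(vccw[i].addr);  /* fix the address */
      }
      return rccw;
  }

  int main(void)
  {
      for (uint32_t i = 0; i < 4096; i++)
          page_frame[i] = i + 100;     /* arbitrary virtual->real mapping */

      ccw_t prog[2] = {
          { 0x07, 0x201000, CCW_CC, 0 },   /* seek, chained */
          { 0x06, 0x201010, 0, 4096 },     /* read data, end of chain */
      };
      ccw_t *shadow = ccwtrans(prog);
      printf("%06x -> %06x\n", (unsigned)prog[1].addr,
             (unsigned)shadow[1].addr);
      free(shadow);
      return 0;
  }

the real thing also has to fix the pages in real storage for the
duration of the I/O and split transfers that cross page boundaries
(via data chaining), which is where most of the actual code goes.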
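and, going back to the storage keys at the top, a minimal toy model of
the key match (again C, invented names; the real keys also carried
fetch-protect and, on 370, reference/change bits, omitted here):

  #include <stdint.h>
  #include <stdio.h>

  #define BLOCK_SHIFT 11          /* 2kbyte protect blocks */
  #define NBLOCKS (1 << (24 - BLOCK_SHIFT))   /* 16mbyte real store */

  static uint8_t storage_key[NBLOCKS];        /* low 4 bits used */

  static int store_allowed(uint8_t psw_key, uint32_t addr)
  {
      if (psw_key == 0)           /* key 0: system, all storage */
          return 1;
      return storage_key[addr >> BLOCK_SHIFT] == psw_key;
  }

  int main(void)
  {
      storage_key[0x4000 >> BLOCK_SHIFT] = 5; /* region 5's storage */
      printf("%d\n", store_allowed(5, 0x4321)); /* 1: keys match */
      printf("%d\n", store_allowed(7, 0x4321)); /* 0: protect exception */
      printf("%d\n", store_allowed(0, 0x4321)); /* 1: system key */
      return 0;
  }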
trivia: the 370/165 engineers started complaining that if they had to
implement the full 370 virtual memory architecture, it would slip the
announce by six months ... so several features were dropped (including
the virtual memory segment table entry r/o flag, which would have
allowed different virtual address spaces sharing the same segment to
have it r/w in some and r/o in others). Note: other models (& software)
that had implemented the full architecture had to drop back to the
370/165 subset.

370s were getting larger fast and increasingly needed more than 15
concurrently executing regions (to keep systems busy and justified),
and so the transition to VS2/MVS: a different virtual address space for
each region (isolating each region's storage access in a different
virtual address space). However, it inherited the os/360
pointer-passing APIs and so mapped an image of the "MVS" kernel into
eight mbytes of every virtual address space (leaving eight for the
application). Also "subsystems" were mapped into separate address
spaces and (with the pointer-passing APIs) needed to access application
storage. Initially a common 1mbyte segment storage area was mapped into
all address spaces (common segment area/"CSA"). However CSA space
requirements were somewhat proportional to the number of subsystems and
concurrently executing applications, and "CSA" quickly became "common
system area". By the 3033 time-frame CSA was frequently 5-6mbytes ...
leaving 2-3mbytes for application regions (and threatening to become
8mbytes, leaving zero). This was part of the mad rush to xa/370 ...
special architecture features for MVS, including subsystems able to
concurrently access multiple address spaces (a subset was eventually
retrofitted to the 3033 as "dual-address space mode").

other trivia: in the 70s, I was pontificating that there was an
increasing mismatch between disk throughput and system throughput. In
the early 80s I wrote a tome about how relative system disk throughput
had declined by an order of magnitude since os/360 announce (systems
got 40-50 times faster, disks only got 3-5 times faster). Some disk
executive took exception and assigned the division system performance
group to refute the claim. After a couple of weeks they came back and
effectively said that I had slightly understated the case. Their
analysis was then turned into a (mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_Operating_System
presentation about configuring disks for better system throughput
(16Aug1984, SHARE 63, B874).

--
virtualization experience starting Jan1968, online at home since Mar1970