
Path: ...!news.misty.com!2.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Lynn Wheeler <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: backward architecture, The Design of Design
Date: Thu, 09 May 2024 17:45:06 -1000
Organization: Wheeler&Wheeler
Lines: 62
Message-ID: <87fruqgwjh.fsf@localhost>
References: <v03uh5$gbd5$1@dont-email.me> <20240507115433.000049ce@yahoo.com>
	<v1fim7$3t28r$1@dont-email.me> <20240508141804.00005d47@yahoo.com>
	<v1gncp$1en9$1@gal.iecc.com> <87frurn4iv.fsf@localhost>
MIME-Version: 1.0
Content-Type: text/plain
Injection-Date: Fri, 10 May 2024 05:45:12 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="f047f4e95222e02716464203aa2b7356";
	logging-data="1216952"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+NX1BXEC1PnJGY9nVfIZnOE2rTndPzIqk="
User-Agent: Gnus/5.13 (Gnus v5.13)
Cancel-Lock: sha1:iv36oM7pIj6m4+C+KkEI5Us8/PU=
	sha1:Tpykw8R3OwJhAFTtxyjLReiWt2w=
Bytes: 4809

Lynn Wheeler <lynn@garlic.com> writes:
> little over decade ago I was asked to track down decision to add virtual
> memory to all 370s and found staff to executive making the
> decision. Basically OS/360 MVT storage management was so bad, the
> execution regions had to be specified four times larger than used, as a
> result a 1mbyte 370/165 normally would only run four regions
> concurrently, insufficient to keep system busy and justified. Mapping
> MVT to a 16mbyte virtual address space (aka VS2/SVS) would allow
> increasing number of concurrently running regions by factor of four
> times (with little or no paging), keeping 165 systems busy
> ... overlapping execution with disk I/O.

In some sense, IBM CKD DASD was a technology trade-off: using disk &
channel capacity to search for information because real memory for
keeping track of it was limited. By the mid-70s that trade-off was
starting to invert. In the early 80s, I was also pontificating that
since the mid-60s 360, relative system disk throughput had declined by
an order of magnitude ... disks had gotten 3-5 times faster while
systems had gotten 40-50 times faster. A disk division executive took
exception to my statements and assigned the division performance group
to refute them. After a couple weeks, they came back and explained that
I had understated the problem. They then respun the analysis into
recommendations for optimizing disk configurations for system throughput
.... which was presented at IBM mainframe user groups.
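The order-of-magnitude claim falls straight out of the ratio of the two
speedups; a quick sketch (the 3-5x and 40-50x figures are the ones
above, the arithmetic is just illustrative):

```python
# Disk throughput *relative to the system* = disk speedup / system speedup.
# If disks got 3-5x faster while systems got 40-50x faster, the relative
# decline is the ratio of the system speedup to the disk speedup.
disk_speedup = (3, 5)
system_speedup = (40, 50)

best_case = system_speedup[0] / disk_speedup[1]    # 40/5  = 8x decline
worst_case = system_speedup[1] / disk_speedup[0]   # 50/3 ~= 16.7x decline
print(f"relative disk throughput declined {best_case:.0f}x to {worst_case:.0f}x")
```

Either end of the range rounds to roughly an order of magnitude, which
is why the performance group could only confirm (and worsen) the number.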

Now MVT->VS2/SVS was actually capped at 15 concurrently executing
regions because it was (still) using the 4-bit storage protect keys to
keep the regions separate (in a single 16mbyte virtual address space)
.... which prompted SVS->MVS with a different virtual address space for
each executing region. However, OS/360 history was heavily
pointer-passing APIs ... and to facilitate kernel calls, an 8mbyte image
of the MVS kernel was mapped into each 16mbyte application address space
(so kernel code could easily fetch/store application data). However, for
MVS, MVT subsystems were given their own virtual address spaces ... so
for passing API parameters and returning information, a common segment
area (CSA), initially a single 1mbyte segment, was (also) mapped into
every 16mbyte virtual address space (leaving 7mbytes for the
application). However, the requirement for CSA space is somewhat
proportional to the number of subsystems and the number of concurrently
running applications ... and CSA quickly became a multiple-segment area,
the "Common System Area" ... and by the late 70s and 3033, it was
commonly 5-6mbytes (leaving 2-3mbytes for applications) and threatening
to become 8mbytes (leaving zero).
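The address-space squeeze above is simple arithmetic; a sketch using the
numbers in the post (16mbyte address space, 8mbyte kernel image, CSA
growing from 1mbyte toward 8mbytes):

```python
TOTAL_MB = 16      # 24-bit virtual address space
KERNEL_MB = 8      # MVS kernel image mapped into every address space

def app_space(csa_mb):
    """mbytes left for the application after kernel + CSA mappings."""
    return TOTAL_MB - KERNEL_MB - csa_mb

print(app_space(1))   # initial 1mbyte CSA: 7mbytes for the application
print(app_space(6))   # late-70s 3033 with 6mbyte CSA: 2mbytes left
print(app_space(8))   # the feared endpoint: zero
```

Every new subsystem and every additional concurrent application pushed
the CSA argument up, which is what made the cap bite system-wide.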

That was part of the mad rush to get to 370/XA (31-bit) and MVS/XA
(while separate virtual address spaces theoretically allowed a large
number of concurrently executing programs, overlapping execution with
waiting on disk I/O, the CSA kludge had severely capped that number).

There were a number of 3033 temporary hacks. One was retrofitting part
of the 370/XA access registers to the 3033 as "dual-address space" mode.
A called subsystem in its own address space could have a secondary
address space pointing to the calling application's address space ... so
it didn't require CSA for passing & returning API information. They also
took two "unused" bits from the page table entry to prefix to the real
page number ... while all instructions could still only specify 24-bit
real & virtual addresses (16mbytes), it was possible to have
virtual->real mapping of up to 64mbytes for execution (attaching more
than 16mbytes of real storage to a 3033).
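The 64mbyte figure follows from the page geometry: with 4kbyte pages, a
24-bit address leaves a 12-bit page frame number, and two extra bits
make it 14 bits. A sketch of that arithmetic (page and address sizes are
standard 370 values; the two extra bits are the ones described above):

```python
PAGE_SIZE = 4096                  # 370 page size: 4kbytes

addr_bits = 24                    # what instructions can name: 16mbytes
frame_bits = addr_bits - 12       # 12-bit page frame number (4k pages)

extra_bits = 2                    # the two "unused" page-table bits
real_bytes = (1 << (frame_bits + extra_bits)) * PAGE_SIZE

print(real_bytes // (1024 * 1024))   # 64 (mbytes of addressable real storage)
```

So each individual program still lived in (at most) 16mbytes, but the
machine could keep 64mbytes of real pages mapped across all of them.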


-- 
virtualization experience starting Jan1968, online at home since Mar1970