Path: ...!news.nobody.at!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Lynn Wheeler <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: Architectural implications of locate mode I/O
Date: Fri, 05 Jul 2024 15:35:50 -1000
Organization: Wheeler&Wheeler
Lines: 83
Message-ID: <87frsnwbd5.fsf@localhost>
References: <v61jeh$k6d$1@gal.iecc.com> <1bed88y8na.fsf@pfeifferfamily.net>
	<v69jgf$1lg1$1@gal.iecc.com>
MIME-Version: 1.0
Content-Type: text/plain
Injection-Date: Sat, 06 Jul 2024 03:35:53 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="e13bd93f2ce0ce0c86152ba447239d8c";
	logging-data="3701759"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX18sERMq3Fk4eXdDBo/KU4k3n31Lw3hK/ws="
User-Agent: Gnus/5.13 (Gnus v5.13)
Cancel-Lock: sha1:rZI6S/zcFeSHvKuY+KLc6iiPDyI=
	sha1:7rjO0Nyn747LFDAJ9FDVCknCFxQ=
Bytes: 5895

John Levine <johnl@taugh.com> writes:
> By putting most of the logic into the printer controller, the 1403 was
> not just faster, but only took a small fraction of the CPU so the
> whole system could do more work to keep the printer printing.

360 "CKD DASD" and multi-track search trade-off. 360s had relatively
little real storage (for caching information) and slow processor, so for
program libraries on disk ... they created "PDS" format and had (disk
resident, cylinder aligned) directory that contained records for name of
each program and its disk location in the library. To load a program, it
first did a "multi-track" search of of the PDS directory started at
track 0 of the 1st cylinder of the directory ... ran until it found name
match (or reached end of cylinder). If name wasn't found at end of
cylinder, it would restart if there were additional cylinders in the
directory. Trivia: the searched-for program name was in processor memory
and the multi-track search operation would refetch the name every time
it did a compare for matching name (with records in the PDS directory),
monopolizing channel, controller, & disk.

Roll forward to 1979: a large national grocery chain had a large
loosely-coupled complex of multiple 370/168 systems sharing a string of
DASD containing the PDS dataset of store controller applications ... and
was experiencing enormous throughput problems. All the usual corporate
performance specialists had been dragged through the datacenter in the
hope that they could address the problem ... until they eventually got
around to calling me. I was brought into a large classroom with tables
covered with large stacks of activity/performance reports for each
system. After 30-40 mins examining the reports ... I began to realize
that the aggregate activity (summed across all systems) for a specific
shared disk was peaking at 6-7 (total) I/O operations per second ... and
corresponding with the severe performance problem. I asked what was on
that disk and was told it was the (shared) store controller program
library for all the stores in all regions, used by all the 168 systems;
I then strongly suspected it was the PDS multi-track search performance
problem that I had grappled with as an undergraduate in the 60s.

The store controller PDS dataset was quite large and had a three
cylinder directory, resident on a 3330 disk drive ... implying that on
average a search covered 1.5 cylinders (and took two I/Os). A 3330 spins
at 3600 RPM (60 revolutions/sec) and a multi-track search examines one
track per revolution, so the first search I/O, covering all 19 tracks of
a cylinder, would take 19/60=.317sec (during which time that processor's
channel was busy, and the shared controller was also busy ... blocking
access to all disks on that string, not just the specific drive, for all
systems in the complex) and the 2nd would take 9.5/60=.158sec ... or
.475sec for the two ... plus a seek to move the disk arm to the PDS
directory and another seek to move the disk arm to the cylinder where the
program was located ... approx. .5+secs total for each store controller
program library load, or two program loads per second (the peak 6-7 I/Os
per second aggregate) serving all stores in the country.
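
A back-of-the-envelope check of those numbers (Python; the 30ms nominal
seek time is an assumption for illustration):

    REV_PER_SEC = 60                 # 3330 spins at 3600 RPM
    TRACKS_PER_CYL = 19

    full_cyl = TRACKS_PER_CYL / REV_PER_SEC          # ~0.317 sec
    half_cyl = (TRACKS_PER_CYL / 2) / REV_PER_SEC    # ~0.158 sec
    seeks = 2 * 0.030                                # two arm moves, ~30ms each (assumed)
    per_load = full_cyl + half_cyl + seeks           # ~0.5+ sec per program load

    print(per_load, 1.0 / per_load)                  # -> ~.5 sec/load, ~2 loads/sec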

The store controller PDS program library was then split across a set of
three disks, one dedicated (non-shared) disk for each system in the
complex.

I was also doing some work on System/R (the original SQL/relational
RDBMS) and taking some flak from the IMS DBMS group down the road. The
IMS group complained that RDBMS required twice the disk space (for the
RDBMS index) and increased the number of disk I/Os by 4-5 times (for
processing the RDBMS index). The counter was that the RDBMS index
significantly reduced the manual maintenance (compared to IMS). By the
early 80s, disk price/bit was plummeting and system real memory had
significantly increased, usable for RDBMS caching that reduced physical
I/Os (while manual maintenance skill costs were significantly
increasing).

Other trivia: when I transferred to San Jose, I got to wander around
datacenters in silicon valley, including disk engineering & product test
(bldgs 14&15) across the street. They were doing prescheduled, 7x24,
stand-alone mainframe testing. They mentioned they had recently tried
MVS, but it had 15min mean-time-between-failure, requiring manual
re-ipl/reboot in that environment. I offered to rewrite the I/O
supervisor to make it bullet-proof and never fail, enabling any amount of
on-demand, concurrent testing (greatly improving productivity). The
downside was they would point their finger at me whenever they had a
problem, and I was spending an increasing amount of time diagnosing their
hardware problems.

1980 was a real tipping point as the hardware tradeoff switched and I/O
became the system bottleneck (my claim was that relative system disk
throughput had declined by an order of magnitude: systems got 40-50 times
faster while disks got only 3-5 times faster).
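
Rough arithmetic behind the order-of-magnitude claim (midpoints of the
ranges above are assumed):

    cpu_speedup = 45      # systems ~40-50 times faster
    disk_speedup = 4      # disks ~3-5 times faster
    print(disk_speedup / cpu_speedup)   # ~0.09 -> roughly a 10x relative decline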

-- 
virtualization experience starting Jan1968, online at home since Mar1970