Path: ...!npeer.as286.net!npeer-ng0.as286.net!weretis.net!feeder8.news.weretis.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Lynn Wheeler <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: architectural goals, Byte Addressability And Beyond
Date: Thu, 06 Jun 2024 09:11:06 -1000
Organization: Wheeler&Wheeler
Lines: 64
Message-ID: <87frtpx4xh.fsf@localhost>
References: <v0s17o$2okf4$2@dont-email.me> <v38opv$1gsj2$3@dont-email.me>
	<v38riq$1aqo$1@gal.iecc.com>
	<niki5jps7jn2qfkj0t3s2t82qmrjoc97pi@4ax.com>
	<v3d9bh$s9a$2@gal.iecc.com>
	<gf6l5jtdreabgsud0qss6fu0cs17udisdb@4ax.com>
	<v3ejn5$2m8fn$1@dont-email.me> <v3lqo9$48om$4@dont-email.me>
	<soE7O.5245$Ktt5.2694@fx40.iad> <v3o7ot$kfrm$5@dont-email.me>
	<cme26jttmuh5i0l31fo5ch18g221oku84q@4ax.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 06 Jun 2024 21:11:09 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="76da0d07e3fc813994df09bd8ccad762";
	logging-data="1727561"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX18oYRNrduZA87w4d/Qmf/gWxhhSsgsdobk="
User-Agent: Gnus/5.13 (Gnus v5.13)
Cancel-Lock: sha1:N65gGhMipSqTbCacV1oWs1JuGao=
	sha1:3xycfXazOBKAGgBKXmBDpZDjvbA=
Bytes: 4448

George Neuner <gneuner2@comcast.net> writes:

> On Tue, 4 Jun 2024 23:25:18 -0000 (UTC), Lawrence D'Oliveiro
> <ldo@nz.invalid> wrote:
>
>>On Tue, 04 Jun 2024 13:11:52 GMT, Scott Lurndal wrote:
>>
>>> Lawrence D'Oliveiro <ldo@nz.invalid> writes:
>>>
>>>>On Sat, 1 Jun 2024 07:47:49 -0000 (UTC), Thomas Koenig wrote:
>>>>
>>>>> One of the main selling points [of zSeries] is the hardware
>>>>> reliability ...
>>>>
>>>>Quite an expensive way to get reliability. How does an outfit like
>>>>Google achieve essentially 0% downtime? By running a swarm of half a
>>>>million commodity servers, that’s how.
>>> 
>>> And that's not expensive?
>>
>>Consider the equivalent number of mainframes, with their inbuilt 
>>diagnostics capabilities etc, to match that reliability.
>
> Can't find it now and don't remember many details, but ...
>
> A long time ago, there was a story going around about Microsoft vs IBM
> regarding the day-to-day operation of their company web sites.  It
> claimed that Microsoft was running a ~1000 machine server farm with a
> crew of ~100, whereas IBM was running 3 mainframes with a crew of ~10.

microsoft had hundreds of millions of customers that were much more
internet oriented, while IBM had thousands of customers that were much
less internet oriented (and whose rate of changing information was much
lower) ... and the IBM number may have covered only the web operation,
as opposed to total support people.

Jan1979, I was con'ed into doing a benchmark for a national lab that was
looking at getting 70 4341s for a compute farm (sort of the leading edge
of the coming cluster supercomputing tsunami). 4341s were also selling
into the same mid-range market as VAX, and in about the same numbers for
small unit orders. The big difference was that large companies were
ordering hundreds of vm/4341s at a time for deployment out into
departmental areas (sort of the leading edge of the coming distributed
computing tsunami).

The IBM batch system (MVS) was also looking at the exploding distributed
computing market. The first problem was that the only disk product for
the non-datacenter environment was FBA (fixed-block architecture), while
MVS only supported CKD (count key data). Eventually CKD simulation was
made available on FBA disks (no real CKD disks have been made for
decades now, all being simulated on industry-standard fixed-block
disks). It didn't do MVS much good, because distributed operation was
looking at dozens of systems per support person, while MVS still
required dozens of support people per system.
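
The basic idea of CKD simulation can be sketched in a few lines. This is
a toy illustration only, not IBM's actual emulation (real CKD handles
variable-length records, keys, and format writes); all geometry numbers
here are invented for the example. It just shows a CKD-style (cylinder,
head, record) address being mapped onto fixed-block LBAs:

```python
# Toy sketch of CKD-on-FBA mapping; geometry values are made up.
HEADS_PER_CYL = 15        # assumed tracks per cylinder
BLOCKS_PER_TRACK = 12     # assumed fixed blocks backing one CKD track

def ckd_to_lba(cyl: int, head: int, record: int) -> int:
    """Map a CKD record address to a fixed-block LBA (one block/record)."""
    if not (0 <= head < HEADS_PER_CYL and 0 <= record < BLOCKS_PER_TRACK):
        raise ValueError("address out of range for this toy geometry")
    track = cyl * HEADS_PER_CYL + head      # linearize cylinder/head
    return track * BLOCKS_PER_TRACK + record

print(ckd_to_lba(0, 0, 0))   # 0   -- first record of the disk
print(ckd_to_lba(1, 0, 0))   # 180 -- one full cylinder of blocks later
```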

admittedly a 14-year-old comparison: a max-configured z196 mainframe
benchmarked at 50BIPS ... still with dozens of support people. The
equivalent cloud megadatacenter was half a million or more E5-2600
blades, each benchmarked at 500BIPS, with enormous automation requiring
only 70-80 support people (per megadatacenter; at least 6000-7000
systems per person, each system ten times the max-configured mainframe)
... also, the megadatacenter comparison was linux (not windows).
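
for what it's worth, the ratios above check out with a few lines of
arithmetic (a sketch only; the 75-person staff figure is just the
midpoint of the 70-80 range quoted, and 500,000 blades is the lower
bound given):

```python
# Back-of-envelope check of the support-ratio comparison above.
blades = 500_000          # E5-2600 blades per megadatacenter (lower bound)
staff = 75                # assumed midpoint of the 70-80 figure
blade_bips = 500          # per-blade benchmark, from the post
z196_bips = 50            # max-configured z196 benchmark, from the post

print(blades // staff)            # 6666 -- systems per support person
print(blade_bips // z196_bips)    # 10   -- each blade vs the mainframe
```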

-- 
virtualization experience starting Jan1968, online at home since Mar1970