Path: ...!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Lynn McGuire <lynnmcguire5@gmail.com>
Newsgroups: rec.arts.sf.written
Subject: Re: ongoing infrastructure changes with AI in the USA
Date: Mon, 18 Nov 2024 13:46:12 -0600
Organization: A noiseless patient Spider
Lines: 63
Message-ID: <vhg5i4$1d60l$2@dont-email.me>
References: <vh3dim$2ehti$1@dont-email.me> <RfoZO.6$7ZKc.5@fx34.iad>
 <vhdf20$pbs3$1@dont-email.me> <mXH_O.18406$OVd1.4832@fx10.iad>
 <21rmjj5vlg3cb4jcr6dbu3f9m07v46hdru@4ax.com> <IaK_O.10739$eMe8.4562@fx06.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 18 Nov 2024 20:46:13 +0100 (CET)
Injection-Info: dont-email.me; posting-host="bc540513460c2acb3c6b57a63e9c6bd0";
	logging-data="1480725"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/wUb9YfcR96aOO0cIRbd6cb/NnMEKjrdo="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:BXymVCi5V15aop0NFYjYOQ1oTfA=
Content-Language: en-US
In-Reply-To: <IaK_O.10739$eMe8.4562@fx06.iad>
Bytes: 3902

On 11/18/2024 10:45 AM, Scott Lurndal wrote:
> Paul S Person <psperson@old.netcom.invalid> writes:
>> On Mon, 18 Nov 2024 14:12:34 GMT, scott@slp53.sl.home (Scott Lurndal)
>> wrote:
>>
>>> Lynn McGuire <lynnmcguire5@gmail.com> writes:
>>>> On 11/14/2024 9:00 AM, Scott Lurndal wrote:
>>>>> Lynn McGuire <lynnmcguire5@gmail.com> writes:
>>>>>> I am on the periphery of the ongoing blanketing of the USA with AI
>>>>>> servers.  I have a few facts that might just blow you away.
>>>>>>
>>>>>> The expected number of AI servers in the USA alone is presently a
>>>>>> million (SWAG).  The current cost for a single AI server is $500,000 US.
>>>>>>      1,000,000 x $500,000 = $500 billion US of capital.
>>>>>
>>>>> First, What is your source for this data?  Be specific.
>>>>>
>>>>> Second, define precisely what an "AI server" is.
>>>>
>>>> BTW, I should have mentioned that the AI Servers are not uniform in
>>>> any of their aspects.  I just gave the specs for one of the high end
>>>> machines that the manufacturer has a three month waiting list for.
>>>
>>> Some of us actually produce ML hardware.  The term AI is a marketing
>>> gimmick, not reality.
>>>
>>> ML hardware ranges from custom logic in a desktop CPU to massively
>>> parallel specialized hardware (e.g. plug-in GPUs).   An ML-enabled
>>> server can range from a simple ML accelerator block in the CPU itself
>>> (Apple, Google, Amazon, Marvell) to a large, expensive, power-hungry
>>> GPU from Nvidia.
>>>
>>> Yes, there are large racks of ML-enabled servers, particularly for
>>> training.  No, they're not really anything special other than using
>>> high-end, high performance CPUs, GPUs and high-speed interconnects
>>> (Infiniband, 100 and 400Gb ethernet).      Basically no different
>>> than any supercomputer in scale and power requirements.
>>
>> 1 million of them will rather increase the power requirements,
>> possibly causing shortages.
> 
> It is unlikely that there will be anywhere near one million
> ML training setups, which are the power-hungry setups.
> 
> ML Inference requires significantly less horsepower and will
> be distributed over millions of laptop/desktop/tablet systems.
> 
> It's just as likely that the hype bubble will burst in the
> next 18 months.

"Prediction is very difficult, especially about the future." — Niels Bohr

https://www.reddit.com/r/quotes/comments/10at8mq/prediction_is_very_difficult_especially_about_the/

BTW, my buddy who programs AI servers at the big dumb company agrees 
with you.  But they are selling hundreds of the huge AI servers each 
month and are struggling to meet demand.
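For what it's worth, the capital figure quoted upthread is straight multiplication; a quick sketch (treating both the one-million server count and the $500,000 unit price as the SWAG estimates they were stated to be, not sourced data):

```python
# Sanity check of the capex arithmetic quoted upthread.
# Both inputs are the poster's own back-of-envelope estimates.
servers = 1_000_000           # claimed number of AI servers in the USA (SWAG)
cost_per_server = 500_000     # claimed cost per high-end AI server, USD

total = servers * cost_per_server
print(f"${total:,} US")       # → $500,000,000,000 US, i.e. $500 billion
```

The $500 billion total checks out as arithmetic; whether either input figure is realistic is exactly what the rest of the thread disputes.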

Lynn