Path: news.eternal-september.org!eternal-september.org!.POSTED!not-for-mail
From: Ethan Carter <ec1828@somewhere.edu>
Newsgroups: comp.misc
Subject: Re: AI is Dehumanizing Technology
Date: Thu, 05 Jun 2025 12:46:50 -0300
Organization: A noiseless patient Spider
Lines: 53
Message-ID: <87y0u6faxh.fsf@somewhere.edu>
References: <101euu8$1519c$1@dont-email.me>
	<networks-20250601205900@ram.dialup.fu-berlin.de>
	<101kaoa$3b0c7$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain
Injection-Date: Thu, 05 Jun 2025 17:47:45 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="b51a380c01b3346b37dc30eb1e4c25ef";
	logging-data="1723837"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19FsRLCbWgKARGpxkYVBmuF2oRhSIzDLMo="
Cancel-Lock: sha1:dJ4xVsxY34PtALVjabio3v1tTZ8=
	sha1:VCc1BFvKofErBQT0/mSWZq+arLE=

Ben Collver <bencollver@tilde.pink> writes:

> On 2025-06-01, Stefan Ram <ram@zedat.fu-berlin.de> wrote:
>> Ben Collver <bencollver@tilde.pink> wrote or quoted:
>>>                        For example, to create an LLM such as
>>>ChatGPT, you'd start with an enormous quantity of text, then do a lot
>>>of computationally-intense statistical analysis to map out which
>>>words and phrases are most likely to appear near to one another.
>>>Crunch the numbers long enough, and you end up with something similar
>>>to the next-word prediction tool in your phone's text messaging app,
>>>except that this tool can generate whole paragraphs of mostly
>>>plausible-sounding word salad.
>>
>>   If you know your stuff and can actually break down AI or LLMs and get
>>   what's risky about them, speak up, because we need people like you.
>
> I remember reading about the dangers of GMO crops.  At the time a
> common modification was to make corn and soy Roundup Ready.  The
> official research said that Roundup was safe for human consumption.
>
> I read a story that some found it cheaper to douse surplus Roundup on
> wheat after the harvest rather than buy the normal desiccants.  This
> was not the intended use, nor was it the amount of human exposure
> reported in the studies.  However, it is consistent with the values
> that produced Roundup: profit being more valuable than health or
> safety.
>
> Unintended consequences are bound to come out sideways.  Did we need
> more expertise in GMOs?  No, we needed a different approach.

Quite right.

What's frightening, though, is that so long as the means to evolve these
techniques---for GMOs, automation, surveillance, et cetera---lives on,
such techniques and systems will keep evolving.  History shows that we
have never stopped developing anything because it destroys the dignity
of human life.  An approach dies only when it loses to another
approach---this is the way of techniques.  Technique is not any specific
approach; it is all techniques taken together.

This is not the age of AI; it is the age of technique, the age of
efficiency.  If you were to stop one big-shot leader from doing his
work, another would appear and take it from there.  It's an autonomous
system; it has a life of its own.

The matter discussed in the article is a superficial symptom; it only
scratches the surface.  Underneath the symptom there is a movement, a
system at work.  When doctors remove a cancer from someone's body, they
do not destroy the properties of the system that produced that cancer,
which explains why so many people get cancer, have it removed, and still
die later when new tumors appear and there is nothing more to be done.
Still, some people look at tumors and exclaim---wow, look how fast and
efficient this system is; brave new world!