Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!fu-berlin.de!uni-berlin.de!not-for-mail
From: ram@zedat.fu-berlin.de (Stefan Ram)
Newsgroups: comp.misc
Subject: Re: AI is Dehumanizing Technology
Date: 1 Jun 2025 20:08:49 GMT
Organization: Stefan Ram
Lines: 29
Expires: 1 Jun 2026 11:59:58 GMT
Message-ID: <networks-20250601205900@ram.dialup.fu-berlin.de>
References: <101euu8$1519c$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Trace: news.uni-berlin.de ZXhVVVgebGgEop4tMkYNUAEebh5KhBAGFVyzowgPH7wFkZ
Cancel-Lock: sha1:ukrlfD3NZrN9yy1aDjhXvCOZ82Y= sha256:EFJisFVKgXzK7Z6btwfmpRyqY4lZBd+QkKq6nKgRvoM=
X-Copyright: (C) Copyright 2025 Stefan Ram. All rights reserved.
Distribution through any means other than regular usenet
channels is forbidden. It is forbidden to publish this
article in the Web, to change URIs of this article into links,
and to transfer the body without this notice, but quotations
of parts in other Usenet posts are allowed.
X-No-Archive: Yes
Archive: no
X-No-Archive-Readme: "X-No-Archive" is set because this prevents some
services from mirroring the article on the web. But the article may
be kept on a Usenet archive server with only NNTP access.
X-No-Html: yes
Content-Language: en-US
Ben Collver <bencollver@tilde.pink> wrote or quoted:
> For example, to create an LLM such as
>ChatGPT, you'd start with an enormous quantity of text, then do a lot
>of computationally-intense statistical analysis to map out which
>words and phrases are most likely to appear near to one another.
>Crunch the numbers long enough, and you end up with something similar
>to the next-word prediction tool in your phone's text messaging app,
>except that this tool can generate whole paragraphs of mostly
>plausible-sounding word salad.
I see stuff like that from time to time, but it's really just
a watered-down way of explaining LLMs to kids, and it doesn't hold
up if you're actually trying to make a solid point: the way those
networks are layered means words turn into concepts, links, and
statements that aren't tied to any one way of saying things, and
that only gets turned back into language at the end - language
that clearly isn't just word salad. Sure, stats matter - whether
a drug helps 90 or 10 percent of people is a big deal, and knowing
statistically common sentence patterns is exactly what keeps output
from turning into word salad. Picking up such stats is part of
learning a language.
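
  To make the "next-word prediction from adjacent-word statistics"
idea concrete, here's a minimal Python sketch of a bigram counter.
It's a toy of my own construction, not anything like an LLM's
layered representations - it only shows what raw "which word tends
to follow which" statistics look like:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def predict_next(following, word):
    """Return the statistically most common successor of `word`."""
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigrams(
    "the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

  A model like this only ever sees surface adjacency; the point
above is precisely that layered networks go beyond it.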
The quoted text is from someone trying to make AI criticism
look bad by pretending to be an unqualified critic who just
tosses around stuff that's obviously off base.
If you know your stuff and can actually break down AI or LLMs and get
what's risky about them, speak up, because we need people like you.