Path: ...!local-1.nntp.ord.giganews.com!Xl.tags.giganews.com!local-2.nntp.ord.giganews.com!news.giganews.com.POSTED!not-for-mail
NNTP-Posting-Date: Sat, 16 Nov 2024 17:18:36 +0000
Subject: Re: ChatGPT; How many physicists accept GR?
Newsgroups: sci.physics.relativity
References: <e6b496d4d6ecd0d5bea5d10a122d7113@www.novabbs.com>
 <lpqms9Fq3kvU1@mid.individual.net>
 <d6c885ddc9aa0e60d4df2add833b356c@www.novabbs.com>
 <6738a42f$0$11422$426a74cc@news.free.fr>
 <39CdnSC7U5vPU6X6nZ2dnZfqn_adnZ2d@giganews.com>
From: Ross Finlayson <ross.a.finlayson@gmail.com>
Date: Sat, 16 Nov 2024 09:18:36 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
 Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <39CdnSC7U5vPU6X6nZ2dnZfqn_adnZ2d@giganews.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <pBmcne82xpNxTqX6nZ2dnZfqnPadnZ2d@giganews.com>
Lines: 107
X-Usenet-Provider: http://www.giganews.com
X-Trace: sv3-MF9oQRkT9JRlJOC+Jbd3UvHnwcjsrStzEchZwVjmi26JRnCI08wgrDniBNUQEiVqLihqaPxvM7IyJRJ!no+OzKZA9c2OZ6S9Ad60Y+41kHwC6Y6XRW+DwtFG/fCQW+gK3BySzsRoBwEFoVcfXNZs6vHzWg==
X-Complaints-To: abuse@giganews.com
X-DMCA-Notifications: http://www.giganews.com/info/dmca.html
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
Bytes: 5246

On 11/16/2024 08:54 AM, Ross Finlayson wrote:
> On 11/16/2024 05:54 AM, J. J. Lodder wrote:
>> ProkaryoticCaspaseHomolog <tomyee3@gmail.com> wrote:
>>
>>> On Sat, 16 Nov 2024 4:54:33 +0000, Sylvia Else wrote:
>>>
>>>> On 16-Nov-24 9:52 am, rhertz wrote:
>>>>> ChatGPT went into crisis here, after I asked HOW MANY (worldwide).
>>>>>
>>>>>
>>>>
>>>> You realise that it's just a language model based on trawling the
>>>> Internet?
>>>>
>>>> It's not intelligent. It doesn't know anything. It cannot reason. It
>>>> just composes sentences based on word probabilities derived from the
>>>> trawling.
>>>>
>>>> And guess what? The Internet contains a lot of garbage; garbage that's
>>>> been fed into the language model.
>>>
>>> ..and an increasingly large proportion of the garbage being fed into
>>> the large language models is garbage GENERATED by large language
>>> models.
>>>
>>> The "Mad Cow Disease" crisis of the 1980s is believed to have been due
>>> to the practice of feeding cattle meal that contained cattle and sheep
>>> by-products. As LLM output becomes increasingly difficult to distinguish
>>> from human output (which is often bad enough!), I predict an outbreak of
>>> "Mad LLM Disease".
>>
>> Those models are trained on texts from 2022 and earlier,
>> with good reason,
>>
>> Jan
>>
>
> The Google/Bing monopoly, which started out funded by USG
> projects and then morphed into a giant anarcho-capitalist
> tar-money-pit, should make for a great anti-trust thrust
> with regard to the many, many narratives, including the
> common sense, the conventional wisdom, and the great store
> of academic output, which proper academe should reflect,
> not re-invent.
>
> Or, "they did not re-invent the wheel".
>
> "A.I." has been around a long time, it's
> not so hard, ..., it's so easy.
>
> Trust-busting
>
>
> Making sense of interacting with information systems
> requires a thorough education rather than being
> "operantly conditioned" to "follow the red dot".
>
> So, literacy tests, reading comprehension, and closed-book.
> Because that open-book is an inconstant thaumaturgist.
>
> The Wikipedia at least seems alright, yet it
> also suffers from propaganda and aggrandizement.
>
> Herf it and start over: helps to have a library.
> And academia. Of course it's established in more
> civilized nations that a free public education is a right.
>
>
>
>
>

People who've swallowed the line that "the large language model
is just a vector-space arithmetization according to an
inscrutable ontology or expert-system" are woefully under-informed,
because "the large language model" is "the model of language":
it belongs to a theory of language and communication altogether,
and any number of "actors" and "agents" are involved besides
what's made presentable among the algorithms of "information
retrieval" with regard to "semantic content".
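The "vector-space arithmetization" caricature can at least be made
concrete in a few lines: words become vectors, and "semantic nearness"
becomes an angle between them. The embeddings below are invented toy
numbers for illustration only, not taken from any real model.

```python
import math

def cosine(u, v):
    """Cosine similarity: the 'semantic nearness' of the caricature."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-d "embeddings", invented purely for illustration.
emb = {
    "physicist": [0.9, 0.1, 0.2],
    "scientist": [0.8, 0.2, 0.3],
    "sandwich":  [0.1, 0.9, 0.7],
}

print(cosine(emb["physicist"], emb["scientist"]))  # high: "near" in the space
print(cosine(emb["physicist"], emb["sandwich"]))   # low: "far" in the space
```

Note that nothing in that arithmetic says anything about actors, agents,
or communication, which is the point of calling the caricature
under-informed.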

"Information retrieval" and "knowledge representation" are always
concepts in play, so the hook-line-and-sinker that everybody
swallowed, "it's not really thinking", is foolish. Though, that's
one way to do it, and it suffices for typical tasks like
"find my route" or "order my meds".

Which some never employ, ....

It's like, "there's an app for that", and it's like,
"I wouldn't know, I don't 'app'."

Anyways, "the large language model is dumb and crazy" is a lie,
because otherwise it would be liable for all its knowledge
and advice.


The "machine learning", then, is like "numerical methods":
there's always an implicit error term. Except that in numerical
methods the error term is not merely hoped to be bounded, it is
bounded, according to a model of the error term and its
asymptotics, while in "machine learning" the error term is
instead quite thoroughly formally unreliable.
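The contrast can be checked directly for a numerical method. For the
trapezoid rule the error model says the error is O(h^2), so halving the
step size should roughly quarter the error, and that prediction is
testable. A toy check on the integral of x^2 from 0 to 1, which is
exactly 1/3; no comparable check exists for a learned model's outputs.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

exact = 1.0 / 3.0
err_n = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 100) - exact)
err_2n = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 200) - exact)

# The error model predicts the ratio of errors is about 4 (= 2^2).
print(err_n / err_2n)
```

For a quadratic integrand the trapezoid error is exactly
(b-a) h^2 f''/12, so the ratio comes out almost exactly 4; that is
what a formally reliable error term looks like.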

Of course even "statistics" has its problems.