From: Richmond <dnomhcir@gmx.com>
Newsgroups: comp.ai.philosophy
Subject: Re: A conversation with ChatGPT's brain.
Date: Mon, 28 Apr 2025 19:11:16 +0100
Organization: Frantic
Message-ID: <86sels40mz.fsf@example.com>
References: <868qnt8qfb.fsf@example.com> <vu5sdv$2iqbu$1@dont-email.me>
	<86v7qx77eq.fsf@example.com> <vub5b9$3f365$1@dont-email.me>
	<868qnq4wu4.fsf@example.com> <vugijc$hbs5$1@dont-email.me>
	<86ldroypro.fsf@example.com> <vuobus$3o914$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Doc O'Leary , <droleary.usenet@2023.impossiblystupid.com> writes:

> Ha!  Blame the AI hype machine for making hallucination a
> “meaningless” word.  Call it whatever you like, but the fact remains
> that these programs give *incorrect answers* as part of their regular
> operation.  It’s not a “bug” that occurs in certain conditions; it
> really *is* “all output” that can be right or wrong, given with equal
> confidence.

They use the term 'hallucination' for a particular circumstance, not
for just any case where it gives a wrong answer. And anyway, human
beings give incorrect answers as part of their normal operation too.
The part I disagree with is 'equal confidence'. Searching the internet
can give you wrong answers as well, and takes much longer to do so,
especially if you end up on Quora.

>
> Don’t fool yourself into thinking chatbots are thinking.  If it isn’t
> obvious that the people you talk to are thinking more than machines,
> start hanging around smarter people.  They may challenge you to do
> more thinking, too.  Win-win in my book.

I am not fooling myself into thinking it is thinking. And anyway, it
says itself that it is not thinking. It describes how it operates: it
looks up in its database how an LLM works, and spews it out. It has no
understanding of what it is saying; it is spewing out something it
read somewhere. But what's the difference? Do you know where your
thoughts come from? Do you ever have an intuition and wonder how you
knew?

I've watched this video by Andrej Karpathy:

https://www.youtube.com/watch?v=7xTGNNLPyMI

But the end result is still amazing. I've used it to solve DIY problems
and to write bits of code.

Try asking ChatGPT: "How do I tell the difference between consciousness
and simulated consciousness?", then ask a human being, who will probably
say "Huh?"