Path: ...!weretis.net!feeder8.news.weretis.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sun, 19 May 2024 23:15:25 -0700
Organization: A noiseless patient Spider
Lines: 72
Message-ID: <v2eppu$3rio4$2@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29fi8$2l9d8$1@dont-email.me>
 <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com>
 <v2baf7$308d7$1@dont-email.me>
 <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
 <v2bhs4$31hh9$1@dont-email.me>
 <v2bm3g$7tj$1@nnrp.usenet.blueworldhosting.com>
 <v2chkc$3anli$1@dont-email.me>
 <v2d90q$22of$1@nnrp.usenet.blueworldhosting.com>
 <v2ee06$3ppfi$2@dont-email.me>
 <v2ehbd$1hmn$1@nnrp.usenet.blueworldhosting.com>
 <v2eli1$3qus1$2@dont-email.me>
 <v2em3c$hc0$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 20 May 2024 08:15:27 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="5d80f1dc6dc80e940061abf9488d5484";
	logging-data="4049668"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+Q1dkluYkGyI3S8HGE7FJd"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:T7adq93hyzyb+TlhLFJVmRnQ2x0=
Content-Language: en-US
In-Reply-To: <v2em3c$hc0$1@nnrp.usenet.blueworldhosting.com>
Bytes: 5267

On 5/19/2024 10:12 PM, Edward Rawde wrote:
> "Don Y" <blockedofcourse@foo.invalid> wrote in message
> news:v2eli1$3qus1$2@dont-email.me...
>> On 5/19/2024 8:51 PM, Edward Rawde wrote:
>>> It is my view that you don't need to know how a brain works to be able to
>>> make a brain.
>>
>> That's a fallacy.  We can't make a *plant* let alone a brain.
> 
> But we can make a system which behaves like a brain. We call it AI.

No.  It only "reasons" like a brain.  If that is all your brain was/did,
you would be an automaton.  I can write a piece of code that can tell
you your odds of winning any given DEALT poker hand (with some number
of players and a fresh deck).  That's more than a human brain can
muster, reliably.
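
(A dealt-hand odds calculator of the sort described can be sketched
as a Monte Carlo simulation.  Everything below -- the evaluator, the
`win_probability` helper, the trial count -- is hypothetical
illustration, not the program the post refers to; exact enumeration
over the remaining deck would work too.)

```python
import random
from collections import Counter

RANKS = "23456789TJQKA"   # rank index 0..12; suits are just 0..3

def hand_rank(cards):
    """Return a comparable rank tuple for a 5-card hand.
    cards: list of (rank_index, suit) pairs."""
    ranks = sorted((r for r, _ in cards), reverse=True)
    counts = Counter(ranks)
    # order ranks by multiplicity, then by rank, descending
    ordered = [r for r, _ in sorted(counts.items(),
                                    key=lambda rc: (rc[1], rc[0]),
                                    reverse=True)]
    is_flush = len({s for _, s in cards}) == 1
    uniq = sorted(set(ranks))
    is_straight = len(uniq) == 5 and uniq[-1] - uniq[0] == 4
    if set(ranks) == {12, 0, 1, 2, 3}:          # wheel: A-2-3-4-5
        is_straight, ordered = True, [3, 2, 1, 0, -1]
    shape = sorted(counts.values(), reverse=True)
    if is_straight and is_flush: cat = 8        # straight flush
    elif shape == [4, 1]:        cat = 7        # four of a kind
    elif shape == [3, 2]:        cat = 6        # full house
    elif is_flush:               cat = 5
    elif is_straight:            cat = 4
    elif shape == [3, 1, 1]:     cat = 3        # three of a kind
    elif shape == [2, 2, 1]:     cat = 2        # two pair
    elif shape == [2, 1, 1, 1]:  cat = 1        # one pair
    else:                        cat = 0        # high card
    return (cat, ordered)

def win_probability(my_hand, n_players=2, trials=20000, seed=1):
    """Estimate P(my dealt 5-card hand beats every opponent's
    random hand from the rest of a fresh deck)."""
    rng = random.Random(seed)
    deck = [(r, s) for r in range(13) for s in range(4)]
    remaining = [c for c in deck if c not in my_hand]
    mine = hand_rank(my_hand)
    wins = 0
    for _ in range(trials):
        rng.shuffle(remaining)
        opponents = (hand_rank(remaining[i*5:i*5 + 5])
                     for i in range(n_players - 1))
        if all(mine > opp for opp in opponents):
            wins += 1
    return wins / trials
```

With a few thousand trials the estimate is good to a couple of
percent -- which is exactly the mechanical part of the game; nothing
in it models the opponent.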

But, I can't factor in the behavior of other players; "Is he bluffing?"
"Will he fold prematurely?"  etc.  These are HUMAN issues that the
software (AI) can't RELIABLY accommodate.

Do AIs get depressed/happy?  Experience joy/sadness?  Revelation?
Frustration?  Addiction?  Despair?  Pain?  Shame/pride?  Fear?

These all factor into how humans make decisions.  E.g., if you
are afraid that your adversary is going to harm you (even if that
fear is unfounded), then you will react AS IF that was more of
a certainty.  A human might dramatically alter his behavior
(decision making process) if there is an emotional stake involved.

Does the AI know the human's MIND well enough to estimate the
likelihood and effect of any such influence?  Yes, Mr Spock.

I repeat, teaching a brain to "reason" is trivial.  Likewise to
recognize patterns.  Done.  Now you just need to expose it to
as many VERIFIABLE facts as you can (*who* verifies them?) and let it
do the forward chaining exercises.
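
(The forward chaining step itself really is the trivial part.  A
minimal sketch -- the fact names and rules below are invented for
illustration, not taken from any actual system:)

```python
def forward_chain(facts, rules):
    """Compute the closure of a fact base under a set of rules.
    rules: iterable of (premises, conclusion) pairs; a rule fires
    once every one of its premises is a known fact."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Hypothetical fact base: "fire burns" as a relationship of
# already-established facts, nothing more.
facts = {"fire_present", "hand_in_fire"}
rules = [
    (("fire_present", "hand_in_fire"), "hand_burned"),
    (("hand_burned",), "pain"),
]
print(sorted(forward_chain(facts, rules)))
# -> ['fire_present', 'hand_burned', 'hand_in_fire', 'pain']
```

The hard problems the post raises -- *who* verifies the input facts,
and who audits the conclusions -- are entirely outside this loop.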

Then, you need to audit its conclusions and wonder why it has
hallucinated (as it won't be able to TELL you).  Will you have
a committee examine every conclusion from the AI to determine
(within their personal limitations) if this is a hallucination
or some yet-to-be-discovered truth?  Imagine how SLOW the
effective rate of the AI becomes when you have to ensure it is CORRECT!

<https://www.superannotate.com/blog/ai-hallucinations>
<https://www.ibm.com/topics/ai-hallucinations>
<https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/>

Given how quickly an AI *can* generate outputs, this turns mankind
into a "fact checking" organization; what value is a reference if
it can't be trusted to be accurate?  What if its conclusions require
massive amounts of resources to validate?  What if there are
timeliness issues involved:  "Russia is preparing to launch a
nuclear first strike!"?  Even if you can prove this to be
inaccurate, when will you stop heeding this warning -- to your
detriment?

Beyond that, we are still waiting for humans to understand the
basis of all these other characteristics attributed to
The Brain well enough to codify them in a way that can be taught.
Yet, we can't even seem to teach them to children, reliably...

I can teach an AI that fire burns -- it's just a relationship
of already established facts in its knowledge base.  I can teach
a child that fire burns.  The child will remember the *experience*
of burning much differently than an AI (what do you do, delete a
few PN junctions to make it "feel" the pain?  permanently toast
some foils -- "scar tissue" -- so those associated abilities are
permanently impaired?)