

Path: ...!weretis.net!feeder9.news.weretis.net!newsfeed.hasname.com!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Mon, 20 May 2024 02:40:33 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 93
Message-ID: <v2er93$l62$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29fi8$2l9d8$1@dont-email.me> <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com> <v2baf7$308d7$1@dont-email.me> <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com> <v2bhs4$31hh9$1@dont-email.me> <v2bm3g$7tj$1@nnrp.usenet.blueworldhosting.com> <v2chkc$3anli$1@dont-email.me> <v2d90q$22of$1@nnrp.usenet.blueworldhosting.com> <v2ee06$3ppfi$2@dont-email.me> <v2ehbd$1hmn$1@nnrp.usenet.blueworldhosting.com> <v2eli1$3qus1$2@dont-email.me> <v2em3c$hc0$1@nnrp.usenet.blueworldhosting.com> <v2eppu$3rio4$2@dont-email.me>
Injection-Date: Mon, 20 May 2024 06:40:35 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="21698"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:5huT2MqGm8d0bvSmNdfnNum2YYY= sha256:aruPcsK8+xOgv3zkfwx4sf8s8Iy4bFrX5pM5MvbmMeU=
	sha1:AUd954kWBKjHtA+ju9A1+5UcOE0= sha256:9yKNGvX/K3eKgW9HTQtpIzv6FaUw2bFHWZV3kJt49gw=
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
X-MSMail-Priority: Normal
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
X-RFC2646: Format=Flowed; Response
Bytes: 5947

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v2eppu$3rio4$2@dont-email.me...
> On 5/19/2024 10:12 PM, Edward Rawde wrote:
>> "Don Y" <blockedofcourse@foo.invalid> wrote in message
>> news:v2eli1$3qus1$2@dont-email.me...
>>> On 5/19/2024 8:51 PM, Edward Rawde wrote:
>>>> It is my view that you don't need to know how a brain works to be
>>>> able to make a brain.
>>>
>>> That's a fallacy.  We can't make a *plant* let alone a brain.
>>
>> But we can make a system which behaves like a brain. We call it AI.
>
> No.  It only "reasons" like a brain.  If that is all your brain was/did,
> you would be an automaton.  I can write a piece of code that can tell
> you your odds of winning any given DEALT poker hand (with some number
> of players and a fresh deck).  That's more than a human brain can
> muster, reliably.

I can write a piece of code which multiplies two five-digit numbers 
together.
That's more than most human brains can muster reliably.
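The kind of odds calculator described above can be sketched as a Monte Carlo
simulation. This is only an illustration of the idea, not anyone's actual
program: it scores five-card showdown hands and estimates the chance that a
dealt hand beats some number of fresh random opponents (ties count as losses
here, to keep it simple):

```python
import random
from collections import Counter

RANKS = "23456789TJQKA"

def hand_rank(hand):
    """Rank a 5-card hand as a tuple; higher tuples beat lower ones."""
    ranks = sorted((RANKS.index(r) for r, s in hand), reverse=True)
    counts = Counter(ranks)
    # Group ranks by multiplicity, biggest groups (then highest ranks) first.
    groups = sorted(counts.items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    shape = tuple(c for _, c in groups)
    ordered = tuple(r for r, _ in groups)
    flush = len({s for r, s in hand}) == 1
    straight = len(counts) == 5 and ranks[0] - ranks[4] == 4
    if ranks == [12, 3, 2, 1, 0]:          # the wheel: A-2-3-4-5
        straight, ordered = True, (3, 2, 1, 0, 12)
    if straight and flush:    return (8, ordered)
    if shape == (4, 1):       return (7, ordered)   # four of a kind
    if shape == (3, 2):       return (6, ordered)   # full house
    if flush:                 return (5, ordered)
    if straight:              return (4, ordered)
    if shape == (3, 1, 1):    return (3, ordered)   # trips
    if shape == (2, 2, 1):    return (2, ordered)   # two pair
    if shape == (2, 1, 1, 1): return (1, ordered)   # one pair
    return (0, ordered)                             # high card

def win_probability(my_hand, n_opponents=1, trials=20000):
    """Estimate P(win) for a dealt hand against fresh random hands."""
    deck = [r + s for r in RANKS for s in "shdc"]
    rest = [c for c in deck if c not in my_hand]
    mine = hand_rank(my_hand)
    wins = 0
    for _ in range(trials):
        random.shuffle(rest)
        best = max(hand_rank(rest[i*5:(i+1)*5]) for i in range(n_opponents))
        wins += mine > best                 # strict: ties are not wins
    return wins / trials
```

The point either way is the same: the mechanical part (odds, arithmetic) is
easy for code and hard for people, while the human part (bluffing, folding)
is the reverse.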

>
> But, I can't factor in the behavior of other players; "Is he bluffing?"
> "Will he fold prematurely?"  etc.  These are HUMAN issues that the
> software (AI) can't RELIABLY accommodate.

You have given no explanation of why an AI cannot reliably accommodate this.

>
> Do AIs get depressed/happy?  Experience joy/sadness?  Revelation?
> Frustration?  Addiction?  Despair?  Pain?  Shame/pride?  Fear?

You have given no explanation of why they cannot, but you appear to believe 
that they cannot.

>
> These all factor into how humans make decisions.  E.g., if you
> are afraid that your adversary is going to harm you (even if that
> fear is unfounded), then you will react AS IF that was more of
> a certainty.  A human might dramatically alter his behavior
> (decision making process) if there is an emotional stake involved.

You have given no explanation of why an AI cannot do that just like a human 
can.

>
> Does the AI know the human's MIND to be able to estimate the
> likelihood and effect of any such influence?  Yes, Mr Spock.
>
> I repeat, teaching a brain to "reason" is trivial.  Likewise to
> recognize patterns.  Done.  Now you just need to expose it to
> as many VERIFIABLE facts (*who* verifies them?) and let it
> do the forward chaining exercises.
>
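Forward chaining is indeed mechanically simple, which is part of the point
being made here. A minimal sketch (the facts and rules are invented
placeholders, not anyone's knowledge base): repeatedly fire any rule whose
premises are all known, until nothing new can be derived.

```python
# Invented example facts and rules for illustration only.
facts = {"fire is hot", "hot things burn skin"}
rules = [
    ({"fire is hot", "hot things burn skin"}, "fire burns skin"),
    ({"fire burns skin"}, "avoid touching fire"),
]

def forward_chain(facts, rules):
    """Derive everything reachable from the facts via the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all its premises are known and it adds something.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

The hard part is not this loop; it is everything around it, i.e. whether the
starting facts are verified and whether the conclusions can be trusted.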
> Then, you need to audit its conclusions and wonder why it has
> hallucinated (as it won't be able to TELL you).  Will you have
> a committee examine every conclusion from the AI to determine
> (within their personal limitations) if this is a hallucination
> or some yet-to-be-discovered truth?  Imagine how SLOW the effective
> rate of the AI becomes when you have to ensure it is CORRECT!
>
> <https://www.superannotate.com/blog/ai-hallucinations>
> <https://www.ibm.com/topics/ai-hallucinations>
> <https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/>
>
> Given how quickly an AI *can* generate outputs, this turns mankind
> into a "fact checking" organization; what value a reference if
> it can't be trusted to be accurate?  What if its conclusions require
> massive amounts of resources to validate?  What if there are
> timeliness issues involved:  "Russia is preparing to launch a
> nuclear first strike!"?  Even if you can prove this to be
> inaccurate, when will you stop heeding this warning -- to your
> detriment?
>
> Beyond that, we are waiting for humans to understand the
> basis of all these other characteristics attributed to
> The Brain to be able to codify them in a way that can be taught.
> Yet, we can't seem to do it to children, reliably...
>
> I can teach an AI that fire burns -- it's just a relationship
> of already established facts in its knowledge base.  I can teach
> a child that fire burns.  The child will remember the *experience*
> of burning much differently than an AI (what do you do, delete a
> few P-N junctions to make it "feel" the pain?  permanently toast
> some foils -- "scar tissue" -- so those associated abilities are
> permanently impaired?)