Path: ...!weretis.net!feeder9.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 19:32:07 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 206
Message-ID: <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29fi8$2l9d8$1@dont-email.me> <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com> <v2baf7$308d7$1@dont-email.me>
Injection-Date: Sat, 18 May 2024 23:32:09 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="44215"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:tdNG7iG1Ld9JFPsGHbXzUZd6Vlc= sha256:LYytr5ybIPtXqP50UbcR+qF8jGuesir5IMzsYI8dWno=
	sha1:vQyj+9u6aT86I6jIskIyO/gOkCs= sha256:m9rA3SoEm0Evp47i1Bq23NNy4WmC8BYCXnTEHycKuuw=
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
X-Priority: 3
X-MSMail-Priority: Normal
X-RFC2646: Format=Flowed; Response
Bytes: 9852

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v2baf7$308d7$1@dont-email.me...
> On 5/18/2024 7:47 AM, Edward Rawde wrote:
>>>>> But, as the genie is
>>>>> out of the bottle, there is nothing to stop others from using/abusing 
>>>>> it
>>>>> in ways that we might not consider palatable!  (Do you really think an
>>>>> adversary will follow YOUR rules for its use -- if they see a way to
>>>>> achieve gains?)
>>>>>
>>>>> The risk from AI is that it makes decisions without being able to
>>>>> articulate
>>>>> a "reason" in a verifiable form.
>>>>
>>>> I know/have known plenty of people who can do that.
>>>
>>> But *you* can evaluate the "goodness" (correctness?) of their
>>> decisions by an examination of their reasoning.
>>
>> But then the decision has already been made so why bother with such an
>> examination?
>
> So you can update your assessment of the party's decision making
> capabilities/strategies.

But it is still the case that the decision has already been made.

>
> When a child is "learning", the parent is continually refining the
> "knowledge" the child is accumulating; correcting faulty
> "conclusions" that the child may have gleaned from its examination
> of the "facts" it encounters.

The quality of parenting varies a lot.

>
> In the early days of AI, inference engines were really slow;
> forward chaining was an exhaustive process (before Rete).
> So, it was not uncommon to WATCH the "conclusions" (new
> knowledge) that the engine would derive from its existing
> knowledge base.  You would use this to "fix" poorly defined
> "facts" so the AI wouldn't come to unwarranted conclusions.
>
> AND GATE THOSE INACCURATE CONCLUSIONS FROM ENTERING THE
> KNOWLEDGE BASE!
>
>   Women bear children.
>   The Abbess is a woman.
>   Great-great-grandmother Florence is a woman.
>   Therefore, the Abbess and Florence bear children.
>
> Now, better algorithms (Rete, et al.), faster processors,
> SIMD/MIMD, cheap/fast memory make it possible to process
> very large knowledge bases faster than an interactive "operator"
> can validate the conclusions.
>
> Other technologies don't provide information to an "agency"
> (operator) for validation; e.g., LLMs can't explain why they
> produced their output whereas a Production System can enumerate
> the rules followed for your inspection (and CORRECTION).
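
A toy forward-chainer (my own sketch; nothing like Rete's actual
matching network, just a brute-force rescan) makes both the trace and
the need for gating concrete:

    # Minimal forward-chaining sketch (illustrative only).
    facts = {("woman", "Abbess"), ("woman", "Florence")}
    rules = [
        # Overly broad rule: "women bear children" fires for ANY woman,
        # which is exactly the unwarranted conclusion quoted above.
        (("woman",), "bears_children"),
    ]

    trace = []
    changed = True
    while changed:
        changed = False
        for preds, concl in rules:
            for kind, name in list(facts):
                if kind in preds and (concl, name) not in facts:
                    facts.add((concl, name))
                    trace.append(f"{concl}({name}) via {kind}({name})")
                    changed = True

    print("\n".join(trace))
    # bears_children(Abbess) via woman(Abbess)   <- should be gated out
    # bears_children(Florence) via woman(Florence)

An operator watching that trace could veto both conclusions before they
enter the knowledge base, which is the gating being described.
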
>
>>>   So, you can
>>> opt to endorse their decision or reject it -- regardless of
>>> THEIR opinion on the subject.
>>>
>>> E.g., if a manager makes stupid decisions regarding product
>>> design, you can decide if you want to deal with the
>>> inevitable (?) outcome from those decisions -- or "move on".
>>> You aren't bound by his decision making process.
>>>
>>> With AIs making societal-scale decisions (directly or
>>> indirectly), you get caught up in the side-effects of those.
>>
>> Certainly AI decisions will depend on their training, just as human
>> decisions do.
>
> But human learning happens over years and often in a supervised context.
> AIs "learn" so fast that only another AI would be productive at
> refining its training.

In that case, how did AlphaZero manage to teach itself to play chess by 
playing against itself?
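
The usual self-play recipe is roughly this (a bare-bones sketch of the
idea only; AlphaZero's real method adds Monte Carlo tree search, a
value network, and enormous hardware):

    import random

    def play_game(policy):
        """Play one toy game; return the states seen and the winner."""
        states, state = [], 0
        for _ in range(10):                # stand-in for a real game
            states.append(state)
            state += policy(state)
        return states, random.choice((+1, -1))

    def improve(policy, states, winner):
        """Nudge the policy toward the winning side's moves."""
        return policy                      # placeholder for a gradient step

    policy = lambda s: random.choice((0, 1))
    for generation in range(1000):
        states, winner = play_game(policy)        # same net plays BOTH sides,
        policy = improve(policy, states, winner)  # so it is its own teacher

No human supervisor is needed because the training signal is simply who
won each game.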

>
>> And you can still decide whether to be bound by that decision.
>> Unless, of course, the AI has got itself into a position where it can
>> make you do it anyway by persuasion, coercion, or force.
>
> Consider the mammogram example.  The AI is telling you that this
> sample indicates the presence -- or likelihood -- of cancer.
> You have a decision to make... an ACTIVE choice:  do you accept
> its Dx or reject it?  Each choice comes with a risk/cost.
> If you ignore the recommendation, injury (death?) can result from
> your "inaction" on the recommendation.  If you take some remedial
> action, injury (in the form of unnecessary procedures/surgery)
> can result.
>
> Because the AI can't *explain* its "reasoning" to you, you have no way
> of updating your assessment of its (likely) correctness -- esp in
> THIS instance.

I'm not sure I get why it's so essential to have AI explain its reasons.
If I need some plumbing done I don't expect the plumber to give detailed 
reasons why a specific type of pipe was chosen. I just want it done.
If I want to play chess with a computer I don't expect it to give detailed 
reasons why it made each move. I just expect it to win if it's set to 
anything much above beginner level.
A human chess player may be able to give detailed reasons for making a 
specific move but would not usually be asked to do this.
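
And in the mammogram case above, the accept/reject choice can be made
mechanically from the AI's stated probability and the costs of each kind
of error, with no reasoning attached (all numbers below are invented
purely for illustration):

    # Expected-cost comparison for acting on an unexplained Dx.
    p_cancer = 0.08          # probability implied by the AI's output
    cost_missed = 100.0      # relative cost of an untreated cancer
    cost_unneeded = 5.0      # relative cost of unnecessary procedures

    expected_if_ignore = p_cancer * cost_missed          # 8.0
    expected_if_act = (1 - p_cancer) * cost_unneeded     # 4.6
    print("act" if expected_if_act < expected_if_ignore else "ignore")
    # The weak point: with no explanation there is no way to
    # sanity-check p_cancer, the one number everything hinges on.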

>
>> Just like humans do.
>> Human treatment of other animals tends not to be of the best, except in a
>> minority of cases.
>> How do we know that AI will treat us in a way we consider to be 
>> reasonable?
>
> The AI doesn't care about you, one way or the other.  Any "bias" in
> its conclusions has been baked in from the training data/process.

Same with humans.

>
> Do you know what that data was?  Can you assess its bias?  Do the folks
> who *compiled* the training data know?  Can they "tease" the bias out
> of the data -- or, are they oblivious to its presence?

Humans have the same issue. You can't see into another person's brain to 
find out what biases they may have.
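
With a training set, at least, part of the bias can be measured, e.g.
outcome rates per group (toy example with invented data):

    from collections import defaultdict

    # Toy bias check: positive-outcome rate per group in a data set.
    rows = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    for g in sorted(totals):
        print(g, round(positives[g] / totals[g], 2))   # A 0.67, B 0.33
    # A skew like this may reflect reality, sampling, or labelling;
    # the numbers alone can't tell you which.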

>
> Lots of blacks in prison.  Does that "fact" mean that blacks are
> more criminally inclined?  Or, that they are less skilled at evading
> the consequences of their crimes?  Or, that there is a bias in the
> legal/enforcement system?

I don't see how that's relevant to AI, which I think is just as capable 
of bias as humans are.

>
> All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming
> into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
> tend to commit FEWER crimes)?  Will the audience be biased in its
> acceptance/rejection of that "assertion"?

Who knows, but whether it's human or AI it will have its own personality 
and its own biases.
That's why I started this with "One thing which bothers me about AI is 
that if it's like us but way more intelligent than us then..."

>
>> Human managers often don't. Sure you can make a decision to leave that 
>> job
>> but it's not an option for many people.
>>
>> Actors had better watch out if this page is anything to go by:
>> https://openai.com/index/sora/
>>
>> I remember a discussion with a colleague many decades ago about where
>> computers were going in the future.
>> My view was that at some future time, human actors would no longer be
>> needed.
>> His view was that he didn't think that would ever be possible.
>
> If I was a "talking head" (news anchor, weather person), I would be VERY
> afraid for my future livelihood.  Setting up a CGI newsroom would be
> a piece of cake.  No need to pay for "personalities", "wardrobe",
> "hair/makeup", etc.  "Tune" voice and appearance to fit the preferences
> of the viewership.  Let viewers determine which PORTIONS of the WORLD
> news they want to see/hear presented without incurring the need for
> a larger staff (just feed the stories from the wire services to your
> *CGI* talking heads!)
>
> And that's not even beginning to address other aspects of the
> "presentation" (e.g., turn left girls).
>
> Real estate agents would likely be the next to go; much of their
========== REMAINDER OF ARTICLE TRUNCATED ==========