
Path: ...!news.nobody.at!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 17:41:28 -0700
Organization: A noiseless patient Spider
Lines: 281
Message-ID: <v2bhs4$31hh9$1@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29fi8$2l9d8$1@dont-email.me>
 <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com>
 <v2baf7$308d7$1@dont-email.me>
 <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 19 May 2024 02:41:41 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="2280490cb45f7d091eec621fb3eef257";
	logging-data="3196457"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/OVYzLpPUSTzFoQDghBqN3"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:bIldT04xp+8m/UPFq307tCY9VKU=
In-Reply-To: <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
Content-Language: en-US
Bytes: 14055

On 5/18/2024 4:32 PM, Edward Rawde wrote:
>>> But then the decision has already been made so why bother with such an
>>> examination?
>>
>> So you can update your assessment of the party's decision making
>> capabilities/strategies.
> 
> But it is still the case that the decision has already been made.

That doesn't mean that YOU have to abide by it.  Or, even that
the other party has ACTED on the decision.  I.e., decisions are
not immutable.

>> When a child is "learning", the parent is continually refining the
>> "knowledge" the child is accumulating; correcting faulty
>> "conclusions" that the child may have gleaned from its examination
>> of the "facts" it encounters.
> 
> The quality of parenting varies a lot.

Wouldn't you expect the training for AIs to similarly vary
in capability?

>>>>    So, you can
>>>> opt to endorse their decision or reject it -- regardless of
>>>> THEIR opinion on the subject.
>>>>
>>>> E.g., if a manager makes stupid decisions regarding product
>>>> design, you can decide if you want to deal with the
>>>> inevitable (?) outcome from those decisions -- or "move on".
>>>> You aren't bound by his decision making process.
>>>>
>>>> With AIs making societal-scale decisions (directly or
>>>> indirectly), you get caught up in the side-effects of those.
>>>
>>> Certainly AI decisions will depend on their training, just as human
>>> decisions do.
>>
>> But human learning happens over years and often in a supervised context.
>> AIs "learn" so fast that only another AI would be productive at
>> refining its training.
> 
> In that case how did AlphaZero manage to teach itself to play chess by
> playing against itself?

Because it was taught how to learn from its own actions.
It thus qualifies as "another AI".
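
A toy sketch of that self-play idea (a made-up subtraction game with a
tabular value update -- nothing like AlphaZero's actual networks and tree
search): the only "teacher" in the loop is the game outcome itself.

```python
import random

# Toy self-play: two copies of the same agent play "subtract 1-3 from 21,
# whoever takes the last one wins", sharing one value table that is updated
# from game outcomes alone.  Illustrative only -- NOT AlphaZero's algorithm.

def train(episodes=20000, alpha=0.1, eps=0.2):
    # value[n]: estimated chance of winning when it is your turn with n left
    value = {n: 0.5 for n in range(22)}
    for _ in range(episodes):
        n, history = 21, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:                        # explore
                m = random.choice(moves)
            else:                                            # exploit: leave
                m = min(moves, key=lambda k: value[n - k])   # opponent worst
            history.append(n)
            n -= m
        reward = 1.0      # the player who just moved took the last one: win
        for state in reversed(history):
            value[state] += alpha * (reward - value[state])
            reward = 1.0 - reward                            # alternate sides
    return value

table = train()
# Multiples of 4 are the known losing positions in this game; the learned
# values come to reflect that without anyone having told the agent so.
print("from 21, learned best move:", min((1, 2, 3), key=lambda k: table[21 - k]))
```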

I bake a lot.  My Rxs are continuously evolving.  How did I
manage to "teach myself" how to bake *better* than my earlier
efforts?  There was no external agency (like the creator of the AI)
that endowed me with that skillset or desire.

>>> And you can still decide whether to be bound by that decision.
>>> Unless, of course, the AI has got itself into a position where it will
>>> see
>>> you do it anyway by persuasion, coercion, or force.
>>
>> Consider the mammogram example.  The AI is telling you that this
>> sample indicates the presence -- or likelihood -- of cancer.
>> You have a decision to make... an ACTIVE choice:  do you accept
>> its Dx or reject it?  Each choice comes with a risk/cost.
>> If you ignore the recommendation, injury (death?) can result from
>> your "inaction" on the recommendation.  If you take some remedial
>> action, injury (in the form of unnecessary procedures/surgery)
>> can result.
>>
>> Because the AI can't *explain* its "reasoning" to you, you have no way
>> of updating your assessment of its (likely) correctness -- esp in
>> THIS instance.
> 
> I'm not sure I get why it's so essential to have AI explain its reasons.

Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons?  Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly
accepting them?
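
The mammogram dilemma above is, at bottom, a base-rate problem: if you
can't interrogate the AI's reasoning in THIS instance, the best you can
do is fold its flag into prior statistics.  A toy Bayes calculation (all
numbers assumed for illustration, not real mammography figures):

```python
# How much should a positive AI flag move your belief that cancer is
# present?  Plain Bayes' rule; the prevalence/sensitivity/false-positive
# numbers below are illustrative assumptions, not clinical data.

def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive flag) via Bayes' rule."""
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

# Assumed: 1% prevalence, 90% sensitivity, 9% false-positive rate.
print(round(posterior(0.01, 0.90, 0.09), 3))   # -> 0.092
```

Even a quite accurate flag leaves most positives false at low prevalence,
which is exactly why the accept/reject choice carries real cost either way.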

> If I need some plumbing done I don't expect the plumber to give detailed
> reasons why a specific type of pipe was chosen. I just want it done.

If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce
your opinion/suspicions.

We hired folks to paint the house many years ago.  One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?"  This question was chosen because it sounds
innocent enough that a customer would plausibly ask it.

One candidate answered "300 gallons".  At which point, I couldn't
contain my affront:  "We're not painting a f***ing BATTLESHIP!"

I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
   EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"

In either case, he was disqualified BY his "reasoning".

In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning".  Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.

> If I want to play chess with a computer I don't expect it to give detailed
> reasons why it made each move. I just expect it to win if it's set to much
> above beginner level.

Then you don't expect to LEARN from the chess program.
When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
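
That record-and-replay idea is trivial to mechanize.  A minimal sketch
(the squares and pieces here are just illustrative strings, not a
rules-checking chess engine):

```python
# Keep the move record, then step through it to rebuild each intermediate
# position -- the thing a learner would inspect to spot the overlooked move.

START = {"e2": "P", "e7": "p", "g1": "N"}   # tiny assumed board fragment

def replay(start, moves):
    """Yield the position after each move; moves are (from_sq, to_sq) pairs."""
    pos = dict(start)
    for src, dst in moves:
        pos[dst] = pos.pop(src)   # move (or capture onto) the target square
        yield dict(pos)           # snapshot, so earlier positions survive

moves = [("e2", "e4"), ("e7", "e5"), ("g1", "f3")]
for i, pos in enumerate(replay(START, moves), 1):
    print(i, pos)
```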

As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.

If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)

> A human chess player may be able to give detailed reasons for making a
> specific move but would not usually be asked to do this.

If the human was expected to TEACH then those explanations would be
essential TO that teaching!

If the student was wanting to LEARN, then he would select a player that
was capable of teaching!

>>> Just like humans do.
>>> Human treatment of other animals tends not to be of the best, except in a
>>> minority of cases.
>>> How do we know that AI will treat us in a way we consider to be
>>> reasonable?
>>
>> The AI doesn't care about you, one way or the other.  Any "bias" in
>> its conclusions has been baked in from the training data/process.
> 
> Same with humans.

That's not universally true.  If it were, then all decisions would
be motivated purely by personal gain.

>> Do you know what that data was?  Can you assess its bias?  Do the folks
>> who *compiled* the training data know?  Can they "tease" the bias out
>> of the data -- or, are they oblivious to its presence?
> 
> Humans have the same issue. You can't see into another person's brain to see
> what bias they may have.

Exactly.  But, you can pose questions of them and otherwise observe their
behaviors in unrelated areas and form an opinion.

I've a neighbor who loudly claims NOT to be racist.  But, if you take the
whole of your experiences with him and the various comments he has made
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to conclude otherwise.
========== REMAINDER OF ARTICLE TRUNCATED ==========