Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sun, 19 May 2024 19:31:37 -0700
Organization: A noiseless patient Spider
Lines: 115
Message-ID: <v2ecma$3pjer$3@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com>
 <v29fji$2l9d8$2@dont-email.me>
 <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com>
 <v2b845$2vo5o$2@dont-email.me>
 <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com>
 <v2bmtr$364pd$1@dont-email.me>
 <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com>
 <v2bpm2$36hos$3@dont-email.me>
 <v2bqtf$1rlo$1@nnrp.usenet.blueworldhosting.com>
 <v2c3hr$385ds$1@dont-email.me>
 <v2d5ff$1rds$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 20 May 2024 04:31:40 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="5d80f1dc6dc80e940061abf9488d5484";
	logging-data="3984859"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+zE3SWmCyHawBIFhb0tXyU"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:rtmV18vS3ZqG0yzhn6MrGc1WA2I=
In-Reply-To: <v2d5ff$1rds$1@nnrp.usenet.blueworldhosting.com>
Content-Language: en-US
Bytes: 6423

On 5/19/2024 8:22 AM, Edward Rawde wrote:
>> That depends on the qualities and capabilities that you lump into
>> "HUMAN intelligence".  Curiosity?  Creativity?  Imagination?  One
>> can be exceedingly intelligent and of no more "value" than an
>> encyclopedia!
> 
> Brains appear to have processing and storage spread throughout the brain;
> there is no separate information processing and separate storage.
> Some brain areas may be more processing than storage (cerebellum?).
> So AI should be trainable to be of whatever value is wanted, which no
> doubt will be maximum value.

How do you *teach* creativity?  Curiosity?  Imagination?  How do you
MEASURE these to see if your teaching is actually accomplishing its goals?
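
(For concreteness: about the closest current practice comes to "instilled
curiosity" is bolting an explicit novelty bonus onto a reinforcement
learner's reward -- a toy sketch in Python, all names and constants
invented for illustration:)

    import math
    from collections import defaultdict

    visit_count = defaultdict(int)   # times each state has been seen
    BETA = 0.1                       # strength of the curiosity bonus

    def shaped_reward(state, extrinsic_reward):
        """Pay the agent extra for states it has rarely visited."""
        visit_count[state] += 1
        return extrinsic_reward + BETA / math.sqrt(visit_count[state])

Note the only "measurement" that falls out of this is state coverage
(len(visit_count)) -- a far cry from measuring curiosity as humans
mean it, which is rather the point.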

>> I am CERTAIN that AIs will be able to process the information available
>> to "human practitioners" (in whatever field) at least to the level of
>> competence that they (humans) can, presently.  It's just a question of
>> resources thrown at the AI and the time available for it to "respond".
>>
>> But, this ignores the fact that humans are more resourceful at probing
>> the environment than AIs ("No thumbs!") without mechanical assistance.
> 
> So AI will get humans to do it. At least initially.

No, humans will *decide* if they want to invest the effort to
provide the AI with the data it seeks -- assuming the AI knows
how to express those goals.

"Greetings, Dr Mengele..."

If there comes a time when the AI has its own "effectors",
how do we know it won't engage in "immoral" behaviors?

>> Could (would?) an AI decide to explore space?
> 
> Definitely. And it would not be constrained by the need for a specific
> temperature, air composition and pressure, and g.

Why would *it* opt to make the trip?  Surely, it could wait indefinitely
for light-speed data transmission back to earth...

How would it evaluate the cost-benefit tradeoff for such an enterprise?
Or, would it just assume that whatever IT wanted was justifiable?
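
(Putting numbers on "wait indefinitely" -- a back-of-envelope one-way
light-delay calculation; the distances are rough figures:)

    C_KM_PER_S = 299_792                    # speed of light
    destinations_km = {
        "Mars (average)": 225e6,            # ~12.5 light-minutes
        "Proxima Centauri": 4.0e13,         # ~4.2 light-years
    }
    for name, km in destinations_km.items():
        s = km / C_KM_PER_S                 # one-way delay in seconds
        print(f"{name}: {s/60:.1f} min ({s/3.156e7:.2f} yr) one way")

A patient machine loses nothing by waiting minutes -- or even years --
for the data to come to *it*.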

>>   Or, the ocean depths?
>> Or, the rain forest?  Or, would its idea of exploration merely be a
>> visit to another net-neighbor??
> 
> Its idea would be what it had become due to its training, just like a
> human.

Humans inherently want to explore.  There is nothing "inherent" in
an AI; you have to PUT those goals into it.

Should it want to explore what happens when two nuclear missiles
collide in mid air?  Isn't that additional data that it could use?
Or, what happens if we consume EVEN MORE fossilized carbon?  So it
can tune its climate models for the species that FOLLOW man?

>> Would (could) it consider human needs as important?
> 
> Depends on whether it is trained to.
> It may in some sense keep us as pets.

How do you express those "needs"?  How do you explain morality to
a child?  Love?  Belonging?  Purpose?  How do you measure your success
in instilling these needs/beliefs?

>>   (see previous post)
>> How would it be motivated?
> 
> Same way humans are.

So, AIs have the same inherent NEEDS that humans do?

The technological part of "AI" is the easy bit.  We already know general
approaches and, with resources, can refine those.  The problem (as I've
tried to suggest above) is instilling some sense of morality in the AI.
Humans seem to need legal mechanisms to prevent them from engaging in
behaviors that are harmful to society.  These are only partially
successful and rely on The Masses to push back on severe abuses.  Do you
build a shitload of AIs and train them to have independent goals with
a shared goal of preventing any ONE (or more) from interfering with
THEIR "individual" goals?

How do you imbue an AI with the idea of "self"?  (so, in the degenerate case,
it is willing to compromise and join with others to contain an abuser?)
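
(A toy model of that "mutual containment" arrangement, entirely
hypothetical -- independent agents sharing one rule: quarantine any
peer a majority flags as an abuser:)

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        contained: bool = False

    def containment_vote(agents, suspect, judged_harmful):
        """Quarantine `suspect` if a strict majority of free peers object."""
        peers = [a for a in agents if a is not suspect and not a.contained]
        votes = sum(1 for a in peers if judged_harmful(a, suspect))
        if votes > len(peers) / 2:
            suspect.contained = True
        return suspect.contained

    agents = [Agent("A"), Agent("B"), Agent("C"), Agent("D")]
    # Stand-in judgment: every free peer happens to flag agent D.
    containment_vote(agents, agents[3], lambda peer, s: s.name == "D")

The hard part, of course, isn't the voting mechanics; it's where
judged_harmful() comes from -- which is the morality problem all
over again.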

>> Would it attempt to think beyond its
>> limitations (something humans always do)?  Or, would those be immutable
>> in its understanding of the world?
>>
>>> I don't mean to suggest that AI will become human, or will need to become
>>> human. It will more likely have its own agenda.
>>
>> Where will that agenda come from?
> 
> No-one knows exactly. That's why "One thing which bothers me about AI is
> that if it's like us but way more intelligent than us then..."
> 
> Maybe we need Gort (The Day the Earth Stood Still) but the problem with
> that is: will Gort be American, Chinese, Russian, Other, or none of the
> above?
> My preference would be none of the above.
> 
>> Will it inherit it from watching B-grade
>> sci-fi movies?  "Let there be light!"