Path: ...!weretis.net!feeder9.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sun, 19 May 2024 23:45:25 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 181
Message-ID: <v2eh0n$lt4$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com> <v29fji$2l9d8$2@dont-email.me> <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com> <v2b845$2vo5o$2@dont-email.me> <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com> <v2bmtr$364pd$1@dont-email.me> <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com> <v2bpm2$36hos$3@dont-email.me> <v2bqtf$1rlo$1@nnrp.usenet.blueworldhosting.com> <v2c3hr$385ds$1@dont-email.me> <v2d5ff$1rds$1@nnrp.usenet.blueworldhosting.com> <v2ecma$3pjer$3@dont-email.me>
Injection-Date: Mon, 20 May 2024 03:45:27 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="22436"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:UYK+dPKseK6ezPDChnMnYSB8mRc= sha256:8YFxEpN+spFT4zWfzAYhyl4Xs5b4U128rg5YbYCwfYA=
	sha1:LjnTCFe6tsmAHIFVmBXeVPv8wMM= sha256:8E8g3+yBpsLBRc9VpuK78nYd9aGPNdl6b62HTSZE0F0=
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
X-Priority: 3
X-MSMail-Priority: Normal
X-RFC2646: Format=Flowed; Response
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
Bytes: 8341

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v2ecma$3pjer$3@dont-email.me...
> On 5/19/2024 8:22 AM, Edward Rawde wrote:
>>> That depends on the qualities and capabilities that you lump into
>>> "HUMAN intelligence".  Curiosity?  Creativity?  Imagination?  One
>>> can be exceedingly intelligent and of no more "value" than an
>>> encyclopedia!
>>
>> Brains appear to have processing and storage spread throughout the brain.
>> There is no separate information processing and separate storage.
>> Some brain areas may be more processing than storage (cerebellum?)
>> So AI should be trainable to be of whatever value is wanted, which no
>> doubt will be maximum value.
>
> How do you *teach* creativity?  curiosity?  imagination?  How do you
> MEASURE these to see if your teaching is actually accomplishing its goals?

Same way as with a human.

>
>>> I am CERTAIN that AIs will be able to process the information available
>>> to "human practitioners" (in whatever field) at least to the level of
>>> competence that they (humans) can, presently.  It's just a question of
>>> resources thrown at the AI and the time available for it to "respond".
>>>
>>> But, this ignores the fact that humans are more resourceful at probing
>>> the environment than AIs ("No thumbs!") without mechanical assistance.
>>
>> So AI will get humans to do it. At least initially.
>
> No, humans will *decide* if they want to invest the effort to
> provide the AI with the data it seeks -- assuming the AI knows
> how to express those goals.

Much of what humans do is decided by others.

>
> "Greetings, Dr Mengele..."
>
> If there comes a time when the AI has its own "effectors",
> how do we know it won't engage in "immoral" behaviors?

We don't.

>
>>> Could (would?) an AI decide to explore space?
>>
>> Definitely. And it would not be constrained by the need for a specific
>> temperature, air composition and pressure, and g.
>
> Why would *it* opt to make the trip?

Same reason humans might if it were possible.
Humans sometimes take their pets with them on vacation.

>  Surely, it could wait indefinitely
> for light-speed data transmission back to earth...

Surely it could also sleep for many years on the way to another star 
without noticing.

>
> How would it evaluate the cost-benefit tradeoff for such an enterprise?

Same way a human does.

> Or, would it just assume that whatever IT wanted was justifiable?

Why would it do anything different from what a human would do, if it's 
trained to be human-like?

>
>>>   Or, the ocean depths?
>>> Or, the rain forest?  Or, would its idea of exploration merely be a
>>> visit to another net-neighbor??
>>
>> Its idea would be what it had become due to its training, just like a
>> human.
>
> Humans inherently want to explore.  There is nothing "inherent" in
> an AI; you have to PUT those goals into it.

What you do is you make an AI which inherently wants to explore.
You might in some way train it that it's good to explore.

>
> Should it want to explore what happens when two nuclear missiles
> collide in mid air?  Isn't that additional data that it could use?
> Or, what happens if we consume EVEN MORE fossilized carbon.  So it
> can tune its climate models for the species that FOLLOW man?
>
>>> Would (could) it consider human needs as important?
>>
>> Depends on whether it is trained to.
>> It may in some sense keep us as pets.
>
> How do you express those "needs"?  How do you explain morality to
> a child?  Love?  Belonging?  Purpose?  How do you measure your success
> in instilling these needs/beliefs?

Same way as you do with humans.

>
>>>   (see previous post)
>>> How would it be motivated?
>>
>> Same way humans are.
>
> So, AIs have the same inherent NEEDS that humans do?

Why wouldn't they if they're trained to be like humans?

>
> The technological part of "AI" is the easy bit.  We already know general
> approaches and, with resources, can refine those.  The problem (as I've
> tried to suggest above) is instilling some sense of morality in the AI.

Same with humans.

> Humans seem to need legal mechanisms to prevent them from engaging in
> behaviors that are harmful to society.  These are only partially
> successful and rely on The Masses to push back on severe abuses.  Do you
> build a shitload of AIs and train them to have independent goals with
> a shared goal of preventing any ONE (or more) from interfering with
> THEIR "individual" goals?

No, you just make them like humans.

So as AI gets better and better there is clearly a lot to think about.
Otherwise it may become more like humans than we would like.

I don't claim to know how you do this or that with AI.
But I do know that we now seem to be moving towards being able to make 
something which matches the complexity of the human central nervous system.
I don't say we are there yet and I don't know when we will be.
In the past it would have been unthinkable that we could really make 
something like a human brain, because nothing of sufficient complexity 
could be made.
It is my view that you don't need to know how a brain works to be able to 
make a brain.
You just need something of sufficient complexity which learns to become 
what you want it to become.
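That idea can be illustrated with a toy learner (my own sketch in Python, 
not anything from this thread): nowhere below does the designer write the 
logic of the OR function. There is only a generic adjustable threshold unit 
and a feedback rule, and the behaviour emerges from training against the 
environment.

```python
import random

random.seed(0)

def predict(w, x):
    # generic threshold unit: w[0] is the bias, the rest are weights
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s > 0 else 0

# the "environment": the OR truth table as (input, feedback) pairs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# start from random structure -- no behaviour is built in
w = [random.uniform(-1, 1) for _ in range(3)]

# perceptron learning rule: nudge weights toward the observed feedback
for _ in range(20):
    for x, target in data:
        err = target - predict(w, x)
        w[0] += 0.1 * err
        for i in range(2):
            w[i + 1] += 0.1 * err * x[i]

print([predict(w, x) for x, _ in data])  # learned OR: [0, 1, 1, 1]
```

A single threshold unit is of course nowhere near "sufficient complexity" 
for a brain; the point is only that the trainer never needs to know how 
the learned solution works internally.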

You seem to think that humans have something which AI can never have.
I don't. So perhaps we should leave it there.

>
> How do you imbue an AI with the idea of "self"?  (so, in the degenerate
> case, it is willing to compromise and join with others to contain an abuser?)
>
>>> Would it attempt to think beyond its
>>> limitations (something humans always do)?  Or, would those be immutable
>>> in its understanding of the world?
>>>
>>>> I don't mean to suggest that AI will become human, or will need to
>>>> become human. It will more likely have its own agenda.
>>>
>>> Where will that agenda come from?
>>
>> No-one knows exactly. That's why "One thing which bothers me about AI is
>> that if it's like us but way more intelligent than us then..."
>>
>> Maybe we need Gort (The Day the Earth Stood Still) but the problem with
>> that is: will Gort be American, Chinese, Russian, Other, or none of the
>> above?
>> My preference would be none of the above.
>>
>>> Will it inherit it from watching B-grade
>>> sci-fi movies?  "Let there be light!"
>>>
>>>
>>
>>
>
> 
========== REMAINDER OF ARTICLE TRUNCATED ==========