

Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 22:43:11 -0700
Organization: A noiseless patient Spider
Lines: 39
Message-ID: <v2c3hr$385ds$1@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com>
 <v29fji$2l9d8$2@dont-email.me>
 <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com>
 <v2b845$2vo5o$2@dont-email.me>
 <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com>
 <v2bmtr$364pd$1@dont-email.me>
 <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com>
 <v2bpm2$36hos$3@dont-email.me>
 <v2bqtf$1rlo$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 19 May 2024 07:43:24 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="2280490cb45f7d091eec621fb3eef257";
	logging-data="3413436"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/GEv7WLKtoCwNRpb396TB8"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:v1RMI9ec173RaKN/ixfkq0xdDgw=
Content-Language: en-US
In-Reply-To: <v2bqtf$1rlo$1@nnrp.usenet.blueworldhosting.com>
Bytes: 3461

On 5/18/2024 8:15 PM, Edward Rawde wrote:
> "Don Y" <blockedofcourse@foo.invalid> wrote in message
> news:v2bpm2$36hos$3@dont-email.me...
>> On 5/18/2024 7:34 PM, Edward Rawde wrote:
> 
> So is it ok if I take a step back here and ask whether you think that AI/AGI
> has some inherent limitation which means it will never match human
> intelligence?
> Or do you think that AI/AGI will, at some future time, match human
> intelligence?

That depends on the qualities and capabilities that you lump into
"HUMAN intelligence".  Curiosity?  Creativity?  Imagination?  One
can be exceedingly intelligent and of no more "value" than an
encyclopedia!

I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of
competence that they (humans) can, presently.  It's just a question of
resources thrown at the AI and the time available for it to "respond".

But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.
Could (would?) an AI decide to explore space?  Or, the ocean depths?
Or, the rain forest?  Or, would its idea of exploration merely be a
visit to another net-neighbor??

Would (could) it consider human needs as important?  (see previous post)
How would it be motivated?  Would it attempt to think beyond its
limitations (something humans always do)?  Or, would those be immutable
in its understanding of the world?

> I don't mean to suggest that AI will become human, or will need to become
> human. It will more likely have its own agenda.

Where will that agenda come from?  Will it inherit it from watching B-grade
sci-fi movies?  "Let there be light!"