

Path: ...!weretis.net!feeder9.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 22:34:47 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 137
Message-ID: <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com> <v29fji$2l9d8$2@dont-email.me> <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com> <v2b845$2vo5o$2@dont-email.me> <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com> <v2bmtr$364pd$1@dont-email.me>
Injection-Date: Sun, 19 May 2024 02:34:50 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="36607"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:XVuEzSGjzx32ysV3QtielGuFJ54= sha256:ljCnQJvXqiUSWhTg9c55nSBHUl2mZIJAy9iYejvyUlM=
	sha1:3LEx1XNyLRggBOmP6KSy9bJG/Bo= sha256:HAe9qctmzV87pI0IW+Tzf0+39q3e97OSYjE1pWA6HWI=
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
X-RFC2646: Format=Flowed; Response
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
X-MSMail-Priority: Normal
Bytes: 6216

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v2bmtr$364pd$1@dont-email.me...
> On 5/18/2024 3:49 PM, Edward Rawde wrote:
>>>>>> What is a decision?
>>>>>
>>>>> Any option to take one fork vs. another.
>>>>
>>>> So a decision is a decision.
>>>
>>> A decision is a choice.  A strategy is HOW you make that choice.
>>>
>>>> Shouldn't a decision be that which causes a specific fork to be chosen?
>>>
>>> Why?  I choose to eat pie.  The reasoning behind the choice may be
>>> as banal as "because it's already partially eaten and will spoil if
>>> not consumed soon" or "because that is what my body craves at this
>>> moment" or "because I want to remove that item from the refrigerator
>>> to make room for some other item recently acquired".
>>>
>>>> In other words the current state of a system leads it to produce a
>>>> specific future state?
>>>
>>> That defines a strategic goal.  Choices (decisions) are made all the
>>> time.  Their *consequences* are often not considered in the process!
>>
>> In that case I'm not seeing anything different between the decisions,
>> goals and choices made by a human brain and those made by an AI system.
>
> There is none.  The motivation for a human choice or goal pursuit will
> likely be different than that of an AI.

Yes

>  Does an AI have *inherent* needs
> (that haven't been PLACED THERE)?

I'm not sure I follow that.

>
>> But what started this was "People are invariably misled by thinking that
>> there is "intelligence" involved in the technology".
>>
>> So perhaps I should be asking what is intelligence? And can a computer
>> have it?
>> Was the computer which created these videos intelligent?
>> https://openai.com/index/sora/
>> Plenty of decisions and choices must have been made and I don't see
>> anything in the "Historical footage of California during the gold rush"
>> which says it's not a drone flying over a set made for a movie.
>> The goal was to produce the requested video.
>> Some of the other videos do scream AI but that may not be the case in a
>> year or two.
>> In any case the human imagination is just as capable of imagining a scene
>> with tiny red pandas as it is of imagining a scene which could exist in
>> reality.
>> Did the creation of these videos require intelligence?
>> What exactly IS intelligence?
>> I might also ask what is a reason?
>
> Reason is not confined to humans.  It is just a mechanism of connecting
> facts to achieve a goal/decision/outcome.
>
> Intelligence maps imagination onto reality.  Again, would an AI
> have created /The Persistence of Memory/ without previously having
> encountered a similar exemplar?  The idiot savant who can perform
> complex calculations in his head, in very little time -- but who can't
> see the flaw in the missing dollar riddle?
>
> Knock knock.
> Who's there?
> Banana
> Banana who?
>
> Knock knock.
> Who's there?
> Banana
> Banana who?
>
> ...
>
> Knock knock.
> Who's there?
> Banana
> Banana who?
>
> Knock knock.
> Who's there?
> Orange
> Orange who?
> Orange you glad I didn't say Banana?
>
> Would an AI "think" to formulate a joke based on the APPROXIMATELY
> similar sounds of "Aren't" and "Orange"?

Um, well, they don't sound similar to me, but maybe I have a different accent.

>
> Guttenberg has an interesting test for sentience that he poses to
> Number5 in Short Circuit.  The parallel would be, can an AI (itself!)
> appreciate humor?  Or, only as a tool towards some other goal?
>
> Why do YOU tell jokes?  How much of it is to amuse others vs.
> to feed off of their reactions?  I.e., is it for you, or them?
>
> Is a calculator intelligent?  Smart?  Creative?  Imaginative?

That reminds me of a religious teacher many decades ago, when we had to have
one hour of "religious education" per week for some reason.
Typical of his questions were "why does a calculator never get a sum wrong?"
and "can a computer make decisions?".
Also typical were statements such as "a dog can't tell the difference
between right and wrong. Only humans can."
Being very shy at the time, I just sat there thinking "there's wishful
thinking for you".

>
> You can probably appreciate the cleverness and philosophical
> aspects of Theseus's paradox.  Would an AI?  Even if it
> could *explain* it?
>
>>>> I don't claim to know what a decision is but I think it's interesting
>>>> that it seems to be one of those questions everyone knows the answer
>>>> to until they're asked.