
Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 19:07:44 -0700
Organization: A noiseless patient Spider
Lines: 105
Message-ID: <v2bmtr$364pd$1@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com>
 <v29fji$2l9d8$2@dont-email.me>
 <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com>
 <v2b845$2vo5o$2@dont-email.me>
 <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 19 May 2024 04:07:57 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="2280490cb45f7d091eec621fb3eef257";
	logging-data="3347245"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19B+zVkrfZiyjCzBgqVI+Nf"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:MMr67hlm5iuDatwUqQEkgfiGUCc=
In-Reply-To: <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com>
Content-Language: en-US
Bytes: 5265

On 5/18/2024 3:49 PM, Edward Rawde wrote:
>>>>> What is a decision?
>>>>
>>>> Any option to take one fork vs. another.
>>>
>>> So a decision is a decision.
>>
>> A decision is a choice.  A strategy is HOW you make that choice.
>>
>>> Shouldn't a decision be that which causes a specific fork to be chosen?
>>
>> Why?  I choose to eat pie.  The reasoning behind the choice may be
>> as banal as "because it's already partially eaten and will spoil if
>> not consumed soon" or "because that is what my body craves at this moment"
>> or "because I want to remove that item from the refrigerator to make room
>> for some other item recently acquired".
>>
>>> In other words the current state of a system leads it to produce a
>>> specific
>>> future state?
>>
>> That defines a strategic goal.  Choices (decisions) are made all the time.
>> Their *consequences* are often not considered in the process!
> 
> In that case I'm not seeing any difference between decisions, goals, and
> choices made by a human brain and those made by an AI system.

There is none.  The motivation for a human choice or goal pursuit will
likely be different than that of an AI.  Does an AI have *inherent* needs
(that haven't been PLACED THERE)?

> But what started this was "People are invariably misled by thinking that
> there is "intelligence" involved in the technology".
> 
> So perhaps I should be asking what is intelligence? And can a computer have
> it?
> Was the computer which created these videos intelligent?
> https://openai.com/index/sora/
> Plenty of decisions and choices must have been made and I don't see anything
> in the "Historical footage of California during the gold rush" which says
> it's not a drone flying over a set made for a movie.
> The goal was to produce the requested video.
> Some of the other videos do scream AI but that may not be the case in a year
> or two.
> In any case the human imagination is just as capable of imagining a scene
> with tiny red pandas as it is of imagining a scene which could exist in
> reality.
> Did the creation of these videos require intelligence?
> What exactly IS intelligence?
> I might also ask what is a reason?

Reason is not confined to humans.  It is just a mechanism of connecting
facts to achieve a goal/decision/outcome.

Intelligence maps imagination onto reality.  Again, would an AI
have created /The Persistence of Memory/ without previously having
encountered a similar exemplar?  Consider the idiot savant who can
perform complex calculations in his head, in very little time -- yet
can't see the flaw in the missing dollar riddle.
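(For anyone who hasn't met the riddle: three guests pay $10 each for a
$30 room; the clerk refunds $5 via a bellhop, who pockets $2 and returns
$1 to each guest.  The riddle then adds the guests' $27 to the bellhop's
$2 and asks where the "missing" dollar went.  The flaw is just
misdirected arithmetic, as a minimal sketch shows:)

```python
# Missing dollar riddle: three guests pay $10 each for a $30 room.
# The clerk refunds $5 via a bellhop, who keeps $2 and returns $1 to each.
paid_each = 10 - 1           # each guest is out $9 after the $1 refund
guests_paid = 3 * paid_each  # $27 total out of the guests' pockets

hotel_keeps = 25
bellhop_keeps = 2

# The riddle misleads by ADDING the bellhop's $2 to the guests' $27
# (giving 29) -- but the $27 already CONTAINS that $2.  The correct
# accounting is that the $27 splits into what the hotel and bellhop hold:
assert guests_paid == hotel_keeps + bellhop_keeps  # 27 = 25 + 2; nothing missing
```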

Knock knock.
Who's there?
Banana
Banana who?

Knock knock.
Who's there?
Banana
Banana who?

...

Knock knock.
Who's there?
Banana
Banana who?

Knock knock.
Who's there?
Orange
Orange who?
Orange you glad I didn't say Banana?

Would an AI "think" to formulate a joke based on the APPROXIMATELY
similar sounds of "Aren't" and "Orange"?

Guttenberg poses an interesting test for sentience to Number 5 in
/Short Circuit/.  The parallel would be: can an AI (itself!)
appreciate humor?  Or only use it as a tool toward some other goal?

Why do YOU tell jokes?  How much of it is to amuse others vs.
to feed off of their reactions?  I.e., is it for you, or them?

Is a calculator intelligent?  Smart?  Creative?  Imaginative?

You can probably appreciate the cleverness and philosophical
aspects of Theseus's paradox.  Would an AI?  Even if it
could *explain* it?

>>> I don't claim to know what a decision is but I think it's interesting
>>> that
>>> it seems to be one of those questions everyone knows the answer to until
>>> they're asked.