
Path: ...!3.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 19:54:46 -0700
Organization: A noiseless patient Spider
Lines: 107
Message-ID: <v2bpm2$36hos$3@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29c0i$1sj0$1@nnrp.usenet.blueworldhosting.com>
 <v29fji$2l9d8$2@dont-email.me>
 <v2adc3$19i5$1@nnrp.usenet.blueworldhosting.com>
 <v2b845$2vo5o$2@dont-email.me>
 <v2bb9d$fth$1@nnrp.usenet.blueworldhosting.com>
 <v2bmtr$364pd$1@dont-email.me>
 <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 19 May 2024 04:54:59 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="2280490cb45f7d091eec621fb3eef257";
	logging-data="3360540"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/BwVpuLezKG+KoniO0fFyp"
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.2.2
Cancel-Lock: sha1:StBmjjBotripafnG2REV6JQXyEc=
In-Reply-To: <v2boga$13nv$1@nnrp.usenet.blueworldhosting.com>
Content-Language: en-US
Bytes: 5437

On 5/18/2024 7:34 PM, Edward Rawde wrote:
>>   Does an AI have *inherent* needs
>> (that haven't been PLACED THERE)?
> 
> I'm not sure I follow that.

cf. Maslow’s "Hierarchy of Needs".  Does an AI have *any*?
If I ensured you had food and shelter -- and nothing else -- would
you survive as a healthy organism?  If I gave the AI electricity
and data, would it?

>> Intelligence maps imagination onto reality.  Again, would an AI
>> have created /The Persistence of Memory/ without previously having
>> encountered a similar exemplar?  The idiot savant who can perform
>> complex calculations in his head, in very little time -- but who can't
>> see the flaw in the missing dollar riddle?
>>
>> Knock knock.
>> Who's there?
>> Banana
>> Banana who?
>>
>> Knock knock.
>> Who's there?
>> Banana
>> Banana who?
>>
>> ..
>>
>> Knock knock.
>> Who's there?
>> Banana
>> Banana who?
>>
>> Knock knock.
>> Who's there?
>> Orange
>> Orange who?
>> Orange you glad I didn't say Banana?
>>
>> Would an AI "think" to formulate a joke based on the APPROXIMATELY
>> similar sounds of "Aren't" and "Orange"?
> 
> Um well they don't sound similar to me but maybe I have a different accent.

It's a *stretch*.  Would an AI make that "leap" or be limited to
only words (objects) that it knows to rhyme with "aren't"?  Would
it *expect* humans to be able to fudge the sound of orange, in
their minds, to make a connection to "aren't"?

Would it see the humor in "May the Schwartz be with you?"  Or,
the silliness of an actor obviously walking on his knees to
appear short?  Or, other innuendo?

Would an AI expect humans to notice the "sotto voce" dieseling of
Banzai's jet car and appreciate the humor?

As kids, we "learn" the format of the "Knock, Knock" joke.  Some
folks obviously keep that in mind as they travel through life and find
other opportunities to fit humorous anecdotes into that format,
using their minds to manipulate these observations into a
more humorous form (why?  Do they intend to earn a living telling
knock, knock jokes??)

[There tends to be a correlation between intelligence and appreciation
of humor.  E.g., 
<https://www.newsweek.com/funny-people-higher-iq-more-intelligent-685585>]

>> Guttenberg has an interesting test for sentience that he poses to
>> Number5 in Short Circuit.  The parallel would be, can an AI (itself!)
>> appreciate humor?  Or, only as a tool towards some other goal?
>>
>> Why do YOU tell jokes?  How much of it is to amuse others vs.
>> to feed off of their reactions?  I.e., is it for you, or them?
>>
>> Is a calculator intelligent?  Smart?  Creative?  Imaginative?
> 
> That reminds me of a religious teacher many decades ago when we had to have
> one hour of "religious education" per week for some reason.
> Typical of his questions were "why does a calculator never get a sum wrong?"
> and "can a computer make decisions?".
> Also typical were statements such as "a dog can't tell the difference
> between right and wrong. Only humans can."
> Being very shy at the time I just sat there thinking "there's wishful
> thinking for you".

The calculator example was deliberate.  If an AI trained on
mammograms notices a correlation (yet to be discovered
by humans), is it really intelligent?  Or, is it just
performing a different *calculation*?  In which case,
isn't it just a yawner?

>> You can probably appreciate the cleverness and philosophical
>> aspects of Theseus's paradox.  Would an AI?  Even if it
>> could *explain* it?
>>
>>>>> I don't claim to know what a decision is but I think it's interesting
>>>>> that it seems to be one of those questions everyone knows the answer
>>>>> to until they're asked.
>>
>>
> 
>