Path: ...!weretis.net!feeder9.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 10:47:27 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 69
Message-ID: <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29fi8$2l9d8$1@dont-email.me>
Injection-Date: Sat, 18 May 2024 14:47:29 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="37595"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:lBFhjM6DyU8tJtiq4m0+T8y/EdQ= sha256:B0Bihmo4OHbDmgJY9GFSMxreTAHKXH4/ZaA98WqsxYw=
	sha1:yd8ajuGUCvzb36V5XADuMi3fA1A= sha256:iBl+1LBi3OYf4zhUJwlk05NZMcwUJYuTAys623y799A=
X-Priority: 3
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
X-MSMail-Priority: Normal
X-RFC2646: Format=Flowed; Response
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
Bytes: 4054

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v29fi8$2l9d8$1@dont-email.me...
> On 5/17/2024 9:46 PM, Edward Rawde wrote:
>>> Where it will be in 10 years is impossible to predict.
>>
>> I agree.
>
> So, you can be optimistic (and risk disappointment) or
> pessimistic (and risk being pleasantly surprised).
> Unfortunately, the consequences aren't as trivial as
> choosing between the steak or lobster...
>
>>> But, as the genie is
>>> out of the bottle, there is nothing to stop others from using/abusing it
>>> in ways that we might not consider palatable!  (Do you really think an
>>> adversary will follow YOUR rules for its use -- if they see a way to
>>> achieve gains?)
>>>
>>> The risk from AI is that it makes decisions without being able to
>>> articulate
>>> a "reason" in a verifiable form.
>>
>> I know/have known plenty of people who can do that.
>
> But *you* can evaluate the "goodness" (correctness?) of their
> decisions by an examination of their reasoning.

But then the decision has already been made, so why bother with such an 
examination?

>  So, you can
> opt to endorse their decision or reject it -- regardless of
> THEIR opinion on the subject.
>
> E.g., if a manager makes stupid decisions regarding product
> design, you can decide if you want to deal with the
> inevitable (?) outcome from those decisions -- or "move on".
> You aren't bound by his decision making process.
>
> With AIs making societal-scale decisions (directly or
> indirectly), you get caught up in the side-effects of those.

Certainly AI decisions will depend on their training, just as human 
decisions do.
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it will see 
that you do it anyway, by persuasion, coercion, or force.
Just as humans do.
Human treatment of other animals tends not to be the best, except in a 
minority of cases.
How do we know that an AI will treat us in a way we consider reasonable?
Human managers often don't. Sure, you can decide to leave that job, but 
that's not an option for many people.

Actors had better watch out if this page is anything to go by:
https://openai.com/index/sora/

I remember a discussion with a colleague many decades ago about where 
computers were going in the future.
My view was that at some future time, human actors would no longer be 
needed.
His view was that it would never be possible.
Now it looks like I might live long enough to get to type something like:
Prompt: Create a new episode of Blake's Seven.

>
>