From: Don Y <blockedofcourse@foo.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sun, 19 May 2024 22:02:56 -0700
Message-ID: <v2eli1$3qus1$2@dont-email.me>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com>
 <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com>
 <v28rap$2e811$3@dont-email.me>
 <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com>
 <v29aso$2kjfs$1@dont-email.me>
 <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com>
 <v29fi8$2l9d8$1@dont-email.me>
 <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com>
 <v2baf7$308d7$1@dont-email.me>
 <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
 <v2bhs4$31hh9$1@dont-email.me>
 <v2bm3g$7tj$1@nnrp.usenet.blueworldhosting.com>
 <v2chkc$3anli$1@dont-email.me>
 <v2d90q$22of$1@nnrp.usenet.blueworldhosting.com>
 <v2ee06$3ppfi$2@dont-email.me>
 <v2ehbd$1hmn$1@nnrp.usenet.blueworldhosting.com>
In-Reply-To: <v2ehbd$1hmn$1@nnrp.usenet.blueworldhosting.com>

On 5/19/2024 8:51 PM, Edward Rawde wrote:
> It is my view that you don't need to know how a brain works to be able to
> make a brain.

That's a fallacy.  We can't make a *plant*, let alone a brain.

> You just need something which has sufficient complexity which learns to
> become what you want it to become.

So, you don't know what a brain is.  And you don't know how it learns.
Yet you magically expect it to do so?

> You seem to think that humans have something which AI can never have.

I designed a resource allocation mechanism to allow competing
agents to "bid" for the resources that they needed to achieve
their individual goals.  The thought was that they could each
reach some sort of homeostatic equilibrium, at which point
the available resources would be fairly apportioned to achieve
whatever *could* be achieved with them (because the resources
available can change, and the demands placed on them can
change as well).

My thinking was that I could endow each "task" with different
amounts of "cash" to suggest their relative levels of importance.
They could then interactively "bid" with each other for resources:
"How much is it WORTH to you to meet your goals?"

This was a colossal failure, because bidding STRATEGY is difficult
to codify in a manner that can learn and meet its own goals.
Some tasks would "shoot their wad" and still not be guaranteed to
"purchase" the resources they needed IN THE FACE OF OTHER COMPETITORS.
Others would spread themselves too thin and find themselves losing
out to more modest "bidders".

A human faces a similar situation when going to an auction with a fixed
amount of cash.  If you find an item of interest, you have to make
some judgement call as to how much of your available budget to
risk on that item, knowing that if you WIN the bid, your reserves
for other items (whose competitors are yet to be seen) will be
reduced.

And, if you allow this to be a fluid/interactive process where bidders
can ADJUST their bids, dynamically (up or down), then the system
oscillates until some bidder "goes all in".
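
A toy model of that escalation (again, purely illustrative; the
fixed raise step is my own assumption):

def escalate(budget_a, budget_b, step=1.0):
    # Whoever is currently losing raises by `step`, until one
    # bidder hits his budget cap -- i.e., "goes all in".
    bid_a = bid_b = 0.0
    while bid_a < budget_a and bid_b < budget_b:
        if bid_a <= bid_b:
            bid_a = min(bid_a + step, budget_a)
        else:
            bid_b = min(bid_b + step, budget_b)
        print(f"A: {bid_a}  B: {bid_b}")

escalate(5.0, 3.0)    # terminates only when B is all-in at 3.0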

The failure is not in the concept but, rather, the implementation.
*I* couldn't figure out how to *teach* (code) a strategy that
COULD win as often as it SHOULD win; I had hoped for more than
the results available from more trivial approaches.

AI practitioners don't know how to teach anything unrelated to
"chaining facts in a knowledge base" or "looking for patterns in
data".  These are relatively simple undertakings that just rely
on throwing resources at the problem.

E.g., a *child* can understand how an inference engine works:
Knowledge base:
   Children get parties on their birthday.
   You are a child.
   Today is your birthday.
Conclusion:
   You will have a party today!
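
The whole engine fits in a dozen lines (a sketch of forward
chaining over a set of facts, not any particular product's API):

facts = {"is_child", "is_birthday"}
rules = [
    # (antecedents, consequent): if all antecedents hold, conclude.
    ({"is_child", "is_birthday"}, "gets_party"),
]

changed = True
while changed:                  # chain until nothing new follows
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print("gets_party" in facts)    # True -- you will have a party!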

So, AIs will be intelligent but lack many (all?) of the other
HUMAN characteristics that we tend to associate with intelligence
(creativity, imagination, originality, intuition, etc.).