
Path: ...!weretis.net!feeder9.news.weretis.net!panix!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!nnrp.usenet.blueworldhosting.com!.POSTED!not-for-mail
From: "Edward Rawde" <invalid@invalid.invalid>
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Mon, 20 May 2024 01:12:10 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Lines: 86
Message-ID: <v2em3c$hc0$1@nnrp.usenet.blueworldhosting.com>
References: <bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com> <v28fi7$286e$1@nnrp.usenet.blueworldhosting.com> <v28rap$2e811$3@dont-email.me> <v292p9$18cb$1@nnrp.usenet.blueworldhosting.com> <v29aso$2kjfs$1@dont-email.me> <v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com> <v29fi8$2l9d8$1@dont-email.me> <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com> <v2baf7$308d7$1@dont-email.me> <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com> <v2bhs4$31hh9$1@dont-email.me> <v2bm3g$7tj$1@nnrp.usenet.blueworldhosting.com> <v2chkc$3anli$1@dont-email.me> <v2d90q$22of$1@nnrp.usenet.blueworldhosting.com> <v2ee06$3ppfi$2@dont-email.me> <v2ehbd$1hmn$1@nnrp.usenet.blueworldhosting.com> <v2eli1$3qus1$2@dont-email.me>
Injection-Date: Mon, 20 May 2024 05:12:12 -0000 (UTC)
Injection-Info: nnrp.usenet.blueworldhosting.com;
	logging-data="17792"; mail-complaints-to="usenet@blueworldhosting.com"
Cancel-Lock: sha1:S5q5TROvPuX/sPiCZUZqSO6yA98= sha256:4FJA3SVted78PPUuPGI7aQNKllG8ZoJgazUrab2JX/s=
	sha1:Sssu5xyK+Mxwq8DUcVjAZIbJr50= sha256:vn/s8TzC+ClZ5xDWv/bqwuokrbWkeWuCEb0lCaVtkR0=
X-MSMail-Priority: Normal
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
X-RFC2646: Format=Flowed; Response
X-Newsreader: Microsoft Outlook Express 6.00.2900.5931
Bytes: 5303

"Don Y" <blockedofcourse@foo.invalid> wrote in message 
news:v2eli1$3qus1$2@dont-email.me...
> On 5/19/2024 8:51 PM, Edward Rawde wrote:
>> It is my view that you don't need to know how a brain works to be able to
>> make a brain.
>
> That's a fallacy.  We can't make a *plant* let alone a brain.

But we can make a system which behaves like a brain. We call it AI.

>
>> You just need something of sufficient complexity which learns to
>> become what you want it to become.
>
> So, you don't know what a brain is.

Humans clearly have one (well, most of them), and AI is moving along 
similar lines.

>And, you don't know how it learns.

Correct.

> Yet, magically expect it to do so?

There is nothing magical about it, because it demonstrably does learn.
That it learns is therefore a fact, not magic.

>
>> You seem to think that humans have something which AI can never have.
>
> I designed a resource allocation mechanism to allow competing
> agents to "bid" for the resources that they needed to achieve
> their individual goals.  The thought was that they could each
> reach some sort of homeostatic equilibrium at which point
> the available resources would be fairly apportioned to achieve
> whatever *could* be achieved with the available system resources
> (because resources available can change and demands placed on them
> could change as well).
>
> My thinking was that I could endow each "task" with different
> amounts of "cash" to suggest their relative levels of importance.
> They could then interactively "bid" with each other for resources;
> "How much is it WORTH to you to meet your goals?"
>
> This was a colossal failure.  Because bidding STRATEGY is difficult
> to codify in a manner that can learn and meet its own goals.
> Some tasks would "shoot their wad" and still not be guaranteed to
> "purchase" the resources they needed IN THE FACE OF OTHER COMPETITORS.
> Others would spread themselves too thin and find themselves losing
> out to more modest "bidders".
>
> A human faces a similar situation when going to an auction with a fixed
> amount of cash.  If you find an item of interest, you have to make
> some judgement call as to how much of your available budget to
> risk on that item, knowing that if you WIN the bid, your reserves
> for other items (whose competitors are yet to be seen) will be
> reduced.
>
> And, if you allow this to be a fluid/interactive process where bidders
> can ADJUST their bids, dynamically (up or down), then the system
> oscillates until some bidder "goes all in".
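That adjust-until-all-in dynamic is easy to see in a toy simulation. Here's a minimal Python sketch (the two tasks and their budgets are invented for illustration, not taken from your design): each bidder raises its bid to just beat the current top bid, until one of them is pinned at its budget cap.

```python
# Toy sketch of the dynamic bid-adjustment loop described above.
# Two competing tasks with fixed (illustrative) budgets each raise
# their bid to just beat the current top bid, until one hits its
# budget cap -- i.e. effectively "goes all in".
budgets = {"task_a": 100.0, "task_b": 90.0}
bids = {t: 0.0 for t in budgets}
leader = None

while True:
    settled = True
    for t in budgets:
        top = max(bids.values())
        if bids[t] < top or leader is None:
            raise_to = min(budgets[t], top + 1.0)  # outbid by a small step
            if raise_to > bids[t]:
                bids[t] = raise_to
                leader = t
                settled = False
    if settled:
        break

# Bids ratchet upward until task_b is pinned at its 90.0 cap and
# task_a wins by bidding just above it.
print(leader, bids)
```

With only two bidders and monotonically rising bids this settles; allow bids to move down as well as up, as you describe, and there is no such guarantee of convergence.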
>
> The failure is not in the concept but, rather, in the implementation.
> *I* couldn't figure out how to *teach* (code) a strategy that
> COULD win as often as it SHOULD win, because I hoped for more than
> the results available from more trivial approaches.
>
> AI practitioners don't know how to teach issues unrelated to "chaining
> facts in a knowledge base" or "looking for patterns in data".  These
> are relatively simple undertakings that just rely on resources.
>
> E.g., a *child* can understand how an inference engine works:
> Knowledge base:
>   Children get parties on their birthday.
>   You are a child.
>   Today is your birthday.
> Conclusion:
>   You will have a party today!
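Agreed that this part is simple. That toy knowledge base runs through a forward-chaining loop in a few lines of Python (the fact/rule encoding below is my own illustrative choice, not any particular engine's):

```python
# Minimal forward-chaining inference over the example knowledge base.
facts = {"child(you)", "birthday(you)"}  # You are a child; today is your birthday.
rules = [
    # Children get parties on their birthday:
    ({"child(you)", "birthday(you)"}, "party(you)"),
]

# Repeatedly fire any rule whose antecedents are all known facts,
# until no new facts can be derived.
changed = True
while changed:
    changed = False
    for antecedents, conclusion in rules:
        if antecedents <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule, record the new fact
            changed = True

print("party(you)" in facts)  # the engine concludes: you will have a party today
```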
>
> So, AIs will be intelligent but lack many (all?) of the other
> HUMAN characteristics that we tend to associate with intelligence
> (creativity, imagination, originality, intuition, etc.).
>