From: "Edward Rawde"
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Mon, 20 May 2024 02:40:33 -0400
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Message-ID:
References:

"Don Y" wrote in message news:v2eppu$3rio4$2@dont-email.me...
> On 5/19/2024 10:12 PM, Edward Rawde wrote:
>> "Don Y" wrote in message news:v2eli1$3qus1$2@dont-email.me...
>>> On 5/19/2024 8:51 PM, Edward Rawde wrote:
>>>> It is my view that you don't need to know how a brain works to be
>>>> able to make a brain.
>>>
>>> That's a fallacy.  We can't make a *plant* let alone a brain.
>>
>> But we can make a system which behaves like a brain. We call it AI.
>
> No.  It only "reasons" like a brain.  If that is all your brain was/did,
> you would be an automaton.  I can write a piece of code that can tell
> you your odds of winning any given DEALT poker hand (with some number
> of players and a fresh deck).  That's more than a human brain can
> muster, reliably.

I can write a piece of code which multiplies two five-digit numbers
together. That's more than most human brains can muster reliably.
(A rough sketch of the sort of poker-odds program you describe is at the
end of this post.)

>
> But, I can't factor in the behavior of other players; "Is he bluffing?"
> "Will he fold prematurely?" etc.  These are HUMAN issues that the
> software (AI) can't RELIABLY accommodate.

You have given no explanation of why an AI cannot reliably accommodate
this.

>
> Do AIs get depressed/happy?  Experience joy/sadness?  Revelation?
> Frustration?  Addiction?  Despair?  Pain?  Shame/pride?  Fear?

You have given no explanation of why they cannot, but you appear to
believe that they cannot.

>
> These all factor into how humans make decisions.  E.g., if you
> are afraid that your adversary is going to harm you (even if that
> fear is unfounded), then you will react AS IF that was more of
> a certainty.  A human might dramatically alter his behavior
> (decision making process) if there is an emotional stake involved.

You have given no explanation of why an AI cannot do that just like a
human can.

>
> Does the AI know the human's MIND to be able to estimate the
> likelihood and affect of any such influence?

Yes, Mr Spock.

>
> I repeat, teaching a brain to "reason" is trivial.  Likewise to
> recognize patterns.  Done.  Now you just need to expose it to
> as many VERIFIABLE facts (*who* verifies them?) and let it
> do the forward chaining exercises.
>
> Then, you need to audit its conclusions and wonder why it has
> hallucinated (as it won't be able to TELL you).  Will you have
> a committee examine every conclusion from the AI to determine
> (within their personal limitations) if this is a hallucination
> or some yet-to-be-discovered truth?  Imagine how SLOW the
> effective rate of the AI when you have to ensure it is CORRECT!
>
> Given how quickly an AI *can* generate outputs, this turns mankind
> into a "fact checking" organization; what value a reference if
> it can't be trusted to be accurate?  What if its conclusions require
> massive amounts of resources to validate?  What if there are
> timeliness issues involved: "Russia is preparing to launch a
> nuclear first strike!"?  Even if you can prove this to be
> inaccurate, when will you stop heeding this warning -- to your
> detriment?
>
> Beyond that, we are waiting for humans to understand the
> basis of all these other characteristics attributed to
> The Brain to be able to codify them in a way that can be taught.
> Yet, we can't seem to do it to children, reliably...
>
> I can teach an AI that fire burns -- it's just a relationship
> of already established facts in its knowledge base.  I can teach
> a child that fire burns.  The child will remember the *experience*
> of burning much differently than an AI (what do you do, delete a
> few NP junctions to make it "feel" the pain?  permanently toast
> some foils -- "scar tissue" -- so those associated abilities are
> permanently impaired?)
>
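
For what it's worth, here is roughly the sort of poker-odds code you
describe, sketched in Python. It's only my own illustration, not a claim
about how you'd write it: I assume Texas Hold'em, estimate the win
probability of a dealt starting hand against random opponents by Monte
Carlo sampling, count ties as half a win, and use a deliberately minimal
5-card evaluator. The card notation ("Ah", "Kh") and the function names
are just my shorthand.

import itertools, random
from collections import Counter

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]

def hand_rank(cards):
    # Rank a 5-card hand as a tuple; bigger tuples beat smaller ones.
    ranks = sorted((RANKS.index(c[0]) for c in cards), reverse=True)
    counts = Counter(ranks)
    groups = sorted(counts.items(), key=lambda kv: (kv[1], kv[0]), reverse=True)
    ordered = [r for r, _ in groups]
    flush = len({c[1] for c in cards}) == 1
    straight = len(counts) == 5 and ranks[0] - ranks[4] == 4
    if ranks == [12, 3, 2, 1, 0]:   # A-2-3-4-5 "wheel" straight
        straight, ranks = True, [3, 2, 1, 0, -1]
    if straight and flush: return (8, ranks)
    if groups[0][1] == 4:  return (7, ordered)
    if groups[0][1] == 3 and groups[1][1] == 2: return (6, ordered)
    if flush:              return (5, ranks)
    if straight:           return (4, ranks)
    if groups[0][1] == 3:  return (3, ordered)
    if groups[0][1] == 2 and groups[1][1] == 2: return (2, ordered)
    if groups[0][1] == 2:  return (1, ordered)
    return (0, ranks)

def best7(cards):
    # Best 5-card rank obtainable from 7 cards (2 hole + 5 board).
    return max(hand_rank(c) for c in itertools.combinations(cards, 5))

def win_probability(hole, opponents=1, trials=5000):
    # Monte Carlo estimate of P(win) for a dealt hand vs random opponents.
    # Ties are counted as half a win.
    wins = 0.0
    for _ in range(trials):
        deck = [c for c in DECK if c not in hole]
        random.shuffle(deck)
        board = deck[:5]
        others = [deck[5 + 2*i : 7 + 2*i] for i in range(opponents)]
        mine = best7(hole + board)
        best_other = max(best7(o + board) for o in others)
        if mine > best_other:
            wins += 1
        elif mine == best_other:
            wins += 0.5
    return wins / trials

# Example: ace-king suited against two random hands.
print(win_probability(["Ah", "Kh"], opponents=2))

A few thousand trials gives a usable estimate; raise trials if you want a
tighter number. What it can't do, of course, is tell you whether anyone
at the table is bluffing -- which I take to be your point about the
human side.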