From: Don Y
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sun, 19 May 2024 23:15:25 -0700

On 5/19/2024 10:12 PM, Edward Rawde wrote:
> "Don Y" wrote in message news:v2eli1$3qus1$2@dont-email.me...
>> On 5/19/2024 8:51 PM, Edward Rawde wrote:
>>> It is my view that you don't need to know how a brain works to be
>>> able to make a brain.
>>
>> That's a fallacy. We can't make a *plant* let alone a brain.
>
> But we can make a system which behaves like a brain. We call it AI.

No. It only "reasons" like a brain. If that is all your brain was/did,
you would be an automaton.

I can write a piece of code that can tell you your odds of winning any
given DEALT poker hand (with some number of players and a fresh deck).
That's more than a human brain can muster, reliably.

But, I can't factor in the behavior of other players: "Is he bluffing?"
"Will he fold prematurely?" etc. These are HUMAN issues that the
software (AI) can't RELIABLY accommodate.
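Just to be concrete about the odds half, the sketch below is the sort
of thing I mean -- straight Monte Carlo over a 5-card showdown, no
draw, no betting. The card encoding and function names are made up for
illustration:

import random
from collections import Counter

# Cards are (rank, suit) with ranks 2..14 (11=J, 12=Q, 13=K, 14=A).
DECK = [(r, s) for r in range(2, 15) for s in "shdc"]

def hand_rank(hand):
    """Return a value that sorts higher for stronger 5-card hands."""
    ranks = sorted((r for r, _ in hand), reverse=True)
    groups = sorted(Counter(ranks).items(),
                    key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = [r for r, c in groups for _ in range(c)]  # pairs/trips first, then kickers
    is_flush = len({s for _, s in hand}) == 1
    distinct = sorted(set(ranks))
    wheel = distinct == [2, 3, 4, 5, 14]                # the A-2-3-4-5 straight
    is_straight = len(distinct) == 5 and (wheel or distinct[4] - distinct[0] == 4)
    high = [5] if wheel else [ranks[0]]
    if is_straight and is_flush:
        return (8, high)                                # straight flush
    if groups[0][1] == 4:
        return (7, ordered)                             # four of a kind
    if groups[0][1] == 3 and groups[1][1] == 2:
        return (6, ordered)                             # full house
    if is_flush:
        return (5, ranks)
    if is_straight:
        return (4, high)
    if groups[0][1] == 3:
        return (3, ordered)                             # three of a kind
    if groups[0][1] == 2 and groups[1][1] == 2:
        return (2, ordered)                             # two pair
    if groups[0][1] == 2:
        return (1, ordered)                             # one pair
    return (0, ranks)                                   # high card

def win_probability(my_hand, n_opponents, trials=20000):
    """Monte Carlo estimate of P(this dealt 5-card hand beats every
    opponent's 5 random cards from the rest of a fresh deck).
    Ties count as losses, so the estimate is slightly pessimistic."""
    mine = hand_rank(my_hand)
    rest = [c for c in DECK if c not in my_hand]
    wins = 0
    for _ in range(trials):
        random.shuffle(rest)
        deals = (rest[5 * i:5 * i + 5] for i in range(n_opponents))
        if all(mine > hand_rank(d) for d in deals):
            wins += 1
    return wins / trials

if __name__ == "__main__":
    # Two pair, aces and eights, against three opponents: pure odds,
    # zero psychology.
    hand = [(14, "s"), (14, "h"), (8, "d"), (8, "c"), (2, "s")]
    print(win_probability(hand, n_opponents=3))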
Do AIs get depressed/happy? Experience joy/sadness? Revelation?
Frustration? Addiction? Despair? Pain? Shame/pride? Fear? These all
factor into how humans make decisions. E.g., if you are afraid that
your adversary is going to harm you (even if that fear is unfounded),
then you will react AS IF that were more of a certainty. A human might
dramatically alter his behavior (decision making process) if there is
an emotional stake involved. Does the AI know the human's MIND well
enough to estimate the likelihood and effect of any such influence?
Yes, Mr Spock.

I repeat, teaching a brain to "reason" is trivial. Likewise to
recognize patterns. Done. Now you just need to expose it to as many
VERIFIABLE facts as you can gather (*who* verifies them?) and let it
do the forward chaining exercises. Then, you need to audit its
conclusions and wonder why it has hallucinated (as it won't be able to
TELL you).

Will you have a committee examine every conclusion from the AI to
determine (within their personal limitations) whether it is a
hallucination or some yet-to-be-discovered truth? Imagine how SLOW the
effective rate of the AI becomes when you have to ensure it is
CORRECT! Given how quickly an AI *can* generate outputs, this turns
mankind into a "fact checking" organization; what value is a reference
if it can't be trusted to be accurate?

What if its conclusions require massive amounts of resources to
validate? What if there are timeliness issues involved: "Russia is
preparing to launch a nuclear first strike!"? Even if you can prove
this instance to be inaccurate, when will you stop heeding such
warnings -- to your detriment?

Beyond that, we are waiting for humans to understand the basis of all
these other characteristics attributed to The Brain in order to codify
them in a way that can be taught. Yet, we can't seem to do it to
children, reliably...

I can teach an AI that fire burns -- it's just a relationship of
already established facts in its knowledge base. I can teach a child
that fire burns. The child will remember the *experience* of burning
much differently than an AI (what do you do, delete a few NP junctions
to make it "feel" the pain? permanently toast some foils -- "scar
tissue" -- so those associated abilities are permanently impaired?)
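To be clear about what "just a relationship of already established
facts" amounts to, a toy forward chainer is about this much code (the
fact strings and rules here are made up for illustration):

# Toy knowledge base: ground facts plus if-all-premises-then-conclusion rules.
facts = {"flame is fire", "fire is hot"}
rules = [
    ({"fire is hot"}, "fire can burn skin"),
    ({"flame is fire", "fire can burn skin"}, "flame can burn skin"),
]

def forward_chain(facts, rules):
    """Fire any rule whose premises are all known; repeat until nothing new."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(forward_chain(facts, rules))
# Derives "flame can burn skin" -- but nothing in that set records what a
# burn *feels* like, and nothing flags a bogus "fact" slipped into the base.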