From: Don Y
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Fri, 17 May 2024 21:30:06 -0700

On 5/17/2024 7:11 PM, Edward Rawde wrote:
> "Don Y" wrote in message news:v28rap$2e811$3@dont-email.me...
>> On 5/17/2024 1:43 PM, Edward Rawde wrote:
>>> Not sure how he managed to say master debaters that many times while
>>> seemingly keeping a straight face but it reminds me of this:
>>> https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
>>>
>>> One thing which bothers me about AI is that if it's like us but way more
>>> intelligent than us then...
>>
>> The 'I' in AI doesn't refer to the same sense of "intelligence" that
>> you are imagining.
>
> Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
involved in the technology.

If there is intelligence, then there should be *reason*, right?
If there is reason, then I should be able to inquire as to what,
specifically, those reasons were for any "decision"/choice that is made.
[Hint: you can't get such an answer. Just a set of coefficients that
resolve to a particular "choice".]

Additionally, these "baseless" decisions can be fed back to the AI to
enhance its (apparent) abilities. Who acts as gatekeeper of that
"knowledge"? Is it *really* knowledge?

I can recall hearing folks comment about friends who were dying of
cancer when I was a child. They would say things like: "Once the *air*
gets at it, they're dead!" -- referring to once they are opened up by a
surgeon (hence the "air getting at it").

Of course, this is nonsense. The cancerous cells didn't magically react
to the "air". Rather, the patient was sick enough to warrant a drastic
surgical intervention and, thus, more likely to *die* (than someone
else who also has UNDIAGNOSED cancer).

> Have a look at this and then tell me where you think AI/AGI will be in say
> 10 years.
> https://www.youtube.com/watch?v=YZjmZFDx-pA

"10 years" and "AI" are almost a hilarious cliche; it's ALWAYS been
"10 years from now" (since my classes in the 70's). Until it was *here*
(or appeared to be).

Where it will be in 10 years is impossible to predict. But, as the
genie is out of the bottle, there is nothing to stop others from
using/abusing it in ways that we might not consider palatable! (Do you
really think an adversary will follow YOUR rules for its use -- if they
see a way to achieve gains?)

The risk from AI is that it makes decisions without being able to
articulate a "reason" in a verifiable form. And then marches on --
without our ever "blessing" its conclusion(s). There is no
understanding; no REASONING; it's all just pattern
observation/matching.

I use AIs to anticipate the needs of occupants (of the house, a
business, etc.)
based on observations of their past behaviors.

SWMBO sleeps at night. The AI doesn't know that she is "sleeping" or
even what "sleeping" is! It just notices that she enters the bedroom
each night and doesn't leave it until some time the next morning. This
is such a repeated behavior that the AI *expects* her to enter the
bedroom each night (at roughly the same hour).

Often, she will awaken in the middle of the night for a bathroom break,
to clear her sinuses, or to get up and read for a while. If she takes a
bathroom break, the AI will notice that she invariably turns on her
HiFi afterwards (to have some music to listen to while drifting BACK to
sleep). If she reads (for some indeterminate time), the AI will notice
that she turns on her HiFi just before turning off the light by her
bedside.

It doesn't know why she is headed into the bathroom. Or why the bedside
light comes on. Or why she is turning on the HiFi. But, HER *observed*
behavior fits a repeatable pattern that allows the AI to turn the HiFi
on *for* her -- when she comes out of the bathroom or AFTER she has
turned off her bedside light. (A rough sketch of this sort of pattern
trigger is at the end of this post.)

Due to the manner in which I implemented the AI, *I* can see the
conditions that are triggering the AI's behavior and correct erroneous
conclusions (maybe the AI hears a neighbor's truck passing by the house
as he heads off to work in the wee hours of the morning and correlates
THAT with her desire to listen to music! "'Music'? What's that??")

But, as you get more subtleties in the AI's input, these sorts of
causal actions are less obvious. So, you have to think hard about what
you provide *to* the AI for it to draw its conclusions. OTOH, if what
you provide is limited by the relationships that YOU can imagine, then
the AI is limited to YOUR imagination! Maybe the color of your car DOES
relate to the chance of it being in an accident!

An AI looks at tens of thousands of mammograms and "somehow" comes up
with a good correlation between image and breast cancer incidence.
*It* then starts recommending care. What does the oncologist do? The
AI is telling him there is a good indication of cancer (or a likelihood
of it developing). Does *he* treat the cancer? (the AI can't practice
medicine) What if he *doesn't*? Will he face a lawsuit when/if the
patient later develops cancer and has a bad outcome -- one that might
have been preventable if the oncologist had heeded the AI's advice?
("You CHARGED me for the AI consult; and then you IGNORED its
recommendations??")

OTOH, what if the AI was "hallucinating" and saw something that
*seemed* to correlate well -- but that a human examiner would know is
NOT related to the Dx? (e.g., maybe the AI noticed some characteristic
of the WRITTEN label on the film and correlated that, by CHANCE, with
the Dx -- a human would KNOW there was no likely causal relationship!)
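
For the curious, the HiFi example above mechanically boils down to
bookkeeping along these lines. (A toy sketch only, NOT my actual
implementation; the event names, time window, and thresholds are
invented purely for illustration.)

from collections import defaultdict

# Illustrative tuning values; a real system would learn or adjust these.
WINDOW = 300        # seconds within which "B follows A" counts as a pattern
CONFIDENCE = 0.8    # act only when B followed A at least 80% of the time
MIN_SAMPLES = 10    # ...and only after enough observations of A

count_a = defaultdict(int)     # how many times each event was observed
count_ab = defaultdict(int)    # how many times event B followed event A

def learn(log):
    """log: chronological list of (timestamp_seconds, event_name) tuples."""
    for i, (t_a, a) in enumerate(log):
        count_a[a] += 1
        followers = set()
        for t_b, b in log[i + 1:]:
            if t_b - t_a > WINDOW:
                break
            followers.add(b)          # count each follower once per occurrence of A
        for b in followers:
            count_ab[(a, b)] += 1

def predictions(a):
    """Events worth triggering automatically when event 'a' is observed."""
    out = []
    for (x, b), n in count_ab.items():
        if x == a and count_a[a] >= MIN_SAMPLES and n / count_a[a] >= CONFIDENCE:
            out.append(b)
    return out

# After enough nights of observation (hypothetical event names):
#   learn(event_log)
#   predictions("bathroom_light_off")   ->   ["hifi_on"]

Note there is no "reason" anywhere in that table -- just counts. Which
is exactly the point: the same counts would accumulate for the
neighbor's truck, given enough coincidences.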