From: Don Y
Newsgroups: sci.electronics.design
Subject: Re: smart people doing stupid things
Date: Sat, 18 May 2024 22:43:11 -0700
Organization: A noiseless patient Spider

On 5/18/2024 8:15 PM, Edward Rawde wrote:
> "Don Y" wrote in message news:v2bpm2$36hos$3@dont-email.me...
>> On 5/18/2024 7:34 PM, Edward Rawde wrote:
>
> So is it ok if I take a step back here and ask whether you think that
> AI/AGI has some inherent limitation which means it will never match
> human intelligence?
> Or do you think that AI/AGI will, at some future time, match human
> intelligence?

That depends on the qualities and capabilities that you lump into
"HUMAN intelligence".  Curiosity?  Creativity?  Imagination?
One can be exceedingly intelligent and of no more "value" than an
encyclopedia!

I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of
competence that they (humans) can, presently.  It's just a question of
resources thrown at the AI and the time available for it to "respond".

But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.

Could (would?) an AI decide to explore space?  Or, the ocean depths?
Or, the rain forest?
Or, would its idea of exploration merely be a visit to another
net-neighbor??

Would (could) it consider human needs as important?  (see previous post)
How would it be motivated?

Would it attempt to think beyond its limitations (something humans
always do)?  Or, would those be immutable in its understanding of the
world?

> I don't mean to suggest that AI will become human, or will need to
> become human.  It will more likely have its own agenda.

Where will that agenda come from?  Will it inherit it from watching
B-grade sci-fi movies?  "Let there be light!"