Path: ...!news.nobody.at!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock
Newsgroups: comp.lang.prolog
Subject: 2nd Cognitive Turn ~~> no Bayesian Brain (Re: Prolegomena by Rappaport)
Date: Sat, 3 Aug 2024 22:50:14 +0200
Message-ID:
References: <1b7ce2bd-722b-4c2e-b853-12fc2232752bn@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 3 Aug 2024 20:50:14 -0000 (UTC)
Injection-Info: solani.org; logging-data="802586"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.18.2
Cancel-Lock: sha1:GV3IFTX7x4hQi20B4/EV9KEUAls=
In-Reply-To:
X-User-ID: eJwFwYEBwDAEBMCV8E/ScRD2H6F3jtDow/Cgry9z9t1UxICt707BtmQp3kgbs1OipHbOR7h+I+ktrch68wNlTxXz
Bytes: 5621
Lines: 108

Hi,

Yes, maybe we are just before a kind of 2nd Cognitive
Turn. The first Cognitive Turn is characterized as:

> The cognitive revolution was an intellectual
> movement that began in the 1950s as an
> interdisciplinary study of the mind and its
> processes, from which emerged a new
> field known as cognitive science.
https://en.wikipedia.org/wiki/Cognitive_revolution

The current mainstream belief is that chat bots and
the progress in AI rest mainly on "Machine Learning",
whereas most of the progress actually comes from
"Deep Learning". But I am also sceptical about
"Deep Learning"; in the end a frequentist is again
lurking.

In the worst case, the "no Bayesian Brain" shock will
come with a technological singularity, in which the
current short inferencing of LLMs is enhanced by some
long inferencing, like here:

> A week ago, I posted that I was cooking a logical
> reasoning benchmark as a side project. Now it's
> finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘, designed
> for evaluating LLMs with Logic Puzzles.
https://x.com/billyuchenlin/status/1814254565128335705

This would make it possible for LLMs not merely to
excel at such puzzles (a toy Prolog rendition of one
is sketched below), but to advance to more elaborate
scientific models that can somehow overcome fallacies
such as:

- Kochen-Specker paradox: some fallacies caused by averaging?
- Gluts and gaps in Bayesian reasoning: some fallacies caused by consistency assumptions?
- What else?

So on quiet paws AI might become the new overlord of
science, which we will happily depend on.
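Since this is comp.lang.prolog: here is a minimal sketch of the
kind of constraint puzzle ZebraLogic poses. The three-house
instance, its names and its clues are my own toy invention, not
taken from the benchmark; it runs under SWI-Prolog:

    :- use_module(library(lists)).

    % Hs = [h(Nationality,Pet,Drink), ...], three houses left to right.
    zebra(Hs) :-
        Hs = [h(N1,P1,D1), h(N2,P2,D2), h(N3,P3,D3)],
        permutation([brit,swede,dane], [N1,N2,N3]),
        permutation([dog,cat,bird],    [P1,P2,P3]),
        permutation([tea,milk,water],  [D1,D2,D3]),
        member(h(brit,_,milk), Hs),            % clue 1: the Brit drinks milk
        member(h(swede,dog,_), Hs),            % clue 2: the Swede keeps a dog
        Hs = [h(_,_,water)|_],                 % clue 3: house 1 drinks water
        next_to(h(dane,_,_), h(_,cat,_), Hs),  % clue 4: the Dane is next to the cat
        member(h(_,bird,tea), Hs),             % clue 5: the bird owner drinks tea
        Hs = [_, h(_,cat,_), _].               % clue 6: the cat is in the middle

    next_to(A, B, Ls) :- append(_, [A,B|_], Ls).
    next_to(A, B, Ls) :- append(_, [B,A|_], Ls).

    ?- zebra(Hs).
    Hs = [h(swede,dog,water), h(brit,cat,milk), h(dane,bird,tea)].

A few lines of generate-and-test settle it exactly; the question
the benchmark raises is what an LLM's "short inferencing" does
with the same clues.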
Jeff Barnett schrieb:
> You are surprised; I am saddened. Not only have we lost contact
> with the primary studies of knowledge and reasoning, we have also
> lost contact with the studies of methods and motivation.
> Psychology was the basic home room of Allen Newell and many other
> AI all stars. What is now called AI, I think incorrectly, is just
> ways of exercising large amounts of very cheap computer power to
> calculate approximations to correlations and other statistical
> approximations. The problem with all of this, in my mind, is that
> we learn nothing about the capturing of knowledge, what it is, or
> how it is used. Both logic and heuristic reasoning are needed, and
> we certainly believe that intelligence is not measured by its
> ability to discover "truth" or by its infallibly consistent
> results. Newton's thought process was pure genius but known to
> produce fallacious results once you know what Einstein knew at a
> later time.
>
> I remember reading Ted Shortliffe's dissertation about MYCIN (an
> early AI medical consultant for diagnosing blood-borne infectious
> diseases) where I learned about one use of the term "staff
> disease", or just "staff" for short. In patient care areas there
> always seems to be an in-house infection that changes over time.
> It changes because sick patients brought into the area contribute
> whatever is making them sick in the first place. In the second
> place, there are rapid mutations driven by all sorts of factors
> present in hospital-like environments. The result is that the
> local staff is varying, literally, minute by minute. In a day's
> time, the samples you took are no longer valid, i.e., their
> day-old cultures may be meaningless. The underlying mathematical
> problem is that probability theory doesn't really have the tools
> to make predictions when the basic probabilities are changing
> faster than observations can be turned into inferences.
>
> Why do I mention the problems of unstable probabilities here?
> Because new AI uses fancy ideas of correlation to simulate
> probabilistic inference, e.g., Bayesian inference. Since actual
> probabilities may not exist in any meaningful way, the
> simulations are often based on air.
>
> A hallmark of excellent human reasoning is the ability to explain
> how we arrived at our conclusions. We are also able to repair our
> inner models when we are in error, if we can understand why. The
> abilities to explain and repair are fundamental to excellence of
> thought processes. By the way, I'm not claiming that all humans,
> or I, have these reflective abilities. Those who do are few and
> far between. However, any AI that doesn't have some of these
> capabilities isn't very interesting.
>
> For more on reasons why logic and truth are only part of the
> human ability to reasonably reason, see
> https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html
> --
> Jeff Barnett
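To make the unstable-probabilities point above concrete, here is a
small SWI-Prolog toy of my own (the drifting rate, the reporting
schedule and all numbers are invented for illustration): a
conjugate beta-Bernoulli learner averages over its whole history,
so its posterior mean lags a rate that keeps moving.

    :- use_module(library(random)).

    % true_rate(+T, -P): the underlying Bernoulli rate drifts with time.
    true_rate(T, P) :- P is 0.2 + 0.6 * sin(T / 20.0) ** 2.

    % observe(+T, -Outcome): draw a 0/1 observation at the current rate.
    observe(T, 1) :- true_rate(T, P), random(X), X < P, !.
    observe(_, 0).

    % update(+Outcome, +A-B, -A1-B1): conjugate beta update of the counts.
    update(1, A-B, A1-B) :- A1 is A + 1.
    update(0, A-B, A-B1) :- B1 is B + 1.

    % run(+Steps): report posterior mean vs. true rate every 25 steps.
    run(Steps) :- run_(0, Steps, 1-1).      % Beta(1,1) uniform prior

    run_(N, N, _) :- !.
    run_(T, N, AB) :-
        observe(T, O),
        update(O, AB, A1-B1),
        (   0 =:= T mod 25
        ->  Mean is A1 / (A1 + B1),
            true_rate(T, P),
            format("t=~d  posterior mean ~3f  true rate ~3f~n", [T, Mean, P])
        ;   true
        ),
        T1 is T + 1,
        run_(T1, N, A1-B1).

    ?- run(200).

However many observations arrive, the posterior mean settles
toward the long-run average of the drifting rate rather than its
current value, which is one concrete sense in which such
simulations are "based on air".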