Path: ...!news.nobody.at!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock
Newsgroups: sci.logic
Subject: bullshit bullshit bullshit (Re: Ok I made a joke, sorry)
Date: Sun, 4 Aug 2024 00:16:13 +0200
Message-ID:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 3 Aug 2024 22:16:13 -0000 (UTC)
Injection-Info: solani.org; logging-data="798692"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.18.2
Cancel-Lock: sha1:1W+g1bGgshvksMwGAMRmQXTQ0v4=
X-User-ID: eJwFwQkBwDAIA0BLpOWVk1HwL2F3dh3eoW6utra9KFMeLOLRn6DLpGXQslIT3zDMM+ZokTKJk4QiSy8lflJjFPM=
In-Reply-To:
Bytes: 8893
Lines: 194

David Woodruff Smith writes:
> And "cognitive science" has recently pursued
> the relation of intentional mental activities
> to neural processes in the brain.

I call this bullshit. He confuses cognitive
science with some sort of neuroscience and/or
connectionist approaches. A broader working
definition of cognitive science is, for example:

> Cognitive science is an interdisciplinary
> science that deals with the processing of
> information in the context of perception,
> thinking and decision-making processes,
> both in humans and in animals or machines.

You see how much philosophy stands behind it.
David Woodruff Smith published the paper in 2003?
I don't think there is any excuse for his nonsense
definition, especially if one writes about pure
form. This is so idiotic.

Mild Shock schrieb:
>
> BTW: Friedrich Ueberweg is quite good
> and funny to browse; he reports relatively
> unfiltered what we would nowadays call
>
> forms of "rational behaviour", so it's a little
> potpourri, except for his sections where he
> explains some schemas, like the Aristotelian
>
> figures, which are more pure logic of the form.
> And bang, you get a guy talking pages and
> pages about pure and form:
>
> "Pure" logic, ontology, and phenomenology
> David Woodruff Smith
> https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm
>
> But the above is a species of philosophy
> that is endangered now. Its predators are
> abstractions on the computer like the lambda
>
> calculus and the Curry-Howard isomorphism. The
> revue has become an irrelevant cabaret that only
> dead people would be interested in, like
>
> my father, grandfather etc...
>
> Mild Shock schrieb:
>>
>> My impression is that Cognitive Science was
>> never Bayesian Brain, so I guess I made a joke.
>>
>> The time scale, its start in the 1950s, and
>> the fact that it is still a relatively unknown
>> subject, would explain:
>>
>> - why my father or mother never tried to
>>    educate me towards cognitive science.
>>    It could be that they are totally blank
>>    in this respect?
>>
>> - why my grandfathers or grandmothers never
>>    tried to educate me towards cognitive
>>    science. Ditto, it could be that they are
>>    totally blank in this respect?
>>
>> - it could be that there are rare cases where
>>    some philosophers already had a glimpse of
>>    cognitive science. But when I open, for
>>    example, this booklet:
>>
>> System der Logik
>> Friedrich Ueberweg
>> Bonn - 1868
>> https://philpapers.org/rec/UEBSDL
>>
>>    one can feel the dry swimming that is reported
>>    for several millennia. What happened in the
>>    1950s was the possibility of computer modelling.
>>
>> Mild Shock schrieb:
>>> Hi,
>>>
>>> Yes, maybe we are just before a kind
>>> of 2nd Cognitive Turn. The first Cognitive
>>> Turn is characterized as:
>>>
>>>> The cognitive revolution was an intellectual movement that began in
>>>> the 1950s as an interdisciplinary study of the mind and its
>>>> processes, from which emerged a new field known as cognitive science.
>>> https://en.wikipedia.org/wiki/Cognitive_revolution
>>>
>>> The current mainstream belief is that
>>> chat bots and the progress in AI are mainly
>>> based on "Machine Learning", whereas
>>>
>>> most of the progress is rather based on
>>> "Deep Learning". But I am also sceptical
>>> about "Deep Learning"; in the end a frequentist
>>>
>>> is again lurking. In the worst case the
>>> no-Bayesian-Brain shock will come with a
>>> technological singularity, in that the current
>>>
>>> short inferencing of LLMs is enhanced by
>>> some long inferencing, like here:
>>>
>>> A week ago, I posted that I was cooking a
>>> logical reasoning benchmark as a side project.
>>> Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
>>> designed for evaluating LLMs with Logic Puzzles.
>>> https://x.com/billyuchenlin/status/1814254565128335705
>>>
>>> making it possible for LLMs not merely to excel
>>> at such puzzles, but to advance to more
>>> elaborate scientific models that can somehow
>>>
>>> overcome fallacies such as:
>>> - Kochen-Specker Paradox, some fallacies
>>>    caused by averaging?
>>> - Gluts and Gaps in Bayesian Reasoning,
>>>    some fallacies caused by consistency assumptions?
>>> - What else?
>>>
>>> So on quiet paws AI might become the new overlord
>>> of science, which we will happily depend on.
>>>
>>> Jeff Barnett schrieb:
>>>> You are surprised; I am saddened. Not only have we lost contact with
>>>> the primary studies of knowledge and reasoning, we have also lost
>>>> contact with the studies of methods and motivation. Psychology was
>>>> the basic home room of Alan Newell and many other AI all-stars. What
>>>> is now called AI, I think incorrectly, is just ways of exercising
>>>> large amounts of very cheap computer power to calculate approximations
>>>> to correlations and other statistical approximations.
>>>>
>>>> The problem with all of this, in my mind, is that we learn nothing
>>>> about the capturing of knowledge, what it is, or how it is used.
>>>> Both logic and heuristic reasoning are needed, and we certainly
>>>> believe that intelligence is not measured by its ability to discover
>>>> "truth" or by its infallibly consistent results. Newton's thought
>>>> process was pure genius but is known to produce fallacious results
>>>> when you know what Einstein knew at a later time.
>>>>
>>>> I remember reading Ted Shortliffe's dissertation about MYCIN (an
>>>> early AI medical consultant for diagnosing blood-borne infectious
>>>> diseases), where I learned about one use of the term "staph disease",
>>>> or just "staph" for short. In patient care areas there always seems
>>>> to be an in-house infection that changes over time. It changes
>>>> because sick patients brought into the area contribute whatever is
>>>> making them sick in the first place. In the second place, there are
>>>> rapid mutations driven by all sorts of factors present in
>>>> hospital-like environments. The result is that the local staph is
>>>> varying, literally, minute by minute. In a day's time, the samples
>>>> you took are no longer valid, i.e., their day-old cultures may be
>>>> meaningless.
>>>> The underlying mathematical problem is that probability
>>>> theory doesn't really have the tools to make predictions when the
>>>> basic probabilities are changing faster than observations can be
>>>> turned into inferences.
>>>>
>>>> Why do I mention the problems of unstable probabilities here?
>>>> Because new AI uses fancy ideas of correlation to simulate
>>>> probabilistic inference, e.g., Bayesian inference. Since actual
>>>> probabilities may not exist in any meaningful way, the simulations
>>>> are often based on air.
>>>>
>>>> A hallmark of excellent human reasoning is the ability to explain
>>>> how we arrived at our conclusions. We are also able to repair our

========== REMAINDER OF ARTICLE TRUNCATED ==========
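
To make Jeff Barnett's point about unstable probabilities a bit more
concrete, here is a small toy sketch of my own in Python. It has
nothing to do with MYCIN or with Shortliffe's actual methods; the
daily drift and the culture counts are made up. A textbook
Beta-Bernoulli update assumes one fixed rate, so when the rate drifts
from day to day its posterior mean is an average over stale days; a
crude forgetting factor tracks the drift better, but that is already
a step outside vanilla conjugate updating:

# Toy sketch (not from MYCIN): a textbook Beta-Bernoulli posterior
# tracking an infection rate that drifts every "day".  The conjugate
# update assumes one fixed rate, so its estimate is an average over
# stale days; a crude forgetting factor that discounts old counts
# stays closer to the moving target.

import random

random.seed(1)

a, b = 1.0, 1.0        # Beta(1,1) prior, classic conjugate update
fa, fb = 1.0, 1.0      # same prior, but with exponential forgetting
decay = 0.7            # how much of yesterday's evidence survives

p = 0.10               # true (drifting) probability of a positive culture
for day in range(1, 11):
    p = min(0.95, p + 0.08)            # the in-house rate drifts upward
    fa = 1.0 + (fa - 1.0) * decay      # discount old evidence ...
    fb = 1.0 + (fb - 1.0) * decay      # ... before today's cultures
    for _ in range(20):                # 20 cultures per day
        x = 1 if random.random() < p else 0
        a, b = a + x, b + (1 - x)
        fa, fb = fa + x, fb + (1 - x)
    print(f"day {day:2d}  true p = {p:.2f}  "
          f"conjugate mean = {a / (a + b):.2f}  "
          f"forgetting mean = {fa / (fa + fb):.2f}")

The point is not the particular fix, but that the textbook machinery
quietly assumes exactly what Barnett says a hospital ward does not
offer: a probability that sits still long enough to be estimated.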
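
And regarding the "Gluts and Gaps in Bayesian Reasoning" item quoted
further above: one way to read the "gap" fallacy is a hypothesis
space that silently leaves out the truth, while conditioning still
hands out near-certainty. Again only a toy sketch of mine, assuming a
simple coin-flip model; it says nothing about the Kochen-Specker
Paradox or about paraconsistent treatments of gluts:

# Toy sketch of a "gap": the hypothesis space only contains a fair
# coin (p = 0.5) and a heavily loaded one (p = 0.9), but the data
# really come from p = 0.7.  Conditioning is forced to split all of
# its belief between two wrong hypotheses, and in a typical run it
# ends up nearly certain of the less wrong one (the fair coin),
# while the truth has probability zero by construction.

import random

random.seed(2)

hypotheses = {"fair p=0.5": 0.5, "loaded p=0.9": 0.9}
posterior = {h: 0.5 for h in hypotheses}       # uniform prior over the gappy space

true_p = 0.7                                   # not in the hypothesis space at all
for n in range(1, 1001):
    x = 1 if random.random() < true_p else 0
    for h, q in hypotheses.items():
        posterior[h] *= q if x else (1.0 - q)  # Bayes: multiply by the likelihood
    total = sum(posterior.values())
    posterior = {h: w / total for h, w in posterior.items()}   # renormalize
    if n % 250 == 0:
        report = ", ".join(f"P({h}) = {w:.3f}" for h, w in posterior.items())
        print(f"after {n:4d} flips: {report}")

The posterior is perfectly coherent and still wrong about everything
that matters; the consistency assumption sits in the hypothesis
space, not in the data.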