From: Mild Shock
Newsgroups: sci.math
Subject: XAI is over and out (Re: Vectors are the new JSON)
Date: Fri, 10 Jan 2025 12:07:34 +0100

Hi,

Another example of total nonsense:

CfR: Vienna World Logic Day Lecture
Joao Marques-Silva on Trustable Explainable AI
14 Jan 2025, Online [WLD Event]
https://resources.illc.uva.nl/LogicList/newsitem.php?id=12030

The abstract is out of date. XAI was a problem a few
years ago, but it has nothing to do with ChatGPT,
because ChatGPT is not the kind of machine learning
that XAI is trying to fix.

The fuzzy logic in ChatGPT has nothing to do with
deep learning and latent parameters. ChatGPT throws
everything back to natural language and data, with
virtually no invented latent parameters. There are no
ontologies with top and bottom in the vectors. They
are quite flat attribute structures that capture not
only words, but also sentences and polysemy.

See also (a small sketch of the idea follows after
the P.S. below):

Sentence embedding
https://en.wikipedia.org/wiki/Sentence_embedding

This means that the academic world is completely
overwhelmed and now stares in mental shock, not
noticing that "traditions" like XAI are already
out of date.

Bye

P.S.: Here is the abstract; it is complete nonsense:

Abstract: Explainable artificial intelligence (XAI) is
a mainstay of trustworthy AI. Recent years have witnessed
massive efforts towards delivering some sort of XAI
solutions. Most of these efforts are based on non-symbolic
methods, and invariably will produce erroneous results.
As a result, even if the predictions of a machine learning
model could be trusted, the lack of reliable explanations
will also make those predictions unworthy of trust. This
talk provides a brief glimpse of the emerging field of
logic-based explainable AI, a rigorous alternative to the
still widely-used but extremely problematic non-symbolic
methods.

https://resources.illc.uva.nl/LogicList/newsitem.php?id=12030
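P.P.S.: A minimal sentence-embedding sketch, to make
the "flat attribute structures" point concrete. This
assumes the sentence-transformers Python package and
the all-MiniLM-L6-v2 model, which are my own choices
for illustration, not anything from the lecture:

# Sentences become flat fixed-length vectors; similarity
# is plain geometry (cosine), not an ontology with top
# and bottom elements.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The bank raised interest rates.",    # financial sense
    "We picnicked on the river bank.",    # geographic sense
    "The central bank tightened policy.",
]

# encode() returns one vector per input sentence
embeddings = model.encode(sentences)

# pairwise cosine similarities; the polysemy of "bank"
# shows up as distances, not as logical axioms
print(util.cos_sim(embeddings, embeddings))

Expect the two monetary sentences to score closer to
each other than either does to the river sentence.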
Mild Shock wrote:
> Hi,
>
> I have switched to using the term "Fuzzy Logic", since
> Probability and/or Bayes is surely misleading.
> "Fuzzy Logic" is quite old:
>
> In 1965, in his essay Fuzzy Sets - which had been
> cited more than 70,000 times by mid-2017 - he first
> presented his concept of a theory of fuzzy sets, which
> became the nucleus and basis of the rapidly developing
> fuzzy logic (content: The Logic of Uncertainty)
> https://de.wikipedia.org/wiki/Lotfi_Zadeh#Leistungen
>
> This is also quite interesting, but PostgreSQL is not
> the only database management system that provides
> such retrieval extensions:
>
> Vectors are the new JSON
> https://www.postgresql.eu/events/pgconfeu2023/sessions/session/4592/slides/435/pgconfeu2023_vectors.pdf
>
> Bye
>
> Mild Shock wrote:
>> Hi,
>>
>> Prologers with their pipe dream of Ontologies
>> with Axioms are most hurt by LLMs that work
>> more on the basis of Fuzzy Logic.
>>
>> Even good old "hardmath" is not immune to
>> this coping mechanism:
>>
>> "I've cast one of my rare votes-to-delete. It is
>> a self-answer to the OP's off-topic "question".
>> Rather than improve the original post, the effort
>> has been made to "promote" some so-called RETRO
>> Project by linking YouTube and arxiv.org URLs.
>> Not worth retaining IMHO."
>> -- hardmath
>>
>> https://math.meta.stackexchange.com/a/38051/1482376
>>
>> Bye
>>
>> Mild Shock wrote:
>>> Hi,
>>>
>>> For more details on RAG, see the RETRO Project (*) at t=12:01:
>>>
>>> What's wrong with LLMs and what we should be building instead
>>> Tom Dietterich - 10.07.2023
>>> https://youtu.be/cEyHsMzbZBs
>>>
>>> So it is not a very new technique that is now appearing
>>> in generative AIs on the market as well. Some chat bots
>>> are even able to show the used source documents quite
>>> clearly in their answers. The MSE end user can still
>>> edit a citation by hand to conform more to the SEN
>>> format, if that is the issue.
>>>
>>> The MSE end user can also explicitly ask a chat bot
>>> for sources, which he will get most of the time. Or
>>> he can give a chat bot a source for review and
>>> discussion; this works as well. So there is no longer
>>> this "remoteness" of an LLM from the actual virtual
>>> world of documents. It is more that they now inhabit
>>> that virtual world and interact with it.
>>>
>>> Another issue I see is that in certain countries and
>>> educational institutions, it may be the case that
>>> working with a chat bot is something the students
>>> learn, yet they are not officially allowed to use it
>>> on MSE, because MSE policies are based on outdated
>>> views about generative AI.
>>>
>>> See also:
>>>
>>> (*) RETRO Project:
>>>
>>> Improving language models by retrieving from trillions of tokens
>>> Sebastian Borgeaud et al. - 7 Feb 2022
>>> https://arxiv.org/abs/2112.04426
>>>
>>> Bye
>>>
>>> Mild Shock wrote:
>>>> Hi,
>>>>
>>>> Now you can listen to bird songs for a minute:
>>>>
>>>> 2016 Dana Scott gave a talk honoring Raymond Smullyan
>>>> https://www.youtube.com/watch?v=omz6SbUpFQ8
>>>>
>>>> A little quiz:
>>>>
>>>> Q: And also on the Curry-Howard Isomorphism: is
>>>> there a nice way to put it in bird-forest form like To
>>>> Mock a Mocking Bird? This book made everything so
>>>> simple and intuitive for me.
>>>>
>>>> A: Hardly, because xx has no simple type.
>>>>
>>>> Right?
>>>>
>>>> Bye
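[Note on the quiz above: the answer "xx has no simple
type" can be checked directly. A minimal sketch, standard
material from simply typed lambda calculus: to type the
self-application x x, the same variable x needs both an
argument type and an arrow type,

    x : a    and    x : a -> b,

so a = a -> b, which no finite simple type satisfies,
since a would have to be a proper subterm of itself.
Hence M = \x. x x, Smullyan's Mockingbird, is untypable,
and Curry-Howard assigns it no proposition.]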