Path: ...!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: John Sowa is close with RNT (Re: Traditions die: Another one bites
 the Dust)
Date: Fri, 10 Jan 2025 22:44:15 +0100
Message-ID: <vls4be$2feqe$2@solani.org>
References: <vls3sb$2fejb$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 10 Jan 2025 21:44:14 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="2603854"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:H/X+Dh6sVb0/vGVkfNbvNvRwDwQ=
X-User-ID: eJwFwYEBwDAEBMCVKB7jhOT3H6F3YVBsOgIeDMoI4dvUanu5teKf1xVJW21KYfjqHkYnbb7cO/pOyxDn4AdN/RX7
In-Reply-To: <vls3sb$2fejb$1@solani.org>
Bytes: 3175
Lines: 60

Hi,

Interestingly, with RNT John Sowa comes close to
how ChatGPT works. In his LLM-bashing videos, John
Sowa repeatedly showed brain models in his slides
that come from Sydney Lamb:

Relational Network Theory (RNT), also known as
Neurocognitive Linguistics (NCL) and formerly as
Stratificational Linguistics or Cognitive-Stratificational
Linguistics, is a connectionist theoretical framework in
linguistics primarily developed by Sydney Lamb which
aims to integrate theoretical linguistics with neuroanatomy.
https://en.wikipedia.org/wiki/Relational_Network_Theory

You can ask ChatGPT, and it will tell you what
parallels it sees between LLMs and RNT.

Bye

P.S.: Here is what ChatGPT tells me about LLMs and
RNT as a summary; the full answer was much longer:

While there are shared aspects, particularly the
emphasis on relational dynamics, ChatGPT models are
not explicitly designed with RNT principles. Instead,
they indirectly align with RNT through their ability
to encode and use relationships learned from data.
However, GPT’s probabilistic approach and reliance
on large-scale data contrast with the more structured
and theory-driven nature of RNT.
https://chatgpt.com/share/67818fb7-a788-8013-9cfe-93b3972c8114
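
As a toy illustration of "relationships learned
from data" (my own sketch in Python, not from the
chat): plain co-occurrence counts over a tiny
corpus already give words vectors whose similarity
reflects the relations they enter into, a crude
stand-in for what an LLM learns at scale:

from collections import Counter
from math import sqrt

# Toy sketch: words used in similar contexts get
# similar co-occurrence vectors, a crude stand-in
# for the learned relational structure in an LLM
# (or the node-and-connection networks RNT posits).

sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse feared the cat",
    "the dog feared the mouse",
]

def vector(word):
    # count words co-occurring with `word` in a
    # sentence, skipping "the" so the signal is
    # not drowned out by the article
    v = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            v.update(t for t in tokens
                     if t != word and t != "the")
    return v

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(x * x for x in a.values()))
    nb = sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" fill similar relational slots:
print(cosine(vector("cat"), vector("dog")))     # high
print(cosine(vector("cat"), vector("chased")))  # lower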

I would also put the answer into perspective if
one includes RAG. With Retrieval Augmented
Generation, things look completely different again.
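
For a feel of the RAG idea, a minimal sketch in
Python (the corpus, the overlap scoring and the
generate() step are all my own toy stand-ins for
a real embedding index and a real LLM call):

# Toy Retrieval Augmented Generation: pick the
# document that best matches the query, then
# condition the "generation" on it.

corpus = [
    "RNT models language as a network of relations.",
    "LLMs learn token relationships from data.",
    "Isabelle and Lean are proof assistants.",
]

def retrieve(query, docs):
    # score documents by naive word overlap
    q = set(query.lower().split())
    return max(docs,
               key=lambda d: len(q & set(d.lower().split())))

def generate(prompt):
    # placeholder for an LLM call; just echoes here
    return "LLM answer conditioned on: " + prompt

def rag(query):
    context = retrieve(query, corpus)
    return generate("Context: " + context +
                    " Question: " + query)

print(rag("how do llms learn relationships"))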

Mild Shock wrote:
> Hi,
> 
>  > Subject: An Isabelle Foundation?
>  > Date: Fri, 10 Jan 2025 14:16:33 +0000
>  > From: Lawrence Paulson via isabelle-dev
>  > Some of us have been talking about how to keep
> things going after the recent retirement of Tobias and
> myself and the withdrawal of resources from Munich.
> I've even heard a suggestion that Isabelle would not
> be able to survive for much longer.
> https://en.wikipedia.org/wiki/Isabelle_%28proof_assistant%29
> 
> No more money for symbolic AI? LoL
> 
> Maybe supplanted by Lean (proof assistant), and
> my speculation: maybe a re-orientation to keep up
> with hybrid methods, such as those found in ChatGPT.
> https://en.wikipedia.org/wiki/Lean_%28proof_assistant%29
> 
> Bye