
Path: ...!news.roellig-ltd.de!open-news-network.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Re: LLM versus CYC (Re: The Emperor’s New Clothes [John Sowa])
Date: Sun, 5 Jan 2025 21:45:52 +0100
Message-ID: <vler1s$28i58$5@solani.org>
References: <vl9kaa$25sv2$2@solani.org> <vl9kuu$25t7k$2@solani.org>
 <vlelbm$1s2jf$1@solani.org> <vlelkv$1s2re$1@solani.org>
 <vleqt8$28i58$2@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 5 Jan 2025 20:45:48 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="2377896"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 SeaMonkey/2.53.19
Cancel-Lock: sha1:nQADpw6MGzC7sa8KEa1gGeUSBEM=
X-User-ID: eJwFwYEBwCAIA7CXEGj1HYvr/ycsQXFxdhNsGH6DDp5MLkgPJ6Ei23URtr8tZXuUdwcaE8f0XaNGKvIHStkVew==
In-Reply-To: <vleqt8$28i58$2@solani.org>
Bytes: 2793
Lines: 49


Notice that John Sowa calls the LLM the “store”
of GPT. This could be a misconception that
matches what Permion did for their cognitive
memory. But matters are a little more
complicated, to say the least, especially since
OpenAI insists that GPT itself is also an LLM.

What might illuminate the situation is Fig. 6
of the paper below, which postulates two
Mixture of Experts (MoE) placements, one on
the attention mechanism and one on the
feed-forward layer:

[2407.06204] A Survey on Mixture of Experts
https://arxiv.org/abs/2407.06204
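
To make the Fig. 6 picture concrete, here is a
minimal sketch of the feed-forward MoE variant
in plain Python/NumPy: a softmax router picks
one expert per token (top-1 gating) and scales
the chosen expert’s output by its gate value.
All names and sizes are illustrative, not taken
from the paper, and the attention-side MoE is
omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_experts = 8, 16, 4

# Each expert is an ordinary two-layer feed-forward block.
W1 = rng.standard_normal((n_experts, d_model, d_hidden)) * 0.1
W2 = rng.standard_normal((n_experts, d_hidden, d_model)) * 0.1

# The router scores all experts for every token.
W_router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_ffn(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model)"""
    logits = x @ W_router
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)  # softmax gate
    choice = probs.argmax(axis=-1)   # top-1 expert per token
    y = np.zeros_like(x)
    for e in range(n_experts):
        idx = np.where(choice == e)[0]
        if idx.size == 0:
            continue                 # this expert got no tokens
        h = np.maximum(x[idx] @ W1[e], 0.0)       # ReLU hidden layer
        y[idx] = (h @ W2[e]) * probs[idx, e:e+1]  # gate-scaled output
    return y

tokens = rng.standard_normal((5, d_model))
print(moe_ffn(tokens).shape)   # (5, 8), one routed output per token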

Disclaimer: A pity Marvin Minsky didn’t already
describe these things in his Society of Mind!
It would make them easier to understand now…

Mild Shock schrieb:
> Douglas Lenat died on August 31, 2023. I
> don’t know whether CYC and Cycorp will make
> a dent in the future. CYC addressed the
> common knowledge bottleneck, and so do LLMs.
> I am using CYC mainly as a historical
> reference.
> 
> The “common knowledge bottleneck” in AI is
> a challenge that plagued early AI systems.
> It stems from the difficulty of encoding
> vast amounts of everyday, implicit human
> knowledge, things we take for granted but
> which computers have historically struggled
> to understand.
> 
> Currently LLMs by design focus more on
> shallow knowledge, whereas systems such as
> CYC might exhibit deeper knowledge in
> certain domains, making them possibly more
> suitable when stakeholders expect more
> reliable analytic capabilities.
> 
> The problem is not explainability,
> the problem is intelligence.
> 
>