
Path: ...!news.roellig-ltd.de!open-news-network.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.math
Subject: =?UTF-8?Q?Re:_LLM_versus_CYC_=28Was:_The_Emperor=e2=80=99s_New_Clot?=
 =?UTF-8?Q?hes_[John_Sowa]=29?=
Date: Sun, 5 Jan 2025 21:46:33 +0100
Message-ID: <vler35$28i58$6@solani.org>
References: <vl9kcf$25sv2$4@solani.org> <vl9l4g$25t7k$3@solani.org>
 <vlem4o$1s2uj$1@solani.org> <vlem5n$1s2uj$2@solani.org>
 <vleqs9$28i58$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 5 Jan 2025 20:46:29 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="2377896"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 SeaMonkey/2.53.19
Cancel-Lock: sha1:ABL2B3K4wrgcIAyJ9J4WBuF/5Wk=
In-Reply-To: <vleqs9$28i58$1@solani.org>
X-User-ID: eJwFwYEBwCAIA7CXFGid5xSE/09YAudmnSAYGIx/LlOZLnb4km4yz3Dak/XWm+4otbZwYoQpSyHSSkmrH3YeFx4=
Bytes: 2786
Lines: 49


Notice that John Sowa calls the LLM the “store”
of GPT. This could be a misconception that
matches what Permion did for their cognitive
memory. But matters are a bit more complicated,
to say the least, especially since OpenAI
insists that GPT itself is also an LLM.

What might illuminate the situation is Fig. 6
of this paper, which postulates two kinds of
Mixture of Experts (MoE), one on the attention
mechanism and one on the feed-forward layer:

A Survey on Mixture of Experts
https://arxiv.org/abs/2407.06204
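
To make the feed-forward flavour concrete, here
is a minimal sketch in Python/PyTorch: a router
with top-k gating over a few ordinary
feed-forward experts. All names (MoEFeedForward,
d_model, num_experts, etc.) are my own
illustration, not taken from the survey:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=64, d_hidden=256,
                 num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # One router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an ordinary transformer
        # feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):
        # x: (tokens, d_model); send every token
        # to its top-k experts, weighted by the gate.
        gates = F.softmax(self.router(x), dim=-1)
        weights, indices = gates.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(MoEFeedForward()(tokens).shape)  # torch.Size([10, 64])

The attention flavour in Fig. 6 would route
among attention modules instead of feed-forward
blocks, but the gating idea is the same.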

Disclaimer: A pity Marvin Minsky didn’t already
describe these things in his Society of Mind!
It would make them easier to understand now…

Mild Shock wrote:
> Douglas Lenat died on August 31, 2023. I don’t
> know whether CYC and Cycorp will make a dent
> in the future. CYC addressed the common
> knowledge bottleneck, and so do LLMs. I am
> using CYC mainly as a historical reference.
> 
> The “common knowledge bottleneck” in AI is a
> challenge that plagued early AI systems. This
> bottleneck stems from the difficulty of
> encoding vast amounts of everyday, implicit
> human knowledge, things we take for granted
> but that computers historically struggled to
> understand.
> 
> Currently LLMs by design focus more on shallow
> knowledge, whereas systems such as CYC might
> exhibit deeper knowledge in certain domains,
> making them possibly more suitable when the
> stakeholders expect more reliable analytic
> capabilities.
> 
> The problem is not explainability,
> the problem is intelligence.