Path: ...!news.mixmin.net!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Last Exit Analogical Reasoning (Was: Prolog totally missed the AI
 Boom)
Date: Fri, 7 Mar 2025 18:16:25 +0100
Message-ID: <vqf9l6$16euf$1@solani.org>
References: <vpceij$is1s$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 7 Mar 2025 17:16:22 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="1260495"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:Ry3KesD3hvZrRB6cDlPQLnbYHzg=
X-User-ID: eJwFwYEBwDAEBMCVgvxjHFH2H6F3MArbL8GLxbL602w+FzkxO5uuvcKEhV3GK5y2Pa6wLJPAt44QreFo/ExwFOY=
In-Reply-To: <vpceij$is1s$1@solani.org>
Bytes: 4020
Lines: 79


The problem I am trying to address was
already addressed here:

ILP and Reasoning by Analogy
Intuitively, the idea is to use what is already
known to explain new observations that appear similar
to old knowledge. In a sense, it is the opposite of induction,
where to explain the observations one comes up with
new hypotheses/theories.
Vesna Poprcova et al. - 2010
https://www.researchgate.net/publication/220141214

The problem is that ILP does not try to
learn and apply analogies, whereas autoencoders and
transformers typically try to “grok” analogies, so
that with less training data they can perform well
in certain domains. They will do some inference on
the part of the encoder even for unseen input data,
and they will do some generation on the part of the
decoder even for unseen latent space configurations
arising from unseen input data. By unseen data I
mean data not in the training set.
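
As an aside, here is a minimal sketch of that encode/decode
split, assuming a toy linear autoencoder in Python/numpy with
made-up data; it only illustrates inference and generation on
unseen inputs, not any particular published model:

  import numpy as np

  rng = np.random.default_rng(1)
  basis = rng.normal(size=(4, 8))        # hidden 4-d manifold in 8-d space
  X = rng.normal(size=(100, 4)) @ basis  # training set

  W_enc = rng.normal(size=(8, 4)) * 0.1  # encoder weights
  W_dec = rng.normal(size=(4, 8)) * 0.1  # decoder weights
  encode = lambda x: x @ W_enc           # inference: input -> latent
  decode = lambda z: z @ W_dec           # generation: latent -> output

  lr = 0.01
  for _ in range(1000):                  # minimize reconstruction error
      Z = encode(X)
      err = decode(Z) - X
      W_dec -= lr * Z.T @ err / len(X)
      W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

  x_unseen = rng.normal(size=4) @ basis  # drawn fresh, not in the training set
  print(np.linalg.norm(decode(encode(x_unseen)) - x_unseen))  # small
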
The full context window may tune this inference
and generation, which appeals to:

Analogy as a Search Procedure
Rumelhart and Abrahamson showed that when presented
with analogy problems like monkey:pig::gorilla:X, with
rabbit, tiger, cow, and elephant as alternatives for X,
subjects rank the four options following the
parallelogram rule.
Matías Osta-Vélez - 2022
https://www.researchgate.net/publication/363700634
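
For illustration, the parallelogram rule is easy to state in
code. A minimal sketch in Python/numpy, with made-up 2-d animal
vectors standing in for learned embeddings:

  import numpy as np

  vec = {                                # hypothetical toy embeddings
      "monkey":   np.array([0.9, 0.1]),
      "pig":      np.array([0.2, 0.8]),
      "gorilla":  np.array([1.0, 0.2]),
      "rabbit":   np.array([0.1, 0.3]),
      "tiger":    np.array([0.8, 0.4]),
      "cow":      np.array([0.3, 0.9]),
      "elephant": np.array([0.5, 0.7]),
  }

  # parallelogram rule for monkey:pig::gorilla:X,
  # the answer X should lie near gorilla + (pig - monkey)
  target = vec["gorilla"] + (vec["pig"] - vec["monkey"])

  alternatives = ["rabbit", "tiger", "cow", "elephant"]
  ranked = sorted(alternatives,
                  key=lambda w: np.linalg.norm(vec[w] - target))
  print(ranked)  # -> ['cow', 'elephant', 'rabbit', 'tiger']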

There are learning methods that work similarly
to ILP, in that they are based on positive and
negative examples. And the statistics can involve
bilinear forms, similar to what is seen in the
“Attention Is All You Need” paper. But I do not yet
have a good implementation of this envisioned
marriage of autoencoders and ILP, and I am still
researching the topic.
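
To make this concrete, here is a rough sketch, not any published
method: a bilinear score s(x,y) = x^T W y (the QK^T term of
attention is such a form), fitted by logistic regression from
positive and negative examples, i.e. the same kind of supervision
an ILP learner gets. All names and data below are made up:

  import numpy as np

  rng = np.random.default_rng(0)
  d = 8
  W_true = rng.normal(size=(d, d))       # hidden "ground truth" relation

  def sample():                          # label pairs by the hidden form
      x, y = rng.normal(size=d), rng.normal(size=d)
      return x, y, int(x @ W_true @ y > 0)   # 1 = positive, 0 = negative

  pairs = [sample() for _ in range(500)]

  W = np.zeros((d, d))                   # learned bilinear form
  lr = 0.1
  for _ in range(50):                    # stochastic gradient descent
      for x, y, label in pairs:
          s = np.clip(x @ W @ y, -30, 30)
          p = 1 / (1 + np.exp(-s))       # sigmoid of the bilinear score
          W += lr * (label - p) * np.outer(x, y)

  hits = sum((x @ W @ y > 0) == label for x, y, label in pairs)
  print(hits / len(pairs))               # training accuracy, close to 1.0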

Mild Shock wrote:
> 
> Inductive logic programming at 30
> https://arxiv.org/abs/2102.10556
> 
> The paper contains not a single reference to autoencoders!
> Still they show this example:
> 
> Fig. 1 ILP systems struggle with structured examples that
> exhibit observational noise. All three examples clearly
> spell the word "ILP", with some alterations: 3 noisy pixels,
> shifted and elongated letters. If we were to learn a
> program that simply draws "ILP" in the middle of the picture,
> without noisy pixels and elongated letters, that would
> be a correct program.
> 
> I guess ILP is 30 years behind the AI boom. An early autoencoder
> turned into a transformer was already reported here (*):
> 
> SERIAL ORDER, Michael I. Jordan - May 1986
> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
> 
> Well, ILP might have its merits; maybe we should not ask
> for a marriage of LLMs and Prolog, but of autoencoders and ILP.
> But it's tricky: I am still trying to decode the da Vinci code of
> things like stacked tensors. Are they related to k-literal clauses?
> The paper I referenced is found in this excellent video:
> 
> The Making of ChatGPT (35 Year History)
> https://www.youtube.com/watch?v=OFS90-FX6pg
>