Path: ...!news.mixmin.net!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Auto-Encoders as Prolog Fact Stores (Was: Prolog totally missed the
 AI Boom)
Date: Sat, 22 Feb 2025 22:51:53 +0100
Message-ID: <vpdgto$k4uv$1@solani.org>
References: <vpceij$is1s$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 22 Feb 2025 21:51:52 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="660447"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:VcEVgdfC8uhfRM7kmu6v8PK8YlY=
X-User-ID: eJwNzAEBwCAIBMBKgvBgHB60f4TtApxvCDoMDvPnb4yMSZYtGZhpanFWd52MuGfFEZcr/qpxfWuyk2RrQ/7mA2DgFbs=
In-Reply-To: <vpceij$is1s$1@solani.org>
Bytes: 3935
Lines: 85

Hi,

One idea I had was that autoencoders would
become kind of invisible, and work under the hood
to compress Prolog facts. Take these facts:

% standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
% alternatives 9, 7, 6, 1
data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
https://en.wikipedia.org/wiki/Seven-segment_display
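
The first eleven facts are plain reconstruction pairs, each canonical
pattern mapping to itself, while the last four map a variant glyph to
its canonical pattern, i.e. denoising examples. A minimal sketch of how
the facts could be gathered into input/target training pairs
(training_set/1 is just a made-up name here):

training_set(Pairs) :-
    findall(X-Y, data(seg7, X, Y), Pairs).

Calling ?- training_set(Ps), length(Ps, N). collects all 15 pairs.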

Or more visually, the digits 9, 7, 6 and 1 also have variants trained:

:- show.
[seven-segment rendering: the digits 0-9, followed by the variant glyphs of 9, 7, 6 and 1]
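
The show/0 below is only a guess at what such a rendering predicate
could look like (the original is not shown): it prints the target
pattern of every data/3 fact as three-line ASCII segment art, with the
segments given in the usual order a-g:

show :-
    forall(data(seg7, _, [A,B,C,D,E,F,G]),
           ( seg(A, ' _ ', '   ', Top),
             row(F, G, B, Mid),
             row(E, D, C, Bot),
             format('~w~n~w~n~w~n~n', [Top, Mid, Bot]) )).

% a segment contributes its On glyph when lit, its Off glyph otherwise
seg(1, On, _, On).
seg(0, _, Off, Off).

% a display row is built from a left, a middle and a right segment
row(L, M, R, Row) :-
    seg(L, '|', ' ', LC),
    seg(M, '_', ' ', MC),
    seg(R, '|', ' ', RC),
    atom_concat(LC, MC, T),
    atom_concat(T, RC, Row).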

The autoencoder would create a latent space, an encoder and a
decoder. We could then basically query ?- data(seg7, X, Y) with
X as input and Y as output.
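
For illustration, a purely symbolic stand-in for that pipeline in plain
Prolog (encode/2, decode/2 and data2/3 are made-up names, and the latent
code here is just the canonical pattern itself, where a trained
autoencoder would instead learn both mappings and a compressed latent
vector):

% encoder: map a (possibly variant) input pattern to its latent code
encode(X, Latent) :- data(seg7, X, Latent).

% decoder: map a latent code back to an output pattern
decode(Latent, Latent).

% answer queries through the encoder/decoder instead of the raw facts
data2(seg7, X, Y) :-
    encode(X, Latent),
    decode(Latent, Y).

So ?- data2(seg7, [1,1,1,0,0,1,1], Y). maps the variant 9 to the
canonical pattern [1,1,1,1,0,1,1].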

The variants of 9, 7, 6 and 1 were corrected:

:- random2.
[seven-segment rendering: the digits 0-9, followed by the four variant inputs decoded to their canonical glyphs]

The autoencoder might also tolerate errors in the input that are
not in the data, giving it some inferential capability. And it might
choose an output that is again not in the data, giving it some
generative capabilities.
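
One cheap way to mimic that tolerance without a neural network is a
nearest-neighbour lookup; the sketch below (closest/2 and hamming/3 are
made-up names) maps an unseen input to the stored input with the
smallest Hamming distance and returns its target:

% number of positions in which two equally long 0/1 lists differ
hamming([], [], 0).
hamming([A|As], [B|Bs], D) :-
    hamming(As, Bs, D0),
    ( A =:= B -> D = D0 ; D is D0 + 1 ).

% pick the target of the stored input nearest to X
closest(X, Y) :-
    findall(D-T, (data(seg7, In, T), hamming(X, In, D)), Pairs),
    keysort(Pairs, [_-Y|_]).

For example ?- closest([0,1,1,0,1,0,0], Y). sees a 1 with one spurious
segment lit and still answers with the canonical 1, [0,1,1,0,0,0,0].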

Bye

See also:

What is Latent Space in Deep Learning?
https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/

Mild Shock wrote:
> 
> Inductive logic programming at 30
> https://arxiv.org/abs/2102.10556
> 
> The paper contains not a single reference to autoencoders!
> Still they show this example:
> 
> Fig. 1 ILP systems struggle with structured examples that
> exhibit observational noise. All three examples clearly
> spell the word "ILP", with some alterations: 3 noisy pixels,
> shifted and elongated letters. If we were to learn a
> program that simply draws "ILP" in the middle of the picture,
> without noisy pixels and elongated letters, that would
> be a correct program.
> 
> I guess ILP is 30 years behind the AI boom. An early autoencoder
> turned into a transformer was already reported here (*):
> 
> SERIAL ORDER, Michael I. Jordan - May 1986
> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
> 
> Well, ILP might have its merits; maybe we should not ask
> for a marriage of LLMs and Prolog, but of autoencoders and ILP.
> But it's tricky. I am still trying to decode the da Vinci code of
> things like stacked tensors: are they related to k-literal clauses?
> The paper I referenced is found in this excellent video:
> 
> The Making of ChatGPT (35 Year History)
> https://www.youtube.com/watch?v=OFS90-FX6pg
>