Path: ...!2.eu.feeder.erje.net!3.eu.feeder.erje.net!feeder.erje.net!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.math
Subject: Re: Auto-Encoders as Prolog Fact Stores (Re: Prolog totally missed
 the AI Boom)
Date: Wed, 19 Mar 2025 21:00:45 +0100
Message-ID: <vrf7pb$4qs4$3@solani.org>
References: <vpcek7$is1s$2@solani.org> <vpdh0b$k4uv$2@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 19 Mar 2025 20:00:43 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="158596"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:nSgiFqIZQoAMaBftOUljH84Mi4I=
X-User-ID: eJwFwYEBACAEBMCVhH81jsT+I3QH42KFE3QM5g132w2jX4dE8fbkiGYJlHXKUm3Jhp0edGeSL7Tl+CvTD18vFbw=
In-Reply-To: <vpdh0b$k4uv$2@solani.org>
Bytes: 5720
Lines: 156


Hi,

I first wanted to use a working title:

"new frontiers in logic programming"

But upon reflection, and because of fElon,
here is another idea for a working title:

"neuro infused logic programming" (NILP)

What could it mean? Or does it already
have some alternative phrasing?

Try this paper:

Compositional Neural Logic Programming
Son N. Tran - 2021
The combination of connectionist models for low-level
information processing and logic programs for high-level
decision making can offer improvements in inference
efficiency and prediction performance.
https://www.ijcai.org/proceedings/2021/421
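
To make that general pattern concrete, here is a minimal
sketch (NOT the architecture of the paper, just an
illustration in Python/numpy; the prototype scorer and the
add rule are my own hypothetical stand-ins for a trained
network and a learned program, using the seven-segment
patterns from the facts quoted below):

# Low level: a "connectionist" scorer turns a raw 7-segment
# pattern into probabilities over digits. A hand-set prototype
# matrix stands in for trained weights.
# High level: a logic-style rule consumes those predictions.
import numpy as np

# canonical seven-segment patterns for the digits 0..9
PROTOS = np.array([
    [1,1,1,1,1,1,0],  # 0
    [0,1,1,0,0,0,0],  # 1
    [1,1,0,1,1,0,1],  # 2
    [1,1,1,1,0,0,1],  # 3
    [0,1,1,0,0,1,1],  # 4
    [1,0,1,1,0,1,1],  # 5
    [1,0,1,1,1,1,1],  # 6
    [1,1,1,0,0,0,0],  # 7
    [1,1,1,1,1,1,1],  # 8
    [1,1,1,1,0,1,1],  # 9
], dtype=float)

def digit_probs(pattern):
    """Single linear layer with hand-set weights plus softmax."""
    x = np.array(pattern, dtype=float)
    scores = (2 * PROTOS - 1) @ (2 * x - 1)   # agreement with each prototype
    e = np.exp(scores - scores.max())
    return e / e.sum()

def add_rule(pat_a, pat_b, total):
    """In the spirit of: add(A,B,S) :- digit(A,DA), digit(B,DB), S is DA+DB.
    Returns the probability that the two patterns sum to `total`."""
    pa, pb = digit_probs(pat_a), digit_probs(pat_b)
    return sum(pa[da] * pb[db]
               for da in range(10) for db in range(10) if da + db == total)

print(add_rule([1,1,1,0,0,1,0], [1,1,0,1,1,0,1], 9))   # a 7 variant plus a 2

The scorer could be any trained network; the point is only
that the logic rule never sees raw segments, it only sees
the predicted digit facts.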

Browsing through the bibliography I find:

[Cohen et al., 2017]
Tensorlog: Deep learning meets probabilistic

[Donadello et al., 2017]
Logic tensor networks

[Larochelle and Murray, 2011]
The neural autoregressive distribution estimator

[Manhaeve et al., 2018]
Neural probabilistic logic programming

[Mirza and Osindero, 2014]
Conditional generative adversarial nets

[Odena et al., 2017]
Auxiliary classifier GANs

[Pierrot et al., 2019]
Compositional neural programs

[Reed and de Freitas, 2016]
Neural programmer-interpreters

[Riveret et al., 2020]
Neuro-Symbolic Probabilistic Argumentation Machines

[Serafini and d’Avila Garcez, 2016]
Logic tensor networks

[Socher et al., 2013]
Neural tensor networks

[Towell and Shavlik, 1994]
Knowledge-based artificial neural networks

[Tran and d’Avila Garcez, 2018]
Deep logic networks

[Wang et al., 2019]
Compositional neural information fusion


Mild Shock wrote:
> Hi,
> 
> One idea I had was that autoencoders would
> become kind of invisible, and work under the hood
> to compress Prolog facts. Take these facts:
> 
> % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
> data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
> data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
> data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
> data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
> data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
> data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
> data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
> data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
> data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
> data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
> data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
> % alternatives 9, 7, 6, 1
> data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
> data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
> data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
> data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
> https://en.wikipedia.org/wiki/Seven-segment_display
> 
> Or more visually, the digits 9, 7, 6 and 1 have trained variants:
> 
> :- show.
> [rendered seven-segment digits _ 0-9 plus the 9, 7, 6, 1 variants; graphics lost in plain text]
> 
> The autoencoder would create a latent space, an
> encoder, and a decoder. We could then basically query
> ?- data(seg7, X, Y) with X as input and Y as output.
> 
> The 9, 7, 6 and 1 variants were corrected:
> 
> :- random2.
> [rendered output: digits _ 0-9 with the 9, 7, 6, 1 variants mapped back to their standard forms; graphics lost in plain text]
> 
> The autoencoder might also tolerate errors in the
> input that are not in the data, giving it some inferential
> capability. And it might then choose an output that is again
> not in the data, giving it some generative capabilities.
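
Just to see what this would look like in code, here is a
minimal sketch of such an autoencoder over the quoted facts
(plain Python/numpy; the 7-4-7 layer sizes, learning rate
and epoch count are arbitrary choices of mine, not anything
from the original post, and may need tuning):

import numpy as np

# the data(seg7, Input, Target) facts as (input, target) bit vectors:
# standard _, 0..9 (input == target), then the four variants
pairs = [
    ([0,0,0,0,0,0,0], [0,0,0,0,0,0,0]),
    ([1,1,1,1,1,1,0], [1,1,1,1,1,1,0]),
    ([0,1,1,0,0,0,0], [0,1,1,0,0,0,0]),
    ([1,1,0,1,1,0,1], [1,1,0,1,1,0,1]),
    ([1,1,1,1,0,0,1], [1,1,1,1,0,0,1]),
    ([0,1,1,0,0,1,1], [0,1,1,0,0,1,1]),
    ([1,0,1,1,0,1,1], [1,0,1,1,0,1,1]),
    ([1,0,1,1,1,1,1], [1,0,1,1,1,1,1]),
    ([1,1,1,0,0,0,0], [1,1,1,0,0,0,0]),
    ([1,1,1,1,1,1,1], [1,1,1,1,1,1,1]),
    ([1,1,1,1,0,1,1], [1,1,1,1,0,1,1]),
    ([1,1,1,0,0,1,1], [1,1,1,1,0,1,1]),   # alternative 9
    ([1,1,1,0,0,1,0], [1,1,1,0,0,0,0]),   # alternative 7
    ([0,0,1,1,1,1,1], [1,0,1,1,1,1,1]),   # alternative 6
    ([0,0,0,0,1,1,0], [0,1,1,0,0,0,0]),   # alternative 1
]
X = np.array([p[0] for p in pairs], dtype=float)   # encoder input
Y = np.array([p[1] for p in pairs], dtype=float)   # decoder target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
latent = 4                              # latent space size (arbitrary choice)
W1 = rng.normal(0, 0.5, (7, latent)); b1 = np.zeros(latent)   # encoder
W2 = rng.normal(0, 0.5, (latent, 7)); b2 = np.zeros(7)        # decoder

lr = 0.5
for _ in range(50000):                  # plain batch gradient descent
    H = sigmoid(X @ W1 + b1)            # latent code
    O = sigmoid(H @ W2 + b2)            # reconstruction
    dO = (O - Y) * O * (1 - O)          # squared-error gradient
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

def query(x):
    """Rough analogue of ?- data(seg7, X, Y): X in, rounded Y out."""
    h = sigmoid(np.array(x, dtype=float) @ W1 + b1)
    return np.round(sigmoid(h @ W2 + b2)).astype(int).tolist()

print(query([1,1,1,0,0,1,1]))   # 9 variant, should give the standard 9
print(query([0,0,0,0,1,1,0]))   # 1 variant, should give the standard 1

Because the decoder can only produce points near the
patterns it was trained on, an input that appears in none of
the facts still gets pulled towards some digit, which is
roughly the inferential and generative behaviour described
in the quoted paragraph above.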
> 
> Bye
> 
> See also:
> 
> What is Latent Space in Deep Learning?
> https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/
> 
> Mild Shock wrote:
>>
>> Inductive logic programming at 30
>> https://arxiv.org/abs/2102.10556
>>
>> The paper contains not a single reference to autoencoders!
>> Still they show this example:
>>
>> Fig. 1 ILP systems struggle with structured examples that
>> exhibit observational noise. All three examples clearly
>> spell the word "ILP", with some alterations: 3 noisy pixels,
>> shifted and elongated letters. If we were to learn a
>> program that simply draws "ILP" in the middle of the picture,
>> without noisy pixels and elongated letters, that would
>> be a correct program.
>>
>> I guess ILP is 30 years behind the AI boom. An early autoencoder
>> turned into a transformer was already reported here (*):
>>
>> SERIAL ORDER, Michael I. Jordan - May 1986
>> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
>>
>> Well, ILP might have its merits. Maybe we should not ask
>> for a marriage of LLMs and Prolog, but of autoencoders and ILP.
>> But it's tricky; I am still trying to decode the da Vinci code of
>> things like stacked tensors: are they related to k-literal clauses?
>> The paper I referenced is found in this excellent video:
>>
>> The Making of ChatGPT (35 Year History)
>> https://www.youtube.com/watch?v=OFS90-FX6pg
>