Path: ...!news.mixmin.net!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Re: ILP is still dreaming of higher order (Was: Prolog Education
 Group clueless about the AI Boom?)
Date: Sat, 8 Mar 2025 00:00:34 +0100
Message-ID: <vqftqi$16pcl$2@solani.org>
References: <vq4a7g$10843$1@solani.org> <vqftm2$16pcl$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 7 Mar 2025 23:00:34 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="1271189"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:L7pGry4kGWLeC6kR+BXkx/CMq/0=
X-User-ID: eJwNyEkRAEEIBDBLnA3IoQbwL2E3z7iC8cLgMD+/x17RvEMogT5c3iKvkyhYIH9VxClXm6QuXxWNBqZjRz84ARTA
In-Reply-To: <vqftm2$16pcl$1@solani.org>
Bytes: 4280
Lines: 104

You are probably aiming at the decomposition
of an autoencoder or transformer into an encoder
and decoder, making the split automatically
from a more general ILP framework.

 > The H is the bottleneck on purpose:
 >
 > relation(X, Y) :- encoder(X, H), decoder(H, Y).

OK, you missed the point. Let's assume for the
moment that the H is there on purpose, not
something that happens accidentally through a
more general learning algorithm, but a design
feature of how we want to learn. Can we
incorporate analogical reasoning, the parallelogram?

Yeah, relatively simple: just add more input
and output layers. The new parameter K indicates
how the representation was chosen:

relation(X, Y) :-
    similar(X, A, K),
    encoder(A, H),
    decoder(H, B),
    similar(Y, B, K).
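
As a toy illustration (my own hypothetical facts,
not taken from any real learner), the parallelogram
king - man + woman = queen can be phrased with
similar/3 carrying the offset K:

similar(king, man, royal).     % king is man shifted by royal
similar(queen, woman, royal).  % queen is woman shifted by royal

encoder(man, h_person).        % bottleneck H for the prototype
decoder(h_person, woman).      % decode H to the analogous prototype

?- relation(king, Y).
Y = queen.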

It's again an autoencoder or a transformer,
respectively, with a bigger latent space. Prominent
additional input layers that work here are
convolutional neural networks.

Things like max pooling or self-attention pooling:

relation(X, Y) :-
    encoder2(X, J),
    decoder2(J, Y).

encoder2(X, [K|H]) :-
    similar(X, A, K),
    encoder(A, H).

decoder2([K|H], Y) :-
    decoder(H, B),
    similar(Y, B, K).
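
With the same hypothetical toy facts as above, the
offset K is threaded through the latent term [K|H]:

?- encoder2(king, J).
J = [royal, h_person].

?- decoder2([royal, h_person], Y).
Y = queen.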

You can learn the decoder2/2 as a whole in your
autoencoder and transformer learning framework,
provided it can deal with many layers, i.e. if it
has deep learning techniques.

Mild Shock wrote:
> The first deep learning breakthrough was
> AlexNet by Alex Krizhevsky, Ilya Sutskever
> and Geoffrey Hinton:
> 
>  > In 2011, Geoffrey Hinton started reaching out
>  > to colleagues about “What do I have to do to
>  > convince you that neural networks are the future?”
>  > https://en.wikipedia.org/wiki/AlexNet
> 
> Meanwhile ILP is still dreaming of higher order logic:
> 
>  > We pull it out of thin air. And the job that does
>  > is, indeed, that it breaks up relations into
>  > sub-relations or sub-routines, if you prefer.
> 
> You mean this here:
> 
>  > Background knowledge (Second Order)
>  > -----------------------------------
>  > (Chain) ∃.P,Q,R ∀.x,y,z: P(x,y)← Q(x,z),R(z,y)
>  >
>  > https://github.com/stassa/vanilla/tree/master/lib/poker
> 
> That's too general, it doesn't address
> analogical reasoning.
> 
> Mild Shock wrote:
>> Concerning this boring nonsense:
>>
>> https://book.simply-logical.space/src/text/2_part_ii/5.3.html#
>>
>> Funny idea that anybody would be interested just now in
>> the year 2025 in things like teaching breadth first
>> search versus depth first search, or even be “mystified”
>> by such stuff. It's extremely trivial stuff:
>>
>> Insert your favorite tree traversal pictures here.
>>
>> It's not even artificial intelligence, nor does it have
>> anything to do with mathematical logic; it rather belongs
>> to computer science and discrete mathematics, which you
>> have in 1st year university courses, making it moot to
>> call it “simply logical”. It reminds me of the idea of
>> teaching how wax candles work to dumb down students,
>> when light bulbs have just been invented. If this is the
>> outcome of the Prolog Education Group 2.0, then good night.
>>
>