From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.logic
Subject: Ignorance in ILP circles confirmed (Was: Auto-Encoders as Prolog Fact Stores)
Date: Sun, 23 Feb 2025 18:34:49 +0100
Message-ID: <vpfm7n$ld6s$2@solani.org>
References: <vpcele$is1s$3@solani.org> <vpdh2r$k4uv$3@solani.org>
In-Reply-To: <vpdh2r$k4uv$3@solani.org>

Hi,

Somebody wrote:
> It’s a self-supervised form of ILP.
> No autoencoders anywhere at all.

And this only proves my point: ILP does not solve the problem of
making autoencoders and transformers available directly in Prolog,
which was the issue I posted at the top of this thread. My point is
exactly that I would consequently not look to ILP for Prolog
autoencoders and transformers, because ILP is most likely unaware
of the concept of latent space. A latent space has quite a few
advantages:

- *Dimensionality Reduction:* It captures the essential structure
of high-dimensional data in a more compact form.

- *Synthetic Data:* Instead of modifying the raw data, you can use
the latent space to generate variations for further learning.

- *Domain Adaptation:* A well-structured latent space can help
transfer knowledge from data-rich domains to underrepresented ones.

If you don’t mention autoencoders and transformers at all, you are
possibly also not aware of these advantages and of other properties
of autoencoders and transformers. In ILP the concept of latent space
is most likely dormant or blurred, since the stance is: we invent
predicates, ergo relations. There is no attempt to break relations
down further:

https://www.v7labs.com/blog/autoencoders-guide

Basically, autoencoders and transformers, by imposing a hidden layer,
structure a relation further into an encoder and a decoder. So a
relation is seen as a join, and H is the bottleneck on purpose:

relation(X, Y) :- encoder(X, H), decoder(H, Y).

The values of H range over the latent space, which is invented during
the learning process; it is not simply the input or the output space.
This design has some very interesting repercussions.
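To make the join reading concrete, here is a minimal hand-written
sketch for the seven-segment facts quoted below. It is only an
illustration, not a learned model: the latent space H is taken to be
the digit identity, and the names seg7_encoder/2 and seg7_decoder/2
are mine, not the output of any actual autoencoder.

% encoder: segment pattern (canonical or variant) -> latent code H
seg7_encoder([1,1,1,1,0,1,1], 9).   % canonical 9
seg7_encoder([1,1,1,0,0,1,1], 9).   % variant 9
seg7_encoder([1,1,1,0,0,0,0], 7).   % canonical 7
seg7_encoder([1,1,1,0,0,1,0], 7).   % variant 7

% decoder: latent code H -> canonical segment pattern
seg7_decoder(9, [1,1,1,1,0,1,1]).
seg7_decoder(7, [1,1,1,0,0,0,0]).

% the relation as a join through the bottleneck H
seg7(X, Y) :- seg7_encoder(X, H), seg7_decoder(H, Y).

A query like ?- seg7([1,1,1,0,0,1,1], Y). then yields
Y = [1,1,1,1,0,1,1], i.e. the variant 9 is mapped through the
bottleneck to the canonical 9. In a real autoencoder H would of
course be a learned numeric code rather than a symbol picked by hand.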
Bye

Mild Shock schrieb:
> Hi,
>
> One idea I had was that autoencoders would
> become kind of invisible, and work under the hood
> to compress Prolog facts. Take these facts:
>
> % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
> data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
> data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
> data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
> data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
> data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
> data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
> data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
> data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
> data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
> data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
> data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
>
> % alternatives 9, 7, 6, 1
> data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
> data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
> data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
> data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
>
> https://en.wikipedia.org/wiki/Seven-segment_display
>
> Or more visually, 9 7 6 1 have variants trained:
>
> :- show.
> [seven-segment rendering of _, 0-9 and the 9, 7, 6, 1 variants]
>
> The autoencoder would create a latent space, an
> encoder, and a decoder. And we could basically query
> ?- data(seg7, X, Y) with X as input and Y as output;
> 9, 7, 6, 1 were corrected:
>
> :- random2.
> [seven-segment rendering with the 9, 7, 6, 1 variants corrected
> to their canonical forms]
>
> The autoencoder might also tolerate errors in the
> input that are not in the data, giving it some inferential
> capability. And it might choose an output that is again not
> in the data, giving it some generative capability.
>
> Bye
>
> See also:
>
> What is Latent Space in Deep Learning?
> https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/
>
> Mild Shock schrieb:
>>
>> Inductive logic programming at 30
>> https://arxiv.org/abs/2102.10556
>>
>> The paper contains not a single reference to autoencoders!
>> Still they show this example:
>>
>> Fig. 1: ILP systems struggle with structured examples that
>> exhibit observational noise. All three examples clearly
>> spell the word "ILP", with some alterations: 3 noisy pixels,
>> shifted and elongated letters. If we were to learn a
>> program that simply draws "ILP" in the middle of the picture,
>> without noisy pixels and elongated letters, that would
>> be a correct program.
>>
>> I guess ILP is 30 years behind the AI boom. An early autoencoder
>> turned into a transformer was already reported here (*):
>>
>> SERIAL ORDER, Michael I. Jordan - May 1986
>> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
>>
>> Well, ILP might have its merits; maybe we should not ask
>> for a marriage of LLM and Prolog, but of autoencoders and ILP.
>> But it’s tricky: I am still trying to decode the da Vinci code of
>> things like stacked tensors. Are they related to k-literal clauses?
>> The paper I referenced is found in this excellent video:
>>
>> The Making of ChatGPT (35 Year History)
>> https://www.youtube.com/watch?v=OFS90-FX6pg
>
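PS: The noise tolerance described in the quoted message can be
imitated, very crudely and without any learning, by encoding an
unseen input to the nearest stored input pattern by Hamming distance
and decoding via the data/3 facts. This is only a stand-in sketch;
the predicates hamming/3 and nearest_data/3 are my own names, not
part of the quoted proposal, and a trained autoencoder would
generalize in a less ad hoc way.

% Hamming distance between two equal-length 0/1 lists
hamming([], [], 0).
hamming([A|As], [B|Bs], D) :-
    hamming(As, Bs, D0),
    ( A =:= B -> D = D0 ; D is D0 + 1 ).

% map an arbitrary input to the output of the nearest stored fact
nearest_data(Tag, X, Y) :-
    findall(D-Y0, ( data(Tag, X0, Y0), hamming(X, X0, D) ), Pairs),
    keysort(Pairs, [_-Y|_]).

For example ?- nearest_data(seg7, [1,1,1,1,0,1,0], Y). returns the
canonical output of whichever stored segment pattern is closest to
this unseen input, while the query interface stays the same as for
data(seg7, X, Y).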