Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Let's re-iterate software engineering first! (Was: A software
 engineering analysis why Prolog fails)
Date: Thu, 27 Mar 2025 11:42:22 +0100
Message-ID: <vs3a2d$eecp$1@solani.org>
References: <vpceij$is1s$1@solani.org> <vru3mc$c4jo$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 27 Mar 2025 10:42:22 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="473497"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:3ngXA7Yv3wuQfRL0M/rAdsDmDZ0=
X-User-ID: eJwFwQkBwDAIA0BL5W0qhwXwL2F3YSnJ6xnpsbF889wwQBdJ4L7pPMLQSMyafJS+CUoponXV+/PuKoOd8wNjjxXO
In-Reply-To: <vru3mc$c4jo$1@solani.org>

I have retracted those posts that had Python-first in them; I am
not sure whether my analysis of some projects was watertight. I
only made the Python example to illustrate the idea of a variation
point.

I do not think programming language trench wars are a good idea,
and one should put software engineering first, as an abstract
computer science discipline. Not doing so is only a distraction
from the real issues at hand. Variation points were defined quite
vaguely on purpose:

 > Ivar Jacobson defines a variation point as follows:
 > A variation point identifies one or more locations at
 > which the variation will occur.
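
To make the notion a bit more concrete in Prolog terms, here is a
minimal, purely illustrative sketch (the predicate names are made
up): a variation point can be as small as a hook predicate that
components plug their own clauses into.

:- multifile solver_hook/2.
:- dynamic solver_hook/2.

% solve/2 exposes one variation point: a component that has
% plugged a clause into solver_hook/2 takes over, otherwise
% a default takes effect.
solve(Problem, Solution) :-
    (   solver_hook(Problem, Solution)
    ->  true
    ;   default_solve(Problem, Solution)
    ).

% Trivial fallback, just for the sketch.
default_solve(Problem, Problem).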

Variation points can come in many shades. For example, ProbLog
based approaches take the viewpoint of a Prolog text with a lot
of configuration flags and predicate annotations. This is quite
different from the autoencoder or transformer component approach
I suggested here.

In particular, a component oriented approach could be more
flexible and dynamic when it allows programmatic configuration
of components. The drawback is that you cannot understand what
the program does by looking at a simply structured Prolog text.

Although I expect the situation is not that bad, and one could do
something similar to a table/1 directive, i.e. some directive
that says: look, this predicate is an autoencoder or transformer:

 > One idea I had was that autoencoders would become
 > kind of invisible, and work under the hood to compress
 > Prolog facts. Take these facts:
 >
 > % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
 > data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).

So to instruct the Prolog system to do what is sketched,
one would possibly need a new directive autoencoder/1:

:- autoencoder data/3.
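
Such a directive does not exist in current Prolog systems; the
following is only a sketch of how it might be wired up, with all
predicate names invented for illustration. The directive merely
registers a predicate indicator, and a load-time hook then hands
the registered facts to some autoencoder component instead of
storing them verbatim.

:- op(1150, fx, autoencoder).        % lets ":- autoencoder data/3." parse
:- dynamic autoencoded_pred/1.

% The directive body just records which predicate is handled by
% the autoencoder component.
autoencoder(Name/Arity) :-
    assertz(autoencoded_pred(Name/Arity)).

% Divert facts of a registered predicate to the component and
% drop the original clause (hence the empty clause list).
user:term_expansion(Fact, []) :-
    callable(Fact),
    functor(Fact, Name, Arity),
    autoencoded_pred(Name/Arity),
    feed_to_autoencoder(Fact).

% Placeholder for the real component.
feed_to_autoencoder(Fact) :-
    format("autoencoder would compress: ~q~n", [Fact]).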

Mild Shock wrote:
> Hi,
> 
> A software engineering analysis why Prolog fails
> ================================================
> 
> You would also get more done if Prolog had some
> well designed plug-and-play machine learning libraries.
> Currently most SWI-Prolog packages are just GitHub dumps:
> 
> (Python) Problem ---> import solver ---> Solution
> 
> (SWI) Problem ---> install pack ---> Problem
> 
> Python shows more success in the practitioners’ domain, since
> it has more libraries that have stood the test of time in
> practical use. Whereas Prolog is still in its infancy in many
> domains, you don’t arrive at the same level of convenience and
> breadth as Python if you only have fire-and-forget dumps on
> offer, from some PhD projects where software engineering is
> secondary.
> 
> I don’t know exactly why Prolog has so many problems
> with software engineering. Python has object orientation,
> but Logtalk didn’t make the situation better. SWI-Prolog
> has modules, but they are never used. For example, this
> is a big monolith:
> 
> This module performs learning over Logic Programs
> https://github.com/friguzzi/liftcover/blob/main/prolog/liftcover.pl
> 
> It’s more designed towards providing some command line
> control. But if you look into it, it has EM algorithms and
> gradient algorithms, and who knows what. These building blocks
> are not exposed, not made for reuse or for improvement by
> switching in 3rd party alternatives. Most likely a design
> flaw inside the pack mechanism itself, since it assumes a
> single main module?
> 
> So the pack mechanism works if a unit pack imports a
> clp(BNR) pack, since it uses the single entry point of clp(BNR).
> But it is never on par with the richness of Python packages,
> which have more of a hierarchical structure of many, many
> modules in their packs.
> 
> Mild Shock wrote:
>>
>> Inductive logic programming at 30
>> https://arxiv.org/abs/2102.10556
>>
>> The paper contains not a single reference to autoencoders!
>> Still they show this example:
>>
>> Fig. 1 ILP systems struggle with structured examples that
>> exhibit observational noise. All three examples clearly
>> spell the word "ILP", with some alterations: 3 noisy pixels,
>> shifted and elongated letters. If we were to learn a
>> program that simply draws "ILP" in the middle of the picture,
>> without noisy pixels and elongated letters, that would
>> be a correct program.
>>
>> I guess ILP is 30 years behind the AI boom. An early autoencoder
>> turned into a transformer was already reported here (*):
>>
>> SERIAL ORDER, Michael I. Jordan - May 1986
>> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
>>
>> Well, ILP might have its merits; maybe we should not ask
>> for a marriage of LLM and Prolog, but of autoencoders and ILP.
>> But it’s tricky, I am still trying to decode the da Vinci code
>> of things like stacked tensors: are they related to k-literal
>> clauses? The paper I referenced is found in this excellent video:
>>
>> The Making of ChatGPT (35 Year History)
>> https://www.youtube.com/watch?v=OFS90-FX6pg
>>
>