Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.math
Subject: Re: The Emperor’s New Clothes [John Sowa] (Re: Linear Algebraic
 Approaches to Logic Programming)
Date: Sun, 5 Jan 2025 20:22:35 +0100
Message-ID: <vlem5n$1s2uj$2@solani.org>
References: <vl9kcf$25sv2$4@solani.org> <vl9l4g$25t7k$3@solani.org>
 <vlem4o$1s2uj$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 5 Jan 2025 19:22:31 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="1969107"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 SeaMonkey/2.53.19
Cancel-Lock: sha1:X89brQVcWE1wTQE4wwBFvZX6c1c=
In-Reply-To: <vlem4o$1s2uj$1@solani.org>
X-User-ID: eJwFwYEBwCAIA7CXKFIq5ziU/09YwpXIViQzOJzvmGuPGqUmyrANX5pRZITefTWXb3i4jty4nBT8VgPdPyvsFGw=


What’s also interesting: the recent physics
Nobel Prize recipient Geoffrey Hinton also has
a roughly 30-year-old paper about MoE,

which has some 6652 citations:

 > Adaptive Mixtures of Local Experts
 > https://www.cs.toronto.edu/~fritz/absps/jjnh91.pdf
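
For reference, the core idea of that paper is compact
enough to sketch. Below is a minimal numpy sketch (the
dimensions, weights, and the choice of plain linear maps
as experts are made up for illustration, not taken from
the paper): a gating network softmaxes over the experts,
and the output is the gate-weighted sum of the expert
outputs.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only): 4 experts, input dim 8, output dim 3.
n_experts, d_in, d_out = 4, 8, 3
W_experts = rng.normal(size=(n_experts, d_out, d_in))  # one linear map per expert
W_gate = rng.normal(size=(n_experts, d_in))            # gating network weights

def moe_forward(x):
    """Mixture of experts: y = sum_i g_i(x) * expert_i(x),
    where g is the softmax of the gating logits."""
    logits = W_gate @ x                      # one logit per expert
    g = np.exp(logits - logits.max())
    g /= g.sum()                             # softmax gate
    outputs = W_experts @ x                  # stacked expert outputs, (n_experts, d_out)
    return g @ outputs                       # gate-weighted combination

x = rng.normal(size=d_in)
print(moe_forward(x))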

Mild Shock wrote:
> 
> John Sowa shows clear signs of coping problems. We just
> have an instance of “The Emperor’s New Clothes”: some
> companies have turned out to be naked with the advent of GPT.
> 
> I don’t think it is productive to postulate
> some CANNOT, as is done here:
> 
>  > Linguists say that LLMs cannot be a language model.
>  > - Tensors do not make the linguistic information explicit.
>  > - They do not distinguish the syntax, semantics, and ontology.
>  > - GPT cannot use the 60+ years of AI research and development.
>  > https://www.youtube.com/watch?v=6K6F_zsQ264
> 
> Then in the next slide he nevertheless embraces
> tensors for his new Prolog system. WTF! Basically
> this is a very narrow narrative, which is totally
> unfounded in my opinion. Just check out these papers:
> 
> GRIN: GRadient-INformed MoE
> https://arxiv.org/abs/2409.12136
> 
> A Survey on Mixture of Experts
> https://arxiv.org/abs/2407.06204
> 
> This paints a totally different picture of LLMs; it seems
> they are more in the tradition of CYC by Douglas Lenat.
> 
> Mild Shock wrote:
>> Hi,
>>
>> Maybe one can get a better grip on this intimate
>> relationship simply by some hands-on work?
>>
>> Linear Algebraic Approaches to Logic Programming
>>
>> Katsumi Inoue (National Institute of Informatics, Japan)
>>
>> Abstract: Integration of symbolic reasoning and machine
>> learning is important for robust AI.  Realization of
>> symbolic reasoning based on algebraic methods is promising
>> to bridge between symbolic reasoning and machine learning,
>> since algebraic data structures have been used in machine
>> learning. To this end, Sakama, Inoue and Sato have defined
>> notable relations between logic programming and linear
>> algebra and have proposed algorithms to compute logic
>> programs numerically using tensors.  This work has been
>> extended in various ways, to compute supported and stable
>> models of normal logic programs, to enhance the efficiency
>> of computation using sparse methods, and to enable abduction
>> for abductive logic programming.  A common principle in
>> this approach is to formulate logical formulas as vectors/
>> matrices/tensors, and linear algebraic operations are
>> applied on these elements for computation of logic programming.
>> Partial evaluation can be realized in parallel and by 
>> self-multiplication, showing the potential for exponential
>> speedup.  Furthermore, the idea to represent logic programs
>> as tensors and matrices and to transform logical reasoning
>> to numeric computation can be the basis of the differentiable
>> methods for learning logic programs.
>>
>> https://www.iclp24.utdallas.edu/invited-speakers/
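
Hands-on indeed: the core of that encoding fits in a few
lines of numpy. The sketch below is my own toy version
(the example program, the restriction to at most one rule
per head, and the thresholding are simplifications, not
the exact standardized form from the papers): a definite
program becomes a matrix, an interpretation a 0/1 vector,
and the immediate-consequence operator T_P a thresholded
matrix-vector product, iterated to the least model.

import numpy as np

# Toy definite program (illustrative):
#   s.    q :- s.    r :- s.    p :- q, r.
atoms = ['p', 'q', 'r', 's']
idx = {a: i for i, a in enumerate(atoms)}
rules = {'p': ['q', 'r'], 'q': ['s'], 'r': ['s']}
facts = ['s']

# Rule "h :- b1,...,bk" puts 1/k at M[h, bi], so (M v)[h]
# reaches 1 exactly when the whole body is true in v.
n = len(atoms)
M = np.zeros((n, n))
for head, body in rules.items():
    for b in body:
        M[idx[head], idx[b]] = 1.0 / len(body)

v = np.zeros(n)
for f in facts:
    v[idx[f]] = 1.0

# Iterate T_P: v' = v OR threshold(M v); the fixpoint is the least model.
while True:
    v_new = np.maximum(v, (M @ v >= 1.0 - 1e-9).astype(float))
    if np.array_equal(v_new, v):
        break
    v = v_new

print([a for a in atoms if v[idx[a]] == 1.0])  # ['p', 'q', 'r', 's']

As I read the abstract, the "self-multiplication" is then
squaring M, which would compose two T_P steps into one and
is where the potential exponential speedup in the number
of iterations comes from.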
>>
>> Bye
>>
>> Mild Shock wrote:
>>> Hi,
>>>
>>> OK, this one is only 250 bucks for a TPU:
>>>
>>> Introducing NVIDIA Jetson Orin™ Nano Super
>>> https://www.youtube.com/watch?v=S9L2WGf1KrM
>>>
>>> Now I am planning to do the following:
>>>
>>> Create a tensor flow Domain Specific Language (DSL).
>>>
>>> With these use cases:
>>>
>>> - Run the tensor flow DSL locally in
>>>    your Prolog system interpreted.
>>>
>>> - Run the tensor flow DSL locally in
>>>    your Prolog system compiled.
>>>
>>> - Run the tensor flow DSL locally on
>>>    your Tensor Processing Unit (TPU).
>>>
>>> - Run the tensor flow DSL remotely
>>>    on a compute server.
>>>
>>> - What else?
>>>
>>> Maybe also support some ONNX file format?
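
Just to make that plan concrete, here is a purely
illustrative Python sketch of such a backend-neutral
tensor DSL (all the names, Tensor/Add/MatMul/eval_local,
are invented for illustration): the expression tree is
built once, and each use case above is just another
evaluator over the same tree, whether interpreted,
compiled, serialized to something like ONNX, or shipped
off to a TPU or compute server.

import numpy as np

class Tensor:                        # leaf: a concrete array
    def __init__(self, data): self.data = np.asarray(data, float)

class Add:                           # node: elementwise addition
    def __init__(self, a, b): self.a, self.b = a, b

class MatMul:                        # node: matrix multiplication
    def __init__(self, a, b): self.a, self.b = a, b

def eval_local(expr):
    """Backend 1 of N: interpret the expression tree with numpy.
    A compiler, an ONNX exporter, or a remote executor would
    walk the same tree."""
    if isinstance(expr, Tensor):
        return expr.data
    if isinstance(expr, Add):
        return eval_local(expr.a) + eval_local(expr.b)
    if isinstance(expr, MatMul):
        return eval_local(expr.a) @ eval_local(expr.b)
    raise TypeError(f"unknown node: {expr!r}")

# y = A x + b, built once as a term, then handed to a backend
A = Tensor([[1.0, 2.0], [3.0, 4.0]])
x = Tensor([1.0, 1.0])
b = Tensor([0.5, 0.5])
print(eval_local(Add(MatMul(A, x), b)))  # [3.5 7.5]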
>>>
>>> Bye
>>
>