Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.math
Subject: What is Argonne doing now? (Re: Or ChatGPT is better in Program
 Verification)
Date: Sat, 11 Jan 2025 02:13:02 +0100
Message-ID: <vlsgis$2fk77$3@solani.org>
References: <vls3u5$2fejb$3@solani.org> <vls4dk$2feqe$3@solani.org>
 <vls5cf$2ff9i$3@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 11 Jan 2025 01:13:00 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="2609383"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:nx2bbDllWe2PsvXowVIQ0VqyIlo=
In-Reply-To: <vls5cf$2ff9i$3@solani.org>
X-User-ID: eJwFwQcBwEAIBDBLrGPIoTz4l9AE6uwT5nDD4SpdLBlSJdA5ECN6S19Qbdi6qHgxUz/pyJiiT9J6unwvfw31FC0=

Hi,

Once spearheading symbolic methods:

Automated Deduction at Argonne
https://www.mcs.anl.gov/research/projects/AR/

Now they invest in "foundation
models", ca. 68 million USD:

Argonne receives funding for artificial
intelligence in scientific research
- Privacy-Preserving Federated Learning for Science
- Tensor-Compressed Sustainable Pre-Training of
   Extreme-Scale Foundation Models
https://www.anl.gov/article/argonne-receives-funding-for-artificial-intelligence-in-scientific-research

The term "foundation models" was introduced a while
ago, for example by IBM; initially it was only deep
learning applied to domains like Climate Change.

It brought, for example, some innovations in weather
reporting, like quicker and more local forecasts:

IBM and NASA are building an AI foundation model for weather and climate
https://research.ibm.com/blog/weather-climate-foundation-model

but the term "foundation models" now seems to be
used also for models that include natural
language. The term has somehow become the new

synonym for the machine learning behind LLMs,
with applications in health care, etc.

What are foundation models?
https://www.ibm.com/think/topics/foundation-models

Bye

Mild Shock schrieb:
> Hi,
> 
> Or ChatGPT is better at Program Verification. Meaning:
> the "fine tuning" that ChatGPT has received, to deal with
> queries that include programming language fragments
> 
> or tasks, and to generate programming language
> fragments or tasks, is practically already better and
> expected to improve even more:
> 
> - Not an Architecture based on a Symbolic HOL language
>    and backend to extract Programming Languages.
> 
> - Not an Architecture that only supports some academic
>    "pure" Languages like ML, Haskell, Scala, etc..
> 
> - Rather an Architecture where one can interact directly
>    in the Programming Languages, similar to how it deals
>    with Natural Language; Pipe Dream Ontologies are
>    nowhere to be seen.
> 
> - Rather an Architecture that can deal with "impure"
>    Languages such as JavaScript, Python, etc., which are
>    imperative and have side effects.
> 
> - What else?
> 
> Bye
> 
> Mild Shock schrieb:
>> Hi,
>>
>> Interestingly, with RNT John Sowa comes close to how
>> ChatGPT works. In his LLM-bashing videos, John Sowa
>> repeatedly showed brain models in his slides that
>>
>> come from Sydney Lamb:
>>
>> Relational Network Theory (RNT), also known as
>> Neurocognitive Linguistics (NCL) and formerly as
>> Stratificational Linguistics or Cognitive-Stratificational
>> Linguistics, is a connectionist theoretical framework in
>> linguistics primarily developed by Sydney Lamb which
>> aims to integrate theoretical linguistics with neuroanatomy.
>> https://en.wikipedia.org/wiki/Relational_Network_Theory
>>
>> You can ask ChatGPT and ChatGPT will tell you
>> what parallels it sees between LLM and RNT.
>>
>> Bye
>>
>> P.S.: Here is what ChatGPT tells me about LLM and RNT
>> as a summary, but the answer itself was much bigger:
>>
>> While there are shared aspects, particularly the
>> emphasis on relational dynamics, ChatGPT models are
>> not explicitly designed with RNT principles. Instead,
>> they indirectly align with RNT through their ability
>> to encode and use relationships learned from data.
>> However, GPT’s probabilistic approach and reliance
>> on large-scale data contrast with the more structured
>> and theory-driven nature of RNT.
>> https://chatgpt.com/share/67818fb7-a788-8013-9cfe-93b3972c8114
>>
>> I would also put the answer into perspective if one
>> includes RAG. So with Retrieval Augmented Generation
>>
>> things look completely different again.
>>
>> Mild Shock schrieb:
>>> Hi,
>>>
>>>  > Subject: An Isabelle Foundation?
>>>  > Date: Fri, 10 Jan 2025 14:16:33 +0000
>>>  > From: Lawrence Paulson via isabelle-dev
>>>  > Some of us have been talking about how to keep
>>> things going after the recent retirement of Tobias and
>>> myself and the withdrawal of resources from Munich.
>>> I've even heard a suggestion that Isabelle would not
>>> be able to survive for much longer.
>>> https://en.wikipedia.org/wiki/Isabelle_%28proof_assistant%29
>>>
>>> No more money for symbolic AI? LoL
>>>
>>> Maybe supplanted by Lean (proof assistant), and my
>>> speculation: maybe a re-orientation to keep up with
>>> hybrid methods, such as those found in ChatGPT.
>>> https://en.wikipedia.org/wiki/Lean_%28proof_assistant%29
>>>
>>> Bye
>>
>