Path: ...!news.mixmin.net!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Lemanicus at EPFL Computer Museum [compare to NVIDIA H100 NVL]
Date: Sat, 11 Jan 2025 02:50:32 +0100
Message-ID: <vlsip7$2fl6j$2@solani.org>
References: <vls3sb$2fejb$1@solani.org> <vls4be$2feqe$2@solani.org>
 <vls57s$2ff9i$1@solani.org> <vlsgf7$2fk77$2@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 11 Jan 2025 01:50:31 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="2610387"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:DkBwqMrnI9qRwg0E57nnwhdhXTw=
X-User-ID: eJwFwQEBADAEBMBKzPDiYPSPsDsVY2u/pnZ1dUvBlT4pDE+MRsuLyIY7K/mgCHtsxJGn7E3j3ZGlA0naD1ZzFaY=
In-Reply-To: <vlsgf7$2fk77$2@solani.org>
Bytes: 7598
Lines: 183

Hi,

Ok, I remember showing somebody around
in 2022 not only the AI Campus at EPFL,
but also the Computer Museum at EPFL:

Installation of EPFL's IBM Blue Gene/Q,
Lemanicus, at the Musée Bolo
https://www.museebolo.ch/blue-gene-q-au-musee-bolo/

What was Lemanicus? It had to do with Argonne via the Blue Gene line:

- A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q
system called Lemanicus was installed at the EPFL in
March 2013. This system belongs to the Center for
Advanced Modeling Science (CADMOS), which is a
collaboration between the three main research
institutions on the shore of Lake Geneva in the
French-speaking part of Switzerland: University of
Lausanne, University of Geneva and EPFL. The system
consists of a single rack (1,024 compute nodes)
with 2.1 PB of IBM GPFS-GSS storage.
https://en.wikipedia.org/wiki/IBM_Blue_Gene#Blue_Gene/Q
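
As a quick plausibility check of the 209 TFLOPS figure, here is a
minimal sketch in Python. The per-node numbers (16 compute cores at
1.6 GHz, each with a 4-wide double-precision FMA unit) are my own
assumption from the published Blue Gene/Q chip specs, not something
stated in the quote:

# Plausibility check for the quoted 209 TFLOPS peak.
# Assumed Blue Gene/Q node: 16 compute cores at 1.6 GHz,
# each with a 4-wide double-precision FMA unit (8 flops/cycle).
cores_per_node  = 16
clock_ghz       = 1.6
flops_per_cycle = 8         # 4-wide SIMD x 2 (fused multiply-add)
nodes           = 1024      # one rack, as in the Wikipedia quote

gflops_per_node = cores_per_node * clock_ghz * flops_per_cycle  # 204.8
peak_tflops     = gflops_per_node * nodes / 1000                # ~209.7

print("%.1f GFLOPS/node, %.1f TFLOPS peak" % (gflops_per_node, peak_tflops))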

Interestingly, Lemanicus is outrun by a single NVIDIA H100 NVL.
But the NVIDIA H100 NVL doesn't even have a Wikipedia page yet.

Bye

P.S.: Here is a comparison that ChatGPT generated:

Aspect              | Blue Gene/Q (Lemanicus)                         | NVIDIA H100 NVL
--------------------+-------------------------------------------------+------------------------------------------
Primary Purpose     | Scientific simulations (physics, biology, etc.) | AI/ML training and inference
Peak Performance    | 1.6 petaflops (double-precision)                | Exaflop-level AI performance (FP8)
Memory              | 16 GB RAM per node                              | Large HBM3 memory per GPU
Interconnect        | 5D torus network                                | NVLink with high bandwidth
Precision Focus     | Double-precision (64-bit floating-point)        | FP8, FP16, and INT8 for AI workloads
Energy Efficiency   | Extremely energy-efficient for its era          | Highly optimized for power efficiency
Software Ecosystem  | Custom tools for HPC                            | CUDA, TensorRT, cuDNN for AI/ML workloads
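
P.P.S.: And a rough back-of-the-envelope comparison in actual numbers,
as a small Python sketch. The H100 NVL figures are my own assumption,
taken approximately from NVIDIA's public datasheet for the dual-GPU
NVL pair, not from ChatGPT's table; treat them as order-of-magnitude
values only:

# Lemanicus rack peak (FP64), from the Wikipedia quote above.
lemanicus_fp64_tflops = 209
# Assumed, approximate H100 NVL datasheet values (dual-GPU pair).
h100_nvl_fp64_tc_tflops = 134   # FP64 Tensor Core (approx., assumed)
h100_nvl_fp8_tc_tflops  = 7916  # FP8 Tensor Core w/ sparsity (approx., assumed)

print("FP64 tensor: %.2fx Lemanicus peak"
      % (h100_nvl_fp64_tc_tflops / lemanicus_fp64_tflops))
print("FP8  tensor: %.1fx Lemanicus peak (different precision, "
      "not directly comparable)"
      % (h100_nvl_fp8_tc_tflops / lemanicus_fp64_tflops))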


Mild Shock wrote:
> Hi,
> 
> Once spear heading symbolic methods:
> 
> Automated Deduction at Argonne
> https://www.mcs.anl.gov/research/projects/AR/
> 
> Now they invest in "foundation
> models", ca. 68 million USD:
> 
> Argonne receives funding for artificial
> intelligence in scientific research
> - Privacy-Preserving Federated Learning for Science
> - Tensor-Compressed Sustainable Pre-Training of
>    Extreme-Scale Foundation Models
> https://www.anl.gov/article/argonne-receives-funding-for-artificial-intelligence-in-scientific-research 
> 
> 
> "foundation models" was initiated a while ago
> for example by IBM, it was intially only deep
> learning applied to like for example Climate Change.
> 
> It brought, for example, some new weather-reporting
> innovations, like quicker and more local forecasts:
> 
> IBM and NASA are building an AI foundation model for weather and climate
> https://research.ibm.com/blog/weather-climate-foundation-model
> 
> But the term "foundation models" now seems to
> be used also for models that include natural
> language. The term has somehow become the new
> synonym for the machine learning behind LLMs,
> with applications in health care, etc.
> 
> What are foundation models?
> https://www.ibm.com/think/topics/foundation-models
> 
> Bye
> 
> Mild Shock wrote:
>> Hi,
>>
>> Or ChatGPT is better at Program Verification. Meaning:
>> the "Fine Tuning" through which ChatGPT deals with
>> queries that include programming language fragments
>> or tasks, and the generation of programming language
>> fragments or tasks, is practically already better and
>> expected to improve even more:
>>
>> - Not an Architecture based on a Symbolic HOL language
>>    and a backend to extract Programming Languages.
>>
>> - Not an Architecture that only supports some academic
>>    "pure" Languages like ML, Haskell, Scala, etc..
>>
>> - Rather an Architecture where one can directly interact
>>    in the Programming Languages, similar to how it deals with
>>    Natural Language; Pipe Dream Ontologies are nowhere to be seen.
>>
>> - Rather an Architecture that can deal with "impure"
>>    Languages such as JavaScript, Python, etc., which are
>>    imperative and have side effects.
>>
>> - What else?
>>
>> Bye
>>
>> Mild Shock wrote:
>>> Hi,
>>>
>>> Interestingly, with RNT John Sowa comes close to how
>>> ChatGPT works. In his LLM-bashing videos, John Sowa
>>> repeatedly showed brain models in his slides that
>>> come from Sydney Lamb:
>>>
>>> Relational Network Theory (RNT), also known as
>>> Neurocognitive Linguistics (NCL) and formerly as
>>> Stratificational Linguistics or Cognitive-Stratificational
>>> Linguistics, is a connectionist theoretical framework in
>>> linguistics primarily developed by Sydney Lamb which
>>> aims to integrate theoretical linguistics with neuroanatomy.
>>> https://en.wikipedia.org/wiki/Relational_Network_Theory
>>>
>>> You can ask ChatGPT and ChatGPT will tell you
>>> what parallels it sees between LLM and RNT.
>>>
>>> Bye
>>>
>>> P.S.: Here is what ChatGPT tells me about LLM and RNT
>>> as a summary, but the answer itself was much bigger:
>>>
>>> While there are shared aspects, particularly the
>>> emphasis on relational dynamics, ChatGPT models are
>>> not explicitly designed with RNT principles. Instead,
>>> they indirectly align with RNT through their ability
>>> to encode and use relationships learned from data.
>>> However, GPT’s probabilistic approach and reliance
>>> on large-scale data contrast with the more structured
>>> and theory-driven nature of RNT.
>>> https://chatgpt.com/share/67818fb7-a788-8013-9cfe-93b3972c8114
>>>
>>> I would also put the answer into perspective if one
>>> includes RAG. With Retrieval Augmented Generation,
>>> things look completely different again.
>>>
>>> Mild Shock wrote:
>>>> Hi,
>>>>
>>>>  > Subject: An Isabelle Foundation?
>>>>  > Date: Fri, 10 Jan 2025 14:16:33 +0000
>>>>  > From: Lawrence Paulson via isabelle-dev
>>>>  > Some of us have been talking about how to keep
>>>>  > things going after the recent retirement of Tobias and
>>>>  > myself and the withdrawal of resources from Munich.
>>>>  > I've even heard a suggestion that Isabelle would not
>>>>  > be able to survive for much longer.
>>>> https://en.wikipedia.org/wiki/Isabelle_%28proof_assistant%29
>>>>
>>>> No more money for symbolic AI? LoL
>>>>
>>>> Maybe supplanted by Lean (proof assistant), and my
>>>> speculation is maybe a re-orientation to keep up with
>>>> hybrid methods, such as those found in ChatGPT.
>>>> https://en.wikipedia.org/wiki/Lean_%28proof_assistant%29
========== REMAINDER OF ARTICLE TRUNCATED ==========