Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: sci.logic
Subject: Machine Learning discovers Roman Numerals (Was: How ELIZA 2.0 killed
 Wolfram Alpha)
Date: Sat, 8 Feb 2025 12:55:57 +0100
Message-ID: <vo7goc$14mlg$1@solani.org>
References: <vl6nv9$1nl63$3@solani.org> <vn98o3$ie21$2@solani.org>
 <vnsl6e$ue77$2@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 8 Feb 2025 11:55:56 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="1202864"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101
 Firefox/128.0 SeaMonkey/2.53.20
Cancel-Lock: sha1:UViSt0gPL0aiKMin8WPUEW3MhhQ=
In-Reply-To: <vnsl6e$ue77$2@solani.org>
X-User-ID: eJwFwYEBgDAIA7CXLFJg56wg/59gwjcQnR4M53IbLBx6CWjXKTG+0iTmomHsodLX4wrkI7uo6U3NWtJ+R8gVhw==

Hi,

I have been trying for a while to motivate a Biology Teacher
to replicate the grokking experiment below. But I have my own
worries: why bother with the black box of what a machine
learning method has learnt?

Simple PyTorch Implementation of "Grokking"
https://github.com/teddykoker/grokking
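For anyone curious what such a grokking run actually trains on, here is a minimal sketch of the usual modular-addition dataset: every pair (a, b) labelled with (a + b) mod p, shuffled, and split into train and validation halves. The names p, train_fraction and seed are my own illustrative choices, not taken from the repository above.

```python
import random
from itertools import product

def modular_addition_dataset(p=97, train_fraction=0.5, seed=0):
    # All p*p pairs (a, b) with label (a + b) mod p.
    pairs = [(a, b, (a + b) % p) for a, b in product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(pairs)
    cut = int(len(pairs) * train_fraction)
    return pairs[:cut], pairs[cut:]

train, valid = modular_addition_dataset()
# 97 * 97 = 9409 examples in total, split roughly in half
```

The interesting grokking effect is that a small network memorizes the training half quickly, yet validation accuracy jumps to 100% only much later in training.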

Well, it's not correct to say that the learnt model is a black box.
The training data was somehow a black box, but the resulting
model is a white box: you can inspect it.

This gives rise to a totally new scientific profession of
full-time artificial intelligence model gazers. And it is
April Fools' Day all year long:

Language Models Use Trigonometry to Do Addition
https://arxiv.org/abs/2502.00873
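The paper's "clock" picture can be illustrated with a toy demo: represent a number a as a point on the unit circle at angle 2*pi*a/p; then multiplying the two complex representations adds the angles, which computes (a + b) mod p. The value of p and the helper names below are my own choices, purely for illustration, not the paper's actual probing setup.

```python
import math
import cmath

p = 10  # illustrative modulus

def encode(a):
    # Place a on the unit circle at angle 2*pi*a/p.
    return cmath.exp(2j * math.pi * a / p)

def decode(z):
    # Read the angle back off and round to the nearest residue.
    angle = cmath.phase(z) % (2 * math.pi)
    return round(angle * p / (2 * math.pi)) % p

# Multiplying rotations adds angles: 7 + 8 = 15 = 5 (mod 10).
assert decode(encode(7) * encode(8)) == (7 + 8) % p
```

Of course the paper's claim is that trained language models end up representing numbers in roughly this rotational fashion; the demo only shows why such a representation suffices for addition.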

Have Fun!

Bye

Mild Shock schrieb:
> Hi,
> 
> Because of the wide availability of Machine Learning
> via Python libraries, the whole world (at least China)
> has become a big Petri dish that is experimenting with
> new strategies to evolve brains on the computer.
> 
> A recent discovery seems to be Group Preference Optimization.
> This is when you make the chat bot detect and react
> differently to different groups of people. It seems to
> work on the "policy level". I don't understand it
> completely yet. But chat bots can then evolve and use
> multiple policies automatically:
> 
> Group Preference Optimization
> https://arxiv.org/abs/2310.11523
> 
> DeepSeekMath: Pushing the Limits
> https://arxiv.org/abs/2402.03300
> 
> Now it seems that this is also at the core of DeepSeekMath:
> what is possibly detected is not groups of people but
> mathematical topics, so that in the end it excels.
> 
> When unsupervised learning is used, groups or math
> topics might be found from the data through a form of
> abduction.
> 
> Bye
> 
> Mild Shock schrieb:
>> Hi,
>>
>> Wait till the USA figures out there is a second
>> competitor besides DeepSeek; it's called Yi-Lightning:
>>
>> Yi-Lightning Technical Report
>> https://arxiv.org/abs/2412.01253
>>
>> It was already discussed 2 months ago:
>>
>> Eric Schmidt DROPS BOMBSHELL: China DOMINATES AI!
>> https://www.youtube.com/watch?v=ddWuEUjo4u4
>>
>> Bye
>>
>> Mild Shock schrieb:
>>> Hi,
>>>
>>> How it started:
>>> https://www.instagram.com/p/Cump3losObg
>>>
>>> How it's going:
>>> https://9gag.com/gag/azx28eK
>>>
>>> Bye
>>
>