Path: ...!weretis.net!feeder9.news.weretis.net!2.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Pancho <Pancho.Jones@proton.me>
Newsgroups: comp.os.linux.misc
Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most
 Floating-Point for Big Ints
Date: Sun, 13 Oct 2024 14:23:25 +0100
Organization: A noiseless patient Spider
Lines: 28
Message-ID: <veghkd$mhii$1@dont-email.me>
References: <YPKdnaTfaLzAq5b6nZ2dnZfqnPidnZ2d@earthlink.com>
 <wwv5xpw8it1.fsf@LkoBDZeT.terraraq.uk> <vege7m$lobb$9@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 13 Oct 2024 15:23:26 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="96eaabfe285b27060ffbe483cf339a56";
	logging-data="738898"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+E/556Ws/eHxGSt0K6uku9levfU0YKwuU="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:bYvKdSg846OcT9B4tDyoGmm4W24=
In-Reply-To: <vege7m$lobb$9@dont-email.me>
Content-Language: en-GB
Bytes: 2524

On 10/13/24 13:25, The Natural Philosopher wrote:
> On 13/10/2024 10:15, Richard Kettlewell wrote:
>> "186282@ud0s4.net" <186283@ud0s4.net> writes:
>>> https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
>> [...]
>>>    The default use of floating-point really took off when
>>>    'neural networks' became popular in the 80s. Seemed the
>>>    ideal way to keep track of all the various weightings
>>>    and values.
>>>
>>>    But, floating-point operations use a huge amount of
>>>    CPU/NPU power.
>>>
>>>    Seems somebody finally realized that the 'extra resolution'
>>>    of floating-point was rarely necessary and you can just
>>>    use large integers instead. Integer math is FAST and uses
>>>    LITTLE power .....
>>
>> That’s situational. In this case, the paper isn’t about using large
>> integers, it’s about very low precision floating point representations.
>> They’ve just found a way to approximate floating point multiplication
>> without multiplying the fractional parts of the mantissas.
>>
> Last I heard they were going to use D-to-As feeding analog multipliers,
> and convert back to digital afterwards, for a speed/precision tradeoff.
> 

That sounds like the 1960s. I guess this idea does sound like a slide 
rule: a slide rule multiplies by adding logarithms, and this trick 
multiplies by adding the exponent and fraction fields, which together 
amount to a rough log2. A sketch of that below.
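
To make the slide-rule connection concrete, here is a minimal Python 
sketch of the general mantissa-addition trick (this is Mitchell's 
logarithmic multiplication; my reading of the idea, not code from the 
paper, whose algorithm reportedly adds a small correction term on top):

import math

def approx_mul(a: float, b: float) -> float:
    # Write |x| = (1 + f) * 2**e with f in [0, 1); then
    # (1+fa)*(1+fb) = 1 + fa + fb + fa*fb, and dropping the
    # fa*fb cross term removes the only real multiplication.
    if a == 0.0 or b == 0.0:
        return 0.0
    sign = math.copysign(1.0, a) * math.copysign(1.0, b)
    ma, ea = math.frexp(abs(a))        # |a| = ma * 2**ea, ma in [0.5, 1)
    mb, eb = math.frexp(abs(b))
    fa, fb = 2.0 * ma - 1.0, 2.0 * mb - 1.0  # |a| = (1+fa) * 2**(ea-1)
    f, e = fa + fb, ea + eb - 2
    if f >= 1.0:                       # fraction overflow: carry into exponent
        f, e = f - 1.0, e + 1
    return sign * (1.0 + f) * math.ldexp(1.0, e)

print(approx_mul(3.0, 5.0))   # 14.0 -- exact answer is 15.0

The multiplies left in the code are just Python bookkeeping to rebuild 
a float; in hardware they are bit-field operations, so what remains is 
integer-style addition on the exponent and fraction fields. The dropped 
fa*fb term costs a worst-case relative error of roughly 11%, which is 
apparently tolerable for neural-net weights.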