NNTP-Posting-Date: Tue, 15 Oct 2024 06:43:09 +0000
Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most
 Floating-Point for Big Ints
Newsgroups: comp.os.linux.misc
References: <YPKdnaTfaLzAq5b6nZ2dnZfqnPidnZ2d@earthlink.com>
 <veg8cq$k36i$1@dont-email.me>
From: "186282@ud0s4.net" <186283@ud0s4.net>
Organization: wokiesux
Date: Tue, 15 Oct 2024 02:43:08 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.13.0
MIME-Version: 1.0
In-Reply-To: <veg8cq$k36i$1@dont-email.me>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Message-ID: <LpScnb7e54pgk5P6nZ2dnZfqn_qdnZ2d@earthlink.com>
Lines: 51

On 10/13/24 6:45 AM, Pancho wrote:
> On 10/13/24 03:54, 186282@ud0s4.net wrote:
> 
>> The new technique is basic: instead of using complex
>> floating-point multiplication (FPM), the method uses integer
>> addition. Applications use FPM to handle extremely large or
>> small numbers with extreme precision. It is also the most
>> energy-intensive part of AI number crunching.
>>
> 
> That isn't really true. Floats can handle big and small, but the
> reason people use them is simplicity.


   "Simple", usually. Energy/time-efficient ... not so much.


> The problem is that typical integer calculations are not closed:
> the result is not always an integer. Addition is fine, but the
> result of division typically is not. So if you use integers to
> model a problem, every time you do a division (or exp, log, sin,
> etc.) you need to decide how to force the result into an integer.


   The question is how EXACT the precision HAS to be for
   most "AI" uses. Might be safe to throw away a few
   digits at the bottom.
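
   Even plain division forces a policy choice -- a minimal C
   sketch of the decisions Pancho means (b > 0 assumed):

#include <stdio.h>

int main(void)
{
    int a = 7, b = 2;

    /* C itself truncates toward zero; any other rounding
     * you have to build yourself. */
    printf("truncate:      %d\n", a / b);             /* 3 */
    printf("round-half-up: %d\n", (a + b / 2) / b);   /* 4 */
    printf("floor:         %d\n",
           (a < 0) ? (a - b + 1) / b : a / b);        /* 3 */
    return 0;
}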


> Floats actually use integral values for exponent and mantissa, but
> they automatically make ballpark-reasonable decisions about how to
> force results back into that integral form, meaning operations are
> effectively closed (ignoring exceptions). So the programmer doesn't
> have to worry so much.
> 
> Floating-point ops are actually quite efficient, much less of a
> concern than something like a branch misprediction. A 20x speed-up
> (energy saving) sounds close to a theoretical maximum. I would be
> surprised if it could be achieved in anything but a few cases.

   Well ... the article insists they are NOT energy-efficient,
   esp. when performed en masse. I think their prelim tests
   suggested almost 95% savings (sometimes) -- which is the same
   ballpark as that 20x figure, since 1/20th the energy is a
   95% saving.

   Anyway, at least the IDEA is back out there again. We
   old guys, oft dealing with microcontrollers, knew the
   advantages of wider integers over even 'small' FP.

   Math coprocessors disguised the amount of processing
   required for FP ... but it was STILL there.
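
   To make that concrete -- a toy software multiply for IEEE-754
   singles (positive normals only, truncating, no overflow
   handling), the kind of thing you ground out by hand on an
   FPU-less micro. Note every step is integer work on the
   exponent/mantissa fields Pancho mentioned:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float soft_mul(float a, float b)
{
    uint32_t ia, ib;
    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);

    uint32_t ea = (ia >> 23) & 0xFF;           /* biased exponents   */
    uint32_t eb = (ib >> 23) & 0xFF;
    uint64_t ma = (ia & 0x7FFFFF) | 0x800000;  /* implicit leading 1 */
    uint64_t mb = (ib & 0x7FFFFF) | 0x800000;

    uint64_t m = (ma * mb) >> 23;              /* 24x24-bit product  */
    uint32_t e = ea + eb - 127;                /* exponents add      */
    if (m & (1ull << 24)) { m >>= 1; e++; }    /* renormalize        */

    uint32_t ic = (e << 23) | (uint32_t)(m & 0x7FFFFF);
    float c;
    memcpy(&c, &ic, sizeof c);
    return c;
}

int main(void)
{
    /* prints 15.000000 vs 15.000000 */
    printf("%f vs %f\n", soft_mul(3.0f, 5.0f), 3.0f * 5.0f);
    return 0;
}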