Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
Newsgroups: comp.os.linux.misc
From: 186282ud0s3 <186283@ud0s4.net>
Date: Fri, 18 Oct 2024 14:12:13 -0400
Message-ID: <UImcnTzl_oxjOY_6nZ2dnZfqn_GdnZ2d@earthlink.com>
In-Reply-To: <wwvmsj15q0m.fsf@LkoBDZeT.terraraq.uk>

On 10/18/24 12:34 PM, Richard Kettlewell wrote:
> "186282@ud0s4.net" <186283@ud0s4.net> writes:
>> On 10/16/24 6:56 AM, Richard Kettlewell wrote:
>>> "186282@ud0s4.net" <186283@ud0s4.net> writes:
>>>> BUT ... as said, even a 32-bit int can handle fairly
>>>> large vals. Mult little vals by 100 or 1000 and you can
>>>> throw away the need for decimal points - and the POWER
>>>> required to do such calx. Accuracy should be more than
>>>> adequate.
>>> You’re talking about fixed-point arithmetic, which is already used
>>> where appropriate (although the scale is a power of 2 so you can
>>> shift products down into the right place rather than dividing).
>>>
>>>> In any case, I'm happy SOMEONE finally realized this.
>>>>
>>>> TOOK a really LONG time though ......
>>>
>>> It’s obvious that you’ve not actually read or understood the paper
>>> that this thread is about.
>>
>> Maybe I understood it better than you ... and from
>> 4+ decades of experience.
>
> Perhaps you could explain why you keep talking about integer arithmetic
> when the paper is about floating point arithmetic, then.

Umm ... because the idea of swapping FP for ints in order
to save lots of power was introduced ?

This issue is getting to be *political* now - the
ultra-greenies freaking out about how much power the 'AI'
computing centers require.