Subject: Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point for Big Ints
Newsgroups: comp.os.linux.misc
From: "186282@ud0s4.net" <186283@ud0s4.net>
Organization: wokiesux
Date: Wed, 16 Oct 2024 02:38:08 -0400
Message-ID: <VEGdnTMGMJLMwpL6nZ2dnZfqn_adnZ2d@earthlink.com>
References: <YPKdnaTfaLzAq5b6nZ2dnZfqnPidnZ2d@earthlink.com> <veg8cq$k36i$1@dont-email.me> <LpScnb7e54pgk5P6nZ2dnZfqn_qdnZ2d@earthlink.com> <velibm$1m3bg$4@dont-email.me>
In-Reply-To: <velibm$1m3bg$4@dont-email.me>

On 10/15/24 7:06 AM, The Natural Philosopher wrote:
> On 15/10/2024 07:43, 186282@ud0s4.net wrote:
>> The question is how EXACT the precision HAS to be for
>> most "AI" uses. Might be safe to throw away a few
>> decimal points at the bottom.
>
> My thesis is that *in some applications*, more low-quality
> calculations beat fewer high-quality ones anyway.
> I wasn't thinking of AI so much as modelling complex turbulent
> flow in aero- and hydrodynamics, or weather forecasting.

Well, for weather any decimal points are BS anyway :-)

However, for AI, fuzzy logic, and neural networks it has just
been standard practice to use floats to handle all values.
I've got books on all of those going back to the mid 80s, and
you JUST USED floats.

BUT ... as said, even a 32-bit int can hold fairly large
values. Multiply little values by 100 or 1000 and you can throw
away the need for decimal points - and the POWER required to do
such calculations. Accuracy should be more than adequate.
(A rough sketch of the idea follows below.)

In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
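
A minimal sketch in C of the scaled-integer ("fixed-point") idea
described above, assuming a scale factor of 1000 (three decimal
places). The names SCALE, to_fx, from_fx, and fx_mul are
illustrative only, not from any particular library:

/* Fixed-point sketch: store values as int32_t scaled by 1000,
 * so 0.125 becomes 125 and no float hardware is needed for the
 * arithmetic itself. */
#include <stdio.h>
#include <stdint.h>

#define SCALE 1000  /* 3 decimal places of precision (assumed) */

/* Convert a real value to scaled-integer form, with rounding. */
static int32_t to_fx(double v)
{
    return (int32_t)(v * SCALE + (v < 0 ? -0.5 : 0.5));
}

/* Convert back, for display only. */
static double from_fx(int32_t f)
{
    return (double)f / SCALE;
}

/* Multiply two scaled values; widen to 64 bits so the
 * intermediate product cannot overflow, then rescale. */
static int32_t fx_mul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * b) / SCALE);
}

int main(void)
{
    int32_t w = to_fx(0.125);   /* a "weight" -> 125        */
    int32_t x = to_fx(3.2);     /* an "input" -> 3200       */
    int32_t y = fx_mul(w, x);   /* 0.125 * 3.2 -> 400       */

    printf("%g\n", from_fx(y)); /* prints 0.4               */
    return 0;
}

Note that addition and subtraction of scaled values are just the
plain integer ops; only multiply and divide need the rescaling
step, and the 64-bit intermediate keeps a 32-bit result safe
from overflow for values in a modest range.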