From: BGB <cr88192@gmail.com>
Newsgroups: comp.arch
Subject: Re: Making Lemonade (Floating-point format changes)
Date: Mon, 13 May 2024 14:58:53 -0500
Organization: A noiseless patient Spider
Message-ID: <v1tre1$3leqn$1@dont-email.me>
References: <abe04jhkngt2uun1e7ict8vmf1fq8p7rnm@4ax.com> <memo.20240512203459.16164W@jgd.cix.co.uk> <v1rab7$2vt3u$1@dont-email.me> <20240513151647.0000403f@yahoo.com>
In-Reply-To: <20240513151647.0000403f@yahoo.com>

On 5/13/2024 7:16 AM, Michael S wrote:
> On Sun, 12 May 2024 20:55:03 -0000 (UTC)
> Thomas Koenig <tkoenig@netcologne.de> wrote:
>
>> John Dallman <jgd@cix.co.uk> schrieb:
>>> In article <abe04jhkngt2uun1e7ict8vmf1fq8p7rnm@4ax.com>,
>>> quadibloc@servername.invalid (John Savard) wrote:
>>>
>>>> I'm not really sure such floating-point precision is useful, but I
>>>> do remember some people telling me that higher float precision is
>>>> indeed something to be desired.
>>
>>> I would be in favour of 128-bit being available.
>>
>> Me, too. Solving tricky linear systems, or obtaining derivatives
>> numerically (for example for Jacobians) eats up a _lot_ of precision
>> bits, and double precision can sometimes run into trouble.
>>
>> At least gcc and gfortran now support POWER's native 128-bit format
>> in hardware.
>> On other systems, software emulation is used, which
>> is of course much slower.
>>
>
> Much slower?
> I think, at least for matrix multiplication, my emulation on modern x86
> was within a factor of 1.5x of your measurements on POWER9, and that
> despite a rather poorly chosen ABI for the support routines. With a
> better ABI (pure integer, with no copies from/to XMM slowing things
> down, esp. on Zen3) I would expect it to be a wash.
> With a slightly higher-level API (qaxpy instead of individual mul/add),
> software can actually pull ahead.
>

IME, the cost of doing floating point in software isn't so bad, so long
as one has integer types larger than the mantissa (and an OK way to
perform a widening multiply and take the high-order results).

So, for Binary64 one needs 64-bit integers, and for Binary128, 128-bit
integers can help somewhat (though, AFAIK, x86-64 still doesn't have any
128-bit integer ops, which makes things a little awkward).

Emulation via traps is very slow; the typical approach on many ISAs is
instead to quietly turn the soft-float operations into runtime calls.

...

>>> I'm not sure my field
>>> has need for 256- or 512-bit, but that doesn't mean that nobody
>>> has.
>>
>> I've finally found the time to play around with Julia in the last
>> few weeks. One of the nice things it does is that you can just use
>> the same packages with different numerical types, for example for
>> ODE integration. Just set up the problem as you would normally
>> and supply a starting vector with a different precision.
>>
>> So, for doing some experiments on numerical data types, Julia
>> is quite nice.
>
> It's a pity that something like that is not available in GNU Octave.