From: BGB <cr88192@gmail.com>
Newsgroups: comp.arch
Subject: Re: Computer architects leaving Intel...
Date: Wed, 28 Aug 2024 15:25:47 -0500
Organization: A noiseless patient Spider
Message-ID: <vao14g$3jt75$1@dont-email.me>
References: <vajo7i$2s028$1@dont-email.me> <memo.20240827205925.19028i@jgd.cix.co.uk> <valki8$35fk2$1@dont-email.me> <2644ef96e12b369c5fce9231bfc8030d@www.novabbs.org> <vam5qo$3bb7o$1@dont-email.me> <vamhm0$3cs7q$1@dont-email.me>
User-Agent: Mozilla Thunderbird
In-Reply-To: <vamhm0$3cs7q$1@dont-email.me>

On 8/28/2024 1:55 AM, Robert Finch wrote:
> On 2024-08-27 11:33 p.m., BGB wrote:
>> On 8/27/2024 6:50 PM, MitchAlsup1 wrote:
>>> On Tue, 27 Aug 2024 22:39:02 +0000, BGB wrote:
>>>
>>>> On 8/27/2024 2:59 PM, John Dallman wrote:
>>>>> In article <vajo7i$2s028$1@dont-email.me>, tkoenig@netcologne.de
>>>>> (Thomas Koenig) wrote:
>>>>>
>>>>>> Just read that some architects are leaving Intel and doing their
>>>>>> own startup, apparently aiming to develop RISC-V cores of all
>>>>>> things.
>>>>>
>>>>> They're presumably intending to develop high-performance cores,
>>>>> since they have substantial experience in doing that for x86-64.
>>>>> The question is if demand for those will develop.
>>>>>
>>>>
>>>> Making RISC-V "not suck" in terms of performance will probably at
>>>> least be easier than making x86-64 "not suck".
>>>>
>>> Yet, these people have decades of experience building complex things
>>> that made x86 (also) not suck. They should have the "drawing power"
>>> to get more people with similar experience.
>>>
>>> The drawback is that they are competing with "everyone else in
>>> RISC-V-land", and starting several years late.
>>
>> Though, if anything, they probably have the experience to know how to
>> make things like the fabled "opcode fusion" work without burning too
>> many resources.
>>
>>
>>>>> Android is apparently waiting for a new RISC-V instruction set
>>>>> extension; you can run various Linuxes, but I have not heard
>>>>> about anyone wanting to do so on a large scale.
>>>>>
>>>>
>>>> My thoughts for "major missing features" are still:
>>>>   Needs register-indexed load;
>>>>   Needs an intermediate-size constant load (such as 17-bit
>>>>   sign-extended) in a 32-bit op.
>>>
>>> Full access to constants.
>>>
>>
>> That would be better, but is unlikely within the existing encoding
>> constraints.
>>
>> But, say, if one burned one of the remaining unused "OP Rd, Rs,
>> Imm12s" encodings as an Imm17s, well then...
>>
>> There were a few holes in this space. Like, for example, there are no
>> ANDW/ORW/XORW ops with Imm12s, so these spots could be reclaimed and
>> used for such a purpose, treating the Imm12 and Rs as a combined
>> 17-bit field.
>>
>>
>> But, arguably, LUI+ADD, or LUI+ADD+LUI+ADD+SLLI+ADD, may not matter
>> as much if one can afford the pattern-matching logic to turn 2 (or 6)
>> operations into a fused operation...
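Side note on the LUI+ADD route: the annoying bit is that ADDI
sign-extends its 12-bit immediate, so splitting a 32-bit constant is
not a plain shift and mask; the upper part has to absorb a borrow
whenever bit 11 of the low part is set. A rough C sketch of what a
compiler has to compute (made-up helper name, just for illustration):

#include <stdint.h>
#include <stdio.h>

/* Sketch: split a 32-bit constant into the LUI/ADDI pair a RISC-V
   compiler would emit.  ADDI sign-extends its 12-bit immediate, so
   the upper 20 bits must be bumped when bit 11 of the low part is
   set. */
static void split_const32(int32_t value, uint32_t *hi20, int32_t *lo12)
{
    int32_t lo = value & 0xFFF;
    if (lo >= 0x800)      /* low part would sign-extend negative...   */
        lo -= 0x1000;     /* ...so make it negative and bump the high */
    *lo12 = lo;
    *hi20 = ((uint32_t)(value - lo) >> 12) & 0xFFFFF;
}

int main(void)
{
    uint32_t hi;
    int32_t  lo;

    /* A 17-bit constant: too big for one ADDI, so it costs LUI+ADDI. */
    split_const32(0x123AB, &hi, &lo);
    printf("lui  rd, 0x%x\n", hi);
    printf("addi rd, rd, %d\n", lo);
    return 0;
}

Anything outside the signed 12-bit range, including the whole 12 to 17
bit chunk discussed here, pays for both instructions (or for the fusion
logic to glue them back together).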
>>>> Where, there is a sizeable chunk of constants between 12 and 17
>>>> bits, but not quite as many between 17 and 32 (and 32-64 bits is
>>>> comparably infrequent).
>>>
>>> Except in "math codes".
>>>
>>> But 64-bit memory reference displacements mean one does not have to
>>> even bother to have a strategy of what to do when you need a single
>>> FORTRAN common block to be 74GB in size in order to run 5-decade-old
>>> FEM codes.
>>>
>>
>> I don't assume that RISC-V would be getting a 64-bit FPU immediate
>> anytime soon.
>>
>>
>>>> I could also make a case for an instruction to load a Binary16
>>>> value and convert to Binary32 or Binary64 in an FPR, but this is
>>>> arguably a bit niche (but, would still beat out using a memory
>>>> load).
>>>
>>> Most of these are covered by something like::
>>>
>>>    CVTSD   Rd,#1    // 32-bit instruction
>>>
>>
>> My case, I have:
>>    FLDCH Imm16f, Rn  //also a 32-bit instruction
>> Which can cover a significant majority of typical FP constants.
>>
>>
>> In RISC-V, one needs to use a memory load, and store the value in
>> memory using the full 64 bits if one needs it as "double". This kinda
>> sucks.
>>
>> Though, arguably still not as bad as it was on SH-4 (where constant
>> loading in general was a PITA; and loading an FP constant typically
>> involved multiple memory loads, and an address generation).
>>
>> Eg:
>>    MOVA @(PC, Disp8), R3
>>    FMOV.S @R3+, FR5
>>    FMOV.S @R3+, FR4
>> AKA: Suck...
>>
>>
>>>> Big annoying thing with it, is that to have any hope of adoption,
>>>> one needs an "actually involved" party to add it. There doesn't
>>>> seem to be any sort of aggregated list of "known in-use" opcodes,
>>>> or any real mechanism for "informal" extensions.
>>>
>>> With the OpCode space already 98% filled there does not need to
>>> be such a list.
>>>
>>
>> One would still need it if multiple parties want to be able to define
>> an extension independently of each other and not step on the same
>> encodings.
>>
>>
>> Well, or it becomes like the file-extension space where there are
>> seemingly pretty much no unused 2 or 3 letter filename extensions.
>>
>> So, for some recent formats I went and used ".GTF" and ".UPI", which
>> while not unused, were not used by anything I had reason to care
>> about (medical research and banks).
>>
>>
>> Though, with file extensions and names, at least one can web-search
>> them (which is more than one can do to check whether or not a part
>> of the RISC-V opcode map is used by a 3rd-party extension).
>>
>> What provisions have been made don't scale much beyond "specific SoC
>> provides extensions within a block generally provisioned for
>> SoC-specific extensions".
>>
>>
>>>> The closest we have on the latter point is the "Composable
>>>> Extensions" extension by Jan Gray, which seems to be mostly that
>>>> part of the ISA's encoding space can be banked out based on a CSR
>>>> or similar.
>>>>
>>>>
>>>> Though, bigger immediate values and register-indexed loads do
>>>> arguably better belong in the base ISA encoding space.
>>>
>>> Agreed, but there is so much more.
>>>
>>>    FCMP    Rt,#14,R19     // 32-bit instruction
>>>    ENTER   R16,R0,#400    // 32-bit instruction
>>>    ..
>>>
>>
>> These are likely a bit further down the priority list.
>>
>> High-priority cases would likely be things that happen often enough
>> to significantly affect performance.
>>
>>
>> As I see it, array loads/stores, and integer constant values in the
>> 12-17 bit range, are common enough to justify this.
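Re the FLDCH / CVTSD style FP immediates above: assuming the 16-bit
immediate is an IEEE Binary16 pattern, the widening to Binary64 is just
field rebiasing plus a subnormal fixup, which is why it is cheap to do
at decode time. A C sketch of the conversion (hypothetical helper, not
taken from any of the ISAs mentioned):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical helper: expand an IEEE 754 binary16 bit pattern to
   binary64, i.e. roughly what a "load 16-bit FP immediate into a
   double register" instruction would compute. */
static double half_to_double(uint16_t h)
{
    uint64_t sign = (uint64_t)(h >> 15) << 63;
    int      exp  = (h >> 10) & 0x1F;      /* 5-bit exponent, bias 15 */
    uint64_t frac = h & 0x3FF;             /* 10-bit significand      */
    uint64_t bits;

    if (exp == 0x1F) {                     /* Inf / NaN               */
        bits = sign | (0x7FFULL << 52) | (frac << 42);
    } else if (exp == 0) {
        if (frac == 0) {                   /* +/- zero                */
            bits = sign;
        } else {                           /* subnormal: renormalize  */
            int e = -14;
            while (!(frac & 0x400)) { frac <<= 1; e--; }
            frac &= 0x3FF;
            bits = sign | ((uint64_t)(e + 1023) << 52) | (frac << 42);
        }
    } else {                               /* normal number           */
        bits = sign | ((uint64_t)(exp - 15 + 1023) << 52) | (frac << 42);
    }

    double d;
    memcpy(&d, &bits, sizeof d);           /* reinterpret, no rounding */
    return d;
}

int main(void)
{
    printf("%g %g %g\n",
           half_to_double(0x3C00),         /* 1.0      */
           half_to_double(0xC000),         /* -2.0     */
           half_to_double(0x3555));        /* ~0.33325 */
    return 0;
}

Sixteen bits obviously can't encode every double exactly, but values
like 1.0, 0.5, 2.0, and small integers all fit, which is the "majority
of typical FP constants" case; anything else still needs the
memory-load path.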
>> Prolog/Epilog happens once per function, and often may be skipped
>> for small leaf functions, so seems like a lower priority. More so,
>> if one lacks a good way to optimize it much beyond the sequence of
>> load/store ops which it would be replacing (and maybe not a way to
>> do it much faster than however much can be moved in a single clock
>> cycle with the available register ports).

========== REMAINDER OF ARTICLE TRUNCATED ==========