From: Stefan Monnier <monnier@iro.umontreal.ca>
Newsgroups: comp.arch
Subject: Re: Reverse engineering of Intel branch predictors
Date: Mon, 11 Nov 2024 17:10:14 -0500
Organization: A noiseless patient Spider
Message-ID: <jwv1pzhsahr.fsf-monnier+comp.arch@gnu.org>
References: <vfbfn0$256vo$1@dont-email.me> <c517f562a19a0db2f3d945a1c56ee2e6@www.novabbs.org> <jwv1q002k2s.fsf-monnier+comp.arch@gnu.org> <a3d81b5c64ce058ad21f42a8081162cd@www.novabbs.org> <jwvcyj1sefl.fsf-monnier+comp.arch@gnu.org> <abef7481ff0dd5d832cef0b9d3ea087a@www.novabbs.org>

>> Hmm... but in order not to have bubbles, your prediction structure still
>> needs to give you a predicted target address (rather than a predicted
>> index number), right?

> Yes, but you use the predicted index number to find the predicted
> target IP.

Hmm... but that would require fetching that info from memory.  Can you do
that without introducing bubbles?  If you're lucky it's in the L1 Icache,
but that still takes a couple of cycles to get, doesn't it?
Or do you have a dedicated "jump table cache" as part of your jump
prediction tables?

[ Even if you do, it still means your prediction has to first predict an
  index and then look it up in the table, which increases its latency.
  I don't know what kind of latency is used in current state-of-the-art
  predictors, but IIUC any increase in latency can be quite costly.  ]

        Stefan