Path: news.eternal-september.org!eternal-september.org!.POSTED!not-for-mail
From: The Natural Philosopher <tnp@invalid.invalid>
Newsgroups: comp.os.linux.misc
Subject: Re: (Almost) Rock-n-Roll - "The Bunny Hop" (1953)
Date: Fri, 10 Jan 2025 08:08:27 +0000
Organization: A little, after lunch
Lines: 71
Message-ID: <vlqkhr$3sp5m$7@dont-email.me>
References: <KJScnT-S8K3a4uL6nZ2dnZfqnPudnZ2d@earthlink.com>
 <slrnvnvso6.fl4.trepidation@vps.jonz.net>
 <ztWcnU7j3cxz9x36nZ2dnZfqnPqdnZ2d@earthlink.com>
 <vlqh76$3sp5m$1@dont-email.me>
 <s8mdnU91-9aqTh36nZ2dnZfqnPudnZ2d@earthlink.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 10 Jan 2025 09:08:28 +0100 (CET)
Injection-Info: dont-email.me; posting-host="69a8bc4b6b1b544a55cc011a9662fe80";
 logging-data="4089014"; mail-complaints-to="abuse@eternal-september.org";
 posting-account="U2FsdGVkX1/CvugW0Zo4MV1IQwjMcddTKqeMExhTY3I="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:RFTYa+S7NMdjrA9rcOkFUNuswGA=
In-Reply-To: <s8mdnU91-9aqTh36nZ2dnZfqnPudnZ2d@earthlink.com>
Content-Language: en-GB

On 10/01/2025 07:58, 186282@ud0s4.net wrote:
> On 1/10/25 2:11 AM, The Natural Philosopher wrote:
>> On 10/01/2025 00:33, 186282@ud0s4.net wrote:
>>> I've been trying to find out if with modern 'flat address space'
>>> CPUs there's any speed advantage in setting functions and data
>>> blocks at specific addresses - what in the old days would have
>>> been 'page boundaries' or such. In short, do an i7 or ARM and/or
>>> popular mem-management chips have less work to do setting up
>>> reading/writing at some memory addresses? Maybe a critical app
>>> could run ten percent faster if, even 'wasting' memory, you put
>>> some stuff in kind of exact places. With older chips with banked
>>> memory, and even mag HDDs, the answer was Yes.
>>
>> Mm.
>>
>> I don't think so. About the only thing that is proximity sensitive
>> is caching. That is, you want to try and ensure that you are
>> operating out of cache, but the algorithms for what part of the
>> instructions are cached and what are not are beyond my ability to
>> identify, let alone code in...
>
> I did a lot of searching but never found a good answer. IF you can
> do stuff entirely within CPU cache then it WILL be faster. Alas, not
> MUCH stuff will be adaptable to that strategy - esp with today's
> bloatware.
>
RK is probably the best person to understand that, but in fact a
modern compiler will normally optimise for a specific processor
architecture.

It is quite instructive to see how 'real world' programs speed up on a
chipset that simply has more cache.

> We MAY be talking maker/sub-brand specifics ... intel i3/i5/i7/i9
> may all be different. Different gens different yet. ARMs too.
>
Of course. And indeed many architectures are optimised for e.g. C
programs. Imagine if your chipset detects a 'call subroutine' code
nugget and then proceeds to cache the new stack pointer's stack before
doing anything. All your 'local' variables are now in cache.

> Seems that CPUs and MMUs can do certain register ops faster/easier
> than others - fewer calx and switching settings. Therein my quest.
> If you want some code to run AS FAST AS POSSIBLE it's worth thinking
> about.

And compilers do. And chipsets do. So we don't have to.

I used to do *86 assembler, but what today's C compilers spit out is
better than any hand-crafted assembler.
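(Not from the thread, just a minimal sketch of the cache-line point
being argued over: pack the fields a hot loop touches into one struct
and align it to a cache-line boundary, so one fetch pulls them all in.
The 64-byte line size is an assumption - common on current x86-64 and
ARM parts, not guaranteed - and building with something like
'gcc -std=c11 -O2 -march=native' is just the stock way of letting the
compiler tune for the local CPU. Measure before believing any of it.)

  /* Sketch only: assumes a 64-byte cache line. */
  #include <stdalign.h>
  #include <stdio.h>

  /* Keep the hot fields together and start them on a cache-line
     boundary; alignas on the first member forces the whole struct's
     alignment. */
  struct hot_state {
      alignas(64) long counter;
      long limit;
      long step;
  };

  int main(void)
  {
      struct hot_state s = { .counter = 0, .limit = 100000000,
                             .step = 1 };

      /* Everything the loop touches now sits in one cache line. */
      while (s.counter < s.limit)
          s.counter += s.step;

      printf("done: %ld\n", s.counter);
      return 0;
  }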
My mathematical friend only uses assembler to access some weird
features of a specific intel architecture to do with vector
arithmetic. He writes C library functions in assembler to access
them, because the compilers don't acknowledge their existence - yet.

--
"I guess a rattlesnake ain't risponsible fer bein' a rattlesnake, but
ah puts mah heel on um jess the same if'n I catches him around mah
chillun".
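(Editorial illustration, appended after the post: it doesn't say which
instructions the friend's assembler wraps, so this only shows the usual
middle ground - compiler intrinsics, where C code names vector
instructions the compiler already knows about. SSE2 is used purely
because it is baseline on x86-64 and needs no extra flags; features the
compiler genuinely doesn't expose would still need inline asm or a
separate .s file, as described above.)

  /* Sketch only: add two arrays of doubles, two lanes at a time,
     using SSE2 intrinsics. n is assumed to be even for brevity. */
  #include <emmintrin.h>   /* SSE2, baseline on x86-64 */
  #include <stdio.h>

  static void add_pairs(const double *a, const double *b,
                        double *out, int n)
  {
      for (int i = 0; i < n; i += 2) {
          __m128d va = _mm_loadu_pd(a + i);   /* unaligned load */
          __m128d vb = _mm_loadu_pd(b + i);
          _mm_storeu_pd(out + i, _mm_add_pd(va, vb));
      }
  }

  int main(void)
  {
      double a[4] = { 1, 2, 3, 4 };
      double b[4] = { 10, 20, 30, 40 };
      double out[4];

      add_pairs(a, b, out, 4);
      for (int i = 0; i < 4; i++)
          printf("%g ", out[i]);
      putchar('\n');
      return 0;
  }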