Path: ...!weretis.net!feeder8.news.weretis.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: BGB <cr88192@gmail.com>
Newsgroups: comp.arch
Subject: Re: Stealing a Great Idea from the 6600
Date: Tue, 18 Jun 2024 21:31:28 -0500
Organization: A noiseless patient Spider
Lines: 165
Message-ID: <v4tfu3$1ostn$1@dont-email.me>
References: <lge02j554ucc6h81n5q2ej0ue2icnnp7i5@4ax.com>
 <v02eij$6d5b$1@dont-email.me>
 <152f8504112a37d8434c663e99cb36c5@www.novabbs.org>
 <v04tpb$pqus$1@dont-email.me> <v4f5de$2bfca$1@dont-email.me>
 <jwvzfrobxll.fsf-monnier+comp.arch@gnu.org> <v4f97o$2bu2l$1@dont-email.me>
 <613b9cb1a19b6439266f520e94e2046b@www.novabbs.org>
 <v4hsjk$2vk6n$1@dont-email.me>
 <6b5691e5e41d28d6cb48ff6257555cd4@www.novabbs.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 19 Jun 2024 04:31:31 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="1b230587210f6f877000eb5e9d42f72f";
	logging-data="1864631"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19uVRE+qKvAo7ZpbEHxzoKKldmGTG3tWHA="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:Wq2FILPkQ+bZv0LgxfVVNXNrhuY=
In-Reply-To: <6b5691e5e41d28d6cb48ff6257555cd4@www.novabbs.org>
Content-Language: en-US
Bytes: 7148

On 6/18/2024 4:09 PM, MitchAlsup1 wrote:
> BGB wrote:
> 
>> On 6/13/2024 3:40 PM, MitchAlsup1 wrote:
>>> BGB wrote:
>>>
>>>> On 6/13/2024 11:52 AM, Stefan Monnier wrote:
>>>>>> This is a late reply, but optimal static ordering for N-wide may be
>>>>>> very non-optimal for N-1 (or N-2, etc.).  As an example, assume a
>>>>>> perfectly
>>>>>
>>>>> AFAICT Terje was talking about scheduling for OoO CPUs, and wasn't
>>>>> talking about the possible worst case situations, but about how things
>>>>> usually turn out in practice.
>>>>>
>>>>> For statically-scheduled or in-order CPUs, it can be indeed more
>>>>> difficult to generate code that will run (almost) optimally on a
>>>>> variety
>>>>> of CPUs.
>>>>>
>>>
>>>> Yeah, you need to know the specifics of the pipeline for either
>>>> optimal machine code (in-order superscalar) or potentially to be
>>>> able to run at all (LIW / VLIW).
>>>
>>>
>>>> That said, on some OoO CPUs, such as the Piledriver-based core I
>>>> was running, it did seem as if scheduling for an in-order CPU
>>>> (such as putting other instructions between memory loads and the
>>>> instructions using their results, etc) did perform better
>>>> (seemingly implying there are limits to the OoO magic).
>>>
>>> When doing both Mc 88120 and K9 we found lots of sequences of code
>>> where the scheduling for more orderly or narrower implementations
>>> was impeding performance on the GBOoO core.
>>>
> 
>> In this case, scheduling as-if it were an in-order core was leading to
>> better performance than a more naive ordering (such as directly using
>> the results of previous instructions or memory loads, vs shuffling
>> other instructions in between them).
> 
>> Either way, seemed to be different behavior than seen on either the 
>> Ryzen or on Intel Core based CPUs (where, seemingly, the CPU does not 
>> care about the relative order).
> 
> Because it had no requirement of code scheduling, unlike 1st-generation
> RISCs, the cores were designed to put up good performance scores
> without any code scheduling.
> 

Yeah, but why was Bulldozer/Piledriver seemingly much more sensitive to 
instruction scheduling issues than either its predecessors (such as the 
Phenom II) or its successors (Ryzen)?...


Though, apparently "low IPC" was a noted issue with this processor 
family (it traded IPC for higher clock speeds, using a 20-stage 
pipeline, ...).

It is less obvious, though, how having a longer pipeline than either its 
predecessors or successors would affect instruction scheduling.

....


>>>> Though, OTOH, a lot of the sorts of optimization tricks I found for
>>>> the Piledriver were ineffective on the Ryzen, albeit mostly because
>>>> the more generic stuff caught up.
>>>
>>>> For example, I had an LZ compressor that was faster than LZ4 on that
>>>> CPU (it was based around doing everything in terms of aligned 32-bit
>>>> dwords, gaining speed at the cost of worse compression), but then
>>>> when going over to the Ryzen, LZ4 got faster...
>>>
>>> It is the continuous nature of having to reschedule code every
>>> generation that led to my wanting the compiler to just spit out
>>> correct code in the fewest number of instructions, which in turn led
>>> to a lot of the My 66000 architecture and microarchitectures.
>>>
> 
>> Mostly works for x86-64 as well.
> 
>> Though, I had noted that the optimization strategies that worked well
>> on MSVC + Piledriver continue to work effectively on my custom ISA /
>> core.
> 
> One of the things we found in Mc 88120 was that the compiler should
> NEVER be allowed to put unnecessary instructions in decode-execute
> slots that were unused--and that, almost invariably, the best code for
> the GBOoO machine was the one with the fewest instructions; if several
> sequences had equally few instructions, it basically did not matter.
> 
> For example::
> 
>      for( i = 0; i < max; i++ )
>           a[i] = b[i];
> 
> was invariably faster than::
> 
>      for( ap = &a[0], bp = &b[0], i = 0; i < max; i++ )
>           *ap++ = *bp++;
> 
> because the latter has 3 ADDs in the loop while the former has but 1.
> Because of this, I altered my programming style and almost never end up
> using ++ or -- anymore.



In this case, it would often be something more like:
   maxn4=max&(~3);
   for(i=0; i<maxn4; i+=4)
   {
     ap=a+i;    bp=b+i;
     t0=bp[0];  t1=bp[1];
     t2=bp[2];  t3=bp[3];
     ap[0]=t0;  ap[1]=t1;
     ap[2]=t2;  ap[3]=t3;
   }
   if(max!=maxn4)
   {
     for(; i < max; i++ )
       a[i] = b[i];
   }

If things are partially or fully unrolled, they often go faster. Using a 
large number of local variables seems to be effective (even in cases 
where the number of local variables exceeds the number of CPU registers).

Generally also using as few branches as possible.
Etc...

The seemingly significant change with the Ryzen was that branchy code 
got faster, and there seemed to be less of a penalty for using small 
and misaligned memory loads.


Say, on Piledriver it seemed like 8- and 16-bit loads were slower than 
32- or 64-bit loads, as were DWORD/QWORD loads that were not aligned to 
a proper 32- or 64-bit boundary.

Granted, a similar special case does currently exist for the BJX2 core 
as well: aligned DWORD and QWORD accesses may have latency reduced to 2 
cycles, vs 3 cycles for a normal memory access. So it doesn't seem 
entirely implausible that Piledriver might have been similar.

....