
Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: BGB <cr88192@gmail.com>
Newsgroups: comp.arch
Subject: Re: Stealing a Great Idea from the 6600
Date: Fri, 14 Jun 2024 11:54:09 -0500
Organization: A noiseless patient Spider
Lines: 157
Message-ID: <v4hsjk$2vk6n$1@dont-email.me>
References: <lge02j554ucc6h81n5q2ej0ue2icnnp7i5@4ax.com>
 <v02eij$6d5b$1@dont-email.me>
 <152f8504112a37d8434c663e99cb36c5@www.novabbs.org>
 <v04tpb$pqus$1@dont-email.me> <v4f5de$2bfca$1@dont-email.me>
 <jwvzfrobxll.fsf-monnier+comp.arch@gnu.org> <v4f97o$2bu2l$1@dont-email.me>
 <613b9cb1a19b6439266f520e94e2046b@www.novabbs.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 14 Jun 2024 18:54:12 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="0c68b24bee81243abea4013a80cbc313";
	logging-data="3133655"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+X2OGWLhw59Afk76ZuwGShcGVMx1HHrTc="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:suCxShDJ1q2GF+zdXWKfGkF92nk=
In-Reply-To: <613b9cb1a19b6439266f520e94e2046b@www.novabbs.org>
Content-Language: en-US
Bytes: 7468

On 6/13/2024 3:40 PM, MitchAlsup1 wrote:
> BGB wrote:
> 
>> On 6/13/2024 11:52 AM, Stefan Monnier wrote:
>>>> This is a late reply, but optimal static ordering for N-wide may be
>>>> very non-optimal for N-1 (or N-2, etc.).  As an example, assume a
>>>> perfectly
>>>
>>> AFAICT Terje was talking about scheduling for OoO CPUs, and wasn't
>>> talking about the possible worst case situations, but about how things
>>> usually turn out in practice.
>>>
>>> For statically-scheduled or in-order CPUs, it can be indeed more
>>> difficult to generate code that will run (almost) optimally on a
>>> variety
>>> of CPUs.
>>>
> 
>> Yeah, you need to know the specifics of the pipeline for either optimal
>> machine code (in-order superscalar) or potentially to be able to run at
>> all (LIW / VLIW).
> 
> 
>> That said, on some OoO CPU's, such as when I was running a Piledriver 
>> based core, it did seem as if things were scheduled to assume an 
>> in-order CPU (such as putting other instructions between memory loads 
>> and the instructions using the results, etc), it did perform better 
>> (seemingly implying there are limits to the OoO magic).
> 
> When doing both Mc 88120 and K9 we found lots of sequences of code
> where the scheduling to more orderly or narrower implementations was
> impeding performance on the GBOoO core.
> 

In this case, scheduling as-if it were an in-order core was leading to 
better performance than a more naive ordering (such as directly using 
the results of previous instructions or memory loads, vs shuffling other 
instructions in between them).

Either way, seemed to be different behavior than seen on either the 
Ryzen or on Intel Core based CPUs (where, seemingly, the CPU does not 
care about the relative order).


>> Though, OTOH, a lot of the sorts of optimization tricks I found for the
>> Piledriver were ineffective on the Ryzen, albeit mostly because the more
>> generic stuff caught up.
> 
>> For example, I had an LZ compressor that was faster than LZ4 on that CPU
>> (it was based around doing everything in terms of aligned 32-bit dwords,
>> gaining speed at the cost of worse compression), but then when going 
>> over to the Ryzen, LZ4 got faster...
> 
> It is the continuous nature of having to reschedule code every
> generation that led to my wanting the compiler to just spit out correct
> code in the fewest number of instructions, which led to a lot of the My
> 66000 architecture and microarchitectures.
> 

Mostly works for x86-64 as well.

Though, I had noted that the optimization strategies that worked well on 
MSVC + Piledriver continue to work effectively on my custom ISA / core.


>> Like, seemingly all my efforts in "aggressively optimizing" some things
>> became moot simply by upgrading my PC.
> 
> I want to compile once and then use forever (in a dynamic library).
> 

Some of it was optimizing the design tradeoffs, rather than ASM code.

One example was an LZ compressor:
   Encoded stream was expressed entirely as 32-bit DWORDs;
   Decoded data existed as DWORDs;
   Decoding was basically copying DWORDs around.

For some types of data, it worked well enough (but, for ASCII text, it 
was dismal...).

It would pull off around 3 GB/s on the AMD FX I was running at the 
time, whereas LZ4 was generally only around 1 GB/s.

On the Ryzen, both jump to around 3.4 GB/s for similar data (for larger 
decompression, this also seems to be the limit). For most cases, LZ 
compressors seem to max out at a point near that of memcpy speed, though 
highly repetitive data can exceed memcpy speed and start to approach 
"memset()" speed (around 7 GB/s or so).


Had also had a video codec that operated in this area (gigapixel/second 
on the FX). It was also dethroned in a similar way.

Still turned out to not be all that usable on the BJX2 core though, 
mostly because it used a Rice coder for command and color endpoint data, 
and the Rice coder wasn't fast enough. Technically it was a more 
advanced form of color-cell encoder.
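For reference, a Rice coder in its basic form (this is the standard 
scheme, not BGB's particular implementation): each value is split by a 
parameter k into a unary-coded quotient and k raw remainder bits. The 
bit-at-a-time loop below illustrates why it can become the bottleneck on 
a slow core:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal MSB-first bit reader over a byte buffer. */
typedef struct { const uint8_t *buf; size_t pos; } BitReader;

static int get_bit(BitReader *br) {
    int b = (br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1;
    br->pos++;
    return b;
}

/* Rice decode: unary quotient q (q one-bits then a zero),
 * followed by k remainder bits; value = (q << k) | r. */
uint32_t rice_decode(BitReader *br, int k) {
    uint32_t q = 0, r = 0;
    while (get_bit(br)) q++;        /* unary part: count leading ones */
    for (int i = 0; i < k; i++)     /* then k raw remainder bits */
        r = (r << 1) | get_bit(br);
    return (q << k) | r;
}
```

Every decoded symbol costs several data-dependent single-bit steps, 
which is hard to speed up without table-driven or multi-bit tricks.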


I had also investigated CRAM and LZ compressed CRAM. While CRAM is fast 
to decode, it isn't so great for IO bandwidth, which also became a 
limiting factor.

Had OK results with a format that was kinda like:
   pppppp00  //4x4 pattern from fixed pattern table, reused endpoints
   pppppp10  aaaaaaaa bbbbbbbb //4x4 pattern, 2 color endpoints.
   ...
Where, the endpoint colors were pulled from a 256-color color-palette; 
and the codec would treat 4x4x1 blocks as special (rather than the 
default). There were also "flat run of color A" and "flat run of color 
B", along with delta/skip blocks (the handling of delta is to 
double-buffer the image and copy blocks from one to another; unlike CRAM 
which merely copies new blocks on top of the prior frame with the option 
to skip over blocks).

Well, along with runs of 2x2x1 blocks (4 bits per 4x4 pixel block, each 
selecting one of the color endpoints for each 2x2 sub-block); etc.


The resulting image was then also LZ compressed, generally with my 
byte-oriented RP2 compressor (LZ4 also works, but gives worse compression).

The pattern table was generally based on using an X and Y frequency and 
X/Y polarity, along similar lines to single-point IDCT coefficients; 
which would unpack (via a table lookup) into the corresponding 4x4x1 
patterns.


This was able to beat LZ+CRAM in terms of Q/bpp while having a similar 
computational cost in the decoder (and was fast enough to consistently 
pull off 320x200@16Hz video playback on my BJX2 core).

One minor issue at the time was that the encoder would attempt to 
dynamically optimize the color palette which could lead to ugliness when 
the scene changed and the color-palette no longer matched very well (the 
palette was updated only on I-Frames). A potentially better option might 
have been to use a global palette, or select between one of several 
generic palettes (allowing some optimization, while limiting the 
"badness" when it doesn't match up well).

....



>> ....