From: "Chris M. Thomasson"
Newsgroups: comp.lang.c++
Subject: Re: smrproxy v2
Date: Tue, 29 Oct 2024 23:40:12 -0700
Organization: A noiseless patient Spider
User-Agent: Mozilla Thunderbird

On 10/28/2024 10:02 PM, Chris M. Thomasson wrote:
> On 10/17/2024 5:10 AM, jseigh wrote:
>> I replaced the hazard pointer logic in smrproxy.  It's now wait-free
>> instead of mostly wait-free.  The reader lock logic, after loading
>> the address of the reader lock object into a register, is now 2
>> instructions: a load followed by a store.  The unlock is the same
>> as before, just a store.
>>
>> It's way faster now.
>>
>> It's on the feature/003 branch as a POC.  I'm working on porting
>> it to C++ and don't want to waste any more time on the C version.
>>
>> No idea if it's a new algorithm.  I suspect that since I use
>> the term epoch it will be claimed that it's EBR, epoch-based
>> reclamation, and that all EBR algorithms are equivalent.
>> Though I suppose you could argue it's QSBR if I point out what
>> the quiescent states are.
>
> For some reason you made me think of another very simple proxy
> technique using per-thread mutexes. It was an experiment a while back:
> ___________________
> per_thread
> {
>     std::mutex m_locks[2];
>
>     lock()
>     {
>         word ver = g_version;
>         m_locks[ver % 2].lock();
>     }
>
>     unlock(word ver)
>     {
>         m_locks[ver % 2].unlock();
>     }
> }
> ___________________
>
> The polling thread would increment the g_version counter, then lock and
> unlock each thread's previous-generation lock. IIRC, it worked way
> better than a read-write lock. Basically:
> ___________________
> word ver = g_version.inc(); // ver is the previous version
>
> for all threads as t
> {
>    t.m_locks[ver % 2].lock();
>    t.m_locks[ver % 2].unlock();
> }
> ___________________
>
> After that, it knew the previous generation was complete.
>
> It was just a way of using mutexes to get distributed proxy-like behavior.

There are fun things to do here. A reader thread can do an unlock/lock
cycle every so often, say every 1000 iterations. The fun part is that
this can beat a read-write lock.
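
Fwiw, here is a rough compilable C++ sketch of that two-mutex-per-thread
scheme, just to make the idea concrete. The names (proxy, reader_slot,
quiesce) are made up for the sketch, and the version re-check in
reader_lock() is not in the pseudocode above; it's one way to close the
window where the polling thread bumps the version between a reader's
version load and its lock(). Take it as an illustration, not the code
from the original experiment:
___________________
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

struct reader_slot
{
    std::mutex locks[2];   // one mutex per generation (even/odd)
};

struct proxy
{
    std::atomic<std::uint64_t> g_version{0};
    std::vector<reader_slot> slots;   // one slot per reader thread

    explicit proxy(std::size_t nthreads) : slots(nthreads) {}

    // Reader side: pin the current generation by holding its mutex.
    // Returns the pinned version so reader_unlock() picks the same slot.
    std::uint64_t reader_lock(std::size_t tid)
    {
        for (;;)
        {
            std::uint64_t ver = g_version.load(std::memory_order_acquire);
            slots[tid].locks[ver % 2].lock();

            // Re-check: if the polling thread bumped the version while we
            // were acquiring, retry on the new generation so quiesce()
            // cannot sweep past us. (Not in the pseudocode above.)
            if (g_version.load(std::memory_order_acquire) == ver) return ver;

            slots[tid].locks[ver % 2].unlock();
        }
    }

    void reader_unlock(std::size_t tid, std::uint64_t ver)
    {
        slots[tid].locks[ver % 2].unlock();
    }

    // Polling side: bump the version, then lock+unlock every thread's
    // previous-generation mutex. Once the loop finishes, no reader is
    // still inside the previous generation, so anything retired before
    // the bump can be reclaimed.
    void quiesce()
    {
        std::uint64_t prev =
            g_version.fetch_add(1, std::memory_order_acq_rel);

        for (reader_slot& s : slots)
        {
            s.locks[prev % 2].lock();
            s.locks[prev % 2].unlock();
        }
    }
};
___________________

Note that quiesce() only ever blocks on readers still inside the previous
generation; readers that enter after the bump pin the other mutex, so the
sweep always terminates.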