Subject: Re: Threads across programming languages
Newsgroups: comp.lang.c++,comp.lang.c
From: Ross Finlayson <ross.a.finlayson@gmail.com>
Date: Fri, 3 May 2024 12:33:30 -0700
Message-ID: <CuWdneEdN-eZoaj7nZ2dnZfqnPWdnZ2d@giganews.com>
In-Reply-To: <v11hvg$aajl$1@dont-email.me>
References: <GIL-20240429161553@ram.dialup.fu-berlin.de> <v0ogum$1rc5n$1@dont-email.me> <v0ovvl$1ur12$4@dont-email.me> <v0p06i$1uq6q$5@dont-email.me> <v0shti$2vrco$2@raubtier-asyl.eternal-september.org> <v0spsh$31ds4$3@dont-email.me> <v0stic$325kv$3@raubtier-asyl.eternal-september.org> <v0svtn$32o8h$1@dont-email.me> <v0t091$32qj6$1@raubtier-asyl.eternal-september.org> <v0u90h$3c1r5$4@dont-email.me> <v0v7rf$3lu04$1@dont-email.me> <v0v8u3$3m7rm$1@dont-email.me> <v10t0v$20cs$1@dont-email.me> <v116q4$4at1$1@dont-email.me> <v119bu$4pfa$1@dont-email.me> <v11hvg$aajl$1@dont-email.me>

On 05/02/2024 07:25 PM, Lawrence D'Oliveiro wrote:
> On Thu, 2 May 2024 16:58:54 -0700, Chris M. Thomasson wrote:
>
>> The CPU can become a bottleneck.
>
> Then that becomes an entirely different situation from what we’re
> discussing.
>
>> So, there is no way to take advantage of multiple threads on Python?
>
> There is, but the current scheme has limitations in CPU-intensive
> situations. They’re working on a fix, without turning it into a
> memory hog like Java.
>

Yeah, it can be that way.

"How are things?"

"Yesterday I implemented an entire web service on the cloud."

"Oh, really, how'd that go?"

"I opened Initializer and added a starter and copied how to pop the
queue and put the queue name in a file, then I added it to git and it
went into the CI/CD pipeline and now it's in Prod."

"Great."

"It only even needs 1 gigabyte of RAM."

Surely when it's like, "the only time this framework app uses 1
gigabyte of RAM is at boot time, when it totally templates itself into
a gigabyte of RAM", then the guy's like, "see, I'm totally not using
RAM". Yet it's like, "well, yeah, but the meter for the RAM you're not
using is on".

At least then for re-routines (and if it helps, it's quite an idée
fixe at this point), it's clear as described that they can be
implemented in most languages, with or without threads, with just a
minimum of threads, thread-locals, well-defined exception handling,
and the most usual sort of procedural call stack. Then, get this:
take plain, usual code, give it a ton of threads, make every
invocation one of these things, and automatically parallelize the
code according to the flow-graph dependencies declared in the
synchronous, blocking routine.

Now _that's_ ridiculous.

Though, in C++ with this sort of approach, the only sort of "unusable"
object is a future<result<T, E>>, as it were, or "the ubiquitous type"
sort of thing, then as to overload its access so it invokes "get()",
if there were a way to overload the "." and "->" operators and have
them most simply compile down to invoking "." and "->". Does
std::identity work this way?