Path: ...!feeder3.eternal-september.org!news.eternal-september.org!eternal-september.org!.POSTED!not-for-mail
From: Lawrence D'Oliveiro <ldo@nz.invalid>
Newsgroups: alt.comp.os.windows-11,comp.os.linux.advocacy
Subject: Re: The problem with not owning the software
Date: Tue, 31 Dec 2024 00:56:30 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 21
Message-ID: <vkvffu$1sirn$3@dont-email.me>
References: <Tn39P.50437$%aWb.4583@fx18.iad>
	<nuy9P.28629$aTp4.27488@fx09.iad>
	<vkbuqd$18m92$2@toylet.eternal-september.org>
	<9OCcnRW7grqkbPT6nZ2dnZfqnPUAAAAA@earthlink.com>
	<vkfva1$28j6k$1@toylet.eternal-september.org> <vkhqif$2hvap$1@dont-email.me>
	<vkmd9n$3l76a$4@toylet.eternal-september.org> <vko7up$6qks$2@dont-email.me>
	<ltb26aFov8dU1@mid.individual.net> <fUYbP.143220$bYV2.129957@fx17.iad>
	<ltbjk1FrgqvU3@mid.individual.net>
	<zKecnbsivueyNO36nZ2dnZfqn_HTpa6r@earthlink.com>
	<vks1gq.ufk.1@ID-201911.user.individual.net>
	<ltdrjlF7nkqU6@mid.individual.net> <vkso91$12a03$10@dont-email.me>
	<vkt8va$1fes1$3@dont-email.me> <vkun59$1mknq$3@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 31 Dec 2024 01:56:31 +0100 (CET)
Injection-Info: dont-email.me; posting-host="41fdacfd7af50925eaa1291753caf57e";
	logging-data="1985399"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/hc/1OwM+6lC6jSuTQfLUs"
User-Agent: Pan/0.161 (Chasiv Yar; )
Cancel-Lock: sha1:+NeQMz/rRBgsGMI0p3cwhkVkU+s=
Bytes: 2531

On Mon, 30 Dec 2024 13:01:13 -0500, -hh wrote:

> On 12/29/24 11:52 PM, Lawrence D'Oliveiro wrote:
>
>> How much data was involved, really? I suspect a more sensible app would
>> deal with the same data much more efficiently and easily.
> 
> There was a pretty modest chunk of data ... maybe just 1000 unique data 
> points?
> 
> What made it large & computationally intensive was that the dataset was 
> routed iteratively through a ~dozen different "Monte Carlo" statistical 
> exercises and filters to identify & glean signal from noise.

I’m sure something could be whipped up in Python with NumPy/Pandas/
Matplotlib etc. that would run through the same operations much more 
quickly and efficiently.
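
For instance, here’s a minimal sketch of one such pass, assuming the 
“Monte Carlo” exercises amount to bootstrap-style resampling (my guess; 
-hh didn’t say) and using made-up stand-in data for the ~1000 points:

import numpy as np

rng = np.random.default_rng(seed=42)
data = rng.normal(loc=0.0, scale=1.0, size=1000)  # stand-in for the ~1000 points

# One bootstrap-style Monte Carlo pass: resample with replacement many
# times and collect the statistic of interest (here, the mean).
def monte_carlo_pass(values, n_trials=10_000):
    samples = rng.choice(values, size=(n_trials, values.size), replace=True)
    return samples.mean(axis=1)

estimates = monte_carlo_pass(data)
low, high = np.percentile(estimates, [2.5, 97.5])  # 95% interval
print(f"mean = {data.mean():.4f}, 95% CI = [{low:.4f}, {high:.4f}]")

Chain a dozen of those and it’s still vectorized array arithmetic all 
the way down, which is exactly what NumPy is built for.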

Microsoft is even offering access to these Python toolkits to Excel users 
now -- at a cost. You know -- charging for something the users could get 
for free by bypassing Microsoft entirely, only they’re too dumb to 
realize it.