Path: ...!fu-berlin.de!uni-berlin.de!not-for-mail
From: ram@zedat.fu-berlin.de (Stefan Ram)
Newsgroups: comp.misc
Subject: Re: Slow Computing
Date: 30 May 2024 23:40:21 GMT
Organization: Stefan Ram
Lines: 40
Expires: 1 Feb 2025 11:59:58 GMT
Message-ID: <Slow-20240531003307@ram.dialup.fu-berlin.de>
References: <slrnv5hjro.15f.bencollver@svadhyaya.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Trace: news.uni-berlin.de Qq9xEisGhZqX1bTohs4mFgfgfj10igv4evIQ/XGKrd73Tm
Cancel-Lock: sha1:ZXj+phNRGW4ehv0V4pKBjKdQQEQ= sha256:kpREk12zsUBqZoxDHGDQXQP+cZLhf5eBzyr0oBoVMLE=
X-Copyright: (C) Copyright 2024 Stefan Ram. All rights reserved.
Distribution through any means other than regular usenet
channels is forbidden. It is forbidden to publish this
article in the Web, to change URIs of this article into links,
and to transfer the body without this notice, but quotations
of parts in other Usenet posts are allowed.
X-No-Archive: Yes
Archive: no
X-No-Archive-Readme: "X-No-Archive" is set, because this prevents some
services to mirror the article in the web. But the article may
be kept on a Usenet archive server with only NNTP access.
X-No-Html: yes
Content-Language: en-US
Bytes: 3124

Ben Collver <bencollver@tilde.pink> wrote or quoted:
>Slow Computing

First, there was "fast food" (a term coined in the 1950s).
In 1986, Carlo Petrini in Rome coined "slow food" as a counterterm.

What's vying for our attention, broadly speaking, is advertising.
And we encounter it just as much in printed magazines as on
computer screens. For me, it's not necessarily a question of speed.
In fact, I see more opportunities to filter out ads on the computer.

I've developed a custom program that curates the news reports
I consume by filtering out content deemed trivial, such as
gossip and frivolous matters.

Through the use of keywords, my program eliminates articles
containing specific phrases or terms associated with such
trivialities, much like it would filter out advertisements.
For instance, if a news item's description includes the string
"Prince Harry", that particular story is automatically
omitted from my view, courtesy of this program.

This Python program demonstrates the fundamental process.
import re
import subprocess
import urllib.request

fn = '''output-file-20240531003240-tmpdml.html'''
output = open( fn, "w", errors='ignore' )
uri = r'''http://example.com/article_list.html'''
request = urllib.request.Request( uri )
resource = urllib.request.urlopen( request )
# fall back to UTF-8 if the server declares no charset
cs = resource.headers.get_content_charset() or "UTF-8"
content = resource.read().decode( cs, errors="ignore" )
# assuming each article link is in an element of type "p"
for p in re.finditer( r'''<p[^\001]*?</p>''', content, flags=re.DOTALL ):
    if "Prince Harry" not in p.group( 0 ):
        print( p.group( 0 ), file=output )
output.close()
subprocess.Popen( fn, shell=True ) # on Windows, opens the output file in a browser!
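The same idea extends naturally from one string to a whole list of
blocked phrases. Here is a minimal, self-contained sketch of just the
filtering step; the function name and the keyword list are
illustrative, not taken from the actual program.

```python
# Illustrative list of phrases that mark an article as trivial.
BLOCKED_PHRASES = [ "Prince Harry", "royal wedding" ]

def keep_article( snippet, blocked=BLOCKED_PHRASES ):
    '''Return True if the snippet mentions none of the blocked phrases.'''
    return not any( phrase in snippet for phrase in blocked )

articles = [ '<p><a href="a.html">Markets rally on chip earnings</a></p>',
             '<p><a href="b.html">Prince Harry attends gala</a></p>' ]

# Only articles that pass the filter are printed.
for article in articles:
    if keep_article( article ):
        print( article )
```

Keeping the keyword test in one small function makes it easy to grow
the blocklist without touching the download-and-parse code.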
But yes, writing all these Python programs does slow me down . . .