From: Lester Thorpe <lt@gnu.rocks>
Subject: Need Assistance -- Network Programming
Newsgroups: comp.os.linux.advocacy
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Lines: 16
Path: ...!weretis.net!feeder9.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!feeder.usenetexpress.com!tr2.iad1.usenetexpress.com!news.usenetexpress.com!not-for-mail
Date: Wed, 19 Jun 2024 13:47:44 +0000
Nntp-Posting-Date: Wed, 19 Jun 2024 13:47:44 +0000
X-Received-Bytes: 913
X-Complaints-To: abuse@usenetexpress.com
Organization: UsenetExpress - www.usenetexpress.com
Message-Id: <17da6bead1f52684$159717$3694546$802601b3@news.usenetexpress.com>
Bytes: 1310

Ordinarily, I don't give a flying fuck about network programming,
but necessity dictates otherwise.

I need to read through an HTML file, find all external HTTP(S) links,
and then determine whether those links are still viable, i.e.
whether the pages they point to still exist.

Perl is the language of choice.
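
For reference, the scraping half can be handled with HTML::LinkExtor
from CPAN. A sketch, with 'page.html' standing in for the real file:

use strict;
use warnings;
use HTML::LinkExtor;

my $file = 'page.html';    # placeholder for the real input file

my @links;
my $extor = HTML::LinkExtor->new(sub {
    my ($tag, %attr) = @_;
    return unless $tag eq 'a' and defined $attr{href};
    # Keep absolute http(s) URLs only, i.e. the external links.
    push @links, $attr{href} if $attr{href} =~ m{^https?://}i;
});
$extor->parse_file($file);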

Finding the links is not a problem, but how do I determine viability?
Do I look for a "404" error, or is there another way?
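
Don't grep the page for the string "404"; ask the server and check the
numeric status code. With LWP::UserAgent, is_success() is true for any
2xx response, while a dead link comes back 404/410 or fails to connect
at all. A sketch, assuming the @links array from the scraping step:

use LWP::UserAgent;

my $ua = LWP::UserAgent->new(timeout => 15);
$ua->agent('link-checker/0.1');    # some hosts refuse the default agent

for my $url (@links) {
    my $res = $ua->head($url);
    # Some servers mishandle HEAD; retry with GET before calling it dead.
    $res = $ua->get($url) unless $res->is_success;
    printf "%-4s %s (%s)\n",
        ($res->is_success ? 'OK' : 'DEAD'), $url, $res->status_line;
}

HEAD is cheaper than GET because the response body is skipped, and the
GET fallback catches servers that botch HEAD.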

I don't want no fucking Python code.

Pseudocode or just the relevant commands would be preferable.