URLWATCH(1)							   User Commands						       URLWATCH(1)

NAME
       urlwatch - Watch web pages and arbitrary URLs for changes

SYNOPSIS
       urlwatch [options]

DESCRIPTION
       urlwatch watches a list of URLs for changes and prints out unified
       diffs of the changes. You can filter always-changing parts of
       websites by providing a "hooks.py" script.

OPTIONS
       --version
              Show the program's version number and exit.

       -h, --help
              Show the help message and exit.

       -v, --verbose
              Show debug/log output.

       --urls=FILE
              Read URLs from the specified file.

       --hooks=FILE
              Use the specified file as the hooks.py module.

       -e, --display-errors
              Include HTTP errors (404, etc.) in the output.
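
       For instance, pointing urlwatch at a custom URL list and enabling
       log output (the file name "myurls.txt" is a made-up example; the
       options are those documented above):

           urlwatch --urls=myurls.txt --verbose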

ADVANCED FEATURES
       urlwatch includes some advanced features that you have to activate
       by creating a hooks.py file that specifies for which URLs to use a
       specific feature. You can also use the hooks.py file to filter
       trivially-varying elements of a web page (see the sketch at the end
       of this section).

   ICALENDAR FILE PARSING
       This module allows you to parse .ics files that are in iCalendar
       format and provide a very simplified text-based format for the
       diffs. Use it like this in your hooks.py file:

           from urlwatch import ical2txt

           def filter(url, data):
               if url.endswith('.ics'):
                   return ical2txt.ical2text(data).encode('utf-8') + data
               # ...you can add more hooks here...

   HTML TO TEXT CONVERSION
       There are three methods of converting HTML to text in the current
       version of urlwatch: "lynx" (default), "html2text" and "re". The
       former two use command-line utilities of the same name to convert
       HTML to text, and the last one uses a simple regex-based tag
       stripping method (which needs no extra tools). Here is an example of
       using it in your hooks.py file:

           from urlwatch import html2txt

           def filter(url, data):
               if url.endswith('.html') or url.endswith('.htm'):
                   return html2txt.html2text(data, method='lynx')
               # ...you can add more hooks here...
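
       Filtering the trivially-varying elements mentioned above works the
       same way. The following hooks.py sketch is only an illustration: the
       URL and the regular expression are made-up examples, not part of
       urlwatch. It strips a hypothetical timestamp line before pages are
       compared:

           import re

           def filter(url, data):
               if url == 'http://www.example.com/':
                   # made-up pattern: drop a per-request timestamp so it
                   # does not show up as a change on every run
                   return re.sub('Generated on .*', '', data)
               # ...you can add more hooks here...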

FILES
       ~/.urlwatch/urls.txt
              A list of HTTP/FTP URLs to watch (one URL per line)

       ~/.urlwatch/lib/hooks.py
              A Python module that can be used to filter contents

       ~/.urlwatch/cache/
              The state of web pages is saved in this folder
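
       For illustration, a minimal urls.txt simply lists one URL per line
       (the addresses below are placeholders):

           http://www.example.com/
           http://www.example.org/news.html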

AUTHOR
       Thomas Perl <thp@thpinfo.com>

WEBSITE
       http://thpinfo.com/2008/urlwatch/

urlwatch 1.11							      July 2010 						       URLWATCH(1)


HTML::LinkExtor(3)					User Contributed Perl Documentation					HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
           require HTML::LinkExtor;
           $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
           sub cb {
               my($tag, %links) = @_;
               print "$tag @{[%links]}\n";
           }
           $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML
       document. The HTML::LinkExtor is a subclass of HTML::Parser. This
       means that the document should be given to the parser by calling the
       $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new
       $p = HTML::LinkExtor->new( $callback )
       $p = HTML::LinkExtor->new( $callback, $base )
           The constructor takes two optional arguments. The first is a
           reference to a callback routine. It will be called as links are
           found. If a callback is not provided, then links are just
           accumulated internally and can be retrieved by calling the
           $p->links() method.

           The $base argument is an optional base URL used to absolutize
           all URLs found. You need to have the URI module installed if you
           provide $base.

           The callback is called with the lowercase tag name as first
           argument, and then all link attributes as separate key/value
           pairs. All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document. The returned
           values will be anonymous arrays with the following elements:

               [$tag, $attr => $url1, $attr2 => $url2,...]

           The $p->links method will also truncate the internal link list.
           This means that if the method is called twice without any
           parsing between them, the second call will return an empty list.

           Also note that $p->links will always be empty if a callback
           routine was provided when the HTML::LinkExtor was created.

EXAMPLE
       This is an example showing how you can extract links from a document
       received using LWP:

           use LWP::UserAgent;
           use HTML::LinkExtor;
           use URI::URL;

           $url = "http://www.perl.org/";  # for instance
           $ua = LWP::UserAgent->new;

           # Set up a callback that collects image links
           my @imgs = ();
           sub callback {
               my($tag, %attr) = @_;
               return if $tag ne 'img';  # we only look closer at <img ...>
               push(@imgs, values %attr);
           }

           # Make the parser.  Unfortunately, we don't know the base yet
           # (it might be different from $url)
           $p = HTML::LinkExtor->new(\&callback);

           # Request document and parse it as it arrives
           $res = $ua->request(HTTP::Request->new(GET => $url),
                               sub {$p->parse($_[0])});

           # Expand all image URLs to absolute ones
           my $base = $res->base;
           @imgs = map { $_ = url($_, $base)->abs; } @imgs;

           # Print them out
           print join("\n", @imgs), "\n";

SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify
       it under the same terms as Perl itself.

perl v5.18.2							      2013-03-25						      HTML::LinkExtor(3)