lwp-rget - Retrieve web documents recursively
lwp-rget [--verbose] [--auth=USER:PASS] [--depth=N] [--hier] [--iis]
         [--keepext=mime/type[,mime/type]] [--limit=N] [--nospace]
         [--prefix=URL] [--referer=URL] [--sleep=N] [--tolower] <URL>

lwp-rget --version
This program will retrieve a document and store it in a local file. It will follow any links found in the document and store these documents as well, patching links so that they refer to the local copies. This process continues until there are no more unvisited links or until the process is stopped by one or more of the limits that can be controlled by the command line arguments.
This program is useful if you want to make a local copy of a collection of documents or want to do web reading off-line.
All documents are stored as plain files in the current directory. The file names chosen are derived from the last component of URL paths.
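The retrieval loop can be pictured roughly as follows. This is a minimal sketch, not the actual lwp-rget implementation: the start URL is illustrative, and the link-patching step and the inline-image exception described under --depth are omitted for brevity.

    #!/usr/bin/perl
    # Sketch of a recursive retrieval loop in the style described above.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI;

    my $start     = URI->new("http://www.sn.no/foo/bar.html");  # example URL
    my $max_depth = 5;                       # same default as --depth
    my $ua        = LWP::UserAgent->new;

    my %seen;
    my @queue = ([$start, 0]);               # [URL, depth] pairs

    while (my $item = shift @queue) {
        my ($url, $depth) = @$item;
        next if $seen{$url}++;               # fetch every URL at most once

        my $res = $ua->get($url);
        next unless $res->is_success;

        # Store the document under the last component of the URL path.
        my ($file) = $url->path =~ m{([^/]+)\z};
        $file ||= "index.html";
        open my $fh, ">", $file or die "$file: $!";
        binmode $fh;
        print $fh $res->decoded_content;
        close $fh;

        # Follow links only in HTML documents and within the depth limit.
        next if $depth >= $max_depth;
        next unless ($res->content_type || "") eq "text/html";

        # HTML::LinkExtor absolutizes the links against the base URL.
        my $extor = HTML::LinkExtor->new(undef, $url);
        $extor->parse($res->decoded_content);
        for my $link ($extor->links) {
            my ($tag, %attr) = @$link;
            push @queue, [$_, $depth + 1] for values %attr;
        }
    }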
The options are:
--depth=N

Limit the recursion depth. Embedded images are always loaded, even if they fall outside the --depth limit. This means that one can use --depth=0 to fetch a single document together with all its inline graphics, as sketched below.

The default depth is 5.
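One way to implement this exception, assuming links are extracted with HTML::LinkExtor: inline images (<img> tags) are fetched regardless of depth, while ordinary links are followed only within the limit. The fetch and enqueue helpers, and the $base_url, $html, $depth, and $max_depth variables, are hypothetical stand-ins for the surrounding program state.

    # fetch() downloads a URL immediately; enqueue() schedules it for
    # recursive processing.  Both are hypothetical helpers.
    my $extor = HTML::LinkExtor->new(undef, $base_url);
    $extor->parse($html);
    for my $link ($extor->links) {
        my ($tag, %attr) = @$link;
        if ($tag eq "img") {
            fetch($_) for values %attr;                # always load inline graphics
        }
        elsif ($depth < $max_depth) {
            enqueue($_, $depth + 1) for values %attr;  # follow only within --depth
        }
    }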
"NONE"
can be used to suppress the Referer header in
any of subsequent requests. The Referer header will always be suppressed
in all normal http
requests if the referring page was transmitted over
https
as recommended in RFC 2616.
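A client could apply this rule along the following lines; $ua is assumed to be an LWP::UserAgent, $url the request URL, and $referer the URL of the referring page (or the string "NONE").

    use HTTP::Request;

    my $req = HTTP::Request->new(GET => $url);
    unless ($referer eq "NONE"
            or ($referer =~ m/^https:/i and $url =~ m/^http:/i)) {
        # Safe to reveal where the link came from.
        $req->header(Referer => $referer);
    }
    my $res = $ua->request($req);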
--prefix=URL

Limit the links to follow. Only URLs that start with the prefix string are followed.

The default prefix is the "directory" of the initial URL. For instance, if we start lwp-rget with the URL http://www.sn.no/foo/bar.html, then the prefix will be set to http://www.sn.no/foo/.

Use --prefix='' if you don't want the fetching to be limited by any prefix.
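The default prefix and the prefix check could be computed along the following lines; the within_prefix helper is illustrative, not part of lwp-rget's interface.

    use URI;

    # The "directory" of the start URL: everything up to and including
    # the last slash of the path.
    my $start = URI->new("http://www.sn.no/foo/bar.html");
    (my $prefix = $start->as_string) =~ s{[^/]*\z}{};
    # $prefix is now "http://www.sn.no/foo/"

    # A candidate URL is followed only if it begins with the prefix.
    sub within_prefix {
        my ($url, $prefix) = @_;
        return 1 if $prefix eq "";          # --prefix='' disables the check
        return index($url, $prefix) == 0;
    }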
Before the program exits, the name of the file where the initial URL is stored is printed on stdout. All used filenames are also printed on stderr as they are loaded. This printing can be suppressed with the --quiet option.
Gisle Aas <aas@sn.no>