GNU Wget is a freely available network utility to retrieve files from the World Wide Web, using HTTP (Hyper Text Transfer Protocol) and FTP (File Transfer Protocol), the two most widely used Internet protocols. It has many useful features that make downloading easier, including:
- Wget is capable of descending recursively through the structure of HTML documents and FTP directory trees, making a local copy of the directory hierarchy similar to the one on the remote server. This feature can be used to mirror archives and home pages, or traverse the web in search of data, like a WWW robot.
- Wget works exceedingly well on slow or unstable connections, retrying the document until it is fully retrieved or until a user-specified retry count is exceeded. It will try to resume the download from the point of interruption, using REST with FTP servers and Range with HTTP servers that support them.
- File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.
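The features above map directly onto Wget's command-line options. As a sketch (the URLs below are placeholders, not real servers):

```shell
# Mirror a site: --mirror bundles recursion (-r), timestamping (-N)
# and infinite recursion depth, so only new or changed files are
# fetched on later runs; --no-parent keeps Wget inside the start path.
wget --mirror --no-parent https://example.com/docs/

# Resume an interrupted download (-c) on a flaky link,
# retrying up to 10 times (-t 10).
wget -c -t 10 https://example.com/big-file.iso

# FTP wildcard matching: quote the URL so the shell
# does not expand the * itself.
wget 'ftp://ftp.example.com/pub/*.tar.gz'
```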
Invoking a command-line program like GNU Wget isn't always easy: there are a lot of parameters and switches to remember. I've created a frontend for the switches I use the most, and I've added one feature that isn't in GNU Wget itself: scheduling a download.
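For comparison, scheduling can also be approximated from a plain shell without this frontend, by handing the wget command to the standard at(1) job scheduler (the URL and time here are placeholders):

```shell
# Queue a resumable download to start at 02:30 local time.
# at(1) runs the command later, even after you log out.
echo 'wget -c https://example.com/big-file.iso' | at 02:30
```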
The original Wget for Unix was written by Hrvoje Niksic and is released under the GNU General Public License. It can be found at: ftp://prep.ai.mit.edu/pub/gnu/
The Win32 port was created by Tim Charron and is also released under the GNU General Public License. It can be found at: http://www.interlog.com/~tcharron/wgetwin.html