Getting a directory listing over HTTP
There is a directory being served over the net that I'm interested in monitoring. Its contents are various versions of software that I'm using, and I'd like to write a script I could run which checks what's there and downloads anything that is newer than what I've already got.
Is there a way, say with wget or something, to get a directory listing? I've tried using wget on the directory, which gives me HTML. To avoid having to parse the HTML document, is there a way of retrieving a simple listing like ls would give?
Solution 1:[1]
I just figured out a way to do it:
$ wget --spider -r --no-parent http://some.served.dir.ca/
It's quite verbose, so you need to pipe through grep a couple of times depending on what you're after, but the information is all there. It looks like it prints to stderr, so append 2>&1 to let grep at it. I grepped for "\.tar\.gz" to find all of the tarballs the site had to offer.
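For example, a pipeline along these lines (a sketch, reusing the placeholder URL from above) extracts the unique tarball URLs:
$ wget --spider -r --no-parent http://some.served.dir.ca/ 2>&1 \
    | grep -o 'http://[^ ]*\.tar\.gz' \
    | sort -u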
Note that wget writes temporary files in the working directory and doesn't clean up its temporary directories. If this is a problem, you can change to a temporary directory first:
$ (cd /tmp && wget --spider -r --no-parent http://some.served.dir.ca/)
Solution 2:[2]
What you are asking for is best served using FTP, not HTTP.
HTTP has no concept of directory listings; FTP does.
Most HTTP servers do not allow access to directory listings, and those that do are offering a feature of the server, not of the HTTP protocol. Those servers choose to generate and send an HTML page meant for human consumption, not machine consumption. You have no control over that, and would have no choice but to parse the HTML.
FTP is designed for machine consumption, more so with the introduction of the MLST and MLSD commands, which replace the ambiguous LIST command.
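If the server does happen to offer FTP, curl can request a name-only listing directly; --list-only makes it issue NLST instead of LIST, so the output is one name per line, much like ls. The host and path here are placeholders:
$ curl --list-only ftp://ftp.example.com/pub/software/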
Solution 3:[3]
The following is not recursive, but it worked for me:
$ curl -s https://www.kernel.org/pub/software/scm/git/
The output is HTML and is written to stdout. Unlike with wget, nothing is written to disk.
-s (--silent) is relevant when piping the output, especially within a script that must not be noisy.
Whenever possible, prefer https over ftp or http.
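If you do end up parsing the HTML by hand, a rough sketch like the following pulls out the link targets (fragile: it assumes double-quoted href attributes):
$ curl -s https://www.kernel.org/pub/software/scm/git/ \
    | grep -o 'href="[^"]*"' \
    | sed 's/^href="//; s/"$//'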
Solution 4:[4]
If it's being served over HTTP, then there's no way to get a simple directory listing. The listing you see when you browse there, which is the one wget is retrieving, is generated by the web server as an HTML page. All you can do is parse that page and extract the information.
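That said, wget's recursive mode does this parsing for you, and combined with timestamping it only fetches files newer than your local copies. A sketch of that approach, reusing the placeholder URL from Solution 1 (flags as documented in the wget manual):
$ wget -r -np -N -A '*.tar.gz' http://some.served.dir.ca/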
Solution 5:[5]
AFAIK, there is no way to get a directory listing like that, for security reasons. It is rather lucky that your target directory has the HTML listing, because it does allow you to parse it and discover new downloads.
Solution 6:[6]
You can use IDM (Internet Download Manager). It has a utility named "IDM Site Grabber": give it the http/https URL, and it will download all files and folders over the http/https protocol for you.
Solution 7:[7]
elinks does a halfway decent job of this. Just run elinks <URL> to interact with a directory tree through the terminal.
You can also dump the content to the terminal. In that case, you may want flags like --no-references and --no-numbering.
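For non-interactive use, a dump might look like this (flag spellings as given above; assuming your elinks build accepts the double-dash form, and using a placeholder URL):
$ elinks --dump --no-references --no-numbering http://some.served.dir.ca/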
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 |
Solution 2 | Remy Lebeau
Solution 3 |
Solution 4 | Optimal Cynic
Solution 5 | Samuel
Solution 6 | Bhaskara Arani
Solution 7 | manabear