A page contains links to a set of .zip files, and I want to download all of them. I know this can be done with wget or curl. How do I do it?
Answers:
The command is:
wget -r -np -l 1 -A zip http://example.com/download/
The options mean:
-r, --recursive specify recursive download.
-np, --no-parent don't ascend to the parent directory.
-l, --level=NUMBER maximum recursion depth (inf or 0 for infinite).
-A, --accept=LIST comma-separated list of accepted extensions.
Pay attention to -np. If the files are on different hosts, you will also need --span-hosts (-H).
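The question also mentions curl. Unlike wget, curl has no recursive mode, so a common pattern is to fetch the page, extract the .zip links, and download each one. A minimal sketch; the helper name and its assumption that links look like double-quoted href="....zip" attributes are mine, not from the answers above:

```shell
#!/bin/sh
# Hypothetical helper: read HTML on stdin, print each .zip href on its own line.
# Assumes double-quoted href attributes; a real HTML parser would be more robust.
extract_zip_links() {
  grep -oE 'href="[^"]+\.zip"' | sed 's/^href="//; s/"$//'
}

# Usage against a live page (uncomment to run; URL is the example from the question):
# curl -s "http://example.com/download/" | extract_zip_links \
#   | while read -r link; do curl -s -O "$link"; done
```

Note that relative links would still need to be resolved against the page URL before passing them to curl.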
The above solution did not work for me. The only one that worked for me was:
wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off [url of website]
The options mean:
-r recursive
-l1 maximum recursion depth (1=use only this directory)
-H span hosts (visit other hosts in the recursion)
-t1 Number of retries
-nd Don't make new directories, put downloaded files in this one
-N turn on timestamping
-A.mp3 download only mp3s
-erobots=off execute "robots=off" as if it were part of .wgetrc
Pay special attention to the -H switch. Missing it is what kept the first answer (which I had tried before looking on SO) from working for me.
The -nd flag is also handy if you don't want any extra directories created (i.e., all files will land in the root folder).
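As a side note, -e robots=off passes a directive as if it were in your wget startup file, so the same effect can be made permanent in ~/.wgetrc instead of repeating the flag on every invocation (a sketch; check man wget for your version's config syntax):

```
# ~/.wgetrc -- equivalent of passing -e robots=off each time
robots = off
```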