How can I extract all PDF links on a website?


10

This is a bit off topic, but I hope you guys will help me. I found a website full of articles I need, but those are mixed in with a lot of useless files (mostly jpgs).

I would like to know if there is a way to find (not download) all the PDFs on the server to make a list of links. Basically I simply want to filter out everything that is not a PDF, in order to get a better view of what to download and what not.


3
You might be able to use DownThemAll for that task. It is a Firefox extension that allows downloading files by filters and more. I have never used it myself, so I won't be able to post a full tutorial, but someone else might. If you know this extension better, please feel free to post a proper answer.
Glutanimate

Ah, I just saw that you merely want to filter out the links, not download them. I don't know if the extension I posted can do that, but it's worth a try!
Glutanimate

Answers:


15

Overview

Okay, here you go. This is a programmatic solution in the form of a script:

#!/bin/bash

# NAME:         pdflinkextractor
# AUTHOR:       Glutanimate (http://askubuntu.com/users/81372/), 2013
# LICENSE:      GNU GPL v2
# DEPENDENCIES: wget lynx
# DESCRIPTION:  extracts PDF links from websites and dumps them to stdout and to a textfile
#               only works for links pointing to files with the ".pdf" extension
#
# USAGE:        pdflinkextractor "www.website.com"

WEBSITE="$1"

echo "Getting link list..."

lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt

# OPTIONAL
#
# DOWNLOAD PDF FILES
#
#echo "Downloading..."    
#wget -P pdflinkextractor_files/ -i pdflinks.txt

Installation

You need to have wget and lynx installed:

sudo apt-get install wget lynx
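
If you want the script to fail early when one of its dependencies is missing, a guard along these lines can be added near the top of the script (a minimal sketch, not part of the original script):

# Optional guard: abort if wget or lynx is missing from the PATH
for dep in wget lynx; do
    command -v "$dep" >/dev/null 2>&1 || { echo "Error: $dep is not installed" >&2; exit 1; }
done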

Usage

The script will fetch a list of all .pdf files on the website and dump it to the command line output and to a textfile in the working directory. If you uncomment the "optional" wget command, the script will proceed to download all files to a new directory.

$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm
Getting link list...
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/JSPopupCalendar.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/ModifySubmit_Example.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/DynamicEmail_XFAForm_V2.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/AcquireMenuItemNames.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/BouncingButton.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/JavaScriptClock.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/Matrix2DOperations.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/RobotArm_3Ddemo2.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/SimpleFormCalculations.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/TheFlyv3_EN4Rdr.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/ImExportAttachSample.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/AcroForm_BasicToggle.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/AcroForm_ToggleButton_Sample.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/AcorXFA_BasicToggle.pdf
http://www.pdfscripting.com/public/FreeStuff/PDFSamples/ConditionalCalcScripts.pdf
Downloading...
--2013-12-24 13:31:25--  http://www.pdfscripting.com/public/FreeStuff/PDFSamples/JSPopupCalendar.pdf
Resolving www.pdfscripting.com (www.pdfscripting.com)... 74.200.211.194
Connecting to www.pdfscripting.com (www.pdfscripting.com)|74.200.211.194|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 176008 (172K) [application/pdf]
Saving to: `/Downloads/pdflinkextractor_files/JSPopupCalendar.pdf'

100%[===========================================================================================================================================================================>] 176.008      120K/s   in 1,4s    

2013-12-24 13:31:29 (120 KB/s) - `/Downloads/pdflinkextractor_files/JSPopupCalendar.pdf' saved [176008/176008]

...
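
If installing lynx is not an option, a rough equivalent can be pieced together from wget, grep, and sed alone. This is only a sketch: it fetches a single page and only catches absolute URLs written inside double-quoted href attributes, so it is less robust than the lynx pipeline above:

# Sketch: list .pdf links on a single page using only wget, grep, and sed.
# Only finds absolute URLs inside double-quoted href attributes.
wget -q -O- "$WEBSITE" \
    | grep -oE 'href="https?://[^"]*\.pdf"' \
    | sed 's/^href="//; s/"$//' \
    | sort -u | tee pdflinks.txt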

Why do you use "$(pwd)/pdflinks.txt" instead of pdflinks.txt?
jfs

@JFSebastian You're right, that's redundant. I've modified the script. Thanks!
Glutanimate

Works perfectly!
Chris Smith

6

A simple JavaScript snippet can solve this. (Note: I assume all PDF file links end with .pdf.)

Open your browser's JavaScript console, copy the following code, paste it into the console, and you're done!

//get all link elements
var link_elements = document.querySelectorAll(":link");

//extract out all uris.
var link_uris = [];
for (var i=0; i < link_elements.length; i++)
{
    //skip links we have already collected (indexOf checks the array's values)
    if (link_uris.indexOf(link_elements[i].href) !== -1)
        continue;

    link_uris.push (link_elements[i].href);
}

//filter out all links containing ".pdf" string
var link_pdfs = link_uris.filter (function (lu) { return lu.indexOf (".pdf") != -1});
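
//a stricter variant (see the comment below) would keep only links that
//actually end in ".pdf", rather than any link containing the string:
//var link_pdfs = link_uris.filter (function (lu) { return lu.endsWith (".pdf")});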

//print all pdf links
for (var i=0; i < link_pdfs.length; i++)
    console.log (link_pdfs[i]);

1
For me this returned way too much. The lu function needed to be lu.endsWith (".pdf") == 1, which got me only the PDF links instead of all the links containing "*.pdf*", which is what the posted code gave me. FWIW.
2013