Downloading files in Python by scraping sub-URLs

Stack Overflow user
Asked on 2021-03-13 08:47:52
1 answer · 53 views · 0 followers · score 1

I am trying to download documents (mostly in PDF format) from a large number of web links such as the following:

https://projects.worldbank.org/en/projects-operations/document-detail/P167897?type=projects

https://projects.worldbank.org/en/projects-operations/document-detail/P173997?type=projects

https://projects.worldbank.org/en/projects-operations/document-detail/P166309?type=projects

However, the PDF files cannot be accessed directly from these links; a user has to click through to a sub-URL to reach the PDFs. Is there a way to crawl those sub-URLs and download all the associated files from them? I am trying the code below, but so far without any success, particularly for the URLs listed here.

Please let me know if you need any further clarification; I would be happy to provide it. Thanks.

Code language: python
from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain, utils

class MySpider(Spider):
    name = 'download_pdf'
    allowed_domains = ["www.worldbank.org"]
    start_urls = [
        "https://projects.worldbank.org/en/projects-operations/document-detail/P167897?type=projects",
        "https://projects.worldbank.org/en/projects-operations/document-detail/P173997?type=projects",
        "https://projects.worldbank.org/en/projects-operations/document-detail/P166309?type=projects"
    ]  # Entry page

    def afterResponse(self, response, url, error=None, extra=None):
        if not extra:
            print("The version of library simplified_scrapy is too old, please update.")
            SimplifiedMain.setRunFlag(False)
            return
        try:
            path = './pdfs'
            # create folder start
            srcUrl = extra.get('srcUrl')
            if srcUrl:
                index = srcUrl.find('year/')
                year = ''
                if index > 0:
                    year = srcUrl[index + 5:]
                    index = year.find('?')
                    if index > 0:
                        path = path + year[:index]
                        utils.createDir(path)
            # create folder end

            path = path + url[url.rindex('/'):]
            index = path.find('?')
            if index > 0: path = path[:index]
            flag = utils.saveResponseAsFile(response, path, fileType="pdf")
            if flag:
                return None
            else:  # If it's not a pdf, leave it to the frame
                return Spider.afterResponse(self, response, url, error, extra)
        except Exception as err:
            print(err)

    def extract(self, url, html, models, modelNames):
        doc = SimplifiedDoc(html)
        lst = doc.selects('div.list >a').contains("documents/", attr="href")
        if not lst:
            lst = doc.selects('div.hidden-md hidden-lg >a')
        urls = []
        for a in lst:
            a["url"] = utils.absoluteUrl(url.url, a["href"])
            # Set root url start
            a["srcUrl"] = url.get('srcUrl')
            if not a['srcUrl']:
                a["srcUrl"] = url.url
            # Set root url end
            urls.append(a)

        return {"Urls": urls}

    # Download again by resetting the URL. Called when you want to download again.
    def resetUrl(self):
        Spider.clearUrl(self)
        Spider.resetUrlsTest(self)

SimplifiedMain.startThread(MySpider())  # Start download

1 Answer

Stack Overflow user

Accepted answer

Answered on 2021-03-13 17:59:08

There is an API endpoint that contains the entire response you see on the website, along with... the URLs of the document PDFs. :D

So you can query the API, grab the URLs, and finally fetch the documents.

Here's how:

Code language: python
import requests

pids = ["P167897", "P173997", "P166309"]

for pid in pids:
    end_point = f"https://search.worldbank.org/api/v2/wds?" \
                f"format=json&includepublicdocs=1&" \
                f"fl=docna,lang,docty,repnb,docdt,doc_authr,available_in&" \
                f"os=0&rows=20&proid={pid}&apilang=en"
    documents = requests.get(end_point).json()["documents"]
    for document_data in documents.values():
        try:
            pdf_url = document_data["pdfurl"]
            print(f"Fetching: {pdf_url}")
            with open(pdf_url.rsplit("/")[-1], "wb") as pdf:
                pdf.write(requests.get(pdf_url).content)
        except KeyError:
            # Skip entries that have no "pdfurl" key (non-document records)
            continue

Output (fully downloaded .pdf files):

Fetching: http://documents.worldbank.org/curated/en/106981614570591392/pdf/Official-Documents-Grant-Agreement-for-Additional-Financing-Grant-TF0B4694.pdf
Fetching: http://documents.worldbank.org/curated/en/331341614570579132/pdf/Official-Documents-First-Restatement-to-the-Disbursement-Letter-for-Grant-D6810-SL-and-for-Additional-Financing-Grant-TF0B4694.pdf
Fetching: http://documents.worldbank.org/curated/en/387211614570564353/pdf/Official-Documents-Amendment-to-the-Financing-Agreement-for-Grant-D6810-SL.pdf
Fetching: http://documents.worldbank.org/curated/en/799541612993594209/pdf/Sierra-Leone-AFRICA-WEST-P167897-Sierra-Leone-Free-Education-Project-Procurement-Plan.pdf
Fetching: http://documents.worldbank.org/curated/en/310641612199201329/pdf/Disclosable-Version-of-the-ISR-Sierra-Leone-Free-Education-Project-P167897-Sequence-No-02.pdf

and more ...
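One thing worth noting about the loop above: each request asks for at most `rows=20` documents, and the `os` parameter is the result offset, so projects with more documents than that need paging. Below is a minimal sketch of how the query could be built with `urlencode` instead of string concatenation, plus a paging generator. The `facets` key is an assumption about a non-document entry the API may include in the `documents` object; the helper names (`build_wds_url`, `iter_documents`) are mine, not from the original answer.

```python
from urllib.parse import urlencode

def build_wds_url(pid, offset=0, rows=20):
    # Same query as the f-string above, assembled with urlencode
    # so the parameters stay readable and properly escaped.
    params = {
        "format": "json", "includepublicdocs": 1,
        "fl": "docna,lang,docty,repnb,docdt,doc_authr,available_in",
        "os": offset, "rows": rows, "proid": pid, "apilang": "en",
    }
    return "https://search.worldbank.org/api/v2/wds?" + urlencode(params)

def iter_documents(fetch_page, rows=20):
    # fetch_page(offset) returns the "documents" dict for that offset;
    # yield document records page by page until a page is empty.
    offset = 0
    while True:
        page = fetch_page(offset)
        entries = {k: v for k, v in page.items() if k != "facets"}
        if not entries:
            break
        yield from entries.values()
        offset += rows
```

With `requests`, `fetch_page` would be something like `lambda off: requests.get(build_wds_url(pid, off)).json()["documents"]`; the generator then yields every document record across pages, and the download loop from the answer can consume it unchanged.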
Score: 1
The original content of this page is provided by Stack Overflow.
Source: https://stackoverflow.com/questions/66609022
