Pandas: write all re.search results to csv from BeautifulSoup

Stack Overflow user
Asked on 2015-07-04 13:58:37
2 answers · viewed 283 times · 0 followers · score 2

I have the beginnings of a Python script that searches Google for values and scrapes any PDF links it can find on the first page of results.

I have two questions, listed below the code.

import pandas as pd
from bs4 import BeautifulSoup
import urllib2
import re

df = pd.DataFrame(["Shakespeare", "Beowulf"], columns=["Search"])    

print "Searching for PDFs ..."

hdr = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
    "Accept-Encoding": "none",
    "Accept-Language": "en-US,en;q=0.8",
    "Connection": "keep-alive"}

def crawl(search):
    google = "http://www.google.com/search?q="
    url = google + search + "+" + "PDF"
    req = urllib2.Request(url, headers=hdr)

    pdf_links = None
    placeholder = None #just a column placeholder

    try:
        page = urllib2.urlopen(req).read()
        soup = BeautifulSoup(page)
        cite = soup.find_all("cite", attrs={"class":"_Rm"})
        for link in cite:
            all_links = re.search(r".+", link.text).group().encode("utf-8")
            if all_links.endswith(".pdf"):
                pdf_links = re.search(r"(.+)pdf$", all_links).group()
            print pdf_links

    except urllib2.HTTPError, e:
        print e.fp.read()

    return pd.Series([pdf_links, placeholder])

df[["PDF links", "Placeholder"]] = df["Search"].apply(crawl)

df.to_csv("output.csv", index=False, sep=",")  # "output.csv" stands in for the undefined FileName; to_csv takes sep, not delimiter

The result of print pdf_links will be:

davidlucking.com/documents/Shakespeare-Complete%20Works.pdf
sparks.eserver.org/books/shakespeare-tempest.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
calhoun.k12.il.us/teachers/wdeffenbaugh/.../Shakespeare%20Sonnets.pdf
www.yorku.ca/inpar/Beowulf_Child.pdf
www.yorku.ca/inpar/Beowulf_Child.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
www.penguin.com/static/pdf/.../beowulf.pdf
www.neshaminy.org/cms/lib6/.../380/text.pdf
www.neshaminy.org/cms/lib6/.../380/text.pdf
sparks.eserver.org/books/beowulf.pdf

The csv output will look something like this:

Search         PDF Links
Shakespeare    calhoun.k12.il.us/teachers/wdeffenbaugh/.../Shakespeare%20Sonnets.pdf
Beowulf        sparks.eserver.org/books/beowulf.pdf

Questions:

  • Is there a way to write all of the results to the csv as rows, rather than just the bottom one, and, if possible, to include the value in Search ("Shakespeare" or "Beowulf") that corresponds to each row? (A sketch of why only the bottom one survives follows this list.)
  • How can I write out the full pdf links, without long links being automatically abbreviated to "..."?
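For context on the first question: pdf_links is a single variable that is reassigned on every pass through the loop, so by the time crawl returns, only the last match survives. The "..." in the second question is baked into the text of Google's <cite> elements, which carry a visually truncated display URL rather than the real target; the full URL lives in each result's <a href=...>. A minimal sketch of both fixes inside the existing try block (names are illustrative, not from the original script):

        pdf_links = []                              # a list, so earlier matches are kept
        for link in soup.find_all("a", href=True):
            href = link["href"].encode("utf-8")     # the real URL, never abbreviated
            if href.endswith(".pdf"):
                pdf_links.append(href)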

2 Answers

Stack Overflow user

Accepted answer

Posted on 2015-07-06 17:13:22

This will get you all the relevant pdf links using soup.find_all("a", href=True) and save them in a DataFrame and to a csv:

hdr = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
    "Accept-Encoding": "none",
    "Accept-Language": "en-US,en;q=0.8",
    "Connection": "keep-alive"}


def crawl(columns=None, *search):
    df = pd.DataFrame(columns= columns)
    for term in search:
        google = "http://www.google.com/search?q="
        url = google + term + "+" + "PDF"
        req = urllib2.Request(url, headers=hdr)
        try:
            page = urllib2.urlopen(req).read()
            soup = BeautifulSoup(page)
            pdfs = []
            links = soup.find_all("a",href=True)
            for link in links:
                lk = link["href"]
                if lk.endswith(".pdf"):
                    pdfs.append((term, lk))
            df2 = pd.DataFrame(pdfs, columns=columns)
            df = df.append(df2, ignore_index=True)
        except urllib2.HTTPError, e:
            print e.fp.read()
    return df


df = crawl(["Search", "PDF link"],"Shakespeare","Beowulf")
df.to_csv("out.csv",index=False)

out.csv:

Search,PDF link
Shakespeare,http://davidlucking.com/documents/Shakespeare-Complete%20Works.pdf
Shakespeare,http://www.w3.org/People/maxf/XSLideMaker/hamlet.pdf
Shakespeare,http://sparks.eserver.org/books/shakespeare-tempest.pdf
Shakespeare,https://phillipkay.files.wordpress.com/2011/07/william-shakespeare-plays.pdf
Shakespeare,http://www.artsvivants.ca/pdf/eth/activities/shakespeare_overview.pdf
Shakespeare,http://triggs.djvu.org/djvu-editions.com/SHAKESPEARE/SONNETS/Download.pdf
Beowulf,http://www.yorku.ca/inpar/Beowulf_Child.pdf
Beowulf,https://is.muni.cz/el/1441/podzim2013/AJ2RC_STAL/2._Beowulf.pdf
Beowulf,http://teacherweb.com/IL/Steinmetz/MottramM/Beowulf---Seamus-Heaney.pdf
Beowulf,http://www.penguin.com/static/pdf/teachersguides/beowulf.pdf
Beowulf,http://www.neshaminy.org/cms/lib6/PA01000466/Centricity/Domain/380/text.pdf
Beowulf,http://www.sparknotes.com/free-pdfs/uscellular/download/beowulf.pdf
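
A side note for readers on current Python: urllib2 exists only on Python 2, and DataFrame.append was removed in pandas 2.0, so a rough modern equivalent of the same idea (a sketch assuming the requests library is installed; Google's markup changes often, so the scraping part stays fragile) would be:

import pandas as pd
import requests
from bs4 import BeautifulSoup

def crawl(columns, *search):
    rows = []
    for term in search:
        page = requests.get("http://www.google.com/search",
                            params={"q": term + " PDF"},
                            headers={"User-Agent": "Mozilla/5.0"})
        soup = BeautifulSoup(page.text, "html.parser")
        for link in soup.find_all("a", href=True):
            if link["href"].endswith(".pdf"):
                rows.append((term, link["href"]))
    # build the frame once at the end instead of append-ing inside the loop
    return pd.DataFrame(rows, columns=columns)

df = crawl(["Search", "PDF link"], "Shakespeare", "Beowulf")
df.to_csv("out.csv", index=False)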
Score: 2

Stack Overflow user

Posted on 2021-09-22 09:47:19

To get the PDF links, you need these selectors:

for result in soup.select('.tF2Cxc'):

  # check if a PDF is present via the corresponding CSS class, or use try/except instead
  if result.select_one('.ZGwO7'):
    pdf_file = result.select_one('.yuRUbf a')['href']

CSS selectors reference: https://www.w3schools.com/cssref/css_selectors.asp. Have a look at the SelectorGadget Chrome extension (https://selectorgadget.com/), which lets you grab CSS selectors by clicking on the desired element in your browser.

To save them to a CSV, you're looking for something like this:

# store all links from a for loop
pdfs = []

# create a PDF Link column from the pdfs list
df = pd.DataFrame({'PDF Link': pdfs})

# save to csv and delete default pandas index column. Done!
df.to_csv('PDFs.csv', index=False)

Code and full example in the online IDE (which also shows how to save the files locally):

from bs4 import BeautifulSoup
import requests, lxml
import pandas as pd

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
  "q": "best lasagna recipe:pdf"
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

pdfs = []

for result in soup.select('.tF2Cxc'):

  # check if a PDF is present via the corresponding CSS class
  if result.select_one('.ZGwO7'):
    pdf_file = result.select_one('.yuRUbf a')['href']
    pdfs.append(pdf_file)

# create a PDF Link column from the pdfs list
df = pd.DataFrame({'PDF Link': pdfs})
df.to_csv('Bs4_PDFs.csv', index=False)

-----------
# from CSV
'''
PDF Link
http://www.bakersedge.com/PDF/Lasagna.pdf
http://greatgreens.ca/recipes/Recipe%20-%20Worlds%20Best%20Lasagna.pdf
https://liparifoods.com/wp-content/uploads/2015/10/lipari-foods-holiday-recipes.pdf
...
'''
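
To get the two-column layout the original question asked for (the Search term next to each PDF link), the same selectors can be reused in a loop over several queries. A sketch under the same class-name assumptions as the code above, reusing its headers dict:

rows = []
for term in ["Shakespeare", "Beowulf"]:
    html = requests.get('https://www.google.com/search',
                        headers=headers, params={"q": term + " pdf"})
    soup = BeautifulSoup(html.text, 'lxml')
    for result in soup.select('.tF2Cxc'):
        # .ZGwO7 marks results Google labels as PDFs, as in the answer above
        if result.select_one('.ZGwO7'):
            rows.append((term, result.select_one('.yuRUbf a')['href']))

df = pd.DataFrame(rows, columns=['Search', 'PDF Link'])
df.to_csv('PDFs_with_terms.csv', index=False)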

Alternatively, you can achieve the same thing with the Google Organic Results API from SerpApi. It's a paid API with a free plan.

The difference in your case is that you don't have to build everything from scratch, figure out why certain things don't work as expected, and then maintain the scraper over time; instead, you only need to iterate over structured JSON and pick out the data you want. It can also make the code more readable and quicker to understand.

Code to integrate:

from serpapi import GoogleSearch
import os
import pandas as pd

params = {
  "api_key": os.getenv("API_KEY"),
  "engine": "google",
  "q": "best lasagna recipe:pdf",
  "hl": "en"
}

search = GoogleSearch(params)
results = search.get_dict()

pdfs = []

# iterate over organic results and check if .pdf file type exists in link
for result in results['organic_results']:
  if '.pdf' in result['link']:
    pdf_file = result['link']
    pdfs.append(pdf_file)

df = pd.DataFrame({'PDF Link': pdfs})
df.to_csv('SerpApi_PDFs.csv', index=False)

-----------
# from CSV
'''
PDF Link
http://www.bakersedge.com/PDF/Lasagna.pdf
http://greatgreens.ca/recipes/Recipe%20-%20Worlds%20Best%20Lasagna.pdf
https://liparifoods.com/wp-content/uploads/2015/10/lipari-foods-holiday-recipes.pdf
...
'''

Disclaimer: I work for SerpApi.

Score: 0
Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/31221442
