I'm trying to scrape the body text of a project from its Kickstarter project page. I have the following code, which works for the first URL but not for the second and third. I'd like to know whether there's a simple fix for my code that doesn't require any other packages?
url = "https://www.kickstarter.com/projects/1365297844/kuhkubus-3d-escher-figures?ref=discovery_staff_picks_category_newest"
#url = "https://www.kickstarter.com/projects/clarissaredwine/swingby-a-voyager-gravity-puzzle?ref=discovery_staff_picks_category_newest"
#url = "https://www.kickstarter.com/projects/100389301/us-army-navy-marines-air-force-special-challenge-c?ref=category"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
body_text = soup.find(class_='rte__content')
all_text = body_text.find_all('p')
for i in all_text:
    print(i.get_text())

Posted 2020-06-12 09:28:50
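For context, the failure on the second and third URLs is most likely that `soup.find(class_='rte__content')` returns `None` when the project body isn't present in the server-rendered HTML, so the next line raises `AttributeError`. A minimal sketch of that failure mode (the HTML snippets here are hypothetical):

```python
from bs4 import BeautifulSoup

# Hypothetical pages: one ships the rte__content block in its HTML,
# the other does not (e.g. the body is rendered client-side).
html_with_body = '<div class="rte__content"><p>hello</p></div>'
html_without_body = '<div class="spotlight"><p>hello</p></div>'

def extract_paragraphs(html):
    soup = BeautifulSoup(html, 'html.parser')
    body = soup.find(class_='rte__content')
    if body is None:
        # Without this guard, body.find_all('p') raises AttributeError.
        return None
    return [p.get_text() for p in body.find_all('p')]

print(extract_paragraphs(html_with_body))     # ['hello']
print(extract_paragraphs(html_without_body))  # None
```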
This site uses a GraphQL API at:

POST https://www.kickstarter.com/graph

We can use it to get the site's data for any project, instead of scraping the HTML of any URL. We'll also extract two fields: story and risks.

This GraphQL API requires a CSRF token, which is embedded in a meta tag on the page (any page will do). We also need to use a requests session to store the cookies, otherwise the call will fail.

Here's a simple usage example of this endpoint in Python:
import requests
from bs4 import BeautifulSoup
s = requests.Session()
r = s.get("https://www.kickstarter.com")
soup = BeautifulSoup(r.text, 'html.parser')
xcsrf = soup.find("meta", {"name": "csrf-token"})["content"]
query = """
query GetEndedToLive($slug: String!) {
  project(slug: $slug) {
    id
    deadlineAt
    showCtaToLiveProjects
    state
    description
    url
    __typename
  }
}"""

r = s.post("https://www.kickstarter.com/graph",
           headers={
               "x-csrf-token": xcsrf
           },
           json={
               "query": query,
               "variables": {
                   "slug": "kuhkubus-3d-escher-figures"
               }
           })

print(r.json())

The second link shows the fields of interest in the query. The full query is as follows:
query Campaign($slug: String!) {
  project(slug: $slug) {
    id
    isSharingProjectBudget
    risks
    story(assetWidth: 680)
    currency
    spreadsheet {
      displayMode
      public
      url
      data {
        name
        value
        phase
        rowNum
        __typename
      }
      dataLastUpdatedAt
      __typename
    }
    environmentalCommitments {
      id
      commitmentCategory
      description
      __typename
    }
    __typename
  }
}

We're only interested in story and risks, so we'll have:
query Campaign($slug: String!) {
  project(slug: $slug) {
    risks
    story(assetWidth: 680)
  }
}

Note that the project slug we need is part of the URL; for example, clarissaredwine/swingby-a-voyager-gravity-puzzle is the slug of your second URL.
Below is a sample implementation that extracts the slugs, loops over them, and calls the GraphQL endpoint for each one, printing the story and risks of each project:
import requests
from bs4 import BeautifulSoup
import re

urls = [
    "https://www.kickstarter.com/projects/1365297844/kuhkubus-3d-escher-figures?ref=discovery_staff_picks_category_newest",
    "https://www.kickstarter.com/projects/clarissaredwine/swingby-a-voyager-gravity-puzzle?ref=discovery_staff_picks_category_newest",
    "https://www.kickstarter.com/projects/100389301/us-army-navy-marines-air-force-special-challenge-c?ref=category"
]

# extract the slugs from the urls
slugs = []
for url in urls:
    slugs.append(re.search(r'/projects/(.*)\?', url).group(1))

s = requests.Session()
r = s.get("https://www.kickstarter.com")
soup = BeautifulSoup(r.text, 'html.parser')
xcsrf = soup.find("meta", {"name": "csrf-token"})["content"]

query = """
query Campaign($slug: String!) {
  project(slug: $slug) {
    risks
    story(assetWidth: 680)
  }
}"""

for slug in slugs:
    print(f"--------{slug}------")
    r = s.post("https://www.kickstarter.com/graph",
               headers={
                   "x-csrf-token": xcsrf
               },
               json={
                   "operationName": "Campaign",
                   "variables": {
                       "slug": slug
                   },
                   "query": query
               })
    result = r.json()
    print("-------STORY--------")
    story_html = result["data"]["project"]["story"]
    soup = BeautifulSoup(story_html, 'html.parser')
    for i in soup.find_all('p'):
        print(i.get_text())
    print("-------RISKS--------")
    print(result["data"]["project"]["risks"])

I guess that if you're scraping other content on this site, you can use the GraphQL endpoint for many other things. However, note that introspection has been disabled on this endpoint, so you can only look up existing schema usage on the site (you can't retrieve the whole schema).
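For instance, the other fields shown in the first query above (state, description, url) can be requested the same way. A sketch of a payload builder (the helper name build_graph_payload is mine, not part of the site's API):

```python
# Hypothetical helper: builds the JSON body for a POST to
# https://www.kickstarter.com/graph, matching the shape used above.
def build_graph_payload(slug, fields, operation="Project"):
    field_block = "\n    ".join(fields)
    query = (
        f"query {operation}($slug: String!) {{\n"
        "  project(slug: $slug) {\n"
        f"    {field_block}\n"
        "  }\n"
        "}"
    )
    return {"operationName": operation,
            "query": query,
            "variables": {"slug": slug}}

# Fields taken from the first query shown in this answer.
payload = build_graph_payload("kuhkubus-3d-escher-figures",
                              ["state", "description", "url"])
print(payload["query"])
```

Posting this payload with the session and x-csrf-token header from the code above would return those fields for the project.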
https://stackoverflow.com/questions/62335537