I am scraping Yahoo Finance news with the following code.
import scrapy

class YfinNewsSpider(scrapy.Spider):
    name = 'yfin_news_spider'
    custom_settings = {'DOWNLOAD_DELAY': 0.5, 'COOKIES_ENABLED': True, 'COOKIES_DEBUG': True}

    def __init__(self, month, year, **kwargs):
        self.start_urls = ['https://finance.yahoo.com/sitemap/2020_03_all']
        self.allowed_domains = ['finance.yahoo.com']
        super().__init__(**kwargs)

    def parse(self, response):
        all_news_urls = response.xpath('//ul/li[@class="List(n) Py(3px) Lh(1.2)"]')
        for news in all_news_urls:
            news_url = news.xpath('.//a[@class="Td(n) Td(u):h C($c-fuji-grey-k)"]/@href').extract_first()
            yield scrapy.Request(news_url, callback=self.parse_news, dont_filter=True)

    def parse_news(self, response):
        news_url = str(response.url)
        title = response.xpath('//title/text()').extract_first()
        paragraphs = response.xpath('//div[@class="caas-body"]/p/text()').extract()
        date_time = response.xpath('//div[@class="caas-attr-time-style"]/time/@datetime').extract_first()
        yield {'title': title, 'url': news_url, 'body_text': paragraphs, 'timestamp': date_time}

However, when I run my spider, it gives me output like the following.
2020-11-28 20:42:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_05cc09ea-0bc0-439d-8b4c-2d6f20f52d6e> (referer: https://finance.yahoo.com/sitemap/2020_03_all)
2020-11-28 20:42:40 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET https://finance.yahoo.com/news/onegold-becomes-first-company-offer-110000241.html>
Cookie: B=cnmvgrdfs5a0r&b=3&s=o1; GUCS=ASXMbR9p
2020-11-28 20:42:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_05cc09ea-0bc0-439d-8b4c-2d6f20f52d6e>
{'title': 'Yahoo er nu en del af Verizon Media', 'url': 'https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_05cc09ea-0bc0-439d-8b4c-2d6f20f52d6e', 'body_text': [], 'timestamp': None}
2020-11-28 20:42:41 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_d6731ce6-78bc-4222-914f-24cf98f874b8> (referer: https://finance.yahoo.com/sitemap/2020_03_all)

This seems to show that when my spider follows https://finance.yahoo.com/news/onegold-becomes-first-company-offer-110000241.html, which it found in https://finance.yahoo.com/sitemap/2020_03_all, it tries to send cookies to that URL but gets redirected to the consent wall at https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_05cc09ea-0bc0-439d-8b4c-2d6f20f52d6e.
I opened this consent wall https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_05cc09ea-0bc0-439d-8b4c-2d6f20f52d6e in my browser and found a data-consent acceptance screen. When I click accept, it takes me to the page I actually want to scrape. The scraped results also match exactly what is on this consent screen.
I tried setting COOKIES_ENABLED to True, but that did not help. So, is there a way to get past this consent screen in Scrapy?
Thanks.
Posted on 2020-11-29 05:01:14
One approach you can try: open the consent page with your browser's network tab open, then click the consent button. There you can identify the request that is sent when you give consent, and then try to replicate that same request with Scrapy. That may well solve your problem. Another option is to use scrapy-selenium to click the button for you, after which Scrapy can take over.
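As a rough sketch of the first approach: the idea is to detect when a response is the consent interstitial rather than the article, and then re-submit the consent form from the spider. The consent host and the `agree` form field below are assumptions based on what the browser appears to send when you click "Accept"; confirm both in the network tab before relying on them.

```python
from urllib.parse import urlparse

# Assumed values -- verify them in the browser's network tab when you
# click "Accept"; Yahoo does not document this form anywhere.
CONSENT_HOST = "consent.yahoo.com"
CONSENT_FORMDATA = {"agree": "agree"}

def is_consent_page(url):
    """True when the response we received is the consent wall
    instead of the article page we originally requested."""
    return urlparse(url).netloc == CONSENT_HOST
```

Inside `parse_news` you could then check `is_consent_page(response.url)` and, instead of scraping, yield `scrapy.FormRequest.from_response(response, formdata=CONSENT_FORMDATA, callback=self.parse_news)`. `FormRequest.from_response` picks up the hidden fields of the first form on the page, so once consent is accepted, the session cookies should let later article requests through (again, assuming the field names match what the browser actually sends).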
https://stackoverflow.com/questions/65054103