For this example code:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

How can I restrict the crawl to links at most 3 levels deep? I don't mean the total number of links visited, but the depth of each link relative to the start URL.
Posted on 2021-06-06 03:06:27
You can use the DEPTH_LIMIT setting in your spider to limit the crawl depth:
class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]
    custom_settings = {
        'DEPTH_LIMIT': 3
    }
    ...

https://stackoverflow.com/questions/66403848
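Conceptually, DEPTH_LIMIT makes the crawler stop following links discovered beyond a given depth, where the start URL is depth 0 and each followed link adds 1. A minimal sketch of that idea without Scrapy (the function and helper names here are made up for illustration; `get_links` stands in for fetching a page and extracting its links):

```python
from collections import deque

def crawl_depth_limited(start_url, get_links, depth_limit=3):
    """Breadth-first traversal that stops expanding links beyond
    depth_limit, mirroring what Scrapy's DEPTH_LIMIT setting does:
    the start page is depth 0, and pages up to depth_limit are
    visited, but links found on a page at depth_limit are ignored."""
    seen = {start_url}
    queue = deque([(start_url, 0)])  # (url, depth)
    visited = []
    while queue:
        url, depth = queue.popleft()
        visited.append((url, depth))
        if depth >= depth_limit:
            continue  # at the limit: visit this page, but follow no links
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited
```

For example, on a chain of pages p1 → p2 → p3 → p4 → p5, a limit of 3 visits p1 through p4 and never requests p5.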