I'm trying to handle pagination when there is no "next" link. The HTML is below:
<div id="pagination" class="pagination">
<ul>
<li>
<a href="//www.demopage.com/category_product_seo_name" class="page-1 ">1</a>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=2" class="page-2 ">2</a>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=3" class="page-3 ">3</a>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=4" class="page-4 active">4</a>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=5" class="page-5">5</a>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=6" class="page-6 ">6</a>
</li>
<li>
<span class="page-... three-dots">...</span>
</li>
<li>
<a href="//www.demopage.com/category_product_seo_name?page=50" class="page-50 ">50</a>
</li>
</ul>
</div>

For this HTML I tried this XPath:
response.xpath('//div[@class="pagination"]/ul/li/a/@href').extract()
or
response.xpath('//div[@class="pagination"]/ul/li/a/@href/following-sibling::a[1]/@href').extract()

Is there a good way to parse this pagination? Thanks, everyone.
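Note that the second expression can't return anything: an @href attribute node has no following siblings. Below is an untested sketch of an XPath that instead steps from the <li> holding the active link to the next page's link, written as it might appear inside a spider's parse callback (names are illustrative):

def parse(self, response):
    # Untested sketch against the markup above: find the <li> whose link
    # carries the "active" class, then take the first following <li> that
    # actually contains an <a> (this skips the "..." placeholder item).
    next_href = response.xpath(
        '//div[@id="pagination"]//li[a[contains(@class, "active")]]'
        '/following-sibling::li[a][1]/a/@href'
    ).get()
    if next_href:
        # hrefs here are protocol-relative (//www.demopage.com/...), so let
        # response.follow resolve them against the current page URL
        yield response.follow(next_href, callback=self.parse)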
PS: I've also looked at this answer:
Posted on 2020-08-04 09:55:16
One solution is to scrape a fixed number of x pages, but if the total number of pages isn't fixed, that isn't always a good approach:
import scrapy

class MySpider(scrapy.Spider):
    name = 'demo'  # spiders need a name; 'demo' is a placeholder
    num_pages = 10

    def start_requests(self):
        requests = []
        # range() excludes the stop value, so use num_pages + 1
        # to actually request all num_pages pages
        for i in range(1, self.num_pages + 1):
            requests.append(scrapy.Request(
                # Scrapy needs an absolute URL including the scheme
                url='https://www.demopage.com/category_product_seo_name?page={0}'.format(i)
            ))
        return requests

    def parse(self, response):
        # parse pages here
        pass

Update
You can also keep track of a page counter and do something like the following. a[href*="?page=2"]::attr(href) targets an <a> element whose href attribute contains the given string (note this needs the substring operator *=; ~= only matches whole space-separated words). (I can't currently test whether this code works, but this approach should be feasible.)
import scrapy

class MySpider(scrapy.Spider):
    name = 'demo'  # placeholder name
    start_urls = ['https://demopage.com/search?p=1']
    page_count = 1

    def parse(self, response):
        self.page_count += 1
        # parse the response here, then look up the next page's link;
        # *= matches a substring of the href attribute
        next_url = response.css(
            '#pagination > ul > li > a[href*="?page={0}"]::attr(href)'.format(self.page_count)
        ).get()
        if next_url:
            # urljoin resolves relative and protocol-relative hrefs
            yield scrapy.Request(
                url=response.urljoin(next_url)
            )

Posted on 2020-08-04 10:56:37
You can simply grab all of the pagination links and follow them in a loop; each time you call the code below, the selector returns whatever pagination links are available on the current page. You don't need to worry about duplicate URLs, because Scrapy handles that for you. You could also use Scrapy's CrawlSpider rules.
response.css('.pagination ::attr(href)').getall()

https://stackoverflow.com/questions/63244175
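For illustration, a minimal sketch of a spider built around the selector above, assuming the markup from the question (the spider name and start URL are hypothetical, and this is untested):

import scrapy

class PaginationSpider(scrapy.Spider):
    # hypothetical name and start URL, for illustration only
    name = 'pagination_demo'
    start_urls = ['https://www.demopage.com/category_product_seo_name']

    def parse(self, response):
        # ... extract the items you need from this page here ...

        # follow every pagination link on the page; Scrapy's built-in
        # duplicate filter drops URLs that have already been requested
        for href in response.css('.pagination ::attr(href)').getall():
            yield response.follow(href, callback=self.parse)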