I'm trying to build a spider that scrapes the products from one page and, once it's done, moves on to the next page of the catalog, then the next, and so on.
I already get all the products from a single page (I'm scraping Amazon) with this rule:
```python
rules = (
    Rule(
        LinkExtractor(
            allow=(),
            restrict_xpaths='//a[contains(@class, "a-link-normal") and contains(@class, "a-text-normal")]',
        ),
        callback='parse_item',
        follow=False,
    ),
)
```

This works fine. The problem is that the spider should then go to the "next page" and keep scraping.
What I'd like to add is a rule like this:
```python
rules = (
    # Next button
    Rule(LinkExtractor(allow=(), restrict_xpaths='(//li[@class="a-normal"]/a/@href)[2]')),
)
```

The problem is that this XPath returns (for example, from this page: https://www.amazon.com/s?k=mac+makeup&lo=grid&page=2&crid=2JQQNTWC87ZPV&qid=1559841911&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_2)

```
/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3
```

which is the URL of the next page, but without www.amazon.com.
I think my code isn't working because the www.amazon.com prefix is missing from that URL.
Do you know how to fix this? Maybe I'm not going about it the right way.
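A quick check with Python's standard `urllib.parse` confirms the symptom: the extracted href is site-relative, with no scheme or host:

```python
from urllib.parse import urlparse

# The href returned by the next-page XPath
href = "/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3"
parts = urlparse(href)

print(repr(parts.scheme))  # '' — no "https"
print(repr(parts.netloc))  # '' — no "www.amazon.com"
print(repr(parts.path))    # '/s'
```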
Posted on 2019-06-07 02:04:21
Try using urljoin:
```python
link = "/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3"
new_link = response.urljoin(link)
```

The spider below is one possible solution. The main idea is to use a parse_links function that extracts the links to the individual product pages and yields requests whose responses go to the parse function; it also yields the next-page response back to the same function, until every page has been crawled.
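Scrapy's `response.urljoin` is a thin wrapper that joins the given link against `response.url` using the standard library's `urllib.parse.urljoin`, so the behavior can be checked outside Scrapy (the two URLs below are the example pages from the question):

```python
from urllib.parse import urljoin

# The page being parsed (what response.url would be in the spider)
page_url = "https://www.amazon.com/s?k=mac+makeup&lo=grid&page=2&crid=2JQQNTWC87ZPV&qid=1559841911&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_2"
# The site-relative href extracted by the next-page XPath
href = "/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3"

full = urljoin(page_url, href)
print(full)
# https://www.amazon.com/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3
```

Because the href starts with `/`, urljoin keeps the scheme and host of the base URL and replaces the path and query.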
```python
import scrapy


class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    start_urls = ['https://www.amazon.com/s?k=mac+makeup&lo=grid&crid=2JQQNTWC87ZPV&qid=1559870748&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_1']

    wrapper_xpath = '//*[@id="search"]/div[1]/div[2]/div/span[3]/div[1]/div'  # Product wrapper
    link_xpath = './/div/div/div/div[2]/div[2]/div/div[1]/h2/a/@href'  # Link xpath
    np_xpath = '(//li[@class="a-normal"]/a/@href)[2]'  # Next page xpath

    def start_requests(self):
        # Route the start URLs to parse_links (scrapy.Spider defaults to self.parse)
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse_links)

    def parse_links(self, response):
        for li in response.xpath(self.wrapper_xpath):
            link = li.xpath(self.link_xpath).extract_first()
            link = response.urljoin(link)
            yield scrapy.Request(link, callback=self.parse)

        next_page = response.xpath(self.np_xpath).extract_first()
        if next_page is not None:
            next_page_link = response.urljoin(next_page)
            yield scrapy.Request(url=next_page_link, callback=self.parse_links)
        else:
            print("next_page is none")
```

https://stackoverflow.com/questions/56482563