I need to build a spider that crawls https://www.karton.eu/einwellig-ab-100-mm for product information, plus each product's weight, which is only scrapable after following the product link to its own page.
After running the code, I get the following error message:

I have already checked that the URL is not broken, since I can fetch it in my scrapy shell.
The code I am using:
import scrapy
from ..items import KartonageItem

class KartonSpider(scrapy.Spider):
    name = "kartons"
    allow_domains = ['karton.eu']
    start_urls = [
        'https://www.karton.eu/einwellig-ab-100-mm'
    ]
    custom_settings = {'FEED_EXPORT_FIELDS': ['SKU', 'Title', 'Link', 'Price', 'Delivery_Status', 'Weight']}

    def parse(self, response):
        card = response.xpath('//div[@class="text-center artikelbox"]')
        for a in card:
            items = KartonageItem()
            link = a.xpath('@href')
            items['SKU'] = a.xpath('.//div[@class="signal_image status-2"]/small/text()').get()
            items['Title'] = a.xpath('.//div[@class="title"]/a/text()').get()
            items['Link'] = link.get()
            items['Price'] = a.xpath('.//div[@class="price_wrapper"]/strong/span/text()').get()
            items['Delivery_Status'] = a.xpath('.//div[@class="signal_image status-2"]/small/text()').get()
            yield response.follow(url=link.get(), callback=self.parse, meta={'items': items})

    def parse_item(self, response):
        table = response.xpath('//span[@class="staffelpreise-small"]')
        items = KartonageItem()
        items = response.meta['items']
        items['Weight'] = response.xpath('//span[@class="staffelpreise-small"]/text()').get()
        yield items

What is causing this error?
Posted on 2020-07-30 01:52:36
The problem is that your link.get() returns a None value. It looks like the issue is in your XPath.
def parse(self, response):
    card = response.xpath('//div[@class="text-center artikelbox"]')
    for a in card:
        items = KartonageItem()
        link = a.xpath('@href')

The card variable selects several div tags, and there is no @href on the self axis of those divs (which is why it returns empty), but there is an a tag among their descendants. So I believe this should give you the expected result:
def parse(self, response):
    card = response.xpath('//div[@class="text-center artikelbox"]')
    for a in card:
        items = KartonageItem()
        link = a.xpath('a/@href')  # FIX HERE <<<<<
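The self-axis vs. descendant distinction can be demonstrated outside Scrapy. Below is a minimal sketch using only Python's standard library; `xml.etree.ElementTree` stands in for Scrapy's selector, and the HTML snippet is an invented, simplified stand-in for one "artikelbox" div from the listing page:

```python
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for one product box from the listing page.
html = '<div class="text-center artikelbox"><a href="/produkt-123">Produkt</a></div>'
div = ET.fromstring(html)

# Asking the div itself for 'href' (the '@href' case): the div element
# carries no href attribute, so the lookup yields None.
print(div.get('href'))            # None

# Looking at the descendant <a> tag (the 'a/@href' case): the href
# attribute lives on the anchor, so this returns the link.
print(div.find('a').get('href'))  # /produkt-123
```

This mirrors why `link.get()` in the original spider was `None`: the selector was pointed at the div's own attributes rather than at the anchor inside it.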