After writing my first (recursive?) spider, I am facing some problems that I have not been able to fix all day.
I did some research suggesting the errors come from a 301 redirect, but none of the solutions I tried got me out of this mess.
My console output:

My modified settings.py:
USER_AGENT = 'kartonage (Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0)'
DOWNLOAD_DELAY = 0.5
HTTPERROR_ALLOW_ALL = True

This USER_AGENT and HTTPERROR_ALLOW_ALL were solutions that worked for other people with 301 redirect errors.
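As an aside, if letting every error status through feels too broad, Scrapy also has a narrower setting, HTTPERROR_ALLOWED_CODES, which whitelists specific status codes only (a sketch of an alternative, not part of the original question):

```python
# settings.py -- narrower alternative to HTTPERROR_ALLOW_ALL:
# only responses with these status codes are also passed to spider callbacks
HTTPERROR_ALLOWED_CODES = [301]
```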
My modified items.py:
import scrapy

class KartonageItem(scrapy.Item):
    SKU = scrapy.Field()
    Title = scrapy.Field()
    Link = scrapy.Field()
    Price = scrapy.Field()
    Delivery_Status = scrapy.Field()
    Weight = scrapy.Field()
    QTY = scrapy.Field()
    Volume = scrapy.Field()

The code I used:
import scrapy
from ..items import KartonageItem

class KartonSpider(scrapy.Spider):
    name = "kartons12"
    allow_domains = ['karton.eu']
    start_urls = [
        'https://www.karton.eu/Faltkartons'
    ]
    custom_settings = {'FEED_EXPORT_FIELDS': ['SKU', 'Title', 'Link', 'Price', 'Delivery_Status', 'Weight', 'QTY', 'Volume']}

    def parse(self, response):
        url = response.xpath('//div[@class="cat-thumbnails"]')
        for a in url:
            link = a.xpath('a/@href')
            yield response.follow(url=link.get(), callback=self.parse_category_cartons)

    def parse_category_cartons(self, response):
        url2 = response.xpath('//div[@class="cat-thumbnails"]')
        for a in url2:
            link = a.xpath('a/@href')
            yield response.follow(url=link.get(), callback=self.parse_target_page)

    def parse_target_page(self, response):
        card = response.xpath('//div[@class="text-center articelbox"]')
        for a in card:
            items = KartonageItem()
            link = a.xpath('a/@href')
            items['SKU'] = a.xpath('.//div[@class="delivery-status"]/small/text()').get()
            items['Title'] = a.xpath('.//h5[@class="title"]/a/text()').get()
            items['Link'] = a.xpath('.//h5[@class="text-center artikelbox"]/a/@href').extract()
            items['Price'] = a.xpath('.//strong[@class="price-ger price text-nowrap"]/span/text()').get()
            items['Delivery_Status'] = a.xpath('.//div[@class="signal_image status-2"]/small/text()').get()
            yield response.follow(url=link.get(), callback=self.parse_item, meta={'items': items})

    def parse_item(self, response):
        table = response.xpath('//span[@class="product-info-inner"]')
        items = KartonageItem()
        items = response.meta['items']
        items['Weight'] = a.xpath('.//span[@class="staffelpreise-small"]/text()').get()
        items['Volume'] = a.xpath('.//td[@class="icon_contenct"][7]/text()').get()
        yield items

Posted on 2020-07-31 13:07:35
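Stripped of Scrapy, the hand-off between parse_target_page and parse_item in the question's spider is just a dict travelling through meta; a minimal plain-Python sketch of that pattern (the field values below are made up for illustration):

```python
# Sketch of how an item travels between two callbacks via meta (no Scrapy needed).
def parse_target_page(item):
    # The first callback fills some fields, then attaches the item to the
    # follow-up request -- the equivalent of meta={'items': items}.
    item['Title'] = 'Faltkarton 200x150x90'  # made-up value
    meta = {'items': item}
    return meta

def parse_item(meta):
    # The second callback picks the same dict back up and completes it,
    # the equivalent of items = response.meta['items'].
    item = meta['items']
    item['Weight'] = '100 g'  # made-up value
    return item

finished = parse_item(parse_target_page({}))
print(finished)  # fields from both callbacks end up in one item
```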
HTTP 301 is not an error; it is the response for "Moved Permanently". It automatically redirects you to the page's new address, and you can see in the execution log that you were redirected.
That by itself should not be a problem. Could it be something else? Is the spider showing any unexpected behavior?
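To confirm the redirects are harmless, you can inspect the redirect chain inside a callback: Scrapy's RedirectMiddleware records the URLs it redirected away from in request.meta['redirect_urls']. A small helper sketching that (the example URLs are illustrative):

```python
def describe_redirects(response):
    """Return (final_url, redirect_chain) for a response.

    RedirectMiddleware stores the redirected-from URLs in
    request.meta['redirect_urls']; the list is absent when no
    redirect happened, so we default to an empty list.
    """
    chain = response.request.meta.get('redirect_urls', [])
    return response.url, chain
```

Inside a spider you would call it as `final, chain = describe_redirects(response)` and log both, e.g. with `self.logger.info(...)`.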
https://stackoverflow.com/questions/63192380