
Scrapy CSV export puts all the scraped data in a single cell

Asked by a Stack Overflow user on 2017-10-19 16:53:01
1 answer · 452 views · 0 votes

I'm currently building my first Scrapy project. At the moment I'm trying to extract data from an HTML table. This is my crawl spider so far:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from digikey.items import DigikeyItem
from scrapy.selector import Selector

class DigikeySpider(CrawlSpider):
    name = 'digikey'
    allowed_domains = ['digikey.com']
    start_urls = [
        'https://www.digikey.com/products/en/capacitors/aluminum-electrolytic-capacitors/58/page/3?stock=1',
        'https://www.digikey.com/products/en/capacitors/aluminum-electrolytic-capacitors/58/page/4?stock=1',
    ]

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(LinkExtractor(allow=('/products/en/capacitors/aluminum-electrolytic-capacitors/58/page/3?stock=1', ), deny=('subsection\.php', ))),
    )

    def parse_item(self, response):
        item = DigikeyItem()
        item['partnumber'] = response.xpath('//td[@class="tr-mfgPartNumber"]/a/span[@itemprop="name"]/text()').extract()
    item['manufacturer'] = response.xpath('//td[6]/span/a/span/text()').extract()
        item['description'] = response.xpath('//td[@class="tr-description"]/text()').extract()
        item['quanity'] = response.xpath('//td[@class="tr-qtyAvailable ptable-param"]//text()').extract()
        item['price'] = response.xpath('//td[@class="tr-unitPrice ptable-param"]/text()').extract()
        item['minimumquanity'] = response.xpath('//td[@class="tr-minQty ptable-param"]/text()').extract()
        yield item

    parse_start_url = parse_item

It scrapes the table at www.digikey.com/products/en/capacitors/aluminum-electrolytic-capacitors/58/page/4?stock=1 and then exports everything to a digikey.csv file, but all of the data ends up in a single cell. (The question included a screenshot of the CSV file with all the scraped data in one cell.)
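The cause is visible in `parse_item`: each XPath query runs against the whole response, so `.extract()` returns every matching cell in the document as one list, and each item field receives an entire column. A standalone sketch of the difference (plain stdlib, hypothetical two-row table standing in for Digi-Key's markup; no Scrapy required):

```python
import xml.etree.ElementTree as ET

# A hypothetical two-row product table (stands in for Digi-Key's markup).
html = """<table>
  <tr><td class="tr-mfgPartNumber">P5555-ND</td><td class="tr-description">Cap A</td></tr>
  <tr><td class="tr-mfgPartNumber">P6666-ND</td><td class="tr-description">Cap B</td></tr>
</table>"""
root = ET.fromstring(html)

# Column-at-a-time, as in the question's parse_item: the query matches every
# row's cell, so one field ends up holding the whole column.
parts = [td.text for td in root.findall('.//td[@class="tr-mfgPartNumber"]')]
print(parts)  # ['P5555-ND', 'P6666-ND']

# Row-at-a-time: iterate over <tr> elements and query inside each one,
# producing one record per row.
rows = [{td.get('class'): td.text for td in tr.findall('td')}
        for tr in root.findall('.//tr')]
print(rows[0])  # {'tr-mfgPartNumber': 'P5555-ND', 'tr-description': 'Cap A'}
```

One item per row, with one scalar value per field, is what the CSV exporter needs to fill one spreadsheet row per product.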

settings.py

BOT_NAME = 'digikey'

SPIDER_MODULES = ['digikey.spiders']
NEWSPIDER_MODULE = 'digikey.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'digikey ("Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36")'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
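The settings shown include no feed-export configuration, so the CSV was presumably produced with `scrapy crawl digikey -o digikey.csv`. The equivalent settings.py entries (an assumption; these are Scrapy's era-appropriate feed-export options, and the field list below comes from the question's items.py) would be:

```python
# Assumed feed-export settings, equivalent to `scrapy crawl digikey -o digikey.csv`.
# The CSV exporter writes one row per yielded item and one column per field --
# which is why a single item holding whole columns collapses into one row/cell.
FEED_FORMAT = 'csv'
FEED_URI = 'digikey.csv'

# Optional: fix the column order instead of relying on Scrapy's default ordering.
FEED_EXPORT_FIELDS = ['partnumber', 'manufacturer', 'description',
                      'quanity', 'minimumquanity', 'price']
```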

I would like the information to be scraped one row at a time, with the corresponding information kept together with its part number.

items.py

import scrapy


class DigikeyItem(scrapy.Item):
    partnumber = scrapy.Field()
    manufacturer = scrapy.Field()
    description = scrapy.Field()
    quanity= scrapy.Field()
    minimumquanity = scrapy.Field()
    price = scrapy.Field()
    pass

Any help is greatly appreciated!


1 Answer

Answered by a Stack Overflow user (accepted) on 2017-10-23 08:19:49

The problem is that you are loading every whole column into each field of a single item. I believe what you want is something like:

for row in response.css('table#productTable tbody tr'):
    item = DigikeyItem()
    item['partnumber'] = (row.css('.tr-mfgPartNumber [itemprop="name"]::text').extract_first() or '').strip()
    item['manufacturer'] =  (row.css('[itemprop="manufacture"] [itemprop="name"]::text').extract_first() or '').strip()
    item['description'] = (row.css('.tr-description::text').extract_first() or '').strip()
    item['quanity'] = (row.css('.tr-qtyAvailable::text').extract_first() or '').strip()
    item['price'] = (row.css('.tr-unitPrice::text').extract_first() or '').strip()
    item['minimumquanity'] = (row.css('.tr-minQty::text').extract_first() or '').strip()
    yield item

I've changed some of the selectors to make them shorter. By the way, please avoid the manual `extract_first` + `strip` repetition I used here (it was just for testing purposes); consider using Item Loaders instead, which make it easier to take the first match and strip/format the desired output.
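The Item Loader suggestion refers to Scrapy's `ItemLoader` with per-field processors such as `TakeFirst` and `MapCompose(str.strip)`, which apply the same "first match, then strip" step declaratively instead of repeating it on every line. What those processors do can be modeled with plain functions (stdlib-only sketch; the field values are hypothetical):

```python
def take_first(values):
    """Return the first non-empty value, like Scrapy's TakeFirst processor."""
    for v in values:
        if v is not None and v != '':
            return v
    return None

def load_item(raw):
    """Apply take_first + strip to every field of one scraped row."""
    return {field: (take_first(values) or '').strip()
            for field, values in raw.items()}

# Hypothetical extracted values for one table row (each field is a list,
# exactly as .extract() / .getall() would return it).
row = {
    'partnumber': ['  P5555-ND\n'],
    'manufacturer': ['Panasonic '],
    'description': [],
}
print(load_item(row))
# {'partnumber': 'P5555-ND', 'manufacturer': 'Panasonic', 'description': ''}
```

With a real `ItemLoader`, the same behavior is declared once per field (or as a default output processor), so the `parse_item` body shrinks to a series of `add_css` calls.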

Votes: 0
The original content of this page was provided by Stack Overflow; translation was supported by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/46835059
