
How to fix Scrapy dict output format for CSV/JSON
Stack Overflow user
Asked on 2016-03-16 10:31:14
2 answers · 968 views · 0 followers · Score 0

My code is below. I want to export the results to CSV, but Scrapy yields a single dict with two keys, with all the values lumped together under each key, so the output doesn't look right. How can I fix this? Can it be done with a pipeline, an item loader, or something similar?

Many thanks.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
from gumtree1.items import GumtreeItems

class AdItemLoader(ItemLoader):
    jobs_in = MapCompose(unicode.strip)

class GumtreeEasySpider(CrawlSpider):
    name = 'gumtree_easy'
    allowed_domains = ['gumtree.com.au']
    start_urls = ['http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//a[@class="rs-paginator-btn next"]'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        loader = AdItemLoader(item=GumtreeItems(), response=response)
        loader.add_xpath('jobs','//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()')
        loader.add_xpath('location', '//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()')
        yield loader.load_item() 

The result is:

2016-03-16 01:51:32 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-5/c9302?ad=offering>
{'jobs': [u'Technical Account Manager',
          u'Service & Maintenance Advisor',
          u'we are hiring motorbike driver delivery leaflet.Strat NOW(BE...',
          u'Casual Gardner/landscape maintenance labourer',
          u'Seeking for Experienced Builders Cleaners with white card',
          u'Babysitter / home help for approx 2 weeks',
          u'Toothing brickwork | Dapto',
          u'EXPERIENCED CHEF',
          u'ChildCare Trainee Wanted',
          u'Skilled Pipelayers & Drainer- Sydney Region',
          u'Casual staff required for Royal Easter Show',
          u'Fencing contractor',
          u'Excavator & Loader Operator',
          u'***EXPERIENCED STRAWBERRY AND RASPBERRY PICKERS WANTED***',
          u'Kitchenhand required for Indian restaurant',
          u'Taxi Driver Wanted',
          u'Full time nanny/sitter',
          u'Kitchen hand and meal packing',
          u'Depot Assistant Required',
          u'hairdresser Junior apprentice required for salon in Randwick',
          u'Insulation Installers Required',
          u'The Knox is seeking a new apprentice',
          u'Medical Receptionist Needed in Bankstown Area - Night Shifts',
          u'On Call Easy Work, Do you live in Berala, Lidcombe or Auburn...',
          u'Looking for farm jon'],
 'location': [u'Melbourne City',
              u'Eastern Suburbs',
              u'Rockdale Area',
              u'Logan Area',
              u'Greater Dandenong',
              u'Brisbane North East',
              u'Kiama Area',
              u'Byron Area',
              u'Dardanup Area',
              u'Blacktown Area',
              u'Auburn Area',
              u'Kingston Area',
              u'Inner Sydney',
              u'Northern Midlands',
              u'Inner Sydney',
              u'Hume Area',
              u'Maribyrnong Area',
              u'Perth City',
              u'Brisbane South East',
              u'Eastern Suburbs',
              u'Gold Coast South',
              u'North Canberra',
              u'Bankstown Area',
              u'Auburn Area',
              u'Gingin Area']}

Should it be like this instead, with job and location in separate dicts? This writes to CSV correctly, with Job and Location in separate cells, but I suspect that a for loop with zip is not the best approach.

import scrapy
from gumtree1.items import GumtreeItems

class AussieGum1Spider(scrapy.Spider):
    name = "aussie_gum1"
    allowed_domains = ["gumtree.com.au"]
    start_urls = (
        'http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering',
    )

    def parse(self, response):
        item = GumtreeItems()
        jobs = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()').extract()
        location = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()').extract()
        for j, l in zip(jobs, location):
            item['jobs'] = j.strip()
            item['location'] = l
            yield item
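One caveat with the zip approach above (an illustration, not from the original post): plain zip stops at the shorter list, so if a listing is missing its location, the trailing jobs are silently dropped. itertools.zip_longest keeps them with a placeholder instead:

```python
from itertools import zip_longest

# Hypothetical scraped lists; the second is one entry short
jobs = ['Chef', 'Nanny', 'Driver']
locations = ['Perth City', 'Inner Sydney']

paired = list(zip(jobs, locations))  # silently drops the unpaired 'Driver'
padded = list(zip_longest(jobs, locations, fillvalue=''))  # keeps all three
```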

Here is part of the result.

2016-03-16 02:20:46 [scrapy] DEBUG: Crawled (200) <GET http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering> (referer: http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering)
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Live In Au pair-Urgent', 'location': u'Wanneroo Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'live in carer', 'location': u'Fraser Coast'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Mental Health Nurse', 'location': u'Perth Region'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Experienced NBN pit and pipe installers/node and cabinet wor...',
 'location': u'Marrickville Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Delivery Driver / Pizza Maker Job - Dominos Pizza',
 'location': u'Hurstville Area'}
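One dict per item, one column per key, is exactly the shape CSV export wants. A minimal stdlib sketch (sample data taken from the log above) of how such rows land in separate cells:

```python
import csv
import io

# One dict per scraped listing, as the spider above yields them
rows = [
    {'jobs': 'Live In Au pair-Urgent', 'location': 'Wanneroo Area'},
    {'jobs': 'live in carer', 'location': 'Fraser Coast'},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['jobs', 'location'])
writer.writeheader()
writer.writerows(rows)  # each key becomes its own column
```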

Many thanks.


2 Answers

Stack Overflow user

Answered on 2016-03-17 01:43:04

To be honest, using a for loop is the right approach, but you can also solve it in a pipeline:

from scrapy.http import Response
from gumtree1.items import GumtreeItems, CustomItem
from scrapy.exceptions import DropItem


class CustomPipeline(object):

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_item(self, item, spider):
        if isinstance(item, GumtreeItems):
            # Re-inject one CustomItem per job/location pair back into the
            # scraper (note: _process_spidermw_output is an internal API).
            for i, jobs in enumerate(item['jobs']):
                self.crawler.engine.scraper._process_spidermw_output(
                    CustomItem(jobs=jobs, location=item['location'][i]), None, Response(''), spider)
            # drop the original multi-valued item
            raise DropItem("main item dropped")
        return item
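For the pipeline route to take effect, it also has to be enabled in the project settings. A sketch, assuming the class lives in gumtree1/pipelines.py (adjust the dotted path to your project):

```python
# settings.py — register the pipeline (300 is an arbitrary mid-range priority)
ITEM_PIPELINES = {
    'gumtree1.pipelines.CustomPipeline': 300,
}
```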

Also add the custom item:

import scrapy

class CustomItem(scrapy.Item):
    jobs = scrapy.Field()
    location = scrapy.Field()

Hope this helps, though again, I think you should use the loop.

Score: 3

Stack Overflow user

Answered on 2016-03-16 11:01:59

Set a parent selector for each item and extract the job and location relative to it:

rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
for row in rows:  # one parent selector per listing keeps the fields aligned
    item = GumtreeItems()
    # default of '' guards against rows that lack either field
    item['jobs'] = row.xpath('.//*[@itemprop="name"]/text()').extract_first('').strip()
    item['location'] = row.xpath('.//*[@class="rs-ad-location-area"]/text()').extract_first('').strip()
    yield item
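The same row-wise idea can be shown outside Scrapy with only the standard library (a toy markup, not Gumtree's real HTML): select one parent node per listing, then extract both fields relative to it, so they can never get out of step:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for one page of listings
page = """<results>
  <row><name> Chef </name><area>Perth City</area></row>
  <row><name>Nanny</name><area>Inner Sydney</area></row>
</results>"""

items = []
for row in ET.fromstring(page).findall('row'):  # one parent per listing
    items.append({
        'jobs': row.findtext('name').strip(),
        'location': row.findtext('area').strip(),
    })
```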
Score: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/36025821
