
Scrapy passing response, missing a positional argument

Stack Overflow user
Asked on 2017-05-25 02:43:52
1 answer · 2.3K views · 0 following · 0 votes

New to python, coming from php. I want to scrape some sites using Scrapy, and have gotten through the tutorials and simple scripts fine. Now, writing the real deal, I get this error:

```
Traceback (most recent call last):
  File "C:\Users\Naltroc\Miniconda3\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "C:\Users\Naltroc\Documents\Python Scripts\tutorial\tutorial\spiders\quotes_spider.py", line 52, in parse
    self.dispatcher[site](response)
TypeError: thesaurus() missing 1 required positional argument: 'response'
```

Scrapy automatically instantiates the spider object when the shell command `scrapy crawl words` is run.

As far as I know, `self` is the first parameter of any class method. When calling a class method, you don't pass `self` explicitly; it is bound automatically from the instance the method is called on.
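That binding only happens when the method is looked up as an attribute. A function stored in a class-level dict (as `dispatcher` does below) is fetched as a plain function, so nothing is bound. A minimal sketch of the difference, with illustrative names not taken from the spider:

```python
class Demo:
    def method(self, arg):
        return arg

    # functions referenced inside the class body are plain functions,
    # not bound methods -- no instance is attached yet
    dispatcher = {'method': method}

d = Demo()
print(d.method('hi'))  # bound call: Python supplies `d` as self

try:
    d.dispatcher['method']('hi')  # plain function: 'hi' is consumed as self
except TypeError as e:
    print(e)  # method() missing 1 required positional argument: 'arg'
```

This is exactly the shape of the error in the traceback: the argument you pass fills the `self` slot, leaving the real parameter empty.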

First, this is called:

```python
# Scrapy automatically provides `response` to `parse()` when coming from `start_requests()`
def parse(self, response):
    site = response.meta['site']
    # same as "site = thesaurus"
    self.dispatcher[site](response)
    # same as "self.dispatcher['thesaurus'](response)"
```

Then:

```python
def thesaurus(self, response):
    filename = 'thesaurus.txt'
    words = ''
    ul = response.css('.relevancy-block ul')
    for idx, u in enumerate(ul):
        if idx == 1:
            break
        words = u.css('.text::text').extract()

    self.save_words(filename, words)
```

In php, this would be the same as calling `$this->thesaurus($response)`. `parse` is clearly passing `response` as an argument, but python says it is missing. Where did it go?

Full code here:

```python
import scrapy

class WordSpider(scrapy.Spider):
    def __init__(self, keyword = 'apprehensive'):
        self.k = keyword
    name = "words"
    # Utilities
    def make_csv(self, words):
        csv = ''
        for word in words:
            csv += word + ','
        return csv

    def save_words(self, words, fp):
        with ofpen(fp, 'w') as f:
            f.seek(0)
            f.truncate()
            csv = self.make_csv(words)
            f.write(csv)

    # site specific parsers
    def thesaurus(self, response):
        filename = 'thesaurus.txt'
        words = ''
        print("in func self is defined as ", self)
        ul = response.css('.relevancy-block ul')
        for idx, u in enumerate(ul):
            if idx == 1:
                break
            words = u.css('.text::text').extract()
            print("words is ", words)

        self.save_words(filename, words)

    def oxford(self):
        filename = 'oxford.txt'
        words = ''

    def collins(self):
        filename = 'collins.txt'
        words = ''

    # site/function mapping
    dispatcher = {
        'thesaurus': thesaurus,
        'oxford': oxford,
        'collins': collins,
    }

    def parse(self, response):
        site = response.meta['site']
        self.dispatcher[site](response)

    def start_requests(self):
        urls = {
            'thesaurus': 'http://www.thesaurus.com/browse/%s?s=t' % self.k,
            #'collins': 'https://www.collinsdictionary.com/dictionary/english-thesaurus/%s' % self.k,
            #'oxford': 'https://en.oxforddictionaries.com/thesaurus/%s' % self.k,
        }

        for site, url in urls.items():
            print(site, url)
            yield scrapy.Request(url, meta={'site': site}, callback=self.parse)
```

1 Answer

Stack Overflow user

Accepted answer

Answered on 2017-05-25 06:43:00

There are lots of tiny errors in your code. I took the liberty of cleaning it up a bit to follow common python/scrapy idioms :)

```python
import logging
import scrapy


# Utilities
# should probably use csv module here or `scrapy crawl -o` flag instead
def make_csv(words):
    csv = ''
    for word in words:
        csv += word + ','
    return csv


def save_words(words, fp):
    with open(fp, 'w') as f:
        f.seek(0)
        f.truncate()
        csv = make_csv(words)
        f.write(csv)


class WordSpider(scrapy.Spider):
    name = "words"

    def __init__(self, keyword='apprehensive', **kwargs):
        super(WordSpider, self).__init__(**kwargs)
        self.k = keyword

    def start_requests(self):
        urls = {
            'thesaurus': 'http://www.thesaurus.com/browse/%s?s=t' % self.k,
            # 'collins': 'https://www.collinsdictionary.com/dictionary/english-thesaurus/%s' % self.k,
            # 'oxford': 'https://en.oxforddictionaries.com/thesaurus/%s' % self.k,
        }

        for site, url in urls.items():
            yield scrapy.Request(url, meta={'site': site}, callback=self.parse)

    def parse(self, response):
        parser = getattr(self, response.meta['site'])  # retrieve method by name
        logging.info(f'parsing using: {parser}')
        parser(response)

    # site specific parsers
    def thesaurus(self, response):
        filename = 'thesaurus.txt'
        words = []
        print("in func self is defined as ", self)
        ul = response.css('.relevancy-block ul')
        for idx, u in enumerate(ul):
            if idx == 1:
                break
            words = u.css('.text::text').extract()
            print("words is ", words)
        save_words(filename, words)

    def oxford(self):
        filename = 'oxford.txt'
        words = ''

    def collins(self):
        filename = 'collins.txt'
        words = ''
```

Votes: 3
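The key change above is dispatching via `getattr`: attribute lookup on the instance goes through Python's method binding, so the retrieved callable already has `self` attached and `response` lands in the right parameter. A standalone sketch of that pattern, with illustrative names:

```python
class Dispatcher:
    # hypothetical stand-in for the spider; `thesaurus` mirrors the parser name
    def thesaurus(self, payload):
        return 'thesaurus got ' + payload

d = Dispatcher()
handler = getattr(d, 'thesaurus')  # attribute lookup returns a *bound* method
print(handler('words'))  # -> thesaurus got words
```

The class-level dict in the original code skipped this lookup, which is why `response` ended up in the `self` slot.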
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/44171452
