
How to use peewee with Scrapinghub

Asked by a Stack Overflow user on 2017-04-15 16:40:40
1 answer · 171 views · 0 following · 0 votes

I want to use peewee to save my data to a remote machine. When I run my spider, I get the following error:

File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1445, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/app/__main__.egg/annuaire_agence_bio/pipelines.py", line 8, in <module>

exceptions.ImportError: No module named peewee

Any suggestions are very welcome.


1 Answer

Answered by a Stack Overflow user on 2017-04-15 22:32:12

You can install modules of your own choosing on Scrapinghub. As far as I know, this is how it is done; I used it to install MySQLdb.

Create a file named scrapinghub.yml in your project's main folder with the following content:

projects:
  default: 111149
requirements:
  file: requirements.txt

where 111149 is my project ID on Scrapinghub.

Create another file named requirements.txt in the same directory,

and put the required modules, with the version numbers you are using, into that file, like this:

MySQL-python==1.2.5

PS: I am using the MySQLdb module, so that is what I put in mine.
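For the asker's case, the same mechanism should work for peewee: add a pinned line to requirements.txt and redeploy the project (for example with `shub deploy`) so Scrapinghub installs it. The version pin below is only an assumption for illustration; pin whatever version you develop against locally:

```
peewee==2.10.2
```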

0 votes
The original content of this page is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/43424019