
Newbie: tutorial. Error when running scrapy crawl dmoz.
Stack Overflow user
Asked 2013-08-29 11:04:57
1 answer · 727 views · 0 followers · Score 1

I am new to Python. I am running Python 2.7.2 (64-bit) on 64-bit Windows 7. I followed the tutorial and installed Scrapy on my machine, then created a project, demoz. But when I run scrapy crawl demoz, it shows an error:

d:\Scrapy workspace\tutorial>scrapy crawl dmoz
2013-08-29 16:10:45+0800 [scrapy] INFO: Scrapy 0.18.1 started (bot: tutorial)
2013-08-29 16:10:45+0800 [scrapy] DEBUG: Optional features available: ssl, http11
2013-08-29 16:10:45+0800 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2013-08-29 16:10:45+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\cmdline.py", line 168, in <module>
    execute()
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\commands\crawl.py", line 46, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\command.py", line 34, in crawler
    self._crawler.configure()
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\crawler.py", line 44, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\core\engine.py", line 61, in __init__
    self.scheduler_cls = load_object(self.settings['SCHEDULER'])
  File "C:\Python27\lib\site-packages\scrapy-0.18.1-py2.7.egg\scrapy\utils\misc.py", line 40, in load_object
    raise ImportError, "Error loading object '%s': %s" % (path, e)
ImportError: Error loading object 'scrapy.core.scheduler.Scheduler': No module named queuelib

I think there is something wrong with the installation. Can anyone help? Thanks in advance.
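For context: the last line of the traceback says Python cannot import queuelib, the library that Scrapy's scheduler uses for its queues, so the Scrapy installation is missing a dependency (reinstalling Scrapy, or installing queuelib with pip, typically resolves it). A minimal sketch of such a dependency check, written against modern Python's importlib for illustration:

```python
import importlib.util

def has_module(name):
    """Return True if a module can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# On the asker's machine this would print False for "queuelib",
# which is exactly what the ImportError in the traceback reports.
print("queuelib importable:", has_module("queuelib"))
```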


1 Answer

Stack Overflow user

Answered 2013-08-29 12:09:14

Can you verify the name of the spider in the project you created: is it "demoz" or "dmoz"?

You specified "dmoz" as the spider name in your command:

d:\Scrapy workspace\tutorial>scrapy crawl dmoz
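A note on why this matters (not part of the original answer): `scrapy crawl <name>` matches the `name` attribute declared on the spider class, not the project or folder name, so a "demoz" project can still contain a "dmoz" spider, and vice versa. A hypothetical toy sketch of that lookup:

```python
# Hypothetical miniature of the spider-name lookup that `scrapy crawl` performs.
class DmozSpider:
    name = "dmoz"  # the string that `scrapy crawl dmoz` must match

# Scrapy builds a registry of spiders keyed by their declared names.
REGISTRY = {cls.name: cls for cls in [DmozSpider]}

def resolve(spider_name):
    """Look a spider class up by its declared name, as the CLI does."""
    if spider_name not in REGISTRY:
        raise KeyError("Spider not found: %s" % spider_name)
    return REGISTRY[spider_name]

resolve("dmoz")    # succeeds
# resolve("demoz") would raise KeyError: the project name is irrelevant here
```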
Score 2
Original content provided by Stack Overflow: https://stackoverflow.com/questions/18509225