
Crawlera with Scrapy: downloader not working

Stack Overflow user
Asked on 2013-12-23 14:45:27
1 answer · 1.8K views · 0 followers · 2 votes

I am trying to implement common practices in Scrapy, so I am trying to use the scrapylib library.

I installed and set up Crawlera as described here. (I can see the scrapylib library on my system by running help('modules').)

Here is my Scrapy project's settings.py:

BOT_NAME = 'cnn'

SPIDER_MODULES = ['cnn.spiders']
NEWSPIDER_MODULE = 'cnn.spiders'
COOKIES_ENABLED = False
DOWNLOADER_MIDDLEWARES = {
    'scrapylib.crawlera.CrawleraMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
}
CRAWLERA_ENABLED = True
CRAWLERA_USER = 'abc'
CRAWLERA_PASS = 'abc@abc'  

But nothing happens when I run the spider.

I can see from my Scrapy log that CrawleraMiddleware has been loaded:

2013-12-23 20:12:54+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats  

Why isn't it crawling?

Here is the log with Crawlera enabled:

2013-12-23 21:58:14+0530 [scrapy] INFO: Scrapy 0.20.2 started (bot: cnn)
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Optional features available: ssl, http11
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'cnn.spiders', 'FEED_URI': 'news.json', 'MEMDEBUG_ENABLED': True, 'RETRY_ENABLED': False, 'SPIDER_MODULES': ['cnn.spiders'], 'BOT_NAME': 'cnn', 'DOWNLOAD_TIMEOUT': 240, 'COOKIES_ENABLED': False, 'FEED_FORMAT': 'json', 'MEMUSAGE_REPORT': True, 'REDIRECT_ENABLED': False, 'MEMUSAGE_ENABLED': True}
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, MemoryDebugger, SpiderState
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled item pipelines: 
2013-12-23 21:58:14+0530 [cnn] INFO: Spider opened
2013-12-23 21:58:14+0530 [cnn] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-12-23 21:58:14+0530 [cnn] INFO: Using crawlera at http://proxy.crawlera.com:8010 (user: xmpirate)
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-12-23 21:58:15+0530 [cnn] DEBUG: Crawled (407) <GET http://www.example1.com> (referer: None)
2013-12-23 21:58:15+0530 [cnn] DEBUG: Crawled (407) <GET http://www.example2.com> (referer: None)
2013-12-23 21:58:15+0530 [cnn] INFO: Closing spider (finished)
2013-12-23 21:58:15+0530 [cnn] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 464,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 364,
     'downloader/response_count': 2,
     'downloader/response_status_count/407': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 12, 23, 16, 28, 15, 679961),
     'log_count/DEBUG': 8,
     'log_count/INFO': 4,
     'memusage/max': 30236737536,
     'memusage/startup': 30236737536,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2013, 12, 23, 16, 28, 14, 853975)}
2013-12-23 21:58:15+0530 [cnn] INFO: Spider closed (finished)  

And here is the log with Crawlera disabled:

2013-12-23 22:00:45+0530 [scrapy] INFO: Scrapy 0.20.2 started (bot: cnn)
2013-12-23 22:00:45+0530 [scrapy] DEBUG: Optional features available: ssl, http11
2013-12-23 22:00:45+0530 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'cnn.spiders', 'FEED_URI': 'news.json', 'MEMDEBUG_ENABLED': True, 'RETRY_ENABLED': False, 'SPIDER_MODULES': ['cnn.spiders'], 'BOT_NAME': 'cnn', 'DOWNLOAD_TIMEOUT': 240, 'COOKIES_ENABLED': False, 'FEED_FORMAT': 'json', 'MEMUSAGE_REPORT': True, 'REDIRECT_ENABLED': False, 'MEMUSAGE_ENABLED': True}
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, MemoryDebugger, SpiderState
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled item pipelines: 
2013-12-23 22:00:46+0530 [cnn] INFO: Spider opened
2013-12-23 22:00:46+0530 [cnn] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-12-23 22:00:46+0530 [cnn] DEBUG: Crawled (200) <GET http://www.example1.com> (referer: None)
2013-12-23 22:00:47+0530 [cnn] DEBUG: Crawled (200) <GET http://www.example2.com> (referer: None)
**Pages are crawled here**
2013-12-23 22:01:00+0530 [cnn] INFO: Closing spider (finished)
2013-12-23 22:01:00+0530 [cnn] INFO: Stored json feed (7 items) in: news.json
2013-12-23 22:01:00+0530 [cnn] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 10151,
     'downloader/request_count': 36,
     'downloader/request_method_count/GET': 36,
     'downloader/response_bytes': 762336,
     'downloader/response_count': 36,
     'downloader/response_status_count/200': 35,
     'downloader/response_status_count/404': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 12, 23, 16, 31, 0, 376888),
     'item_scraped_count': 7,
     'log_count/DEBUG': 49,
     'log_count/INFO': 4,
     'memusage/max': 30157045760,
     'memusage/startup': 30157045760,
     'request_depth_max': 1,
     'response_received_count': 36,
     'scheduler/dequeued': 36,
     'scheduler/dequeued/memory': 36,
     'scheduler/enqueued': 36,
     'scheduler/enqueued/memory': 36,
     'start_time': datetime.datetime(2013, 12, 23, 16, 30, 46, 61019)}
2013-12-23 22:01:00+0530 [cnn] INFO: Spider closed (finished)

1 Answer

Stack Overflow user

Posted on 2017-02-23 09:47:00

The 407 error code from Crawlera is an authentication error: most likely there is a typo in one of the credentials (CRAWLERA_USER or CRAWLERA_PASS), or you are not using the correct ones.
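One way to rule out a typo is to recompute by hand the Proxy-Authorization value that HTTP basic auth produces for your credentials. This is a generic sketch of basic-auth encoding, not Crawlera-specific code; the credentials below are the placeholders from the question:

```python
from base64 import b64encode

def proxy_auth_header(user, password):
    """Return the Proxy-Authorization value for HTTP basic auth.

    A 407 means the proxy rejected (or never received) this header,
    so recomputing it by hand makes a credential typo easy to spot.
    """
    token = b64encode(("%s:%s" % (user, password)).encode("utf-8")).decode("ascii")
    return "Basic " + token

print(proxy_auth_header("abc", "abc@abc"))
```

You can then send a manual request through the proxy outside Scrapy, for example with curl's `-x` (proxy) and `-U` (proxy user) options, to confirm whether the account itself authenticates.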

Source

Votes: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/20745797
