
Scrapy crashing randomly with Django and Celery

Stack Overflow user
Asked on 2015-10-11 00:05:43
1 answer · 351 views · 0 followers · 4 votes

I am running my Scrapy project inside Django on an Ubuntu server. The problem is that it crashes randomly, even when it is running only one spider.

Below is a snippet of the traceback. Not being an expert, I googled

_SIGCHLDWaker scrapy

but could not make sense of the solutions I found for the snippet below:

--- <exception caught here> ---
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
    why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'

I am not familiar with Twisted, and although I have tried to understand it, it does not seem very approachable to me.

Here is the full traceback:

[2015-10-10 14:17:13,652: INFO/Worker-4] Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RandomUserAgentMiddleware, ProxyMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
[2015-10-10 14:17:13,655: INFO/Worker-4] Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
[2015-10-10 14:17:13,656: INFO/Worker-4] Enabled item pipelines: MadePipeline
[2015-10-10 14:17:13,656: INFO/Worker-4] Spider opened
[2015-10-10 14:17:13,657: INFO/Worker-4] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Unhandled Error
Traceback (most recent call last):
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
--- <exception caught here> ---
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
    why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'

Here is how I implemented the task, following the Scrapy documentation:

# Imports the task relies on (project-local models Project, SearchTerm,
# Source, Bot and the spiders ComberSpider/MadeSpider are imported
# elsewhere in the app)
from celery import shared_task
from celery.result import AsyncResult
from django.utils import timezone
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor


@shared_task
def run_spider(**kwargs):
    task_id = run_spider.request.id
    status = AsyncResult(str(task_id)).status
    source = kwargs.get("source")

    pro, created = Project.objects.get_or_create(name="b2b")
    query, _ = SearchTerm.objects.get_or_create(term=kwargs['query'])
    src, _ = Source.objects.get_or_create(term=query, engine=kwargs['source'])

    b, _ = Bot.objects.get_or_create(project=pro, query=src, spiderid=str(task_id),
                                     status=status, start_time=timezone.now())

    process = CrawlerRunner(get_project_settings())

    if source == "amazon":
        d = process.crawl(ComberSpider, query=kwargs['query'], job_id=task_id)
    else:
        d = process.crawl(MadeSpider, query=kwargs['query'], job_id=task_id)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
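A common source of this class of crash (not stated in the original post, but worth noting) is that a Twisted reactor cannot be restarted, while Celery worker processes are reused across tasks, so a second crawl in the same worker meets an already-stopped reactor. One frequently used workaround is to run each crawl in its own child process. Below is a minimal sketch under that assumption; run_spider_isolated and _run_crawl are illustrative names, ComberSpider is the project-local spider from above, and billiard is the multiprocessing fork that ships with Celery (its Process can be spawned from inside a prefork worker, where plain multiprocessing daemonic processes cannot):

# Hedged sketch, not the original author's code: run each crawl in a
# fresh child process so the Twisted reactor always starts clean.
from billiard import Process
from celery import shared_task
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def _run_crawl(spider_cls, **spider_kwargs):
    # Runs in a brand-new process: CrawlerProcess may start and stop its
    # own reactor without clashing with the worker's interpreter state.
    crawler = CrawlerProcess(get_project_settings())
    crawler.crawl(spider_cls, **spider_kwargs)
    crawler.start()  # blocks until the crawl finishes


@shared_task
def run_spider_isolated(**kwargs):
    # ComberSpider is assumed to be imported from the project.
    p = Process(target=_run_crawl, args=(ComberSpider,),
                kwargs={'query': kwargs['query']})
    p.start()
    p.join()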

I have also tried something similar to this tutorial, but it led to a different problem for which I could not obtain a traceback.

For completeness, here is a snippet of my spider:

# Imports the snippet relies on
from scrapy import signals
from scrapy.linkextractors import LinkExtractor
from scrapy.signalmanager import SignalManager
from scrapy.spiders import CrawlSpider, Rule
from scrapy.xlib.pydispatch import dispatcher  # "from pydispatch import dispatcher" on newer Scrapy


class ComberSpider(CrawlSpider):

    name = "amazon"
    allowed_domains = ["amazon.com"]
    rules = (Rule(LinkExtractor(allow=r'corporations/.+/-*50/[0-9]+\.html',
                                restrict_xpaths="//a[@class='next']"),
                  callback="parse_items", follow=True),
             )

    def __init__(self, *args, **kwargs):
        super(ComberSpider, self).__init__(*args, **kwargs)
        self.query = kwargs.get('query')
        self.job_id = kwargs.get('job_id')
        # Connect spider_closed through the global dispatcher
        SignalManager(dispatcher.Any).connect(self.closed_handler,
                                              signal=signals.spider_closed)
        self.start_urls = (
            "http://www.amazon.com/corporations/%s/------------"
            "--------50/1.html" % self.query.strip().replace(" ", "_").lower(),
        )
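As an aside, connecting signals through SignalManager(dispatcher.Any) in __init__ is an older pattern; the Scrapy docs recommend wiring signals in from_crawler instead. A minimal sketch, assuming the spider defines a closed_handler method:

from scrapy import signals
from scrapy.spiders import CrawlSpider


class ComberSpider(CrawlSpider):
    # name, allowed_domains, rules and __init__ as in the snippet above

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ComberSpider, cls).from_crawler(crawler, *args, **kwargs)
        # Use the crawler's own signal manager rather than the global
        # dispatcher; it is torn down together with the crawl.
        crawler.signals.connect(spider.closed_handler,
                                signal=signals.spider_closed)
        return spider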
1 Answer

Stack Overflow user
Posted on 2019-01-30 12:15:32

This is a well-known problem. See the issue report thread for details and possible solutions.
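One mitigation that comes up for signal-related reactor crashes (an assumption here, not something stated in this answer) is to start the reactor without installing Twisted's signal handlers, since the _SIGCHLDWaker file descriptor is only registered as part of that installation. A sketch against the task shown above; the trade-off is that Twisted will no longer reap child processes or handle Ctrl-C:

# Sketch: skip signal-handler installation so _SIGCHLDWaker is never
# registered with the reactor.
d = process.crawl(ComberSpider, query=kwargs['query'], job_id=task_id)
d.addBoth(lambda _: reactor.stop())
reactor.run(installSignalHandlers=False)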

Votes: 0

Original content provided by Stack Overflow: https://stackoverflow.com/questions/33060257