
Consistently getting "502 Bad Gateway" errors with Scrapy-Splash

Stack Overflow user
Asked on 2020-07-24 23:58:58
1 answer · 661 views · 0 followers · 1 vote

I have a spider that I'm trying to run against the actual website I want to scrape, but I keep getting 502 Bad Gateway, which makes me think I've been banned. As a test, I ran the code below against http://quotes.toscrape.com/ and I still get 502 Bad Gateway! Needless to say, the "we made it" line never prints. I'm using Scrapy-Splash. Help!

Note: the proxy middleware works fine with my other spiders, so I don't think it is the problem. Also, I can render everything through the Splash web interface.
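
Since the Splash web UI renders fine, one quick way to narrow things down is to call Splash's HTTP API directly, bypassing Scrapy and every middleware. A minimal sketch, assuming Splash is running at http://localhost:8050 as in the settings.py below:

import requests

# Call Splash's render.html endpoint directly. A 200 here means Splash
# itself can reach the target, so the 502 is being introduced somewhere
# in the Scrapy -> middleware -> Splash chain.
resp = requests.get(
    "http://localhost:8050/render.html",
    params={"url": "http://quotes.toscrape.com/", "wait": 2},
    timeout=60,
)
print(resp.status_code)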

Spider:

import scrapy
from bs4 import BeautifulSoup  # used by parse() below
from scrapy_splash import SplashRequest

LUA_SCRIPT = """
function main(splash, args)
  splash.private_mode_enabled = false
  assert(splash:go(args.url))
  assert(splash:wait(20))
  return {
    html = splash:html(),
    png = splash:png(),
    har = splash:har(),
  }
end
"""



class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = []
    start_urls = ["http://quotes.toscrape.com/"]


    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                endpoint='execute',
                                args={'lua_source': LUA_SCRIPT}
                                )

    def parse(self, response):
        soup = BeautifulSoup(response.text, "lxml")
        print("we made it")

settings.py:

BOT_NAME = 'project'
SPIDER_MODULES = ['project.spiders']
NEWSPIDER_MODULE = 'project.spiders'

ROBOTSTXT_OBEY = False

AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 3
DEFAULT_REQUEST_HEADERS = {
    'Referer': 'http://www.google.com'
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    'project.middlewares.ProjectSpiderMiddleware': 543,
}

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    'project.middlewares.ProjectDownloaderMiddleware': 543,
    'project.middlewares.CustomProxyMiddleware': 350,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy_useragents.downloadermiddlewares.useragents.UserAgentsMiddleware': 500
}

ITEM_PIPELINES = {
    'project.pipelines.ProjectPipeline': 900,

}

SPLASH_URL = 'http://localhost:8050'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

USER_AGENTS = [("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
               "Chrome/83.0.4103.116 Safari/537.36"),  # chrome
               ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0"),  # firefox
               ("Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko"),  # internet explorer
               ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
               "Chrome/83.0.4103.116 Safari/537.36 Edg/83.0.478.61")]  # microsoft edge

middlewares.py:

from scrapy import signals
from w3lib.http import basic_auth_header


class ProjectSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class ProjectDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

class CustomProxyMiddleware(object):

    def process_request(self, request, spider):
        request.meta["proxy"] = "IP"
        request.headers["Proxy-Authorization"] = basic_auth_header("username",
                                                                   "password")

1 Answer

Stack Overflow user

Answered on 2021-01-29 17:45:27

When you send your requests through Splash, it is Splash itself that visits the target website first, and it does so without your proxy.

SPIDER_MIDDLEWARES = {
    'project.middlewares.ProjectSpiderMiddleware': 543,
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

Give it a try.
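
If the root cause really is that Splash fetches the target without your proxy, Splash's HTTP API also accepts a proxy argument that routes Splash's own outgoing traffic through a proxy. A minimal sketch; username, password, IP and PORT are placeholders mirroring CustomProxyMiddleware above:

# Sketch: make Splash itself use the proxy, instead of only proxying
# the Scrapy -> Splash hop. username/password/IP/PORT are placeholders.
yield SplashRequest(url, self.parse,
                    endpoint='execute',
                    args={'lua_source': LUA_SCRIPT,
                          'proxy': 'http://username:password@IP:PORT'})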

Votes: 0
The content of this page was originally provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/63077212
