
Scrapy: CrawlSpider not parsing responses

Stack Overflow user
Asked on 2018-07-14 17:41:29
Answers: 0 · Views: 196 · Followers: 0 · Votes: 0

I have used CrawlSpider successfully before. But after I changed the code to integrate with Redis and added my own middlewares to set the User-Agent and cookies, the spider no longer parses responses; as a result it generates no new requests and shuts down shortly after starting.

Here's the running stats

Even if I add `def parse_start_url(self, response): return self.parse_item(response)`, it only parses the response from the first URL.

Here is my code. Spider:

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from yydzh.items import YydzhItem
from scrapy.spiders import Rule, CrawlSpider


class YydzhSpider(CrawlSpider):
    name = 'yydzhSpider'
    allowed_domains = ['yydzh.com']
    start_urls = ['http://www.yydzh.com/thread.php?fid=198']
    rules = (
         Rule(LinkExtractor(allow='thread\.php\?fid=198&page=([1-9]|1[0-9])#s', 
         restrict_xpaths=("//div[@class='pages']")), 
         callback='parse_item', follow=True,
         ),
    )

    # def parse_start_url(self, response):
    #     return self.parse_item(response)

    def parse_item(self, response):
        item = YydzhItem()
        for each in response.xpath(
                "//*[@id='ajaxtable']//tr[@class='tr2'][last()]/following-sibling::tr[@class!='tr2']"):
            item['title'] = each.xpath("./td[2]/h3[1]/a//text()").extract()[0]
            item['author'] = each.xpath('./td[3]/a//text()').extract()[0]
            item['category'] = each.xpath('./td[2]/span[1]//text()').extract()[0]
            item['url'] = each.xpath("./td[2]/h3[1]//a/@href").extract()[0]
            yield item
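As a side note on the rule above, the `allow` regex only accepts page numbers 1 through 19, and only links whose href ends in `#s`. A quick standalone check with Python's `re` module (not part of the spider; it assumes, as Scrapy's docs describe, that the LinkExtractor matches `allow` patterns against the absolute URL with a search) shows which pagination URLs the extractor would follow:

```python
import re

# Same allow pattern as the Rule's LinkExtractor above.
pattern = re.compile(r'thread\.php\?fid=198&page=([1-9]|1[0-9])#s')

# Expected matches: True for pages 1-19 with the '#s' anchor, False otherwise.
checks = {
    'http://www.yydzh.com/thread.php?fid=198&page=2#s': True,
    'http://www.yydzh.com/thread.php?fid=198&page=19#s': True,
    'http://www.yydzh.com/thread.php?fid=198&page=20#s': False,  # 20 not in [1-9]|1[0-9]
    'http://www.yydzh.com/thread.php?fid=198&page=3': False,     # no '#s' anchor
}
for url, expected in checks.items():
    assert bool(pattern.search(url)) == expected
```

If the board has more than 19 pages, links beyond page 19 would be silently dropped by this rule.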

The settings I think are crucial:

SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
DOWNLOADER_MIDDLEWARES = {
    'yydzh.middlewares.UserAgentmiddleware': 500,
    'yydzh.middlewares.CookieMiddleware': 600,
}
COOKIES_ENABLED = True
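One thing worth noting about these settings: with the scrapy_redis scheduler and `RFPDupeFilter`, request fingerprints live in Redis rather than in memory, so they can survive between runs. A re-run against URLs that are already fingerprinted yields no new requests, and the spider closes almost immediately, which matches the symptom described above. The related scrapy_redis options (a sketch; names are from scrapy_redis, but check your version's defaults):

```python
# scrapy_redis scheduler options (settings.py fragment, shown as an assumption)
SCHEDULER_PERSIST = True          # keep the queue and dupefilter in Redis between runs
SCHEDULER_FLUSH_ON_START = False  # set True to clear both at startup while debugging
```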

Middlewares: UserAgentmiddleware randomly changes the User-Agent so the crawler is less likely to be noticed by the server.

CookieMiddleware adds cookies to requests, since the pages require login before they can be scraped.

import json
import logging
import random

import redis
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware

# agents, REDIS_HOST, REDIS_PORT, REDIS_PASS, init_cookie, remove_cookie and
# update_cookie come from the project's own modules (not shown in the question).

logger = logging.getLogger(__name__)


class UserAgentmiddleware(UserAgentMiddleware):

    def process_request(self, request, spider):
        agent = random.choice(agents)
        request.headers["User-Agent"] = agent


class CookieMiddleware(RetryMiddleware):

    def __init__(self, settings, crawler):
        RetryMiddleware.__init__(self, settings)
        self.rconn = redis.Redis(host=REDIS_HOST, port=REDIS_PORT,
                                 password=REDIS_PASS, db=1, decode_responses=True)
        init_cookie(self.rconn, crawler.spider.name)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings, crawler)

    def process_request(self, request, spider):
        redisKeys = self.rconn.keys()
        while len(redisKeys) > 0:
            elem = random.choice(redisKeys)
            if spider.name + ':Cookies' in elem:
                cookie = json.loads(self.rconn.get(elem))
                request.cookies = cookie
                request.meta["accountText"] = elem.split("Cookies:")[-1]
                break
            else:
                redisKeys.remove(elem)

    def process_response(self, request, response, spider):
        # Site message: "You are not logged in or you do not have permission
        # to access this page"
        if '您没有登录或者您没有权限访问此页面' in str(response.body):
            accountText = request.meta["accountText"]
            remove_cookie(self.rconn, spider.name, accountText)
            update_cookie(self.rconn, spider.name, accountText)
            logger.warning("Cookie updated! (account: %s)" % accountText)
            return request

        return response
Answers (0)

Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51337218
