Scrapy spider stops crawling

Stack Overflow user
Asked on 2020-01-09 07:11:03
1 answer · 38 views · 0 votes

I'm trying to run a spider on an .asp site that requires login authorization and then crawls to different pages within the same site. Yesterday I managed to log in with my spider and scrape data using the different functions, but after I changed the last function and ran the spider again, it stopped working. I don't know what happened; I'm still new to web scraping. The code is below:

import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser

class TourSpider(scrapy.Spider):
    name = 'data'
    start_url= ["https://www.translamex.com/partners/login.asp?ret_link=%2Fpartners%2FDefault%2Easp&type=notLogged"]
    url="https://www.translamex.com/"

    def parse(self, response):
        return FormRequest.from_response(response=response,
        clickdata={'id':'Login1Button_DoLogin'}, 
        formdata={
            'login':"username",
            'password':"password"
        }, callback=self.next_page)

    def next_page(self,response):
        next_page=response.css('a[title="View Rates / Tarifas"]::attr(href)').extract_first()
        rates = response.urljoin(next_page)
        yield scrapy.Request(url=rates, callback=self.start_scraping)

    def start_scraping(self,response):
        open_in_browser(response)
        for row in response.css("tr.Row"):
            yield {
            "destination": row.css("td::text")[0].extract(),
            "tour": row.css(".td::text")[1].extract(),
            "public_rate_adult":row.css(".td::text")[2].extract(),
            "public_rate_child":row.css(".td::text")[3].extract(),
            "rate_adult":row.css(".td::text")[4].extract(),
            "rate_child":row.css(".td::text")[5].extract()
            }

The log is below:

2020-01-08 22:55:13 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: Tour)
2020-01-08 22:55:13 [scrapy.utils.log] INFO: Versions: lxml 4.4.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.5 (default, Oct 25 2019, 15:51:11) - [GCC 7.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Linux-4.15.0-72-generic-x86_64-with-debian-buster-sid
2020-01-08 22:55:13 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'Tour', 'FEED_FORMAT': 'json', 'FEED_URI': 'data.json', 'NEWSPIDER_MODULE': 'Tour.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['Tour.spiders']}
2020-01-08 22:55:13 [scrapy.extensions.telnet] INFO: Telnet Password: e98d8ad268ff9643
2020-01-08 22:55:13 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2020-01-08 22:55:13 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-01-08 22:55:13 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-01-08 22:55:13 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-01-08 22:55:13 [scrapy.core.engine] INFO: Spider opened
2020-01-08 22:55:13 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-01-08 22:55:13 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-01-08 22:55:13 [scrapy.core.engine] INFO: Closing spider (finished)
2020-01-08 22:55:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.004019,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 1, 8, 22, 55, 13, 962196),
 'log_count/INFO': 10,
 'memusage/max': 54812672,
 'memusage/startup': 54812672,
 'start_time': datetime.datetime(2020, 1, 8, 22, 55, 13, 958177)}
2020-01-08 22:55:13 [scrapy.core.engine] INFO: Spider closed (finished)

The code used to at least try to pull some of the data I wanted from the page. It never succeeded, but I believe that was only because I was using the wrong CSS selectors. Now it just opens and closes without doing anything.


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-01-09 07:41:33

I solved it. I just changed how the URL is assigned at the start: Scrapy only schedules initial requests from a start_urls list, and my spider defined start_url instead, so no request was ever made and the spider closed immediately. I had to create a new method, as shown below:

    def start_requests(self):
        start_url = "https://www.translamex.com/partners/login.asp?ret_link=%2Fpartners%2FDefault%2Easp&type=notLogged"
        yield scrapy.Request(url=start_url, callback=self.parse)
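
For context, here is a minimal sketch of how the whole spider might look once start_requests is added. It is an illustration only: the credentials are placeholders, and the link title, table class and column order are taken from the question's selectors, not verified against the live site.

import scrapy
from scrapy.http import FormRequest

class TourSpider(scrapy.Spider):
    name = 'data'

    def start_requests(self):
        # Schedule the login page explicitly; Scrapy would otherwise look for
        # a start_urls list, which this spider never defined.
        login_url = ("https://www.translamex.com/partners/login.asp"
                     "?ret_link=%2Fpartners%2FDefault%2Easp&type=notLogged")
        yield scrapy.Request(url=login_url, callback=self.parse)

    def parse(self, response):
        # Submit the login form, then continue from the page returned after login.
        return FormRequest.from_response(
            response=response,
            clickdata={'id': 'Login1Button_DoLogin'},
            formdata={'login': 'username', 'password': 'password'},  # placeholder credentials
            callback=self.next_page,
        )

    def next_page(self, response):
        # Follow the "View Rates / Tarifas" link found after logging in.
        next_page = response.css('a[title="View Rates / Tarifas"]::attr(href)').get()
        if next_page:
            yield response.follow(next_page, callback=self.start_scraping)

    def start_scraping(self, response):
        # Each tr.Row is assumed to hold one tour, with the columns in this order.
        for row in response.css('tr.Row'):
            cells = row.css('td::text').getall()
            if len(cells) >= 6:
                yield {
                    'destination': cells[0],
                    'tour': cells[1],
                    'public_rate_adult': cells[2],
                    'public_rate_child': cells[3],
                    'rate_adult': cells[4],
                    'rate_child': cells[5],
                }

Defining start_requests makes the first request explicit, so a misspelled attribute such as start_url can no longer silently leave the spider with nothing to crawl; the spider is then run as before, for example with scrapy crawl data -o data.json.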
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/59655065
