Hey folks! I'm trying to get all the internal URLs of an entire website for SEO purposes, and I recently discovered Scrapy to help me with this task. But my code always returns an error:
2017-10-11 10:32:00 [scrapy.core.engine] INFO: Spider opened
2017-10-11 10:32:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-11 10:32:00 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-11 10:32:01 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com/robots.txt>
2017-10-11 10:32:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com>
2017-10-11 10:32:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.**test**.com/> (referer: None)
Traceback (most recent call last):
File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "c:\python27\lib\site-packages\scrapy\spiders\__init__.py", line 90, in parse
raise NotImplementedError
NotImplementedError我把原来的网址改了。
Here is the code I'm running:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["http://www.test.com"]
    start_urls = ["http://www.test.com"]
    rules = [Rule(LinkExtractor(allow=['.*']))]

Thanks!
Edit:

This is what worked for me:
rules = (
    Rule(LinkExtractor(), callback='parse_item', follow=True),
)

def parse_item(self, response):
    filename = response.url
    arquivo = open("file.txt", "a")
    string = str(filename)
    arquivo.write(string + '\n')
    arquivo.close()

=D
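As a side note, the same append can be written with a with statement so the file is closed even if the write raises; this is just a sketch of that variant, not part of the original answer:

    def parse_item(self, response):
        # The context manager closes file.txt automatically,
        # even if write() raises an exception.
        with open("file.txt", "a") as arquivo:
            arquivo.write(response.url + '\n')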
Answered on 2017-10-11 14:05:58
The error you are getting is due to the fact that you haven't defined a parse method in your spider, which is mandatory if you base your spider on the scrapy.Spider class.
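For illustration, a minimal scrapy.Spider with parse defined might look like this; the domain is the asker's placeholder and the body is only a sketch:

    import scrapy

    class MinimalSpider(scrapy.Spider):
        name = "minimal"
        start_urls = ["http://www.test.com"]  # placeholder domain

        def parse(self, response):
            # scrapy.Spider calls parse() for each downloaded response;
            # leaving it undefined raises NotImplementedError, as in the log above.
            self.log(response.url)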
For your purpose (crawling an entire website), it is better to base your spider on the scrapy.CrawlSpider class. Also, in the Rule you have to define the callback attribute as the method that parses each page you visit. One last cosmetic change: in the LinkExtractor you can omit allow if you want to visit every page, because its default value is an empty tuple, which means it will match all links found.
For concrete code, see the example sketched below.
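A sketch of what that CrawlSpider could look like, assuming the placeholder domain test.com and the parse_item callback from the asker's edit:

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class TestSpider(CrawlSpider):
        name = "test"
        # allowed_domains takes bare domain names, not full URLs
        allowed_domains = ["test.com"]
        start_urls = ["http://www.test.com"]
        # No allow= needed: an empty LinkExtractor matches every link found
        rules = (
            Rule(LinkExtractor(), callback='parse_item', follow=True),
        )

        def parse_item(self, response):
            # Log each internal URL the crawler visits
            self.log(response.url)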
https://stackoverflow.com/questions/46689783