
How to fix CrawlSpider redirects?

Stack Overflow user
Asked on 2013-11-05 21:01:42
1 answer · 861 views · 0 followers · 0 votes

I am trying to write a CrawlSpider for this site: http://www.shams-stores.com/shop/index.php. Here is my code:

import urlparse
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from project.items import Product
import re



class ShamsStoresSpider(CrawlSpider):
    name = "shamsstores2"
    domain_name = "shams-stores.com"
    CONCURRENT_REQUESTS = 1

    start_urls = ["http://www.shams-stores.com/shop/index.php"]

    rules = (
            #categories
            Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a'), unique=False), callback='process', follow=True),
            )

    def process(self,response):
        print response

This is the output I get when I run scrapy crawl shamsstores2:

2013-11-05 22:56:36+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6081
2013-11-05 22:56:41+0200 [shamsstores2] DEBUG: Crawled (200) <GET http://www.shams-stores.com/shop/index.php> (referer: None)
2013-11-05 22:56:42+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=14&id_lang=1>
2013-11-05 22:56:42+0200 [shamsstores2] DEBUG: Filtered duplicate request: <GET http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1> - no more duplicates will be shown (see DUPEFILTER_CLASS)
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=13&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=13&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=12&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=12&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=10&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=10&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=9&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=9&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=8&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=8&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=7&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=7&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=6&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=6&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] INFO: Closing spider (finished)

It hits the links extracted by the rule, those get redirected to other links, and then the spider stops without ever calling the process callback. I can work around this with a BaseSpider, but can I fix it and still use CrawlSpider?


1 Answer

Stack Overflow user

Accepted answer

Answered on 2013-11-06 17:25:30

The problem is not the redirect. Scrapy follows the server's suggestion, goes to the alternate location and fetches the page from there.

The problem is your restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a'): on every visited page it extracts the same set of 8 URLs, which then get filtered out as duplicates.

The only thing I don't understand is why scrapy gets information for only one page. I will update this answer if I find the reason.

Edit: see github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py

Basically, the request is first queued and its fingerprint is stored. Next, the redirect URL is generated and checked for duplication by comparing fingerprints, and scrapy finds the same fingerprint. It finds the same fingerprint because, as the example in that file shows, the redirect URL and the original URL are identical once their query strings are reordered.
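That comparison can be illustrated with a small sketch (Python 3, standard library only; it mimics the canonicalization idea rather than calling scrapy's actual fingerprint function): the two URLs from the redirect log differ only in query-parameter order, so they compare equal once the parameters are sorted.

```python
from urllib.parse import urlsplit, parse_qsl

def same_fingerprint(u1, u2):
    """Mimic the canonicalization step in scrapy's request fingerprinting:
    compare URLs while ignoring the order of query-string parameters."""
    s1, s2 = urlsplit(u1), urlsplit(u2)
    return (s1._replace(query='') == s2._replace(query='') and
            sorted(parse_qsl(s1.query)) == sorted(parse_qsl(s2.query)))

# The exact pair seen in the redirect log above:
original = "http://www.shams-stores.com/shop/index.php?controller=category&id_category=14&id_lang=1"
redirected = "http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1"
print(same_fingerprint(original, redirected))  # True: the redirect target is filtered as a duplicate
```

This is why the dupefilter drops every redirect target: from its point of view, each one has already been requested.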

A somewhat "exploit"-style solution:

rules = (
        #categories
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a')), callback='process', process_links='appendDummy', follow=True),
        )

def process(self, response):
    print 'response is called'
    print response

def appendDummy(self, links):
    for link in links:
        link.url = link.url + "?dummy=true"
    return links

Since the server ignores the dummy parameter appended to the URL, we in effect trick the fingerprinting into treating the original request and the redirected request as different requests.

Another solution is to reorder the query parameters yourself in the process_links callback (appendDummy in the example).
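A sketch of that idea (Python 3, standard library only; reorder_query and PREFERRED_ORDER are my assumptions, with the order read off the redirect targets in the log): if each extracted link is rewritten into the order the server itself redirects to, the 301 should never be issued in the first place.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameter order the server redirects to, read off the log above (an assumption).
PREFERRED_ORDER = ["id_category", "controller", "id_lang"]

def reorder_query(url):
    """Rewrite the query string into the server's preferred parameter order,
    so the request already matches the redirect target."""
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    ordered = [(k, params[k]) for k in PREFERRED_ORDER if k in params]
    ordered += [(k, v) for k, v in params.items() if k not in PREFERRED_ORDER]
    return urlunsplit(parts._replace(query=urlencode(ordered)))

# In the spider, a process_links callback would apply this to every link:
# def fixLinks(self, links):
#     for link in links:
#         link.url = reorder_query(link.url)
#     return links
```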

Other options would be overriding the request fingerprint to distinguish these kinds of URLs (I think that is wrong in the general case, but it might be fine here), or a simple fingerprint based on the raw URL (again, only suitable for this case).
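For the raw-URL idea, here is a toy sketch (not scrapy's actual DUPEFILTER_CLASS interface, just an illustration of the trade-off): keyed on the raw URL string, the filter treats the original and the reordered redirect target as distinct requests.

```python
class RawUrlDupeFilter:
    """Toy duplicate filter keyed on the raw URL string instead of a
    canonicalized fingerprint, so reordered query strings are NOT
    treated as duplicates (only sensible for this particular site)."""

    def __init__(self):
        self.seen = set()

    def request_seen(self, url):
        # Return True only if this exact URL string was already scheduled.
        if url in self.seen:
            return True
        self.seen.add(url)
        return False

f = RawUrlDupeFilter()
print(f.request_seen("http://www.shams-stores.com/shop/index.php?controller=category&id_category=14&id_lang=1"))  # False: first time seen
print(f.request_seen("http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1"))  # False: raw strings differ
```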

Please let me know if this solution works for you.

P.S. Scrapy's behaviour here is correct. I don't see what reason the server has for redirecting just to reorder the query string.

Votes: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/19798886
