
Scrapy only scrapes the first start URL out of a list of 15 start URLs
Asked by a Stack Overflow user on 2015-07-30 22:56:00 · 2 answers · 2.2K views · 3 votes

I am very new to Scrapy and am trying to teach myself the basics. I have put together a script that goes to the Louisiana Department of Natural Resources website to retrieve the serial numbers of certain oil wells.

I have listed the link for each well in start_urls, but Scrapy only downloads data from the first URL. What am I doing wrong?

import scrapy
from scrapy import Spider
from scrapy.selector import Selector
from mike.items import MikeItem

class SonrisSpider(Spider):
    name = "sspider"

    start_urls = [
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=207899",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=971683",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=214206",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=159420",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=243671",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248942",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=156613",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=215443",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248463",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=195136",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=179181",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=199930",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=203419",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=220454",
    ]

    def parse(self, response):
        item = MikeItem()
        item['serial'] = response.xpath('/html/body/table[1]/tr[2]/td[1]/text()').extract()[0]
        yield item

Thank you for any help you can provide. If I have not explained my problem clearly, please let me know and I will try to clarify.


2 Answers

Stack Overflow user · Accepted answer · Posted on 2015-07-31 04:09:03

I think this code might help.

By default, Scrapy filters out duplicate requests. Since your start URLs differ only in their query parameter, Scrapy treats the rest of the URLs in start_urls as duplicate requests of the first one. That is why your spider stops after fetching the first URL. To crawl the remaining URLs, enable the dont_filter flag on each Scrapy request (see start_requests()).
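The de-duplication behaviour described above can be sketched in plain Python. This is a simplified stand-in for illustration only, not Scrapy's actual fingerprint-based dupe filter:

```python
class DupeFilter:
    """Simplified model of a request de-duplication filter."""

    def __init__(self):
        self.seen = set()

    def allow(self, url, dont_filter=False):
        # dont_filter bypasses the seen-set check entirely
        if dont_filter:
            return True
        if url in self.seen:
            return False
        self.seen.add(url)
        return True

f = DupeFilter()
print(f.allow("http://example.com/?p_WSN=1"))                    # first request: allowed
print(f.allow("http://example.com/?p_WSN=1"))                    # duplicate: filtered out
print(f.allow("http://example.com/?p_WSN=1", dont_filter=True))  # flag set: allowed again
```

With the flag set, a request is scheduled even if its URL was seen before, which is the effect the fix below relies on.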

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from mike.items import MikeItem


class SonrisSpider(scrapy.Spider):
    name = "sspider"
    allowed_domains = ["sonlite.dnr.state.la.us"]
    start_urls = [
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=207899",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=971683",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=214206",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=159420",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=243671",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248942",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=156613",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=215443",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248463",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=195136",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=179181",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=199930",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=203419",
                "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=220454",
            ]

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url=url, callback=self.parse_data, dont_filter=True)

    def parse_data(self, response):
        item = MikeItem()
        serial = response.xpath(
            '/html/body/table[1]/tr[2]/td[1]/text()').extract()
        serial = serial[0] if serial else 'n/a'
        item['serial'] = serial
        yield item
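Note the guarded indexing in parse_data: unlike the original `extract()[0]`, it takes the first match only when the XPath actually returned something, and falls back to `'n/a'` instead of raising an IndexError. The pattern in isolation:

```python
def first_or_default(values, default='n/a'):
    # extract() returns a list of matches; take the first, or fall back
    return values[0] if values else default

print(first_or_default(['207899']))  # prints 207899
print(first_or_default([]))          # prints n/a
```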

Sample output returned by this spider:

{'serial': u'207899'}
{'serial': u'971683'}
{'serial': u'214206'}
{'serial': u'159420'}
{'serial': u'248942'}
{'serial': u'243671'}
3 votes

Stack Overflow user · Posted on 2015-07-31 02:28:30

Your code looks fine; try adding this function:

class SonrisSpider(Spider):
    def start_requests(self):
        for url in self.start_urls:
            print(url)
            yield self.make_requests_from_url(url)
    # the rest of your code goes here

This should print the URLs now. Give it a try, and let me know if it doesn't.

0 votes
Original content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/31735436