
Variables in Scrapy

Stack Overflow user
Asked on 2014-03-01 23:01:52
3 answers · 1.6K views · 0 followers · Score: 4

Can I use a variable in start_urls? Please see the scripts below.

This script works fine:

from scrapy.spider import Spider
from scrapy.selector import Selector
from example.items import ExampleItem

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/search-keywords=['0750692995']",
        "http://www.example.com/search-keywords=['0205343929']",
        "http://www.example.com/search-keywords=['0874367379']",
    ]

    def parse(self, response):
        hxs = Selector(response)
        item = ExampleItem()
        item['url'] = response.url
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract()
        item['title'] = hxs.select("//span[@class='title L']/text()").extract()
        return item

But I want something like this:

from scrapy.spider import Spider
from scrapy.selector import Selector
from example.items import ExampleItem

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    pro_id = ["0750692995", "0205343929", "0874367379"]  # (I added this line)
    start_urls = [
        "http://www.example.com/search-keywords=['pro_id']",  # (and I changed this line)
    ]

    def parse(self, response):
        hxs = Selector(response)
        item = ExampleItem()
        item['url'] = response.url
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract()
        item['title'] = hxs.select("//span[@class='title L']/text()").extract()
        return item

I want to run this script by pulling the pro_id numbers into start_urls one by one. Is there a way to do this? I ran the script, but the URL is still "http://www.example.com/search-keywords=['pro_id']" instead of "http://www.example.com/search-keywords=0750692995". What should the script look like? Thanks for your help.

Edit: after making the change suggested by @paul t, I get the following error:

2014-03-02 08:39:44+0700 [example] ERROR: Obtaining request from start requests
    Traceback (most recent call last):
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1192, in run
        self.mainLoop()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1201, in mainLoop
        self.runUntilCurrent()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\utils\reactor.py", line 41, in __call__
        return self._func(*self._a, **self._kw)
    --- <exception caught here> ---
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\core\engine.py", line 111, in _next_request

        request = next(slot.start_requests)
      File "C:\Users\S\desktop\example\example\spiders\example_spider.py", line 13, in start_requests
        yield Request(self.start_urls_base % pro_id, dont_filter=True)
    exceptions.NameError: global name 'Request' is not defined

3 Answers

Stack Overflow user

Answered on 2014-03-01 23:21:00

One way is to override the spider's start_requests() method:

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    pro_ids = ["0750692995", "0205343929", "0874367379"]
    start_urls_base = "http://www.example.com/search-keywords=['%s']"

    def start_requests(self):
        for pro_id in self.pro_ids:
            yield Request(self.start_urls_base % pro_id, dont_filter=True)
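The URL substitution this relies on can be checked without running Scrapy at all; here is a minimal pure-Python sketch of the `%` formatting in start_requests(), using the pattern and IDs from the question:

```python
# Pure-Python check of the URL substitution used in start_requests() above;
# no Scrapy needed. Pattern and IDs are taken from the question.
pro_ids = ["0750692995", "0205343929", "0874367379"]
start_urls_base = "http://www.example.com/search-keywords=['%s']"

urls = [start_urls_base % pro_id for pro_id in pro_ids]
for url in urls:
    print(url)
```

Each ID is interpolated into the template in turn, which is exactly what the loop in start_requests() does before wrapping each URL in a Request.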
Score: 5

Stack Overflow user

Answered on 2014-05-14 18:59:26

First, you have to import Request:

from scrapy.http import Request

After that, you can follow Paul's suggestion:

    def start_requests(self):
        for pro_id in self.pro_ids:
            yield Request(self.start_urls_base % pro_id, dont_filter=True)
Score: 0

Stack Overflow user

Answered on 2016-01-15 13:25:45

I think you can solve it with a for loop (a list comprehension), like this:

start_urls = [
    "http://www.example.com/search-keywords=" + i for i in pro_id
]
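Note that this only works if pro_id is already defined when start_urls is evaluated (e.g. at module level, or earlier in the class body). A quick standalone check, using the IDs from the question:

```python
pro_id = ["0750692995", "0205343929", "0874367379"]

# Builds all search URLs up front. Note this version produces plain keyword
# URLs without the bracket/quote wrapping used in the original start_urls.
start_urls = ["http://www.example.com/search-keywords=" + i for i in pro_id]
```

This matches the format the asker said they actually wanted ("search-keywords=0750692995" rather than "search-keywords=['pro_id']").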
Score: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/22115977