
How can I scrape a page that uses if-none-match and cookies with Scrapy or another tool?

Stack Overflow user
Asked on 2020-10-18 21:19:12
1 answer · 385 views · 0 followers · 0 votes

I am trying to scrape an API that returns a JSON object, but it only returns the JSON the first time; after that it returns nothing. I am using the "if-none-match" header together with cookies, but I want to do this without cookies, because I have many APIs like this to scrape.

Here is my spider code:

import scrapy
from scrapy import Spider, Request
import json
from scrapy.crawler import CrawlerProcess

header_data = {'authority': 'shopee.com.my',
    'method': 'GET',
    'scheme': 'https',
    'accept': '*/*',
    'if-none-match-': '*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'x-shopee-language': 'en',
    'Cache-Control': 'max-age=0',
    }


class TestSales(Spider):
    name = "testsales"
    allowed_domains = ['shopee.com', 'shopee.com.my', 'shopee.com.my/api/']
    cookie_string = {'SPC_U':'-', 'SPC_IA':'-1' , 'SPC_EC':'-' , 'SPC_F':'7jrWAm4XYNNtyVAk83GPknN8NbCMQEIk', 'REC_T_ID':'476673f8-eeb0-11ea-8919-48df374df85c', '_gcl_au':'1.1.1197882328.1599225148', '_med':'refer', '_fbp':'fb.2.1599225150134.114138691', 'language':'en', '_ga':'GA1.3.1167355736.1599225151', 'SPC_SI':'mall.gTmrpiDl24JHLSNwnCw107mao3hd8qGP', 'csrftoken':'2ntG40uuWzOLUsjv5Sn8glBUQjXtbGgo', 'welcomePkgShown':'true', '_gid':'GA1.3.590966412.1602427202', 'AMP_TOKEN':'%24NOT_FOUND', 'SPC_CT_21c6f4cb':'1602508637.vtyz9yfI6ckMZBdT9dlICuAYf7crlEQ6NwQScaB2VXI=', 'SPC_CT_087ee755':'1602508652.ihdXyWUp3wFdBN1FGrKejd91MM8sJHEYCPqcgmKqpdA=', '_dc_gtm_UA-61915055-6':'1', 'SPC_R_T_ID':'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0=', 'SPC_T_IV':'hhHcCbIpVvuchn7SbLYeFw==', 'SPC_R_T_IV':'hhHcCbIpVvuchn7SbLYeFw==', 'SPC_T_ID':'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0='}

    custom_settings = {
        'AUTOTHROTTLE_ENABLED' : 'True',
        # The initial download delay
        'AUTOTHROTTLE_START_DELAY' : '0.5',
        # The maximum download delay to be set in case of high latencies
        'AUTOTHROTTLE_MAX_DELAY' : '10',
        # The average number of requests Scrapy should be sending in parallel to
        # each remote server
        'AUTOTHROTTLE_TARGET_CONCURRENCY' : '1.0',
        # 'DNSCACHE_ENABLED' : 'False',
        # 'COOKIES_ENABLED': 'False',
    }
            
        

    def start_requests(self):
        subcat_url = '/Baby-Toddler-Play-cat.27.23785'
        id = subcat_url.split('.')[-1]
        header_data['path'] = f'/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'
        header_data['referer'] = f'https://shopee.com.my{subcat_url}?page=0&sortBy=sales'
        url = f'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'

        yield Request(url=url, headers=header_data, #cookies=self.cookie_string,
                        cb_kwargs={'subcat': 'baby tobbler play cat', 'category': 'baby and toys' })



    def parse(self, response, subcat, category):
        # pass
        try:
            jdata = json.loads(response.body)
        except Exception as e:
            print(f'exception: {e}')
            print(response.body)
            return None

        items = jdata['items']

        for item in items:
            name = item['name']
            image_path = item['image']
            absolute_image = f'https://cf.shopee.com.my/file/{image_path}_tn'
            print(f'this is  absolute image {absolute_image}')
            subcategory = subcat
            monthly_sold = 'pending'
            price = float(item['price'])/100000
            total_sold = item['sold']
            location = item['shop_location']
            stock = item['stock']

            print(name)
            print(price)
            print(total_sold)
            print(location)
            print(stock)


app = CrawlerProcess()
app.crawl(TestSales)
app.start()

This is the page URL you can see in a browser: https://shopee.com.my/Baby-Toddler-Play-cat.27.23785?page=0&sortBy=sales

This is the API URL, which you can also find in that page's developer tools: https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=0&order=desc&page_type=search&version=2

Please tell me how to deal with "cache" or "if-none-match", because I cannot figure out how to handle it. Thanks in advance!
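For context on the symptom described above: If-None-Match is a conditional-request header. When its value matches the resource's current ETag (and `*` matches any existing representation), the server replies 304 Not Modified with an empty body, which looks exactly like "returns nothing after the first time". A minimal sketch of that rule, using a hypothetical helper rather than Shopee's actual server logic:

```python
# Sketch of standard If-None-Match handling (RFC 7232); hypothetical function,
# not Shopee's server code.
def conditional_get(server_etag, if_none_match):
    """Return (status, body) the way an origin server answers a GET."""
    if if_none_match is not None:
        # "*" matches any current representation; otherwise compare ETag values.
        if if_none_match == "*" or if_none_match == server_etag:
            return 304, b""  # 304 Not Modified carries no body
    return 200, b'{"items": ["..."]}'  # unconditional GET gets the full JSON
```

Dropping the If-None-Match header makes every request unconditional, so the full JSON comes back each time.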


1 Answer

Stack Overflow user

Answered on 2020-10-18 23:05:31

All you need to generate the API GET request is the category identifier, i.e. match_id, and the starting item number, which is the newest parameter.

Using that URL template, you can fetch any category endpoint of the API.

In this case there is no need to manage cookies or even headers; the API is completely unrestricted.
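The URL template the answer refers to can be sketched as a small helper. The parameter names come from the URLs in the question, and the assumption that newest advances in steps of limit (50 items per page) matches the two URLs shown:

```python
# Template assembled from the search_items URLs quoted in the question.
API_TEMPLATE = ("https://shopee.com.my/api/v2/search_items/"
                "?by=sales&limit={limit}&match_id={match_id}"
                "&newest={newest}&order=desc&page_type=search&version=2")

def search_items_url(match_id, page, limit=50):
    # newest is the zero-based index of the first item on the page: 0, 50, 100, ...
    return API_TEMPLATE.format(limit=limit, match_id=match_id, newest=page * limit)
```

`search_items_url(23785, 0)` reproduces the question's URL, and `search_items_url(23785, 1)` gives the `newest=50` URL used in the update below.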

Update:

This worked for me in the scrapy shell:

from scrapy import Request

url = 'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=50&order=desc&page_type=search&version=2'

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "X-Requested-With": "XMLHttpRequest",
}


request = Request(
    url=url,
    method='GET',
    dont_filter=True,
    headers=headers,
)

fetch(request)
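Once `fetch(request)` returns a 200 response in the shell, the body parses the same way the question's `parse` method does. A standalone sketch; the field names and the 100000 price divisor are taken from the question's code, and the sample body is fabricated for illustration:

```python
import json

def parse_items(body):
    """Extract (name, price, sold) tuples from a search_items response body."""
    jdata = json.loads(body)
    return [(item["name"], item["price"] / 100000, item["sold"])
            for item in jdata.get("items", [])]

# Fabricated sample body, for illustration only:
sample = b'{"items": [{"name": "toy", "price": 1990000, "sold": 3}]}'
print(parse_items(sample))  # [('toy', 19.9, 3)]
```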
Votes: 1
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/64413815
