
Web scraping with Selenium: moving to the next page

Stack Overflow user
Asked on 2020-08-03 16:01:30
1 answer · 903 views · 0 followers · score 2

How can I get the following information from this website, and check whether there are more reviews on the next page (see the gif)? I want to use Selenium and a web driver.

  • <span class="a-profile-name">NAME</span>

  • <i data-hook="review-star-rating" class="a-icon a-icon-star a-star-2 review-rating"><span class="a-icon-alt">2.0 out of 5 stars</span></i>

  • Fell apart after a few months
    • <span data-hook="review-date" class="a-size-base a-color-secondary review-date">Reviewed in the United States on January 23, 2019</span>
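A minimal sketch (assuming `bs4` is installed, as in the question's own code) of pulling these four fields out of a review's HTML with BeautifulSoup, using the `class` and `data-hook` attributes shown in the snippets above; the surrounding `div` and the `review-title` anchor are illustrative stand-ins for the real markup:

```python
from bs4 import BeautifulSoup

# sample markup assembled from the snippets above (structure is assumed)
html = """
<div>
  <span class="a-profile-name">NAME</span>
  <i data-hook="review-star-rating" class="a-icon a-icon-star a-star-2 review-rating">
    <span class="a-icon-alt">2.0 out of 5 stars</span></i>
  <a data-hook="review-title"><span>Fell apart after a few months</span></a>
  <span data-hook="review-date" class="a-size-base a-color-secondary review-date">
    Reviewed in the United States on January 23, 2019</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
author = soup.find("span", class_="a-profile-name").get_text(strip=True)
stars = soup.find("i", attrs={"data-hook": "review-star-rating"}).get_text(strip=True)
title = soup.find("a", attrs={"data-hook": "review-title"}).get_text(strip=True)
date = soup.find("span", attrs={"data-hook": "review-date"}).get_text(strip=True)

print(author, stars, title, date, sep=" | ")
```

The same `find` calls work on `driver.page_source` once Selenium has loaded the page.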

Review body:

The soles came completely unglued after about 4 months of wearing them in the office. I can't imagine a legitimate pair of Converse sneakers being this poor in quality. I'm no expert, but I think they're fakes.

Either way, these shoes aren't worth the money.

I'd prefer to use Selenium because it lets me easily move to the next page and store the scraped data.

For each of these fields I should have a separate list collecting: author, dates, stars, review's title and review's body. An example would be:

https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/ref=sr_1_1?dchild=1&keywords=converse&qid=1596469913&sr=8-1&th=1

It has 2226 ratings.

Do you think this is feasible with Selenium?

Code (the code has missing pieces, and the search part may be wrong too):

from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait


def spider():
    driver = webdriver.Chrome('path/chromedriver')

    driver.get('https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/ref=sr_1_1?dchild=1&keywords=converse&qid=1596469913&sr=8-1&th=1')  # in th I should add page number info

    time.sleep(1)
    search = driver.find_element_by_name('q')
    time.sleep(2)
    search.submit()

    author = []
    dates = []
    score = []
    review_min = []
    review = []

    while True:
        soup = BeautifulSoup(driver.page_source, 'lxml')
        result_div = soup.find_all('div', attrs={'class': 'g'})
        time.sleep(2)
        for r in result_div:
            # here there should be the part to get info about author, dates, scores, ...
            time.sleep(1)
            # part where I append the scraped results

        next_page_btn = driver.find_elements_by_xpath("//a[@id='pnnext']")
        if len(next_page_btn) < 1:
            print("no more pages left")
            break

        element = WebDriverWait(driver, 100).until(
            expected_conditions.element_to_be_clickable((By.ID, 'pnnext')))
        driver.execute_script("return arguments[0].scrollIntoView();", element)
        element.click()
        time.sleep(2)

    driver.quit()
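The "keep going until there is no Next button" loop in the code above can be checked without a browser by factoring it into a function. `FakePages` below is a stand-in for the driver: on a real driver, `find_elements_by_xpath` returns an empty list once the Next link is gone, which is exactly what ends the loop.

```python
class FakePages:
    """Stand-in for a selenium webdriver over three result pages."""
    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    @property
    def page_source(self):
        return self.pages[self.index]

    def find_next_button(self):
        # real code: driver.find_elements_by_xpath("//a[@id='pnnext']")
        return ["next"] if self.index < len(self.pages) - 1 else []

    def click_next(self):
        self.index += 1


def scrape_all_pages(driver):
    sources = []
    while True:
        sources.append(driver.page_source)   # scrape the current page
        if not driver.find_next_button():
            break                            # no more pages left
        driver.click_next()                  # move to the next page
    return sources


pages = scrape_all_pages(FakePages(["page1", "page2", "page3"]))
print(pages)  # ['page1', 'page2', 'page3']
```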

1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-08-13 19:09:42

Your solution needs to be composed of several layers, each responsible for a different action and behavior.

Layer one

Responsible for navigation and page iteration: it repeats for every page.

Layer two

Responsible for items: it extracts the review information of a single item, and repeats for every item on a page.

This is the trickiest part, because each item has to be opened on a different page (if you use "Back", the page refreshes and the data is lost): navigate to the new page, switch to it, extract, close it, and switch back, so that we return to point 0, ready for the next item.
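The open/switch/extract/close/switch-back cycle can be wrapped in a small helper so every item returns to "point 0". The calls mirror Selenium's window-handle API; `FakeBrowser` below is an illustrative stand-in so the pattern can be run end to end without a real driver.

```python
def in_new_tab(driver, url, extract):
    driver.execute_script("window.open('about:blank', '_blank');")
    driver.switch_to.window(driver.window_handles[-1])     # jump to the new tab
    driver.get(url)
    try:
        return extract(driver)                             # scrape the item page
    finally:
        driver.close()                                     # close the item tab
        driver.switch_to.window(driver.window_handles[0])  # back to point 0


class _SwitchTo:
    def __init__(self, browser):
        self.browser = browser

    def window(self, handle):
        self.browser.current = handle


class FakeBrowser:
    """Minimal stand-in for a selenium webdriver (assumption for the demo)."""
    def __init__(self):
        self.window_handles = ['main']
        self.current = 'main'
        self.switch_to = _SwitchTo(self)
        self.url = None

    def execute_script(self, script):
        self.window_handles.append('tab')

    def get(self, url):
        self.url = url

    def close(self):
        self.window_handles.pop()


browser = FakeBrowser()
result = in_new_tab(browser, 'https://example.com/item', lambda d: d.url)
print(result, browser.current)  # https://example.com/item main
```

The `try/finally` guarantees the switch-back happens even if the extraction raises, which is what keeps the next item's cycle starting from a clean state.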

Layer three

Responsible for reviews: it extracts all the reviews of a single item, and repeats for every review on the item's page.

Summary

For Each Page Extract
    > Item, For Each Item Extract
        > Reviews
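The page > item > reviews nesting above, sketched as plain loops over placeholder data (the dict shapes here are illustrative, not Amazon's markup):

```python
def scrape(pages):
    results = []
    for page in pages:                       # layer 1: every results page
        for item in page["items"]:           # layer 2: every product on it
            results.append({
                "product": item["name"],
                "reviews": list(item["reviews"]),   # layer 3: all its reviews
            })
    return results


sample = [{"items": [{"name": "Chuck Taylor", "reviews": ["r1", "r2"]}]}]
print(scrape(sample))  # [{'product': 'Chuck Taylor', 'reviews': ['r1', 'r2']}]
```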

The result will be an array of review items in the following format:

{
    "product": "My Product",
    "link": "https://products/my_product",
    "reviews": [
        { "author": "foo", "date": "0000-000"... },
        { "author": "bar", "date": "0000-000"... },
        ...
    ]
}

Code sample

This is a starting point from which you can implement the missing parts. It extracts the reviews of all the items on a single page.

It runs as-is; just change the driver path.

import re

from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait


def spider(page_number: int):
    # setup: web driver > wait object > url format > page number
    driver = webdriver.Chrome('D:\\automation-env\\web-drivers\\chromedriver.exe')
    wait = WebDriverWait(driver, 15)
    url_format =\
        "https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/" \
        "ref=sr_1_1?" \
        "dchild=1&" \
        "keywords=converse&" \
        "qid=1596469913&" \
        "sr=8-1&" \
        "th={page_number}"

    try:
        # navigate
        driver.get(url_format.format(page_number=page_number))
        driver.maximize_window()

        # search your product
        __search(driver_wait=wait, search_for='converse')

        # cache item
        rate_locator = (By.XPATH, "//i[contains(@class,'a-star-small-')]")
        items = wait.until(expected_conditions.visibility_of_all_elements_located(rate_locator))

        # product cycle
        reviews = []
        for i in range(len(items)):
            reviews.append(__product_cycle(on_driver=driver, on_element=items[i], on_element_index=i + 1))

        # output
        print(reviews)

    except Exception as e:
        print(e)

    finally:
        if driver is not None:
            driver.quit()


# execute search product
def __search(driver_wait: WebDriverWait, search_for: str):
    # search
    search = driver_wait.until(expected_conditions.element_to_be_clickable((By.ID, 'twotabsearchtextbox')))
    search.clear()
    search.send_keys(search_for)
    search.submit()


# execute an extraction on single item in the products list
# you can add more logic to extract the rest of the review
def __product_cycle(on_driver, on_element, on_element_index):
    # hover the review element
    ActionChains(driver=on_driver).move_to_element(on_element).perform()

    # open reviews in new page (the index is here to handle amazon keeping in the DOM all reviews already inspected)
    wait = WebDriverWait(on_driver, 15)
    link_element_locator = (By.XPATH, "(//a[.='See all customer reviews'])[" + f'{on_element_index}' + "]")
    link_element =\
        wait.until(expected_conditions.element_to_be_clickable(link_element_locator))
    link = link_element.get_attribute(name='href')

    on_driver.execute_script(script="window.open('about:blank', '_blank');")
    on_driver.switch_to.window(on_driver.window_handles[1])
    on_driver.get(link)

    # cache review elements
    review_locator = (By.XPATH, "//div[contains(@id,'customer_review-')]")
    review_elements = wait.until(expected_conditions.visibility_of_all_elements_located(review_locator))

    # extract reviews for page
    # if you want to iterate pages put this inside page iteration loop
    reviews = {
        "product": on_driver.title,
        "link": on_driver.current_url,
        "data": []
    }
    for e in review_elements:
        reviews["data"].append(__get_item_review(on_driver, e))

    # return to point 0
    on_driver.close()
    on_driver.switch_to.window(on_driver.window_handles[0])

    # results
    return reviews


# extracts a single item reviews collection
def __get_item_review(on_driver, on_element) -> dict:
    # locators
    author_locator = ".//span[@class='a-profile-name']"
    date_locator = ".//span[@data-hook='review-date']"
    score_locator = ".//a[.//i[@data-hook='review-star-rating']]"
    review_locator = ".//div[@data-hook='review-collapsed']/span"

    # data
    review_data = {
        'author': on_element.find_element_by_xpath(author_locator).text.strip(),
        'date': re.findall('(?<=on ).*', on_element.find_element_by_xpath(date_locator).text.strip())[0],
        'score': re.findall('\\d+.\\d+', on_element.find_element_by_xpath(score_locator).get_attribute("title"))[0],
        'review': on_element.find_element_by_xpath(review_locator).text.strip(),
    }

    # TODO: add more logic to get also the hidden reviews for this item.

    # results data
    return review_data


spider(page_number=1)
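The two regular expressions used in `__get_item_review` above, shown against sample strings taken from the question so the capture behavior is clear: the lookbehind `(?<=on )` keeps only the date after "on", and `\d+.\d+` grabs the numeric rating from the star title.

```python
import re

date_text = "Reviewed in the United States on January 23, 2019"
score_title = "2.0 out of 5 stars"

date = re.findall('(?<=on ).*', date_text)[0]
score = re.findall('\\d+.\\d+', score_title)[0]

print(date)   # January 23, 2019
print(score)  # 2.0
```

Note the unescaped `.` in `\d+.\d+` matches any character, so it also accepts ratings like "2,0" on non-US locales; `\d+\.\d+` would be stricter.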
Score: 1
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/63232892