I need to extract some information from this Amazon page.
Specifically, I am interested in these fields:
Author | Star | Date | Title | Review
For example:
Gi
1.0 out of 5 stars Unacceptable Launch State for PS4
Reviewed in the United States on September 14, 2019
Platform: PlayStation 4 | Edition: Super Deluxe | Verified Purchase
I'm a huge fan of this franchise. Own all of the games, for both PS4 and PC. Waited a very long time for this game and I'm speechless. You can find many reviews of the gameplay and other aspects of the game, but I'll focus on my initial thoughts and will update accordingly. First and foremost, the performance on the PS4 Slim is terrible. Frames per second is unacceptable for a split screen configuration, where scrolling between screens and reviewing the map and fighting a screen full of NPCs is horrendous. Take 2 / Gearbox couldn't even get the scaling correct with the menus, loot menus, and any text (aside from subtitles) and it's similar to reading 8 pt font on a 65 inch screen. There is no vertical split screen and no other options to improve performance. Missions are uneventful and no concise storyline that enables campaign mode truly enjoyable. In many aspects, you'd wish this game was more linear than it is, but it's storyline isn't inspiring at all. Only after a few hours of gameplay, we decided it's not worth our time until the developers make significant improvements with performance. I wish we could refund this garbage.
Since I have never done this before, I would like to know whether this is something I can do with Scrapy/BeautifulSoup/Selenium, or whether it requires an API, given that this information comes from:
Author under <span class="a-profile-name">Gi</span>
Rating <span class="a-icon-alt">1.0 out of 5 stars</span>
Review <div data-hook="review-collapsed" aria-expanded="false" class="a-expander-content a-expander-partial-collapse-content" style="padding-bottom: 19px;"> ...TEXT...</div>
Posted on 2020-07-31 08:59:44
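The HTML fragments above are enough to prototype the extraction offline before committing to a framework. A minimal sketch, assuming the bs4 (BeautifulSoup) package is installed and using the class names shown in the question (they may change on the live page):

```python
# Parse the review fragments from the question with BeautifulSoup.
# The HTML below is a stand-in stitched together from the snippets shown;
# class names come from the question and are not guaranteed to be current.
from bs4 import BeautifulSoup

html = """
<div class="review">
  <span class="a-profile-name">Gi</span>
  <i class="review-rating"><span class="a-icon-alt">1.0 out of 5 stars</span></i>
  <span class="review-date">Reviewed in the United States on September 14, 2019</span>
  <div data-hook="review-collapsed" class="a-expander-content">I'm a huge fan of this franchise.</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
author = soup.select_one("span.a-profile-name").get_text(strip=True)
rating = soup.select_one("span.a-icon-alt").get_text(strip=True)
date = soup.select_one("span.review-date").get_text(strip=True)
review = soup.select_one('div[data-hook="review-collapsed"]').get_text(strip=True)

print(author)   # Gi
print(rating)   # 1.0 out of 5 stars
```

Note that fetching the live page is a separate problem: Amazon serves much of this content to real browsers only, which is where Scrapy's downloader middleware or Selenium comes in.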
Scrapy would be a good choice for this task. A fairly simple spider would be able to collect the required information.
import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    start_urls = ['https://www.amazon.com/dp/B07Q6H83VY']

    def parse(self, response):
        for row in response.css('div.review'):
            item = {}
            item['author'] = row.css('span.a-profile-name::text').extract_first()
            rating = row.css('i.review-rating > span::text').extract_first().strip().split(' ')[0]
            item['rating'] = int(float(rating.strip().replace(',', '.')))
            item['title'] = row.css('span.review-title > span::text').extract_first()
            created_date = row.css('span.review-date::text').extract_first().strip()
            item['created_date'] = created_date
            review_content = row.css('div.reviewText ::text').extract()
            review_content = [rc.strip() for rc in review_content if rc.strip()]
            item['content'] = ', '.join(review_content)
            yield item

Example output:
{
"author": "Jhona Diaz",
"rating": 4,
"title": "Recomendable solo si eres fan ya que si está algo caro",
"created_date": "Reviewed in Mexico on November 23, 2019",
"content": "Buena calidad y pues muy completo"
},
{
"author": "MANUEL MENDOZA OLVERA",
"rating": 5,
"title": "Perfecto Estado",
"created_date": "Reviewed in Mexico on September 28, 2019",
"content": "excelente, la edición es de caja metálica y llegó intacta"
},
Posted on 2020-07-30 23:03:59
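The one non-obvious step in the spider above is the rating conversion, which turns the text "1.0 out of 5 stars" into the integer 1. The same logic can be checked in isolation as a standalone function:

```python
# Standalone version of the spider's rating parsing:
# take the first token of the star text and cast it to an int.
# The comma-to-dot replacement handles locales that use a decimal
# comma, e.g. a German "4,0 von 5 Sternen".
def parse_rating(text):
    first = text.strip().split(' ')[0]        # "1.0" or "4,0"
    return int(float(first.replace(',', '.')))

print(parse_rating("1.0 out of 5 stars"))     # 1
print(parse_rating("4,0 von 5 Sternen"))      # 4
```

If the rating span is missing on some reviews, `extract_first()` returns None and the spider will raise on `.strip()`, so a production spider should guard that case.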
First, run pip install selenium.
Second, download PhantomJS, a headless browser that can render JavaScript-driven sites, from https://phantomjs.org/download.html.
from selenium import webdriver

# The path below points to the phantomjs binary unpacked in step 2
driver = webdriver.PhantomJS(executable_path='C:\\Users\\nayef\\Desktop\\New folder\\phantomjs-2.1.1-windows\\bin\\phantomjs')
driver.get('https://www.amazon.com/dp/B07Q6H83VY')
p_element = driver.find_element_by_id('deliveryMessageMirId')
print(p_element.text)
# result:
# Arrives: Friday, Aug 7 Details
https://stackoverflow.com/questions/63181993