
Getting a list of things to do from TripAdvisor

Stack Overflow user
Asked on 2017-04-17 15:29:11
Answers: 1 · Views: 1.4K · Followers: 0 · Votes: 0

How do I get the list of "things to do"? I am new to web scraping and I don't know how to loop over each page to get the href of every "thing to do". Can you tell me where I am going wrong? Any help would be highly appreciated. Thanks in advance.

import requests
import re
from bs4 import BeautifulSoup
from urllib.request import urlopen



offset = 0
url = 'https://www.tripadvisor.com/Attractions-g255057-Activities-oa' + str(offset) + '-Canberra_Australian_Capital_Territory-Hotels.html#ATTRACTION_LIST_CONTENTS'
urls = []
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")


for link in soup.find_all('a', {'last'}):
    page_number = link.get('data-page-number')
    last_offset = int(page_number) * 30
    print('last offset:', last_offset)


for offset in range(0, last_offset, 30):
    print('--- page offset:', offset, '---')
    url = 'https://www.tripadvisor.com/Attractions-g255057-oa' + str(offset) + '-Canberra_Australian_Capital_Territory-Hotels.html#ATTRACTION_LIST_CONTENTS'
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "html.parser")

    for link in soup.find_all('a', {'property_title'}):
        iurl='https://www.tripadvisor.com/Attraction_Review-g255057' + link.get('href')
        print(iurl)

Basically, I want the href of every "thing to do". My expected output for the "things to do" is:

   https://www.tripadvisor.com/Attraction_Review-g255057-d3377852-Reviews-Weston_Park-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Attraction_Review-g255057-d591972-Reviews-Canberra_Museum_and_Gallery-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Attraction_Review-g255057-d312426-Reviews-Lanyon_Homestead-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Attraction_Review-g255057-d296666-Reviews-Australian_National_University-Canberra_Australian_Capital_Territory.html

For comparison, I used the code below to get the href of every restaurant in the city of Canberra. My restaurant code is:

import requests
import re
from bs4 import BeautifulSoup
from urllib.request import urlopen



with requests.Session() as session:
    for offset in range(0, 1050, 30):
        url = 'https://www.tripadvisor.com/Restaurants-g255057-oa{0}-Canberra_Australian_Capital_Territory.html#EATERY_LIST_CONTENTS'.format(offset)

        soup = BeautifulSoup(session.get(url).content, "html.parser")
        for link in soup.select('a.property_title'):
            iurl = 'https://www.tripadvisor.com/' + link.get('href')
            print(iurl)        

The output of the restaurant code is:

   https://www.tripadvisor.com/Restaurant_Review-g255057-d1054676-Reviews-Lanterne_Rooms-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Restaurant_Review-g255057-d755055-Reviews-Courgette_Restaurant-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Restaurant_Review-g255057-d6893178-Reviews-Pomegranate-Canberra_Australian_Capital_Territory.html
   https://www.tripadvisor.com/Restaurant_Review-g255057-d7262443-Reviews-Les_Bistronomes-Canberra_Australian_Capital_Territory.html
    .
    .
    .
    .

1 Answer

Stack Overflow user

Accepted answer

Answered on 2017-04-18 11:51:14

Well, it's not that hard; you just need to know which tags to use.

Let me explain with this example:

import requests
from bs4 import BeautifulSoup

base_url = 'https://www.tripadvisor.com/'  ## we need this to join the links later ##
main_page = 'https://www.tripadvisor.com/Attractions-g255057-Activities-oa{}-Canberra_Australian_Capital_Territory-Hotels.html#ATTRACTION_LIST_CONTENTS'
links = []

## get the initial page to find the number of pages ##
r = requests.get(main_page.format(0))  
soup = BeautifulSoup(r.text, "html.parser")
## select the last page from the list of pages ('a', {'class':'pageNum taLnk'}) ##
last_page = max([ int(page.get('data-offset')) for page in soup.find_all('a', {'class':'pageNum taLnk'}) ])

## now iterate over that range (first page, last page, number of links), and extract the links from each page ##
for i in range(0, last_page + 30, 30):
    page = main_page.format(i)
    soup = BeautifulSoup(requests.get(page).text, "html.parser") ## get the next page and parse it with BeautifulSoup ##  
    ## get the hrefs from ('div', {'class':'listing_title'}), and join them with base_url to make the links ##
    links += [ base_url + link.find('a').get('href') for link in soup.find_all('div', {'class':'listing_title'}) ]

for link in links : 
    print(link)

That gives us 8 pages and 212 links in total (30 on each page, and 2 on the last one).

I hope this clears things up a bit.
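
If you also want to keep the results, here is a minimal sketch along the same lines. It assumes the same page structure as above ('pageNum taLnk' links for pagination, 'div.listing_title' for the items), reuses a single requests.Session, drops duplicate links, and writes them to a CSV file (the file name 'things_to_do.csv' is just a placeholder):

import csv
import requests
from bs4 import BeautifulSoup

base_url = 'https://www.tripadvisor.com/'
main_page = ('https://www.tripadvisor.com/Attractions-g255057-Activities-oa{}'
             '-Canberra_Australian_Capital_Territory-Hotels.html#ATTRACTION_LIST_CONTENTS')

with requests.Session() as session:
    ## find the last page offset from the pagination links, exactly as above ##
    soup = BeautifulSoup(session.get(main_page.format(0)).text, 'html.parser')
    last_page = max(int(a.get('data-offset'))
                    for a in soup.find_all('a', {'class': 'pageNum taLnk'}))

    ## collect the attraction links from every page ##
    links = []
    for offset in range(0, last_page + 30, 30):
        soup = BeautifulSoup(session.get(main_page.format(offset)).text, 'html.parser')
        links += [base_url + div.find('a').get('href')
                  for div in soup.find_all('div', {'class': 'listing_title'})]

## drop duplicates while keeping order, then write one URL per row ##
## ('things_to_do.csv' is just a placeholder file name) ##
unique_links = list(dict.fromkeys(links))
with open('things_to_do.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['url'])
    writer.writerows([link] for link in unique_links)

print(len(unique_links), 'links written')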

Votes: 2
Original question:

https://stackoverflow.com/questions/43454459
