I'm working with a friend to try to pull the results of several web pages into one dataframe (https://motos.coches.net/ocasion/barcelona/?pg=1&fi=oTitle&or=1&Tops=1, where the page number increments). I haven't done much web scraping before; I've tried Pandas read_html and BeautifulSoup, but I can't figure out where to start.
Ideally, we'd like to get all 5,000+ results into one CSV showing title, date posted, mileage, year, cc and location.
Is this the sort of thing that is easy to do with Pandas and a web-scraping library? Thanks for your help!
Posted on 2017-11-22 00:56:56
You haven't shown your own attempt at a solution, but you could do something like this:
offset = 0
pg = 1
rows = 20  # results per page; adjust to the site's page size
base_url = 'https://url?start={0}&pg={1}'
url = base_url.format(offset, pg)
results = first page from BeautifulSoup scrape or requests.get
all_results = results
while results:
    # Rebuild url based on current offset and page.
    offset += rows
    pg += 1
    url = base_url.format(offset, pg)
    results = next page from BeautifulSoup scrape or requests.get
    all_results += results
Posted on 2017-11-22 05:54:01
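The outline above can be made concrete with a small runnable sketch. Here `fetch_page` stands in for the `requests.get` + BeautifulSoup step, and the fake `pages` dict simulates a site that runs out of results after three pages; the function and parameter names are illustrative, not the real site's API:

```python
def fetch_all(fetch_page, rows=20):
    """Collect results page by page until an empty page signals the end.

    fetch_page(offset, pg) should return the list of results for one
    page; in a real scraper it would wrap requests.get + BeautifulSoup.
    """
    offset, pg = 0, 1
    all_results = []
    while True:
        results = fetch_page(offset, pg)
        if not results:       # empty page: past the last page of results
            break
        all_results += results
        offset += rows        # rebuild the url parameters for the next page
        pg += 1
    return all_results

# Simulate three pages of results to show the loop terminates.
pages = {1: ['a', 'b'], 2: ['c', 'd'], 3: ['e']}
print(fetch_all(lambda offset, pg: pages.get(pg, [])))
# ['a', 'b', 'c', 'd', 'e']
```

In practice you would also stop on a non-200 status code, as the `status_code == 404` check in the other answer does.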
I came up with a solution, though it may not be the most elegant one:
import requests
from bs4 import BeautifulSoup
import pandas as pd
from time import sleep

base_url = 'https://motos.coches.net/ocasion/barcelona/?pg={}&fi=CreationDate&or=-1'
# page number excluded from base_url so it can be filled in below

res = []
for page in range(1, 300):  # last page unknown
    # fill in the page number here
    request = requests.get(base_url.format(page),
                           headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'})
    if request.status_code == 404:  # added just in case of error
        break
    soup = BeautifulSoup(request.content, 'lxml')
    for url in soup.find_all('div', class_='col2-grid'):
        res.append([
            # str() keeps each node as text (not bytes), so the pandas
            # .str methods below work; span tags are stripped later
            str(url.find('h2', class_='floatleft').contents[0])
            , str(url.find('p', class_='data floatright').contents[0])
            , str(url.find('p', class_='preu').contents[0])
            , str(url.find('span', class_='d1').contents[0])
            , str(url.find('span', class_='d2').contents[0])
            , str(url.find('span', class_='d3').contents[0])
            , str(url.find('span', class_='lloc').contents[0])
        ])
    sleep(2)  # pause between requests

# create dataframe
df = pd.DataFrame(data=res, columns=['title', 'date_posted', 'price_in_euros', 'km', 'year', 'engine_size', 'location'])
df = df.replace({'<span>|</span>': ''}, regex=True)  # remove span tags
df['engine_size_metric'] = None
df.loc[df['engine_size'].str.contains(' cc'), 'engine_size_metric'] = 'cc'
df.loc[df['engine_size'].str.contains(' kw'), 'engine_size_metric'] = 'kw'
df['price_in_euros'] = df['price_in_euros'].replace({r'\.|€': ''}, regex=True)
df['price_in_euros'] = df['price_in_euros'].astype(float)
df['km'] = df['km'].replace({r'\.| km': ''}, regex=True)
df['km'] = df['km'].replace({'N/D': None}, regex=True)
df['km'] = df['km'].astype(float)
df['engine_size'] = df['engine_size'].str.split(' ').str[0].replace({r'\.|cc|kw': ''}, regex=True)
df.loc[df['engine_size'] == '', 'engine_size'] = None
df['engine_size'] = df['engine_size'].astype(float)
df.to_csv('output.csv', index=False)
https://stackoverflow.com/questions/47418322
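The string-cleaning steps at the end of that script can be checked in isolation on a couple of hand-made rows. The strings below mimic the site's formatting ("3.500 €", "12.000 km", "N/D" for missing mileage); the values themselves are invented:

```python
import pandas as pd

# made-up rows in the same format as the scraped text
df = pd.DataFrame({
    'price_in_euros': ['3.500 €', '12.900 €'],
    'km': ['12.000 km', 'N/D'],
    'engine_size': ['600 cc', '35 kw'],
})

# same regex cleanup as in the full script
df['price_in_euros'] = df['price_in_euros'].replace({r'\.|€': ''}, regex=True).astype(float)
df['km'] = df['km'].replace({r'\.| km': ''}, regex=True).replace({'N/D': None}).astype(float)
df['engine_size'] = df['engine_size'].str.split(' ').str[0].astype(float)

print(df['price_in_euros'].tolist())  # [3500.0, 12900.0]
print(df['km'].tolist())              # [12000.0, nan]
print(df['engine_size'].tolist())     # [600.0, 35.0]
```

Note that "N/D" becomes NaN after the `astype(float)`, which is what lets the km column stay numeric despite missing values.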