I'm practicing web scraping with Python 3, and I'm crawling this site:
http://www.keri.org/web/www/research_0201?p_p_id=EXT_BBS&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_EXT_BBS_struts_action=%2Fext%2Fbbs%2Fview&_EXT_BBS_sCategory=&_EXT_BBS_sKeyType=&_EXT_BBS_sKeyword=&_EXT_BBS_curPage=1&_EXT_BBS_optKeyType1=&_EXT_BBS_optKeyType2=&_EXT_BBS_optKeyword1=&_EXT_BBS_optKeyword2=&_EXT_BBS_sLayoutId=0
I want to extract the PDF download address from the HTML.
In the HTML, the PDF download URL is
http://www.keri.org/web/www/research_0201?p_p_id=EXT_BBS&p_p_lifecycle=1&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_EXT_BBS_struts_action=%2Fext%2Fbbs%2Fget_file&_EXT_BBS_extFileId=5326
but what my crawler gets back is
http://www.keri.org/web/www/research_0201**;jsessionid=3875698676A3025D8877C4EEBA67D6DF**p_p_id=EXT_BBS&p_p_lifecycle=1&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_EXT_BBS_struts_action=%2Fext%2Fbbs%2Fget_file&_EXT_BBS_extFileId=5306
I can't even download the file from that address.
Where does the jsessionid come from?
I can strip it out, but I don't understand why it's there.
And why is the URL so long? LOL
Posted on 2017-02-06 07:49:47
The jsessionid is a Java servlet session ID that the server appends to the URL (URL rewriting) when the client doesn't present a session cookie. I tested in code that the jsessionid does not affect downloading the file:
import requests, bs4

# Fetch the listing page
r = requests.get('http://www.keri.org/web/www/research_0201?p_p_id=EXT_BBS&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_EXT_BBS_struts_action=%2Fext%2Fbbs%2Fview&_EXT_BBS_sCategory=&_EXT_BBS_sKeyType=&_EXT_BBS_sKeyword=&_EXT_BBS_curPage=1&_EXT_BBS_optKeyType1=&_EXT_BBS_optKeyType2=&_EXT_BBS_optKeyword1=&_EXT_BBS_optKeyword2=&_EXT_BBS_sLayoutId=0')
soup = bs4.BeautifulSoup(r.text, 'lxml')

# Each download link's title sits in the <a> tag preceding it
down_links = [(a.get('href'), a.find_previous('a').text) for a in soup('a', class_='download')]

headers = {'User-Agent': 'Mozilla/5.0'}  # a browser-like UA; some servers reject requests' default
for link, title in down_links:
    filename = title + '.pdf'
    r = requests.get(link, stream=True, headers=headers)
    with open(filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            f.write(chunk)

https://stackoverflow.com/questions/42062175
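If the jsessionid ever does break a request, it can be stripped from the URL before downloading. A minimal sketch using Python's `re` module (the helper name `strip_jsessionid` is my own; the sample URL is the one from the question, abbreviated to two query parameters):

```python
import re

def strip_jsessionid(url):
    # Remove a ';jsessionid=...' path parameter: everything from
    # ';jsessionid=' up to (but not including) the '?' that starts
    # the query string, or to the end of the URL.
    return re.sub(r';jsessionid=[^?]*', '', url)

url = ('http://www.keri.org/web/www/research_0201'
       ';jsessionid=3875698676A3025D8877C4EEBA67D6DF'
       '?p_p_id=EXT_BBS&_EXT_BBS_extFileId=5306')
print(strip_jsessionid(url))
# http://www.keri.org/web/www/research_0201?p_p_id=EXT_BBS&_EXT_BBS_extFileId=5306
```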