I'm a beginner at writing web spiders, and I got very confused when using aiohttp. Here is my code:
import asyncio
import os

import aiofiles
import aiohttp
from bs4 import BeautifulSoup

header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
    'Referer': 'https://www.mzitu.com/',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip',
}

class MZiTu(object):
    def __init__(self):
        self.timeout = 5
        self.file_path = 'D:\mzitu'
        self.common_page_url = 'https://www.mzitu.com/page/'
        self.total_page_num = 0
        self.end_album_num = 0
        self.session = None

    async def start(self):
        async with aiohttp.ClientSession(headers=header) as mzt.session:
            for page in range(1, self.total_page_num + 1):
                await self.crawlAlbum(self.common_page_url, page)

    async def crawlAlbum(self, common_url, page_num):
        page_url = self.common_page_url + str(page_num)
        async with self.session.get(page_url, timeout=self.timeout) as resp:
            html = await resp.text()
            bsop = BeautifulSoup(html, 'lxml')
            album_items = bsop.find('ul', {'id': 'pins'}).findAll('li')
            for item in album_items:
                try:
                    album_title = item.find('img').attrs['alt']
                    album_url = item.find('a').attrs['href']
                    if not os.path.exists(os.path.join(self.file_path, album_title)):
                        os.mkdir(os.path.join(self.file_path, album_title))
                    os.chdir(os.path.join(self.file_path, album_title))
                    await self.crawlImgs(album_url)
                except:
                    continue

    async def crawlImgs(self, album_url):
        self.end_album_num = await self.getAlbumTotalNum(album_url)
        for i in range(1, self.end_album_num + 1):
            img_page_url = album_url + str(i)
            async with self.session.get(img_page_url, timeout=self.timeout) as resq:
                html = await resq.text()
                bsop = BeautifulSoup(html, 'lxml')
                try:
                    img_url = bsop.find('div', {'class': 'main-image'}).find('img').attrs['src']
                    await self.downloadImg(i, img_url)
                except:
                    continue

    async def getAlbumTotalNum(self, album_url):
        async with self.session.get(album_url, timeout=self.timeout) as resq:
            html = await resq.text()
            bsop = BeautifulSoup(html, 'lxml')
            total_num = int(bsop.find('div', {'class': 'nav-links'}).findAll('a', {'class': 'page-numbers'})[-2].text)
            return total_num

    async def downloadImg(self, index, img_url):
        async with self.session.get(img_url, timeout=self.timeout) as resq:
            content = await resq.read()
            async with aiofiles.open(str(index) + '.jpg', 'wb') as f:
                await f.write(content)

if __name__ == "__main__":
    mzt = MZiTu()
    mzt.total_page_num = 2
    loop = asyncio.get_event_loop()
    to_do = [mzt.start()]
    wait_future = asyncio.wait(to_do)
    loop.run_until_complete(wait_future)
    loop.close()

My code returns immediately from the first line of the method below. Why? I'm so confused.
    async def getAlbumTotalNum(self, album_url):
        async with self.session.get(album_url, timeout=self.timeout) as resq:
            html = await resq.text()
            bsop = BeautifulSoup(html, 'lxml')
            total_num = int(bsop.find('div', {'class': 'nav-links'}).findAll('a', {'class': 'page-numbers'})[-2].text)
            return total_num

I can't find any errors in my program, and I'm completely lost. Could anyone also recommend some learning materials on aiohttp and async? I find them difficult.
Posted on 2019-03-05 15:45:43
The first problem is that you are using Pokémon exception handling: you really do not want to catch 'em all.

Catch only specific exceptions, or at the very least catch only Exception, make sure you re-raise asyncio.CancelledError (you don't want to block task cancellation), and log or print the exceptions that are raised so you can further clean up your handlers. As a quick fix, I replaced your try:... except: continue blocks with:

    try:
        # ...
    except asyncio.CancelledError:
        raise
    except Exception:
        traceback.print_exc()
        continue

and added import traceback at the top. When you then run the code, you can see why it is failing:
    Traceback (most recent call last):
      File "test.py", line 43, in crawlAlbum
        await self.crawlImgs(album_url)
      File "test.py", line 51, in crawlImgs
        self.end_album_num = await self.getAlbumTotalNum(album_url)
      File "test.py", line 72, in getAlbumTotalNum
        total_num = int(bsop.find('div', {'class': 'nav-links'}).findAll('a', {'class': 'page-numbers'})[-2].text)
    AttributeError: 'NoneType' object has no attribute 'findAll'

Either the site changed how it marks up its links, or the site uses JavaScript to alter the DOM in the browser after the HTML is loaded. Either way, using a bare except: clause without logging the error hides such problems from you and makes debugging very hard.
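The corrected pattern can be sketched offline with a made-up fetch helper (the fetch_one function and example.com URLs below are invented stand-ins for the real session.get() calls):

```python
import asyncio
import traceback

async def fetch_one(url):
    # hypothetical stand-in for a real network request
    if url.endswith('/broken'):
        raise ValueError('bad page: ' + url)
    return 'html of ' + url

async def crawl(urls):
    results = []
    for url in urls:
        try:
            results.append(await fetch_one(url))
        except asyncio.CancelledError:
            # never suppress cancellation; let it propagate
            raise
        except Exception:
            # record the failure instead of silently hiding it
            traceback.print_exc()
            continue
    return results

out = asyncio.run(crawl(['https://example.com/a',
                         'https://example.com/broken',
                         'https://example.com/b']))
print(out)  # ['html of https://example.com/a', 'html of https://example.com/b']
```

The broken page is skipped, but its traceback is printed, so the failure is visible rather than swallowed.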
I would at the very least add some logging that records the URL the code was trying to parse when an exception occurs, so that you can reproduce the problem in an interactive, non-async setting and experiment with different approaches to parsing the pages.
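A minimal sketch of such logging, using the standard logging module (the parse_album_page helper below is made up for illustration, not part of the original code):

```python
import logging

logger = logging.getLogger('mzitu')

def parse_album_page(html):
    # made-up parser: fail when the expected pagination markup is missing
    if 'page-numbers' not in html:
        raise ValueError('no pagination markup found')
    return html.count('page-numbers')

def try_parse(html, page_url):
    try:
        return parse_album_page(html)
    except Exception:
        # logger.exception records the traceback *and* the offending URL,
        # so the page can later be re-fetched and inspected interactively
        logger.exception('failed to parse %s', page_url)
        return None

print(try_parse('<a class="page-numbers">2</a>', 'https://www.mzitu.com/page/1'))  # 1
print(try_parse('<p>rendered by JavaScript</p>', 'https://www.mzitu.com/page/2'))  # None
```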
Rather than using .find() and .findAll() calls, I'd use a CSS selector to find the right elements:

    links = bsop.select(f'div.pagenavi a[href^="{album_url}"] span')
    return 1 if len(links) < 3 else int(links[-2].string)

The above uses the current URL to limit the search to a elements that have a span child element, where the a element's href attribute value at least starts with the current page URL.
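The selector's behaviour can be checked offline against a hand-written snippet shaped like that pagination markup (the example.com URLs are invented for the test):

```python
from bs4 import BeautifulSoup

html = '''
<div class="pagenavi">
  <a href="https://example.com/album/1"><span>1</span></a>
  <a href="https://example.com/album/1/2"><span>2</span></a>
  <a href="https://example.com/album/1/3"><span>3</span></a>
  <a href="https://example.com/album/1/2"><span>next</span></a>
</div>
'''

album_url = 'https://example.com/album/1'
bsop = BeautifulSoup(html, 'html.parser')

# a[href^="..."] matches anchors whose href *starts with* the album URL;
# the trailing "span" descends into the label inside each matching link
links = bsop.select(f'div.pagenavi a[href^="{album_url}"] span')
total = 1 if len(links) < 3 else int(links[-2].string)
print(total)  # 3
```

The second-to-last matching span holds the highest page number, which is exactly what the original [-2] indexing relied on.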
Note that the above is not the only problem, however. When it is fixed, the next exception is:

    Traceback (most recent call last):
      File "test.py", line 59, in crawlImgs
        img_url = bsop.find('div', {'class': 'main-image'}).find('img').attrs['src']
    AttributeError: 'NoneType' object has no attribute 'find'

This one is actually caused by your incorrect handling of album URLs, which assumes they always end in /. Correct it like this:
    async def crawlImgs(self, album_url):
        end_album_num = await self.getAlbumTotalNum(album_url)
        if album_url[-1] != '/':
            album_url += '/'
        for i in range(1, end_album_num + 1):
            img_page_url = album_url + str(i)
            # ...

Note, however, that you don't want to set album_num as an attribute on self! Class instance state is shared between tasks, and although you don't actually create multiple tasks in your code (it is all one sequential task at the moment), you want to avoid mutating shared state.
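To see why, here is a small sketch (the Crawler class below is hypothetical) in which two concurrent tasks share one instance: the value the first task stores on self is overwritten by the second before the first reads it back, while a plain local variable stays private to each task:

```python
import asyncio

class Crawler:
    async def crawl_shared(self, album, num):
        self.end_album_num = num            # stored on self: shared state
        await asyncio.sleep(0.01)           # other tasks run in the meantime
        return (album, self.end_album_num)  # may observe another task's value

    async def crawl_local(self, album, num):
        end_album_num = num                 # plain local: private to this task
        await asyncio.sleep(0.01)
        return (album, end_album_num)

async def main():
    c = Crawler()
    shared = await asyncio.gather(c.crawl_shared('a', 1), c.crawl_shared('b', 2))
    local = await asyncio.gather(c.crawl_local('a', 1), c.crawl_local('b', 2))
    return shared, local

shared, local = asyncio.run(main())
print(shared)  # [('a', 2), ('b', 2)] -- task 'a' lost its own value
print(local)   # [('a', 1), ('b', 2)] -- each task keeps its own value
```

Returning the value from getAlbumTotalNum() and keeping it in a local, as in the corrected crawlImgs() above, avoids this class of bug entirely.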
https://stackoverflow.com/questions/55005664