I have the same problem as this question: Python requests.get fails with 403 forbidden, even after using headers and Session object
Unfortunately, it has no answer. So how can I get around the 403 Forbidden?
I have already tried:
Python requests - 403 forbidden - despite setting User-Agent headers
and:
Python requests. 403 Forbidden
Does anyone know another option to solve this problem?
import requests

url_complete = 'https://smartsub.les.inf.puc-rio.br//media/imagens/5f667ec98b21262d4fc0a9dc5df4d0e4/8c6bbb5844e009eab139442e4024684d.jpg'
session = requests.Session()
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53',
           'referer': 'https://smartsub.les.inf.puc-rio.br/login/?next=/'}
Picture_request = session.get(url_complete, headers=headers)
print(Picture_request)

Posted on 2021-11-18 13:35:07
For anyone with the same problem: the solution in my case was to include the cookie information in the headers.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53',
           'cookie': " ..."}

You can obtain the cookie value the same way as the user-agent, as described below.
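Instead of pasting the raw `cookie` header string, the same idea can be expressed through the `Session` object's own header and cookie stores, which are then sent with every request. This is only a minimal sketch: the cookie names (`sessionid`, `csrftoken`) and values below are placeholders you would replace with whatever your browser's DevTools actually shows for the site.

```python
import requests

# Hypothetical cookie names/values -- copy the real ones from your
# browser's DevTools (Application -> Cookies) after logging in.
cookies = {
    "sessionid": "paste-your-session-cookie-here",
    "csrftoken": "paste-your-csrf-token-here",
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/95.0.4638.69 Safari/537.36",
    "Referer": "https://smartsub.les.inf.puc-rio.br/login/?next=/",
}

session = requests.Session()
# Headers and cookies registered on the session are merged into
# every request the session sends.
session.headers.update(headers)
session.cookies.update(cookies)
```

After this setup, `session.get(url_complete)` sends the cookies automatically, so you don't have to rebuild the header string for each request.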
Posted on 2021-11-16 15:57:45
Try an HTTP proxy, for example 'Zyte'.
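For readers unfamiliar with how a proxy plugs into `requests`: it is passed as a `proxies` mapping, either per request or on the session. The endpoint and credentials below are placeholders, not real Zyte values; your proxy provider gives you the actual host, port, and login.

```python
import requests

# Hypothetical proxy endpoint -- substitute the host/port and
# credentials supplied by your proxy provider.
proxies = {
    "http": "http://user:password@proxy.example.com:8011",
    "https": "http://user:password@proxy.example.com:8011",
}

session = requests.Session()
# Every request made through this session is now routed via the proxy,
# so the target site sees the proxy's IP address instead of yours.
session.proxies.update(proxies)
```

A request would then be made as usual, e.g. `session.get(url_complete, headers=headers)`.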
https://stackoverflow.com/questions/69992169