I wrote a Python script that uses proxies to scrape the links of different posts while traversing the different pages of a website. It is supposed to pick a proxy at random from a list, send a request to the site, and finally parse the items. If any proxy does not work, it should be removed from the list.
I thought the way I used the list of proxies and the list of URLs in ThreadPool(10).starmap(make_requests, zip(proxyVault, lead_url)) was correct, but it doesn't produce any results; instead, the script just hangs.
How can I pass the proxies and the links to the ThreadPool so that the script produces results?
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from multiprocessing.pool import ThreadPool
from itertools import cycle
import random

base_url = 'https://stackoverflow.com/questions/tagged/web-scraping'
lead_url = ["https://stackoverflow.com/questions/tagged/web-scraping?sort=newest&page={}&pagesize=15".format(page) for page in range(1, 6)]
proxyVault = ['104.248.159.145:8888', '113.53.83.252:54356', '206.189.236.200:80', '218.48.229.173:808', '119.15.90.38:60622', '186.250.176.156:42575']

def make_requests(proxyVault, lead_url):
    while True:
        random.shuffle(proxyVault)
        global pitem
        pitem = cycle(proxyVault)
        proxy = {'https': 'http://{}'.format(next(pitem))}
        try:
            res = requests.get(lead_url, proxies=proxy)
            soup = BeautifulSoup(res.text, "lxml")
            [get_title(proxy, urljoin(base_url, item.get("href"))) for item in soup.select(".summary .question-hyperlink")]
        except Exception:
            try:
                proxyVault.pop(0)
                make_requests(proxyVault, lead_url)
            except Exception:
                pass

def get_title(proxy, itemlink):
    res = requests.get(itemlink, proxies=proxy)
    soup = BeautifulSoup(res.text, "lxml")
    print(soup.select_one("h1[itemprop='name'] a").text)

if __name__ == '__main__':
    ThreadPool(10).starmap(make_requests, zip(proxyVault, lead_url))

By the way, the proxies used above are just placeholders.
Posted on 2018-12-29 20:47:00
The problem with the code is that it creates many endless loops in the threads. Also, the way the proxies were handled looked a bit odd to me, so I changed it. I also think you misunderstood how data is sent to the threads: each worker gets one element of the iterable, not the whole thing (see the short sketch below). So I changed some names to reflect that.
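As a quick illustration of that point (my sketch, not part of the original answer, using stand-in data): map hands the worker one element per call, and starmap combined with zip hands it one unpacked (proxy, url) pair per call, so a worker written to receive the whole lists never gets them.

from multiprocessing.pool import ThreadPool

# Stand-in data mirroring the question's variables
proxies = ['proxy-a', 'proxy-b', 'proxy-c']
urls = ['url-1', 'url-2', 'url-3']

def fetch(url):
    # With map, each call receives a single url
    return f'fetched {url}'

def fetch_via(proxy, url):
    # With starmap + zip, each call receives one (proxy, url) pair, unpacked
    return f'fetched {url} via {proxy}'

with ThreadPool(2) as pool:
    print(pool.map(fetch, urls))
    print(pool.starmap(fetch_via, zip(proxies, urls)))

Note also that zip stops at the shorter iterable, so with six proxies and five URLs the original starmap call would only ever make five calls.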
The way it works now is that each thread gets its own URL from lead_url and picks a random proxy from proxyVault. It fetches the page, parses it, and calls get_title on each of the parsed links.
If a request fails because of the proxy, that proxy is removed from the list so it won't be used again, and make_requests is called again, which randomly picks a new proxy from the ones that are still available. I did not change the actual parsing, because I cannot tell whether it is what you wanted.
Runnable code:
https://repl.it/@zlim00/unable-to-pass-proxies-and-links-to-the-threadpool-to-get-re
from bs4 import BeautifulSoup
from multiprocessing.pool import ThreadPool
from random import choice
import requests
from urllib.parse import urljoin

base_url = 'https://stackoverflow.com/questions/tagged/web-scraping'
lead_url = [f'https://stackoverflow.com/questions/tagged/web-scraping?sort='
            f'newest&page={page}&pagesize=15' for page in range(1, 6)]
proxyVault = ['36.67.57.45:53367', '5.202.150.233:42895',
              '85.187.184.129:8080', '109.195.23.223:45947']

def make_requests(url):
    proxy_url = choice(proxyVault)
    proxy = {'https': f'http://{proxy_url}'}
    try:
        res = requests.get(url, proxies=proxy)
        soup = BeautifulSoup(res.text, "lxml")
        [get_title(proxy, urljoin(base_url, item.get("href")))
         for item in soup.select(".summary .question-hyperlink")]
    except requests.exceptions.ProxyError:
        # Check so that the bad proxy was not removed by another thread
        if proxy_url in proxyVault:
            proxyVault.remove(proxy_url)
            print(f'Removed bad proxy: {proxy_url}')
        return make_requests(url)

def get_title(proxy, itemlink):
    res = requests.get(itemlink, proxies=proxy)
    soup = BeautifulSoup(res.text, "lxml")
    print(soup.select_one("h1[itemprop='name'] a").text)

if __name__ == '__main__':
    ThreadPool(10).map(make_requests, lead_url)
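One caveat worth adding (my note, not part of the answer above): the membership test and list.remove are two separate operations on the shared list, so another thread can still remove the proxy in between, which would make the remove call raise ValueError. A minimal sketch of making the check-and-remove atomic with a lock, assuming the same shared proxyVault (discard_proxy is a hypothetical helper name):

import threading

proxyVault = ['36.67.57.45:53367', '5.202.150.233:42895']  # shared list as above
proxy_lock = threading.Lock()  # guards all mutation of proxyVault

def discard_proxy(proxy_url):
    # Holding the lock makes the check-and-remove a single atomic step
    with proxy_lock:
        if proxy_url in proxyVault:
            proxyVault.remove(proxy_url)
            print(f'Removed bad proxy: {proxy_url}')

The except branch would then call discard_proxy(proxy_url) instead of checking and removing inline.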
Posted on 2018-12-26 23:03:02
Maybe you can use a different approach to obtain proxies, like this:
import random

import requests
from bs4 import BeautifulSoup

def get_proxy():
    url = 'https://free-proxy-list.net/anonymous-proxy.html'
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'lxml')
    table = soup.find('table', attrs={'id': 'proxylisttable'})
    table_body = table.find('tbody')
    proxies = table_body.find_all('tr')
    proxy_row = random.choice(proxies).find_all('td')
    return proxy_row[0].text + ':' + proxy_row[1].text
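For instance (a hypothetical combination of the two answers, not from either of them), get_proxy could replace the static proxyVault, drawing a fresh proxy from free-proxy-list.net on every attempt:

# Hypothetical variant of make_requests that scrapes a fresh proxy per
# attempt instead of choosing from a fixed proxyVault list
def make_requests(url):
    proxy_url = get_proxy()
    proxy = {'https': f'http://{proxy_url}'}
    try:
        res = requests.get(url, proxies=proxy)
        print(res.status_code)
    except requests.exceptions.ProxyError:
        # Free proxies churn constantly; just draw another one and retry
        return make_requests(url)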
https://stackoverflow.com/questions/53937949