Can anyone explain how I can get a simple URL together with its response result? I have tried many times with no luck; at the moment all I can print looks like this:
50
0.4110674999999999
........, [<Response [200]>], [<Response [200]>], [<Response [200]>]]
[......, ['http://example.com.com/catalogue/page-48.html'], ['http://example.com.com/catalogue/page-49.html'], ['http://example.com.com/catalogue/page-50.html']]
What I need is something like:
<Response [200]>
https://example.com/
Thank you very much.
PS: Also, why did I get this message in the console after installing the grequests module?
C:\P3\lib\site-packages\grequests.py:22: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (C:\\P3\\lib\\site-packages\\urllib3\\util\\ssl_.py)', 'urllib3.util (C:\\P3\\lib\\site-packages\\urllib3\\util\\__init__.py)'].
curious_george.patch_all(thread=False, select=False)
How can I fix it? Uninstall Python completely, install some patch, or something else? Thanks!
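You do not need to reinstall Python; the MonkeyPatchWarning only means that grequests applied gevent's monkey patching after ssl/urllib3 had already been imported. One commonly suggested workaround (see the gevent issue linked in the warning) is to perform the monkey-patching yourself, at the very top of the script, before anything else is imported. A hedged sketch, assuming gevent is installed alongside grequests:

```python
# Apply gevent's monkey patching first, before any other import,
# so ssl and urllib3 are patched before they are ever loaded.
from gevent import monkey
monkey.patch_all(thread=False, select=False)

import grequests  # its own patch_all call now finds everything already patched
```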
import grequests
from bs4 import BeautifulSoup
import time

def get_urls():
    urls = []
    for x in range(1, 51):
        urls.append(f'http://books.toscrape.com/catalogue/page-{x}.html')
    return urls

def get_data(urls):
    reqs = [grequests.get(link) for link in urls]
    resp = grequests.map(reqs)
    return resp

if __name__ == '__main__':
    start = time.perf_counter()
    urls = get_urls()
    url = len(get_urls())
    resp = get_data(urls)
    respo = len(get_data(urls))
    fin = time.perf_counter() - start

    resp_list = resp
    chunked_resp = list()
    chunk_size = respo
    urls_list = urls
    chunked_url = list()
    chunk_size = url

    print(urls)
    print(url)
    print(resp)
    print(respo)
    print(fin)

    resp_list = resp
    chunked_resp = list()
    chunk_size = 1
    for i in range(0, len(resp_list), chunk_size):
        chunked_resp.append(resp_list[i:i+chunk_size])
    print(chunked_resp)

    urls_list = urls
    chunked_url = list()
    chunk_size = 1
    for i in range(0, len(urls_list), chunk_size):
        chunked_url.append(urls_list[i:i+chunk_size])
    print(chunked_url)

Posted on 2022-02-22 12:45:03
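To print each response next to its URL, the chunking loops are not needed at all: grequests.map() returns the responses in the same order as the requests passed in, so the two lists can simply be zipped together. A minimal sketch of the pairing logic, using string stand-ins for the real Response objects so it runs without the network:

```python
# urls as built by get_urls(); grequests.map() returns responses
# in the same order, so zip() pairs each response with its URL.
urls = [f'http://books.toscrape.com/catalogue/page-{x}.html' for x in range(1, 51)]
resp = ['<Response [200]>'] * len(urls)  # stand-ins for real Response objects

for r, link in zip(resp, urls):
    print(r)
    print(link)
```

In the real script, replace the stand-in list with `resp = get_data(urls)`; the loop then prints `<Response [200]>` followed by the matching page URL, which is the output format asked for above.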
OK, I could only find a solution for printing the URLs:
def get_data(urls):
    reqs = [grequests.get(link) for link in urls]
    resp = grequests.get(reqs)
    return resp

if __name__ == '__main__':
    start = time.perf_counter()
    urls = get_urls()
    resp = get_data(urls)
    resp = '\n'.join(resp)
    url = '\n'.join(urls)
http://books.toscrape.com/catalogue/page-48.html
http://books.toscrape.com/catalogue/page-49.html
http://books.toscrape.com/catalogue/page-50.html
    resp = '\n'.join(resp)
TypeError: can only join an iterable
But for resp I get this TypeError: can only join an iterable.
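The TypeError comes from the changed line `resp = grequests.get(reqs)`: grequests.get() builds a single, not-yet-sent AsyncRequest object, which is not iterable, so '\n'.join() rejects it. Even after switching back to grequests.map(reqs), join() would still fail, because it only accepts strings and the list would contain Response objects; each one has to be converted with str() first. A short sketch of the conversion step, again with string stand-ins for the real Response objects:

```python
# str.join() only accepts an iterable of strings, so convert each
# element with str() (a no-op for the stand-ins, required for real
# Response objects).
responses = ['<Response [200]>', '<Response [200]>']  # stand-ins
joined = '\n'.join(str(r) for r in responses)
print(joined)
```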
PS: I have only been learning Python for a month at most... :(
https://stackoverflow.com/questions/71221000