I'm retrieving track data from the Spotify API — 10 tracks in total — but it takes about 2-3 seconds to run. Is there a way to speed it up with a Python library such as multiprocessing or something similar?
track_url = []
track_name = []
album_image = []
for i in range(len(tracks_recommend)):
    track_id = tracks_recommend.at[i, 'id']
    # call to spotify api
    res = spotify.track(track_id=track_id)
    track_url.append(res['external_urls'])
    track_name.append(res['name'])
    album_image.append(res['album']['images'][0]['url'])

Posted 2022-05-23 16:39:30
> Is there a way to speed it up with a Python library such as multiprocessing?

Yes — running the API requests in parallel works well here. This will get you started:
from multiprocessing.pool import ThreadPool as Pool

def recommend(track_id):
    return spotify.track(track_id=track_id)

track_ids = [tracks_recommend.at[i, 'id']
             for i in range(len(tracks_recommend))]

with Pool(5) as pool:
    for res in pool.map(recommend, track_ids):
        ...

Posted 2022-05-23 16:39:41
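To see why the thread pool helps, here is a self-contained timing sketch. Since `spotify.track` needs credentials, it substitutes a hypothetical `fake_track` function that sleeps 0.2 s to mimic network latency; the pool size of 5 matches the answer above.

```python
import time
from multiprocessing.pool import ThreadPool as Pool

def fake_track(track_id):
    time.sleep(0.2)  # stand-in for the network round trip of spotify.track(...)
    return {'name': f'track-{track_id}'}

track_ids = list(range(10))

# Serial: 10 calls back to back, roughly 10 * 0.2 s.
start = time.perf_counter()
serial = [fake_track(t) for t in track_ids]
serial_time = time.perf_counter() - start

# Parallel: up to 5 calls in flight at once, so the waits overlap.
start = time.perf_counter()
with Pool(5) as pool:
    parallel = pool.map(fake_track, track_ids)
parallel_time = time.perf_counter() - start

print(f"serial {serial_time:.2f}s, parallel {parallel_time:.2f}s")
```

`ThreadPool` (threads, not processes) is the right tool here because the work is I/O-bound: the threads spend their time waiting on the network, so the GIL is not a bottleneck.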
It depends on whether Spotify rate-limits you to one request at a time. If they don't, you could start with something like this:
from multiprocessing.pool import ThreadPool as Pool  # threads suit I/O-bound API calls

def process_track(track_id):
    # call to spotify api
    res = spotify.track(track_id=track_id)
    return (res['external_urls'], res['name'], res['album']['images'][0]['url'])

with Pool(4) as p:  # replace 4 with whatever number you want
    track_ids = [tracks_recommend.at[i, 'id'] for i in range(len(tracks_recommend))]
    output = p.map(process_track, track_ids)
    track_url, track_name, album_image = zip(*output)

This won't help with the latency of a single request, but it may increase throughput.
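The `zip(*output)` step at the end transposes the pool's list of per-track tuples into one sequence per field. A minimal sketch with made-up values:

```python
# output is what p.map returns: one (url, name, image) tuple per track.
output = [
    ('https://open.spotify.com/track/a', 'Song A', 'https://img/a.jpg'),
    ('https://open.spotify.com/track/b', 'Song B', 'https://img/b.jpg'),
]

# zip(*rows) unpacks the rows as arguments, pairing up the i-th element
# of each tuple — i.e. a transpose from rows to columns.
track_url, track_name, album_image = zip(*output)
print(track_name)  # ('Song A', 'Song B')
```

Note that `zip` returns tuples; wrap each in `list(...)` if you need mutable lists as in the original question.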
https://stackoverflow.com/questions/72351886