
Multithreading to download NCBI files in Python

Stack Overflow user
Asked on 2014-03-23 01:13:30
2 answers · 583 views · 0 following · Votes: 2

So, recently I have taken on the task of downloading a large number of files from the NCBI database. However, I have run into cases where I have to create multiple databases. The code below downloads all the viruses from the NCBI website. My question is: is there any way to speed up the process of downloading these files?

Currently the runtime of this program is more than 5 hours. I have looked into multithreading and could never get it to work, because some of these files take more than 10 seconds to download and I do not know how to handle stalling. Also, is there a way of handling urllib2.HTTPError: HTTP Error 502: Bad Gateway? I sometimes get this with certain combinations of retstart and retmax; it crashes the program and I have to restart the download from a different position by changing the 0 in the for statement.

Code language: python
import urllib2
from BeautifulSoup import BeautifulSoup

#This is the SearchQuery into NCBI. Spaces are replaced with +'s.
SearchQuery = 'viruses[orgn]+NOT+Retroviridae[orgn]'
#This is the Database that you are searching.
database = 'protein'
#This is the output file for the data
output = 'sample.fasta'


#This is the base url for NCBI eutils.
base = 'http://eutils.ncbi.nlm.nih.gov/entrez/eutils/'
#Create the search string from the information above
esearch = 'esearch.fcgi?db='+database+'&term='+SearchQuery+'&usehistory=y'
#Create your esearch url
url = base + esearch
#Fetch your esearch using urllib2
print url
content = urllib2.urlopen(url)
#Open url in BeautifulSoup
doc = BeautifulSoup(content)
#Grab the amount of hits in the search
Count = int(doc.find('count').string)
#Grab the WebEnv or the history of this search from usehistory.
WebEnv = doc.find('webenv').string
#Grab the QueryKey
QueryKey = doc.find('querykey').string
#Set the max amount of files to fetch at a time. Default is 500 files.
retmax = 10000
#Create the fetch string
efetch = 'efetch.fcgi?db='+database+'&WebEnv='+WebEnv+'&query_key='+QueryKey
#Select the output format and file format of the files. 
#For table visit: http://www.ncbi.nlm.nih.gov/books/NBK25499/table/chapter4.chapter4_table1
format = 'fasta'
type = 'text'
#Create the options string for efetch
options = '&rettype='+format+'&retmode='+type


#For statement 0 to Count counting by retmax. Use xrange over range
for i in xrange(0,Count,retmax):
    #Create the position string
    position = '&retstart='+str(i)+'&retmax='+str(retmax)
    #Create the efetch URL
    url = base + efetch + position + options
    print url
    #Grab the results
    response = urllib2.urlopen(url)
    #Write output to file
    with open(output, 'a') as file:
        for line in response.readlines():
            file.write(line)
    #Gives a sense of where you are
    print Count - i - retmax
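
On the HTTP Error 502 crashes mentioned above: one common workaround (a minimal sketch, not part of the original question; fetch_with_retry and its max_tries/wait parameters are made-up names for illustration) is to retry a transient gateway error a few times before giving up, rather than letting the whole run die.

Code language: python

import time
import urllib2

def fetch_with_retry(url, max_tries=3, wait=5):
    #Retry transient gateway errors a few times before re-raising.
    for attempt in range(max_tries):
        try:
            return urllib2.urlopen(url)
        except urllib2.HTTPError as e:
            if e.code in (502, 503, 504) and attempt < max_tries - 1:
                time.sleep(wait)  #back off briefly, then try again
            else:
                raise

#In the loop above, 'response = urllib2.urlopen(url)' could then become:
#response = fetch_with_retry(url)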

2 Answers

Stack Overflow user

Accepted answer

Posted on 2014-03-23 18:29:43

To download files using multiple threads:

Code language: python
#!/usr/bin/env python
import shutil
from contextlib import closing
from multiprocessing.dummy import Pool # use threads
from urllib2 import urlopen

def generate_urls(some, params): #XXX pass whatever parameters you need
    for restart in range(*params):
        # ... generate url, filename
        yield url, filename

def download((url, filename)):
    try:
        with closing(urlopen(url)) as response, open(filename, 'wb') as file:
            shutil.copyfileobj(response, file)
    except Exception as e:
        return (url, filename), repr(e)
    else: # success
        return (url, filename), None

def main():
    pool = Pool(20) # at most 20 concurrent downloads
    urls = generate_urls(some, params)
    for (url, filename), error in pool.imap_unordered(download, urls):
        if error is not None:
           print("Can't download {url} to {filename}, "
                 "reason: {error}".format(**locals())

if __name__ == "__main__":
    main()
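
For the asker's case, generate_urls could be filled in along these lines. This is only a sketch: it reuses base, efetch, options, Count and retmax from the question's code, and the chunk_<retstart>.fasta filename scheme is invented for illustration (one output file per chunk avoids having threads append to the same file concurrently).

Code language: python

def generate_urls(count, retmax):
    #Yield one (url, filename) pair per efetch chunk, mirroring the
    #question's loop over retstart offsets.
    for retstart in xrange(0, count, retmax):
        position = '&retstart='+str(retstart)+'&retmax='+str(retmax)
        url = base + efetch + position + options
        filename = 'chunk_'+str(retstart)+'.fasta'  #hypothetical naming scheme
        yield url, filename

#main() would then build the work list with: urls = generate_urls(Count, retmax)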
Votes: 5

Stack Overflow user

Posted on 2014-03-23 02:43:55

You should use multithreading; it is the right approach for this kind of download task.

"these files take more than 10 seconds to download and I do not know how to handle stalling"

I don't think this will be a problem, because Python's multithreading will handle it; or rather, multithreading is exactly for this kind of I/O-bound work. While one thread is waiting for a download to finish, the CPU lets the other threads do their work.

In any case, you should at least try it and see what happens.
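
If stalled connections are the worry, one option (a sketch, not from either answer; download_with_timeout is an invented name and the 60-second value is arbitrary) is to pass urlopen's timeout argument so that a hung request raises an exception instead of blocking its worker thread indefinitely.

Code language: python

from urllib2 import urlopen

def download_with_timeout(url, filename, timeout=60):
    #The timeout argument (Python 2.6+) makes a stalled request fail fast
    #instead of tying up the thread forever; the caller can then retry it.
    response = urlopen(url, timeout=timeout)
    with open(filename, 'wb') as f:
        f.write(response.read())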

Votes: 0
Original link: https://stackoverflow.com/questions/22585819
