Hanging on a multiprocessing Queue

Asked by a Stack Overflow user on 2020-02-21 13:08:49
1 answer · 680 views · 0 followers · 0 votes
I am trying to use a multiprocessing.Queue to split a file into several smaller files. The code below usually works, but sometimes it hangs:

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/home/pedroq/miniconda3/envs/drax_annot/lib/python3.7/multiprocessing/util.py", line 265, in _run_finalizers
    finalizer()
  File "/home/pedroq/miniconda3/envs/drax_annot/lib/python3.7/multiprocessing/util.py", line 189, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/home/pedroq/miniconda3/envs/drax_annot/lib/python3.7/multiprocessing/queues.py", line 192, in _finalize_join
    thread.join()
  File "/home/pedroq/miniconda3/envs/drax_annot/lib/python3.7/threading.py", line 1032, in join
    self._wait_for_tstate_lock()
  File "/home/pedroq/miniconda3/envs/drax_annot/lib/python3.7/threading.py", line 1048, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt

I have no idea why. The data passed to the processes can be fairly large; could that be the problem? Here is the pseudocode I am using:

def generate_split_processes_to_run(self, protein_seqs, seq_chunks):
    c = 0
    for chunk in seq_chunks:
        self.queue.put([protein_seqs, chunk, c])
        c += 1

def sample_split_handler(self, protein_seqs, protein_seqs_groups, worker_count):
    # loading the queue
    self.generate_split_processes_to_run(protein_seqs, protein_seqs_groups)
    # spawning the processes
    processes = [Process(target=self.sample_split_worker, args=(self.queue,)) for _ in range(worker_count)]
    # starting the processes
    for process in processes:
        process.start()
    # joining processes
    for process in processes:
        process.join()
        print(processes)

def sample_split_worker(self, queue):
    while not queue.empty():
        seqs, chunk, chunk_number = queue.get()
        self.save_chunks(seqs, chunk, chunk_number)

def split_sample(self):
    seqs = self.read_file(self.target_path)
    seqs_keys = list(seqs.keys())
    worker_count = 7
    seq_chunks = chunk_generator(seqs_keys, 1000)
    self.sample_split_handler(seqs, seq_chunks, worker_count)

def save_chunks(self, seqs, chunk, chunk_number):
    with open(chunk_path, 'w+') as file:
        while chunk:
            seq_id = chunk.pop(0)
            chunk_str = 'something'
            file.write('>' + seq_id + '\n' + chunk_str + '\n')

When I print the process list, they all appear to have finished:

[<Process(Process-1, stopped)>, <Process(Process-2, stopped)>, <Process(Process-3, stopped)>, <Process(Process-4, stopped)>, <Process(Process-5, stopped)>, <Process(Process-6, stopped)>, <Process(Process-7, stopped)>]

I have used a Pool before, but this time I wanted to use a Queue. Any help is welcome!


1 Answer

Answered by a Stack Overflow user (accepted)
Posted on 2020-02-23 19:24:08

So apparently this is a "common" problem. Checking the queue with .empty() or .qsize() does not guarantee that the queue is really empty: with fast processes you can often get queue.empty() == True even when the queue is not empty.
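A minimal sketch of the race, using nothing beyond the standard library: multiprocessing.Queue.put() only hands the object to a background feeder thread, so empty() can still report True for a moment after a put():

import multiprocessing as mp

if __name__ == '__main__':
    q = mp.Queue()
    q.put('item')
    # put() hands the object to a background feeder thread; until that
    # thread flushes it through the underlying pipe, empty() may still
    # report True.
    print(q.empty())  # may print True or False depending on timing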

A couple of workarounds have been proposed:

1 - Add a timed sleep() so the queue has time to receive new items (a sketch of this option follows this list).

2 - Add a sentinel to the queue.
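A sketch of the first option, reusing the question's (hypothetical) method names; note that the sleep only shrinks the race window rather than removing it, which is why the sentinel is preferred:

import time

def sample_split_worker(self, queue):
    # give the feeder threads a moment to flush items into the pipe
    time.sleep(0.1)
    while not queue.empty():
        seqs, chunk, chunk_number = queue.get()
        self.save_chunks(seqs, chunk, chunk_number)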

Unlike the timer option, the sentinel guarantees that you always finish draining the queue. After inserting the items into the queue, add a sentinel, e.g. queue.put(None) (None for low memory consumption). Insert one None for each running process. This leads to something like this:

def sample_split_worker(self, queue):
    while True:
        # when the queue is finished, each process receives a None, breaking the cycle
        record = queue.get()
        if record is None:
            break
        seqs, chunk, chunk_number = record
        self.save_chunks(seqs, chunk, chunk_number)
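On the producer side, the sentinels go in right after the real work items. A minimal sketch reusing the question's (hypothetical) loader, now taking the worker count so it can append one None per worker:

def generate_split_processes_to_run(self, protein_seqs, seq_chunks, worker_count):
    # one work item per chunk
    for c, chunk in enumerate(seq_chunks):
        self.queue.put([protein_seqs, chunk, c])
    # one None sentinel per worker, so every worker's queue.get()
    # eventually returns None and its loop breaks
    for _ in range(worker_count):
        self.queue.put(None)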

Hope this helps.

Votes: 1
Original question on Stack Overflow: https://stackoverflow.com/questions/60339323