I want to stack many images using multiple processes. Each stack consists of 5 images, which means I have one flat list of images and sub-lists of images that should be merged:
img_lst = [01_A,01_B,01_C,01_D,01_E,02_A,02_B,02_C,02_D,02_E,03_A,03_B,03_C,03_D,03_E]
At the moment I call the function do_stacking(sub_lst) in a loop:
for sub_lst in img_lst:
    # example: do_stacking([01_A, 01_B, 01_C, 01_D, 01_E])
    do_stacking(sub_lst)

I want to speed this up with multiprocessing, but I don't know how to call the pool.map function:
if __name__ == '__main__':
    from multiprocessing import Pool
    # I store my lists in a file
    f_in = open(stacking_path + "stacks.txt", 'r')
    f_stack = f_in.readlines()
    for data in f_stack:
        data = data.strip()
        data = data.split('\t')
        # data is now my sub_lst
    # Not sure what to do here, set the sublist, f_stack?
    pool = Pool()
    pool.map(do_stacking, ???)
    pool.close()
    pool.join()

EDIT
I have a list of lists:
[
01_A,01_B,01_C,01_D,01_E
02_A,02_B,02_C,02_D,02_E
03_A,03_B,03_C,03_D,03_E
]
Each sub-list should be passed to a function called do_stacking(sub_list). I want to process only the sub-lists, not the whole list.
My question is how to handle the loop over the list (for x in img_lst)? Should I create a loop for each pool?
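One way to handle this, sketched under the assumption that do_stacking is a placeholder for the real stacking routine and the image names are plain strings: build the sub-lists of 5 first, then let pool.map distribute them across the workers. No explicit loop around the pool is needed; the single map call replaces the loop entirely.

```python
from multiprocessing import Pool

def do_stacking(sub_lst):
    # placeholder for the real image-stacking routine
    return "stacked " + ",".join(sub_lst)

def chunk(lst, size):
    # split a flat list into consecutive sub-lists of `size` elements
    return [lst[i:i + size] for i in range(0, len(lst), size)]

if __name__ == '__main__':
    img_lst = ["01_A", "01_B", "01_C", "01_D", "01_E",
               "02_A", "02_B", "02_C", "02_D", "02_E"]
    sub_lists = chunk(img_lst, 5)
    with Pool() as pool:
        # one map call replaces the whole loop; each worker gets one sub-list
        results = pool.map(do_stacking, sub_lists)
    print(results)
```

The `with Pool() as pool:` form closes and joins the pool automatically, so the explicit close()/join() calls from the question are not needed.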
Posted on 2016-09-19 11:07:28
Pool.map works like the built-in map function: it takes one element at a time from its second argument and passes it to the function given as its first argument.
if __name__ == '__main__':
    from multiprocessing import Pool
    # I store my lists in a file
    f_in = open(stacking_path + "stacks.txt", 'r')
    f_stack = f_in.readlines()
    img_list = []
    for data in f_stack:
        data = data.strip()
        data = data.split('\t')
        # data is now one sub-list
        img_list.append(data)
    print(img_list)  # check that img_list is right
    pool = Pool()
    pool.map(do_stacking, img_list)
    pool.close()
    pool.join()

https://stackoverflow.com/questions/39571222