I had always believed that, from a performance standpoint, there is no point in having more threads/processes than CPU cores. However, my Python example shows a different result.
import concurrent.futures
import random
import time

def doSomething(task_num):
    print("executing...", task_num)
    time.sleep(1)  # simulate a heavy operation that takes ~1 second
    return random.randint(1, 10) * random.randint(1, 500)  # use random results to avoid caching effects

def main():
    # Worker creation time is deliberately excluded from the measurement
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=60)
    start_time = time.time()
    for i in range(1, 100):  # execute 99 tasks
        executor.map(doSomething, [i, ])
    executor.shutdown(wait=True)
    print("--- %s seconds ---" % (time.time() - start_time))

if __name__ == '__main__':
    main()

Results:
1 worker   -- 100.28233647346497 seconds
2 workers  --  50.26122164726257 seconds
3 workers  --  33.32741022109985 seconds
4 workers  --  25.399883031845093 seconds
5 workers  --  20.434186220169067 seconds
10 workers --  10.903695344924927 seconds
50 workers --   6.363946914672852 seconds
60 workers --   4.819359302520752 seconds
How can it keep getting faster with more workers, when I only have 4 logical processors?
Here are my computer specs (tested on both Windows 8 and Ubuntu 14):
CPU: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz, sockets: 1, cores: 2, logical processors: 4
Posted on 2017-07-04 10:54:06
The reason is that sleep() uses only a negligible amount of CPU. In this case, it is a poor simulation of the actual work a thread would perform.
All sleep() really does is suspend the thread until the timer expires. While the thread is suspended, it consumes no CPU cycles at all.
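This is easy to check directly: a minimal sketch (variable names are my own) that compares wall-clock time against the CPU time the process actually consumed around a sleep() call. The wall clock advances by a full second while the CPU-time counter barely moves, confirming the thread is suspended rather than computing:

```python
import time

start_wall = time.perf_counter()  # wall-clock time
start_cpu = time.process_time()   # CPU time actually consumed by this process
time.sleep(1)
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print("wall: %.2fs, cpu: %.4fs" % (wall, cpu))  # cpu stays near zero
```

Since a sleeping task occupies no core, the scheduler can interleave far more such tasks than there are logical processors, which is exactly the scaling seen in the question's timings.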
Posted on 2017-07-04 11:43:25
I extended your example with a more intensive computation (matrix inversion). You will see what you expected: the computation time decreases down to the number of cores and increases afterwards (because of the cost of context switching).
import concurrent.futures
import random
import time
import numpy as np
import matplotlib.pyplot as plt

def doSomething(task_num):
    print("executing...", task_num)
    for i in range(100000):
        A = np.random.normal(0, 1, (1000, 1000))
        B = np.linalg.inv(A)  # matrix inversion lives in np.linalg; np.inv does not exist
    return random.randint(1, 10) * random.randint(1, 500)  # use random results to avoid caching effects

def measureTime(nWorkers: int):
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=nWorkers)
    start_time = time.time()
    for i in range(1, 40):  # execute 39 tasks
        executor.map(doSomething, [i, ])
    executor.shutdown(wait=True)
    return (time.time() - start_time)

def main():
    # Worker creation time is deliberately excluded from the measurement
    maxWorkers = 20
    dT = np.zeros(maxWorkers)
    for i in range(maxWorkers):
        dT[i] = measureTime(i + 1)
        print("--- %s seconds ---" % dT[i])
    plt.plot(np.linspace(1, maxWorkers, maxWorkers), dT)
    plt.show()

if __name__ == '__main__':
    main()

https://stackoverflow.com/questions/44903970
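As a side note, both snippets above call executor.map() once per task with a one-element list; map() accepts a whole iterable, which is the more idiomatic form. A minimal sketch of that pattern (function names are my own), using a ThreadPoolExecutor since threads are cheaper than processes for work that mostly waits:

```python
import concurrent.futures
import time

def do_something(task_num):
    time.sleep(0.1)  # stands in for an I/O-style wait, like the sleep in the question
    return task_num * 2

def run_tasks(n_tasks=10, n_workers=20):
    # Pass the whole iterable to map() once instead of calling it per task;
    # the with-block also handles shutdown(wait=True) automatically.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as executor:
        return list(executor.map(do_something, range(n_tasks)))

if __name__ == '__main__':
    print(run_tasks())
```

With sleep-dominated tasks like these, the run time again shrinks with worker count well past the core count, for the reason given in the first answer.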