I am trying to implement a multiprocessing application in which worker processes access a shared data resource. I use a locking mechanism to make access to the shared resource safe. However, I am hitting an error. Surprisingly, if process 1 acquires the lock first, it services its request and the failure occurs in the next process that tries to acquire the lock. But if some process other than 1 tries to acquire the lock first, it fails on its very first attempt. I am new to Python and used the documentation to implement this, so I don't know whether I am missing some basic safety mechanism here. Any data points on why I am seeing this would be of great help.
Program:
#!/usr/bin/python
from multiprocessing import Process, Manager, Lock
import os
import Queue
import time

lock = Lock()

def launch_worker(d,l,index):
    global lock
    lock.acquire()
    d[index] = "new"
    print "in process"+str(index)
    print d
    lock.release()
    return None

def dispatcher():
    i=1
    d={}
    mp = Manager()
    d = mp.dict()
    d[1] = "a"
    d[2] = "b"
    d[3] = "c"
    d[4] = "d"
    d[5] = "e"
    l = mp.list(range(10))
    for i in range(4):
        p = Process(target=launch_worker, args=(d,l,i))
        i = i+1
        p.start()
    return None

if __name__ == '__main__':
    dispatcher()

Error when process 1 is served first:
in process0
{0: 'new', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
self.run()
File "/usr/lib/python2.6/multiprocessing/process.py", line 88, in run
self._target(*self._args, **self._kwargs)
File "dispatcher.py", line 10, in launch_worker
d[index] = "new"
File "<string>", line 2, in __setitem__
File "/usr/lib/python2.6/multiprocessing/managers.py", line 722, in _callmethod
self._connect()
File "/usr/lib/python2.6/multiprocessing/managers.py", line 709, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 143, in Client
c = SocketClient(address)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 263, in SocketClient
s.connect(address)
File "<string>", line 1, in connect
error: [Errno 2] No such file or directory

Error when process 2 is served first:
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
self.run()
File "/usr/lib/python2.6/multiprocessing/process.py", line 88, in run
self._target(*self._args, **self._kwargs)
File "dispatcher.py", line 10, in launch_worker
d[index] = "new"
File "<string>", line 2, in __setitem__
File "/usr/lib/python2.6/multiprocessing/managers.py", line 722, in _callmethod
self._connect()
File "/usr/lib/python2.6/multiprocessing/managers.py", line 709, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 150, in Client
deliver_challenge(c, authkey)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 373, in deliver_challenge
response = connection.recv_bytes(256) # reject large message
IOError: [Errno 104] Connection reset by peer

Posted on 2014-01-29 02:15:57
The dict your workers modify is a shared object managed by the dispatcher process; any modification a worker makes to that object requires it to communicate with the dispatcher process. The errors you see come from the fact that your dispatcher does not wait for the worker processes after starting them; it exits too quickly, so the manager may no longer exist when the workers need to talk to it.

The first worker or two that try to update the shared dict may succeed, because the process holding the Manager instance may still be alive when they make their modification (for example, it may still be busy creating further workers). That is why you see some successful output in your example. But the managing process soon exits, and the next worker that attempts a modification fails. (The error messages you see are typical of failed interprocess communication attempts; if you ran the program a few more times, you would probably also see EOF errors.)

What you need to do is call the join method on the Process objects, to wait for each of them to exit. The following modification of dispatcher shows the basic idea:
def dispatcher():
    mp = Manager()
    d = mp.dict()
    d[1] = "a"
    d[2] = "b"
    d[3] = "c"
    d[4] = "d"
    d[5] = "e"
    l = mp.list(range(10))
    procs = []
    for i in range(4):
        p = Process(target=launch_worker, args=(d,l,i))
        procs.append(p)
        p.start()
    for p in procs:
        p.join()

https://stackoverflow.com/questions/21420413
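As an aside, the question's module-level `Lock()` only reaches the workers because the children are forked; on platforms that spawn fresh interpreters (e.g. Windows), each child would get its own independent lock. A more portable sketch (Python 3 syntax, using a lock created by the Manager and passed to each worker explicitly, rather than the question's global) might look like:

```python
from multiprocessing import Process, Manager

def launch_worker(d, lock, index):
    # Hold the lock while touching the shared dict
    with lock:
        d[index] = "new"
        print("in process%d" % index)

def dispatcher():
    mp = Manager()
    d = mp.dict({1: "a", 2: "b", 3: "c", 4: "d", 5: "e"})
    lock = mp.Lock()  # created by the manager, shared via its proxy
    procs = [Process(target=launch_worker, args=(d, lock, i))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()      # keep the manager alive until all workers finish
    return dict(d)    # snapshot before the manager shuts down

if __name__ == '__main__':
    print(dispatcher())
```

Joining before the dispatcher returns is what prevents the `[Errno 2]` / `Connection reset by peer` errors: the Manager's server process stays up for as long as any worker may still need it.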