Python 3.6+: Nested multiprocessing managers cause FileNotFoundError
So I'm trying to use a multiprocessing Manager on a dict of dicts. This was my initial try:
from multiprocessing import Process, Manager

def task(stat):
    test['z'] += 1
    test['y']['Y0'] += 5

if __name__ == '__main__':
    test = Manager().dict({'x': {'X0': 10, 'X1': 20}, 'y': {'Y0': 0, 'Y1': 0}, 'z': 0})
    p = Process(target=task, args=(test,))
    p.start()
    p.join()
    print(test)
Of course, when I run this the output is not what I expect: z updates correctly while y is unchanged! This is the output:
{'x': {'X0': 10, 'X1': 20}, 'y': {'Y0': 0, 'Y1': 0}, 'z': 1}
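For what it's worth, the underlying behavior here is that test['y'] hands back a plain-dict copy from the manager process, so the in-place update never leaves the child, while test['z'] += 1 becomes a __setitem__ call on the proxy and does propagate. A minimal sketch of the read-modify-reassign workaround that follows from this (my illustration, not from the original post):

from multiprocessing import Process, Manager

def task(test):
    test['z'] += 1      # __setitem__ goes through the proxy, so this propagates
    inner = test['y']   # __getitem__ returns a plain-dict *copy*, not a view
    inner['Y0'] += 5    # this mutates only the local copy...
    test['y'] = inner   # ...so reassign the key to push it back through the proxy

if __name__ == '__main__':
    with Manager() as m:
        test = m.dict({'y': {'Y0': 0, 'Y1': 0}, 'z': 0})
        p = Process(target=task, args=(test,))
        p.start()
        p.join()
        print(dict(test))  # {'y': {'Y0': 5, 'Y1': 0}, 'z': 1}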
Then I googled and found an explanation here: apparently the nested dicts also have to be Manager().dict()s rather than normal Python dicts (possible since Python 3.6). So I did the following:
from multiprocessing import Process, Manager

def task(stat):
    test['z'] += 1
    test['y']['Y0'] += 5

if __name__ == '__main__':
    test = Manager().dict({'x': Manager().dict({'X0': 10, 'X1': 20}),
                           'y': Manager().dict({'Y0': 0, 'Y1': 0}),
                           'z': 0})
    p = Process(target=task, args=(test,))
    p.start()
    p.join()
    print(test)
    print(test['y'])
But instead of it working properly, I get these inexplicable errors, split into three parts for clarity: the first part corresponds to the test['y']['Y0'] += 5, the second is simply the print(test), and the last is the output of print(test['y']):
Process Process-4:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "shit.py", line 5, in task
test['y']['Y0'] += 5
File "<string>", line 2, in __getitem__
File "/usr/lib/python3.7/multiprocessing/managers.py", line 796, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.7/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/usr/lib/python3.7/multiprocessing/managers.py", line 920, in RebuildProxy
return func(token, serializer, incref=incref, **kwds)
File "/usr/lib/python3.7/multiprocessing/managers.py", line 770, in __init__
self._incref()
File "/usr/lib/python3.7/multiprocessing/managers.py", line 824, in _incref
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 619, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
{'x': <DictProxy object, typeid 'dict' at 0x7f01de2c5860>, 'y': <DictProxy object, typeid 'dict' at 0x7f01de2c5898>, 'z': 1}
Traceback (most recent call last):
File "test.py", line 16, in <module>
print(test['y'])
File "<string>", line 2, in __getitem__
File "/usr/lib/python3.7/multiprocessing/managers.py", line 796, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.7/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/usr/lib/python3.7/multiprocessing/managers.py", line 920, in RebuildProxy
return func(token, serializer, incref=incref, **kwds)
File "/usr/lib/python3.7/multiprocessing/managers.py", line 770, in __init__
self._incref()
File "/usr/lib/python3.7/multiprocessing/managers.py", line 824, in _incref
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 619, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
I'm not sure why this happens. The inner dicts evidently get created (as shown by the second part of the output), but for some reason they cannot be read or written at all! Why does this happen?
Extra: if I run the same Python code through a Python console (rather than a script), the error changes from FileNotFoundError to ConnectionRefusedError, but with the exact same traceback!
Solution 1:[1]
With Manager() in Manager().dict() you are starting a new manager process each time, so you are really nesting managers (like the title says), and this is not the way it's supposed to be done. Since nothing keeps a reference to those inner managers, they get garbage-collected as soon as the expression finishes; their server processes shut down, and the stored proxies are left pointing at sockets that no longer exist, hence the FileNotFoundError. What you need to do instead is instantiate one Manager and then create dictionaries on that manager instance:
from multiprocessing import Process, Manager
from multiprocessing.managers import DictProxy

def task(test):  # use parameter `test`, else you rely on forking
    test['z'] += 1
    test['y']['Y0'] += 5

if __name__ == '__main__':
    with Manager() as m:
        test = m.dict({'x': m.dict({'X0': 10, 'X1': 20}),
                       'y': m.dict({'Y0': 0, 'Y1': 0}),
                       'z': 0})
        p = Process(target=task, args=(test,))
        p.start()
        p.join()
        print(test)
        print(test['y'])
        # convert to normal dict before closing manager for persistence
        # in parent or for printing dict behind proxies
        d = {k: dict(v) if isinstance(v, DictProxy) else v
             for k, v in test.items()}
    print(d)  # Manager already closed here
Example Output:
{'x': <DictProxy object, typeid 'dict' at 0x7f98cdaaa588>, 'y': <DictProxy object, typeid 'dict' at 0x7f98cda99c50>, 'z': 1}
{'Y0': 5, 'Y1': 0}
{'x': {'X0': 10, 'X1': 20}, 'y': {'Y0': 5, 'Y1': 0}, 'z': 1}
Process finished with exit code 0
You would also need to use a Manager.Lock in case you plan to modify manager objects from multiple processes, since an operation like += on a proxy is a separate read and write and hence not atomic.
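A minimal sketch of what that could look like (the locking here is my addition, not part of the original answer): the manager hands out a shareable lock via m.Lock(), and each process holds it across the whole read-modify-write:

from multiprocessing import Process, Manager

def task(test, lock):
    with lock:          # hold the lock across the whole read-modify-write
        test['z'] += 1  # `+=` on a proxy is a get followed by a set

if __name__ == '__main__':
    with Manager() as m:
        test = m.dict({'z': 0})
        lock = m.Lock()  # a lock proxy that can be shared between processes
        procs = [Process(target=task, args=(test, lock)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(test['z'])  # reliably 4, because the increments never interleave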
Solution 2:[2]
I got this issue when running the Python code line by line in PyCharm. It was resolved by running the whole script with the green arrow next to:
if __name__ == '__main__':
For some reason Python doesn't like running pools line by line; I'm not sure why, though.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Darkonaut |
| Solution 2 | Wev |