I am using PyTorch to train my model in a distributed setup. I have two nodes, each with two GPUs. On one node I run:

python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 0 --dist-url tcp://192.168.**.***:8000

and on the other:

python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 1 --dist-url tcp://192.168.**.***:8000

However, the second node fails with a RuntimeError:
global_rank 3 machine_rank 1 num_gpus_per_machine 2 local_rank 1
global_rank 2 machine_rank 1 num_gpus_per_machine 2 local_rank 0
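Those global_rank values look correct to me: each machine contributes num_gpus_per_machine consecutive ranks, so both workers on this node compute the expected ranks before the failure. A minimal sketch of that rank arithmetic (my own illustration, not detectron2's actual code):

```python
def global_rank(machine_rank: int, num_gpus_per_machine: int, local_rank: int) -> int:
    # Each machine owns a consecutive block of ranks;
    # e.g. machine 1 with 2 GPUs owns global ranks 2 and 3.
    return machine_rank * num_gpus_per_machine + local_rank

# The two workers on the failing node (machine-rank 1):
print(global_rank(1, 2, 0))  # 2
print(global_rank(1, 2, 1))  # 3
```

So the processes are numbered consistently with the log above; the error only shows up later, inside dist.barrier().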
Traceback (most recent call last):
File "train_net.py", line 109, in <module>
args=(args,),
File "/root/detectron2_repo/detectron2/engine/launch.py", line 49, in launch
daemon=False,
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/root/detectron2_repo/detectron2/engine/launch.py", line 72, in _distributed_worker
comm.synchronize()
File "/root/detectron2_repo/detectron2/utils/comm.py", line 79, in synchronize
dist.barrier()
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8

If I change --machine-rank 1 to --machine-rank 0, no error is reported, but then the training is not actually distributed. Does anyone know why this error occurs?
https://stackoverflow.com/questions/61075390