
Tensorflow 2.0.0 MirroredStrategy NCCL issue

Stack Overflow user
Asked on 2020-02-07 02:10:02
4 answers · 4.7K views · 0 followers · 1 vote

I am trying to use tf.distribute.MirroredStrategy() for multi-GPU training.

After several attempts to apply it to my custom code, it raised errors about NcclAllReduce.

So I copied the MNIST tutorial with tf.distribute from the TensorFlow site, and running it produces the same error. The log and my environment are below.

My env: Sys.Platform: Windows 10. Python: 3.7.6. Numpy

INFO:tensorflow:batch_all_reduce: 8 all-reduces with algorithm = nccl, num_packs = 1, agg_small_grads_max_bytes = 0 and agg_small_grads_max_group = 10
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:batch_all_reduce: 8 all-reduces with algorithm = nccl, num_packs = 1, agg_small_grads_max_bytes = 0 and agg_small_grads_max_group = 10
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-12-a7ead7e91ea5> in <module>
     19     num_batches = 0
     20     for x in train_dist_dataset:
---> 21       total_loss += distributed_train_step(x)
     22       num_batches += 1
     23     train_loss = total_loss / num_batches

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
    455 
    456     tracing_count = self._get_tracing_count()
--> 457     result = self._call(*args, **kwds)
    458     if tracing_count == self._get_tracing_count():
    459       self._call_counter.called_without_tracing()

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\def_function.py in _call(self, *args, **kwds)
    518         # Lifting succeeded, so variables are initialized and we can run the
    519         # stateless function.
--> 520         return self._stateless_fn(*args, **kwds)
    521     else:
    522       canon_args, canon_kwds = \

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\function.py in __call__(self, *args, **kwargs)
   1821     """Calls a graph function specialized to the inputs."""
   1822     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 1823     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1824 
   1825   @property

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\function.py in _filtered_call(self, args, kwargs)
   1139          if isinstance(t, (ops.Tensor,
   1140                            resource_variable_ops.BaseResourceVariable))),
-> 1141         self.captured_inputs)
   1142 
   1143   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1222     if executing_eagerly:
   1223       flat_outputs = forward_function.call(
-> 1224           ctx, args, cancellation_manager=cancellation_manager)
   1225     else:
   1226       gradient_name = self._delayed_rewrite_functions.register()

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\function.py in call(self, ctx, args, cancellation_manager)
    509               inputs=args,
    510               attrs=("executor_type", executor_type, "config_proto", config),
--> 511               ctx=ctx)
    512         else:
    513           outputs = execute.execute_with_cancellation(

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\tensorflow_core\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     keras_symbolic_tensors = [

~\Anaconda3\envs\tf-MSTO-DL\lib\site-packages\six.py in raise_from(value, from_value)

InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' used by {{node Adam/NcclAllReduce}}with these attrs: [reduction="sum", shared_name="c1", T=DT_FLOAT, num_devices=2]
Registered devices: [CPU, GPU]
Registered kernels:
  <no registered kernels>

     [[Adam/NcclAllReduce]] [Op:__inference_distributed_train_step_1755]
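
For context, the step named in the traceback looks roughly like this in the tutorial (a from-memory sketch, not the exact file; `train_step` is the tutorial's per-replica step and is only referenced by name here; in TF 2.0 the per-replica call is strategy.experimental_run_v2, later renamed strategy.run):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # defaults to NCCL all-reduce, per the log above

@tf.function
def distributed_train_step(dataset_inputs):
    # Run train_step on every replica; the gradient all-reduce the error
    # points at (Adam/NcclAllReduce) happens when the optimizer applies
    # gradients across the two GPUs.
    per_replica_losses = strategy.experimental_run_v2(
        train_step, args=(dataset_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM,
                           per_replica_losses, axis=None)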

4 Answers

Stack Overflow user

Posted on 2020-03-03 18:54:51

The NCCL drivers do not work on Windows; as far as I know, they are Linux-only. I have read that there may be an NCCL equivalent for Windows, but I have not been able to find one myself. If you want to keep using Windows, you can try HierarchicalCopyAllReduce instead:

import tensorflow as tf

mirrored_strategy = tf.distribute.MirroredStrategy(
    devices=["/job:localhost/replica:0/task:0/device:GPU:0", "/job:localhost/replica:0/task:0/device:GPU:1"],
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
with mirrored_strategy.scope():
    ...  # build the model and optimizer under this scope
Votes: 1

Stack Overflow user

Posted on 2021-08-25 05:51:32

It seems there are several options for cross_device_ops.

strategy = tf.distribute.MirroredStrategy(
     cross_device_ops=tf.distribute.NcclAllReduce())

This one raises NCCL errors, depending on your architecture and configuration.

This one is tailored to the NVIDIA DGX-1 architecture and may perform poorly on others:

strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

This one should work:

strategy = tf.distribute.MirroredStrategy(
     cross_device_ops=tf.distribute.ReductionToOneDevice())

So it is advisable to try the different options.
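
One way to act on that advice is to pick the reduction per platform (a minimal sketch of my own, not from the answer; it encodes the fact from the accepted answer that NCCL kernels only ship in Linux builds of TensorFlow):

import sys
import tensorflow as tf

# NCCL kernels are only registered in Linux builds of TensorFlow,
# so fall back to a portable reduction everywhere else.
if sys.platform.startswith("linux"):
    cross_device_ops = tf.distribute.NcclAllReduce()
else:
    cross_device_ops = tf.distribute.ReductionToOneDevice()

strategy = tf.distribute.MirroredStrategy(cross_device_ops=cross_device_ops)
print("Replicas in sync:", strategy.num_replicas_in_sync)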

See Tensorflow Multi-GPU - NCCL.

Votes: 1

Stack Overflow user

Posted on 2020-06-24 21:18:39

I had this problem with TensorFlow 2.2.0 when using tf.distribute.MirroredStrategy, and found I was running the wrong CUDA version: I was on 10.0 when it should have been CUDA 10.1.

I think it is worth checking which versions you are running and whether they are the right combination.

For tensorflow 2.0.0 you should use cuda 10.0. See here for the tested combinations of the two libraries.
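
A quick way to check what your install was built with (a minimal sketch; note that tf.config.list_physical_devices only exists from TF 2.1, so the experimental alias is used here for TF 2.0):

import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
# TF 2.0 only has the experimental alias; TF 2.1+ also exposes
# tf.config.list_physical_devices.
print("GPUs:", tf.config.experimental.list_physical_devices("GPU"))

The installed CUDA toolkit version itself can be checked from a shell with nvcc --version.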

Votes: 0
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/60106201