
Unable to use embedding layer with tf.distribute.MirroredStrategy

Stack Overflow user
Asked on 2021-03-18 09:38:17
1 answer · 357 views · 0 followers · score 1

I am trying to parallelize a model that uses an embedding layer, on TensorFlow 2.4.1, but it fails with the following error:

InvalidArgumentError: Cannot assign a device for operation sequential/emb_layer/embedding_lookup/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node sequential/emb_layer/embedding_lookup/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0, /job:localhost/replica:0/task:0/device:GPU:0]. 
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
GatherV2: GPU CPU XLA_CPU XLA_GPU 
Cast: GPU CPU XLA_CPU XLA_GPU 
Const: GPU CPU XLA_CPU XLA_GPU 
ResourceSparseApplyAdagradV2: CPU 
_Arg: GPU CPU XLA_CPU XLA_GPU 
ReadVariableOp: GPU CPU XLA_CPU XLA_GPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  sequential_emb_layer_embedding_lookup_readvariableop_resource (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  adagrad_adagrad_update_update_0_resourcesparseapplyadagradv2_accum (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  sequential/emb_layer/embedding_lookup/ReadVariableOp (ReadVariableOp) 
  sequential/emb_layer/embedding_lookup/axis (Const) 
  sequential/emb_layer/embedding_lookup (GatherV2) 
  gradient_tape/sequential/emb_layer/embedding_lookup/Shape (Const) 
  gradient_tape/sequential/emb_layer/embedding_lookup/Cast (Cast) 
  Adagrad/Adagrad/update/update_0/ResourceSparseApplyAdagradV2 (ResourceSparseApplyAdagradV2) /job:localhost/replica:0/task:0/device:GPU:0

     [[{{node sequential/emb_layer/embedding_lookup/ReadVariableOp}}]] [Op:__inference_train_function_631]

I reduced the model to a bare-bones version to make it reproducible:

import tensorflow as tf

central_storage_strategy = tf.distribute.MirroredStrategy()
with central_storage_strategy.scope():
    user_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10, 2, name="emb_layer")
    ])
user_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1), loss="mse")
user_model.fit([1], [[1, 2]], epochs=3)

Any help would be greatly appreciated. Thanks!


1 Answer

Stack Overflow user

Accepted answer

Posted on 2021-03-22 05:17:23

So I finally figured out the problem, in case anyone else is looking for the answer.

As of now, TensorFlow does not have a complete GPU implementation of the Adagrad optimizer. The ResourceSparseApplyAdagradV2 op, an integral part of the embedding layer's sparse gradient update, has only a CPU kernel (note supported_device_types_=[CPU] in the colocation debug info above) and therefore fails on the GPU. So Adagrad cannot be used with an embedding layer under data-parallel strategies. Using Adam or RMSprop works fine.
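
For reference, here is a minimal sketch of the workaround applied to the repro from the question, simply swapping Adagrad for Adam (the 0.1 learning rate is carried over unchanged; compile is moved inside the strategy scope so the optimizer's variables are also created under the strategy):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    user_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10, 2, name="emb_layer")
    ])
    # Adam's sparse update ops have GPU kernels, so the embedding lookup
    # and its gradient update can be colocated on the GPU without error.
    user_model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")

user_model.fit([1], [[1, 2]], epochs=3)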

Score 3
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/66688358