In the TensorFlow CIFAR10 example:
# Build the portion of the Graph calculating the losses. Note that we will
# assemble the total_loss using a custom function below.
_ = cifar10.loss(logits, labels)
# Assemble all of the losses for the current tower only.
losses = tf.get_collection('losses', scope)
# Calculate the total loss for the current tower.
total_loss = tf.add_n(losses, name='total_loss')
# Attach a scalar summary to all individual losses and the total loss; do the
# same for the averaged version of the losses.
for l in losses + [total_loss]:
  # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
  # session. This helps the clarity of presentation on tensorboard.
  loss_name = re.sub('%s_[0-9]*/' % cifar10.TOWER_NAME, '', l.op.name)
  tf.contrib.deprecated.scalar_summary(loss_name, l)
return total_loss

Why isn't the loss returned by cifar10.loss used? Instead, the losses are computed with tf.get_collection('losses', scope).
Posted on 2018-07-06 06:40:18
This code returns the total loss so far for the tower (a single GPU). I think this is because they are not interested in storing the loss of each individual batch, but rather the total loss over all batches.

tf.get_collection retrieves all of the tower's losses using the tower's scope. We can then use this to compute the average loss over all batches in one tower, or across both towers.
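The mechanism can be illustrated with a minimal pure-Python sketch. The `add_to_collection`/`get_collection` helpers below are hypothetical stand-ins for TensorFlow's graph collections, and the loss names and values are made up for illustration; the point is only how a scope prefix filters a shared named collection down to one tower's entries:

```python
# Hypothetical stand-ins for tf.add_to_collection / tf.get_collection.
# Items are (name, value) pairs; names mimic the CIFAR10 tower naming.
_collections = {}

def add_to_collection(name, item):
    _collections.setdefault(name, []).append(item)

def get_collection(name, scope=None):
    # Like tf.get_collection: optionally keep only items whose name
    # matches the given scope prefix.
    items = _collections.get(name, [])
    if scope is None:
        return list(items)
    return [i for i in items if i[0].startswith(scope)]

# Each tower contributes several loss terms to the shared 'losses'
# collection (values here are arbitrary illustrative numbers).
add_to_collection('losses', ('tower_0/cross_entropy', 1.25))
add_to_collection('losses', ('tower_0/weight_decay', 0.5))
add_to_collection('losses', ('tower_1/cross_entropy', 1.5))

# Retrieving with the tower's scope returns only that tower's losses,
# which can then be summed into the tower's total loss.
tower0 = get_collection('losses', scope='tower_0')
total_loss = sum(v for _, v in tower0)
print(total_loss)  # prints 1.75
```

This mirrors why the example sums `tf.get_collection('losses', scope)` with `tf.add_n` rather than using a single returned value: the collection holds every loss term registered under that tower's scope.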
https://stackoverflow.com/questions/49048171