I'm using the convolutional LSTM cell from TensorFlow 1.5 inside an Estimator's model_fn, and I want to add L2 regularization to that cell. I tried the following code:
def myModelFn(features, labels, mode, params):
    trainingFlag = (mode == tf.estimator.ModeKeys.TRAIN)
    inferFlag = (mode == tf.estimator.ModeKeys.PREDICT)
    dataShape = tuple(params['dataShape'])
    XSize, YSize, ZSize = dataShape[0], dataShape[1], dataShape[2]
    dataTensor = features['data']
    dataTensor = tf.reshape(dataTensor, [-1, XSize, YSize, ZSize, 1])
    labelTensor = tf.cast(labels['labels'], tf.int64)
    with tf.variable_scope('myModel'):
        normalizedData = tf.layers.batch_normalization(dataTensor,
                                                       center=True,
                                                       scale=True,
                                                       training=trainingFlag,
                                                       name='bnInput')
        with tf.variable_scope('module1'):
            conv1 = tf.layers.conv3d(normalizedData,
                                     filters=3,
                                     kernel_size=(5, 5, 5),
                                     kernel_regularizer=tf.nn.l2_loss,
                                     name='conv3d_1')
            max1 = tf.layers.max_pooling3d(conv1,
                                           pool_size=(5, 5, 5),
                                           strides=(2, 2, 2),
name = 'max_1')下面是我创建convLSTM2D的地方,我想在其中添加L2正则化:
        with tf.variable_scope('module2'):
            lstmInput = tf.transpose(max1, [0, 3, 1, 2, 4], 'lstmInput')
            lstmInputShape = lstmInput.shape.as_list()[2:]
            lstmInput = tf.unstack(lstmInput, axis=1)
            convLSTMNet = tf.contrib.rnn.ConvLSTMCell(conv_ndims=2,
                                                      input_shape=lstmInputShape,
                                                      output_channels=3,
                                                      kernel_shape=[3, 3],
                                                      use_bias=True,
                                                      name='lstmConv2d')
            lstmKernelVars = [var for var in tf.trainable_variables(
                convLSTMNet.scope_name) if 'kernel' in var.name]
            tf.contrib.layers.apply_regularization(tf.nn.l2_loss,
                                                   lstmKernelVars)
            lstmOutput, _ = tf.nn.static_rnn(convLSTMNet, lstmInput,
                                             dtype=tf.float32)[-1]
            module2Output = tf.layers.flatten(lstmOutput, name='module2Output')
        with tf.variable_scope('module3'):
            dense1 = tf.layers.dense(module2Output, 150, name='dense1')
            dropout1 = tf.layers.dropout(dense1, 0.6, training=trainingFlag,
                                         name='dropout1')
            dense2 = tf.layers.dense(dropout1, 50, name='dense2')
            dropout2 = tf.layers.dropout(dense2, 0.5, training=trainingFlag,
                                         name='dropout2')
            logits = tf.layers.dense(dropout2, 4, name='logits')
    outputLabel = tf.nn.softmax(logits, name='myLabel')
    predictions = {'prediction': tf.cast(tf.argmax(outputLabel, 1), tf.int64)}
    if not inferFlag:
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                           labels=labelTensor),
            name='myLoss')
        l2Loss = tf.reduce_sum(
            tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), name='l2Loss')
        fullLoss = tf.add(loss, l2Loss)
        tf.summary.scalar('fullLoss', fullLoss)
    if trainingFlag:
        globalStep = tf.train.get_global_step()
        optimizer = tf.train.AdamOptimizer()
        updateOps = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(updateOps):
            trainOp = optimizer.minimize(
                fullLoss, global_step=globalStep)
    else:
        trainOp = None
    if not inferFlag:
        evalOp = tf.metrics.accuracy(labelTensor, predictions['prediction'])
    return tf.estimator.EstimatorSpec(mode, predictions, fullLoss, trainOp,
                                      evalOp)

I get the following error message:
ValueError: No name available for layer scope because the layer "lstmConv2d" has not been used yet. The scope name is determined the first time the layer instance is called. You must therefore call the layer before querying scope_name.
If I replace the ConvLSTMCell/static_rnn with any other kind of tf.layers layer (using kernel_regularizer=tf.nn.l2_loss), it works fine.
Posted 2018-02-14 13:40:21
I can't reproduce that exact error message. But this use of the trainable variables is incorrect, because that collection contains every variable created inside the cell, including the biases (since use_bias=True). Regularizing biases is usually not a good idea; see, for example, the discussions on CV.SE.
To get just the kernels, do the following:

variables = [var for var in tf.trainable_variables(convLSTMNet.scope_name)
             if 'kernel' in var.name]

... it grabs everything from the cell's scope (which in your case is 'rnn/lstmConv2d') and filters out only the kernels.
If no variables come back, the first thing to do is inspect the whole tf.trainable_variables() collection; next, make sure the cell has actually been instantiated with lstmInput, i.e., check the actual length of lstmInput. This will help you narrow the problem down (which is most likely outside the provided snippet).
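The ordering is the crux of the ValueError in the question: a TF1 layer's scope_name is only assigned the first time the layer is called, and tf.nn.static_rnn is what first calls the cell. The following is a minimal pure-Python stand-in for that lazy-naming behavior (LazyCell and its names are hypothetical, not TensorFlow itself), just to show why querying before the call fails:

```python
class LazyCell:
    """Mimics a TF1 layer whose scope name is fixed only on first call."""

    def __init__(self, name):
        self._name = name
        self._scope_name = None  # unknown until the cell is first called

    @property
    def scope_name(self):
        if self._scope_name is None:
            raise ValueError(
                'No name available for layer scope because the layer "%s" '
                'has not been used yet.' % self._name)
        return self._scope_name

    def __call__(self, inputs):
        # The enclosing scope ('rnn/' here) is only known at call time.
        if self._scope_name is None:
            self._scope_name = 'rnn/' + self._name
        return inputs


cell = LazyCell('lstmConv2d')
try:
    cell.scope_name              # querying before the first call fails
except ValueError as err:
    print('before call:', err)

cell([1, 2, 3])                  # "running" the cell fixes its scope
print('after call:', cell.scope_name)
```

Translated back to the question's model_fn, one way to avoid the error (assuming the rest of the graph is unchanged) is to move the lstmKernelVars / apply_regularization lines to after the tf.nn.static_rnn call, so that convLSTMNet has been invoked before scope_name is queried.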
Posted 2018-05-28 13:17:31
Your convLSTMNet already has the variable name you specified, lstmConv2d.

So just call:

myvariables = [var for var in tf.trainable_variables()
               if var.name.split('/')[0] in ['lstmConv2d']]

to get all of the variables in convLSTMNet, and then you can filter for the kernels:

myvariables = [var for var in myvariables if 'kernel' in var.name]

https://stackoverflow.com/questions/48744890
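The name-based filters in both answers are plain string matching on TF1 variable names of the form scope/.../kernel:0, so they can be sketched without TensorFlow. The variable names below are made up for illustration (real ones come from tf.trainable_variables()); note that per the first answer, a cell run inside static_rnn ends up under the 'rnn/' scope, so the top-level prefix to match may be 'rnn' rather than 'lstmConv2d':

```python
# Hypothetical variable names, shaped like TF1's "scope/name:index" strings.
all_var_names = [
    'rnn/lstmConv2d/kernel:0',
    'rnn/lstmConv2d/biases:0',
    'myModel/module1/conv3d_1/kernel:0',
    'myModel/module1/conv3d_1/bias:0',
]

# First filter: keep variables whose top-level scope matches the cell.
in_cell = [n for n in all_var_names if n.split('/')[0] in ['rnn']]

# Second filter: keep only the kernels, so biases are not regularized.
kernels = [n for n in in_cell if 'kernel' in n]

print(kernels)  # only the ConvLSTM kernel survives both filters
```

The two-step filter keeps the kernel/bias distinction explicit, which matters because the first answer advises against regularizing biases.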