I'm trying to improve the performance of my ResNeXt implementation in Tensorflow. David mentioned a potential improvement on Twitter. I'd like to apply this to my implementation -- how does reshape+sum fit into this?
# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim):
    # cardinality and is_training are assumed to be defined in the enclosing scope
    x = tf.layers.conv2d(x, filters=64, kernel_size=1, strides=strides)
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    w = tf.get_variable(name='depthwise_filter', shape=[3, 3, 64, cardinality])
    x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
    x = tf.layers.batch_normalization(x, training=is_training)
    return tf.nn.relu(x)

EDIT: I thought this implementation was correct and that I only needed to add a few ops to improve performance. Going back over David's comment, though, depthwise+reshape+sum is not simply a depthwise operation plus a few extra ops but a different approach altogether; the code above does not compute an equivalent of the figure 3(c) version of the bottleneck block.
Posted on 2019-03-30 01:23:13
Here is how I implemented it:
class LayerCardinalConv(object):
"""Aggregated Residual Transformations for Deep Neural Networks https://arxiv.org/abs/1611.05431"""
def __init__(self, name, w, nin, card, use_bias=True, init='he'):
self.group = nin // card
with tf.name_scope(name):
self.conv = tf.Variable(weight_init(nin, self.group, [*w, nin, self.group], init), name='conv')
self.bias = tf.Variable(tf.zeros([nin]), name='bias') if use_bias else 0
def __call__(self, vin, train):
s = tf.shape(vin)
vout = tf.nn.depthwise_conv2d(vin, self.conv, strides=[1] * 4, padding='SAME')
vout = tf.reshape(vout, [s[0], s[1], s[2], self.group, s[3]])
vout = tf.reduce_sum(vout, 3)
return vout + self.bias备注:
希望能帮上忙。
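For reference, a minimal usage sketch of this layer; `weight_init` isn't shown above, so a simple He-style initializer is assumed as a stand-in here:

import numpy as np
import tensorflow as tf

def weight_init(nin, nout, shape, init='he'):
    # hypothetical stand-in for the answer's weight_init helper (He-style scaling)
    fan_in = shape[0] * shape[1] * nin
    return tf.random_normal(shape, stddev=np.sqrt(2.0 / fan_in))

# 8-channel input with cardinality 2 -> 4 channels per group
layer = LayerCardinalConv('cardinal_conv', w=(3, 3), nin=8, card=2)
vin = tf.placeholder(tf.float32, [None, 32, 32, 8])
vout = layer(vin, train=True)  # same channel count as the input: [None, 32, 32, 8]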
Posted on 2018-03-01 02:11:50
Depthwise convolution and grouped convolution are very similar. A grouped convolution applies a set of independent kernels over groups of channels, whereas a depthwise convolution applies an independent set of kernels to each input channel. The key point is that in both cases the weights connecting a given input channel to a given output channel are not shared with any other input-output channel pair. As a result we can apply (just as he said!) a reshape and a sum to emulate a grouped convolution with a depthwise convolution. This approach comes at the cost of memory, since we have to allocate a tensor several times larger to hold the intermediate computation.

A depthwise convolution maps a single input channel to several output channels, while a grouped convolution maps a block of input channels to a block of output channels. If we want to apply a grouped convolution with 32 groups to a 128-channel input, we can instead apply a depthwise convolution with a channel multiplier of 128/32 = 4. The output tensor represents a decomposed version of the equivalent grouped convolution output: the first 16 channels of the depthwise output correspond to the first 4 channels of the grouped convolution output. We can reshape those channels into a 4x4 block and sum along one of the new axes to obtain the equivalent of the grouped convolution output. Across all output channels, we simply reshape by adding two new axes of size 4, sum, and reshape back to 128 channels.
# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim, is_training):
    # cardinality is assumed to be defined in the enclosing scope
    input_channels = x.shape.as_list()[-1]
    bottleneck_depth = input_channels // 2
    x = tf.layers.conv2d(x, filters=bottleneck_depth, kernel_size=1, strides=strides)
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    group_size = bottleneck_depth // cardinality
    w = tf.get_variable(name='depthwise_filter', shape=[3, 3, bottleneck_depth, group_size])
    x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
    # reshape + sum aggregates the depthwise output into the grouped-convolution result
    depthwise_shape = x.shape.as_list()
    x = tf.reshape(x, depthwise_shape[:3] + [cardinality, group_size, group_size])
    x = tf.reduce_sum(x, axis=4)
    x = tf.reshape(x, depthwise_shape[:3] + [bottleneck_depth])
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
    x = tf.layers.batch_normalization(x, training=is_training)
    return tf.nn.relu(x)

EDIT: it turned out I had not represented the reshape/sum correctly. I have updated the code sample above to reflect what I now believe is the correct transformation. The older version was reducible to a depthwise convolution with a channel_multiplier of 1.
To better understand the difference, I'll illustrate the incorrect and the correct behaviour with numpy, fixing the weights at 1. We'll look at a simpler 8-channel input with two groups.
import numpy as np

input = np.arange(8)
# => [0, 1, 2, 3, 4, 5, 6, 7]
# the result of applying a depthwise convolution with a channel multiplier of 4 and weights fixed at 1
depthwise_output = np.repeat(input, 4)
# => [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, ..., 6, 6, 7, 7, 7, 7]

Incorrect transformation:
x = depthwise_output.reshape((8, 4))
# => [[0, 0, 0, 0],
# [1, 1, 1, 1],
# [2, 2, 2, 2],
# [3, 3, 3, 3],
# [4, 4, 4, 4],
# [5, 5, 5, 5],
# [6, 6, 6, 6],
# [7, 7, 7, 7]]
x = x.sum(axis=1)
# => [ 0, 4, 8, 12, 16, 20, 24, 28]

Correct transformation:
x = depthwise_output.reshape((2, 4, 4))
# => [[[0, 0, 0, 0],
# [1, 1, 1, 1],
# [2, 2, 2, 2],
# [3, 3, 3, 3]],
#
# [[4, 4, 4, 4],
# [5, 5, 5, 5],
# [6, 6, 6, 6],
# [7, 7, 7, 7]]]
x = x.sum(axis=1)
# => [[ 6, 6, 6, 6],
#     [22, 22, 22, 22]]
x = x.reshape((8,))
# => [ 6, 6, 6, 6, 22, 22, 22, 22]

https://stackoverflow.com/questions/48994369
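Finally, as a sanity check on the ordering, the depthwise+reshape+sum path can be compared numerically against an explicit grouped convolution built from tf.split / tf.nn.conv2d / tf.concat with the same weights; a rough TF1-style sketch (the shapes here are arbitrary illustrative choices):

import numpy as np
import tensorflow as tf

batch, h, w = 2, 5, 5
channels, cardinality = 8, 2
group_size = channels // cardinality  # 4 channels per group

x = tf.constant(np.random.rand(batch, h, w, channels), tf.float32)
# depthwise filter whose channel multiplier equals the group size
dw = tf.constant(np.random.rand(3, 3, channels, group_size), tf.float32)

# path 1: depthwise convolution followed by reshape + sum
y = tf.nn.depthwise_conv2d(x, dw, strides=[1, 1, 1, 1], padding='SAME')
y = tf.reshape(y, [batch, h, w, cardinality, group_size, group_size])
y = tf.reduce_sum(y, axis=4)
y = tf.reshape(y, [batch, h, w, channels])

# path 2: explicit grouped convolution via split + conv2d + concat,
# reusing the corresponding slices of the depthwise filter as the group kernels
x_groups = tf.split(x, cardinality, axis=3)
z_groups = []
for g, xg in enumerate(x_groups):
    kg = dw[:, :, g * group_size:(g + 1) * group_size, :]  # [3, 3, group_size, group_size]
    z_groups.append(tf.nn.conv2d(xg, kg, strides=[1, 1, 1, 1], padding='SAME'))
z = tf.concat(z_groups, axis=3)

with tf.Session() as sess:
    y_val, z_val = sess.run([y, z])
    print(np.allclose(y_val, z_val, atol=1e-4))  # True if the two paths agree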