
TensorFlow BatchNormalization - TypeError: axis must be int or list, type given:

Stack Overflow user
Asked on 2020-01-27 23:19:29
1 answer · 527 views · 0 following · 0 votes

Below is the code I use to train a U-Net. It is mostly plain Keras code with my own loss function and metrics, which are not relevant to the error. To reduce overfitting, I tried adding a BatchNormalization layer after each convolutional layer, but I keep getting a very strange error.

inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
c1 = tf.keras.layers.BatchNormalization(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c1)
c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

....

u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(u9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(c9)
c9 = tf.keras.layers.LeakyReLU(self.alpha)(c9)
c9 = tf.keras.layers.Dropout(self.dropout_rate)(c9)

outputs = tf.keras.layers.Conv2D(self.num_classes, (1, 1), activation='softmax')(c9)

self.model = tf.keras.Model(inputs=[inputs], outputs=[outputs])

self.model.compile(optimizer=tf.keras.optimizers.Adam(lr=self.learning_rate),
                   loss=cce_iou_coef,
                   metrics=[iou_coef, dice_coef])

Whenever I try to add a BatchNormalization layer, I get the following error. I can't find the problem — what am I doing wrong?

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-5c6c9c85bbcc> in <module>
----> 1 unet_dev = UNetDev()
      2 unet_dev.summary()

~/Desktop/notebook/bachelor-thesis/code/bachelorthesis/unet_dev.py in __init__(self, weight_url, width, height, channel, learning_rate, num_classes, alpha, dropout_rate)
     29             inputs = tf.keras.layers.Input((self.height, self.width, self.channel))
     30             c1 = tf.keras.layers.Conv2D(16, (3, 3), padding='same')(inputs)
---> 31             c1 = tf.keras.layers.BatchNormalization(c1)
     32             c1 = tf.keras.layers.LeakyReLU(self.alpha)(c1)
     33             c1 = tf.keras.layers.Dropout(self.dropout_rate)(c1)

~/anaconda3/envs/code/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, renorm, renorm_clipping, renorm_momentum, fused, trainable, virtual_batch_size, adjustment, name, **kwargs)
    167     else:
    168       raise TypeError('axis must be int or list, type given: %s'
--> 169                       % type(axis))
    170     self.momentum = momentum
    171     self.epsilon = epsilon

TypeError: axis must be int or list, type given: <class 'tensorflow.python.framework.ops.Tensor'>

1 Answer

Stack Overflow user

Accepted answer

Answered on 2020-01-27 23:22:21

Just replace

c1 = tf.keras.layers.BatchNormalization(c1)

with

c1 = tf.keras.layers.BatchNormalization()(c1)

This is how layers are called in Keras, like every other layer. The constructor takes configuration arguments for the layer, as shown in the docs — the input tensor is not one of them. You first instantiate the layer, then call the resulting object on the tensor.
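In the failing line, `c1` was passed to the constructor, where it was bound to the first positional parameter, `axis` — hence the `axis must be int or list` message with a Tensor type. The construct-then-call pattern can be illustrated with a minimal pure-Python sketch (the class below is a hypothetical stand-in, not the real Keras implementation):

```python
class BatchNormalization:
    """Toy stand-in mimicking the Keras layer's two-step usage pattern."""

    def __init__(self, axis=-1):
        # The constructor accepts only configuration, e.g. the axis to
        # normalize over — the same check the real layer performs.
        if not isinstance(axis, (int, list)):
            raise TypeError('axis must be int or list, type given: %s'
                            % type(axis))
        self.axis = axis

    def __call__(self, inputs):
        # The input tensor is supplied in a second, separate call.
        return ('batchnorm', self.axis, inputs)


# Correct: configure first, then apply to the input tensor.
out = BatchNormalization()('c1_tensor')

# Passing the tensor to the constructor reproduces the question's error,
# because the tensor lands in the `axis` parameter.
try:
    BatchNormalization('c1_tensor')
except TypeError as e:
    print(e)  # axis must be int or list, type given: <class 'str'>
```

The same two-step pattern applies to `Conv2D`, `Dropout`, and the other layers in the question's code, which is why those lines work.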

Votes: 2
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/59933970
