I am currently trying to build a conditional GAN, but I am running into a problem with the Concatenate layer.
I get the following error:

WARNING:tensorflow:Model was constructed with shape (None, 256, 256, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 256, 256, 1), dtype=tf.float32, name='discriminator_data_input'), name='discriminator_data_input', description="created by layer 'discriminator_data_input'"), but it was called on an input with incompatible shape (None, 128, 128, 1).

Logically, shouldn't they fit together? Aren't they both of shape (256, 256, 1)?
Some context on the parameters I am using:
1) This is the code I use to construct the discriminator:
def _build_discriminator(self):
    # Should be a variable on the class:
    n_classes = 4
    # Conditional feature: label input
    in_label = Input(shape=(1,), name='discriminator_label_input')
    # Conditional feature: embedding for the categorical input.
    # Each of the n_classes labels maps to a different 50-element vector
    # representation that will be learned by the discriminator model.
    li = Embedding(n_classes, 50)(in_label)
    # Conditional feature: scale up to the image dimensions with a linear activation
    n_nodes = self.input_dim[0] * self.input_dim[1]
    li = Dense(n_nodes)(li)
    # Conditional feature: reshape into an additional image channel
    li = Reshape((self.input_dim[0], self.input_dim[1], 1))(li)

    ### THE discriminator
    in_image = Input(shape=self.input_dim, name='discriminator_data_input')
    # Conditional feature: concatenate the label as an extra channel
    merge = Concatenate()([li, in_image])

    for i in range(self.n_layers_discriminator):
        x = Conv2D(
            filters=self.discriminator_conv_filters[i],
            kernel_size=self.discriminator_conv_kernel_size[i],
            strides=self.discriminator_conv_strides[i],
            padding='same',
            name='discriminator_conv_' + str(i),
            kernel_initializer=self.weight_init,
        )(merge if i == 0 else x)
        if self.discriminator_batch_norm_momentum and i > 0:
            x = BatchNormalization(momentum=self.discriminator_batch_norm_momentum)(x)
        x = self.get_activation(self.discriminator_activation)(x)
        if self.discriminator_dropout_rate:
            x = Dropout(rate=self.discriminator_dropout_rate)(x)

    x = Flatten()(x)
    discriminator_output = Dense(1, activation='sigmoid', kernel_initializer=self.weight_init)(x)
    self.discriminator = Model([in_image, in_label], discriminator_output)

Posted on 2021-02-22 04:45:59
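To see why these two tensors cannot be merged, the Concatenate step can be mimicked with plain NumPy, using the shapes from the warning. This is a standalone sketch, not part of the model code: the label channel is built from `self.input_dim` (256 × 256), while the actual training batch is 128 × 128, so the spatial dimensions disagree and the concatenation along the channel axis fails.

```python
import numpy as np

# Shapes taken from the warning message:
label_channel = np.zeros((1, 256, 256, 1))  # li, built from self.input_dim
image_batch = np.zeros((1, 128, 128, 1))    # the images actually fed in

try:
    # Concatenate() joins along the last (channel) axis, which requires
    # all other dimensions (here the spatial 256 vs 128) to match.
    np.concatenate([label_channel, image_batch], axis=-1)
except ValueError as e:
    print("concat failed:", e)
```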
The error message itself spells out the problem. The model was constructed to accept data of shape (None, 256, 256, 1), but during training you are feeding it images of shape (None, 128, 128, 1). Either resize the data before feeding it to the model, or change self.input_dim to (128, 128, 1) so that the model matches the data.
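As a sketch of the first option, each batch can be resized to the model's expected spatial dimensions before it is fed in. In a Keras pipeline you would typically use `tf.image.resize` for this; the pure-NumPy nearest-neighbour upsample below (a hypothetical helper, `resize_nearest`, not from the original code) just illustrates the shape transformation:

```python
import numpy as np

def resize_nearest(batch, target_h, target_w):
    """Nearest-neighbour resize of an (N, H, W, C) batch to (N, target_h, target_w, C)."""
    n, h, w, c = batch.shape
    rows = np.arange(target_h) * h // target_h  # source row for each target row
    cols = np.arange(target_w) * w // target_w  # source col for each target col
    return batch[:, rows][:, :, cols]

batch = np.random.rand(8, 128, 128, 1).astype(np.float32)
resized = resize_nearest(batch, 256, 256)
print(resized.shape)  # (8, 256, 256, 1)
```

The second option (changing `self.input_dim`) needs no resizing at all, but note that the label branch uses `self.input_dim` too, so both branches stay consistent either way.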
https://stackoverflow.com/questions/66308972