I'm trying to build a CNN + RNN using a pretrained model from tensorflow-hub:
base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4', input_shape=(244, 244, 3))
base_model.trainable = False
model = Sequential()
model.add(TimeDistributed(base_model, input_shape=(15, 244, 244, 3)))
model.add(LSTM(512))
model.add(Dense(256, activation='relu'))
model.add(Dense(3, activation='softmax'))
adam = Adam(learning_rate=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model.summary()

This is what I get:
2020-01-29 16:16:37.585888: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2494000000 Hz
2020-01-29 16:16:37.586205: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3b553f0 executing computations on platform Host. Devices:
2020-01-29 16:16:37.586231: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "./RNN.py", line 45, in <module>
model.add(TimeDistributed(base_model, input_shape=(None, 244, 244, 3)))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 178, in add
layer(x)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 842, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py", line 256, in call
output_shape = self.compute_output_shape(input_shape).as_list()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py", line 210, in compute_output_shape
child_output_shape = self.layer.compute_output_shape(child_input_shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 639, in compute_output_shape
raise NotImplementedError
NotImplementedError

Any suggestions? Is it possible to convert the KerasLayer into Conv2D, ... layers?
Posted on 2020-01-30 07:30:50
It looks like you can't use a TimeDistributed layer for this problem. However, since you don't want the ResNet to train and only need its output, you can do the following to avoid the TimeDistributed layer.
Instead of model.add(TimeDistributed(base_model, input_shape=(15, 244, 244, 3))), do

Option 1

# 2048 is the output size
model.add(
    Lambda(
        lambda x: tf.reshape(base_model(tf.reshape(x, [-1, 244, 244, 3])), [-1, 15, 2048]),
        input_shape=(15, 244, 244, 3))
)
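A quick sanity check on the fold/unfold trick in Option 1: merging the time axis into the batch axis, running a per-frame function, and reshaping back keeps every frame paired with its time step, because NumPy/TF reshapes are row-major. A toy NumPy sketch (the tiny fake_extractor and scaled-down shapes are illustrative stand-ins for the ResNet, not the real model):

```python
import numpy as np

# Toy stand-in for base_model: maps each (H, W, C) frame to a 1-d feature.
def fake_extractor(frames):  # frames: (N, H, W, C)
    return frames.reshape(frames.shape[0], -1).mean(axis=1, keepdims=True)  # (N, 1)

batch, time, h, w, c = 2, 15, 4, 4, 3
video = np.arange(batch * time * h * w * c, dtype=np.float32).reshape(batch, time, h, w, c)

# Same trick as the Lambda above: fold time into the batch axis,
# run the per-frame extractor, then unfold back to (batch, time, features).
flat = video.reshape(-1, h, w, c)        # (batch*time, H, W, C)
features = fake_extractor(flat)          # (batch*time, 1)
out = features.reshape(batch, time, -1)  # (batch, time, 1)

# Row-major reshape puts frame (b, t) at row b*time + t, so order is preserved:
assert np.allclose(out[1, 3], video[1, 3].mean())
```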
Option 2

If you don't want to rely too much on the output shape (this sacrifices performance):
model.add(
    Lambda(
        lambda x: tf.stack([base_model(xx) for xx in tf.unstack(x, axis=1)], axis=1),
        input_shape=(15, 244, 244, 3))
)

Posted on 2020-08-25 23:52:17
See my answer here. The error is thrown because Keras cannot compute the layer's output shape; you may be able to solve your problem by implementing compute_output_shape manually.
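A minimal sketch of that idea, assuming TF 2.x's layer-subclassing API (the ShapeAwareWrapper class, its name, and the output_dim parameter are my own, not from the linked answer):

```python
import tensorflow as tf

class ShapeAwareWrapper(tf.keras.layers.Layer):
    """Wraps a layer that lacks compute_output_shape (e.g. hub.KerasLayer)
    so that wrappers like TimeDistributed can infer shapes.
    output_dim is the wrapped model's feature size, e.g. 2048 for
    resnet_v2_50's feature vector."""

    def __init__(self, inner_layer, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.inner_layer = inner_layer
        self.output_dim = output_dim

    def call(self, inputs):
        return self.inner_layer(inputs)

    def compute_output_shape(self, input_shape):
        # (batch, H, W, C) -> (batch, output_dim)
        return tf.TensorShape([input_shape[0], self.output_dim])
```

With something like TimeDistributed(ShapeAwareWrapper(base_model, 2048), input_shape=(15, 244, 244, 3)), shape inference no longer hits the hub layer's unimplemented compute_output_shape; I haven't verified this against the exact versions in the question.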
Posted on 2021-10-28 16:28:09
If anyone is facing this problem: as @josef mentioned, the issue is that compute_output_shape is not implemented. You can work around it by specifying the layer's output shape:
def build_cnn_tfhub():
    extractor = tfhub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
                                 input_shape=(IMG_SIZE, IMG_SIZE, CHANNELS),
                                 output_shape=(EXTRACTOR_SIZE),
                                 trainable=False)
    return keras.layers.Lambda(lambda x: extractor(x))

As you can see, I also had to wrap the layer in a Lambda function, because it seems you cannot wrap a KerasLayer directly in a TimeDistributed layer.
In the code, EXTRACTOR_SIZE is 1280, but that is specific to MobileNet.
This workaround worked for me.
https://stackoverflow.com/questions/59970196