
Wrong predictions after converting the official tf2_image_retraining.ipynb to tfjs_graph_model?

Stack Overflow user
Asked on 2020-08-03 20:14:09
1 answer · 45 views · 0 followers · Score: 0

I built my own classifier with make_image_classifier and successfully converted it to TensorFlow.js. It produces confidences like A: 0.67XXX, B: 0.10XXX, C: 0.23XXXX (summing to 1.0).

I wanted to do the same with the corresponding official Colab notebook to get more control, but with the default settings I get strange predictions after conversion, such as:

A: 0.09XXX, B: 2.69XXX, C: 0.77XXXX

I would like to know why this happens -- how can I use the notebook and convert the result successfully?
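One likely explanation (not confirmed in the thread): the notebook's final Dense layer has no activation and the loss is `CategoricalCrossentropy(from_logits=True)`, so the exported model emits raw logits rather than probabilities, which is why a value like 2.69 can appear. A minimal numpy sketch of applying softmax to the question's raw outputs:

```python
import numpy as np

# Hypothetical raw outputs resembling the ones in the question.
# Values outside [0, 1] that don't sum to 1 suggest logits.
logits = np.array([0.09, 2.69, 0.77])

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)
print(probs, probs.sum())  # probabilities that sum to 1.0
```

Applying the same normalization on the TF.js side (or adding a softmax layer before export) would restore confidences that sum to 1.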

For both models I used the converter from https://github.com/tensorflow/tfjs/tree/master/tfjs-converter:

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --signature_name=serving_default \
    --saved_model_tags=serve \
    /mobilenet/saved_model \
    /mobilenet/web_model

make_image_classifier:

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --signature_name=serving_default \
    --saved_model_tags=serve \
    /Users/myname/Desktop/final_project/DATASETS/hub_maker_3.8. \
    /Users/myname/Desktop/final_project/DATASETS/hub_maker_3.8./tfjs

Notebook code:

import itertools
import os

import matplotlib.pylab as plt
import numpy as np

import tensorflow as tf
import tensorflow_hub as hub

print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")

module_selection = ("mobilenet_v2_100_224", 224) #@param ["(\"mobilenet_v2_100_224\", 224)", "(\"inception_v3\", 299)"] {type:"raw", allow-input: true}
handle_base, pixels = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))

BATCH_SIZE = 32 #@param {type:"integer"}

data_dir = '/content/gdrive/My Drive/Colab Notebooks/hub_maker/images'

datagen_kwargs = dict(rescale=1./255, validation_split=.20)
dataflow_kwargs = dict(target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
                   interpolation="bilinear")

valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    **datagen_kwargs)
valid_generator = valid_datagen.flow_from_directory(
    data_dir, subset="validation", shuffle=False, **dataflow_kwargs)

do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
  train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
      rotation_range=40,
      horizontal_flip=True,
      width_shift_range=0.2, height_shift_range=0.2,
      shear_range=0.2, zoom_range=0.2,
      **datagen_kwargs)
else:
  train_datagen = valid_datagen
train_generator = train_datagen.flow_from_directory(
    data_dir, subset="training", shuffle=True, **dataflow_kwargs)
class_dictionary = train_generator.class_indices
print(class_dictionary)

do_fine_tuning = False #@param {type:"boolean"}

print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
    # Explicitly define the input shape so the model can be properly
    # loaded by the TFLiteConverter
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes,
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()

model.compile(
  optimizer=tf.keras.optimizers.SGD(lr=0.005, momentum=0.9), 
  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
  metrics=['accuracy'])

steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = valid_generator.samples // valid_generator.batch_size
hist = model.fit(
    train_generator,
    epochs=10, steps_per_epoch=steps_per_epoch,
    validation_data=valid_generator,
    validation_steps=validation_steps).history

saved_model_path = '/content/gdrive/My Drive/Colab Notebooks/hub_maker'
tf.saved_model.save(model, saved_model_path)
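If the odd values are indeed logits, one possible fix (a sketch, not the notebook's own code) is to append a `Softmax` layer before exporting, so the converted TF.js graph model emits probabilities directly. A tiny stand-in model is used here in place of the trained one:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the trained classifier above: like the notebook's model,
# its final Dense layer emits raw logits (no activation).
model = tf.keras.Sequential([tf.keras.layers.Dense(3)])

# Wrap the trained model so the exported SavedModel outputs
# probabilities in [0, 1] that sum to 1.
export_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

probs = export_model(np.zeros((1, 4), dtype=np.float32)).numpy()
print(probs.sum())  # 1.0 (up to floating point)

# Then save and convert as before:
# tf.saved_model.save(export_model, saved_model_path)
```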
1 Answer

Stack Overflow user
Answered on 2021-06-09 22:57:08

There is a newer version of the TF Hub model available: https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5. It might be worth trying to see whether using it resolves the problem.
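Trying the suggestion only requires changing the handle in the notebook; everything else stays the same. A sketch of the one-line change:

```python
# Swap in the /5 feature-vector handle suggested in the answer;
# the rest of the notebook code is unchanged.
handle_base = "mobilenet_v2_100_224"
MODULE_HANDLE = "https://tfhub.dev/google/imagenet/{}/feature_vector/5".format(handle_base)
print(MODULE_HANDLE)
```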

Score: 1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/63229218