
Poor tensorflow-lite accuracy in an Android app

Stack Overflow user
Asked on 2021-09-27 19:20:33
1 answer · 310 views · 0 followers · 1 vote

I created a model based on the "iris" example: https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough

The only difference is that there are 18 features instead of 4.

import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# meaningless column names (these are just points on path)
column_names = []
for i in range(18):
    column_names.append(str(i))
column_names.append('code')

feature_names = column_names[:-1]
label_name = column_names[-1]

print("Features: {}".format(feature_names))
print("Label: {}".format(label_name))

batch_size = 32

train_dataset_fp='gestures_dataset.csv'
test_fp='gestures_test_dataset.csv'

train_dataset = tf.data.experimental.make_csv_dataset(
    train_dataset_fp,
    batch_size,
    column_names=column_names,
    label_name=label_name,
    num_epochs=1)

features, labels = next(iter(train_dataset))

print(features)

def pack_features_vector(features, labels):
  """Pack the features into a single array."""
  features = tf.stack(list(features.values()), axis=1)
  return features, labels

train_dataset = train_dataset.map(pack_features_vector)

features, labels = next(iter(train_dataset))

print(features[:5])

model = tf.keras.Sequential([
  tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(18,)),  # input shape required
  tf.keras.layers.Dense(10, activation=tf.nn.relu),
  tf.keras.layers.Dense(99) # max y
])

predictions = model(features)
predictions[:5]
tf.nn.softmax(predictions[:5])

print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print("    Labels: {}".format(labels))

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def loss(model, x, y, training):
  # training=training is needed only if there are layers with different
  # behavior during training versus inference (e.g. Dropout).
  y_ = model(x, training=training)

  return loss_object(y_true=y, y_pred=y_)


l = loss(model, features, labels, training=False)
print("Loss test: {}".format(l))

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets, training=True)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

loss_value, grads = grad(model, features, labels)

print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
                                          loss_value.numpy()))

optimizer.apply_gradients(zip(grads, model.trainable_variables))

print("Step: {},         Loss: {}".format(optimizer.iterations.numpy(),
                                          loss(model, features, labels, training=True).numpy()))

## Note: Rerunning this cell uses the same model variables

# Keep results for plotting
train_loss_results = []
train_accuracy_results = []

num_epochs = 201

for epoch in range(num_epochs):
  epoch_loss_avg = tf.keras.metrics.Mean()
  epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

  # Training loop - using batches of 32
  for x, y in train_dataset:
    # Optimize the model
    loss_value, grads = grad(model, x, y)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

    # Track progress
    epoch_loss_avg.update_state(loss_value)  # Add current batch loss
    # Compare predicted label to actual label
    # training=True is needed only if there are layers with different
    # behavior during training versus inference (e.g. Dropout).
    epoch_accuracy.update_state(y, model(x, training=True))

  # End epoch
  train_loss_results.append(epoch_loss_avg.result())
  train_accuracy_results.append(epoch_accuracy.result())

  if epoch % 50 == 0:
    print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                epoch_loss_avg.result(),
                                                                epoch_accuracy.result()))

print("Train Features: {}".format(feature_names))
print("Train Label: {}".format(label_name))

test_dataset = tf.data.experimental.make_csv_dataset(
    test_fp,
    batch_size,
    column_names=column_names,
    label_name='code',
    num_epochs=1,
    shuffle=False)

test_dataset = test_dataset.map(pack_features_vector)


test_accuracy = tf.keras.metrics.Accuracy()

for (x, y) in test_dataset:
  # training=False is needed only if there are layers with different
  # behavior during training versus inference (e.g. Dropout).
  logits = model(x, training=False)
  prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
  test_accuracy(prediction, y)

print("Test set accuracy: {:.3%}".format(test_accuracy.result()))

predict_dataset = tf.convert_to_tensor([
    [-1.71912153733768,-1.600284570848521,-1.5381268862069348,-1.4715348931204082,-1.288139905246001,-1.1430043332521034,-1.0377966435805905,-0.917447255766948,-0.8243115053236005,-0.7221500095444899,-0.6122901812467855,-0.5279331246212958,-0.4711441530129924,-0.42382317547383797,-0.379125249974646,-0.3584395324813018,-0.3773178934213158,-0.209925435991827], # 97
    [-1.719142448637883,-1.6070012986091102,-1.572271179903342,-1.5020486993447422,-1.228204782287214,-0.9707573015762415,-0.7300762773069808,-0.6385491168374685,-0.6184750884169654,-0.5158419098312659,-0.4242179680726442,-0.3985456036342834,-0.45447802906758966,-0.32393179755297036,-0.34842004414413613,-0.3137233709782729,-0.2703212385408242,-0.179563755582959], #97
    [-1.2434308209471674,-1.290953157205209,-1.227196227844711,-1.0484221580036261,-1.1187694579488316,-1.0359029865213372,-0.9530690852958581,-0.8660790849613064,-0.7502219135026331,-0.7117469449838862,-0.6296399221137823,-0.4460378665376745,-0.24453369606529932,-0.19766809148302653,-0.2370765208790035,-0.16973967297444315,-0.13099311119922252,-0.190617456158117], #97
    [0.0959563553661881,0.005325043913705397,-0.29823757321078925,-0.2705936526978625,-0.3951014272470887,-0.6356791899284542,-0.7295967272279017,-0.7692828405100477,-0.8921483861419475,-0.938276499224535,-1.1041582457726553,-0.9828524991149495,-1.2643027491255159,-1.2100579380905314,-1.3432258234528074,-1.3906264041632181,-1.4631851219015923,-1.550324747999713], #98
    [-0.4255400547149646,-0.4170633144859319,-0.30617172166256573,-0.32146197989889846,-0.3761100957494884,-0.47974793791908504,-0.49056452749853213,-0.591392915980335,-0.6902519118143285,-0.7964676298996293,-0.9360219373132298,-0.9111343228162241,-0.8983688928253518,-0.966734388774943,-0.9693728140937128,-1.1077741379921604,-1.2581032583883935,-1.49385364736419], #98
    [-0.08883842414482357,-0.17460057376690552,-0.20487321174320916,-0.34615149849742594,-0.4007630921307977,-0.5304849239297219,-0.6679686860060923,-0.7331022614090361,-0.8046170893211384,-0.8465804703035087,-0.9042750283451748,-1.052137679065288,-1.2453931454800324,-1.2529541366362975,-1.3787691439448853,-1.4353235462331002,-1.408970918123915,-1.542773717680230] #98
])

# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(predict_dataset, training=False)

for i, logits in enumerate(predictions):
  class_idx = tf.argmax(logits).numpy()
  p = tf.nn.softmax(logits)[class_idx]
  print("Example {} prediction: {} ({:4.1f}%)".format(i, class_idx, 100*p))

print("Saving...")

model.save('gestures.h5')

Then I convert the .h5 to .tflite and test it on my test dataset:

gest_tflite.py:

#!python

import tensorflow as tf
import numpy as np
import pandas as pd
import sys
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# generate .tflite file from .h5 file
tflite_model = tf.keras.models.load_model('gestures.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model)
tflite_save = converter.convert()
open("gestures.tflite", "wb").write(tflite_save)    

# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path="gestures.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on input data.
input_shape = input_details[0]['shape']
print("input_shape: ", input_shape)


from numpy import genfromtxt
my_data = genfromtxt('android.csv', delimiter=',')
X = my_data[:,0:18]
y = my_data[:,18]

i = 0
for g in X:
    interpreter.set_tensor(input_details[0]['index'], np.array([g], dtype=np.float32))

    interpreter.invoke()

    output_data = interpreter.get_tensor(output_details[0]['index'])
    logits = output_data[0]
    class_idx = tf.argmax(logits).numpy()
    p = tf.nn.softmax(logits)[class_idx]
    print("recognized code {} ({:4.1f}%), original code {}".format(class_idx, 100*p, y[i]))
    i += 1

It works well:

recognized code 97 (80.8%), original code 97.0
recognized code 97 (91.0%), original code 97.0
recognized code 97 (86.1%), original code 97.0
recognized code 97 (84.2%), original code 97.0
recognized code 97 (90.5%), original code 97.0
recognized code 97 (80.0%), original code 97.0
recognized code 97 (91.8%), original code 97.0
recognized code 97 (85.7%), original code 97.0
recognized code 97 (80.5%), original code 97.0
recognized code 97 (89.8%), original code 97.0
recognized code 97 (87.4%), original code 97.0
recognized code 97 (78.6%), original code 97.0
recognized code 97 (77.8%), original code 97.0
recognized code 97 (86.7%), original code 97.0
recognized code 97 (85.6%), original code 97.0
recognized code 97 (85.3%), original code 97.0
recognized code 98 (96.8%), original code 98.0
recognized code 98 (97.3%), original code 98.0
recognized code 98 (95.2%), original code 98.0
recognized code 98 (93.3%), original code 98.0
recognized code 98 (91.7%), original code 98.0
recognized code 98 (93.8%), original code 98.0
recognized code 98 (93.1%), original code 98.0
recognized code 98 (93.3%), original code 98.0
recognized code 98 (94.1%), original code 98.0
recognized code 98 (98.6%), original code 98.0
recognized code 98 (96.0%), original code 98.0
recognized code 98 (86.6%), original code 98.0
recognized code 98 (94.1%), original code 98.0
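
Before moving on to Android, the conversion itself can be sanity-checked by comparing the Keras model's logits with the TFLite interpreter's, rather than eyeballing per-example probabilities. A minimal sketch, assuming the file names from the post (`gestures.h5`, `gestures.tflite`) and any float32 batch with 18 columns; `keras_vs_tflite_max_diff` is a hypothetical helper, not part of the original script:

```python
# Sketch: numerically compare Keras and TFLite outputs on the same inputs,
# to confirm the conversion itself did not change the model.
import numpy as np
import tensorflow as tf

def keras_vs_tflite_max_diff(h5_path, tflite_path, x):
    """Return the largest absolute difference between Keras and TFLite logits."""
    keras_model = tf.keras.models.load_model(h5_path)
    keras_out = keras_model(x, training=False).numpy()

    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    tflite_out = np.empty_like(keras_out)
    for i, row in enumerate(x):
        # The converted model expects a [1, 18] float32 tensor per call.
        interpreter.set_tensor(inp['index'], row[np.newaxis, :].astype(np.float32))
        interpreter.invoke()
        tflite_out[i] = interpreter.get_tensor(out['index'])[0]
    return np.max(np.abs(keras_out - tflite_out))

# Usage (assumes the files from the post exist):
# x = np.random.randn(8, 18).astype(np.float32)
# print(keras_vs_tflite_max_diff('gestures.h5', 'gestures.tflite', x))
```

A maximum difference on the order of 1e-5 or smaller indicates the conversion preserved the weights, which matches the good Python-side results above.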

So the .tflite file is fine. But now I take the same .tflite file and add the model to my code using Android Studio 4.1:

    public static int my_argmax(float[] array) {
        if (array.length <= 0)
            throw new IllegalArgumentException("The array is empty");
        float max = array[0];
        int maxi=0;
        for (int i = 1; i < array.length; i++)
            if (array[i] > max) {
                max = array[i];
                maxi = i;
            }
        return maxi;
    }

    private int do_classify(float f[]) {
        int retcode = -1;
        Log.d("DE", "do_classify: Begin");
        Log.d("DE", "do_classify: input array size = " + f.length);
        Context c = getApplicationContext();
        try {
            Gestures model = Gestures.newInstance(c);

            // Creates inputs for reference.
            TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, f.length}, DataType.FLOAT32);
            ByteBuffer bb = ByteBuffer.allocateDirect(f.length*4);

            for (int i = 0; i < f.length; i++) {
                bb.putFloat(f[i]);
            }
            bb.rewind();

            inputFeature0.loadBuffer(bb);

            // Runs model inference and gets result.
            Gestures.Outputs outputs = model.process(inputFeature0);
            TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();

            float res[] = outputFeature0.getFloatArray();
            int size = outputFeature0.getFlatSize();
            retcode = my_argmax(res); // find out index of max element
            Log.d("DE","do_classify: outputFeature0 size = " + size);
            Log.d("DE","do_classify: datatype = " + outputFeature0.getDataType());
            Log.d("DE","do_classify: max = " + retcode);
            // Releases model resources if no longer used.
            model.close();
            Log.d("DE", "do_classify: model.close()");

        } catch (IOException e) {
            Log.d("DE", "do_classify: IO Exception");
        }
        Log.d("DE", "do_classify: retcode = " + retcode);
        return retcode;
    }
    private void onClassify() {
        List<List<String>> records = new ArrayList<List<String>>();
        try (CSVReader csvReader = new CSVReader(new FileReader("/sdcard/Documents/gst/gestures.csv"));) {
            String[] values = null;
            while ((values = csvReader.readNext()) != null) {
                records.add(Arrays.asList(values));
            }

            float[] argg = new float[18];
            for(int i=0; i< records.size(); i++) {
                List<String> strn = records.get(i);
                Log.d("DE", "onClassify: begin CSV line #" + i + " size = " + strn.size());
                for (int y=0; y < strn.size(); y++) {
                    if (y<18) {
                        Log.d("DE", "onClassify y = " + y + " value = " + strn.get(y));
                        try {
                            argg[y] = Float.parseFloat(strn.get(y));
                        } catch (NumberFormatException ex) {
                            Log.d("DE","onClassify: ERROR: " + ex.toString());
                        }
                    } else {
                        int ret = do_classify(argg);
                        Thread.sleep(100); // sometimes classification fails without this delay
                        Log.d("DE", "onClassify: recognized code " + ret + ", original code " + strn.get(y));
                    }
                }
            }
            Log.d("DE","onClassify: records = " + records.toString());
        } catch (FileNotFoundException e) {
            Log.d("DE","onClassify: ERROR: " + e.getStackTrace().toString());
        } catch (IOException e) {
            Log.d("DE","onClassify: ERROR: " + e.getStackTrace().toString());
        } catch (CsvValidationException e) {
            Log.d("DE","onClassify: ERROR: " + e.getStackTrace().toString());
        } catch (InterruptedException e) {
            Log.d("DE", "onClassify: ERROR: " + e.getStackTrace().toString());
        }
        Log.d("DE","onClassify: end");
    }

Roughly one in three classifications is wrong:

recognized code 98, original code 97
recognized code 97, original code 97
recognized code 98, original code 97
recognized code 97, original code 97
recognized code 0, original code 97
recognized code 98, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 98, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 97, original code 97
recognized code 98, original code 98
recognized code 97, original code 98
recognized code 97, original code 98
recognized code 97, original code 98
recognized code 98, original code 98
recognized code 98, original code 98
recognized code 97, original code 98
recognized code 98, original code 98
recognized code 98, original code 98
recognized code 97, original code 98
recognized code 98, original code 98
recognized code 98, original code 98
recognized code 98, original code 98
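
The rough one-in-three figure can be checked mechanically from the log. A small sketch; the `error_rate` helper is hypothetical, and the regex assumes the exact `recognized code X, original code Y` line format shown above:

```python
# Sketch: compute the misclassification rate from log lines in the
# "recognized code X, original code Y" format used above.
import re

def error_rate(lines):
    """Fraction of log lines whose recognized code differs from the original."""
    wrong = 0
    for line in lines:
        recognized, original = map(int, re.findall(r"code (\d+)", line))
        wrong += (recognized != original)
    return wrong / len(lines)

sample = [
    "recognized code 98, original code 97",  # wrong
    "recognized code 97, original code 97",  # right
    "recognized code 0, original code 97",   # wrong
]
print(error_rate(sample))  # prints 0.666... (2 of 3 wrong)
```

Applied to the full 29-line log above, 10 of 29 predictions differ, i.e. about 34% error, which matches the "roughly 1/3" estimate.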

I can't say the Android results are complete garbage, but the accuracy is too low to be usable. If the same .tflite model works fine in Python, then what is wrong on Android?


1 Answer

Stack Overflow user
Posted on 2022-01-04 23:11:55

I had a similar problem. It seems the issue was in how the model was converted. Adding the lines below fixed the tflite model for me.

converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS  # enable TensorFlow ops.
]
converter.experimental_enable_resource_variables = True
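
For context, a minimal sketch of the full conversion step with these flags applied; the `convert_with_select_ops` helper and exactly where the answerer inserted the lines are assumptions, and the file names follow the question:

```python
# Sketch: the question's .h5-to-.tflite conversion with the answer's flags.
import tensorflow as tf

def convert_with_select_ops(keras_model):
    """Convert a Keras model to a TFLite flatbuffer, allowing fallback to TF ops."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TensorFlow Lite ops
        tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to full TensorFlow ops
    ]
    converter.experimental_enable_resource_variables = True
    return converter.convert()

# Usage with the question's files:
# with open('gestures.tflite', 'wb') as f:
#     f.write(convert_with_select_ops(tf.keras.models.load_model('gestures.h5')))
```

Note that `SELECT_TF_OPS` pulls in full TensorFlow kernels for ops the builtin TFLite set cannot express; on Android this also requires adding the `org.tensorflow:tensorflow-lite-select-tf-ops` dependency.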
0 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/69352192