TFLite model.process() sometimes needs a TensorImage and sometimes a TensorBuffer to process an image. Are there different image input data types?
Some TFLite models' model.process() seems to need a TensorBuffer, while others need a TensorImage instead. I don't know why.
First, I took a regular TensorFlow / Keras model that was saved using:
model.save(keras_model_path,
           include_optimizer=True,
           save_format='tf')
Then I compressed and quantized this Keras model (300 MB) to TFLite format using:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
I got a much smaller TFLite model (40 MB) which needs a TensorBuffer <input_data> when calling model.process(<input_data>).
Second, I trained and saved a TFLite model using TensorFlow Lite Model Maker, and now I have a TFLite model that needs a TensorImage <input_data> when calling model.process(<input_data>).
Are there two different kinds of TFLite models depending on how you build and train them?
Maybe it's related to the fact that the Keras model was based on Inception while TensorFlow Lite Model Maker uses EfficientNet. How can I convert one kind of TFLite model into the other? How can I change the image input so that the same data, for example a TensorImage or bitmap, can be processed by both?
Solution 1:[1]
With the very valuable help of @Farmaker, I've solved my problem. I simply wanted to convert a Keras model into a more compact TFLite model to install it in a mobile application. I realized that the generated TFLite model was not compatible, and @Farmaker pointed out to me very correctly that the metadata was missing.
- Use TensorFlow 2.6.0 or lower because of an incompatibility with FlatBuffers.
pip3 uninstall tensorflow
pip3 install tensorflow==2.6.0
pip3 install keras==2.6.0
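As a quick sanity check (a minimal sketch, assuming only the pinned installs above), you can confirm that Python actually picks up the downgraded packages before converting:
import tensorflow as tf
import keras

# Both should report 2.6.0 after the pinned installs above
print(tf.__version__)
print(keras.__version__)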
- Convert the Keras model to TFlite
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
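Note that converter.representative_dataset is documented as a callable generator that yields lists of input tensors, so if assigning the tf.data.Dataset object directly fails in your TensorFlow version, a wrapper along these lines can be used (a minimal sketch; val_dataset and representative_data_gen are illustrative names, and any preprocessing your model expects should be applied inside the loop):
val_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))

def representative_data_gen():
    # Yield ~100 single-image batches (labels dropped); the converter
    # uses them to calibrate the uint8 quantization ranges.
    for images, _ in val_dataset.unbatch().batch(1).take(100):
        yield [images]

converter.representative_dataset = representative_data_gen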
- Add metadata as shown here in the « TensorFlow Lite Metadata Writer API » tutorial
- 3.1 Provide a labels.txt file (a file listing all the target class labels, one label per line). For instance, to create such a file:
your_labels_list = ['class1', 'class2', ...]
with open('labels.txt', 'w') as labels_file:
    for label in your_labels_list:
        labels_file.write(label + "\n")
pip3 install tflite-support-nightly
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils
ImageClassifierWriter = image_classifier.MetadataWriter
# Normalization parameters are required when processing the image
# https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
_TFLITE_MODEL_PATH = "<your_path_to_model.tflite>"
_LABELS_FILE = "<your_path_to_labels.txt>"
_TFLITE_METADATA_MODEL_PATHS = "<your_path_to_model_with_metadata.tflite>"
# Create the metadata writer
metadata_generator = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH),
    [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABELS_FILE])
# Verify the metadata generated
print(metadata_generator.get_metadata_json())
# Integrate the metadata into the TFlite model
writer_utils.save_file(metadata_generator.populate(), _TFLITE_METADATA_MODEL_PATHS)
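To double-check that the metadata and the label file really got packed into the exported model, you can read them back; a minimal sketch, assuming the same tflite-support package installed above:
from tflite_support import metadata

# Re-open the model that was just written with the metadata populated
displayer = metadata.MetadataDisplayer.with_model_file(_TFLITE_METADATA_MODEL_PATHS)

# Dump the embedded metadata JSON and the packed associated files
# (labels.txt should show up in the associated-file list)
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())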
That's all folks!
Solution 2:[2]
You can use TFDS, datasets, image datasets, tf.constant, and other data formats.
You can also use tf.constant tensors where you input the required parameters, or you can input weights directly (a convolution layer is also capable of this).
I determine the input and the target response categories.
[ Sequence-to-sequence mapping ]:
group_1_ShoryuKen_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,0 ], shape=(1, 1, 48), dtype=tf.float32)
# get weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
lstm_weight_1 = layer1_lstm.get_weights()[0]
lstm_filter_1 = layer1_lstm.get_weights()[1]
# set weights back on the same layer
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
layer1_lstm.set_weights([lstm_weight_1, lstm_filter_1])
[ TFDS ]:
import tensorflow as tf
import tensorflow_datasets as tfds
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

builder = tfds.builder('cats_vs_dogs', data_dir='file:\\\\F:\\datasets\\downloads\\PetImages\\')
ds = tfds.load('cats_vs_dogs', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
data = DataLoader.from_folder('F:\\datasets\\downloads\\flower_photos\\')
train_data, test_data = data.split(0.9)
for example in ds.take(1):
    image, label = example["image"], example["label"]
model = image_classifier.create(train_data)
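For completeness, a Model Maker classifier built this way is exported with the image metadata already embedded, which is why it accepts a TensorImage on the device side; a minimal sketch, assuming the tflite_model_maker objects created above (the export_dir path is just illustrative):
# Evaluate on the held-out split, then export a metadata-bearing .tflite
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='F:\\models\\flower_classifier\\')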
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | Martijn Pieters |