'"No Operation named [input] in the Graph" when using in a java programm a model from hub.KerasLayer with python and tensorflow 2.1.0
I trained a MobileNetV2 model using TensorFlow 2.1.0 and hub.KerasLayer in Python and exported it in SavedModel (pb) format with tf.keras.models.save_model. I can load it from Java, but I can't find a way to properly feed the graph.
Here is the model building and exporting code:
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers

# Grab one batch from the generator to call the feature extractor once
for image_batch, label_batch in train_generator:
    break

IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2"
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=IMG_SHAPE,
                                         name='input')
feature_batch = feature_extractor_layer(image_batch)
feature_extractor_layer.trainable = False

model = tf.keras.Sequential([
    feature_extractor_layer,
    layers.Dense(train_generator.num_classes, name='output')
])

# ...... training .......

tf.keras.models.save_model(model, export_path)
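Note that the exported serving signature does not expose the layer name 'input' directly: when the first layer of a Keras Sequential model is given an input_shape, Keras auto-creates an InputLayer named "<layer>_input", and the SavedModel serving signature prefixes its placeholder with "serving_default_" (both naming steps are visible in the saved_model_cli dumps in the solutions). A minimal sketch of that convention, with a hypothetical helper introduced only for illustration:

```python
def serving_feed_name(first_layer_name):
    """Guess the serving-signature feed name for a Keras Sequential model
    whose first layer was given input_shape= and name=first_layer_name.

    Keras auto-creates an InputLayer named "<name>_input", and the
    SavedModel serving signature prefixes its placeholder with
    "serving_default_".
    """
    return "serving_default_" + first_layer_name + "_input"

print(serving_feed_name("input"))  # serving_default_input_input
```

So a layer named 'input' is reachable from Java as "serving_default_input_input", not "input" — which is exactly the mismatch behind the exception below.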
Here is how I try to feed it in Java:
// getTensorFromImage() is defined elsewhere and tested OK
Tensor inputImage = getTensorFromImage();
final Session s = new Session(graphFromPBLoadedModel);
final Tensor result = s.runner().feed("input", inputImage)
        .fetch("output").run().get(0);
Here is the generated exception:
java.lang.IllegalArgumentException: No Operation named [input] in the Graph
I assume it's a signature problem during build or export, but I can't find the right way to fix it...
Solution 1:[1]
I could see where the problem was coming from by running saved_model_cli show --dir '.' --all in my exported model directory:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: serving_default_input_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 5)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
Using "serving_default_input_input"
, I could solve the problem.
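If you would rather locate the feed and fetch names programmatically than read the dump by eye, the text printed by saved_model_cli can be scraped for them. A rough sketch, assuming the serving-signature placeholders are prefixed "serving_default_" as in the dump above (parse_serving_names is a hypothetical helper, not part of TensorFlow):

```python
import re

# Abridged sample taken from the saved_model_cli dump above.
SAMPLE = """\
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: serving_default_input_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 5)
        name: StatefulPartitionedCall:0
"""

def parse_serving_names(cli_output):
    """Pull tensor names out of the serving_default signature block of
    `saved_model_cli show --dir <dir> --all` output."""
    # Keep only the text after the serving_default header.
    block = cli_output.split("signature_def['serving_default']")[-1]
    # Lines look like:  name: serving_default_input_input:0
    names = re.findall(r"name:\s*(\S+)", block)
    feeds = [n for n in names if n.startswith("serving_default")]
    fetches = [n for n in names if not n.startswith("serving_default")]
    return feeds, fetches

feeds, fetches = parse_serving_names(SAMPLE)
print(feeds)    # ['serving_default_input_input:0']
print(fetches)  # ['StatefulPartitionedCall:0']
```

The recovered names are what the Java Session.Runner expects: feed "serving_default_input_input" (with or without the ":0" output index, as the two solutions show) and fetch "StatefulPartitionedCall".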
Solution 2:[2]
Thanks to @toupieBleue's answer, I went to my exported model directory and ran the command line saved_model_cli show --dir '.' --all; part of the output is listed below:
(base) pro@prodeMacBook-Pro my_model % saved_model_cli show --dir '.' --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['lstm_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 7, 6)
        name: serving_default_lstm_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
Using serving_default_lstm_input:0 as the feed name, I could solve the problem.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | toupieBleue |
| Solution 2 | LannyXu |