I can't run my deployed TensorFlow Lite model in Android Studio. Is it due to my customized model?
I deployed my TensorFlow Lite model into my Android Studio project. The application runs, but after I take a picture and try to predict the result, the app crashes. It seems to be caused by the model, since I'm using a custom-trained model. Here is what the model looks like after I imported it into my project: it has 4 output features, whereas the guides I've seen show only 1. Can someone please help me resolve this issue? It would be a huge help.
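For context, here is a minimal sketch of how the model's input and output tensor shapes could be dumped at runtime with the plain Interpreter API, so the four outputs can be identified. The file name "signlanguage.tflite" and the helper class name are just assumptions on my part, not the real names in my project:

import android.content.Context;
import android.util.Log;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.util.Arrays;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;
import org.tensorflow.lite.support.common.FileUtil;

public class ModelInspector {

    // Logs every input and output tensor's name, shape and data type.
    // "signlanguage.tflite" is a placeholder for the model file in assets/.
    public static void dumpSignature(Context context) throws IOException {
        MappedByteBuffer modelBuffer = FileUtil.loadMappedFile(context, "signlanguage.tflite");
        Interpreter interpreter = new Interpreter(modelBuffer);
        try {
            for (int i = 0; i < interpreter.getInputTensorCount(); i++) {
                Tensor t = interpreter.getInputTensor(i);
                Log.d("ModelInspector", "input " + i + ": " + t.name()
                        + " shape=" + Arrays.toString(t.shape()) + " type=" + t.dataType());
            }
            for (int i = 0; i < interpreter.getOutputTensorCount(); i++) {
                Tensor t = interpreter.getOutputTensor(i);
                Log.d("ModelInspector", "output " + i + ": " + t.name()
                        + " shape=" + Arrays.toString(t.shape()) + " type=" + t.dataType());
            }
        } finally {
            interpreter.close();
        }
    }
}

If the four outputs come back with shapes like [1, N, 4], [1, N], [1, N] and [1], the model is most likely an object-detection export rather than a plain classifier, which would explain why the imported model shows four output features.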
EDIT: I tried changing the imageSize value to 320, which matches the createFixedSize values. The application now RUNS, produces a result, and does not crash after I capture an image. However, it displays the same class every time I run the prediction. I tried inputting different images hoping to get a different class as a result, but it does not work at all. I also tried printing some values to debug it, but I'm not sure what to do next.
I have also included the project code below so you can see it.
Note: I'm heavily relying on source code found around the internet right now, and my level of understanding as a programmer is not very advanced, but I'm familiar with coding in Java.
try {
    Tflite model = Tflite.newInstance(context);

    // Creates inputs for reference.
    TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 320, 320, 3}, DataType.FLOAT32);
    inputFeature0.loadBuffer(byteBuffer);

    // Runs model inference and gets the result.
    Tflite.Outputs outputs = model.process(inputFeature0);
    TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
    // TensorBuffer outputFeature1 = outputs.getOutputFeature1AsTensorBuffer();
    // TensorBuffer outputFeature2 = outputs.getOutputFeature2AsTensorBuffer();
    // TensorBuffer outputFeature3 = outputs.getOutputFeature3AsTensorBuffer();

    // Releases model resources if no longer used.
    model.close();
} catch (IOException e) {
    // TODO Handle the exception
}
And this is what my MainActivity.java looks like:
public class MainActivity extends AppCompatActivity {

    TextView result, demoTxt, classified, clickHere;
    ImageView imageView, arrowImage;
    Button picture;
    int imageSize = 320; // default image size

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        result = findViewById(R.id.result);
        imageView = findViewById(R.id.imageView);
        picture = findViewById(R.id.button);
        demoTxt = findViewById(R.id.demoText);
        clickHere = findViewById(R.id.click_here);
        arrowImage = findViewById(R.id.demoArrow);
        classified = findViewById(R.id.classified);

        demoTxt.setVisibility(View.VISIBLE);
        clickHere.setVisibility(View.GONE);
        arrowImage.setVisibility(View.VISIBLE);
        classified.setVisibility(View.GONE);
        result.setVisibility(View.GONE);

        picture.setOnClickListener(new View.OnClickListener() {
            @RequiresApi(api = Build.VERSION_CODES.M)
            @Override
            public void onClick(View view) {
                // launch the camera if we have permission
                if (checkSelfPermission(Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
                    Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                    startActivityForResult(cameraIntent, 1);
                } else {
                    // request camera permission if we don't have it
                    requestPermissions(new String[]{Manifest.permission.CAMERA}, 100);
                }
            }
        });
    }
    @Override
    protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
        if (requestCode == 1 && resultCode == RESULT_OK) {
            Bitmap image = (Bitmap) data.getExtras().get("data");
            int dimension = Math.min(image.getWidth(), image.getHeight());
            image = ThumbnailUtils.extractThumbnail(image, dimension, dimension);
            imageView.setImageBitmap(image);

            demoTxt.setVisibility(View.GONE);
            clickHere.setVisibility(View.VISIBLE);
            arrowImage.setVisibility(View.GONE);
            classified.setVisibility(View.VISIBLE);
            result.setVisibility(View.VISIBLE);

            image = Bitmap.createScaledBitmap(image, imageSize, imageSize, false);
            classifyImage(image);

            System.out.println("image.getWidth1: " + image.getWidth());
            System.out.println("image.getHeight1: " + image.getHeight());
            System.out.println("dimension: " + dimension);
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
    private void classifyImage(Bitmap image) {
        try {
            Tflite model = Tflite.newInstance(getApplicationContext());

            // create input for reference
            TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 320, 320, 3}, DataType.FLOAT32);
            ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * imageSize * imageSize * 3);
            byteBuffer.order(ByteOrder.nativeOrder());

            // get 1D array of imageSize * imageSize pixels in image
            int[] intValue = new int[imageSize * imageSize];
            image.getPixels(intValue, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());

            // iterate over pixels and extract R, G, B values, add to bytebuffer
            int pixel = 0;
            for (int i = 0; i < imageSize; i++) {
                for (int j = 0; j < imageSize; j++) {
                    int val = intValue[pixel++]; // RGB
                    byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 255.f));
                    byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 255.f));
                    byteBuffer.putFloat((val & 0xFF) * (1.f / 255.f));
                }
            }
            inputFeature0.loadBuffer(byteBuffer);

            // run model inference and get the results
            Tflite.Outputs outputs = model.process(inputFeature0);
            TensorBuffer outputFeatures0 = outputs.getOutputFeature0AsTensorBuffer();
            // THIS IS WHAT I AM TRYING TO GET TO WORK
            TensorBuffer outputFeatures1 = outputs.getOutputFeature1AsTensorBuffer();
            TensorBuffer outputFeatures2 = outputs.getOutputFeature2AsTensorBuffer();
            TensorBuffer outputFeatures3 = outputs.getOutputFeature3AsTensorBuffer();

            // inferences
            float[] confidence = outputFeatures0.getFloatArray();

            // find the index of the class with the biggest confidence
            int maxPos = 0;
            float maxConfidence = 0;
            for (int i = 0; i < confidence.length; i++) {
                if (confidence[i] > maxConfidence) {
                    maxConfidence = confidence[i];
                    maxPos = i;
                }
            }

            String[] classes = {"A", "B", "C", "Catch", "D", "E", "Emergency", "F", "Fear", "Feel", "Fine", "G", "H",
                    "Help", "ILoveYou", "IReallyLoveYou", "I", "It", "J", "K", "L", "LiveLong", "Love", "M", "Mine", "Myself",
                    "N", "O", "Okay", "P", "Q", "R", "S", "Sorry", "Stop", "T", "ThankYou", "This", "Time", "Tremble", "U", "V",
                    "W", "Where", "X", "Y", "Yes", "You", "Yours", "Z"};
            result.setText(classes[maxPos]);

            result.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View view) {
                    // to search the sign language on the internet
                    startActivity(new Intent(Intent.ACTION_VIEW,
                            Uri.parse("https://www.google.com/search?q=Signs+" + result.getText())));
                }
            });

            model.close();
        } catch (IOException e) {
            // TODO Handle the exception
        }
    }
}
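Regarding the EDIT above (the prediction is always the same class), this is the kind of debug printing I was trying to add inside classifyImage, shown here as a small sketch; logConfidences is just a name I made up:

// Debugging sketch: print the raw confidence values so it is visible
// whether they actually change from one photo to the next.
private void logConfidences(float[] confidence) {
    System.out.println("confidence.length: " + confidence.length);
    System.out.println("confidence values: " + java.util.Arrays.toString(confidence));
}

I would call it right after float[] confidence = outputFeatures0.getFloatArray(); and compare the printed arrays for two different pictures; if the numbers are identical both times, the problem is probably in the input preprocessing rather than in the argmax loop.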
I have no idea how to use these "inferences" (the extra output feature tensors) when finding the index of the class with the highest confidence so that I can get the result.
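From what I can tell, models exported with four output tensors are usually SSD-style object-detection models, where the outputs are detection boxes, class ids, scores and a detection count rather than one probability per class. Below is a sketch of how the best class could be picked in that case; the mapping of outputFeatures0..3 to scores, class ids and count is only an assumption and would have to be checked against the real tensor shapes first (for example with the inspector sketch near the top):

// Sketch only: assumes an SSD-style detection export where
//   scores   has shape [1, N]  (one confidence per detection)
//   classIds has shape [1, N]  (the label index of each detection)
//   count    has shape [1]     (how many of the N detections are valid)
// Which of outputFeatures0..3 holds which tensor is an assumption.
private String pickBestDetection(float[] scores, float[] classIds, int numDetections, String[] classes) {
    int best = 0;
    float bestScore = 0f;
    for (int i = 0; i < numDetections; i++) {
        if (scores[i] > bestScore) {
            bestScore = scores[i];
            best = i;
        }
    }
    // the label comes from the classIds tensor, not from the position of the best score
    int labelIndex = (int) classIds[best];
    return classes[labelIndex] + " (" + bestScore + ")";
}

If that mapping turned out to be right, it would be called with something like pickBestDetection(outputFeatures0.getFloatArray(), outputFeatures3.getFloatArray(), (int) outputFeatures2.getFloatArray()[0], classes) instead of taking the argmax of outputFeatures0 directly.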
This is the error from LogCat whenever I click the button to predict the given image:
EDIT: The crash below was resolved after I set the imageSize value to 320, but the prediction always returns the same class no matter what image is inserted.
2022-05-04 21:25:33.677 21148-21148/com.example.tflite E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.example.tflite, PID: 21148
    java.lang.RuntimeException: Failure delivering result ResultInfo{who=null, request=1, result=-1, data=Intent { act=inline-data (has extras) }} to activity {com.example.tflite/com.example.tflite.MainActivity}: java.lang.IllegalArgumentException: The size of byte buffer and the shape do not match.
    Caused by: java.lang.IllegalArgumentException: The size of byte buffer and the shape do not match.
        // Line 119 of MainActivity.java is: inputFeature0.loadBuffer(byteBuffer);
        at com.example.tflite.MainActivity.classifyImage(MainActivity.java:119)
        // Line 90 of MainActivity.java is: classifyImage(image); which is the call outside the function.
        at com.example.tflite.MainActivity.onActivityResult(MainActivity.java:90)
These two stack trace lines were highlighted in blue, so I put them here.
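For reference, the mismatch in that exception is just arithmetic: a FLOAT32 input of shape [1, 320, 320, 3] needs 1 * 320 * 320 * 3 * 4 = 1,228,800 bytes, so the ByteBuffer passed to loadBuffer has to be allocated with exactly that capacity. A small standalone sketch of the check I mean (nothing in it is specific to my project):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferSizeCheck {
    public static void main(String[] args) {
        int[] shape = {1, 320, 320, 3}; // the model's input shape
        int bytesPerFloat = 4;          // FLOAT32

        int expectedBytes = bytesPerFloat;
        for (int dim : shape) {
            expectedBytes *= dim;
        }

        // This mirrors the allocation in classifyImage; if imageSize were anything
        // other than 320, capacity() would not equal expectedBytes and loadBuffer()
        // would throw "The size of byte buffer and the shape do not match."
        int imageSize = 320;
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * imageSize * imageSize * 3);
        byteBuffer.order(ByteOrder.nativeOrder());

        System.out.println("expected bytes:  " + expectedBytes);          // 1228800
        System.out.println("allocated bytes: " + byteBuffer.capacity());  // 1228800
    }
}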
Source: Stack Overflow, licensed under CC BY-SA 3.0.