input_image_meta shape error while using pixellib custom training on images
I am using pixellib for training custom image instance segmentation. I have created a dataset, which can be seen at the link below.
Dataset: https://drive.google.com/drive/folders/1MjpDNZtzGRNxEtCDcTmrjUuB1ics_3Jk?usp=sharing
The code I used to build the custom model is:
import pixellib
from pixellib.custom_train import instance_custom_training
train_maskrcnn = instance_custom_training()
train_maskrcnn.modelConfig(network_backbone = "resnet101", num_classes= 2, batch_size = 4)
train_maskrcnn.load_pretrained_model("/content/drive/MyDrive/AI ML Trainee/Damage Detection/pix/mask_rcnn_coco.h5")
train_maskrcnn.load_dataset("/content/drive/MyDrive/AI ML Trainee/Damage Detection/pix/Dataset")
train_maskrcnn.train_model(num_epochs = 300, augmentation=True, path_trained_models = "mask_rcnn_models")
The console output I get is below:
Using resnet101 as network backbone For Mask R-CNN model
Applying Default Augmentation on Dataset
Train 48 images
Validate 0 images
Checkpoint Path: /content/mask_rcnn_models
Selecting layers to train
Epoch 1/300
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-c2bd46bd70ab> in <module>()
7 train_maskrcnn.load_pretrained_model("/content/drive/MyDrive/AI ML Trainee/Damage Detection/pix/mask_rcnn_coco.h5")
8 train_maskrcnn.load_dataset("/content/drive/MyDrive/AI ML Trainee/Damage Detection/pix/Dataset")
----> 9 train_maskrcnn.train_model(num_epochs = 300, augmentation=True, path_trained_models = "mask_rcnn_models")
8 frames
/usr/local/lib/python3.7/dist-packages/pixellib/custom_train.py in train_model(self, num_epochs, path_trained_models, layers, augmentation)
122
123 self.model.train(self.dataset_train, self.dataset_test,models = path_trained_models, augmentation = augmentation,
--> 124 epochs=num_epochs,layers=layers)
125
126
/usr/local/lib/python3.7/dist-packages/pixellib/mask_rcnn.py in train(self, train_dataset, val_dataset, epochs, layers, models, augmentation, no_augmentation_sources)
2316 max_queue_size=100,
2317 workers=workers,
-> 2318 verbose = 1
2319
2320 )
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
806 max_queue_size=max_queue_size,
807 workers=workers,
--> 808 use_multiprocessing=use_multiprocessing)
809
810 def evaluate(self,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_generator_v1.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing)
591 shuffle=shuffle,
592 initial_epoch=initial_epoch,
--> 593 steps_name='steps_per_epoch')
594
595 def evaluate(self,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_generator_v1.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
257
258 is_deferred = not model._is_compiled
--> 259 batch_outs = batch_function(*batch_data)
260 if not isinstance(batch_outs, list):
261 batch_outs = [batch_outs]
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
1061 x, y, sample_weights = self._standardize_user_data(
1062 x, y, sample_weight=sample_weight, class_weight=class_weight,
-> 1063 extract_tensors_from_dataset=True)
1064
1065 # If `self._distribution_strategy` is True, then we are in a replica context
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2334 is_dataset=is_dataset,
2335 class_weight=class_weight,
-> 2336 batch_size=batch_size)
2337
2338 def _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py in _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs, is_dataset, class_weight, batch_size)
2361 feed_input_shapes,
2362 check_batch_axis=False, # Don't enforce the batch size.
-> 2363 exception_prefix='input')
2364
2365 # Get typespecs for the input data and sanitize it if necessary.
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_utils_v1.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
538 ': expected ' + names[i] + ' to have shape ' +
539 str(shape) + ' but got array with shape ' +
--> 540 str(data_shape))
541 return data
542
ValueError: Error when checking input: expected input_image_meta to have shape (15,) but got array with shape (14,)
I checked my dataset; every image has one annotation.
I feel something is wrong with my dataset. Any help is appreciated.
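For context on the numbers in the error: in the matterport Mask R-CNN code that pixellib's mask_rcnn.py builds on, the input_image_meta vector has length 12 + NUM_CLASSES (counting the background class). A sketch of the arithmetic, assuming pixellib keeps that layout and adds the background class to the num_classes you pass to modelConfig:
import numpy as np

# Mask R-CNN image_meta layout (matterport convention, assumed here):
#   image_id (1) + original_image_shape (3) + image_shape (3)
#   + window (4) + scale (1) + active_class_ids (NUM_CLASSES)
def meta_length(num_classes_with_background):
    return 1 + 3 + 3 + 4 + 1 + num_classes_with_background

print(meta_length(3))  # 15 -> what the model expects (2 classes + background)
print(meta_length(2))  # 14 -> what the data generator actually produced
So the mismatch means the data generator saw one class fewer than the model config expects, which points at the dataset rather than the weights.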
Solution 1 [1]
Okay, this error is solved. I went back to the pixellib library, and according to it, validation data is also required to run the model; my dataset had no validation split, which is why the log above shows "Validate 0 images". I added validation data (just a few images) and the library is functioning perfectly.
Sorry for the trouble.
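For anyone hitting the same error: pixellib's custom training expects the dataset folder to contain both a train and a test subfolder of labelme-annotated images, and the test folder supplies the validation split reported as "Validate N images" in the log. Below is a minimal sanity check you can run before training, assuming that layout (the path is the one from the question):
from pathlib import Path

# Expected layout (pixellib's labelme convention):
#   Dataset/
#     train/  image1.jpg, image1.json, ...
#     test/   image5.jpg, image5.json, ...   <- validation split
dataset = Path("/content/drive/MyDrive/AI ML Trainee/Damage Detection/pix/Dataset")

for split in ("train", "test"):
    folder = dataset / split
    if not folder.is_dir():
        print(f"{split}: folder is missing")
        continue
    annotations = list(folder.glob("*.json"))
    images = [p for p in folder.iterdir()
              if p.suffix.lower() in (".jpg", ".jpeg", ".png")]
    print(f"{split}: {len(images)} images, {len(annotations)} annotations")
If the test split prints 0 images (or the folder is missing), load_dataset will report "Validate 0 images" exactly as in the console output above.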
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Arihant Kamdar |