TensorFlow fit method with generator error: AttributeError: 'tuple' object has no attribute 'shape'
I'm trying to get a basic segmentation model working before making major tweaks, but no matter how simple I make it I receive this error. I'm working in Google Colaboratory.
Found 500 images belonging to 1 classes.
Found 500 images belonging to 1 classes.
Found 50 images belonging to 1 classes.
Found 50 images belonging to 1 classes.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-420c271bfe7a> in <module>()
3 steps_per_epoch = (32),
4 validation_data=val_generator(),
----> 5 callbacks=callbacks_list)
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
AttributeError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:759 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:388 update_state
self.build(y_pred, y_true)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:319 build
self._metrics, y_true, y_pred)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1139 map_structure_up_to
**kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1235 map_structure_with_tuple_paths_up_to
*flat_value_lists)]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1234 <listcomp>
results = [func(*args, **kwargs) for args in zip(flat_path_list,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1137 <lambda>
lambda _, *values: func(*values), # Discards the path arg.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 _get_metric_objects
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 <listcomp>
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:440 _get_metric_object
y_t_rank = len(y_t.shape.as_list())
AttributeError: 'tuple' object has no attribute 'shape'
Based on what I found online, I think it's related to the generator, but I can't pinpoint exactly what. Could it also be that I've compiled the model improperly for segmentation? (I'm a novice with this type of model.)
Here is my model
Model: "functional_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 1024, 1024, 3)] 0
_________________________________________________________________
blockx_conv1 (Conv2D) (None, 1024, 1024, 64) 1792
_________________________________________________________________
blockx_conv2 (Conv2D) (None, 1024, 1024, 64) 36928
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 512, 512, 64) 0
_________________________________________________________________
blocky_conv1 (Conv2D) (None, 512, 512, 128) 73856
_________________________________________________________________
blocky_conv2 (Conv2D) (None, 512, 512, 256) 295168
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 256, 256, 256) 0
_________________________________________________________________
blockxy_conv1 (Conv2D) (None, 256, 256, 512) 1180160
_________________________________________________________________
dropout_3 (Dropout) (None, 256, 256, 512) 0
_________________________________________________________________
blockxy_conv2 (Conv2D) (None, 256, 256, 1024) 25691136
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 128, 128, 1024) 0
_________________________________________________________________
blockxy_conv3 (Conv2D) (None, 128, 128, 1024) 1049600
_________________________________________________________________
blockxy_conv4 (Conv2D) (None, 128, 128, 3) 3075
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 1024, 1024, 3) 0
=================================================================
Total params: 28,331,715
Trainable params: 28,331,715
Non-trainable params: 0
My compile call is as follows. I think this could also be a source of error, as I'm still not sure which optimizer and loss function I should be using.
model.compile(optimizer='adam', loss='categorical_crossentropy',metrics=['acc','loss','val_loss','val_acc'])
Here is my fit call. I kept it simple to make troubleshooting easier:
results = model.fit(train_generator(), epochs=1,
                    steps_per_epoch = (32),
                    validation_data=val_generator(),
                    callbacks=callbacks_list)
Here's the generator I've used, just in case:
def train_generator(batch=16):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    train_datagen = ImageDataGenerator(
        rescale=1./255)
    train_image_generator = train_datagen.flow_from_directory(
        '/content/drive/My Drive/Thesis Pics/train_frames/',
        batch_size=batch,
        target_size=(1024, 768))
    train_mask_generator = train_datagen.flow_from_directory(
        '/content/drive/My Drive/Thesis Pics/train_masks/',
        batch_size=batch,
        target_size=(1024, 768))
    train_generator = zip(train_image_generator, train_mask_generator)
    return train_generator
Since I'm new to segmentation, I'm not sure which nuances differ from classification. Is there something obvious I've missed?
Solution 1:[1]
I am guessing here, but .fit() expects plain data, a tf.data.Dataset, or a data generator (which I am not very familiar with) that yields (inputs, targets) pairs. However, because you return zip(train_image_generator, train_mask_generator), each step hands .fit() a tuple of two (images, labels) batches instead, which is not a format .fit() can use for training.
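If it helps, here is a minimal sketch (my own, not verified against the asker's data) of one common workaround: give each flow_from_directory call class_mode=None so it yields only images, keep frames and masks aligned with a shared seed, and wrap the zipped iterators in a plain Python generator that yields (inputs, targets) batches. The frame_dir and mask_dir arguments are placeholders.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

def paired_generator(frame_dir, mask_dir, batch=16, target_size=(1024, 768), seed=42):
    datagen = ImageDataGenerator(rescale=1. / 255)
    # class_mode=None makes each iterator yield only image batches (no labels);
    # the shared seed keeps frames and masks shuffled in the same order.
    frames = datagen.flow_from_directory(
        frame_dir, class_mode=None, batch_size=batch,
        target_size=target_size, seed=seed)
    masks = datagen.flow_from_directory(
        mask_dir, class_mode=None, batch_size=batch,
        target_size=target_size, seed=seed)
    # zip() alone would hand fit() a tuple of iterators; yielding from it
    # produces the (inputs, targets) batches fit() actually expects.
    for x, y in zip(frames, masks):
        yield x, y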
Solution 2:[2]
The first parameter of model.fit is documented below:
Arguments:
x: Input data. It could be:
- A Numpy array (or array-like), or a list of arrays
(in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors
(in case the model has multiple inputs).
- A dict mapping input names to the corresponding array/tensors,
if the model has named inputs.
- A `tf.data` dataset. Should return a tuple
of either `(inputs, targets)` or
`(inputs, targets, sample_weights)`.
- A generator or `keras.utils.Sequence` returning `(inputs, targets)`
or `(inputs, targets, sample weights)`.
A more detailed description of unpacking behavior for iterator types
(Dataset, generator, Sequence) is given below.
But you are passing a tuple created with the zip function, which does not match any of these.
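For illustration only (not from the original post), a toy generator that satisfies this contract looks like the following; each iteration yields one (inputs, targets) tuple rather than a zip of two iterators.

import numpy as np

def toy_segmentation_generator(batch=4, size=64):
    # Yields fake (frames, masks) batches just to show the shape of what
    # fit() consumes; real code would load images from disk instead.
    while True:
        x = np.random.rand(batch, size, size, 3).astype("float32")
        y = np.random.rand(batch, size, size, 3).astype("float32")
        yield x, y  # one (inputs, targets) tuple per step

# model.fit(toy_segmentation_generator(), steps_per_epoch=32, epochs=1)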
Solution 3:[3]
Despite many places saying you can zip two generators, I instead created a zip of tf.data datasets and used THAT to solve the problem. So I fed fit() a single dataset that pairs frames with masks.
import tensorflow as tf

IMAGE_SIZE = (256, 256)

def train_generator():
    frames = tf.keras.preprocessing.image_dataset_from_directory(
        '/content/drive/My Drive/Thesis Pics/train_frames',
        label_mode=None, image_size=IMAGE_SIZE, batch_size=4)
    masks = tf.keras.preprocessing.image_dataset_from_directory(
        '/content/drive/My Drive/Thesis Pics/train_masks',
        label_mode=None, image_size=IMAGE_SIZE, batch_size=4)
    train = tf.data.Dataset.zip((frames, masks))
    return train

def val_generator():
    frames = tf.keras.preprocessing.image_dataset_from_directory(
        '/content/drive/My Drive/Thesis Pics/val_frames',
        label_mode=None, image_size=IMAGE_SIZE, batch_size=4)
    masks = tf.keras.preprocessing.image_dataset_from_directory(
        '/content/drive/My Drive/Thesis Pics/val_masks',
        label_mode=None, image_size=IMAGE_SIZE, batch_size=4)
    val = tf.data.Dataset.zip((frames, masks))
    return val

train_gen = train_generator()
val_gen = val_generator()
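To connect this back to the original fit() call, here is how I would feed the zipped datasets in (a sketch; the 1./255 rescaling via map is my own addition, since image_dataset_from_directory returns raw pixel values in [0, 255]):

# Each element of the zipped dataset is already a (frame_batch, mask_batch) pair.
normalize = lambda frame, mask: (frame / 255.0, mask / 255.0)

results = model.fit(
    train_gen.map(normalize),
    validation_data=val_gen.map(normalize),
    epochs=1)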
Solution 4:[4]
What solved the problem for me was to change:
metrics=["acc"]
to:
metrics=["binary_accuracy"]
You could try removing the metrics entirely to see if it works, then add them back one by one using their full names.
Sorry, I can't give a more comprehensive explanation of why that is. It seems to me to be a bug that still persists.
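For reference, applied to the compile call from the question, the change would look roughly like this (my sketch; note that 'loss' and 'val_loss' are not valid metric identifiers anyway, and validation metrics are reported automatically once validation_data is passed):

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['binary_accuracy'])  # spelled-out metric name instead of 'acc'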
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | MichaelJanz |
| Solution 2 | FancyXun |
| Solution 3 | TheJeran |
| Solution 4 | Alexander Bartl |