Using datagen.flow_from_directory with image segmentation and number of classes

I used "flow_from_directory" but my "lose" is not decreasing. I notice When I run "fit_generator". Its says there is 1 classes, even though my mask have 3 classes. My question is, do we need to indicate in the "datagen.flow_from_directory" how many number of classes? do yo see any mistake in the "datagen.flow_from_directory" call:


My directory structure is shown below:

(screenshot of the directory structure)

My code is shown below:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import segmentation_models as sm

inputs = tf.keras.layers.Input(shape=(IMAGE_SIZE, IMAGE_SIZE, 3), name="input_image")

model = tf.keras.applications.ResNet50(input_tensor=inputs, weights=None, include_top=True)

LR = 0.0001
optim = keras.optimizers.Adam(LR)

dice_loss_se2 = sm.losses.DiceLoss()
mae = tf.keras.losses.MeanAbsoluteError()
metrics = [mae, sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5), dice_loss_se2]

model.compile(optimizer=optim, loss=dice_loss_se2, metrics=metrics)


image_datagen = ImageDataGenerator()

mask_datagen = ImageDataGenerator()

image_generator = image_datagen.flow_from_directory("/mydata/train/image",
                                                    target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                    class_mode=None)

mask_generator = mask_datagen.flow_from_directory("/mydata/train/mask",
                                                  target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                  class_mode=None)

train_generator = zip(image_generator, mask_generator)

train_steps = 1212//batch_size

#---------------------------


image_generator_val = image_datagen.flow_from_directory("/mydata/Validation/image",
                                                        target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                        class_mode=None)

mask_generator_val = mask_datagen.flow_from_directory("/mydata/Validation/mask",
                                                      target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                      class_mode=None)

val_generator = zip(image_generator_val, mask_generator_val)

val_steps = 250//batch_size



history = model.fit_generator(train_generator, validation_data=val_generator, steps_per_epoch=train_steps, validation_steps=val_steps, epochs=epochs, verbose=1)


Solution 1:[1]

Your problem is in your directory structure. What you want is a directory structure as shown below:

mydata
---- train
     ---- image
          ------1.jpg
          ------2.jpg

     ---- mask
          ------1.png
          ------2.png

You are only getting one class because the generator only sees the single class subdirectory, img. So just move your images into the directory structure shown above.
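For illustration only (this sketch is not part of the original answer), both generators could then be pointed at the parent folder, with the subfolder selected via the classes argument. The classes argument, the shared SEED, shuffle=False, and the IMAGE_SIZE value are assumptions added here to keep images and masks aligned:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMAGE_SIZE = 224   # assumed value; use whatever the question uses
SEED = 42          # assumed; use the same seed in both generators if shuffling is enabled

image_datagen = ImageDataGenerator()
mask_datagen = ImageDataGenerator()

# Point both generators at the parent folder and pick the subfolder via `classes`.
image_generator = image_datagen.flow_from_directory("/mydata/train",
                                                    classes=["image"],
                                                    class_mode=None,
                                                    target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                    seed=SEED,
                                                    shuffle=False)

mask_generator = mask_datagen.flow_from_directory("/mydata/train",
                                                  classes=["mask"],
                                                  class_mode=None,
                                                  target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                                  seed=SEED,
                                                  shuffle=False)

# Pair each image batch with its mask batch for segmentation training.
train_generator = zip(image_generator, mask_generator)

With this layout each generator still prints "Found N images belonging to 1 classes", which is fine for segmentation: with class_mode=None no labels are returned, and the masks come from the zipped generator instead.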

Solution 2:[2]

You can also do it this way: use a specific subset for training or validation, or specify the folder where your folder structures (directories) are, as shown below.

F:\datasets\downloads\example\image
F:\datasets\downloads\example\image\Bee
F:\datasets\downloads\example\image\Shiny Jumbo
F:\datasets\downloads\example\image\Sleepy cat
...

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Root folder containing one subdirectory per class (see the listing above).
directory = r"F:\datasets\downloads\example\image"

def gen():
    train_datagen = ImageDataGenerator(
            rescale=1./255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
    train_generator = train_datagen.flow_from_directory(
            directory,
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary',    # None  # categorical   # binary
            subset='training')      # subset only takes effect if validation_split is set on the ImageDataGenerator
    target = np.array([[i] for i in range(10)])   # unused in this example

    return train_generator

train_generator = gen()
val_generator = train_generator

inputs = tf.keras.layers.Input(shape=(150, 150, 3), name="input_image")
model  = tf.keras.applications.ResNet50(input_tensor=inputs, weights=None, include_top=True)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.Nadam(
    learning_rate=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
    name='Nadam'
) # 0.00001

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
# 1
# lossfn = tf.keras.losses.MeanSquaredLogarithmicError(reduction=tf.keras.losses.Reduction.AUTO, name='mean_squared_logarithmic_error')
# 2
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
# Assumed values so the snippet runs; adjust to your dataset and batch size.
train_steps = 1
val_steps = 1
epochs = 5

history = model.fit_generator(train_generator, validation_data=val_generator, steps_per_epoch=train_steps, validation_steps=val_steps, epochs=epochs, verbose=1)

input('...')

Output example:

Found 10 images belonging to 10 classes.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Gerry P
[2] Solution 2: Martijn Pieters