Implementing Multiclass Dice Loss Function

I am doing multi-class segmentation using UNet. The input to my model is HxWxC, and the output layer is:

outputs = layers.Conv2D(n_classes, (1, 1), activation='sigmoid')(decoder0)

Using SparseCategoricalCrossentropy I can train the network fine. Now I would also like to try the Dice coefficient as the loss function, which I implemented as follows:

import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.math.sigmoid(y_pred)

    numerator = 2 * tf.reduce_sum(y_true * y_pred) + smooth
    denominator = tf.reduce_sum(y_true + y_pred) + smooth

    return 1 - numerator / denominator

However, I am actually getting an increasing loss instead of a decreasing loss. I have checked multiple sources, but all the material I find uses dice loss for binary classification, not multiclass. So my question is: is there a problem with the implementation?



Solution 1:[1]

The problem is that your dice loss doesn't account for the number of classes you have; it assumes the binary case, which would explain the increase in your loss.

You should implement a generalized dice loss that accounts for all the classes and returns a value over all of them.

Something like the following:

from tensorflow.keras import backend as K

def dice_coef_9cat(y_true, y_pred, smooth=1e-7):
    '''
    Dice coefficient for a 10-category problem (background label 0
    plus 9 foreground classes). Ignores the background label.
    Pass to model as a metric during the compile statement.
    '''
    # One-hot encode the integer labels, then drop the background channel
    y_true_f = K.flatten(K.one_hot(K.cast(y_true, 'int32'), num_classes=10)[..., 1:])
    y_pred_f = K.flatten(y_pred[..., 1:])
    intersect = K.sum(y_true_f * y_pred_f, axis=-1)
    denom = K.sum(y_true_f + y_pred_f, axis=-1)
    return K.mean((2. * intersect / (denom + smooth)))

def dice_coef_9cat_loss(y_true, y_pred):
    '''
    Dice loss to minimize. Pass to model as the loss during the compile statement.
    '''
    return 1 - dice_coef_9cat(y_true, y_pred)

This snippet is taken from https://github.com/keras-team/keras/issues/9395#issuecomment-370971561

This is for 9 foreground categories (10 including the background); adjust it to the number of categories you have.
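As a usage sketch, assuming model is the UNet built above (the optimizer choice is illustrative):

model.compile(optimizer='adam',
              loss=dice_coef_9cat_loss,
              metrics=[dice_coef_9cat])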

Solution 2:[2]

If you are doing multi-class segmentation, the 'softmax' activation function should be used in the final layer.

I would recommend using one-hot encoded ground-truth masks. This needs to be done outside of the loss calculation code.
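A minimal sketch of that preprocessing step, assuming integer-labeled masks of shape (H, W) and an n_classes value matching your problem:

import tensorflow as tf

n_classes = 10  # hypothetical; set to your number of classes

def encode_mask(image, mask):
    # (H, W) integer labels -> (H, W, n_classes) one-hot mask
    mask = tf.one_hot(tf.cast(mask, tf.int32), depth=n_classes)
    return image, mask

# e.g. dataset = dataset.map(encode_mask) in a tf.data input pipeline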

The generalized dice loss and other segmentation losses are implemented in the following repository:

https://github.com/NifTK/NiftyNet/blob/dev/niftynet/layer/loss_segmentation.py
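For orientation, here is a minimal TensorFlow sketch in the spirit of the generalized dice loss (Sudre et al., 2017), not the NiftyNet code itself; it assumes a one-hot y_true and a softmax y_pred, both of shape (batch, H, W, n_classes):

import tensorflow as tf

def generalized_dice_loss(y_true, y_pred, eps=1e-6):
    y_true = tf.cast(y_true, tf.float32)
    # weight each class by the inverse of its squared volume
    w = 1.0 / (tf.square(tf.reduce_sum(y_true, axis=[0, 1, 2])) + eps)
    numerator = tf.reduce_sum(w * tf.reduce_sum(y_true * y_pred, axis=[0, 1, 2]))
    denominator = tf.reduce_sum(w * tf.reduce_sum(y_true + y_pred, axis=[0, 1, 2]))
    return 1.0 - 2.0 * numerator / (denominator + eps)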

Solution 3:[3]

Not sure why, but the last layer has "sigmoid" as the activation function. For multiclass segmentation it has to be "softmax", not "sigmoid".

Also, consider the loss you are using: SparseCategoricalCrossentropy expects ground-truth masks given as integer class labels, while CategoricalCrossentropy expects masks that are one-hot encoded across multiple channels. Make sure the label format you feed the network matches the loss you pick.

Your loss is increasing because the activation doesn't match the multichannel output (as mentioned above).

change

outputs = layers.Conv2D(n_classes, (1, 1), activation='sigmoid')(decoder0)

to

outputs = layers.Conv2D(n_classes, (1, 1), activation='softmax')(decoder0)
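With the softmax output in place, a matched compile call might look like the following sketch, assuming integer-labeled masks (swap in CategoricalCrossentropy if your masks are one-hot encoded):

import tensorflow as tf

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),  # integer labels, softmax probabilities
    metrics=['accuracy'],
)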

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: David
Solution 2: (no author listed)
Solution 3: Vedant Joshi