Why doesn't my model converge after JPEG compression of the dataset?

I'm experimenting on CIFAR-10 to see the influence of the JPEG compression algorithm. I vary the compression quality from 10% to 95% (95% supposedly being the best quality) and track the model's performance.
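
To be concrete, by compressing at a given quality I mean an in-memory JPEG round-trip per image, roughly like this (a minimal sketch; the first training image and quality 95 are just placeholders):

import io
import numpy as np
from PIL import Image
from keras.datasets import cifar10

(x_sample, _), (_, _) = cifar10.load_data()

def jpeg_roundtrip(numpy_img, quality):
    # Encode one image to an in-memory JPEG at the given quality, then decode it back
    buf = io.BytesIO()
    Image.fromarray(numpy_img).save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

sample = jpeg_roundtrip(x_sample[0].astype('uint8'), quality=95)
print(sample.shape, sample.dtype)  # expected: (32, 32, 3) uint8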

I'm using a CNN that already reaches 80% validation accuracy after only 120 epochs on the uncompressed data. However, after compressing the training dataset at any quality whatsoever, both the training and validation accuracy are stuck at around 10%.

How is it possible that even a high-quality JPEG compression at 95% causes such a difference in performance? Is this to be expected?

Here is my code for JPEG compression at quality 95%:

import tensorflow as tf
import tensorflow.keras as keras
from keras.models import Sequential
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D
from keras.datasets import cifar10
from keras import regularizers
from keras.callbacks import LearningRateScheduler
import numpy as np

from PIL import Image
import io


#### Load data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

### Compress data
def compress_JPEG(x_train, quality):
    N_DATA = len(x_train)
    print('Compressing ...')
    output = io.BytesIO() # Create BytesIO object
    # Load all training images and write into BytesIO object
    for numpy_img in x_train:
        im = Image.fromarray(numpy_img)
        im.save(output, format='JPEG', quality= quality)
    print('Done compressing')

    
    # Read back images from BytesIO into a list
    print('Reading image from buffer...')
    compressed_dataset = [np.array(Image.open(output)) for _ in range(N_DATA)]
    print('Done reading')
    
    return np.array(compressed_dataset)

x_train = x_train.astype('uint8') # for compression
x_test = x_test.astype('uint8') # for compression
quality = 95
x_train = compress_JPEG(x_train, quality)


### Normalize dataset
#z-score
x_train = x_train.astype('float32') # for training
x_test = x_test.astype('float32') # for training
mean = np.mean(x_train,axis=(0,1,2,3))
std = np.std(x_train,axis=(0,1,2,3))
x_train = (x_train-mean)/(std+1e-7)
x_test = (x_test-mean)/(std+1e-7)

num_classes = 10
y_train = np_utils.to_categorical(y_train,num_classes)
y_test = np_utils.to_categorical(y_test,num_classes)


### Callbacks (scheduler, metrics)
def lr_schedule(epoch):
    lrate = 0.001
    if epoch > 75:
        lrate = 0.0005
    if epoch > 100:
        lrate = 0.0003        
    return lrate



### Parameters
weight_decay = 1e-4
batch_size = 64
EPOCHS = 150


### Model
model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=x_train.shape[1:]))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))

model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))

model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

model.summary()


### Data augmentation
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    )
datagen.fit(x_train)



### Training
opt_rms = keras.optimizers.RMSprop(learning_rate=0.001,decay=1e-6)
model.compile(loss='categorical_crossentropy', optimizer=opt_rms, metrics=['accuracy'])
history = model.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size, epochs=EPOCHS,
                    verbose=1, validation_data=(x_test, y_test),
                    callbacks=[LearningRateScheduler(lr_schedule)])



### Testing
scores = model.evaluate(x_test, y_test, batch_size=128, verbose=1)
print('\nTest accuracy: %.3f%%, loss: %.3f' % (scores[1]*100, scores[0]))

Do you see an error that could explain the poor performance? When I plot images before and after the compression, it looks like the compression worked. I don't compress the test data because I want it to stay the same across experiments so that the results are comparable.
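
For reference, I compare images right after calling compress_JPEG (i.e. before the z-score normalization) with something like this quick sketch; x_train_raw is assumed to be a uint8 copy of the images kept aside before compression:

import matplotlib.pyplot as plt

# Top row: x_train_raw, an uncompressed uint8 copy kept aside (assumed here)
# Bottom row: the same images after the JPEG round-trip
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i in range(5):
    axes[0, i].imshow(x_train_raw[i])
    axes[0, i].set_title('original')
    axes[0, i].axis('off')
    axes[1, i].imshow(x_train[i].astype('uint8'))
    axes[1, i].set_title('after JPEG')
    axes[1, i].axis('off')
plt.tight_layout()
plt.show()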

Actually, the model just outputs the same label over and over. For example, with JPEG at quality 95% it always predicts the 7th label, while with JPEG at quality 20% it always predicts the 6th. Since the data is balanced, the evaluation gives 10% accuracy. The data augmentation is present in the code: rotations and shifts.
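
This is how I check the predicted labels on the test set (a small sketch, using the model and x_test from the code above):

# Count how often each class is predicted on the test set
pred_classes = np.argmax(model.predict(x_test), axis=1)
classes, counts = np.unique(pred_classes, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))
# a single entry such as {7: 10000} would mean every test image gets class 7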

The weights seem to range between 10^-6 and 10^-3 in the convolutional layers, and between 10^-3 and 10^-1 in the output layer.
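
I looked at the weight ranges with something like this quick sketch (reusing the Conv2D and Dense classes imported above):

# Range of absolute kernel values for each Conv2D / Dense layer
for layer in model.layers:
    if isinstance(layer, (Conv2D, Dense)):
        kernel = layer.get_weights()[0]  # get_weights() returns [kernel, bias]
        print(layer.name, np.abs(kernel).min(), np.abs(kernel).max())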

I have the same issue with the WebM compression algorithm.


