Plotting learning curve in Keras gives KeyError: 'val_acc'

I was trying to plot the train and test learning curves in Keras; however, the following code produces a KeyError: 'val_acc' error.

The official documentation <https://keras.io/callbacks/> states that in order to use 'val_acc' I need to enable validation and accuracy monitoring, which I don't understand and don't know how to set up in my code.

Any help would be much appreciated. Thanks.

# imports matching the (older) Keras 1.x / scikit-learn APIs used below
import numpy as np
import pandas
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.cross_validation import StratifiedKFold
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

seed = 7
np.random.seed(seed)

dataframe = pandas.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]

encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
dummy_y = np_utils.to_categorical(encoded_Y)

kfold = StratifiedKFold(y=Y, n_folds=10, shuffle=True, random_state=seed)
cvscores = []

for i, (train, test) in enumerate(kfold):

    model = Sequential()
    model.add(Dense(12, input_dim=4, init='uniform', activation='relu'))
    model.add(Dense(3, init='uniform', activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    history=model.fit(X[train], dummy_y[train], nb_epoch=200, batch_size=5, verbose=0)
    scores = model.evaluate(X[test], dummy_y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)

print( "%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores))) 


print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()


Solution 1:[1]

It looks like in Keras with TensorFlow 2.0, val_acc was renamed to val_accuracy.
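
With that rename, the plotting lines from the question would read as follows (assuming the model was fitted with validation enabled, e.g. via validation_split or validation_data):

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])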

Solution 2:[2]

history_dict = history.history
print(history_dict.keys())

If you print the keys of history_dict, you will get something like dict_keys(['loss', 'acc', 'val_loss', 'val_acc']).

Then edit the code like this:

acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']


Solution 3:[3]

You may need to enable a validation split on your training set; a common choice is to hold out about a third of the training data for validation. In your code, make the change given below:

history = model.fit(X[train], dummy_y[train], validation_split=0.33, nb_epoch=200, batch_size=5, verbose=0)

It works!

Solution 4:[4]

The main point everyone misses is that this KeyError is related to the naming of metrics during model.compile(...). You need to be consistent with the way you name your accuracy metric inside model.compile(..., metrics=['<metric name>']). Your history callback object will receive a dictionary containing key-value pairs as defined in metrics.

So, if your metric is metrics=['acc'], you access it in the history object with history.history['acc'], but if you define the metric as metrics=['accuracy'], you need to use history.history['accuracy'] to access the value in order to avoid the KeyError. I hope it helps.
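
A minimal sketch of that consistency (X_train and y_train here are placeholder names, not from the question): whatever string you pass to metrics=[...] becomes the key in history.history, with a val_ prefix for the validation data.

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])  # key will be 'accuracy'
history = model.fit(X_train, y_train, validation_split=0.2, epochs=10, verbose=0)
plt.plot(history.history['accuracy'])      # not 'acc'
plt.plot(history.history['val_accuracy'])  # not 'val_acc'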

N.B. Here's a link to the metrics you can use in Keras.

Solution 5:[5]

If you upgrade from an older Keras version (e.g. 2.2.5) to 2.3.0 (or newer), which is compatible with TensorFlow 2.0, you might get such an error (e.g. KeyError: 'acc'). Both acc and val_acc have been renamed to accuracy and val_accuracy respectively. Renaming them in your script will solve the issue.
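
If the same script has to run on both older and newer Keras versions, a small sketch like this (reusing the history object from the question) picks whichever key is present instead of hard-coding the name:

acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
plt.plot(history.history[acc_key])
if 'val_' + acc_key in history.history:
    plt.plot(history.history['val_' + acc_key])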

Solution 6:[6]

To get any val_* data (val_acc, val_loss, ...), you first need to set up validation.

First method (validates on the data you give it):

model.fit(validation_data=(X_test, Y_test))

Second method (validates on a fraction of the training data):

model.fit(validation_split=0.5) 

Solution 7:[7]

I changed acc to accuracy and my problem was solved (TensorFlow 2+).

e.g.

accuracy = history_dict['accuracy']
val_accuracy = history_dict['val_accuracy']

Solution 8:[8]

This error also happens when you specify validation_data=(X_test, Y_test) and your X_test and/or Y_test are empty. To check this, print the shapes of X_test and Y_test. In that case, model.fit(validation_data=(X_test, Y_test), ...) runs, but because the validation set was empty it did not create a val_loss key in the history.history dictionary.
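
A quick sanity check along those lines, using the X_test / Y_test names from this answer:

print(X_test.shape, Y_test.shape)  # both first dimensions should be non-zero
assert len(X_test) > 0 and len(Y_test) > 0, "empty validation set: no val_* keys will be recorded"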

Solution 9:[9]

What worked for me was changing objective='val_accuracy' to objective=["val_accuracy"] in

tuner = kt.BayesianOptimization(model_builder,
                                objective=["val_accuracy"],
                                max_trials=80,
                                seed=123)
tuner.search(X_train, y_train, epochs=50, validation_split=0.2)

I have TensorFlow 2+.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 driedler
Solution 2
Solution 3 Jose Kj
Solution 4 Sajid
Solution 5 Mohammad Nur Nobi
Solution 6 Elior B.Y.
Solution 7 firozSujan
Solution 8 Tshilidzi Mudau
Solution 9 Apollonia Vitelli