tensorflow model.evaluate and model.predict very different results
I am building a simple CNN for binary image classification, and the AUC obtained from model.evaluate() is much higher than the AUC obtained from model.predict() + roc_auc_score().
The whole notebook is here.
Compiling the model, and the output of model.fit():
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['AUC'])

history = model.fit(
    train_generator,
    steps_per_epoch=8,
    epochs=5,
    verbose=1)
Epoch 1/5
8/8 [==============================] - 21s 3s/step - loss: 6.7315 - auc: 0.5143
Epoch 2/5
8/8 [==============================] - 15s 2s/step - loss: 0.6626 - auc: 0.6983
Epoch 3/5
8/8 [==============================] - 18s 2s/step - loss: 0.4296 - auc: 0.8777
Epoch 4/5
8/8 [==============================] - 14s 2s/step - loss: 0.2330 - auc: 0.9606
Epoch 5/5
8/8 [==============================] - 18s 2s/step - loss: 0.1985 - auc: 0.9767
Then model.evaluate() gives something similar:
model.evaluate(train_generator)
9/9 [==============================] - 10s 1s/step - loss: 0.3056 - auc: 0.9956
But the AUC calculated directly from the output of model.predict() is roughly half of that:
from sklearn import metrics
x = model.predict(train_generator)
metrics.roc_auc_score(train_generator.labels, x)
0.5006148007590132
I have read several posts on similar issues (like this, this, this, and also an extensive discussion on GitHub), but they describe reasons that do not apply to my case:
- using binary_crossentropy for a multiclass task (not my case)
- differences between evaluate and predict caused by computing the metric batch-wise vs. on the whole dataset (should not cause as drastic a drop as in my case)
- using batch normalization and regularization (not my case, and it also should not cause such a large drop)
Any suggestions are much appreciated. Thanks!
EDIT: Solution. I found the solution here; I just needed to call
train_generator.reset()
before model.predict() and also set shuffle = False in the flow_from_directory() call. The reason for the difference is that the generator starts outputting batches from a different position, so the labels and the predictions do not match, because they refer to different objects. So the problem is not with the evaluate or predict methods, but with the generator.
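A minimal sketch of that fix, assuming the generator comes from flow_from_directory() (the directory path, target size, and batch size below are placeholders, not the actual values from the notebook):

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn import metrics

# shuffle=False keeps the batches in a fixed (alphabetical file) order,
# so the predictions line up with train_generator.labels
train_generator = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    'data/train',             # placeholder path
    target_size=(150, 150),   # placeholder image size
    batch_size=20,            # placeholder batch size
    class_mode='binary',
    shuffle=False)

train_generator.reset()       # restart iteration from the first batch
x = model.predict(train_generator)
print(metrics.roc_auc_score(train_generator.labels, x))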
EDIT 2: Using train_generator.reset() is not convenient if the generator is created with flow_from_directory(), because it requires setting shuffle = False there, and that produces batches containing a single class during training, which hurts learning. So I ended up redefining train_generator before running predict, as sketched below.
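A sketch of that workaround: keep a shuffled generator for training and build a second, unshuffled generator over the same directory just for prediction (the name predict_generator and the directory/image parameters are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn import metrics

datagen = ImageDataGenerator(rescale=1.0 / 255)

# shuffled generator for training, so each batch mixes both classes
train_generator = datagen.flow_from_directory(
    'data/train', target_size=(150, 150), batch_size=20,
    class_mode='binary', shuffle=True)

history = model.fit(train_generator, steps_per_epoch=8, epochs=5, verbose=1)

# unshuffled generator over the same data, used only for prediction,
# so predictions and labels refer to the same files in the same order
predict_generator = datagen.flow_from_directory(
    'data/train', target_size=(150, 150), batch_size=20,
    class_mode='binary', shuffle=False)

x = model.predict(predict_generator)
print(metrics.roc_auc_score(predict_generator.labels, x))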
Solution 1:[1]
The AUC metric in tensorflow.keras computes an approximate AUC (area under the curve) via a Riemann sum, which is not the same implementation as scikit-learn's. If you want to find the AUC with tensorflow.keras, try:
import tensorflow as tf
m = tf.keras.metrics.AUC()
m.update_state(train_generator.labels, x) # assuming both have shape (N,)
r = m.result().numpy()
print(r)
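Since this Riemann-sum approximation depends on the number of thresholds (200 by default), raising num_thresholds should bring the Keras value closer to scikit-learn's exact computation; a quick check, assuming x and train_generator.labels are aligned as above:

import tensorflow as tf
from sklearn import metrics

# more thresholds -> finer Riemann sum -> closer to the exact ranking-based AUC
m = tf.keras.metrics.AUC(num_thresholds=1000)
m.update_state(train_generator.labels, x)
print('keras AUC  :', m.result().numpy())
print('sklearn AUC:', metrics.roc_auc_score(train_generator.labels, x))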
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Zabir Al Nazi |