CatBoost precision on imbalanced classes

I use a CatBoostClassifier and my classes are highly imbalanced. I applied the scale_pos_weight parameter to account for that. While training with an evaluation dataset (test), CatBoost shows high precision on the test set. However, when I make predictions on the test set using the predict method, I get a much lower precision score (calculated using sklearn.metrics).

I think this might be related to the class weights that I applied, but I don't quite understand how they affect the precision score.

import numpy as np
from catboost import CatBoostClassifier

params = {
    'task_type': 'CPU',
    'loss_function': 'Logloss',
    'eval_metric': 'F1',
    'custom_metric': ['F1', 'Precision', 'Recall'],
    'iterations': 100,
    'random_seed': 20190128,
    'scale_pos_weight': 56.88657244809081,
    'learning_rate': 0.5412829495147387,
    'depth': 7,
    'l2_leaf_reg': 9.526905230698302
}

model = CatBoostClassifier(**params)
model.fit(
    X_train, y_train,
    # indices of the categorical columns (np.object is deprecated; use object)
    cat_features=np.where(X_train.dtypes == object)[0],
    eval_set=(X_test, y_test),
    verbose=False,
    plot=True
)

model.get_best_score()
{'learn': {'Recall': 0.9243007537531925,
  'Logloss': 0.15892360013680026,
  'F1': 0.9416723809244181,
  'Precision': 0.9640191600545249},
 'validation_0': {'Recall': 0.914252301192093,
  'Logloss': 0.1714387314107052,
  'F1': 0.9357892623978286,
  'Precision': 0.9642642597943112}}

y_test_pred = model.predict(data=X_test)

from sklearn.metrics import balanced_accuracy_score, recall_score, precision_score, f1_score
print('Balanced accuracy: {:.2f}'.format(balanced_accuracy_score(y_test, y_test_pred)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_test_pred)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_test_pred)))
print('F1: {:.2f}'.format(f1_score(y_test, y_test_pred)))

Balanced accuracy: 0.94
Precision: 0.29
Recall: 0.91
F1: 0.44

I expected to get the same precision as CatBoost showed during training, but that is not the case. What am I doing wrong?



Solution 1:[1]

By default, use_weights is set to true for the evaluation metrics, meaning the class weights (here, from scale_pos_weight) are applied when CatBoost computes them, e.g. Precision:use_weights=true. To make CatBoost's reported precision match your own unweighted sklearn calculation, switch it to Precision:use_weights=false, as in the sketch below.
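A minimal sketch of passing that parameter via CatBoost's metric-string syntax (everything else stays as in the question):

params['eval_metric'] = 'F1:use_weights=false'
params['custom_metric'] = [
    'F1:use_weights=false',
    'Precision:use_weights=false',
    'Recall:use_weights=false',
]
model = CatBoostClassifier(**params)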

Also, get_best_score reports the best score reached across all iterations, while predict uses the model from the last iteration by default. You can set use_best_model=True in model.fit so the final model keeps the iteration that scored best on the eval set.
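For instance, a sketch of the same fit call with that flag:

model.fit(
    X_train, y_train,
    eval_set=(X_test, y_test),
    use_best_model=True,  # shrink the model to the best-scoring iteration
    verbose=False
)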

Solution 2:[2]

The predict method uses a default threshold of 0.5 to convert predicted probabilities into binary labels. When you are dealing with an imbalanced problem, 0.5 is not always the best threshold, which is why you get poor precision on the test set.
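For binary classification, predict is roughly equivalent to this sketch:

proba = model.predict_proba(X_test)[:, 1]   # positive-class probabilities
y_pred_default = (proba > 0.5).astype(int)  # the fixed 0.5 cut-off predict applies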

To find a better threshold, CatBoost provides helpers such as get_roc_curve, get_fpr_curve, and get_fnr_curve. These three methods let you visualize the true positive, false positive, and false negative rates as the prediction threshold changes.

Besides these visualization helpers, CatBoost has a select_threshold method that returns the threshold optimizing one of these curves.

You can check this in their documentation.
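A minimal sketch of that workflow (the 5% false-positive-rate target is an arbitrary assumption for illustration):

from catboost import Pool
from catboost.utils import get_roc_curve, select_threshold

# the utils functions expect the evaluation data wrapped in a Pool
eval_pool = Pool(X_test, y_test,
                 cat_features=np.where(X_train.dtypes == object)[0])

fpr, tpr, thresholds = get_roc_curve(model, eval_pool)

# choose the threshold that keeps the false-positive rate below 5%
threshold = select_threshold(model, data=eval_pool, FPR=0.05)

# apply the custom threshold instead of the default 0.5
y_test_pred = (model.predict_proba(X_test)[:, 1] > threshold).astype(int)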

Solution 3:[3]

In addition to setting use_best_model=True, ensure that the class balance in both datasets is the same, or use balanced accuracy metrics to account for different class balances.

If you've done both of these and you still see much worse accuracy metrics on the test set than on the training set, it is a sign of overfitting. I'd recommend taking advantage of CatBoost's overfitting detector. The most common first step is to set early_stopping_rounds to an integer like 10, which stops training once the selected loss function has not improved for that number of rounds (see the early_stopping_rounds documentation); a sketch follows below.
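A minimal sketch combining both suggestions (same data and parameters as in the question):

model.fit(
    X_train, y_train,
    eval_set=(X_test, y_test),
    use_best_model=True,
    early_stopping_rounds=10,  # overfitting detector: stop after 10 rounds without improvement
    verbose=False
)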

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Mushfirat Mohaimin
[2] Solution 2: Filipe Lauar
[3] Solution 3: K. Thorspear