Data augmentation in test/validation set?

Is it common practice to augment data (programmatically adding samples, e.g. random crops in the case of an image dataset) on both the training and test sets, or only on the training set?



Solution 1:[1]

Only on training. Data augmentation is used to increase the size of the training set and to obtain more varied images. Technically, you could use data augmentation on the test set to see how the model behaves on such images, but usually people don't do it.
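A minimal sketch of this convention, using NumPy arrays as stand-ins for images (the crop sizes and dataset are illustrative): the training set gets several random crops per image, while the test set only receives a deterministic center crop.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(image, crop_size):
    """Randomly crop a (H, W) image to (crop_size, crop_size)."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size]

def center_crop(image, crop_size):
    """Deterministic center crop, used for the untouched test set."""
    h, w = image.shape[:2]
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return image[top:top + crop_size, left:left + crop_size]

train_images = [rng.random((32, 32)) for _ in range(4)]
test_images = [rng.random((32, 32)) for _ in range(2)]

# Augment only the training set: three random crops per image.
augmented_train = [random_crop(img, 28) for img in train_images for _ in range(3)]
# The test set gets a single deterministic crop, no augmentation.
prepared_test = [center_crop(img, 28) for img in test_images]

print(len(augmented_train))  # 12 crops from 4 training images
print(len(prepared_test))    # 2
```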

Solution 2:[2]

This answer on stats.SE makes the case for applying crops on the validation / test sets so as to make that input similar to the input in the training set that the network was trained on.

Solution 3:[3]

Data augmentation is done only on the training set, as it helps the model generalize better and become more robust. So there is no point in augmenting the test set.

Solution 4:[4]

In computer vision, you can use data augmentation at test time to obtain different views of the test image. You then aggregate the results obtained from each view, for example by averaging them.
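A hedged sketch of this test-time augmentation idea: `toy_model` below is a hypothetical stand-in classifier (not a real network), and the set of views is just a few flips and a rotation. The predictions for all views are averaged into one final distribution.

```python
import numpy as np

def toy_model(image):
    """Hypothetical stand-in classifier: returns pseudo-probabilities
    for 3 classes derived from simple region statistics of the image."""
    h, w = image.shape
    logits = np.array([
        image[: h // 2].mean(),     # top half
        image[:, : w // 2].mean(),  # left half
        image.mean(),               # whole image
    ])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def tta_predict(image, model):
    """Predict on several views of the test image and average the results."""
    views = [
        image,
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree rotation
    ]
    preds = np.stack([model(v) for v in views])
    return preds.mean(axis=0)  # aggregate by averaging

rng = np.random.default_rng(1)
img = rng.random((8, 8))
probs = tta_predict(img, toy_model)
print(probs.sum())  # the averaged probabilities still sum to 1
```

Averaging several softmax outputs keeps the result a valid probability distribution, which is why plain averaging is the most common aggregation choice.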

For example, given the symbol below, changing the point of view can lead to different interpretations:

[image: the same symbol seen from different viewpoints]

Solution 5:[5]

Do it only on the training set. And, of course, make sure that the augmentation does not make the label wrong (e.g. when rotating a 6 or a 9 by about 180°).

The reason why we use a training and a test set in the first place is that we want to estimate the error our system will have in reality. So the data for the test set should be as close to real data as possible.

If you do it on the test set, you might introduce errors. For example, say you want to recognize digits and you augment by rotating. Then a 6 might look like a 9. But not all examples are that easy. Better safe than sorry.
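The 6-versus-9 hazard can be guarded against at the pipeline level. A minimal sketch, where the set of unsafe labels and the augmentation names are illustrative assumptions: rotation is simply excluded for the labels it would corrupt.

```python
# Rotating by ~180 degrees turns a 6 into something that reads as a 9,
# so we skip that augmentation for the affected labels (illustrative rule).
ROTATION_UNSAFE = {6, 9}

def safe_augmentations(label):
    """Return the augmentation names that keep this label correct."""
    augmentations = ["shift", "zoom", "brightness"]
    if label not in ROTATION_UNSAFE:
        augmentations.append("rotate_180")
    return augmentations

print(safe_augmentations(3))  # ['shift', 'zoom', 'brightness', 'rotate_180']
print(safe_augmentations(6))  # ['shift', 'zoom', 'brightness']
```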

Solution 6:[6]

I would argue that, in some cases, using data augmentation for the validation set can be helpful.

For example, I train a lot of CNNs for medical image segmentation. Many of the augmentation transforms that I use are meant to reduce image quality, so that the network is trained to be robust against such data. If the training set looks bad and the validation set looks nice, it will be hard to compare the losses during training, and assessing overfitting becomes complicated.

I would never use augmentation for the test set unless I'm using test-time augmentation to improve results or estimate aleatoric uncertainty.

Solution 7:[7]

Some image preprocessing software tools like Roboflow (https://roboflow.com/) apply data augmentation to test data as well. I'd say that if one is dealing with small and rare objects, say, cerebral microbleeds (which are tiny and difficult to spot on magnetic resonance images), augmenting one's test set could be useful. You can then verify that your model has learned to detect these objects under different orientation and brightness conditions (given that your training data has been augmented in the same way).

Solution 8:[8]

The goal of data augmentation is to help the model generalize by learning from more orientations of the images, so that during testing the model can handle the test data well. It is therefore standard practice to use augmentation techniques only on the training set.

Solution 9:[9]

The point of using validation data is to build a generalized model, which is ultimately meant to predict real-world data. In order to predict real-world data well, the validation set should contain real data. There is no problem with augmenting validation data, but it won't increase the accuracy of the model.

Solution 10:[10]

Here are my two cents:

You train your model on the training data and the validation data: the former to optimize your parameters, and the latter to give you an appropriate stopping condition. The test data is to give you a real-world estimate of how well you can expect your model to perform.

For training, you can augment your training data to increase robustness to various factors including, but not limited to, sampling error, bias between data sources, shifts in global data distribution, positioning, and any other sort of variation you would like to account for.

The validation data should indicate to the training method when the model is most generalizable. By this logic, if you expect to see some variation in real-world data that can be simulated using data augmentation, then by all means, the validation dataset should be augmented.

The test data, on the other hand, should not be augmented, except potentially in special scenarios where data is very limited, and an estimate of real-world performance on test data has too much variance.

Solution 11:[11]

You can use augmentation data in training, validation and test sets.

The only thing to avoid is using the same data from the training set in validation or test sets.

For example, if you generate 3 augmented instances from a record in the training data, make sure that none of these 3 augmented instances accidentally ends up in the validation or test sets.

Using data from the training set, even augmented data, to validate or test a model is a methodological mistake.
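A minimal sketch of the safe ordering (sample ids and the `augment` helper are illustrative): split the original data first, then augment only inside the training split, so no augmented copy of a training sample can leak into the test set.

```python
import numpy as np

rng = np.random.default_rng(42)

# 10 original samples, identified by index.
samples = list(range(10))

# 1) Split FIRST, on the original data only.
perm = rng.permutation(len(samples))
train_ids = set(perm[:7].tolist())
test_ids = set(perm[7:].tolist())

# 2) Augment AFTER splitting, and only inside the training split.
def augment(sample_id, n_copies=3):
    """Hypothetical augmentation: tag each copy with its source id."""
    return [(sample_id, k) for k in range(n_copies)]

augmented_train = [aug for sid in train_ids for aug in augment(sid)]

# Every augmented instance traces back to a training sample, so none
# of them can end up in the test set.
source_ids = {sid for sid, _ in augmented_train}
assert source_ids.isdisjoint(test_ids)
print(len(augmented_train))  # 21 = 7 originals x 3 copies
```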

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2 Tom Hale
Solution 3 Abhishek Patel
Solution 4 Coding Cow
Solution 5
Solution 6 fepegar
Solution 7 AK_KA
Solution 8 Lakpa Tamang
Solution 9 rah
Solution 10 Aku
Solution 11