PCA for Recurrent Neural Networks (LSTM) - Should I apply PCA to the target variables too?
I have a seasonal time-series dataset containing 3 target variables and n feature variables. I am trying to apply PCA before feeding the data to a simple LSTM. The operations I perform are the following:
- Split train - validation - test
- Apply a standard scaler (force mean=0 & std=1) to the train dataset (including targets and features)
- Fit PCA on the features of the train dataset only
- Transform the feature variables of the validation and test sets through the PCA fitted in step 3
- Where I get lost: what should I do with the validation and test target variables?
- ... further neural-network pre-processing and building of the LSTM architecture (a sketch of the steps above follows this list)
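For concreteness, here is a minimal sketch of the pipeline above, assuming scikit-learn's StandardScaler and PCA (the data, shapes, and split sizes are illustrative, not the actual dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative data: n=20 feature variables and 3 target variables
X = rng.normal(size=(1000, 20))
y = rng.normal(size=(1000, 3))

# Step 1: chronological split (no shuffling for a seasonal time series)
n_train, n_val = 700, 150
X_train, X_val, X_test = X[:n_train], X[n_train:n_train + n_val], X[n_train + n_val:]
y_train, y_val, y_test = y[:n_train], y[n_train:n_train + n_val], y[n_train + n_val:]

# Step 2: fit the feature scaler on the train split only, then reuse it
x_scaler = StandardScaler().fit(X_train)
X_train_s = x_scaler.transform(X_train)
X_val_s = x_scaler.transform(X_val)
X_test_s = x_scaler.transform(X_test)

# Step 3: fit PCA on the (scaled) train features only
pca = PCA(n_components=0.95)  # e.g. keep 95% of the variance
X_train_p = pca.fit_transform(X_train_s)

# Step 4: project validation and test features through the same fitted PCA
X_val_p = pca.transform(X_val_s)
X_test_p = pca.transform(X_test_s)
```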
My question is: how do I scale / normalize the target variables? Through a PCA as well? Through an independent scaler (standard, mapminmax, etc.)? If I leave the target values untouched, I get overfitting in my LSTM.
The most disappointing part is that without the PCA, the LSTM I've built shows no overfitting.
Thanks a lot for your help!
Solution 1 [1]
I know this comes late... As far as I know, you should not apply PCA to the target variables; PCA is meant to reduce the dimensionality of the feature variables. Just as you fitted the PCA transformation on the train dataset and reused it for the other splits, you can do the same with the scaler: fit it on the train targets only and apply that same fitted scaler to the validation and test targets.
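A minimal sketch of that suggestion, again assuming scikit-learn (the array shapes are illustrative, and `model` stands for the fitted LSTM, which is not shown):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative chronological splits of the 3 target variables
y_train = rng.normal(size=(700, 3))
y_val = rng.normal(size=(150, 3))
y_test = rng.normal(size=(150, 3))

# Fit the target scaler on the train split only, then reuse it unchanged
y_scaler = StandardScaler().fit(y_train)
y_train_s = y_scaler.transform(y_train)
y_val_s = y_scaler.transform(y_val)
y_test_s = y_scaler.transform(y_test)

# After training the LSTM on the scaled targets, map predictions back to
# the original units before computing metrics:
#   y_pred = y_scaler.inverse_transform(model.predict(X_test_p))
```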
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Dripy |