LSTM is showing very low accuracy and large loss

I am applying an LSTM to a dataset with 53,699 entries in the training set and 23,014 entries in the test set. The shape of the training input is (53699, 4). I've tried different activations (tanh, relu) and different unit sizes with four LSTM layers and a dense layer, but the accuracy is very low. How can I improve the accuracy on this problem?

```python
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, BatchNormalization

regressor = Sequential()

# Four stacked LSTM layers, each followed by dropout and batch normalization.
regressor.add(LSTM(units=256, activation='tanh', return_sequences=True, input_shape=(4, 1)))
regressor.add(Dropout(0.2))
regressor.add(BatchNormalization())
regressor.add(LSTM(units=256, activation='tanh', return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(BatchNormalization())
regressor.add(LSTM(units=256, activation='tanh', return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(BatchNormalization())
regressor.add(LSTM(units=32, activation='tanh'))
regressor.add(Dropout(0.2))
regressor.add(BatchNormalization())
regressor.add(Dense(units=16, activation='tanh'))

regressor.compile(optimizer="adam", loss="mean_squared_error", metrics=['accuracy'])
history = regressor.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_test, y_test))
```

```
Epoch 1/20
1679/1679 [==============================] - 151s 85ms/step - loss: 189050.0781 - accuracy: 0.0016 - val_loss: 195193.2188 - val_accuracy: 0.0000e+00
```


Solution 1:[1]

For a better answer, you'll need to provide more information about what exactly the task is.

That said, I would suggest starting with a simpler model: a single LSTM layer with fewer units. Your input sequence is only 4 elements long, so there shouldn't be a need for so many LSTM layers. If the loss is still this high, there may be another issue; if it drops, you can start adding complexity until you reach convergence.
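
As a rough illustration of that suggestion, a minimal baseline could look like the sketch below. This assumes the same (4, 1) input shape and a single continuous target; the 32 units and the linear one-unit output layer are assumptions for illustration, not part of the original answer.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Minimal baseline: one small LSTM layer followed by a single linear output.
baseline = Sequential()
baseline.add(LSTM(units=32, activation='tanh', input_shape=(4, 1)))
baseline.add(Dense(units=1))  # assumed single linear output for a regression target

baseline.compile(optimizer='adam', loss='mean_squared_error')
history = baseline.fit(X_train, y_train, epochs=20, batch_size=32,
                       validation_data=(X_test, y_test))
```

If this small model already drives the training loss down, complexity can be added layer by layer; if it doesn't, the issue is more likely in the data or target scaling than in the architecture.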

Solution 2:[2]

The "problems" your facing might be caused by different things.

  1. Train your model for some more epochs! Your only showing the training for one epoch, how is your loss and accuracy evolving over more epochs?

  2. Your validation metric "accuracy" might not be appropriate for your task. I'm assuming you're doing a regression task as you call your model "regressor" and use MSE as a loss function. Use MSE also for validation.

  3. No one knows what your data looks like. Maybe a model isn't even capable of fitting your data, your data might be screwed or whatever. Tell us and show us more of your data!

  4. If you solve the above points, try training a model that can at least heavily overfit your training data. From this starting point you can find a model to work well on your validation data.
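
To make point 2 concrete, here is a minimal sketch of compiling and fitting with regression metrics instead of accuracy; the specific metric names and the epoch count are assumptions chosen for illustration.

```python
# Track regression metrics (MAE, MSE) instead of classification accuracy,
# and train for more than a single epoch so the learning curve is visible.
regressor.compile(optimizer='adam',
                  loss='mean_squared_error',
                  metrics=['mae', 'mse'])
history = regressor.fit(X_train, y_train,
                        epochs=100,
                        batch_size=32,
                        validation_data=(X_test, y_test))
```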

Solution 3:[3]

The performance improved. I used MinMaxScaler for data preprocessing, five LSTM layers and a dense layer, with units=256, activation='relu', and kernel_regularizer='l2' for the LSTM layers, and loss='mae'.
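
A sketch of that final setup might look like the following, assuming the two-dimensional (samples, 4) input is reshaped to (samples, 4, 1) and the target is a single continuous value. The single-unit output layer and the Adam optimizer are assumptions; the answer only names the scaler, the layer counts, and the listed hyperparameters.

```python
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Scale the (samples, 4) features to [0, 1], then reshape for the LSTM.
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train).reshape(-1, 4, 1)
X_test_scaled = scaler.transform(X_test).reshape(-1, 4, 1)

model = Sequential()
# Five LSTM layers with 256 units, relu activation, and L2 regularization.
model.add(LSTM(units=256, activation='relu', kernel_regularizer='l2',
               return_sequences=True, input_shape=(4, 1)))
for _ in range(3):
    model.add(LSTM(units=256, activation='relu', kernel_regularizer='l2',
                   return_sequences=True))
model.add(LSTM(units=256, activation='relu', kernel_regularizer='l2'))
model.add(Dense(units=1))  # assumed single-output dense layer for regression

model.compile(optimizer='adam', loss='mae', metrics=['mae'])
history = model.fit(X_train_scaled, y_train, epochs=20, batch_size=32,
                    validation_data=(X_test_scaled, y_test))
```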

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

| Solution | Source |
| --- | --- |
| Solution 1 | Sean |
| Solution 2 | Alexander Riedel |
| Solution 3 | |