Category "autoencoder"

Variational Autoencoder KL divergence loss explodes and the model returns nan

I'm training a Conv-VAE on MRI brain images (2D slices). The output of the model is a sigmoid, and the loss function is binary cross-entropy: x = input, x_hat = out
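A minimal sketch of a numerically safer BCE + KL objective, assuming the encoder returns `(mu, logvar)`; the clamp and the `beta` weight are common fixes when the KL term blows up and produces NaNs, not part of the asker's code:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar, beta=1.0):
    # Reconstruction term: sigmoid output + binary cross-entropy.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # Clamping logvar keeps exp(logvar) from overflowing, a frequent source of NaN.
    logvar = torch.clamp(logvar, min=-10.0, max=10.0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta < 1 (or a warm-up schedule) keeps the KL term from dominating early training.
    return recon + beta * kl
```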

GRU autoencoder: How to obtain a latent space composed of a single vector if there are multiple samples?

I am using a bidirectional GRU autoencoder to obtain a single vector that represents the input time series in compressed form, in order to cluster them. These
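A minimal sketch of one way to get a single vector per sequence, assuming inputs shaped `(batch, seq_len, n_features)`: concatenate the last hidden state of both GRU directions and project it to the latent size (the class and parameter names here are illustrative, not from the question):

```python
import torch
import torch.nn as nn

class GRUEncoder(nn.Module):
    def __init__(self, n_features, hidden_size, latent_size):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size,
                          batch_first=True, bidirectional=True)
        self.to_latent = nn.Linear(2 * hidden_size, latent_size)

    def forward(self, x):
        _, h_n = self.gru(x)                      # h_n: (2, batch, hidden_size)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)   # forward + backward final states
        return self.to_latent(h)                  # one latent vector per sequence
```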

Why does the VQ-VAE require two-stage training?

According to the paper, VQ-VAE goes through two-stage training: first training the encoder and the vector quantization, and then training an auto-regressive model
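A minimal sketch of what stage 1 actually trains: the vector-quantization bottleneck with a straight-through gradient plus codebook and commitment losses. Stage 2 then fits an autoregressive prior (e.g. a PixelCNN) over the frozen discrete indices, because stage 1 assumes a uniform prior and cannot generate samples on its own. The class and argument names below are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes, code_dim, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.beta = beta

    def forward(self, z_e):                               # z_e: (batch, code_dim)
        dists = torch.cdist(z_e, self.codebook.weight)    # distance to every code
        idx = dists.argmin(dim=-1)                        # discrete code indices
        z_q = self.codebook(idx)
        # Codebook loss pulls codes toward encoder outputs; commitment loss does the reverse.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()                  # straight-through estimator
        return z_q, idx, loss
```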

The LSTM autoencoder does not use the full dimensions of the latent space for dimension reduction

I am trying to train an LSTM autoencoder to map the input space to a latent space and then visualize it, hoping to find some interesting patterns in the
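A minimal sketch, assuming a trained `encoder` that maps `(batch, seq_len, n_features)` to a `(batch, latent_dim)` code and a `loader` yielding batches: checking the per-dimension spread of the codes is one quick way to see how many latent dimensions the model actually uses (both names are hypothetical here):

```python
import torch

@torch.no_grad()
def latent_usage(encoder, loader, threshold=1e-2):
    codes = torch.cat([encoder(x) for x, *_ in loader], dim=0)
    std = codes.std(dim=0)                    # per-dimension spread over the dataset
    active = (std > threshold).sum().item()   # rough count of "active" dimensions
    return std, active
```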

Faster way to do multiple embeddings in PyTorch?

I'm working on a torch-based library for building autoencoders with tabular datasets. One big feature is learning embeddings for categorical features. In pra
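A minimal sketch of one common way to avoid a Python loop over many `nn.Embedding` modules: pack all categorical vocabularies into a single embedding table and shift each column's indices by a per-feature offset, so a single lookup handles every feature. The `cardinalities` argument and the shared `emb_dim` are assumptions for illustration (this trick requires all features to share one embedding width):

```python
import torch
import torch.nn as nn

class PackedEmbeddings(nn.Module):
    def __init__(self, cardinalities, emb_dim):
        super().__init__()
        # Offsets shift feature i's indices into its own slice of the shared table.
        offsets = torch.tensor([0] + list(cardinalities[:-1])).cumsum(0)
        self.register_buffer("offsets", offsets)
        self.table = nn.Embedding(sum(cardinalities), emb_dim)

    def forward(self, cats):                      # cats: (batch, n_features) int64
        out = self.table(cats + self.offsets)     # one lookup for all features
        return out.flatten(1)                     # concatenated feature embeddings
```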