I was curious whether it is possible to use transfer learning in text generation, i.e. to re-train/pre-train a model on a specific kind of text. For example, having a pre
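A minimal sketch of what that could look like, assuming the Hugging Face transformers library and a GPT-2 checkpoint (both my assumptions; the question names neither):

    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")    # pretrained weights

    text = "some domain-specific training text ..."
    inputs = tokenizer(text, return_tensors="pt")
    # passing labels = input_ids makes the model compute the LM loss itself
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()    # one fine-tuning gradient step (optimizer omitted)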
I am trying to implement a VAE for MNIST with convolutional layers, using TensorFlow 2.6 and Python 3.9. The code I have is: # Specify latent space dimensions-
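Since the posted code is cut off at the latent-dimension line, here is a minimal sketch of the reparameterization step such a convolutional VAE typically needs, assuming Keras layers (latent_dim = 2 is a placeholder):

    import tensorflow as tf

    latent_dim = 2  # placeholder; the original value is truncated

    class Sampling(tf.keras.layers.Layer):
        """Reparameterization trick: z = mean + exp(0.5 * log_var) * eps."""
        def call(self, inputs):
            z_mean, z_log_var = inputs
            eps = tf.random.normal(shape=tf.shape(z_mean))
            return z_mean + tf.exp(0.5 * z_log_var) * eps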
I am using the Physics-Informed Neural Networks (PINNs) methodology to solve high-dimensional non-linear PDEs. Specifically, I am using this class https://git
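For reference, a generic sketch of how a PINN residual is usually computed with nested tf.GradientTape calls; the stand-in network and the choice of Burgers' equation are my assumptions, since the linked class is truncated:

    import tensorflow as tf

    # Hypothetical stand-in for the PINN: maps (t, x) -> u
    net = tf.keras.Sequential([
        tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,)),
        tf.keras.layers.Dense(1),
    ])

    def pinn_residual(t, x, nu=0.01):
        with tf.GradientTape() as outer:
            outer.watch(x)
            with tf.GradientTape(persistent=True) as inner:
                inner.watch([t, x])
                u = net(tf.concat([t, x], axis=1))
            u_t = inner.gradient(u, t)    # first derivatives
            u_x = inner.gradient(u, x)
        u_xx = outer.gradient(u_x, x)     # second derivative
        return u_t + u * u_x - nu * u_xx  # residual of Burgers' equation

    t = tf.random.uniform((16, 1)); x = tf.random.uniform((16, 1))
    print(pinn_residual(t, x).shape)      # (16, 1): one residual per point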
I am new to TensorFlow and deep learning. I am trying to see how the loss decreases over 10 epochs in the RNN model that I created to read a dataset from Kaggle w
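A minimal sketch of how the per-epoch losses can be read out of model.fit, with random stand-in data in place of the Kaggle dataset (which is not shown):

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt

    x = np.random.rand(100, 5, 1).astype("float32")  # stand-in sequences
    y = np.random.rand(100, 1).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(8, input_shape=(5, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    history = model.fit(x, y, epochs=10, verbose=0)

    plt.plot(history.history["loss"])    # one loss value per epoch
    plt.xlabel("epoch"); plt.ylabel("loss"); plt.show()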
I trained my neural nets and realized that even after torch.cuda.empty_cache() and gc.collect() my CUDA device memory is still filled. In Colab notebooks we can see t
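A minimal sketch of the usual fix, assuming a CUDA device is available: empty_cache() only returns cached blocks to the driver, so tensors that are still referenced (model, optimizer state, stored losses) must be deleted first:

    import gc
    import torch

    model = torch.nn.Linear(1000, 1000).cuda()   # stand-in for a trained net
    print(torch.cuda.memory_allocated())         # memory held by live tensors

    del model                                    # drop the Python references first
    gc.collect()
    torch.cuda.empty_cache()
    print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())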
I've been trying to experiment with region-based Dice loss, but there are so many variations of it on the internet that I could not find tw
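For comparison, one common soft-Dice variant in PyTorch; the smoothing constant and the choice of plain sums (rather than squared sums) in the denominator are exactly the points on which published versions differ:

    import torch

    def dice_loss(pred, target, eps=1.0):
        """Soft Dice loss for binary segmentation (one common variant).
        pred: probabilities in [0, 1], shape (N, H, W); target: {0, 1}, same shape."""
        pred = pred.flatten(1)
        target = target.flatten(1)
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1)
        dice = (2 * intersection + eps) / (union + eps)
        return 1 - dice.mean()

    print(dice_loss(torch.rand(2, 8, 8), (torch.rand(2, 8, 8) > 0.5).float()))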
I have this image that contains text (numbers and alphabets) in it. I want to get the location of all the text and numbers present in this image. Also I want to
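A minimal sketch of one way to get per-word boxes, assuming pytesseract and OpenCV (the filename is a placeholder):

    import cv2
    import pytesseract
    from pytesseract import Output

    img = cv2.imread("image.png")                # placeholder filename
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    for i, word in enumerate(data["text"]):
        if word.strip():                         # skip empty detections
            x, y, w, h = (data["left"][i], data["top"][i],
                          data["width"][i], data["height"][i])
            print(word, (x, y, w, h))            # pixel coordinates of each word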
I am applying an LSTM to a dataset that has 53699 entries in the training set and 23014 entries in the test set. The shape of the input training set is (53699, 4)
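A sketch of the usual fix: a Keras LSTM expects 3D input (samples, timesteps, features), and (53699, 4) is 2D. Treating the 4 columns as one timestep of 4 features is just one possible choice:

    import numpy as np
    import tensorflow as tf

    X_train = np.random.rand(53699, 4).astype("float32")  # stand-in data
    X_train_3d = X_train.reshape(-1, 1, 4)     # (samples, timesteps, features)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(1, 4)),
        tf.keras.layers.Dense(1),
    ])
    print(model(X_train_3d[:2]).shape)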
According to the paper, VQ-VAE goes through two-stage training: first train the encoder and the vector quantizer, then train an auto-regressive model
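A minimal sketch of the stage-1 quantization step with the straight-through gradient, assuming PyTorch (my choice, not from the question); stage 2 then fits the auto-regressive prior over the discrete indices with these weights frozen:

    import torch

    def quantize(z_e, codebook):
        """Map encoder outputs z_e (N, D) to nearest codebook entries (K, D)."""
        dist = torch.cdist(z_e, codebook)        # (N, K) pairwise distances
        idx = dist.argmin(dim=1)                 # index of nearest code
        z_q = codebook[idx]
        # straight-through: forward uses z_q, backward passes gradients to z_e
        return z_e + (z_q - z_e).detach(), idx

    z_e = torch.randn(6, 4, requires_grad=True)  # stand-in encoder outputs
    z_q, idx = quantize(z_e, torch.randn(10, 4)) # K = 10 codes of dimension 4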
I am training a convolutional neural network but have a relatively small dataset, so I am implementing techniques to augment it. This is the first time I a
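A minimal sketch of one augmentation option, assuming Keras (the question does not name a framework); these layers are active only in training mode, so validation data passes through unchanged:

    import tensorflow as tf

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),  # random left-right flips
        tf.keras.layers.RandomRotation(0.1),       # rotations up to 10% of 2*pi
        tf.keras.layers.RandomZoom(0.1),           # zooms up to +/- 10%
    ])

    x = tf.random.uniform([4, 32, 32, 3])
    print(augment(x, training=True).shape)         # same shape, perturbed images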
I'm currently studying the code of a Transformer, but I cannot understand the masked multi-head attention in the decoder. The paper says that it is to prevent you from seeing the
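A minimal sketch of the causal (look-ahead) mask that implements this, assuming PyTorch: position i is allowed to attend only to positions <= i, so future tokens get zero attention weight:

    import torch

    seq_len = 5
    mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
    scores = torch.randn(seq_len, seq_len)             # stand-in attention scores
    scores = scores.masked_fill(mask, float("-inf"))   # hide future positions
    attn = torch.softmax(scores, dim=-1)               # upper triangle becomes 0
    print(attn)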
It is common practice to augment data (adding samples programmatically, such as random crops, in the case of an image dataset) on both training
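For illustration, a sketch of the train-only convention in tf.data (my choice of framework): random augmentation goes into the training pipeline, while the test pipeline stays deterministic so evaluation sees unmodified data:

    import tensorflow as tf

    def augment(image, label):
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image, max_delta=0.1)
        return image, label

    images = tf.zeros([8, 32, 32, 3])              # stand-in data
    labels = tf.zeros([8], tf.int32)
    train_ds = tf.data.Dataset.from_tensor_slices((images, labels)).map(augment)
    test_ds = tf.data.Dataset.from_tensor_slices((images, labels))  # no augmentation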
Why does zero_grad() need to be called during training? The docstring just says: zero_grad(self): "Sets gradients of all model parameters to zero."
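A minimal training-loop sketch showing why: backward() accumulates into each parameter's .grad, so without zero_grad() every step would mix in gradients left over from earlier batches:

    import torch

    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 1)

    for _ in range(3):
        opt.zero_grad()                      # clear gradients from the last step
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()                      # accumulates into .grad
        opt.step()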
I have images (X_train) and mask data (y_train). I want to train a U-Net. I am currently using the IoU metric, and the validation IoU is very low and constant
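For comparison, a plain NumPy IoU for binary masks; a low, constant value often means every prediction thresholds to background (or the metric is applied to unthresholded probabilities):

    import numpy as np

    def iou(y_true, y_pred, threshold=0.5, eps=1e-7):
        y_pred = y_pred > threshold              # binarize the predictions
        inter = np.logical_and(y_true, y_pred).sum()
        union = np.logical_or(y_true, y_pred).sum()
        return (inter + eps) / (union + eps)

    print(iou(np.array([[0, 1], [1, 1]]), np.array([[0.2, 0.9], [0.8, 0.7]])))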
So I'm using the gym stocks environment to train a model using the A2C policy, but I want to understand how the profit is calculated by the model. In the documentati
I see that Layer Normalization is a more modern normalization method than Batch Normalization, and it is very simple to code in TensorFlow. But I think the layer
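A minimal sketch of the difference in TensorFlow: LayerNormalization computes its statistics per sample over the feature axis, so the result does not depend on the batch, unlike BatchNormalization:

    import tensorflow as tf

    x = tf.random.normal([2, 5, 10])             # (batch, time, features)
    ln = tf.keras.layers.LayerNormalization(axis=-1)
    y = ln(x)   # each (batch, time) position normalized over its 10 features
    print(y.shape)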
I need to get the bounding box coordinates generated in the above image using YOLO object detection.
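A sketch of the usual coordinate conversion, assuming the raw YOLO output format of normalized (center_x, center_y, width, height); the detection values below are made up:

    import numpy as np

    def yolo_to_box(detection, img_w, img_h):
        """Convert normalized (cx, cy, w, h) to pixel (x, y, w, h),
        where (x, y) is the top-left corner."""
        cx, cy, w, h = detection[:4]
        x = int((cx - w / 2) * img_w)
        y = int((cy - h / 2) * img_h)
        return x, y, int(w * img_w), int(h * img_h)

    print(yolo_to_box(np.array([0.5, 0.5, 0.2, 0.4]), 640, 480))  # (256, 144, 128, 192)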
I have been trying to fix an error during my TensorFlow course (Section 3: Neural Network Regression with TensorFlow) on Udemy. import tensorflow as tf impo
I cloned this repo with !git clone https://github.com/lbin/DCNv2.git and tried to build it on Google Colab, but got this error