Category "pytorch"

Access all batch outputs at the end of epoch in callback with pytorch lightning

The documentation for the on_train_epoch_end hook, https://pytorch-lightning.readthedocs.io/en/stable/extensions/callbacks.html#on-train-epoch-end, states: To acces
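In recent Lightning releases the epoch-end hook no longer receives the batch outputs, so the documented workaround is to cache them yourself across the batch hooks. A minimal sketch of that pattern, assuming a training_step that returns a dict with a "loss" key (hook signatures vary slightly between Lightning versions):

```python
import torch
from pytorch_lightning import Callback

class CollectOutputs(Callback):
    """Caches each batch's output during training and post-processes them at epoch end."""

    def __init__(self):
        self.outputs = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # `outputs` is whatever training_step returned for this batch
        # (assumed here to be a dict containing a "loss" tensor).
        self.outputs.append(outputs["loss"].detach())

    def on_train_epoch_end(self, trainer, pl_module):
        # Every batch output of the finished epoch is available here.
        epoch_loss = torch.stack(self.outputs).mean()
        pl_module.log("train/epoch_loss", epoch_loss)
        self.outputs.clear()  # free the cache for the next epoch
```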

Assigning custom weights to embedding layer in PyTorch

Does PyTorch's nn.Embedding support manually setting the embedding weights for only specific values? I know I could set the weights of the entire embedding laye
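The weight of nn.Embedding is an ordinary parameter, so individual rows can be overwritten in place under torch.no_grad() while the remaining rows keep their initialization and stay trainable. A small sketch; the indices and vectors below are made up for illustration:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

# Overwrite only the rows (token ids) you care about.
custom_rows = {2: torch.tensor([1.0, 0.0, 0.0, 0.0]),
               7: torch.tensor([0.0, 1.0, 0.0, 0.0])}

with torch.no_grad():
    for idx, vec in custom_rows.items():
        emb.weight[idx] = vec

# To keep those rows fixed during training, zero their gradient after backward():
# emb.weight.grad[list(custom_rows)] = 0
```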

PyTorch bool value of tensor with more than one value is ambiguous

I am trying to train a neural network with PyTorch, but I get the error in the title. I followed this tutorial, and I just applied some small changes to meet my
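Without seeing the modified code it is hard to say which line triggers it, but this error usually means a multi-element tensor ended up where a single bool was expected. Two classic triggers (and their fixes) are sketched below:

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 5)            # batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))

# Trigger 1: calling the loss *class* instead of an instance.
# nn.CrossEntropyLoss(logits, targets)   # -> "Boolean value of Tensor ... is ambiguous"
criterion = nn.CrossEntropyLoss()        # fix: instantiate first, then call
loss = criterion(logits, targets)

# Trigger 2: using a multi-element tensor as a condition.
# if logits > 0: ...                     # ambiguous
if (logits > 0).any():                   # fix: reduce to a single bool first
    pass
```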

Can I perform non-text sequence classification in fastai?

I am trying to figure out if I can use fastai for my problem. I am trying to classify sequences of floats. Each sequence is a vector of 24 floats. In principle,

Pytorch RuntimeError: CUDA out of memory with a huge amount of free memory

While training the model, I encountered the following problem: RuntimeError: CUDA out of memory. Tried to allocate 304.00 MiB (GPU 0; 8.00 GiB total capacity; 1
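When the error reports far less allocated memory than reserved memory, the free space is often fragmented inside PyTorch's caching allocator. Besides lowering the batch size, a hedged sketch of the usual knobs (the 128 MiB split size is only an illustrative value, and PYTORCH_CUDA_ALLOC_CONF requires a reasonably recent PyTorch):

```python
import os

# Must be set before the first CUDA allocation (ideally before importing torch).
# max_split_size_mb caps how large cached blocks may be split, which can reduce
# fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Inspect how much of the "used" memory is merely cached by the allocator.
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
print(torch.cuda.memory_reserved() / 2**20, "MiB reserved")

# Release cached (reserved but unallocated) blocks back to the driver.
torch.cuda.empty_cache()
```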

Difference between hidden dimension and n_layers in rnn using pytorch

I am stuck between hidden dimension and n_layers. What I have understood so far is that n_layers, among the parameters of an RNN in PyTorch, is the number of hidden layers.
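In torch.nn.RNN, hidden_size is the width of each hidden state vector, while num_layers stacks that many recurrent layers on top of each other; the output shapes make the difference concrete:

```python
import torch
import torch.nn as nn

input_size, hidden_size, num_layers = 10, 20, 3
rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size,
             num_layers=num_layers, batch_first=True)

x = torch.randn(5, 7, input_size)    # (batch, seq_len, input_size)
output, h_n = rnn(x)

print(output.shape)  # (5, 7, 20) -> per-timestep output of the *last* layer
print(h_n.shape)     # (3, 5, 20) -> final hidden state of *each* of the 3 layers
```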

Are anchor box sizes in torchvision's AnchorGenerator with respect to the input image, feature map, or something else?

This is not a generic question about anchor boxes, or Faster-RCNN, or anything related to theory. This is a question about how anchor boxes are implemented in p
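As far as torchvision's implementation goes, the sizes handed to AnchorGenerator are interpreted in pixels of the input image; anchors are tiled per feature-map cell, but the stride maps them back to image coordinates. A small sketch (the import path is torchvision.models.detection.anchor_utils in recent releases; older versions expose the class from the rpn module):

```python
from torchvision.models.detection.anchor_utils import AnchorGenerator

# One tuple of sizes / aspect ratios per feature-map level.
# The sizes are in pixels of the *input image*, not of the feature map.
anchor_generator = AnchorGenerator(
    sizes=((32,), (64,), (128,), (256,), (512,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)
```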

Pytorch and Torchvision are compiled with different CUDA versions

RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=10.2 and torchvision has CUDA Version=1

Semantic segmentation with detectron2

I used Detectron2 to train a custom model with Instance Segmentation and it worked well. There are several tutorials on Google Colab with Detectron2 using Instance

How can I see source code or explanation of "torch_sparse import SparseTensor"?

I am studying some source code from PyTorch Geometric. I have been searching Google for from torch_sparse import SparseTensor to learn how to use SparseTen
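torch_sparse is a separate companion package of PyTorch Geometric (github.com/rusty1s/pytorch_sparse), which is why the class is hard to find inside PyG itself. A minimal usage sketch, assuming the package is installed:

```python
import torch
from torch_sparse import SparseTensor

# Build a 3x3 sparse matrix from COO indices and values.
row = torch.tensor([0, 1, 2])
col = torch.tensor([1, 2, 0])
value = torch.tensor([1.0, 2.0, 3.0])

adj = SparseTensor(row=row, col=col, value=value, sparse_sizes=(3, 3))

print(adj.to_dense())        # materialise as a regular dense tensor
row, col, value = adj.coo()  # recover the COO representation
```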

using ImageFolder with albumentations in pytorch

I have a situation where I need to use ImageFolder with the albumentations lib to apply the augmentations in pytorch - a custom dataloader is not an option. To thi
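ImageFolder passes a PIL image to its transform, while albumentations expects a numpy array and keyword arguments, so the usual trick is a thin wrapper class. A sketch assuming albumentations with ToTensorV2 (the dataset path is a placeholder):

```python
import numpy as np
import albumentations as A
from albumentations.pytorch import ToTensorV2
from torchvision.datasets import ImageFolder

class AlbumentationsTransform:
    """Adapts an albumentations pipeline to torchvision's PIL-image interface."""
    def __init__(self, aug):
        self.aug = aug

    def __call__(self, img):
        # ImageFolder yields PIL images; albumentations wants numpy arrays.
        return self.aug(image=np.array(img))["image"]

aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
    A.Normalize(),
    ToTensorV2(),
])

dataset = ImageFolder("path/to/train", transform=AlbumentationsTransform(aug))
```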

Having Issues Loading the CelebA dataset on Google Colab Using Pytorch

I need to load the CelebA dataset for a Python (Pytorch) implementation of the following paper: https://arxiv.org/pdf/1908.10578.pdf The original code for loadi

RoBERTa classifier: cannot generate single prediction

I have successfully trained a text emotion classifier by fine-tuning a RoBERTa language model, mostly using a helpful notebook found online. Now I am trying to writ
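Assuming the model was fine-tuned with Hugging Face transformers, a single prediction only needs the same tokenizer call with return_tensors="pt", plus eval mode and no_grad; the checkpoint path and example text below are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_dir = "path/to/fine-tuned-roberta"   # placeholder checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()

text = "I can't believe how well this turned out!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits        # shape (1, num_labels)

probs = torch.softmax(logits, dim=-1)
predicted_label = probs.argmax(dim=-1).item()
print(predicted_label, probs.squeeze().tolist())
```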

Pytorch nn.CrossEntropyLoss() always returns 0

I am building a multi-class Vision Transformer Network. When passing my values through my loss function, it always returns zero. My output layer consists of 37
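nn.CrossEntropyLoss expects raw logits of shape (N, C) and integer class indices of shape (N,); a loss that is exactly zero usually means the target class's logit dwarfs all others (for example because the outputs were already softmaxed and rescaled, or the targets were derived from the outputs themselves). A sketch of the expected call, using 37 classes as in the question:

```python
import torch
import torch.nn as nn

num_classes = 37
criterion = nn.CrossEntropyLoss()

logits = torch.randn(8, num_classes)           # raw, un-softmaxed model outputs
targets = torch.randint(0, num_classes, (8,))  # class indices, NOT one-hot

loss = criterion(logits, targets)
print(loss)  # a positive scalar around log(37) ≈ 3.6 for random logits

# Pitfall: if the target class's logit is orders of magnitude larger than the
# rest, the loss collapses to ~0 and the gradient signal vanishes with it.
```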

torchaudio.sox_effects.sox_effects.apply_effects_file requires sox

I have installed sox via pip, but it says that "torchaudio.sox_effects.sox_effects.apply_effects_file requires sox". Maybe I should install other addi

How to mask a 3D tensor with 2D mask and keep the dimensions of original vector?

Suppose I have a 3D tensor A: A = torch.arange(24).view(4, 3, 2) print(A) and need to mask it using a 2D tensor mask = torch.zeros((4, 3), dtype=torch.int6
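Broadcasting handles this: unsqueeze the (4, 3) mask to (4, 3, 1) and multiply, which leaves A's shape untouched. A sketch built from the tensors in the question (the kept entries are illustrative):

```python
import torch

A = torch.arange(24).view(4, 3, 2)
mask = torch.zeros((4, 3), dtype=torch.int64)
mask[0, 0] = mask[2, 1] = 1           # example positions to keep

# Add a trailing dimension so the (4, 3) mask broadcasts over the last axis.
masked = A * mask.unsqueeze(-1)        # shape stays (4, 3, 2)

# Boolean alternative that also preserves the shape:
masked_fill = A.masked_fill(mask.unsqueeze(-1) == 0, 0)
```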

Converting From .pt model to .h5 model

I am using Google Colab. I want to convert a .pt model from my Google Drive to a .h5 model. I followed the links https://github.com/gmalivenko/pytorch2keras and https://ww

RTX 3070 compatibility with Pytorch

NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilit

Duplicate layers when reusing pytorch model

I am trying to reuse some of the resnet layers for a custom architecture and ran into an issue I can't figure out. Here is a simplified example; when I run: imp
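A common cause of "duplicate" layers is registering the same modules twice, e.g. keeping the whole resnet as an attribute while also storing slices of its children; keeping only the pieces you actually reuse avoids it. A hedged sketch (torchvision's pretrained-weights argument differs between versions):

```python
import torch.nn as nn
from torchvision import models

class ResNetBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = models.resnet18(pretrained=True)
        # Keep only the layers we reuse. Storing `resnet` itself as an attribute
        # *as well* would register every layer a second time, which shows up as
        # duplicates when printing the model or counting parameters.
        self.features = nn.Sequential(*list(resnet.children())[:-2])
        self.head = nn.Linear(512, 10)   # illustrative new classifier

    def forward(self, x):
        x = self.features(x)             # (N, 512, H/32, W/32)
        x = x.mean(dim=(2, 3))           # global average pooling
        return self.head(x)
```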

Can we use one optimizer for GAN model?

I have seen lots of GAN tutorials, and all of them use two separate optimizers for Generator and Discriminator. Their code looks like this. import torch.nn as n
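Mechanically a single optimizer can hold both networks' parameters, but the two-optimizer convention exists because the generator and discriminator are stepped at different moments with opposing objectives, and a shared optimizer makes it easy to apply stale gradients to the other network. A toy sketch of both options, purely to illustrate the mechanics:

```python
import itertools
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())      # toy generator
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())     # toy discriminator

# One optimizer over both parameter sets (possible, but unusual):
opt = torch.optim.Adam(itertools.chain(G.parameters(), D.parameters()), lr=2e-4)

# The catch: opt.step() after the discriminator's loss would also apply any
# accumulated gradients sitting on G, so zeroing and stepping must be done very
# carefully. Two optimizers keep the two updates cleanly separated:
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
```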