Does PyTorch's nn.Embedding support manually setting the embedding weights for only specific values? I know I could set the weights of the entire embedding layer.
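A minimal sketch of one way this can be done, assuming the goal is to overwrite only selected rows of the weight table; the indices and values below are placeholders, not taken from the question:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)

# Overwrite only the rows for selected indices (2 and 5 here are placeholders)
# while leaving the rest of the table untouched.
with torch.no_grad():
    emb.weight[2] = torch.ones(4)
    emb.weight[5] = torch.full((4,), 0.5)

print(emb(torch.tensor([2, 5, 7])))
```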
My code: h_table = tf.lookup.StaticHashTable(initializer=tf.lookup.KeyValueTensorInitializer(keys=[0, 1, 2, 3, 4, 5], values=[12.3, …
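For reference, a runnable version of that construction; the remaining values, the default_value, and the lookup keys are placeholders, since the original snippet is cut off:

```python
import tensorflow as tf

h_table = tf.lookup.StaticHashTable(
    initializer=tf.lookup.KeyValueTensorInitializer(
        keys=tf.constant([0, 1, 2, 3, 4, 5]),
        values=tf.constant([12.3, 1.0, 2.0, 3.0, 4.0, 5.0]),  # placeholder values
    ),
    default_value=-1.0,  # returned for keys not in the table
)

print(h_table.lookup(tf.constant([1, 3, 9])))  # unknown key 9 -> default_value
```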
Suppose I have a 3D tensor A: A = torch.arange(24).view(4, 3, 2); print(A). I need to mask it using a 2D tensor mask = torch.zeros((4, 3), dtype=torch.int64)
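One way to line the shapes up is to broadcast the mask over the last dimension; a minimal sketch (the mask entries set to 1 are placeholders):

```python
import torch

A = torch.arange(24).view(4, 3, 2)
mask = torch.zeros((4, 3), dtype=torch.int64)
mask[0, 1] = 1
mask[2, 0] = 1  # example positions to keep

# Broadcast the (4, 3) mask over the last dimension so it matches (4, 3, 2)
masked = A * mask.unsqueeze(-1)   # zeros out the masked-off positions
selected = A[mask.bool()]         # or keep only the selected rows, shape (num_kept, 2)
print(masked.shape, selected.shape)
```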
I wish to create a custom pooling layer that works efficiently on GPUs. For instance, I have the following input tensor: in = <tf.Tensor: shape=(4, 5), dtype=…>
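A rough sketch of how such a layer could look as a subclassed Keras layer, pooling non-overlapping windows along the last axis with purely vectorized ops; the window size and the choice of max-pooling are assumptions, since the question is cut off:

```python
import tensorflow as tf

class WindowMaxPool(tf.keras.layers.Layer):
    """Sketch: max over non-overlapping windows along the last axis,
    using only vectorized ops so it runs on GPU without Python loops."""

    def __init__(self, window=5, **kwargs):
        super().__init__(**kwargs)
        self.window = window

    def call(self, x):
        batch = tf.shape(x)[0]
        # reshape (batch, n) -> (batch, n // window, window); assumes n % window == 0
        x = tf.reshape(x, (batch, -1, self.window))
        return tf.reduce_max(x, axis=-1)

x = tf.reshape(tf.range(20, dtype=tf.float32), (4, 5))
print(WindowMaxPool(window=5)(x))  # -> shape (4, 1)
```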
I've been trying to generate a custom dataset from two arrays: one with the shape (128,128,6) (satellite data with 6 channels), and the other with the shape (12…
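A sketch of one way to wrap the two arrays, assuming PyTorch and that both arrays are stacked along a leading sample dimension; the label array's shape is a placeholder since the question is cut off:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SatelliteDataset(Dataset):
    """Pairs satellite patches with their targets; shapes are assumptions."""

    def __init__(self, images, labels):
        self.images = images  # e.g. (N, 128, 128, 6)
        self.labels = labels  # placeholder, e.g. (N, 128, 128)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # channels-last -> channels-first, as PyTorch conv layers expect
        img = torch.from_numpy(self.images[idx]).permute(2, 0, 1).float()
        lbl = torch.from_numpy(np.asarray(self.labels[idx])).float()
        return img, lbl

images = np.random.rand(10, 128, 128, 6).astype(np.float32)
labels = np.random.rand(10, 128, 128).astype(np.float32)
loader = DataLoader(SatelliteDataset(images, labels), batch_size=4)
```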
I have an output tensor after convolution with dimensions [1, 64, 112, 112]. Is there any way I can visualize this using matplotlib only, keeping in mind that imshow() only accepts 2D or 3/4-channel images?
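One common workaround is to plot each of the 64 channels as its own grayscale image in a grid of subplots; a sketch with random data standing in for the conv output:

```python
import torch
import matplotlib.pyplot as plt

out = torch.randn(1, 64, 112, 112)   # stand-in for the conv output

# Drop the batch dimension and show each channel as a separate grayscale image
fmaps = out[0].detach().cpu()
fig, axes = plt.subplots(8, 8, figsize=(12, 12))
for i, ax in enumerate(axes.flat):
    ax.imshow(fmaps[i], cmap="gray")
    ax.axis("off")
plt.tight_layout()
plt.show()
```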
I am trying to initialize a tensor on Google Colab with GPU enabled. device = torch.device('cuda' if torch.cuda.is_available() else 'cpu'); t = torch.tensor([1, …
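For what it's worth, a tensor can be created directly on that device or moved there afterwards; a minimal sketch (the values are placeholders since the list is cut off):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Either create the tensor directly on the target device...
t = torch.tensor([1, 2, 3], device=device)
# ...or build it on CPU and move it; note that .to() returns a new tensor
# rather than modifying t in place.
t = torch.tensor([1, 2, 3]).to(device)
print(t.device)
```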
I have a list of PyTorch tensors as shown below: data = [[tensor([0, 0, 0]), tensor([1, 2, 3])], [tensor([0, 0, 0]), tensor([4, 5, 6])]]. Now this is just an example.
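Assuming the goal is to collapse the nested list into a single tensor, a minimal sketch with torch.stack:

```python
import torch

data = [[torch.tensor([0, 0, 0]), torch.tensor([1, 2, 3])],
        [torch.tensor([0, 0, 0]), torch.tensor([4, 5, 6])]]

# Stack the inner pairs first, then the outer list, giving one (2, 2, 3) tensor
stacked = torch.stack([torch.stack(pair) for pair in data])
print(stacked.shape)  # torch.Size([2, 2, 3])
```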
I have a tensor with x.shape = [3, 2, 2]: import torch; x = torch.tensor([[[-0.3000, -0.2926], [-0.2705, -0.2632]], [[-0.1821, -0.1747], [-0.1526, -0.1453]], …
I would like to process text with TensorFlow 2.8 in a Jupyter notebook. My code: import re; import string; import tensorflow as tf; from tensorflow import keras; from …
I'm trying to build a simple word generator. However, I'm having some difficulty with the sliding windows. Here is my actual code: files = glob("transfdata/*")
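As a point of comparison, a minimal sliding-window sketch with tf.data, assuming the goal is (window of words) -> (next word) pairs; the token values and the window length are placeholders:

```python
import tensorflow as tf

tokens = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])  # placeholder token ids
seq_len = 3

ds = tf.data.Dataset.from_tensor_slices(tokens)
ds = ds.window(seq_len + 1, shift=1, drop_remainder=True)   # overlapping windows
ds = ds.flat_map(lambda w: w.batch(seq_len + 1))            # window -> single tensor
ds = ds.map(lambda w: (w[:-1], w[-1]))                      # (inputs, target)

for x, y in ds:
    print(x.numpy(), "->", y.numpy())
```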
I'm working with tensors with a shape of (X, 42), where X can range between 50 and 70. I want to pad each tensor I get until it reaches a fixed size.
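A minimal sketch with torch.nn.functional.pad, assuming zero-padding along the first dimension up to a target of 70 rows (the target size is an assumption, since the question is cut off):

```python
import torch
import torch.nn.functional as F

target_rows = 70          # assumed target size
t = torch.randn(53, 42)   # example input with X = 53

# F.pad takes pad amounts for the last dim first: (left, right, top, bottom)
padded = F.pad(t, (0, 0, 0, target_rows - t.shape[0]))
print(padded.shape)       # torch.Size([70, 42])
```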
I have a PyTorch tensor of size (5, 1, 44, 44) (batch, channel, height, width), and I want to 'resize' it to (5, 1, 224, 224). How can I do that? What functions should I use?
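One option is torch.nn.functional.interpolate; a minimal sketch (the interpolation mode is an assumption):

```python
import torch
import torch.nn.functional as F

t = torch.randn(5, 1, 44, 44)
# Upsample the spatial dimensions only; batch and channel dims are preserved
resized = F.interpolate(t, size=(224, 224), mode='bilinear', align_corners=False)
print(resized.shape)  # torch.Size([5, 1, 224, 224])
```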
I have these two tensors: box_a = torch.randn(1, 4) and box_b = torch.randn(1, 4). I have this code in PyTorch: box_a[:, 2:].unsqueeze(1).expand(1, 1, 2), but I want to …
Is there any efficient way to merge one tensor into another in PyTorch, but only at specific indexes? Here is my full problem: I have a list of indexes of a tensor in the code below.
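A minimal sketch of one way to do this with index_copy_, assuming the goal is to write rows of one tensor into another at the given positions; the tensors and indexes below are placeholders:

```python
import torch

dst = torch.zeros(6, 3)                               # tensor to merge into
src = torch.arange(9, dtype=torch.float32).view(3, 3) # rows to insert
idx = torch.tensor([1, 3, 5])                         # target positions

dst.index_copy_(0, idx, src)   # in-place; equivalent to dst[idx] = src
print(dst)
```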