Using ImageFolder with albumentations in PyTorch
I have a situation where I need to use ImageFolder with the albumentations library for augmentations in PyTorch; a custom dataloader is not an option. So far I am stumped and cannot get ImageFolder to work with albumentations. I have tried something along these lines:
import albumentations as A
import numpy as np

class Transforms:
    def __init__(self, transforms: A.Compose):
        self.transforms = transforms

    def __call__(self, img, *args, **kwargs):
        # ImageFolder passes a PIL image; albumentations expects a NumPy array
        # and returns a dict keyed by 'image'
        return self.transforms(image=np.array(img))['image']
and then:
trainset = datasets.ImageFolder(traindir, transform=Transforms(transforms=A.Resize(32, 32)))
where traindir is some directory with images. However, I get a weird error:
RuntimeError: Given groups=1, weight of size [16, 3, 3, 3], expected input[1024, 32, 32, 3] to have 3 channels, but got 32 channels instead
and I can't seem to find a reproducible example that makes a simple augmentation pipeline work with ImageFolder.
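For reference, a quick check of what the wrapper returns (a minimal sketch, using a dummy PIL image in place of a real dataset image) shows the shape the model ends up receiving:

from PIL import Image
import albumentations as A

dummy = Image.new("RGB", (64, 64))                 # stand-in for a dataset image
out = Transforms(transforms=A.Resize(32, 32))(dummy)
print(type(out), out.shape)                        # <class 'numpy.ndarray'> (32, 32, 3)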
UPDATE: On the recommendation of @Shai, I have now done this:
from albumentations.pytorch import ToTensorV2

class Transforms:
    def __init__(self):
        self.transforms = A.Compose([A.Resize(224, 224), ToTensorV2()])

    def __call__(self, img, *args, **kwargs):
        return self.transforms(image=np.array(img))['image']

trainset = datasets.ImageFolder(traindir, transform=Transforms())
but this throws:
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same
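For context, a minimal sketch (using a dummy PIL image) of what the updated wrapper returns shows where the ByteTensor comes from: ToTensorV2 produces a CHW tensor but keeps the original uint8 pixel values.

from PIL import Image

t = Transforms()                                   # the updated wrapper above
out = t(Image.new("RGB", (256, 256)))
print(out.shape, out.dtype)                        # torch.Size([3, 224, 224]) torch.uint8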
Solution 1:[1]
You need to use the ToTensorV2 transformation as the final one:
trainset = datasets.ImageFolder(traindir, transform=Transforms(transforms=A.Compose([A.Resize(32, 32), ToTensorV2()])))
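A fuller sketch of the same idea (the Normalize step, the ImageNet mean/std values, and the placeholder path are assumptions, not part of the original answer): adding A.Normalize before ToTensorV2 also converts the uint8 image to float32, which avoids the ByteTensor/FloatTensor mismatch reported in the update.

import albumentations as A
import numpy as np
from albumentations.pytorch import ToTensorV2
from torchvision import datasets

traindir = "path/to/train"                         # placeholder directory of class subfolders

class Transforms:
    def __init__(self, transforms: A.Compose):
        self.transforms = transforms

    def __call__(self, img, *args, **kwargs):
        # PIL -> NumPy (HWC, uint8); albumentations returns a dict keyed by 'image'
        return self.transforms(image=np.array(img))['image']

train_transform = A.Compose([
    A.Resize(32, 32),
    # Normalize scales by 1/255 and standardizes, producing a float32 image,
    # so the resulting tensor is a FloatTensor rather than a ByteTensor
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    # ToTensorV2 converts the HWC NumPy array to a CHW torch.Tensor
    ToTensorV2(),
])

trainset = datasets.ImageFolder(traindir, transform=Transforms(transforms=train_transform))

With this pipeline each sample comes out as a float32 tensor of shape (3, 32, 32).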
Solution 2:[2]
By looking into the ImageFolder implementation in PyTorch [link] and some proposed work on Kaggle [link], I propose the following solution (successfully tested on my side):
import numpy as np
from typing import Any, Callable, Optional, Tuple
from torchvision.datasets.folder import DatasetFolder, default_loader, IMG_EXTENSIONS

class CustomImageFolder(DatasetFolder):
    def __init__(
        self,
        root: str,
        transform: Optional[Callable] = None,
        target_transform: Optional[Callable] = None,
        loader: Callable[[str], Any] = default_loader,
        is_valid_file: Optional[Callable[[str], bool]] = None,
    ):
        super().__init__(
            root,
            loader,
            IMG_EXTENSIONS if is_valid_file is None else None,
            transform=transform,
            target_transform=target_transform,
            is_valid_file=is_valid_file,
        )
        self.imgs = self.samples

    def __getitem__(self, index: int) -> Tuple[Any, Any]:
        """
        Args:
            index (int): Index

        Returns:
            tuple: (sample, target) where target is class_index of the target class.
        """
        path, target = self.samples[index]
        sample = self.loader(path)
        if self.transform is not None:
            try:
                # torchvision-style transforms take the image positionally
                sample = self.transform(sample)
            except Exception:
                # albumentations transforms require the keyword argument `image`
                # and return a dict; convert the PIL image to a NumPy array first
                sample = self.transform(image=np.array(sample))["image"]
        if self.target_transform is not None:
            target = self.target_transform(target)
        return sample, target

    def __len__(self) -> int:
        return len(self.samples)
Now you can run the code as follows:
trainset = CustomImageFolder(traindir, transform=Transforms(transforms=A.Resize(32, 32)))
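A usage sketch (the directory path, the extra augmentations, and the batch size are assumptions for illustration): because __getitem__ falls back to the keyword-argument call, an albumentations Compose can also be passed to CustomImageFolder directly, without the Transforms wrapper.

import albumentations as A
from albumentations.pytorch import ToTensorV2
from torch.utils.data import DataLoader

traindir = "path/to/train"                         # placeholder directory of class subfolders

# Calling an albumentations Compose positionally raises, so __getitem__ takes
# the except branch and calls it as transform(image=...)['image']
train_transform = A.Compose([
    A.Resize(32, 32),
    A.HorizontalFlip(p=0.5),
    A.Normalize(),
    ToTensorV2(),
])

trainset = CustomImageFolder(traindir, transform=train_transform)
trainloader = DataLoader(trainset, batch_size=64, shuffle=True)

images, labels = next(iter(trainloader))
print(images.shape)                                # expected: torch.Size([64, 3, 32, 32])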
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Shai |
| Solution 2 | Mohamed Elawady |