TypeError: __call__() takes 2 positional arguments but 3 were given when training a raccoon prediction model with Faster R-CNN through transfer learning

    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from engine import train_one_epoch, evaluate
    import utils
    import torchvision.transforms as T

    num_epochs = 10
    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)

I am using the same code as provided in this link (Building Raccoon Model), but mine is not working.
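For context, the rest of my setup roughly follows that tutorial; a sketch of it looks like this (dataset and dataset_test are my raccoon datasets, built earlier):

    import torch
    import torchvision

    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

    # two classes: background + raccoon
    num_classes = 2
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    model.to(device)

    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, collate_fn=utils.collate_fn)
    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1, shuffle=False, collate_fn=utils.collate_fn)

    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)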

This is the error message I am getting:

    TypeError                                 Traceback (most recent call last)
    in ()
          2 for epoch in range(num_epochs):
          3     # train for one epoch, printing every 10 iterations
    ----> 4     train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
          5     # update the learning rate
          6     lr_scheduler.step()

    7 frames

    in __getitem__(self, idx)
         29         target["iscrowd"] = iscrowd
         30         if self.transforms is not None:
    ---> 31             img, target = self.transforms(img, target)
         32         return img, target
         33

    TypeError: __call__() takes 2 positional arguments but 3 were given



Solution 1:[1]

The answer given in Solution 2 below is incorrect (I accidentally upvoted it before noticing). You are using the wrong Compose; note what the tutorial says:

https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together

"In references/detection/, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py, references/detection/utils.py and references/detection/transforms.py. Just copy them to your folder and use them here."

There are helper scripts there. They define their own Compose and flip transforms, whose __call__ takes both the image and the target:

https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2b308/references/detection/transforms.py#L17

I did the same thing before noticing this. Do not use the Compose from torchvision.transforms, or you will get the error above; download their transforms module and import that instead.
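For example, with the copied transforms.py sitting next to your script, the tutorial builds its transforms like this (a rough sketch; RaccoonDataset is a placeholder for your own dataset class):

    import transforms as T  # the transforms.py copied from references/detection, NOT torchvision.transforms

    def get_transform(train):
        transforms = []
        # this ToTensor takes (image, target) and returns both, unlike torchvision's version
        transforms.append(T.ToTensor())
        if train:
            # this RandomHorizontalFlip also flips the boxes stored in target
            transforms.append(T.RandomHorizontalFlip(0.5))
        return T.Compose(transforms)

    # dataset = RaccoonDataset(root="raccoon_dataset", transforms=get_transform(train=True))

Because every transform in that Compose accepts and returns both image and target, the call self.transforms(img, target) inside __getitem__ works as written in the tutorial.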

Solution 2:[2]

I am kind of a newbie at this and I was also having the same problem.

Upon doing more research, I found this, where the accepted answer used:

img = self.transforms(img)

instead of:

img, target = self.transforms(img, target)

Removing "target" solved the error for me and should solve it for you as well. Not entirely sure why even the official PyTorch tutorial also has "target" included but it does not work for us.

Solution 3:[3]

I had the same issue; there is even a thread about it on the PyTorch discussion forum: T.Compose | TypeError: __call__() takes 2 positional arguments but 3 were given.

I was able to overcome this issue by copying the files for a specific version (v0.3.0) of vision/references/detection while following the tutorial building-your-own-object-detector-pytorch-vs-tensorflow-and-how-to-even-get-started.
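If it helps, something along these lines should fetch those files (a sketch using GitHub's raw-content URLs; adjust the tag or the file list to the version you need):

    import urllib.request

    base = "https://raw.githubusercontent.com/pytorch/vision/v0.3.0/references/detection/"
    for name in ("engine.py", "utils.py", "transforms.py", "coco_eval.py", "coco_utils.py"):
        # download each helper script next to your training script
        urllib.request.urlretrieve(base + name, name)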

Only to run into another issue, which I have raised here: ValueError: All bounding boxes should have positive height and width. Found invaid box [500.728515625, 533.3333129882812, 231.10546875, 255.2083282470703] for target at index 0. #2740

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: bw4sz
Solution 2: dj48
Solution 3: Santhosh Dhaipule Chandrakanth