Category "faster-rcnn"

(Faster R-CNN) RoI pooling layer is not differentiable w.r.t. the box coordinates

The paper reports that "having an RoI pooling layer that is differentiable w.r.t. the box coordinates is a nontrivial problem" and refers to "RoI warping" (crops …
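
A sketch of the idea, assuming bilinear sampling via torch.nn.functional.grid_sample stands in for the paper's "RoI warping"; the roi_warp helper and its arguments are illustrative, not taken from the paper or from torchvision:

```python
import torch
import torch.nn.functional as F

def roi_warp(features, box, out_size=7):
    """Differentiable crop-and-resize of one RoI via bilinear sampling.

    features: (1, C, H, W) feature map
    box:      tensor (x1, y1, x2, y2) in pixel coordinates (may require grad)
    """
    _, C, H, W = features.shape
    x1, y1, x2, y2 = box
    # Express the box as an affine map from the output grid to the input,
    # in the normalized [-1, 1] coordinates that affine_grid expects.
    cx = (x1 + x2) / W - 1.0   # box center x in [-1, 1]
    cy = (y1 + y2) / H - 1.0   # box center y in [-1, 1]
    sx = (x2 - x1) / W         # half-width  in normalized units
    sy = (y2 - y1) / H         # half-height in normalized units
    theta = torch.stack([
        torch.stack([sx, torch.zeros_like(sx), cx]),
        torch.stack([torch.zeros_like(sy), sy, cy]),
    ]).unsqueeze(0)            # (1, 2, 3) affine matrix
    grid = F.affine_grid(theta, (1, C, out_size, out_size), align_corners=False)
    return F.grid_sample(features, grid, align_corners=False)

feats = torch.randn(1, 256, 50, 50)
box = torch.tensor([10.0, 12.0, 30.0, 40.0], requires_grad=True)
crop = roi_warp(feats, box)            # (1, 256, 7, 7)
crop.sum().backward()
print(box.grad)                        # gradients reach the box coordinates
```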

Are anchor box sizes in torchvision's AnchorGenerator with respect to the input image, feature map, or something else?

This is not a generic question about anchor boxes, Faster R-CNN, or the theory behind them; it is a question about how anchor boxes are implemented in PyTorch's torchvision …
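
One way to see what the sizes refer to is to generate anchors for a toy feature map and inspect the boxes that come back. In this sketch (a recent torchvision is assumed; the 224×224 image and the 7×7 stride-32 feature map are made up), the anchors come back in input-image coordinates, with widths on the order of the requested 32/64/128 pixels rather than of the 7×7 grid:

```python
import torch
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.models.detection.image_list import ImageList

# One feature level; sizes and aspect ratios are tuples-of-tuples, one per level.
anchor_gen = AnchorGenerator(sizes=((32, 64, 128),),
                             aspect_ratios=((0.5, 1.0, 2.0),))

# Toy 224x224 image batch and a matching stride-32 (7x7) feature map.
images = ImageList(torch.zeros(1, 3, 224, 224), [(224, 224)])
features = [torch.zeros(1, 256, 7, 7)]

anchors = anchor_gen(images, features)[0]
print(anchors.shape)                             # (441, 4): 7*7 locations x 9 shapes
print((anchors[:, 2] - anchors[:, 0]).unique())  # widths run from ~22 to ~180 px,
                                                 # i.e. input-image pixels, not cells
```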

Training loss for Faster R-CNN becoming either NaN or infinity

I want to train PyTorch's Faster R-CNN module on a custom dataset that I curated and labelled. The implementation looks straightforward; there was a demo …
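
With custom data the usual culprits are the annotations rather than the model: degenerate boxes (x2 <= x1 or y2 <= y1), coordinates outside the image, or label 0 used for a foreground class (torchvision reserves 0 for background), often combined with a learning rate that is too high. A rough sanity-check sketch, assuming targets in torchvision's {'boxes', 'labels'} format and image tensors of shape (C, H, W); check_targets is just an illustrative helper:

```python
import torch

def check_targets(dataset):
    """Scan a detection dataset for annotations that commonly drive the loss to NaN/inf."""
    for idx in range(len(dataset)):
        img, target = dataset[idx]          # img assumed to be a (C, H, W) tensor
        boxes = target["boxes"]
        h, w = img.shape[-2:]
        degenerate = (boxes[:, 2] <= boxes[:, 0]) | (boxes[:, 3] <= boxes[:, 1])
        outside = (boxes < 0).any(dim=1) | (boxes[:, 2] > w) | (boxes[:, 3] > h)
        if degenerate.any() or outside.any():
            print(f"sample {idx}: {degenerate.sum().item()} degenerate, "
                  f"{outside.sum().item()} out-of-image boxes")
        if (target["labels"] == 0).any():
            print(f"sample {idx}: label 0 is reserved for background")

# In the training loop, clipping gradients also helps keep the loss finite:
#   losses.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
#   optimizer.step()
```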

One box object detection

I am using a Faster R-CNN model to predict one object in an image. There can only be one object in each image. Is it possible to force Faster R-CNN to train and predict …
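
A sketch of one way to set this up with torchvision's standard builder: train with a single foreground class and cap post-processing at one detection per image via box_detections_per_img. This only limits what the model returns at inference; proposal sampling during training is unchanged.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained detector that keeps only the single highest-scoring box per image.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT",            # torchvision >= 0.13; older releases use pretrained=True
    box_detections_per_img=1,
)

# Replace the box head: 2 classes = background + the one object class.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```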

Object detection shows incorrect results with Mask R-CNN demo code

I have cloned https://github.com/akTwelve/Mask_RCNN and run the demo code. Everything works and runs correctly, but the image-processing part has incorrect results …
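
When the demo runs without errors but the drawn detections look wrong, it is worth re-checking that the results dict is unpacked into visualize.display_instances in exactly the order the repo's notebook uses, since the arguments are positional. A sketch of that part of the demo, with model and class_names assumed to be set up earlier in the notebook and the image path a placeholder:

```python
import skimage.io
from mrcnn import visualize

image = skimage.io.imread("path/to/image.jpg")   # placeholder path

results = model.detect([image], verbose=1)
r = results[0]
# Positional order matters: boxes, masks, class ids, class names, scores.
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
```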

TypeError: Inputs to a layer should be tensors. Got: <tensorflow.python.keras.engine.functional.Functional object at 0x000001ADE3B6BEE0>

I'm trying to implement Inception_resnet_v2 inside Faster R-CNN instead of ResNet50, but when I try to run the code I get this TypeError: "Inputs to a layer should be tensors. Got: <tensorflow.python.keras.engine.functional.Functional object …>"
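
That error usually means a whole Keras Model (a Functional object) is being passed to a layer where a tensor is expected. A minimal sketch of the usual fix, feeding the backbone's output tensor to the next layer instead; the layer names and sizes are illustrative, not a full Faster R-CNN:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(None, None, 3))
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_tensor=inputs)

# Wrong: layers.Conv2D(512, 3)(backbone)  ->  "Inputs to a layer should be tensors"
# Right: pass the backbone's output *tensor* (or call backbone(inputs)).
feature_map = backbone.output
rpn_conv = layers.Conv2D(512, 3, padding="same", activation="relu",
                         name="rpn_conv")(feature_map)
model = tf.keras.Model(inputs=inputs, outputs=rpn_conv)
model.summary()
```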