How to do transfer learning or fine-tuning on YOLOv4-darknet with some layers frozen?

I'm a beginner in the object detection field.

First, I followed the YOLOv4 custom-training tutorial from here and completed it successfully. Then I started to think: if I have a new task that is similar to what the YOLOv4 pre-trained model was trained on (the 80 COCO classes) and I only have a small dataset, it would be great to fine-tune the model (unfreezing only the last layer) to keep or even improve the detector's performance using that small, similar dataset. This reference seems to support my idea about the kind of fine-tuning I want to do.

Then I went to Alexey's GitHub here to check how to freeze layers, and found that I should use stopbackward=1. It says:

"...set param stopbackward=1 for layer-136 in cfg-file"

But I have no idea where "layer-136" is in the cfg file here, and I also don't know where to put stopbackward=1 if I want to unfreeze only the last layer (while freezing all the others). So, to summarize my questions:

  1. Where (on which line) should I put stopbackward=1 in yolov4-custom.cfg if I want to unfreeze the last layer and freeze all the other layers?
  2. What is the "layer-136" mentioned in Alexey's GitHub reference? (Is it one of the classifier layers, or something else?)
  3. On which line of yolov4-custom.cfg should I put stopbackward=1 for that layer-136?

Any further information would be really appreciated. Please advise.

Thank you in advance.

Regards, Sona



Solution 1 [1]:

the "layer-136" is located before the head of yolov4. To make it easy to see, try to visualize the .cfg file to Netron apps and read the .cfg via text editor, so you can understand the location of layer. You can notice the input and output (the x-layer) when you analyze it with Netron

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

[1] Solution 1 by Franz Junior (Stack Overflow)