When fine-tuning, the parameters of the backbone network usually need to be frozen. This takes two steps.
First, locate the relevant layers and set their requires_grad attribute to False.
# Freeze every parameter in the backbone so no gradients are computed for them
for param in net.backbone.parameters():
    param.requires_grad = False
Here we use the parameters() method, which yields both the weights and the biases.
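To confirm the freeze took effect, you can print the trainable state of each parameter by name (a quick check, assuming the model exposes a backbone submodule as above):

for name, param in net.named_parameters():
    print(name, param.requires_grad)  # backbone entries should print False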
Second, keep only the parameters that still require gradients and pass them to the optimizer.
# Only parameters with requires_grad=True are handed to the optimizer
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, net.parameters()),
    lr=learning_rate, momentum=mom)
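Putting the two steps together, here is a minimal sketch. The model structure (a torchvision ResNet-18 backbone plus a new linear head), the class name Net, and the hyperparameter values are illustrative assumptions, not from the original post:

import torch
import torch.nn as nn
import torchvision

class Net(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Backbone to be frozen; in practice load pretrained weights here
        # (the exact flag for that depends on the torchvision version)
        backbone = torchvision.models.resnet18()
        backbone.fc = nn.Identity()               # drop the original classifier head
        self.backbone = backbone
        self.head = nn.Linear(512, num_classes)   # new head that will be trained

    def forward(self, x):
        return self.head(self.backbone(x))

net = Net()

# Step 1: freeze the backbone
for param in net.backbone.parameters():
    param.requires_grad = False

# Step 2: pass only the trainable parameters (the new head) to the optimizer
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, net.parameters()),
    lr=0.01, momentum=0.9)

With this setup, backward passes still flow through the backbone to reach the head, but the frozen backbone weights are never updated.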
Original post: https://www.cnblogs.com/hizhaolei/p/12535294.html