Hi, I've been reading through the code and I noticed that L1 loss is used instead of Smooth L1 loss for the localization loss. This differs from the paper's procedure: as far as I know, SSD uses Smooth L1 loss for localization.
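For reference, the localization loss in the SSD paper (borrowed from Fast R-CNN) applies Smooth L1 per coordinate, whereas `nn.L1Loss` computes the plain absolute error $|x|$:

$$
\text{smooth}_{L_1}(x) =
\begin{cases}
0.5\,x^2 & \text{if } |x| < 1 \\
|x| - 0.5 & \text{otherwise}
\end{cases}
$$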
https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection/blob/master/model.py#L549

```python
self.smooth_l1 = nn.L1Loss()
```

https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection/blob/master/model.py#L612

```python
loc_loss = self.smooth_l1(predicted_locs[positive_priors], true_locs[positive_priors])  # (), scalar
```
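For concreteness, here is a minimal sketch of what the change might look like, assuming the same `predicted_locs`, `true_locs` and `positive_priors` tensors as in the repository's `MultiBoxLoss` (the shapes below are only illustrative stand-ins):

```python
import torch
from torch import nn

# Sketch: swap the localization loss to PyTorch's built-in Smooth L1
smooth_l1 = nn.SmoothL1Loss()  # instead of nn.L1Loss()

# Illustrative tensors standing in for the ones used in MultiBoxLoss.forward
predicted_locs = torch.randn(8732, 4)      # (n_priors, 4) predicted offsets
true_locs = torch.randn(8732, 4)           # (n_priors, 4) encoded ground-truth offsets
positive_priors = torch.rand(8732) > 0.9   # boolean mask of matched (positive) priors

loc_loss = smooth_l1(predicted_locs[positive_priors],
                     true_locs[positive_priors])  # (), scalar
```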
My questions are:
- Has anyone tried changing the loss function to `SmoothL1Loss` as currently implemented in PyTorch?
- If it has been tried, is the result similar to what SSD achieves in the paper?
Thank you in advance.