How can mAP calculation work with sorted labels? #289
I think I now understand this better. Because the labels are also sorted when they are returned from create_training_instances in train.py, this will not be an issue for models that are trained using this code. It's only a problem if evaluate.py is run on a pretrained model that doesn't use labels that have been sorted alphabetically (e.g. a model that uses the COCO labels).
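The mismatch described above can be sketched in a few lines. The label names here are illustrative, not the actual COCO class list; the point is only that a pretrained model emits class indices keyed to the original label order, while an evaluator that re-sorts the labels interprets those same indices differently:

```python
# A pretrained model emits class indices in its ORIGINAL label order
# (label names here are illustrative, not the real COCO list):
original_labels = ["person", "bicycle", "car", "airplane"]

# An evaluator that sorts the labels alphabetically builds a different mapping:
sorted_labels = sorted(original_labels)  # ['airplane', 'bicycle', 'car', 'person']

# A prediction of class index 0 means "person" to the model,
# but would be scored against "airplane" after sorting:
pred_index = 0
print(original_labels[pred_index])  # person
print(sorted_labels[pred_index])    # airplane
```

With every ground-truth box compared against the wrong class, the per-class AP collapses toward zero, which matches the "mostly 0s" behaviour reported in this issue.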
I'm glad others came across this as well. Additionally, the sort order of numeric labels is lexicographic rather than numerical, i.e. "100" comes before "11".
This is incredibly confusing. I'll see if I can fix that somehow. 🤔
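The lexicographic ordering is easy to demonstrate: Python's `sorted` compares label strings character by character, so an explicit `key` is needed to get numeric order.

```python
labels = ["2", "11", "100"]

# Default string sort is lexicographic: "1" < "2", so "100" and "11" come first.
print(sorted(labels))           # ['100', '11', '2']

# A numeric sort requires an explicit key:
print(sorted(labels, key=int))  # ['2', '11', '100']
```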
OK, so adding the change at Line 59 in 768c524 shows the same labels in prediction as they were trained with. I'll make a PR when I find the time to.
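One way a fix like this could work is to persist the label order used at training time alongside the model, so prediction and evaluation reload it instead of re-sorting. This is only a hedged sketch of that idea; the function names and `labels.json` file are illustrative assumptions, not the actual code from commit 768c524:

```python
# Sketch: save the training-time label order and reload it verbatim later,
# so class indices stay consistent between training and evaluation.
# Names here are illustrative, not the code from the referenced commit.
import json
import os
import tempfile

def save_labels(labels, path):
    # Store the training-time label order next to the model weights.
    with open(path, "w") as f:
        json.dump({"labels": labels}, f)

def load_labels(path):
    # Reload labels in the exact order used during training.
    with open(path) as f:
        return json.load(f)["labels"]

# train.py sorts the labels once; the saved order is then authoritative.
training_labels = sorted(["person", "car", "bicycle"])

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "labels.json")
    save_labels(training_labels, path)
    assert load_labels(path) == ["bicycle", "car", "person"]
```

Because the saved file records whatever order training actually used, this also stays correct for pretrained models whose labels were never sorted.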
I'm confused about how evaluate.py can work correctly when the labels are sorted alphabetically on line 32. The class indices predicted by the model correspond to the original order of the labels, so they can't be compared to the indices of the sorted labels.
I'm working with some COCO images and the APs are coming out as mostly 0s. However, if I remove the command to sort the labels, the AP calculation seems to work ok.
What's confusing me is that this seems to be such a major bug that I can't understand how it wouldn't have been spotted before. Can anyone tell me if I'm missing something? Thank you.