How can mAP calculation work with sorted labels? #289

Open
Sharon507 opened this issue Jul 9, 2020 · 3 comments

@Sharon507

I'm confused about how evaluate.py can work correctly when the labels are sorted alphabetically on line 32. The class indices predicted by the model correspond to the original order of the labels, so they can't be compared against the indices of the sorted labels.

I'm working with some COCO images and the APs are coming out as mostly 0s. However, if I remove the command to sort the labels, the AP calculation seems to work ok.

What's confusing me is that this seems to be such a major bug that I can't understand how it wouldn't have been spotted before. Can anyone tell me if I'm missing something? Thank you.
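
To make the mismatch concrete, here is a minimal sketch (not code from the repo; the five labels are just a COCO-style excerpt for illustration):

```python
# Sketch of the bug: the model's class indices refer to the training-time
# label order, but evaluate.py looks them up in an alphabetically sorted copy.
labels = ["person", "bicycle", "car", "motorbike", "aeroplane"]  # training order
sorted_labels = sorted(labels)  # what evaluate.py uses after sorting

pred_class = 0  # the model predicts class 0, i.e. "person" in the training order
print(labels[pred_class])         # person     -> what the model actually means
print(sorted_labels[pred_class])  # aeroplane  -> what evaluate.py scores it as
# Every remapped prediction counts as a wrong class, so the APs collapse to ~0.
```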

@Sharon507 changed the title from "How can evaluate.py work with sorted labels?" to "How can mAP calculation work with sorted labels?" on Jul 9, 2020
@Sharon507
Author

I think I now understand this better. Because the labels are also sorted when they are returned from create_training_instances in train.py, this will not be an issue for models that are trained using this code. It's only a problem if evaluate.py is run on a pretrained model that doesn't use labels that have been sorted alphabetically (e.g. a model that uses the COCO labels).
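
To spell out the consistency argument, a minimal sketch (not code from the repo): as long as both training and evaluation sort the labels, the two index spaces agree.

```python
# Sketch: train.py (via create_training_instances) and evaluate.py both sort,
# so for models trained with this repo the index spaces line up.
config_labels = ["person", "car", "bicycle"]  # whatever order the config lists

train_labels = sorted(config_labels)  # order baked into the model at training
eval_labels = sorted(config_labels)   # order evaluate.py maps indices through
assert train_labels == eval_labels    # same order -> indices agree -> mAP valid

# A pretrained COCO model, by contrast, was trained on the *unsorted* COCO
# order, so sorting at evaluation time remaps its indices and breaks the mAP.
```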

@459below

I'm glad others came across this as well. Additionally, numeric labels are sorted lexicographically as strings rather than by value, i.e. "100" comes before "11".

Given labels:   [u'0', u'1', u'2', u'3', u'4', u'5', u'6', u'7', u'8', u'9', u'10', u'11', u'12', u'13', u'14', u'15', u'16', u'17', u'18', u'19', u'20', u'21', u'23', u'24', u'25', u'26', u'27', u'28', u'29', u'30', u'33', u'34', u'35', u'36', u'37', u'38', u'39', u'40', u'41', u'100']

Training on:    [u'0', u'1', u'10', u'100', u'11', u'12', u'13', u'14', u'15', u'16', u'17', u'18', u'19', u'2', u'20', u'21', u'23', u'24', u'25', u'26', u'27', u'28', u'29', u'3', u'30', u'33', u'34', u'35', u'36', u'37', u'38', u'39', u'4', u'40', u'41', u'5', u'6', u'7', u'8', u'9']

This is incredibly confusing. I'll see if I can fix that somehow. 🤔
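
One way to fix the string ordering, as a minimal sketch rather than repo code, is a natural-sort key that compares numeric labels by value. Note that applying this to an already-trained model would break its index mapping, so it only helps if training and evaluation both use it.

```python
# Minimal sketch (not in the repo): a sort key that orders numeric label
# strings by value and leaves non-numeric labels in lexicographic order.
def natural_key(label):
    return (0, int(label)) if label.isdigit() else (1, label)

labels = ["0", "1", "2", "10", "11", "100"]
print(sorted(labels))                   # ['0', '1', '10', '100', '11', '2']
print(sorted(labels, key=natural_key))  # ['0', '1', '2', '10', '11', '100']
```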

@459below

459below commented Dec 22, 2020

OK, so adding

labels = sorted(labels)

at the top of

def draw_boxes(image, boxes, labels, obj_thresh, quiet=True):

makes prediction display the same label order the model was trained with. I'll make a PR when I find the time.
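
In context, the patch would look roughly like this (a sketch; the file location and function body are assumptions, and only the sorting line is new):

```python
# utils/bbox.py (location assumed) -- sketch of the one-line patch
def draw_boxes(image, boxes, labels, obj_thresh, quiet=True):
    labels = sorted(labels)  # match the order used by create_training_instances
    for box in boxes:
        ...  # existing drawing code, unchanged
    return image
```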
