Merge pull request #14 from laclouis5/dev
Merge dev
laclouis5 authored Nov 30, 2022
2 parents 2c9fec6 + bf01bef commit 3e0a66b
Showing 26 changed files with 1,356 additions and 400 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -92,7 +92,8 @@ ipython_config.py
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
Pipfile
Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
1 change: 1 addition & 0 deletions .vscode/settings.json
@@ -1,5 +1,6 @@
{
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.linting.pylintEnabled": true,
"python.linting.enabled": true,
"python.analysis.typeCheckingMode": "off"
96 changes: 53 additions & 43 deletions README.md
@@ -31,15 +31,16 @@ The `AnnotationSet` class contains static methods to read different databases:

```python
# COCO
coco_gts = AnnotationSet.from_coco(file_path="path/to/file.json")
coco = AnnotationSet.from_coco(file_path="path/to/file.json")

# Pascal VOC
xml_gts = AnnotationSet.from_xml(folder="path/to/files/")

# YOLO
yolo_preds = AnnotationSet.from_yolo(
# YOLOv5
yolo = AnnotationSet.from_yolo_v5(
folder="path/to/files/",
image_folder="path/to/images/")
image_folder="path/to/images/"
)

# Pascal VOC
pascal = AnnotationSet.from_pascal_voc(folder="path/to/files/")
```

`Annotation` offers file-level granularity for compatible datasets:
@@ -50,28 +51,30 @@ annotation = Annotation.from_labelme(file_path="path/to/file.xml")

For more specific needs, the `BoundingBox` class contains many utilities to parse bounding boxes in different formats, such as the `create()` method.
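As an illustration of the coordinate arithmetic such parsing utilities have to handle, here is a minimal stdlib sketch (not globox's actual implementation; the function name and signature are hypothetical):

```python
def yolo_to_xyxy(
    xc: float, yc: float, w: float, h: float,
    img_w: int, img_h: int,
) -> tuple[float, float, float, float]:
    """Convert a YOLO-style box (relative center and size) to absolute corners."""
    bw, bh = w * img_w, h * img_h   # absolute box width and height
    xmin = xc * img_w - bw / 2      # move from center to top-left corner
    ymin = yc * img_h - bh / 2
    return xmin, ymin, xmin + bw, ymin + bh

print(yolo_to_xyxy(0.5, 0.5, 0.5, 0.5, 100, 100))  # (25.0, 25.0, 75.0, 75.0)
```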

`AnnotationsSets` can be combined and annotations can be added:
`AnnotationSet`s are set-like objects. They can be combined and annotations can be added:

```python
gts = coco_gts + xml_gts
gts = coco + yolo
gts.add(annotation)
```

### Inspect Databases
### Inspect Datasets

Iterators and efficient `image_id` lookup are easy to use:
Iterators and efficient lookup by `image_id` are easy to use:

```python
if annotation in gts:
print("This annotation is in the DB.")
print("This annotation is present.")

if "image_123.jpg" in gts.image_ids:
print("Annotation of image 'image_123.jpg' is present.")

for box in gts.all_boxes:
print(box.label, box.area, box.is_ground_truth)

for annotation in gts:
print(f"{annotation.image_id}: {len(annotation.boxes)} boxes")

print(gts.image_ids == yolo_preds.image_ids)
nb_boxes = len(annotation.boxes)
print(f"{annotation.image_id}: {nb_boxes} boxes")
```

Database stats can be printed to the console:
@@ -115,30 +118,36 @@ coco_gts.show_stats()
Datasets can be converted to and saved in other formats easily:

```python
# To Pascal VOC
coco_gts.save_xml(save_dir="pascalVOC_db/")
# ImageNet
gts.save_imagenet(save_dir="imagenet_db/")

# TO CVAT
coco_gts.save_cvat(path="train.xml")
# YOLO Darknet
gts.save_yolo_darknet(
save_dir="yolo_train/",
label_to_id={"cat": 0, "dog": 1, "racoon": 2}
)

# To YOLO
coco_gts.save_yolo(
# YOLOv5
gts.save_yolo_v5(
save_dir="yolo_train/",
label_to_id={"cat": 0, "dog": 1, "racoon": 2})
label_to_id={"cat": 0, "dog": 1, "racoon": 2},
)

# CVAT
gts.save_cvat(path="train.xml")
```

### COCO Evaluation

Evaluating is as easy as:

```python
evaluator = COCOEvaluator(coco_gts, yolo_preds)
ap = evaluator.ap()
```
evaluator = COCOEvaluator(
ground_truths=gts,
predictions=dets
)

All COCO metrics are available:

```python
ap = evaluator.ap()
ar_100 = evaluator.ar_100()
ap_75 = evaluator.ap_75()
ap_small = evaluator.ap_small()
@@ -155,19 +164,19 @@ which outputs:

```
COCO Evaluation
┏━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━┳...┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┓
┃ Label     ┃ AP 50:95 ┃  AP 50 ┃   ┃    AR S ┃    AR M ┃    AR L ┃
┡━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━╇...╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━┩
│ airplane  │   22.72% │ 25.25% │   │    nan% │  90.00% │   0.00% │
│ apple     │   46.40% │ 57.43% │   │  48.57% │    nan% │    nan% │
│ backpack  │   54.82% │ 85.15% │   │ 100.00% │  72.00% │   0.00% │
│ banana    │   73.65% │ 96.41% │   │    nan% │ 100.00% │  70.00% │
│     .     │    .     │   .    │   │    .    │    .    │    .    │
│     .     │    .     │   .    │   │    .    │    .    │    .    │
│     .     │    .     │   .    │   │    .    │    .    │    .    │
├───────────┼──────────┼────────┼...┼─────────┼─────────┼─────────┤
│ Total     │   50.36% │ 69.70% │   │  65.48% │  60.31% │  55.37% │
└───────────┴──────────┴────────┴...┴─────────┴─────────┴─────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━┳...┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Label     ┃ AP 50:95 ┃  AP 50 ┃   ┃   AR S ┃   AR M ┃   AR L ┃
┡━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━╇...╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ airplane  │    22.7% │  25.2% │   │   nan% │  90.0% │   0.0% │
│ apple     │    46.4% │  57.4% │   │  48.5% │   nan% │   nan% │
│ backpack  │    54.8% │  85.1% │   │ 100.0% │  72.0% │   0.0% │
│ banana    │    73.6% │  96.4% │   │   nan% │ 100.0% │  70.0% │
│     .     │    .     │   .    │   │   .    │   .    │   .    │
│     .     │    .     │   .    │   │   .    │   .    │   .    │
│     .     │    .     │   .    │   │   .    │   .    │   .    │
├───────────┼──────────┼────────┼...┼────────┼────────┼────────┤
│ Total     │    50.3% │  69.7% │   │  65.4% │  60.3% │  55.3% │
└───────────┴──────────┴────────┴...┴────────┴────────┴────────┘
```

The array of results can be saved in CSV format:
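The collapsed hunk hides the exact call; as a generic standard-library sketch of writing such per-label results to CSV (the `results` numbers are made up, and this is not globox's own writer):

```python
import csv
import io

# Hypothetical per-label metrics, mimicking the evaluation table above.
results = {
    "airplane": {"ap": 0.227, "ap_50": 0.252},
    "apple": {"ap": 0.464, "ap_50": 0.574},
}

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["label", "ap", "ap_50"])
writer.writeheader()
for label, metrics in results.items():
    writer.writerow({"label": label, **metrics})

print(buffer.getvalue())
```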
@@ -182,7 +191,8 @@ Custom evaluations can be achieved with:
evaluation = evaluator.evaluate(
iou_threshold=0.33,
max_detections=1_000,
size_range=(0.0, 10_000))
size_range=(0.0, 10_000)
)

ap = evaluation.ap()
cat_ar = evaluation["cat"].ar
@@ -260,7 +270,7 @@ Saving |1.12s|0.74s|0.42s |4.39s |4.46s |3.75s|3.52s

OpenImage, YOLO and TXT are slower because they store bounding boxes in relative coordinates without the image size, so it must be read from the image file.

The fastest format is COCO and LabelMe (for individual annotation files).
The fastest formats are COCO and LabelMe.
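Recovering the image size does not require decoding any pixels: for PNG, for example, width and height sit at fixed offsets in the IHDR chunk right after the file signature. A stdlib sketch of that idea (illustrative only, not globox's actual reader):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_size(header: bytes) -> tuple[int, int]:
    """Read (width, height) from the first 24 bytes of a PNG file."""
    if header[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # Bytes 8-15 hold the IHDR chunk length and type; bytes 16-23 hold
    # width and height as big-endian unsigned 32-bit integers.
    return struct.unpack(">II", header[16:24])

# A fabricated 24-byte header for a 640x480 image:
fake = PNG_SIGNATURE + struct.pack(">I", 13) + b"IHDR" + struct.pack(">II", 640, 480)
print(png_size(fake))  # (640, 480)
```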

`AnnotationSet.show_stats()`: 0.12 s

2 changes: 1 addition & 1 deletion setup.py
@@ -12,7 +12,7 @@
PYTHON = ">=3.7"

REQUIREMENTS = ["rich", "tqdm", "numpy"]
EXTRA_REQ = ["tox", "pytest", "twine", "build", "pycocotools"]
EXTRA_REQ = ["tox", "pytest", "twine", "build", "pycocotools", "Pillow"]

with open("README.md") as f:
LONG_DESCRIPTION = f.read()
2 changes: 1 addition & 1 deletion src/globox/__version__.py
@@ -1,3 +1,3 @@
VERSION = (1, 3, 0)
VERSION = (2, 0, 0)

__version__ = ".".join(map(str, VERSION))
