Merge branch 'main' into update-packages

samet-akcay authored Oct 24, 2024
2 parents b2bc08b + 31952db commit a8bf7cd
Showing 7 changed files with 50 additions and 28 deletions.
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -15,10 +15,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Changed

+- Add duration of experiments in seconds in the benchmark CSV result by [mzweilin](https://github.com/mzweilin) in https://github.com/openvinotoolkit/anomalib/pull/2392

### Deprecated

### Fixed

+- Make single GPU benchmarking 5x more efficient by [mzweilin](https://github.com/mzweilin) in https://github.com/openvinotoolkit/anomalib/pull/2390

### New Contributors

**Full Changelog**:
30 changes: 16 additions & 14 deletions docs/source/markdown/get_started/anomalib.md
@@ -17,7 +17,7 @@ The installer can be installed using the following commands:
:::{tab-item} API
:sync: label-1

-```{literalinclude} ../../snippets/install/pypi.txt
+```{literalinclude} /snippets/install/pypi.txt
:language: bash
```

@@ -26,7 +26,7 @@ The installer can be installed using the following commands:
:::{tab-item} Source
:sync: label-2

-```{literalinclude} ../../snippets/install/source.txt
+```{literalinclude} /snippets/install/source.txt
:language: bash
```

@@ -42,22 +42,22 @@ The next section demonstrates how to install the full package using the CLI inst
:::::{dropdown} Installing the Full Package
After installing anomalib, you can install the full package using the following commands:

-```{literalinclude} ../../snippets/install/anomalib_help.txt
+```{literalinclude} /snippets/install/anomalib_help.txt
:language: bash
```

As can be seen above, the only available sub-command is `install` at the moment.
The `install` sub-command has options to install either the full package or the
specific components of the package.

-```{literalinclude} ../../snippets/install/anomalib_install_help.txt
+```{literalinclude} /snippets/install/anomalib_install_help.txt
:language: bash
```

By default the `install` sub-command installs the full package. If you want to
install only the specific components of the package, you can use the `--option` flag.

-```{literalinclude} ../../snippets/install/anomalib_install.txt
+```{literalinclude} /snippets/install/anomalib_install.txt
:language: bash
```

@@ -66,21 +66,23 @@ After following these steps, your environment will be ready to use anomalib!

## {octicon}`mortar-board` Training

-Anomalib supports both API and CLI-based training. The API is more flexible and allows for more customization, while the CLI training utilizes command line interfaces, and might be easier for those who would like to use anomalib off-the-shelf.
+Anomalib supports both API and CLI-based training. The API is more flexible
+and allows for more customization, while the CLI training utilizes command line
+interfaces, and might be easier for those who would like to use anomalib off-the-shelf.

::::{tab-set}

:::{tab-item} API

-```{literalinclude} ../../snippets/train/api/default.txt
+```{literalinclude} /snippets/train/api/default.txt
:language: python
```

:::

:::{tab-item} CLI

-```{literalinclude} ../../snippets/train/cli/default.txt
+```{literalinclude} /snippets/train/cli/default.txt
:language: bash
```

@@ -100,7 +102,7 @@ Anomalib includes multiple inferencing scripts, including Torch, Lightning, Grad
:::{tab-item} API
:sync: label-1

-```{literalinclude} ../../snippets/inference/api/lightning.txt
+```{literalinclude} /snippets/inference/api/lightning.txt
:language: python
```

@@ -109,7 +111,7 @@ Anomalib includes multiple inferencing scripts, including Torch, Lightning, Grad
:::{tab-item} CLI
:sync: label-2

-```{literalinclude} ../../snippets/inference/cli/lightning.txt
+```{literalinclude} /snippets/inference/cli/lightning.txt
:language: bash
```

@@ -201,15 +203,15 @@ Anomalib supports hyper-parameter optimization using [wandb](https://wandb.ai/)

:::{tab-item} CLI

-```{literalinclude} ../../snippets/pipelines/hpo/cli.txt
+```{literalinclude} /snippets/pipelines/hpo/cli.txt
:language: bash
```

:::

:::{tab-item} API

-```{literalinclude} ../../snippets/pipelines/hpo/api.txt
+```{literalinclude} /snippets/pipelines/hpo/api.txt
:language: bash
```

@@ -233,15 +235,15 @@ To run a training experiment with experiment tracking, you will need the followi

By using the configuration file above, you can run the experiment with the following command:

-```{literalinclude} ../../snippets/logging/cli.txt
+```{literalinclude} /snippets/logging/cli.txt
:language: bash
```

:::

:::{tab-item} API

-```{literalinclude} ../../snippets/logging/api.txt
+```{literalinclude} /snippets/logging/api.txt
:language: bash
```

11 changes: 7 additions & 4 deletions docs/source/snippets/train/api/default.txt
@@ -1,12 +1,15 @@
# Import the required modules
from anomalib.data import MVTec
-from anomalib.models import Patchcore
from anomalib.engine import Engine
+from anomalib.models import EfficientAd

# Initialize the datamodule, model and engine
-datamodule = MVTec()
-model = Patchcore()
-engine = Engine()
+datamodule = MVTec(train_batch_size=1)
+model = EfficientAd()
+engine = Engine(max_epochs=5)

# Train the model
engine.fit(datamodule=datamodule, model=model)
+
+# Continue from a checkpoint
+engine.fit(datamodule=datamodule, model=model, ckpt_path="path/to/checkpoint.ckpt")
7 changes: 5 additions & 2 deletions docs/source/snippets/train/cli/default.txt
@@ -2,10 +2,13 @@
anomalib train -h

# Train by using the default values.
-anomalib train --model Patchcore --data anomalib.data.MVTec
+anomalib train --model EfficientAd --data anomalib.data.MVTec --data.train_batch_size 1

# Train by overriding arguments.
-anomalib train --model Patchcore --data anomalib.data.MVTec --data.category transistor
+anomalib train --model EfficientAd --data anomalib.data.MVTec --data.train_batch_size 1 --data.category transistor

# Train by using a config file.
anomalib train --config <path/to/config>

# Continue training from a checkpoint
anomalib train --config <path/to/config> --ckpt_path <path/to/checkpoint.ckpt>
11 changes: 11 additions & 0 deletions src/anomalib/pipelines/benchmark/job.py
@@ -4,6 +4,7 @@
# SPDX-License-Identifier: Apache-2.0

import logging
+import time
from datetime import datetime
from pathlib import Path
from tempfile import TemporaryDirectory
@@ -48,6 +49,7 @@ def run(
        task_id: int | None = None,
    ) -> dict[str, Any]:
        """Run the benchmark."""
+        job_start_time = time.time()
        devices: str | list[int] = "auto"
        if task_id is not None:
            devices = [task_id]
@@ -59,8 +61,16 @@
                devices=devices,
                default_root_dir=temp_dir,
            )
+            fit_start_time = time.time()
            engine.fit(self.model, self.datamodule)
+            test_start_time = time.time()
            test_results = engine.test(self.model, self.datamodule)
+            job_end_time = time.time()
+            durations = {
+                "job_duration": job_end_time - job_start_time,
+                "fit_duration": test_start_time - fit_start_time,
+                "test_duration": job_end_time - test_start_time,
+            }
        # TODO(ashwinvaidya17): Restore throughput
        # https://github.com/openvinotoolkit/anomalib/issues/2054
        output = {
@@ -69,6 +79,7 @@
"model": self.model.__class__.__name__,
"data": self.datamodule.__class__.__name__,
"category": self.datamodule.category,
**durations,
**test_results[0],
}
logger.info(f"Completed with result {output}")
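The timing pattern this diff adds to the benchmark job — wrap the fit and test calls with `time.time()` snapshots and merge the computed durations into the flat result row — can be sketched in isolation. This is a stand-alone illustration with stub callables, not the actual `BenchmarkJob`/`Engine` API:

```python
import time


def run_benchmark(fit, test) -> dict:
    """Time a fit/test cycle the way the benchmark job above does."""
    job_start_time = time.time()
    # (Device selection and trainer setup happen here in the real job.)
    fit_start_time = time.time()
    fit()
    test_start_time = time.time()
    test_results = test()
    job_end_time = time.time()
    durations = {
        "job_duration": job_end_time - job_start_time,
        "fit_duration": test_start_time - fit_start_time,
        "test_duration": job_end_time - test_start_time,
    }
    # Durations are merged alongside the test metrics, so each CSV row
    # carries its own wall-clock cost in seconds.
    return {**durations, **test_results}


result = run_benchmark(lambda: time.sleep(0.01), lambda: {"image_AUROC": 0.95})
```

Because the three duration keys land in the same dict as the metrics, they appear as extra columns in the benchmark CSV without any change to the writer.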
11 changes: 6 additions & 5 deletions src/anomalib/pipelines/benchmark/pipeline.py
@@ -20,11 +20,12 @@ def _setup_runners(args: dict) -> list[Runner]:
    accelerators = args["accelerator"] if isinstance(args["accelerator"], list) else [args["accelerator"]]
    runners: list[Runner] = []
    for accelerator in accelerators:
-        if accelerator == "cpu":
-            runners.append(SerialRunner(BenchmarkJobGenerator("cpu")))
-        elif accelerator == "cuda":
-            runners.append(ParallelRunner(BenchmarkJobGenerator("cuda"), n_jobs=torch.cuda.device_count()))
-        else:
+        if accelerator not in {"cpu", "cuda"}:
            msg = f"Unsupported accelerator: {accelerator}"
            raise ValueError(msg)
+        device_count = torch.cuda.device_count()
+        if device_count <= 1:
+            runners.append(SerialRunner(BenchmarkJobGenerator(accelerator)))
+        else:
+            runners.append(ParallelRunner(BenchmarkJobGenerator(accelerator), n_jobs=device_count))
    return runners
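The refactored control flow reads: reject unknown accelerators up front, then choose a serial or parallel runner purely by CUDA device count, so a single-GPU machine no longer pays the overhead of the parallel path. A stand-alone sketch with stub classes (the real `SerialRunner`/`ParallelRunner` wrap job generators and live in `anomalib.pipelines`; these stubs only mirror the branching):

```python
from dataclasses import dataclass


@dataclass
class SerialRunner:
    """Stub standing in for anomalib's SerialRunner."""

    accelerator: str


@dataclass
class ParallelRunner:
    """Stub standing in for anomalib's ParallelRunner."""

    accelerator: str
    n_jobs: int


def setup_runners(accelerators: list[str], device_count: int) -> list:
    """Mirror the control flow of the refactored _setup_runners."""
    runners: list = []
    for accelerator in accelerators:
        if accelerator not in {"cpu", "cuda"}:
            msg = f"Unsupported accelerator: {accelerator}"
            raise ValueError(msg)
        # One device (or a CPU-only machine) gets a serial runner;
        # multiple CUDA devices fan out to one job per device.
        if device_count <= 1:
            runners.append(SerialRunner(accelerator))
        else:
            runners.append(ParallelRunner(accelerator, n_jobs=device_count))
    return runners
```

Note the validation happens before any runner is built, so a typo in the accelerator list fails fast instead of silently falling through to a default branch.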
4 changes: 1 addition & 3 deletions src/anomalib/utils/logging.py
@@ -74,10 +74,8 @@ def redirect_logs(log_file: str) -> None:
"""
Path(log_file).parent.mkdir(exist_ok=True, parents=True)
logger_file_handler = logging.FileHandler(log_file)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
format_string = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(format=format_string, level=logging.DEBUG, handlers=[logger_file_handler])
logging.basicConfig(format=format_string, handlers=[logger_file_handler])
logging.captureWarnings(capture=True)
# remove other handlers from all loggers
loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
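The net effect of this change is that the root logger keeps Python's default WARNING level instead of being forced to DEBUG, while log records still flow through the file handler. A minimal stand-alone demonstration (not anomalib's actual `redirect_logs`; `force=True` is added only so the demo works in an interpreter where logging is already configured):

```python
import logging
import tempfile
from pathlib import Path

log_file = Path(tempfile.mkdtemp()) / "run.log"
handler = logging.FileHandler(log_file)
format_string = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Without level=..., basicConfig leaves the root logger at the default WARNING.
logging.basicConfig(format=format_string, handlers=[handler], force=True)
logging.captureWarnings(capture=True)

logging.getLogger("demo").warning("visible")  # passes the WARNING threshold
logging.getLogger("demo").debug("hidden")     # filtered at the default level
handler.flush()
contents = log_file.read_text()
```

Dropping `level=logging.DEBUG` avoids flooding the log file with every library's debug chatter, which is one part of why single-GPU benchmarking got cheaper in this release.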
