
πŸš€ Add PreProcessor to AnomalyModule #2358

Open
wants to merge 53 commits into base: feature/v2
Changes from all commits (53 commits)
5338afa
Created pre-processor
samet-akcay Oct 9, 2024
180c22f
Rename transforms to transform in pre-processor
samet-akcay Oct 9, 2024
7738e38
Remove transforms from datamodules
samet-akcay Oct 9, 2024
a133048
Remove transforms from datasets
samet-akcay Oct 9, 2024
c748a0d
Remove setup_transforms from Engine
samet-akcay Oct 9, 2024
03a2a2e
Add preprocessor to AnomalyModule and models
samet-akcay Oct 9, 2024
6f7399a
Fix tests
samet-akcay Oct 10, 2024
cc5f559
Remove self._transform from AnomalyModule
samet-akcay Oct 10, 2024
4d2e110
revert transforms in datasets
samet-akcay Oct 11, 2024
1e83e57
fix efficient_ad and engine.config tests
samet-akcay Oct 11, 2024
1e05349
Update the upgrade tests
samet-akcay Oct 11, 2024
785d64f
Revert on_load_checkpoint hook to AnomalyModule
samet-akcay Oct 11, 2024
b798243
Remove exportable transform from anomaly module and move to pre-proce…
samet-akcay Oct 15, 2024
4bf6187
Merge branch 'feature/design-simplifications' of github.com:openvinot…
samet-akcay Oct 15, 2024
c942604
Merge main
samet-akcay Oct 16, 2024
ea28833
Add pre-processor to the model graph
samet-akcay Oct 17, 2024
78cf516
Add docstring to pre-processor class
samet-akcay Oct 17, 2024
46fe7e5
Fix win-clip tests
samet-akcay Oct 17, 2024
f058fbb
Update notebooks
samet-akcay Oct 17, 2024
84c39cd
Split the forward logic and move the training to model hooks
samet-akcay Oct 21, 2024
6ebbb23
Set data transforms from preprocessor
samet-akcay Oct 21, 2024
1b0483a
Update the docstrings
samet-akcay Oct 21, 2024
a503be1
Get stage transforms in setup of pre-processor
samet-akcay Oct 22, 2024
427f680
Revert data config yaml files
samet-akcay Oct 23, 2024
2ee60ee
Revert datamodules
samet-akcay Oct 23, 2024
138c7e3
Revert datasets
samet-akcay Oct 23, 2024
2e9a544
Revert notebooks
samet-akcay Oct 23, 2024
cd524b5
remove padim preprocessor
samet-akcay Oct 23, 2024
8cd8f7f
Update the setup logic in pre-processor
samet-akcay Oct 23, 2024
f841150
Update the setup logic in pre-processor
samet-akcay Oct 23, 2024
d07f0b9
Revert datamodules
samet-akcay Oct 23, 2024
760d5e5
Set datamodule transforms property from preprocessor
samet-akcay Oct 24, 2024
83c2084
Revert v1 upgrade tool
samet-akcay Oct 24, 2024
e83f9cf
Fix aupimo notebooks
samet-akcay Oct 24, 2024
721e11f
Add pre-processor unit tests
samet-akcay Oct 24, 2024
f937abf
Increase the test coverage for PreProcessor
samet-akcay Oct 24, 2024
ec3e97c
Add option to disable pre-processor
samet-akcay Oct 25, 2024
9b42d5d
Split setup_transforms to setup_datamodule_transforms and setup_datal…
samet-akcay Oct 25, 2024
8c379c0
Replace batch.update with in-place batch transforms
samet-akcay Oct 25, 2024
db1d543
Remove logger.warning when the default pre-processor is used
samet-akcay Oct 25, 2024
9ec2547
Use predict-transforms explicitly
samet-akcay Oct 25, 2024
ba240be
remove pre-processor and configure_transforms from export mixin
samet-akcay Oct 25, 2024
6ea5369
Rename set_datamodule_transform to set_datamodule_stage_transform
samet-akcay Oct 26, 2024
c71e41c
Remove transforms from datamodules
samet-akcay Oct 29, 2024
b9bb700
Remove transforms from datamodules
samet-akcay Oct 29, 2024
185fec8
Remove transforms from datamodules
samet-akcay Oct 29, 2024
06fd947
Remove transforms from datamodules
samet-akcay Oct 29, 2024
079168e
Remove transforms from datamodules
samet-akcay Oct 29, 2024
1f6555c
Remove transform related keys from data configs
samet-akcay Oct 29, 2024
03196fa
update preprocessor tests
samet-akcay Oct 29, 2024
d579312
Remove setup method from the model implementations
samet-akcay Oct 29, 2024
5e82c34
Remove image size from datamodules in jupyter notebooks
samet-akcay Oct 29, 2024
a1a0548
Modify folder notebook to access the batch from dataset not dataloader
samet-akcay Oct 30, 2024
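Taken together, the commits above move transform ownership out of the datamodules and into a PreProcessor that lives on the model. A minimal, self-contained sketch of that design (hypothetical names and simplified types; the real anomalib classes differ and operate on tensors):

```python
from dataclasses import dataclass, field
from typing import Callable

def resize_256(item: dict) -> dict:
    """Hypothetical stand-in for a torchvision-style Resize transform."""
    return {**item, "size": (256, 256)}

@dataclass
class PreProcessor:
    """Owns the transforms that were previously configured on the datamodule."""
    transform: Callable[[dict], dict] = resize_256

    def __call__(self, batch: list[dict]) -> list[dict]:
        # Apply the transform to every item in the batch.
        return [self.transform(item) for item in batch]

@dataclass
class AnomalyModule:
    """Model base class that now carries its own pre-processor, so
    pre-processing happens inside the model graph, not the dataloader."""
    pre_processor: PreProcessor = field(default_factory=PreProcessor)

    def forward(self, batch: list[dict]) -> list[dict]:
        return self.pre_processor(batch)

model = AnomalyModule()
out = model.forward([{"path": "img.png"}])
print(out[0]["size"])  # -> (256, 256)
```

This shape also matches the "Add option to disable pre-processor" commit: with the pre-processor as a plain attribute, swapping in an identity callable turns it off without touching the datamodule.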
3 changes: 0 additions & 3 deletions configs/data/avenue.yaml
@@ -8,9 +8,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 val_split_mode: from_test
 val_split_ratio: 0.5
 seed: null
3 changes: 0 additions & 3 deletions configs/data/btech.yaml
@@ -5,9 +5,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
3 changes: 0 additions & 3 deletions configs/data/folder.yaml
@@ -12,9 +12,6 @@ init_args:
 eval_batch_size: 32
 num_workers: 8
 task: segmentation
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
3 changes: 0 additions & 3 deletions configs/data/kolektor.yaml
@@ -4,9 +4,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
3 changes: 0 additions & 3 deletions configs/data/mvtec.yaml
@@ -6,9 +6,6 @@ init_args:
 eval_batch_size: 32
 num_workers: 8
 task: segmentation
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
3 changes: 0 additions & 3 deletions configs/data/mvtec_3d.yaml
@@ -5,9 +5,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
3 changes: 0 additions & 3 deletions configs/data/shanghaitech.yaml
@@ -8,9 +8,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 val_split_mode: FROM_TEST
 val_split_ratio: 0.5
 seed: null
3 changes: 0 additions & 3 deletions configs/data/ucsd_ped.yaml
@@ -8,9 +8,6 @@ init_args:
 train_batch_size: 8
 eval_batch_size: 1
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 val_split_mode: FROM_TEST
 val_split_ratio: 0.5
 seed: null
3 changes: 0 additions & 3 deletions configs/data/visa.yaml
@@ -5,9 +5,6 @@ init_args:
 train_batch_size: 32
 eval_batch_size: 32
 num_workers: 8
-transform: null
-train_transform: null
-eval_transform: null
 test_split_mode: from_dir
 test_split_ratio: 0.2
 val_split_mode: same_as_test
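All nine data configs above drop the same three keys (`transform`, `train_transform`, `eval_transform`). A quick sanity check of that invariant, run against a cleaned-up config body (the avenue.yaml `init_args` from the diff, inlined as a string purely for illustration):

```python
# init_args of the cleaned avenue.yaml, inlined for illustration.
cleaned = """\
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
val_split_mode: from_test
val_split_ratio: 0.5
seed: null
"""

removed_keys = {"transform", "train_transform", "eval_transform"}
# Take the key before the first ":" on each non-empty line.
present_keys = {line.split(":")[0].strip() for line in cleaned.strip().splitlines()}

# After this PR, no data config should mention a transform key.
assert present_keys.isdisjoint(removed_keys)
print("no transform keys left")
```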
5 changes: 2 additions & 3 deletions notebooks/100_datamodules/101_btech.ipynb
@@ -48,7 +48,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"BTech\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"BTech\""
 ]
 },
 {
@@ -106,7 +106,6 @@
 "btech_datamodule = BTech(\n",
 " root=dataset_root,\n",
 " category=\"01\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=0,\n",
@@ -378,7 +377,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 },
 "orig_nbformat": 4,
 "vscode": {
5 changes: 2 additions & 3 deletions notebooks/100_datamodules/102_mvtec.ipynb
@@ -58,7 +58,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"MVTec\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"MVTec\""
 ]
 },
 {
@@ -84,7 +84,6 @@
 "mvtec_datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"bottle\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=0,\n",
@@ -345,7 +344,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 },
 "orig_nbformat": 4,
 "vscode": {
21 changes: 10 additions & 11 deletions notebooks/100_datamodules/103_folder.ipynb
@@ -33,7 +33,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 28,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -42,7 +42,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"hazelnut_toy\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"hazelnut_toy\""
 ]
 },
 {
@@ -63,7 +63,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 29,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -91,7 +91,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 30,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -102,7 +102,6 @@
 " abnormal_dir=\"crack\",\n",
 " task=TaskType.SEGMENTATION,\n",
 " mask_dir=dataset_root / \"mask\" / \"crack\",\n",
-" image_size=(256, 256),\n",
 ")\n",
 "folder_datamodule.setup()"
 ]
@@ -114,7 +113,7 @@
 "outputs": [],
 "source": [
 "# Train images\n",
-"i, data = next(enumerate(folder_datamodule.train_dataloader()))\n",
+"data = next(iter(folder_datamodule.train_data))\n",
 "print(data.image.shape)"
 ]
 },
@@ -125,7 +124,7 @@
 "outputs": [],
 "source": [
 "# Test images\n",
-"i, data = next(enumerate(folder_datamodule.test_dataloader()))\n",
+"data = next(iter(folder_datamodule.test_data))\n",
 "print(data.image.shape, data.gt_mask.shape)"
 ]
 },
@@ -143,8 +142,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"img = to_pil_image(data.image[0].clone())\n",
-"msk = to_pil_image(data.gt_mask[0].int() * 255).convert(\"RGB\")\n",
+"img = to_pil_image(data.image.clone())\n",
+"msk = to_pil_image(data.gt_mask.int() * 255).convert(\"RGB\")\n",
 "\n",
 "Image.fromarray(np.hstack((np.array(img), np.array(msk))))"
 ]
@@ -187,7 +186,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 36,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -369,7 +368,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 },
 "orig_nbformat": 4,
 "vscode": {
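The folder-notebook diff above also shows the new access pattern: items are no longer dicts pulled out of `enumerate(dataloader)` but objects with typed fields (`data.image`, `data.gt_mask`). A toy sketch of that change (`DatasetItem` is a hypothetical stand-in; the real anomalib item types carry more fields such as `image_path` and `gt_label`, and hold tensors rather than lists):

```python
from dataclasses import dataclass

@dataclass
class DatasetItem:
    """Hypothetical stand-in for an anomalib v2 dataset item."""
    image: list[float]  # placeholder for an image tensor
    gt_mask: list[int]  # placeholder for a ground-truth mask tensor

train_data = [DatasetItem(image=[0.1, 0.2], gt_mask=[0, 1])]

# v1 style (removed in these notebooks):
#   i, data = next(enumerate(dataloader)); data["image"]
# v2 style: iterate the dataset directly and use attribute access.
data = next(iter(train_data))
print(data.image, data.gt_mask)  # -> [0.1, 0.2] [0, 1]
```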
2 changes: 1 addition & 1 deletion notebooks/100_datamodules/104_tiling.ipynb
@@ -44,7 +44,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"MVTec\" / \"transistor\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"MVTec\" / \"transistor\""
 ]
 },
 {
5 changes: 2 additions & 3 deletions notebooks/200_models/201_fastflow.ipynb
@@ -44,7 +44,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"MVTec\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"MVTec\""
 ]
 },
 {
@@ -120,7 +120,6 @@
 "datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"bottle\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=0,\n",
@@ -555,7 +554,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 },
 "orig_nbformat": 4,
 "vscode": {
5 changes: 2 additions & 3 deletions notebooks/600_loggers/601_mlflow_logging.ipynb
@@ -135,7 +135,7 @@
 "# NOTE: Provide the path to the dataset root directory.\n",
 "# If the datasets is not downloaded, it will be downloaded\n",
 "# to this directory.\n",
-"dataset_root = Path.cwd().parent / \"datasets\" / \"MVTec\""
+"dataset_root = Path.cwd().parent.parent / \"datasets\" / \"MVTec\""
 ]
 },
 {
@@ -197,7 +197,6 @@
 "datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"bottle\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=24,\n",
@@ -421,7 +420,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 }
 },
 "nbformat": 4,
7 changes: 2 additions & 5 deletions notebooks/700_metrics/701a_aupimo.ipynb
@@ -140,7 +140,6 @@
 "datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"leather\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=8,\n",
@@ -357,9 +356,7 @@
 ")\n",
 "\n",
 "for batch in predictions:\n",
-" anomaly_maps = batch[\"anomaly_maps\"].squeeze(dim=1)\n",
-" masks = batch[\"mask\"]\n",
-" aupimo.update(anomaly_maps=anomaly_maps, masks=masks)"
+" aupimo.update(anomaly_maps=batch.anomaly_map, masks=batch.gt_mask)"
 ]
 },
 {
@@ -534,7 +531,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.8"
 },
 "orig_nbformat": 4
 },
9 changes: 4 additions & 5 deletions notebooks/700_metrics/701b_aupimo_advanced_i.ipynb
@@ -164,7 +164,6 @@
 "datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"leather\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=8,\n",
@@ -219,10 +218,10 @@
 "labels = []\n",
 "image_paths = []\n",
 "for batch in predictions:\n",
-" anomaly_maps.append(batch_anomaly_maps := batch[\"anomaly_maps\"].squeeze(dim=1))\n",
-" masks.append(batch_masks := batch[\"mask\"])\n",
-" labels.append(batch[\"label\"])\n",
-" image_paths.append(batch[\"image_path\"])\n",
+" anomaly_maps.append(batch_anomaly_maps := batch.anomaly_map)\n",
+" masks.append(batch_masks := batch.gt_mask)\n",
+" labels.append(batch.gt_label)\n",
+" image_paths.append(batch.image_path)\n",
 " aupimo.update(anomaly_maps=batch_anomaly_maps, masks=batch_masks)\n",
 "\n",
 "# list[list[str]] -> list[str]\n",
9 changes: 4 additions & 5 deletions notebooks/700_metrics/701c_aupimo_advanced_ii.ipynb
@@ -158,7 +158,6 @@
 "datamodule = MVTec(\n",
 " root=dataset_root,\n",
 " category=\"leather\",\n",
-" image_size=256,\n",
 " train_batch_size=32,\n",
 " eval_batch_size=32,\n",
 " num_workers=8,\n",
@@ -213,10 +212,10 @@
 "labels = []\n",
 "image_paths = []\n",
 "for batch in predictions:\n",
-" anomaly_maps.append(batch_anomaly_maps := batch[\"anomaly_maps\"].squeeze(dim=1))\n",
-" masks.append(batch_masks := batch[\"mask\"])\n",
-" labels.append(batch[\"label\"])\n",
-" image_paths.append(batch[\"image_path\"])\n",
+" anomaly_maps.append(batch_anomaly_maps := batch.anomaly_map)\n",
+" masks.append(batch_masks := batch.gt_mask)\n",
+" labels.append(batch.gt_label)\n",
+" image_paths.append(batch.image_path)\n",
 " aupimo.update(anomaly_maps=batch_anomaly_maps, masks=batch_masks)\n",
 "\n",
 "# list[list[str]] -> list[str]\n",