
πŸš€ Add PreProcessor to AnomalyModule #2358

Open
wants to merge 53 commits into base: feature/v2

Conversation

samet-akcay
Contributor

πŸ“ Description

This PR:

  • Adds a PreProcessor to AnomalyModule (see the usage sketch below)
  • Removes transforms from the datamodule datasets
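A minimal, hypothetical sketch of what this could look like from the user's side. The import path, the PreProcessor constructor signature, and the model's pre_processor argument are assumptions for illustration and are not confirmed by this description:

from torchvision.transforms.v2 import Compose, Normalize, Resize

from anomalib.models import Padim  # model choice is illustrative
from anomalib.pre_processing import PreProcessor  # import path assumed from this PR's file layout

# Build the transforms once and attach them to the model via a PreProcessor,
# instead of configuring them on the datamodule/datasets.
transform = Compose([
    Resize((256, 256)),
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
pre_processor = PreProcessor(transform=transform)  # keyword name is an assumption

# Passing a PreProcessor instance uses it directly; passing a bool falls back to
# the model's configure_pre_processor() default (see the reviewed snippet further down).
model = Padim(pre_processor=pre_processor)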

✨ Changes

Select what type of change your PR is:

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • πŸ”¨ Refactor (non-breaking change which refactors the code base)
  • πŸš€ New feature (non-breaking change which adds functionality)
  • πŸ’₯ Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • πŸ“š Documentation update
  • πŸ”’ Security update

βœ… Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • πŸ“‹ I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • πŸ“š I have made the necessary updates to the documentation (if applicable).
  • πŸ§ͺ I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

@jpcbertoldo
Contributor

A sub-feature request that would fit here: (optionally?) keep both the transformed and original image/mask in the batch.

So instead of

            image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
            batch.update(image=image, gt_mask=gt_mask)

something like

            batch.update(image_original=batch.image, gt_mask_original=batch.gt_mask)
            image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
            batch.update(image=image, gt_mask=gt_mask)

It's quite practical to have these when using the API (I've re-implemented this in my local copy 100 times haha).

@samet-akcay
Contributor Author

A sub-feature request that would fit here: (optionally?) keep both the transformed and original image/mask in the batch. […]

Yeah, the idea is to keep batch.image and batch.gt_mask original outside the model. It is not working that way though :)

@jpcbertoldo
Contributor

yeah, the idea is to keep batch.image and batch.gt_mask original outside the model

Exactly, makes sense :)

But it's also useful to be able to access the transformed one (e.g. when using augmentations).

it is not working that way though :)

I didn't get this one. Is it because it's not backward-compatible?

@samet-akcay
Contributor Author

[…] I didn't get this one. Is it because it's not backward-compatible?

Oh, I meant that it is currently not working; I need to fix it :)



codecov bot commented Oct 24, 2024

Codecov Report

Attention: Patch coverage is 88.59316% with 30 lines in your changes missing coverage. Please review.

Please upload report for BASE (feature/v2@95115f9). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
src/anomalib/pre_processing/utils/transform.py | 85.50% | 10 Missing ⚠️
src/anomalib/pre_processing/pre_processing.py | 86.76% | 9 Missing ⚠️
.../anomalib/models/components/base/anomaly_module.py | 83.33% | 3 Missing ⚠️
src/anomalib/models/image/uflow/lightning_model.py | 90.00% | 2 Missing ⚠️
...rc/anomalib/models/image/csflow/lightning_model.py | 80.00% | 1 Missing ⚠️
...malib/models/image/efficient_ad/lightning_model.py | 91.66% | 1 Missing ⚠️
.../anomalib/models/image/fastflow/lightning_model.py | 80.00% | 1 Missing ⚠️
.../anomalib/models/image/ganomaly/lightning_model.py | 80.00% | 1 Missing ⚠️
...dels/image/reverse_distillation/lightning_model.py | 80.00% | 1 Missing ⚠️
...c/anomalib/models/image/winclip/lightning_model.py | 87.50% | 1 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##             feature/v2    #2358   +/-   ##
=============================================
  Coverage              ?   78.14%           
=============================================
  Files                 ?      297           
  Lines                 ?    12990           
  Branches              ?        0           
=============================================
  Hits                  ?    10151           
  Misses                ?     2839           
  Partials              ?        0           


@djdameln (Contributor) left a comment

Thanks, feels like we're getting there in terms of design. I have one major concern regarding the configuration of the image size, and several smaller comments.

Inline review comments (mostly outdated/resolved) on src/anomalib/models/components/base/anomaly_module.py and src/anomalib/pre_processing/pre_processing.py.
@ashwinvaidya17 (Collaborator) left a comment

Thanks. I have a few minor comments

Comment on lines +58 to +69
# Handle pre-processor
# True -> use default pre-processor
# False -> no pre-processor
# PreProcessor -> use the provided pre-processor
if isinstance(pre_processor, PreProcessor):
self.pre_processor = pre_processor
elif isinstance(pre_processor, bool):
self.pre_processor = self.configure_pre_processor()
else:
msg = f"Invalid pre-processor type: {type(pre_processor)}"
raise TypeError(msg)

Collaborator

Minor comment, but can we move this to a separate method?

@samet-akcay (Contributor, Author) commented on Oct 30, 2024

Which one would you prefer: _init_pre_processor, _resolve_pre_processor, _handle_pre_processor, or _setup_pre_processor?
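For illustration, a sketch of what such a helper could look like, assuming the name _resolve_pre_processor is picked. The name and the None return for False are assumptions; the False branch here follows the comment in the snippet above rather than the snippet's current behaviour:

def _resolve_pre_processor(self, pre_processor: PreProcessor | bool) -> PreProcessor | None:
    """Resolve the constructor argument into a PreProcessor instance, or None to disable it.

    Illustrative sketch on AnomalyModule only; the final name and return contract may differ.
    """
    if isinstance(pre_processor, PreProcessor):
        # An explicitly provided pre-processor is used as-is.
        return pre_processor
    if isinstance(pre_processor, bool):
        # True -> default pre-processor, False -> no pre-processing.
        return self.configure_pre_processor() if pre_processor else None
    msg = f"Invalid pre-processor type: {type(pre_processor)}"
    raise TypeError(msg)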

@@ -220,30 +250,12 @@ def input_size(self) -> tuple[int, int] | None:
The effective input size is the size of the input tensor after the transform has been applied. If the transform
is not set, or if the transform does not change the shape of the input tensor, this method will return None.
"""
-        transform = self.transform or self.configure_transforms()
+        transform = self.pre_processor.train_transform
Collaborator

Should we add a check to ascertain whether train_transform is present? Models like VlmAD might not have train transforms passed to them. I feel it should pick up the val or pred transform if the train transform is not available.
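A rough sketch of such a fallback inside input_size; the val_transform and predict_transform attribute names are assumptions about the PreProcessor interface, not confirmed by this diff:

# Hypothetical fallback when no train transform is configured:
transform = (
    self.pre_processor.train_transform
    or getattr(self.pre_processor, "val_transform", None)
    or getattr(self.pre_processor, "predict_transform", None)
)
if transform is None:
    return None  # no transform at all, so the effective input size is unknown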

@@ -79,6 +93,10 @@ def _setup(self) -> None:
initialization.
"""

+    def configure_callbacks(self) -> Sequence[Callback] | Callback:
+        """Configure default callbacks for AnomalyModule."""
+        return [self.pre_processor]
Collaborator

How can we ensure that the pre_processor callback is called before the other callbacks? For example, is the metrics callback dependent on pre-processing running first?

@samet-akcay (Contributor, Author)

In the base model, we will need to ensure the ordering of the callback list, I guess. For the child classes that inherit from this one, we could have something like this:

def configure_callbacks(self) -> Sequence[Callback]:
    """Configure callbacks with parent callbacks preserved."""
    # Get parent callbacks first
    parent_callbacks = super().configure_callbacks()
    
    # Add child-specific callbacks
    callbacks = [
        *parent_callbacks,      # Parent callbacks first
        MyCustomCallback(),     # Then child callbacks
        AnotherCallback(),
    ]
    return callbacks

Labels: none yet
Projects: none yet
Development

Successfully merging this pull request may close these issues.

πŸ“‹ [TASK] Integrate Pre-processing as AnomalibModule Attribute
4 participants