🚀 Add PreProcessor to AnomalyModule #2358
base: feature/v2
Conversation
A sub-feature request that would fit here: (optionally?) keep both the transformed and the original image/mask in the batch. So instead of

image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
batch.update(image=image, gt_mask=gt_mask)

something like

batch.update(image_original=batch.image, gt_mask_original=batch.gt_mask)
image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
batch.update(image=image, gt_mask=gt_mask)

It's quite practical to have these when using the API (I've re-implemented this in my local copy 100 times, haha).
Yeah, the idea is to keep the original image in the batch.
Exactly, makes sense :) But it's also useful to be able to access the transformed one (e.g. when using augmentations).
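For illustration, a minimal sketch of how a pre-processing hook could keep both versions accessible. Everything below is an assumption for the sake of the example: the *_original keys follow the suggestion above, and the hook name, self.train_transform, and the Batch.update signature are not confirmed anomalib API.

def on_after_batch_transfer(self, batch, dataloader_idx: int = 0):
    # Stash the untouched tensors under *_original keys before transforming
    batch.update(image_original=batch.image, gt_mask_original=batch.gt_mask)
    # Apply the configured transform and overwrite the working fields
    image, gt_mask = self.train_transform(batch.image, batch.gt_mask)
    batch.update(image=image, gt_mask=gt_mask)
    return batch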
Didn't get this. Is it because it's not backward compatible?
Oh, I meant it is currently not working, I need to fix it :)
Thanks, feels like we're getting there in terms of design. I have one major concern regarding the configuration of the image size, and several smaller comments.
Thanks. I have a few minor comments.
# Handle pre-processor
# True -> use default pre-processor
# False -> no pre-processor
# PreProcessor -> use the provided pre-processor
if isinstance(pre_processor, PreProcessor):
    self.pre_processor = pre_processor
elif isinstance(pre_processor, bool):
    self.pre_processor = self.configure_pre_processor()
else:
    msg = f"Invalid pre-processor type: {type(pre_processor)}"
    raise TypeError(msg)
Minor comment, but can we move this to a separate method?
Which one would you prefer: _init_pre_processor, _resolve_pre_processor, _handle_pre_processor, or _setup_pre_processor?
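For illustration only, taking one of those names as a placeholder (the name is not a decided API), the logic from the snippet above could be moved into a helper like the sketch below, and the constructor would then just call self.pre_processor = self._resolve_pre_processor(pre_processor):

def _resolve_pre_processor(self, pre_processor: PreProcessor | bool) -> PreProcessor:
    """Resolve the constructor argument into a PreProcessor instance (sketch).

    Mirrors the inline logic above: a PreProcessor instance is used as-is,
    a bool falls back to the default pre-processor, anything else is rejected.
    """
    if isinstance(pre_processor, PreProcessor):
        return pre_processor
    if isinstance(pre_processor, bool):
        return self.configure_pre_processor()
    msg = f"Invalid pre-processor type: {type(pre_processor)}"
    raise TypeError(msg)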
@@ -220,30 +250,12 @@ def input_size(self) -> tuple[int, int] | None:
     The effective input size is the size of the input tensor after the transform has been applied. If the transform
     is not set, or if the transform does not change the shape of the input tensor, this method will return None.
     """
-    transform = self.transform or self.configure_transforms()
+    transform = self.pre_processor.train_transform
Should we add a check to ascertain whether train_transform is present? Models like VlmAD might not have a train transform passed to them. I feel it should pick up the val or predict transform if the train transform is not available.
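A hedged sketch of that fallback inside input_size; note that the val_transform and predict_transform attribute names are assumptions here, not confirmed PreProcessor API:

# Prefer the train transform, otherwise fall back to val/predict transforms
# (attribute names below are assumed for illustration only).
transform = (
    self.pre_processor.train_transform
    or getattr(self.pre_processor, "val_transform", None)
    or getattr(self.pre_processor, "predict_transform", None)
)
if transform is None:
    return None
# ... the rest of the property would then derive the size from `transform`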
@@ -79,6 +93,10 @@ def _setup(self) -> None:
         initialization.
         """

+    def configure_callbacks(self) -> Sequence[Callback] | Callback:
+        """Configure default callbacks for AnomalyModule."""
+        return [self.pre_processor]
How can we ensure that the pre_processor callback is called before the other callbacks? For example, is the metrics callback dependent on pre-processing running first?
In the base model, we will need to ensure the ordering of the callback list, I guess. For the child classes that inherit from this one, we could have something like this:
def configure_callbacks(self) -> Sequence[Callback]:
    """Configure callbacks with parent callbacks preserved."""
    # Get parent callbacks first
    parent_callbacks = super().configure_callbacks()
    # Add child-specific callbacks
    callbacks = [
        *parent_callbacks,  # Parent callbacks first
        MyCustomCallback(),  # Then child callbacks
        AnotherCallback(),
    ]
    return callbacks
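As a usage note on that pattern, a concrete child model would then only need something like the sketch below (MyAnomalyModel and MyCustomCallback are hypothetical names):

class MyAnomalyModel(AnomalyModule):
    def configure_callbacks(self) -> Sequence[Callback]:
        # Parent callbacks (e.g. the pre-processor) stay in front so their
        # hooks run before the child-specific callback's hooks.
        return [*super().configure_callbacks(), MyCustomCallback()]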
📝 Description
This PR adds PreProcessor to AnomalyModule.
✨ Changes
Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.