Describe the bug

Inside the engine code, the transforms from the datamodule and the dataloader are taken before the ones from the model:

anomalib/src/anomalib/engine/engine.py
Lines 382 to 398 in 2bd2842

This can cause problems if the datamodule was never used inside a trainer. In that case, looking at the following code, the transforms returned are just a resize (since there is no trainer to take the correct model transforms from), so the potential normalization from the model is missed:

anomalib/src/anomalib/data/base/datamodule.py
Lines 266 to 277 in 2bd2842
This happens only if you call setup() and the dataloaders on the datamodule outside the trainer (since the trainer is not set in that case), like this:
datamodule = MVTec(..)
datamodule.setup()  # this calls self.train_transforms mentioned above
datamodule.test_dataloader()  # only a resize transform here
There is also code below to reproduce this.
A workaround I have at the moment is doing this:
dataloader.dataset.transform = None
This way the dataloader transforms are ignored and the ones from the model are used. Another possibility I see is manually setting the transforms of the datamodule with model.configure_transforms(image_size).
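For illustration, here is a minimal, untested sketch combining the two ideas: take the transform the model would configure and assign it to the dataset behind the dataloader. Padim is only an example model, and the configure_transforms call signature is assumed from the snippet above.

from anomalib.data import MVTec
from anomalib.models import Padim  # example model; any model exposing configure_transforms() should work the same way

datamodule = MVTec(root="../datasets/MVTec", image_size=(42, 42), category="bottle", num_workers=0)
datamodule.setup()

model = Padim()
dataloader = datamodule.test_dataloader()
# Swap the dataset transform for the one the model would configure,
# so both the resize and the model's normalization are applied.
dataloader.dataset.transform = model.configure_transforms((42, 42))
print(dataloader.dataset.transform)

The printed transform should then include the model's normalization rather than just the resize.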
Dataset
N/A
Model
N/A
Steps to reproduce the behavior
from anomalib.data import MVTec

#### CONFIGURE THIS ####
mvtec_path = "../datasets/MVTec"
#####

data = MVTec(root=mvtec_path, image_size=(42, 42), category="bottle", num_workers=0)
data.setup()
print(data.test_dataloader().dataset.transform)  # prints only a Resize transform, no model normalization
OS information
OS information:
OS: Win11
Python version: 3.10.3
Anomalib version: 1.2.0dev
PyTorch version: 2.3.0
CUDA/cuDNN version: x
GPU models and configuration: x
Any other relevant information: x
Expected behavior
I would expect that in this case the model transforms would take priority over the ones in the dataloader, but I see how this could cause trouble in the case of custom transforms inside the datamodule.
Screenshots
No response
Pip/GitHub
GitHub
What version/branch did you use?
1.2.0dev
Configuration YAML
/
Logs
/
Code of Conduct
I agree to follow this project's Code of Conduct