performance degradation in to_pil_image after v0.17 #8669
Most of the extra time is spent on this line:
I think it's due to the multiplication using NumPy primitives rather than torch (and also
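To illustrate the suspected cause, here is a hypothetical micro-benchmark (not code from torchvision; the tensor shape is an assumption) comparing an elementwise multiply of a float16 image done on the NumPy side versus the torch side. NumPy typically has no vectorized float16 kernels, so the NumPy-side multiply tends to be noticeably slower:

```python
# Hypothetical micro-benchmark: float16 multiply via NumPy vs. torch.
# Shape is an illustrative assumption, not the original benchmark's size.
import time

import torch

img = torch.rand(3, 1024, 1024, dtype=torch.float16)

t0 = time.perf_counter()
_ = img.numpy() * 255  # multiply happens in NumPy (suspected slow path)
numpy_s = time.perf_counter() - t0

t0 = time.perf_counter()
_ = (img * 255).numpy()  # multiply happens in torch before conversion
torch_s = time.perf_counter() - t0

print(f"numpy-side: {numpy_s:.4f}s  torch-side: {torch_s:.4f}s")
```

On most CPUs the NumPy-side path is markedly slower for float16, which would account for the regression if the conversion logic moved from torch to NumPy.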
Thanks for the report @seymurkafkas.
Ah, if that's the case then the fix might be non-trivial, since it means we'd have to go from a unified NumPy logic to a unified PyTorch logic. I'm happy to consider a PR if we can keep the code simple enough. Out of curiosity, why do you need to convert tensors back to PIL, and more specifically, why do you need that part to be fast?
Thanks for the response! I will take a look and submit a PR if possible.
This is to reduce inference costs for our ML app; less time spent on serialization means higher GPU utilization. We convert to PIL because we use it before serializing to disk.
Thanks for replying! Just so you know, in case it's helpful, you may be able to use the
Thanks a lot for the tip :) I will experiment with those too.
🐛 Describe the bug
torchvision.transforms.functional.to_pil_image is much slower when converting torch.float16 image tensors to PIL Images, based on my benchmarks (serializing 360 images).
Dependencies:
Before (torch 2.0.1, torchvision v0.15.2, Code here): 23 seconds
After (torch 2.2.0, torchvision v0.17, Code here): 53 seconds
How to reproduce:
Run the above script with both versions of dependencies listed, and the time difference is apparent.
The cause seems to be this PR