Error: SamAutomaticMaskGenerator has a large memory footprint #110
Comments
@lstswb have you solved this?
Not yet.
Does the code snippet from the example help? In particular, segment-anything-fast/amg_example/amg_example.py, lines 36 to 44 at 387488b. Note that you can adjust the batch size (`process_batch_size`).
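For reference, a minimal sketch of that approach, assuming `sam` is a SAM model already loaded onto the GPU and `image` is an input array; the import path is an assumption based on the segment-anything-fast package layout, not code posted in this thread:

```python
from segment_anything_fast import SamAutomaticMaskGenerator

# Lowering process_batch_size reduces peak GPU memory at the cost of speed;
# the original SAM code effectively runs with a batch size of 1.
mask_generator = SamAutomaticMaskGenerator(sam, process_batch_size=1)
masks = mask_generator.generate(image)  # image: HxWx3 uint8 RGB array
```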
I tried adjusting the batch size, and the GPU memory footprint was reduced, but it still far exceeds that of the original code.
Yes, the batch is larger, but it should be faster. The original code uses a batch size of 1; you can try setting it to 1 as well.
Hm, I assume you're also using the GPU for the display manager? That will take up additional memory as well. Maybe the solution in #97 will help. Can you use your onboard GPU (if you have one) for the display manager and the discrete GPU for the model only? Does it work with vit_b?
vit_b works normally, and the display takes up only a small portion of GPU memory. vit_h also works fine with the original code.
Hm, can you try setting the environment variable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, as the error message suggests?
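For anyone following along, a minimal sketch of setting this from Python; the caveat (per the PyTorch memory-management docs linked in the error below) is that it needs to be in place before the CUDA allocator is first used:

```python
import os

# Must be set before the first CUDA allocation so the caching
# allocator picks it up; equivalently, export it in the shell
# before launching Python.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

x = torch.zeros(1, device="cuda")  # allocator now uses expandable segments
```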
Hm, I'm not sure, to be honest. It seems to work on other 4090s, but I think those are on Linux and not Windows.
Well, I'll try this on a native Linux system instead of Ubuntu under WSL2.
GPU: 4090 24G
System: Ubuntu on WSL2
Model: sam_vit_h
Image size: [1024, 1024]
Parameter settings:

```python
model=sam,
points_per_side=128,
points_per_batch=64,
pred_iou_thresh=0.86,
stability_score_thresh=0.92,
crop_n_layers=3,
crop_n_points_downscale_factor=2,
min_mask_region_area=100,
process_batch_size=4
```
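For reproducibility, these settings amount to a construction along the following lines. This is a sketch: the registry import and checkpoint filename are assumptions based on the standard segment-anything API, not code posted in this thread.

```python
import numpy as np
from segment_anything_fast import sam_model_registry, SamAutomaticMaskGenerator

# Assumed checkpoint filename for the official vit_h weights.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to(device="cuda")

mask_generator = SamAutomaticMaskGenerator(
    model=sam,
    points_per_side=128,
    points_per_batch=64,
    pred_iou_thresh=0.86,
    stability_score_thresh=0.92,
    crop_n_layers=3,
    crop_n_points_downscale_factor=2,
    min_mask_region_area=100,
    process_batch_size=4,
)

# Placeholder 1024x1024 RGB image matching the reported input size.
image = np.zeros((1024, 1024, 3), dtype=np.uint8)
masks = mask_generator.generate(image)
```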
Issue:
When I use SamAutomaticMaskGenerator, GPU memory usage climbs to 55 GB and the run fails with the following error:
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.63 GiB. GPU 0 has a total capacity of 23.99 GiB of which 0 bytes is free. Including non-PyTorch memory, this process has 17179869184.00 GiB memory in use. Of the allocated memory 34.86 GiB is allocated by PyTorch, and 5.63 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
However, when using the original SAM code this problem does not occur, and GPU memory usage does not exceed 24 GB.