
RuntimeError: CUDA error: out of memory #11

Open
minghong-X opened this issue Jun 5, 2022 · 4 comments

Comments

@minghong-X

minghong-X commented Jun 5, 2022

Why do I get this error on a 3090 with 24 GB of memory?
RuntimeError: CUDA error: out of memory
Thank you for your answer!

@nkolkin13
Owner

Hi minghong-X,
I'm sorry, but I don't know why you're running out of memory with that GPU; it should have plenty of memory for NNST. Are you running anything else on the GPU at the same time (driving a monitor, etc.)? I may be able to help more if you post the exact command that produces the error (though I still might not be able to).
Best,
Nick

@minghong-X
Author

Thank you for your reply! I'm not running any other programs on this GPU, just the "NeuralNeighborStyleTransfer-main" program. I tried several ways to solve it, but all of them failed.

The run command I use is:

```
python styleTransfer.py --content_path PATH_TO_CONTENT_IMAGE --style_path PATH_TO_STYLE_IMAGE --output_path PATH_TO_OUTPUT
```
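(Editor's note: one environment-level cause worth ruling out, as an assumption rather than something confirmed in this thread, is a stale or malformed `CUDA_VISIBLE_DEVICES`, which can make every CUDA call fail with "out of memory" even on an idle card. A sketch of pinning the process to GPU 0 before PyTorch is imported:)

```python
import os

# Pin the process to GPU 0 *before* torch is imported; torch reads this
# variable when it initializes CUDA. An empty value or a nonexistent
# device id here can surface later as "CUDA error: out of memory".
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")

# import torch  # must happen after the variable is set
print(os.environ["CUDA_VISIBLE_DEVICES"])
```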

The full error message is as follows:

```
File "styleTransfer.py", line 55, in <module>
    cnn = misc.to_device(Vgg16Pretrained())
  File "/hy-tmp/utils/misc.py", line 17, in to_device
    return tensor.cuda()
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 680, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 593, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 680, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
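(Editor's note: the traceback shows the failure happens the moment the VGG16 weights are first moved to the GPU, before any style transfer runs. A minimal diagnostic, independent of the NNST code, to confirm the GPU is reachable and has free memory; `torch.cuda.mem_get_info` assumes a reasonably recent PyTorch release:)

```python
import torch

# Sanity check: confirm CUDA is reachable and report free memory before any
# model is loaded. If even the tiny allocation below fails, the problem is
# environment-level (driver mismatch, another process holding the card, or a
# bad CUDA_VISIBLE_DEVICES), not styleTransfer.py itself.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes
    print(f"free {free / 1e9:.2f} GB of {total / 1e9:.2f} GB")
    x = torch.zeros(1024, 1024, device="cuda")  # ~4 MB test allocation
    print("test allocation OK:", tuple(x.shape))
```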

@nkolkin13
Owner

I assume you are replacing the PATH_TO_X arguments with actual file paths?

@minghong-X
Author

I tried running it on the CPU instead.
Thank you!
I still can't seem to figure it out!
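(Editor's note: for readers who want the CPU fallback the commenter mentions, a sketch of a device-agnostic variant of the project's `to_device` helper; the tiny stand-in model below is hypothetical, not the actual `Vgg16Pretrained`, and CPU execution will be much slower:)

```python
import torch
import torch.nn as nn

def to_device(x):
    # Fall back to CPU when CUDA is unavailable, instead of calling
    # .cuda() unconditionally as the project's misc.to_device does.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    return x.to(device)

# Hypothetical stand-in model so the sketch is self-contained.
model = to_device(nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()))
print("model parameters live on:", next(model.parameters()).device)
```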
