When I press the button to generate a result, it works only once; the second time it throws a CUDA error.
```
/content/StyleMapGAN
train_args: Namespace(batch=16, batch_per_gpu=8, channel_multiplier=2, ckpt=None, d_reg_every=16, dataset='celeba_hq', iter=1400000, lambda_adv_loss=1, lambda_d_loss=1, lambda_indomainGAN_D_loss=1, lambda_indomainGAN_E_loss=1, lambda_perceptual_loss=1, lambda_w_rec_loss=1, lambda_x_rec_loss=1, latent_channel_size=64, latent_spatial_size=8, lr=0.002, lr_mul=0.01, mapping_layer_num=8, mapping_method='MLP', n_sample=16, ngpus=2, normalize_mode='LayerNorm', num_workers=10, r1=10, remove_indomain=False, remove_w_rec=False, size=256, small_generator=False, start_iter=0, train_lmdb='/data/celeba_hq_lmdb/train/LMDB_train', val_lmdb='/data/celeba_hq_lmdb/train/LMDB_val')
 * Serving Flask app "demo" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:6006/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
```
```
 * Debugger PIN: 746-628-559
127.0.0.1 - - [17/Sep/2021 23:56:51] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:56:51] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "POST /post HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/15.png?ehKqCSWsm2_dZDSEqGn2KA HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/14.png?bpSTEHoDmsEdxklrmK1Iyw HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/13.png?vNWTCVyaDi3SgiGKoZVgqg HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/12.png?YpPhMa2VRbTKOyk_bXF1uw HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/11.png?Rv2b_tny9RSuUGjNv-Aj8w HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/10.png?gUzT-B3BIBWt8zKBCN-UJQ HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/9.png?hgD_0xCxc1D3SvoRnOL4kQ HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/8.png?irG3xuBZ7T4aUC1KSpNl0g HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/7.png?FzAEfzS1Y-PaknX8gkh3Ow HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/6.png?qSPaQEGIwCJJcp_lqKLDrA HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/5.png?uHnXqW2275FRM4kIAWlWoQ HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/4.png?KnBxIm4rmz5WSjDZ3i9fUA HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/3.png?i5-jUUhv1ey199OxX_dbCg HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/2.png?CiAW6qyZmqz93OLXoZx_MQ HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/0.png?72jqdYShEUiOpkSMW9R_qg HTTP/1.1" 200 -
127.0.0.1 - - [17/Sep/2021 23:57:30] "GET /demo/static/generated/93huqTOEE9NBQ3MPvw41/1.png?8AK-OswmqdND3xa-F9ZJvw HTTP/1.1" 200 -
127.0.0.1 - - [18/Sep/2021 00:14:17] "POST /post HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/content/StyleMapGAN/demo.py", line 181, in post
    save_dir=save_dir,
  File "/usr/local/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "/content/StyleMapGAN/demo.py", line 133, in my_morphed_images
    mixed = model(original_image, reference_images, masks, shift_values).cpu()
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/demo.py", line 73, in forward
    mask=[masks, shift_values, args.interpolation_step],
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/training/model.py", line 1153, in forward
    image = self.decoder(stylecode, mix_space=mix_space, mask=mask)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/training/model.py", line 1073, in forward
    out = self.convs[i](out, [style_codes[2 * i + 1], style_codes[2 * i + 2]])
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/training/model.py", line 720, in forward
    out = self.conv2(out, stylecodes[1])
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/training/model.py", line 499, in forward
    out = self.conv(input, style)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/StyleMapGAN/training/model.py", line 610, in forward
    input = input * gamma + beta
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 11.17 GiB total capacity; 6.90 GiB already allocated; 215.44 MiB free; 8.10 GiB reserved in total by PyTorch)
```
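A common mitigation for this pattern (first request succeeds, second one hits `CUDA out of memory`) is to make sure each request drops its GPU references and returns cached blocks to the allocator before the next request arrives. The sketch below is a general PyTorch pattern, not necessarily this project's fix; the function and argument names mirror the call in `demo.py`'s `my_morphed_images` but are illustrative only.

```python
import gc


def generate_once(model, original_image, reference_images, masks, shift_values):
    """Hypothetical per-request inference wrapper that tries not to leak
    GPU memory across requests. Argument names are illustrative."""
    import torch  # local import keeps the sketch self-contained

    with torch.no_grad():  # no autograd graph -> much less GPU memory held
        mixed = model(original_image, reference_images, masks, shift_values)
        result = mixed.cpu()  # copy the output off the GPU

    # Drop the GPU-side reference, collect garbage, and hand cached
    # blocks back to CUDA so the next request starts from a clean slate.
    del mixed
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return result
```

Note that the traceback already passes through `decorate_no_grad`, so autograd bookkeeping is not the whole story here. The startup log also shows Flask's debug reloader restarting the app ("Restarting with stat"), and with `debug=True` the reloader runs a second process; if each process initializes its own copy of the model on the same GPU, that alone can account for much of the missing memory. Running the demo with debug mode off (or `use_reloader=False`) is worth trying.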
Please refer to #4 (comment)