-
I get the same error message every time I try to execute the stable-diffusion-videos code on my computer.
-
You've run out of GPU memory. It looks like you have 6 GB available, and I'm not sure that will be enough. But you might be able to make it work with attention slicing: call .enable_attention_slicing() on the pipeline right after you initialize it, then try again to see if you're able to make it work. Also make sure to try with batch_size=1 for a minimal example:

from stable_diffusion_videos import StableDiffusionWalkPipeline
from diffusers.models import AutoencoderKL
from diffusers.schedulers import LMSDiscreteScheduler
import torch

pipe = StableDiffusionWalkPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    vae=AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema"),
    torch_dtype=torch.float16,
    revision="fp16",
    safety_checker=None,
    scheduler=LMSDiscreteScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    ),
).to("cuda")
pipe.enable_attention_slicing()
# ...generate images...
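The memory saving from attention slicing comes from computing attention for a subset of query rows at a time instead of materializing the full query-by-key score matrix at once. The result is identical; only the peak memory shrinks. A minimal pure-Python sketch of the idea (tiny dimensions, illustration only — this is not the diffusers implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Full attention: softmax(Q K^T / sqrt(d)) V, with rows as lists."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

def sliced_attention(Q, K, V, slice_size):
    """Same output, but only `slice_size` query rows are scored at a time."""
    out = []
    for start in range(0, len(Q), slice_size):
        out.extend(attention(Q[start:start + slice_size], K, V))
    return out
```

With slice_size=1 the score buffer holds one row at a time instead of all queries, which is the trade (less memory, more sequential work) that makes big attention layers fit on small cards.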
-
The safety checker is deactivated; can I bypass this error, or how can I turn the safety checker back on? You have disabled the safety checker for <class 'stable_diffusion_videos.stable_diffusion_pipeline.StableDiffusionWalkPipeline'> by passing
-
Do you know if your solution works with other Stable Diffusion based video generators like Deforum?