xformers wasn't built with CUDA support #251

Open

pranto-jpg opened this issue Nov 30, 2023 · 1 comment

Labels
bug Something isn't working
Describe the bug

Hello,
whenever I run the training cell after uploading the images, I get the error shown in the logs below (screenshot attached). Any solutions for this?

Reproduction

!python3 train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --output_dir=$OUTPUT_DIR \
  --revision="fp16" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=1 \
  --train_text_encoder \
  --mixed_precision="fp16" \
  --use_8bit_adam \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=50 \
  --sample_batch_size=4 \
  --max_train_steps=1400 \
  --save_interval=1400 \
  --save_sample_prompt="photo of mumtahina person" \
  --concepts_list="concepts_list.json"

Reduce --save_interval to a value lower than --max_train_steps to save weights at intermediate steps.

--save_sample_prompt can be the same as --instance_prompt to generate intermediate samples (saved along with the weights in the samples directory).
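For reference, a minimal sketch of the concepts_list.json that the last flag points at, following the layout used in the ShivamShrirao DreamBooth notebook; the class prompt and directory paths here are placeholders, not the ones from this run:

```python
# Sketch of how concepts_list.json is typically written in the notebook
# (class prompt and data directories below are placeholders, not from this issue).
import json

concepts_list = [
    {
        "instance_prompt": "photo of mumtahina person",   # matches --save_sample_prompt above
        "class_prompt": "photo of a person",              # generic class prompt (assumed)
        "instance_data_dir": "/content/data/mumtahina",   # uploaded instance images (placeholder path)
        "class_data_dir": "/content/data/person",         # class/regularization images (placeholder path)
    }
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```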

Logs

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
    Python  3.10.13 (you have 3.10.12)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
2023-11-30 16:00:11.464430: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-30 16:00:11.464494: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-30 16:00:11.464533: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-30 16:00:13.133018: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
text_encoder/model.safetensors not found
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Traceback (most recent call last):
  File "/content/train_dreambooth.py", line 869, in <module>
    main(args)
  File "/content/train_dreambooth.py", line 487, in main
    pipeline.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1322, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1347, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1338, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 212, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 212, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 151, in set_use_memory_efficient_attention_xformers
    raise e
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 145, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 2, 1, 40) (torch.float32)
     key         : shape=(1, 2, 1, 40) (torch.float32)
     value       : shape=(1, 2, 1, 40) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40

System Info

Google Colab
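For context, the version mismatch reported in the log (xFormers built for PyTorch 2.1.0+cu121, runtime 2.1.0+cu118) can be confirmed from a Colab cell. A minimal diagnostic sketch, assuming a standard GPU runtime:

```python
# Quick environment check in a Colab cell (standard GPU runtime assumed).
import torch

print(torch.__version__)               # e.g. 2.1.0+cu118 in the failing run
print(torch.version.cuda)              # CUDA version this PyTorch build targets
print(torch.cuda.get_device_name(0))   # e.g. Tesla T4 (compute capability 7.5)

# xFormers' own diagnostic, as suggested in the error message:
!python -m xformers.info
!nvidia-smi
```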

freddieb12345 commented Dec 2, 2023

It seems you need to upgrade the PyTorch version used in the install requirements. If you change the install requirements code block to the following, it works fine for me.

!wget -q https://github.com/ShivamShrirao/diffusers/raw/main/examples/dreambooth/train_dreambooth.py
!wget -q https://github.com/ShivamShrirao/diffusers/raw/main/scripts/convert_diffusers_to_original_stable_diffusion.py
%pip install -qq git+https://github.com/ShivamShrirao/diffusers
%pip install -q -U --pre triton
%pip install -q accelerate transformers ftfy bitsandbytes==0.35.0 gradio natsort safetensors xformers
%pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
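After reinstalling, a quick sanity check (a sketch, not part of the original comment) is to confirm that the runtime PyTorch build now matches the one xFormers was compiled against:

```python
# Verify the installed builds match (sketch); run in a fresh Colab cell after restarting the runtime.
import torch
import xformers

print(torch.__version__)     # should now report 2.1.0+cu121
print(xformers.__version__)

# Lists which xFormers operators were built and are usable on this GPU:
!python -m xformers.info
```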
