
Update ldm patched && DORA support #3454

Open · wants to merge 11 commits into base: develop
Conversation


@IPv6 IPv6 commented Aug 5, 2024

Managed to clean up the errors, and it now works with the updated ldm_patched:

calculate_sigmas - some type unification added
some parameters (default_image_only_indicator) now default to None to avoid exceptions
some deprecated methods unwrapped
samplers/schedulers - tested, seem OK: karras, euler, heun, restart, and some others; samplers - no problems
uni_pc/uni_pc_bh2: updated from latest Comfy; the original code had a function-parameter mismatch
ControlNets - tested, seem OK

Not tested: Refiners (SDXL/SD15), Inpainting
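To illustrate the calculate_sigmas and default-to-None changes described above, here is a toy sketch (hypothetical code, not the actual ldm_patched implementation): arguments are coerced to a single type up front so mixed int/float/string callers behave identically instead of failing downstream.

```python
import math

def calculate_sigmas(steps, sigma_min=0.0292, sigma_max=14.6146):
    # "Type unification": coerce arguments up front so callers passing
    # ints, floats, or numeric strings all behave the same.
    steps = int(steps)
    sigma_min, sigma_max = float(sigma_min), float(sigma_max)
    # Log-linear ramp from sigma_max down to sigma_min; real schedulers
    # (karras, etc.) use different spacing, this is only illustrative.
    denom = max(steps - 1, 1)
    log_max, log_min = math.log(sigma_max), math.log(sigma_min)
    return [math.exp(log_max + (i / denom) * (log_min - log_max))
            for i in range(steps)]
```

The function name matches the one discussed in the PR, but the body is a stand-in; the real schedule logic lives in ldm_patched.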

mashb1t and others added 2 commits June 2, 2024 20:52
currently issues with the calculate_sigmas call
Some parameters (default_image_only_indicator) now default to None to avoid exceptions
Some deprecated methods unwrapped
Samplers/Schedulers - tested, seem OK
- karras, euler, heun, restart, and some others; samplers too
- uni_pc/uni_pc_bh2: updated from latest Comfy; the original code had a function-parameter mismatch
ControlNets - tested, seem OK

Not tested: Refiners (SDXL/SD15), Inpainting
@IPv6 IPv6 requested a review from mashb1t as a code owner August 5, 2024 13:00
@mashb1t
Collaborator

mashb1t commented Aug 5, 2024

@IPv6 thanks a lot, much appreciated! There still seem to be unresolved merge conflicts, which you can solve by merging main into your branch and resolving them locally.

…ate-ldm-patched

# Conflicts:
#	args_manager.py
#	ldm_patched/k_diffusion/sampling.py
#	ldm_patched/modules/latent_formats.py
#	ldm_patched/modules/model_sampling.py
#	ldm_patched/modules/samplers.py
#	requirements_versions.txt
@IPv6
Author

IPv6 commented Aug 5, 2024

@mashb1t Updated with the conflicts resolved. Uncertain about some changes:

  • removed opencv-contrib-python==4.8.0.74,
    since opencv-contrib-python-headless==4.10.0.84 is already present

@mashb1t
Collaborator

mashb1t commented Aug 7, 2024

Thanks for the updates, I can check this on the weekend!

@mashb1t mashb1t changed the base branch from main to develop August 11, 2024 13:36
@mashb1t
Collaborator

mashb1t commented Aug 11, 2024

LGTM after fixing a bug with the favicon (missing args). The logs are quite spammy because HTTPX logs every request, but that's manageable.

2 concerns:

  • Images before and after are not 100% the same, but 99% (similar to using xformers). This might be due to internal changes in the code.
  • Package updates were a mess last time: some users, mostly non-technical people (Fooocus is the "simple" interface), are not able to update themselves without massive support, despite update notes explaining how.

Sadly I don't have the time until November; I can neither implement new features nor provide support as extensively as last time with the 2.5.0 enhance upgrade, as there's a lot going on in my private and work life. During this period nobody will be able to do code updates, as I'm sadly the only maintainer, and I can only assume lllyasviel has ditched the project despite mentioning he'd come back after the release of SD3 (see #2154 (comment)).
=> Merging this feature now could result in reports of unknown errors which nobody would be able to fix.

I'm open to your suggestions, hope you understand the situation.

Thank you once more for your support, much appreciated!!

@mashb1t
Collaborator

mashb1t commented Aug 11, 2024

@IPv6 FYI, I've decided to step back as maintainer of Fooocus; details in #3504.
In a nutshell: no time, doing other things, as already mentioned above.

@IPv6
Author

IPv6 commented Aug 11, 2024

Totally understandable; Fooocus's "simplicity aura" has the drawback of an endless flow of end-user problems. And an update like this can break a lot, most probably in some hidden cases.

Anyway, I think any fork that wishes to implement something ldm-fresh will be able to use this patch as a starting point. To me, Fooocus is already quite mature, and frankly nothing really needs to be added given its focus. It works, that's it :)

@mohamed-benali

I have a question that I would appreciate you answering.

If I switch to this branch, will Fooocus still run? If so, could DoRA be easily added, or would that require a lot of work?

Thanks in advance!

@mashb1t
Collaborator

mashb1t commented Aug 12, 2024

@mohamed-benali this branch supports DoRA out of the box, you can switch to it and just select it as any other LoRA.
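For background on what "select it as any other LoRA" means at merge time, here is a minimal toy sketch of the standard DoRA idea (weight-decomposed low-rank adaptation) in plain Python. This is the general formulation from the DoRA paper, not the actual Fooocus/ldm_patched code: a plain LoRA merge just adds a low-rank delta to the weights, while DoRA additionally renormalizes the merged direction and rescales it by a learned magnitude.

```python
import math

def lora_merge(W, delta):
    # Plain LoRA: W' = W + delta (delta comes from the low-rank A @ B).
    return [[w + d for w, d in zip(rw, rd)] for rw, rd in zip(W, delta)]

def dora_merge(W, delta, magnitude):
    # DoRA: W' = m * (W + delta) / ||W + delta||, with a learned
    # magnitude m. (Norms are per-column in the paper; per-row here
    # just to keep the toy example short.)
    merged = []
    for rw, rd, m in zip(W, delta, magnitude):
        row = [w + d for w, d in zip(rw, rd)]
        norm = math.sqrt(sum(v * v for v in row)) or 1.0
        merged.append([m * v / norm for v in row])
    return merged
```

The extra magnitude/direction split is why a DoRA file carries more tensors than a LoRA, which matters for the key-matching discussion later in this thread.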

@mohamed-benali

That's amazing!

Since there will be very few updates, unless @lllyasviel takes this back (wink wink), I will start using this branch. Hopefully I won't find any blocking bug.

Thanks for your work!! It's very appreciated!

@mohamed-benali

mohamed-benali commented Aug 15, 2024

After poking around and trying this version, I have to say I am very impressed!

It's so much faster, especially the initial setup, which takes almost no time now. And the actual generations are faster too.

Thank you for updating this @IPv6 @mashb1t, it's very appreciated!!

(The only issue seems to be anime inpainting, which errors after step ~24 of 30. But I can use the official version to inpaint and this one to generate faster.)

…s expected, with synthetic refiner switch (without exceptions)
@IPv6
Author

IPv6 commented Sep 1, 2024

(The only issue seems to be anime inpainting, which errors after step ~24 of 30. But I can use the official version to inpaint and this one to generate faster.)

I also hit this problem a few days ago and fixed it. The problem was in the synthetic refiner for inpainting; this inpainting stuff blew my mind :)

Anyway, I added a fix to this branch - IPv6@03fabf0. Theoretically you can apply it locally if needed. With the patch, inpainting works for me, with and without ControlNets, and presumably SDXL refiners in general should work too.

The new ldm indeed seems to make inference faster - at least with "--highvram --use-pytorch-cross-attention --force-fp16".
(Some of the older CLI params were removed, by the way: --all-in-fp16 etc.)

@mohamed-benali

Thank you so much @IPv6. It's working for me too :)

Just one question: are DoRAs supposed to work the same as LoRAs? They don't seem to work for me, so I'm asking to find out whether I'm missing something or they just don't work. (Thanks in advance!)

@IPv6
Author

IPv6 commented Sep 2, 2024

Hard to tell; with the new ldm they should work (as mashb1t noted), but ldm is just the basement of all that machinery. There are also higher-level checks, and I suspect they need some upgrading to recognize DoRAs as valid LoRA models.

There are no such specific upgrades in this patch, unfortunately. But it should be easy to add, since ldm does all the heavy lifting.

@mohamed-benali

Do you know which modules/files need to be touched? If it's that easy, I might try to do it myself.

@IPv6
Author

IPv6 commented Sep 2, 2024

Hard to tell (I'm not using them, so I never looked into it). Looking at the code, there is suspicious LoRA parsing in the match_lora method in core.py. Possibly DoRAs have some unexpected internal layers that are not recognized; that part is not in ldm_patched. You could add some logging in the relevant calls to find out whether a DoRA is loaded (as it should be) or rejected. In the latter case it needs to be updated by someone who is familiar with torch model internals.
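The logging idea above could look roughly like this (a hypothetical helper; the real match_lora in core.py has a different signature): after matching, report every key in the LoRA file that the loader did not consume, so dropped DoRA entries become visible.

```python
import logging

def report_unmatched_keys(lora_state_dict, loaded_keys):
    # Keys present in the file but never consumed by the loader; if
    # '*.dora_scale' entries show up here, the DoRA part is being dropped.
    leftover = sorted(set(lora_state_dict) - set(loaded_keys))
    for key in leftover:
        logging.warning("lora key not loaded: %s", key)
    return leftover
```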

@mohamed-benali

Thanks for the insight! Just as a note, the DoRA never errors when I select it or generate an image; it simply has no effect.

@mohamed-benali

mohamed-benali commented Sep 4, 2024

I have done some testing, and it appears that DoRAs give a model mismatch.

After investigating further, I found that some keys are missing in def match_lora(...).

I added the missing DoRA portion from here (all the relevant code across def load_lora(...)):
https://github.com/comfyanonymous/ComfyUI/blob/f067ad15d139d6e07e44801759f7ccdd9985c636/comfy/lora.py#L46

I added it locally and it worked!! DoRAs work properly now!!

I would push it to this branch, but I can't since I don't have edit permissions.
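The kind of key matching the linked load_lora performs can be sketched roughly as follows (illustrative key names; the real ComfyUI/Fooocus code handles many more naming schemes). The essential fix is that a "{base}.dora_scale" entry must be picked up alongside the usual lora_up/lora_down pair, otherwise the DoRA part of the file is silently ignored.

```python
def collect_patches(lora_sd):
    # Pair each lora_up/lora_down with an optional dora_scale tensor.
    # Key names are illustrative; real files use several naming schemes.
    patches = {}
    for key in lora_sd:
        if not key.endswith(".lora_up.weight"):
            continue
        base = key[: -len(".lora_up.weight")]
        up = lora_sd[key]
        down = lora_sd.get(base + ".lora_down.weight")
        dora_scale = lora_sd.get(base + ".dora_scale")  # None for plain LoRA
        patches[base] = ("lora", up, down, dora_scale)
    return patches
```

A loader built this way degrades gracefully: plain LoRA files simply produce a None dora_scale, so the same code path serves both formats.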

@IPv6
Author

IPv6 commented Sep 4, 2024

I added it locally and it worked!! DoRAs work properly now!!
I would push it to this branch, but I can't since I don't have edit permissions.

Whoa, pretty cool! So it was the LoRA parsing after all. Congrats!!!

If you attach your updated lora.py, I can push it to the branch, just to keep all updates in one place.

@mohamed-benali

mohamed-benali commented Sep 4, 2024

Here it is:
lora.txt

I changed the extension to .txt since GitHub doesn't allow .py attachments; you can change it back to .py.

@IPv6
Author

IPv6 commented Sep 4, 2024

Here it is:

Thanks! Pushed it to the branch - IPv6@1157a1d

@IPv6 IPv6 changed the title Test update ldm patched Update ldm patched && DORA support Sep 4, 2024
@mohamed-benali

Perfect!! With the fixes for inpainting and DoRAs, this will be my main source for Fooocus. No need to switch between main and this one. Good job @IPv6!!

@santiagosfx I saw in #3192 that you were asking for DoRA support. If you use this branch, they will work correctly!

@mashb1t mashb1t added the Size L large change, may depend on other changes, thoroughly test label Sep 17, 2024
@santiagosfx

Perfect!! With the fixes for inpainting and DoRAs, this will be my main source for Fooocus. No need to switch between main and this one. Good job @IPv6!!

@santiagosfx I saw in #3192 that you were asking for DoRA support. If you use this branch, they will work correctly!

Sorry for the newbie question, but how do I use a branch instead of the main repository?
Specifically in Google Colab.

@IPv6
Author

IPv6 commented Sep 21, 2024

Sorry for the newbie question, but how do I use a branch instead of the main repository? Specifically in Google Colab.

Don't know regarding Colab, but to use the branch you need to check out the project at the top commit of the branch - IPv6@1157a1d.
You need to be familiar with git.

@santiagosfx

Sorry for the newbie question, but how do I use a branch instead of the main repository? Specifically in Google Colab.

Don't know regarding Colab, but to use the branch you need to check out the project at the top commit of the branch - IPv6@1157a1d. You need to be familiar with git.

I managed to get the branch running, but only halfway.
It seems to be missing a parameter it needs to work on Colab, called "--always-high-vram".
That parameter manages memory better; without it, I run out of memory before the image finishes generating.

@IPv6
Author

IPv6 commented Sep 24, 2024

called "--always-high-vram". What it does is to better manage memory

This param was changed in Comfy, so the updated ldm requires new settings instead of the old one. Try --highvram --use-pytorch-cross-attention --force-fp16

@santiagosfx

called "--always-high-vram". What it does is to better manage memory

This param was changed in Comfy, so the updated ldm requires new settings instead of the old one. Try --highvram --use-pytorch-cross-attention --force-fp16

It keeps giving me the same error; the only difference now is that it happens in the middle of loading the models.

@mohamed-benali

Any idea how hard it would be to add support for SD3.5? Would it be as difficult as adding DoRA support, or much harder?

@IPv6
Author

IPv6 commented Oct 25, 2024

Any idea how hard it would be to add support for SD3.5? Would it be as difficult as adding DoRA support, or much harder?

SD3.5 has a different architecture compared to SDXL, so it is the same situation as with Flux: an almost full rewrite is needed.
