LaMa and MAT's result is different with original repo #63

Open
jarheadjoe opened this issue Jun 26, 2024 · 3 comments

Comments

@jarheadjoe

For example, I want to remove something (the passerby in the background) from the original image:
[input image]
Original LaMa output:
[original LaMa result]
Your node output:
[node result]
The inpainted area is blurry and the mask border is clearly visible. The same happens with MAT.

@Acly (Owner) commented Jun 29, 2024

What original repo are you referring to?

The reason it's blurry is that it runs at low resolution (256 for LaMa, 512 for MAT). You can technically run it at a higher resolution, but that produces those grainy patterns, which I don't find very useful.

To inpaint this image I'd downscale it, use LaMa/MAT inpaint at low resolution, do a first diffusion pass, upscale and crop, then run a second diffusion pass at the original resolution over only the inpaint area. So LaMa/MAT is meant as the first step in a pipeline, not a final solution.
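A rough sketch of that staged approach, assuming Pillow for resizing/compositing and a placeholder `lama_inpaint(image, mask)` callable standing in for whatever LaMa/MAT wrapper is in use (not this repo's node code); the two diffusion passes are left out since they depend on the workflow:

```python
from PIL import Image

LAMA_RES = 256  # LaMa's native working resolution; MAT would use 512

def remove_object(image_path, mask_path, lama_inpaint):
    """lama_inpaint(img, mask) -> PIL.Image is a hypothetical stand-in."""
    image = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")

    # 1. Downscale so the model runs at its native resolution instead of
    #    producing the blur/grain seen when forcing a larger size.
    scale = LAMA_RES / max(image.size)
    small_size = (round(image.width * scale), round(image.height * scale))
    small_img = image.resize(small_size, Image.LANCZOS)
    small_mask = mask.resize(small_size, Image.NEAREST)

    # 2. Low-resolution LaMa/MAT fill.
    filled = lama_inpaint(small_img, small_mask)

    # 3. Upscale the fill and paste it back only inside the mask, keeping
    #    the untouched pixels at full resolution.
    filled_up = filled.resize(image.size, Image.LANCZOS)
    rough = Image.composite(filled_up, image, mask)

    # 4. (Not shown) first diffusion pass on the rough result, then a second
    #    diffusion pass at original resolution restricted to the inpaint area.
    return rough
```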

@jarheadjoe (Author)

Original LaMa repo: https://github.com/advimman/lama, using the big-lama checkpoint.
The original LaMa works well enough in some cases, for example as input to a depth ControlNet.
The LaMa results from your repo are poor and, to a certain extent, look almost like plain blurring.
And for inpaint models the masked area is not visible anyway, so LaMa feels unnecessary as a first step:
https://github.com/comfyanonymous/ComfyUI/blob/14764aa2e2e2b282c4a4dffbfab4c01d3e46e8a7/nodes.py#L346
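For reference, the linked VAEEncodeForInpaint node roughly does the following to the pixels before encoding (a simplified PyTorch paraphrase, not the verbatim ComfyUI code): the area under the mask is replaced with neutral gray, so the diffusion model never sees whatever LaMa put there when this node is used.

```python
import torch

def blank_masked_area(pixels: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """pixels: (B, H, W, 3) in [0, 1]; mask: (B, H, W), 1 = area to inpaint.
    Simplified paraphrase of VAEEncodeForInpaint's pre-encode masking."""
    keep = (1.0 - mask.round()).unsqueeze(-1)   # 1 where original pixels are kept
    return (pixels - 0.5) * keep + 0.5          # masked pixels become 0.5 gray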

@Acly (Owner) commented Jul 23, 2024

I don't use VAEEncodeForInpaint, and I downscale/crop images, which makes the low resolution less of an issue. LaMa still helps at 1.0 denoise as a base for conservative inpainting (removing objects and such).
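A minimal sketch of that crop step, assuming a padded bounding box around the mask (a hypothetical helper, not the actual node implementation):

```python
import numpy as np

def crop_to_mask(image: np.ndarray, mask: np.ndarray, padding: int = 32):
    """image: (H, W, C); mask: (H, W), nonzero where content should be inpainted.
    Returns the cropped image/mask plus the top-left offset for pasting back."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return image, mask, (0, 0)                       # nothing to inpaint
    top = max(int(ys.min()) - padding, 0)
    left = max(int(xs.min()) - padding, 0)
    bottom = min(int(ys.max()) + 1 + padding, mask.shape[0])
    right = min(int(xs.max()) + 1 + padding, mask.shape[1])
    return image[top:bottom, left:right], mask[top:bottom, left:right], (top, left)
```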

For a stand-alone solution that you can slap on an image like in your example, the node would have to be more complex. I'm not particularly motivated to go for that because I don't think results are good enough in general.
