
Appendix + Evaluation Configs #11

Closed
darkliang opened this issue Jul 13, 2023 · 4 comments

Comments

@darkliang

Hello, I saw that the paper says some details are in the appendix, but I can't find the appendix. Could you provide it?

@tianweiy
Collaborator

tianweiy commented Jul 13, 2023

@darkliang you can find it here
appendix.pdf

Evaluation details from the appendix: In our experiments, we use α = 0.5 for single-object generation and α = 0.6 for multi-subject generation. We use PNDM sampling [3] with 50 steps and a classifier-free guidance scale of 5 across all methods.
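The settings above can be sketched in code. This is a minimal, illustrative snapshot only: the names (`EVAL_CONFIG`, `cfg_combine`) are hypothetical, the exact role of α is method-specific and defined in the paper, and the guidance function shown is just the standard classifier-free guidance combination implied by a guidance scale of 5.

```python
# Hedged sketch of the evaluation configuration reported in the appendix.
# All identifiers here are illustrative, not from the authors' codebase.

EVAL_CONFIG = {
    "alpha_single_object": 0.5,   # α for single-object generation
    "alpha_multi_subject": 0.6,   # α for multi-subject generation
    "sampler": "PNDM",            # PNDM sampling [3]
    "num_inference_steps": 50,
    "guidance_scale": 5.0,        # classifier-free guidance scale
}

def cfg_combine(eps_uncond, eps_text, scale=EVAL_CONFIG["guidance_scale"]):
    """Standard classifier-free guidance: push the conditional noise
    prediction away from the unconditional one by `scale`."""
    return eps_uncond + scale * (eps_text - eps_uncond)
```

With the Hugging Face `diffusers` library, PNDM sampling is available as `diffusers.PNDMScheduler`, and the step count and guidance scale would typically be passed to a pipeline as `num_inference_steps=50, guidance_scale=5.0`; whether that reproduces the authors' exact setup is an assumption.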

we will put it online soon. Thanks

@darkliang
Author

darkliang commented Jul 14, 2023

Thanks a lot!

@darkliang
Author

Hi, I read your appendix, but I couldn't find all the images you used in your evaluation.

@tianweiy
Collaborator

tianweiy commented Jul 14, 2023

we plan to release the data + evaluation code but it is going to take some time to prepare. stay tuned

@tianweiy tianweiy pinned this issue Aug 8, 2023
@tianweiy tianweiy changed the title from Appendix to Appendix + Evaluation Configs Aug 10, 2023