Joint synthesis of images and labels #460
naga-karthik
started this conversation in
Ideas
Replies: 1 comment 1 reply
-
Hi there! It is indeed possible. One simple approach is to treat the image and label as a two-channel image and generate them jointly. You would need ground-truth paired image and label data to train on, though — do you not have that?
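To illustrate the two-channel idea, here is a minimal sketch of how the training data would be prepared and noised under a standard DDPM forward process. This is a toy example with random placeholder data, not code from this repository: the denoiser itself (e.g., a `DiffusionModelUNet` configured with `in_channels=2, out_channels=2`) is omitted, and all array names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 64
image = rng.standard_normal((H, W)).astype(np.float32)   # toy stand-in for an MRI slice
label = (rng.random((H, W)) > 0.95).astype(np.float32)   # toy binary lesion mask

# Joint sample: channel 0 = image, channel 1 = label.
# The binary mask is rescaled to [-1, 1] to match the image value range.
label_scaled = label * 2.0 - 1.0
x0 = np.stack([image, label_scaled], axis=0)             # shape (2, H, W)

# Standard linear beta schedule and the closed-form forward step
#   q(x_t | x_0) = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

t = 500
eps = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# The denoiser is trained to predict eps from x_t, so image and label
# are generated jointly; at sampling time, split the channels and
# binarize the label channel, e.g. mask = (sample[1] > 0.0).
print(x_t.shape)
```

At inference you would run the usual reverse sampling loop on a 2-channel noise tensor and threshold the second channel to recover a discrete label map.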
1 reply
-
Hello! Thank you for maintaining this wonderful repository! It makes it quite easy to get started with diffusion models.
I have recently started working with diffusion models and have noticed that many diffusion papers discuss only image synthesis. I am interested in generating a synthetic dataset containing both images and labels, and I am wondering whether there are any papers doing this type of joint synthesis.
Alternatives considered:
One workaround is to generate high-quality images with diffusion models and use an existing model to predict labels for the synthetic images. My argument here is that if a model already works well on some dataset/task, what is the need for a synthetic dataset in the first place? In my specific setting, I don't have a good model to obtain predictions, so I am looking to synthetically generate both images and labels using diffusion models.
I have looked into a few papers that generate labels conditioned on multi-modal images (e.g., this paper). However, in such cases the labels are generated for existing (real) images for which the ground truth already exists (i.e., the distribution of the labels isn't new, IIUC).
Any suggestions on how to approach this problem with `monai-generative`? Or is it the case that joint synthesis is hard without proper ground-truth image-label pairs? For context, I am working with spinal cord MRI lesions.