Scripts to train a ControlNet (from "Adding Conditional Control to Text-to-Image Diffusion Models") on the UK Biobank dataset to transform FLAIR into T1w 2D images using the MONAI Generative Models package.
This repository accompanies this Medium post.
After downloading the UK Biobank dataset and preprocessing it, we obtain the lists of image paths for the T1w and FLAIR images. For that, we use the following scripts:

- `src/python/preprocessing/create_png_dataset.py` - Create .png images from the NIfTI files (a minimal sketch of this conversion is shown after this list).
- `src/python/preprocessing/create_ids.py` - Create files with the data lists for training, validation and test.
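For reference, the conversion from NIfTI volumes to 2D `.png` slices can be sketched as follows. This is a minimal illustration using `nibabel`, `numpy`, and `Pillow`; the file names, slice selection, and intensity normalisation are assumptions and may differ from the actual preprocessing script:

```python
# Illustrative sketch only: paths, slice choice, and normalisation are assumptions,
# not the exact logic of create_png_dataset.py.
from pathlib import Path

import nibabel as nib
import numpy as np
from PIL import Image


def nifti_to_png(nifti_path: Path, png_path: Path) -> None:
    """Save the central axial slice of a NIfTI volume as an 8-bit PNG."""
    volume = nib.load(str(nifti_path)).get_fdata()
    slice_2d = volume[:, :, volume.shape[2] // 2]  # central slice (assumed choice)

    # Min-max normalise the slice to [0, 255] before casting to uint8.
    slice_2d = (slice_2d - slice_2d.min()) / (slice_2d.max() - slice_2d.min() + 1e-8)
    Image.fromarray((slice_2d * 255).astype(np.uint8)).save(png_path)


if __name__ == "__main__":
    nifti_to_png(Path("sub-01_FLAIR.nii.gz"), Path("sub-01_FLAIR.png"))
```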
After we obtain the paths, we can train the models using commands similar to those in the following files (note: this project was executed on a cluster with the RunAI platform):
- `cluster/runai/training/stage1.sh` - Command to start, on the server, the training of the first stage of the model. The main Python script for this is `src/python/training/train_aekl.py`. The `--volume` flags indicate how the dataset is mounted in the Docker container.
- `cluster/runai/training/ldm.sh` - Command to start, on the server, the training of the diffusion model on the latent representation. The main Python script for this is `src/python/training/train_ldm.py`. The `--volume` flags indicate how the dataset is mounted in the Docker container.
- `cluster/runai/training/controlnet.sh` - Command to start, on the server, the training of the ControlNet model using the pretrained LDM. The main Python script for this is `src/python/training/train_controlnet.py`. The `--volume` flags indicate how the dataset is mounted in the Docker container.
These `.sh` files indicate which parameters and configuration files were used for training, as well as how the host directories were mounted in the Docker container.
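For context, the ControlNet stage follows the usual recipe from the MONAI Generative Models package: the ControlNet is initialised from the weights of the pretrained diffusion UNet, and the UNet itself is kept frozen so that only the ControlNet is updated. The sketch below illustrates this idea; the network hyperparameters are placeholders and do not correspond to the project's configuration files:

```python
# Conceptual sketch of the ControlNet setup with MONAI Generative Models.
# Hyperparameters are placeholders, not the values from the project's configs.
from generative.networks.nets import ControlNet, DiffusionModelUNet

# Pretrained LDM UNet (in the project this would be restored from the LDM checkpoint).
unet = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=3,
    out_channels=3,
    num_res_blocks=2,
    num_channels=(64, 128, 256),
    attention_levels=(False, True, True),
)

# ControlNet mirroring the UNet encoder; the FLAIR conditioning enters through the
# conditioning embedding and its residuals are injected into the frozen UNet.
controlnet = ControlNet(
    spatial_dims=2,
    in_channels=3,
    num_res_blocks=2,
    num_channels=(64, 128, 256),
    attention_levels=(False, True, True),
    conditioning_embedding_num_channels=(16,),
)

# Initialise the ControlNet from the pretrained UNet and freeze the UNet, so only
# the ControlNet weights receive gradients during training.
controlnet.load_state_dict(unet.state_dict(), strict=False)
for param in unet.parameters():
    param.requires_grad = False
```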
Finally, we converted the MLflow models to `.pth` files (for easy loading with MONAI), sampled images from the diffusion model and the ControlNet, and evaluated the models. The following scripts were used for inference and evaluation:
- `src/python/testing/convert_mlflow_to_pytorch.py` - Convert the MLflow models to `.pth` files (a minimal sketch of this step is shown after this list).
- `src/python/testing/sample_t1w.py` - Sample T1w images from the diffusion model without using conditioning.
- `cluster/runai/testing/sample_flair_to_t1w.py` - Sample T1w images from the ControlNet using the test set's FLAIR images as conditioning.
- `src/python/testing/compute_msssim_reconstruction.py` - Measure the mean structural similarity index between images and reconstructions to assess the performance of the autoencoder.
- `src/python/testing/compute_msssim_sample.py` - Measure the mean structural similarity index between samples in order to measure the diversity of the synthetic data.
- `src/python/testing/compute_fid.py` - Compute the FID score between generated images and real images.
- `src/python/testing/compute_controlnet_performance.py` - Compute the performance of the ControlNet using MAE, PSNR, and MS-SSIM metrics.
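As an illustration of the first step above, converting an MLflow-logged PyTorch model into a `.pth` file essentially amounts to loading the model and saving its state dict. This is a minimal sketch; the model URI and output file name are placeholders rather than the project's actual paths:

```python
# Minimal sketch: export an MLflow-logged PyTorch model as a plain .pth state dict.
# The model URI and output file name below are illustrative placeholders.
import mlflow.pytorch
import torch

# Load the model logged during training (URI is an assumed local run artifact path).
model = mlflow.pytorch.load_model("mlruns/0/<run_id>/artifacts/final_model")

# Save only the state dict so it can be reloaded into a MONAI network definition.
torch.save(model.state_dict(), "diffusion_model.pth")
```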