Our paper has been early accepted by MICCAI 2024 with review scores of (5/6/6)!!!
- Linux (tested on Ubuntu 16.04, 18.04, 20.04)
- Python 3.6+
- PyTorch 1.6 or higher (tested on PyTorch 1.13.1)
- CUDA 11.3 or higher (tested on CUDA 11.6+torch-geometric 2.2.0)
conda env create -f environment.yml
The training and evaluation pipeline is in main.py. The proposed model is implemented in /model.
Due to ethical review and privacy constraints concerning the patients from whom the dataset was collected, we are not permitted to make the dataset publicly available. You can instead run the code on your own multimodal dataset; the required data types and format are specified in /dataloader/Dataset.py.
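As a starting point for plugging in your own data, the sketch below shows the minimal `__len__`/`__getitem__` interface a PyTorch-style dataset needs. The field names and modalities (image path, clinical feature vector, label) are assumptions for illustration, not the actual schema in /dataloader/Dataset.py; in the real code the class would subclass torch.utils.data.Dataset.

```python
class MultimodalDataset:
    """Hypothetical sketch: pairs an image with tabular clinical features
    and a label. Plain Python is used here so the sketch stays
    dependency-free; in practice, subclass torch.utils.data.Dataset."""

    def __init__(self, images, clinical, labels):
        assert len(images) == len(clinical) == len(labels)
        self.images = images      # e.g. file paths or pre-loaded arrays
        self.clinical = clinical  # e.g. per-patient feature vectors
        self.labels = labels      # e.g. integer class labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Return one (image, clinical_features, label) triple; the exact
        # keys and ordering your model expects are set in Dataset.py.
        return self.images[idx], self.clinical[idx], self.labels[idx]


if __name__ == "__main__":
    ds = MultimodalDataset(
        images=["scan_0.png", "scan_1.png"],
        clinical=[[63.0, 1.0], [58.0, 0.0]],
        labels=[0, 1],
    )
    print(len(ds))  # 2
    print(ds[1])    # ('scan_1.png', [58.0, 0.0], 1)
```

Adapt the fields and preprocessing to match the conventions in /dataloader/Dataset.py before training.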
Our repo is developed based on the following projects: CARD, DiffMIC