This repository is a PyTorch DistributedDataParallel (DDP) re-implementation of the CVPR 2020 paper View-GCN.
- First, the re-implementation accelerates training and inference via the PyTorch DDP mechanism: the author's original implementation is single-GPU and much slower, especially when reproducing the retrieval results on the SHREC17 benchmark.
- Second, the retrieval code was absent from the original repository; the author released only the classification implementation on ModelNet40. Our re-implementation adds the retrieval experiment and corresponding instructions.
- Third, we also add the classification code on the RGBD and ModelNet10 datasets, add download links for all used datasets, modify the model and argument definitions to support multi-dataset training, rewrite the README, optimize the code style, etc.
In summary, this repository has the following new features compared to the original one:
- add DDP acceleration for the model training and inference
- add the classification code on the RGBD dataset
- add the retrieval code on the SHREC17 dataset
- modify the model definition and add arguments to adapt multi-dataset training and inference
- upgrade PyTorch and torchvision to recent versions 1.12.0 and 0.13.0, respectively, and apply several fixes
- re-organize README and optimize the code style
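The DDP mechanism behind the first feature can be sketched as follows. This is a minimal toy example, not the repository's training code: it runs a single process with the CPU `gloo` backend so it works without a GPU, whereas the actual scripts launch one process per GPU.

```python
# Minimal, hedged sketch of the DDP pattern (toy linear model, NOT the repo's
# View-GCN code; single process, CPU `gloo` backend).
import tempfile

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def ddp_demo():
    # file-based rendezvous so the demo needs no free network port
    rdzv = tempfile.mkdtemp() + "/rdzv"
    dist.init_process_group("gloo", init_method=f"file://{rdzv}",
                            rank=0, world_size=1)
    # DDP wraps the model; gradients are synchronized across ranks on backward
    model = DDP(torch.nn.Linear(8, 2))
    out = model(torch.randn(4, 8))
    dist.destroy_process_group()
    return out.shape


if __name__ == "__main__":
    print(ddp_demo())
```

In the real multi-GPU setting, several such processes (one per GPU, each with its own rank) would be launched by a utility such as `torchrun`.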
- Ubuntu 18.04
- Python 3.7.15
- PyTorch 1.12.0
- CUDA 11.6
- torchvision 0.13.0
- timm 0.6.11
- einops 0.6.0
- wandb 0.12.11
- pueue & pueued 2.0.4
```shell
conda create -n viewgcn python=3.7.15
conda activate viewgcn
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
`pueue` is a shell command management tool that we use to schedule the model training & inference tasks; please refer to the official page for installation and basic usage. We recommend it because it lets you run experiments at scale and saves you time.
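For example, the experiment scripts can be enqueued so that they run one after another. This is only a sketch: it assumes the pueue daemon (`pueued`) is already running, and the script paths are the ones used later in this README.

```shell
# Sketch: enqueue the experiment scripts with pueue (assumes pueued is running;
# script paths are those used elsewhere in this README)
for s in \
  scripts/MN10-V20-L4H8D512-MR2-Alex-1.sh \
  scripts/MN40-V20-L4H8D512-MR2-Alex-1.sh \
  scripts/RGBD-V12-L4H8D512-MR2-Alex-1.sh \
  scripts/RET/viewgcn_shrec17/SH17-V20-ViewGCN-RN18-1.sh
do
  if command -v pueue >/dev/null 2>&1; then
    pueue add -- bash "$s"   # queued tasks run according to pueue's parallel limit
  else
    echo "pueue not installed; would enqueue $s"
  fi
done
```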
We track the model training and fine-tuning with W&B tools. The official W&B service can be slow and unstable since it runs on remote servers, so we install the local version by running the following commands.
```shell
sudo groupadd docker                # create the `docker` group on Ubuntu
sudo usermod -aG docker <username>  # add <username> to the `docker` group to use docker; replace <username> with yours
docker run --rm -d -v wandb:/vol -p 28282:8080 --name wandb-local wandb/local:0.9.41
```
If you do not already have Docker installed on your computer, refer to the official documentation to install Docker on Ubuntu.
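Once the container is running, the W&B client needs to be pointed at the local server instead of the public cloud. A sketch, assuming the host port 28282 from the `docker run` command above:

```shell
# Direct the wandb client to the local server started above; WANDB_BASE_URL is
# an environment variable honoured by the wandb Python client and CLI
export WANDB_BASE_URL="http://localhost:28282"
# alternatively: wandb login --host http://localhost:28282
```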
Download the following datasets and extract them to the desired location on your computer.
- ModelNet10
- ModelNet40
- RGBD
- SHREC17
  - follow this project to render the meshes to get the multiple views
The directories of the above datasets should be organized as follows:

```
|- View-GCN-DDP
|---- data
|-------- ModelNet10
|-------- ModelNet40
|-------- RGBD
|-------- SHREC17
```
The `data` directory is at the same level as `models`, `scripts`, etc.
- To train and evaluate on ModelNet10, run `./scripts/MN10-V20-L4H8D512-MR2-Alex-1.sh`.
- To train and evaluate on ModelNet40, run `./scripts/MN40-V20-L4H8D512-MR2-Alex-1.sh`.
- To train and evaluate on RGBD, run `./scripts/RGBD-V12-L4H8D512-MR2-Alex-1.sh`.
1. To train and evaluate the classification performance on SHREC17, run `./scripts/RET/viewgcn_shrec17/SH17-V20-ViewGCN-RN18-1.sh`.
2. Change the working directory and create a new directory to save the retrieval results:

   ```shell
   cd retrieval
   mkdir -p evaluator/viewgcn
   ```

3. Retrieve the shapes that have the same class as the query to generate the rank list:

   ```shell
   python shrec17.py val.csv resnet18 0 SH17-V20-ViewGCN-RN18-1 24 viewgcn
   ```

4. Evaluate the retrieval performance:

   ```shell
   node --max-old-space-size=8192 evaluate.js viewgcn/
   ```

5. Replace `val.csv` with `test.csv` and re-run steps 3-4 to get the results on the test split.
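Steps 3-4 can be wrapped in a small helper so each split is a single call. A sketch only (the `run_retrieval` function is our own naming, not part of the repository; it uses exactly the commands and arguments shown above and should be run from the `retrieval` directory after step 2):

```shell
# Sketch: wrap steps 3-4 so each split is one call (run from the retrieval/
# directory after step 2; arguments are exactly those shown in the steps above)
run_retrieval() {
  split="$1"   # "val" or "test"
  python shrec17.py "${split}.csv" resnet18 0 SH17-V20-ViewGCN-RN18-1 24 viewgcn
  node --max-old-space-size=8192 evaluate.js viewgcn/
}
# usage: run_retrieval val && run_retrieval test
```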
3D shape retrieval on the SHREC17 benchmark:

| Version | Split | micro P@N | micro R@N | micro F1@N | micro mAP | micro NDCG | macro P@N | macro R@N | macro F1@N | macro mAP | macro NDCG |
| :-- | :-- | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |
| reported | unspecified | 81.8 | 80.9 | 80.6 | 78.4 | 85.2 | 62.9 | 65.2 | 61.1 | 60.2 | 66.5 |
| reproduced | val | 82.2 | 82.3 | 81.9 | 80.1 | 82.3 | 62.6 | 66.6 | 63.0 | 62.3 | 68.3 |
| reproduced | test | 78.7 | 77.8 | 77.6 | 75.1 | 82.6 | 56.8 | 61.2 | 56.6 | 56.3 | 63.3 |

Since the View-GCN paper does not specify whether its SHREC17 results were produced on the test or the val split, we infer from the reproduced results that the reported scores correspond to the val split.
Our re-implementation is inspired by the following projects; thanks for their hard work: