Before evaluation, we need to prepare the ImageNet-1k dataset and the checkpoints in the model zoo.
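A minimal sketch of the expected layout, assuming the standard per-class ImageFolder organization of ImageNet (the directory names here are illustrative assumptions and should match your `--data-path` and `--resume` arguments):

```
ImageNet/
├── train/   # one sub-folder per class, e.g. n01440764/
└── val/     # same per-class sub-folder structure as train/
checkpoints/
├── tiny_vit_5m_22kto1k_distill.pth
└── ...      # remaining checkpoints from the model zoo
```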
Run the following commands for evaluation:
### Evaluate TinyViT with pretraining distillation
Evaluate TinyViT-5M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22kto1k/tiny_vit_5m_22kto1k.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22kto1k_distill.pth
```
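Note: `torch.distributed.launch` is deprecated in recent PyTorch releases. If your installation no longer provides it, an equivalent `torchrun` invocation should work, assuming `main.py` reads the local rank from the `LOCAL_RANK` environment variable (a sketch, not verified against every PyTorch version); the same substitution applies to all commands below:

```bash
torchrun --nproc_per_node 8 main.py --cfg configs/22kto1k/tiny_vit_5m_22kto1k.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22kto1k_distill.pth
```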
Evaluate TinyViT-11M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22kto1k/tiny_vit_11m_22kto1k.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_11m_22kto1k_distill.pth
```
Evaluate TinyViT-21M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22kto1k/tiny_vit_21m_22kto1k.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_21m_22kto1k_distill.pth
```
Evaluate TinyViT-21M-384:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/higher_resolution/tiny_vit_21m_224to384.yaml --data-path ./ImageNet --batch-size 64 --eval --resume ./checkpoints/tiny_vit_21m_22kto1k_384_distill.pth
```
Evaluate TinyViT-21M-512:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/higher_resolution/tiny_vit_21m_384to512.yaml --data-path ./ImageNet --batch-size 32 --eval --resume ./checkpoints/tiny_vit_21m_22kto1k_512_distill.pth
```
### Evaluate TinyViT trained from scratch on IN-1k
Evaluate TinyViT-5M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/1k/tiny_vit_5m.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_1k.pth
```
Evaluate TinyViT-11M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/1k/tiny_vit_11m.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_11m_1k.pth
```
Evaluate TinyViT-21M:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/1k/tiny_vit_21m.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_21m_1k.pth
```
### Evaluate TinyViT pretrained on IN-22k directly on IN-1k

The model pretrained on IN-22k can be evaluated directly on IN-1k. Since it has not been finetuned on IN-1k, its accuracy is lower than that of the corresponding 22kto1k-finetuned model.
Evaluate TinyViT-5M-22k:

```bash
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22k_distill/tiny_vit_5m_22k_distill.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22k_distill.pth --opts DATA.DATASET imagenet
```
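Here, `--opts DATA.DATASET imagenet` overrides the dataset key from the 22k config so that evaluation runs against the IN-1k validation set. As a further sketch, a single-GPU run presumably only requires lowering `--nproc_per_node` (an assumption; reduce `--batch-size` as well if memory is tight):

```bash
# Hypothetical single-GPU variant of the same evaluation command.
python -m torch.distributed.launch --nproc_per_node 1 main.py --cfg configs/22k_distill/tiny_vit_5m_22k_distill.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22k_distill.pth --opts DATA.DATASET imagenet
```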