GAM: General Affordance-based Manipulation for Robotic Object Disentangling Tasks

Installation guide

  • Create conda env with the provided environment.yml file.
  • Install graspnet-baseline.
  • Install mujoco and mujoco-py (the experiments use version 2.1.0; other versions may work).
  • Make sure the gym version is <= 0.20.0.
  • From the repository root, run python -m pip install . (the trailing dot refers to the current directory).
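
The steps above can be sketched as a shell session. This is a hedged sketch, not a verified recipe: the conda environment name, the graspnet-baseline clone URL, and the exact pip version specifiers are assumptions based on the bullets; check environment.yml and the graspnet-baseline instructions for the authoritative details.

```shell
# Create the conda environment from the provided file, then activate it
conda env create -f environment.yml
conda activate gam            # assumed env name; check environment.yml for the real one

# Install graspnet-baseline (URL assumed; follow its own install instructions)
git clone https://github.com/graspnet/graspnet-baseline.git
cd graspnet-baseline && python -m pip install -r requirements.txt && cd ..

# MuJoCo Python bindings: the experiments used mujoco-py 2.1.x
python -m pip install "mujoco-py>=2.1,<2.2"

# gym must be no newer than 0.20.0
python -m pip install "gym<=0.20.0"

# Finally, install this package from the repository root
python -m pip install .
```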

Data

Pretrained agents

  • Some pre-trained checkpoints can be downloaded from OneDrive.
  • Place the result folders in the root directory (e.g., GAM-paper-codes/results_3).
  • Naming conventions:
    • C/C+/S: hook shapes
    • gf: grasp filter checkpoints
    • hm: hemisphere action checkpoints
    • 3/30: number of episode timesteps

Scripts

  • With the data and checkpoints in place, you can run the evaluation scripts.
    • E.g., python script/evaluate_dqn --render --nenv 1 --hs C --nh 3
  • Scripts for training and for dataset generation are also available.

Citation

@article{yang2024gam,
  title={GAM: General affordance-based manipulation for contact-rich object disentangling tasks},
  author={Yang, Xintong and Wu, Jing and Lai, Yu-Kun and Ji, Ze},
  journal={Neurocomputing},
  pages={127386},
  year={2024},
  publisher={Elsevier}
}

About

The official open-source code for the GAM paper.
