We have released the code for running our model on Jetson Nano with a pre-built TVM binary in nano_demo. To convert the torch model to a TVM binary, you may need to check the TVM Auto Scheduler Tutorial.
Hello,

I tested your COCO and CrowdPose `.pth.tar` checkpoints with litepose/valid.py,
but in my experiments, the COCO-trained LitePose-Auto-S model ran at about 2 FPS.

Is there a way to speed up inference on the Jetson Nano?
Or did I miss something (like converting the torch models to TVM)?

When I tested litepose/nano_demo/start.py with the `lite_pose_nano.tar` weights, I got almost 7 FPS.
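For reference, here is the kind of timing helper I used to get those FPS numbers (a minimal sketch using only the standard library; `infer` is a placeholder for whatever inference call you benchmark, not a function from this repo):

```python
import time

def measure_fps(infer, frames=30):
    """Call `infer` `frames` times and return the average frames per second."""
    start = time.perf_counter()
    for _ in range(frames):
        infer()  # placeholder: run one forward pass / process one frame
    elapsed = time.perf_counter() - start
    return frames / elapsed
```

Averaging over a few dozen frames (and skipping the first call, which includes warm-up costs like CUDA kernel compilation) gives a more stable number than timing a single frame.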