

benchmark results are not in accordance with the results presented in the dvo_slam paper #38

Open
somebodyus opened this issue Oct 28, 2015 · 4 comments


@somebodyus

Dear author,

I successfully compiled the dvo_slam package with ROS Fuerte on Ubuntu 12.04.5, using the default ROS OpenNI calibration parameters.

I benchmarked the dataset "rgbd_dataset_freiburg1_xyz" with the following command:
roslaunch dvo_benchmark benchmark.launch keep_alive:=true
Then I evaluated the result with:
rosrun rgbd_benchmark_tools evaluate_ate.py --plot PLOT_ate --verbose assoc_opt_traj_final.txt groundtruth.txt

The ATE results for fr1/xyz are as follows:
compared_pose_pairs 790 pairs
absolute_translational_error.rmse 0.016561 m
absolute_translational_error.mean 0.014532 m
absolute_translational_error.median 0.013264 m
absolute_translational_error.std 0.007941 m
absolute_translational_error.min 0.000828 m
absolute_translational_error.max 0.049756 m
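For context on what this metric measures: evaluate_ate.py first aligns the estimated trajectory to the ground truth with a rigid-body least-squares fit (Horn's method) and then reports statistics over the remaining translational differences. A minimal numpy sketch of that RMSE computation (my own illustration, not the benchmark script itself) looks like:

```python
import numpy as np

def align_and_ate_rmse(gt, est):
    """Align est to gt with a rigid transform (rotation + translation,
    no scale), then return the RMSE of the remaining translational
    errors. gt and est are (N, 3) arrays of time-matched positions."""
    gt_mean, est_mean = gt.mean(axis=0), est.mean(axis=0)
    gt_c, est_c = gt - gt_mean, est - est_mean
    # Cross-covariance matrix; its SVD yields the optimal rotation (Kabsch)
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:
        S[2, 2] = -1  # guard against a reflection
    R = Vt.T @ S @ U.T
    t = gt_mean - R @ est_mean
    aligned = (R @ est.T).T + t
    errors = np.linalg.norm(aligned - gt, axis=1)
    return np.sqrt((errors ** 2).mean())
```

Because the trajectories are aligned before the error is taken, the RMSE only reflects drift and local inconsistency, not the choice of starting pose.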

These results are not in accordance with those presented in your dvo_slam (IROS 2013) paper. The RMSE value for fr1/xyz given in Table III of the paper is 0.011, but I got 0.016561.

Maybe there are some settings I am missing. Could you tell me how to run the benchmark correctly?

PS: Could you also tell me how to find out how many keyframes were created during SLAM?

Thank you very much.

@HeYijia

HeYijia commented Dec 2, 2015

I also faced a similar problem when using the dvo package. Did anyone figure it out?
Many thanks in advance.

@amesh90

amesh90 commented Apr 20, 2016

Same here, but my error is even larger:
compared_pose_pairs 791 pairs
absolute_translational_error.rmse 0.029669 m
absolute_translational_error.mean 0.026008 m
absolute_translational_error.median 0.023651 m
absolute_translational_error.std 0.014277 m
absolute_translational_error.min 0.003234 m
absolute_translational_error.max 0.082670 m

@nifengzhiyix2

Same question, and the error differs on some datasets. Can anyone help? Many thanks!
For example, fr1/room: the paper reports 0.053, but my output is 0.192.

@ZhouXiner

Me too. In my test, fr1/room is 0.2448.
