diff --git a/README.md b/README.md
index ad96a27..9379344 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,13 @@
 # DriViDOC Driving from Vision through Differentiable Optimal Control
-This repository complements the main paper "Learning from Visual Demonstrations through Differentiable Nonlinear MPC for Personalized Autonomous Driving," submitted to IROS 2024, to enhance the transparency and reproducibility of the research presented.
+This repository complements the main paper "Driving from Vision through Differentiable Optimal Control," presented at IROS 2024, to enhance the transparency and reproducibility of the research.
 [ArXiv preprint](https://arxiv.org/abs/2403.15102)
-[YouTube Video](https://youtu.be/WxWPuAtJ08E)
+[YouTube Video](https://youtu.be/ENHhphpbPLs)
-![](img/styles.gif)
 ## Abstract
-Human-like autonomous driving controllers have the potential to enhance passenger perception of autonomous vehicles. This paper proposes DriViDOC: a model for Driving from Vision through Differentiable Optimal Control, and its application to learn personalized autonomous driving controllers from human demonstrations.
-DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction.
-Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various driving styles collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20\% in imitation scores.
-
+This paper proposes DriViDOC: a framework for Driving from Vision through Differentiable Optimal Control, and its application to learn autonomous driving controllers from human demonstrations. DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various human demonstrations collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.
 ## Authors:
diff --git a/img/styles.gif b/img/styles.gif
deleted file mode 100644
index f521b52..0000000
Binary files a/img/styles.gif and /dev/null differ
diff --git a/index.html b/index.html
index d1bf479..56b6856 100644
--- a/index.html
+++ b/index.html
@@ -133,7 +133,7 @@

DriViDOC: Driving from Vision through D
-
@@ -187,7 +187,7 @@

- DriViDOC structure
+ DriViDOC structure

@@ -262,21 +262,33 @@

Abstract

- Human-like autonomous driving controllers have the potential to enhance passenger perception of autonomous vehicles. This paper proposes DriViDOC: a model for Driving from Vision through Differentiable Optimal Control, and its application to learn personalized autonomous driving controllers from human demonstrations.

-

DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control.

-

-The model is trained on an offline dataset comprising various driving styles collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.
+This paper proposes DriViDOC: a framework for Driving from Vision through Differentiable Optimal Control, and its application to learn autonomous driving controllers from human demonstrations.
+DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction.
+Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various human demonstrations collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.

+
+
+

-NMPC parameters are dynamically changed by the CNN based on the driving context
+

+
+ NMPC parameters are dynamically changed by the CNN based on the driving context
+          +
+
+
+ +

Video

-
@@ -428,7 +440,7 @@

Related Links

BibTeX

@inproceedings{acerbo2024drividoc,
   author    = {Acerbo, Flavia Sofia and Swevers, Jan and Tuytelaars, Tinne and Tong, Son},
-  title     = {Learning from Visual Demonstrations through Differentiable Nonlinear MPC for Personalized Autonomous Driving},
+  title     = {Driving from Vision through Differentiable Optimal Control},
   booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
   year      = {2024},
 }
diff --git a/static/images/dynamic_par.gif b/static/images/dynamic_par.gif
new file mode 100644
index 0000000..3d1e778
Binary files /dev/null and b/static/images/dynamic_par.gif differ