
Commit

modified for final submission
acerbosisw committed Sep 2, 2024
1 parent ce29afa commit 735e07a
Showing 4 changed files with 22 additions and 14 deletions.
10 changes: 3 additions & 7 deletions README.md
@@ -1,17 +1,13 @@
# DriViDOC
Driving from Vision through Differentiable Optimal Control

This repository complements the main paper "Learning from Visual Demonstrations through Differentiable Nonlinear MPC for Personalized Autonomous Driving," submitted to IROS 2024, to enhance the transparency and reproducibility of the research presented.
This repository complements the main paper "Driving from Vision through Differentiable Optimal Control" presented at IROS 2024, to enhance the transparency and reproducibility of the research presented.
[ArXiv preprint](https://arxiv.org/abs/2403.15102)
[YouTube Video](https://youtu.be/WxWPuAtJ08E)
[YouTube Video](https://youtu.be/ENHhphpbPLs)

![](img/styles.gif)

## Abstract
Human-like autonomous driving controllers have the potential to enhance passenger perception of autonomous vehicles. This paper proposes DriViDOC: a model for Driving from Vision through Differentiable Optimal Control, and its application to learn personalized autonomous driving controllers from human demonstrations.
DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction.
Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various driving styles collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20\% in imitation scores.

This paper proposes DriViDOC: a framework for Driving from Vision through Differentiable Optimal Control, and its application to learn autonomous driving controllers from human demonstrations. DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various human demonstrations collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.
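To make the end-to-end idea in the updated abstract concrete, here is a minimal, hypothetical sketch, not the code in this repository: a small CNN regresses NMPC cost weights from a camera frame, and a stand-in "differentiable NMPC" layer (a few unrolled gradient steps on a toy parametric tracking cost) turns those weights into controls, so an imitation loss on demonstrated controls backpropagates through the controller to the CNN. All module names, the toy dynamics, and the weight parameterization are assumptions for illustration.

```python
# Hypothetical sketch only: module names, toy dynamics, and the unrolled-descent
# "NMPC" below are illustrative stand-ins, not the DriViDOC implementation.
import torch
import torch.nn as nn

class ParamCNN(nn.Module):
    """Hypothetical CNN that regresses NMPC cost weights from a camera frame."""
    def __init__(self, n_params: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, frame):
        # softplus keeps the inferred cost weights positive
        return nn.functional.softplus(self.head(self.backbone(frame)))

class ToyDifferentiableNMPC(nn.Module):
    """Stand-in for a differentiable parametric NMPC layer: a few unrolled
    gradient steps on a toy tracking cost, so gradients reach the weights."""
    def __init__(self, horizon: int = 10, n_iters: int = 20, lr: float = 0.1):
        super().__init__()
        self.horizon, self.n_iters, self.lr = horizon, n_iters, lr

    def _cost(self, u, q_lat, q_speed, r_steer, r_acc):
        lat = torch.ones_like(q_lat)      # 1 m initial lateral offset
        dv = torch.ones_like(q_speed)     # 1 m/s initial speed error
        cost = torch.zeros_like(q_lat)
        for k in range(self.horizon):
            steer, acc = u[:, k, 0], u[:, k, 1]
            lat, dv = lat + 0.1 * steer, dv + 0.1 * acc   # toy "vehicle" dynamics
            cost = cost + q_lat * lat ** 2 + q_speed * dv ** 2 \
                        + r_steer * steer ** 2 + r_acc * acc ** 2
        return cost

    def forward(self, params):
        q_lat, q_speed, r_steer, r_acc = params.unbind(dim=-1)
        u = torch.zeros(params.shape[0], self.horizon, 2, requires_grad=True)
        for _ in range(self.n_iters):                      # unrolled inner optimizer
            cost = self._cost(u, q_lat, q_speed, r_steer, r_acc).sum()
            (g,) = torch.autograd.grad(cost, u, create_graph=True)
            u = u - self.lr * g
        return u[:, 0, :]                                  # first control of the plan

# One end-to-end imitation step on dummy data:
cnn, mpc = ParamCNN(), ToyDifferentiableNMPC()
frames = torch.randn(8, 3, 96, 96)        # dummy camera frames
u_demo = torch.zeros(8, 2)                # dummy demonstrated controls
loss = nn.functional.mse_loss(mpc(cnn(frames)), u_demo)
loss.backward()                           # gradients flow through the control layer to the CNN
```

In the paper's setting the unrolled stand-in would be replaced by a parametric NMPC solver whose solution map is differentiated, but the gradient path from imitation loss through the controller to the CNN weights is the same idea.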


## Authors:
Binary file removed img/styles.gif
Binary file not shown.
26 changes: 19 additions & 7 deletions index.html
@@ -133,7 +133,7 @@ <h1 class="title is-1 publication-title">DriViDOC: Driving from Vision through D
</span>
<!-- Video Link. -->
<span class="link-block">
<a href="https://www.youtube.com/watch?v=WxWPuAtJ08E"
<a href="https://youtu.be/ENHhphpbPLs"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-youtube"></i>
@@ -187,7 +187,7 @@ <h2 class="subtitle has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-centered">
<p>
<img src="static/images/arch_plain_website.png" alt="DriViDOC structure" style="max-width: 60%;">
<img src="static/images/arch_plain_website.png" alt="DriViDOC structure" style="max-width: 50%;">
</p>
</div>
</div>
@@ -262,21 +262,33 @@ <h2 class="subtitle has-text-centered">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Human-like autonomous driving controllers have the potential to enhance passenger perception of autonomous vehicles. This paper proposes DriViDOC: a model for Driving from Vision through Differentiable Optimal Control, and its application to learn personalized autonomous driving controllers from human demonstrations. </p>
<p>DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control.</p>
<p>The model is trained on an offline dataset comprising various driving styles collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.
This paper proposes DriViDOC: a framework for Driving from Vision through Differentiable Optimal Control, and its application to learn autonomous driving controllers from human demonstrations.
DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction.
Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various human demonstrations collected on a motion-base driving simulator. During online testing, the model demonstrates successful imitation of different driving styles, and the interpreted NMPC parameters provide insights into the achievement of specific driving behaviors. Our experimental results show that DriViDOC outperforms other methods involving NMPC and neural networks, exhibiting an average improvement of 20% in imitation scores.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->

<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">NMPC parameters are dynamically changed by the CNN based on the driving context
</h2>
<div class="content">
<img src="./static/images/dynamic_par.gif" alt="NMPC parameters are dynamically changed by the CNN based on the driving context
">
</div>
</div>
</div>
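As a rough illustration of the closed loop this new section describes, with placeholder names and dummy inputs rather than the project's actual interfaces: the CNN re-estimates the NMPC weights from each new frame, the parametric controller is re-solved, and only the first control of the plan is applied before the loop repeats.

```python
# Illustrative closed-loop sketch (placeholder names, dummy camera and dynamics,
# not the repository's API).
import torch

def run_closed_loop(cnn, solve_nmpc, n_steps: int = 50):
    """Hypothetical receding-horizon loop; `cnn` and `solve_nmpc` are stand-ins."""
    state = torch.zeros(2)                      # dummy vehicle state
    for _ in range(n_steps):
        frame = torch.randn(1, 3, 96, 96)       # stand-in for the latest camera frame
        with torch.no_grad():                   # online, parameters are only inferred
            params = cnn(frame)[0]              # context-dependent NMPC weights
        plan = solve_nmpc(state, params)        # re-solve the parametric OCP: (horizon, 2)
        state = state + 0.1 * plan[0]           # apply only the first control (toy update)
    return state

# e.g. with trivial stand-ins for the CNN and the solver:
dummy_cnn = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 96 * 96, 4))
run_closed_loop(dummy_cnn, lambda state, params: torch.zeros(10, 2))
```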


<!-- Paper video. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Video</h2>
<div class="publication-video">
<iframe src="https://www.youtube.com/embed/WxWPuAtJ08E?si=m49x7Y5qbWeRFTVL"
<iframe src="https://www.youtube.com/embed/ENHhphpbPLs?si=7HTT25inwtSbZyV1"
frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
</div>
</div>
@@ -428,7 +440,7 @@ <h2 class="title is-3">Related Links</h2>
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{acerbo2024drividoc,
author = {Acerbo, Flavia Sofia and Swevers, Jan and Tuytelaars, Tinne and Tong, Son},
title = {Learning from Visual Demonstrations through Differentiable Nonlinear MPC for Personalized Autonomous Driving},
title = {Driving from Vision through Differentiable Optimal Control},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2024},
}</code></pre>
Binary file added static/images/dynamic_par.gif
Binary file not shown.
