Abstract

The lack of stability guarantees restricts the practical use of learning-based methods in core control problems in robotics. We develop new methods for learning neural control policies and neural Lyapunov critic functions in the model-free reinforcement learning (RL) setting. Using sample-based approaches and the Almost Lyapunov function conditions, we estimate the region of attraction and invariance properties of the controlled systems through the learned Lyapunov critic functions. The methods improve the stability of neural controllers on various nonlinear systems, including automobile and quadrotor control.
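To make the sample-based idea concrete, here is a minimal illustrative sketch (not the paper's implementation): for a fixed quadratic candidate V(x) = xᵀPx on an assumed stable linear closed-loop system x' = Ax, we check the Lyapunov decrease condition V̇ < 0 on sampled states. The Almost Lyapunov conditions only require the condition to hold outside a set of small measure, which is exactly what a violation rate over samples estimates. The matrices A and P below are hypothetical examples (P solves the Lyapunov equation AᵀP + PA = -I for this A).

```python
import numpy as np

# Illustrative sketch, not the paper's code: sample-based check of the
# Lyapunov decrease condition for a fixed quadratic candidate.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])   # assumed stable closed-loop dynamics x' = A x
P = np.array([[2.25, 0.5],
              [0.5, 2.0]])     # candidate V(x) = x^T P x; solves A^T P + P A = -I

def V(xs):
    return np.einsum('...i,ij,...j->...', xs, P, xs)

def Vdot(xs):
    # derivative of V along trajectories: x^T (A^T P + P A) x
    Q = A.T @ P + P @ A
    return np.einsum('...i,ij,...j->...', xs, Q, xs)

rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=(5000, 2))   # sampled states

# Almost Lyapunov condition: V-dot < 0 outside a set of small measure,
# estimated here by the fraction of violating samples
violation_rate = np.mean(Vdot(xs) >= 0)
print(f"fraction of samples with non-decreasing V: {violation_rate:.3f}")
# → 0.000, since A^T P + P A = -I makes V-dot = -|x|^2 < 0 away from the origin
```

In the paper's setting V is a learned neural critic rather than a hand-picked quadratic, but the same sampled violation rate drives the estimate of the region of attraction.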

Overall algorithm:
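As a rough, hypothetical sketch of the training idea (the actual method trains a neural Lyapunov critic jointly with an RL policy; the quadratic parameterization V_W(x) = |Wx|² + ε|x|², the hinge loss, and the simple linear closed-loop system below are all illustrative assumptions, not the paper's algorithm), one can fit a Lyapunov candidate to sampled transitions by penalizing states where V fails to decrease:

```python
import numpy as np

# Hypothetical sketch: learn a quadratic Lyapunov candidate from sampled
# transitions of an assumed stable closed-loop system. The paper instead
# trains a neural critic alongside the policy.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])   # assumed stable closed-loop dynamics
dt = 0.05

def step(xs):
    # one Euler step for a batch of states
    return xs + dt * xs @ A.T

eps = 0.1                      # keeps V positive definite
W = np.eye(2)                  # learnable critic parameters

def V(W, xs):
    # V_W(x) = |W x|^2 + eps * |x|^2 >= eps * |x|^2 > 0 for x != 0
    return np.sum((xs @ W.T) ** 2, axis=-1) + eps * np.sum(xs ** 2, axis=-1)

lr, margin = 0.01, 1e-3
for _ in range(2000):
    xs = rng.uniform(-2.0, 2.0, size=(256, 2))
    xn = step(xs)
    # hinge penalty on samples where V fails to decrease along the step
    active = V(W, xn) - V(W, xs) + margin > 0
    if not active.any():
        continue
    xa, na = xs[active], xn[active]
    # gradient of the mean hinge wrt W: 2 W (E[x' x'^T] - E[x x^T])
    grad = 2.0 * W @ (na.T @ na - xa.T @ xa) / active.sum()
    W -= lr * grad

# fraction of fresh samples still violating the decrease condition
xs = rng.uniform(-2.0, 2.0, size=(5000, 2))
violation_rate = np.mean(V(W, step(xs)) >= V(W, xs))
print(f"violation rate after training: {violation_rate:.3f}")
```

The positive-by-construction parameterization sidesteps enforcing V > 0 as a loss term; the hinge term plays the role of the decrease condition in the Almost Lyapunov framework, which tolerates a small residual violation set.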

Published in ICRA 2021 [Paper] [Code coming soon!]

Results:

Inverted pendulum

SAC

PPO

LY (with neural Lyapunov critics)


Path-tracking control

(In training environment)

SAC

PPO

LY (with neural Lyapunov critics)

(In testing environment)

SAC

PPO

LY (with neural Lyapunov critics)


Quadrotor control

SAC

PPO

LY (with neural Lyapunov critics)


Walker

SAC

PPO

LY (with neural Lyapunov critics)

Bibtex

@inproceedings{chang2021stabilizing,
    title={Stabilizing Neural Control Using Self-Learned Almost Lyapunov Critics},
    author={Ya-Chien Chang and Sicun Gao},
    booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
    year={2021}
}