The IRIM Seminar Series | April 14, 2021 | 12:15 PM EDT
Conflict-Aware Risk-averse and Safe Reinforcement Learning: A Meta-Cognitive Learning Framework
Hamidreza Modares | Assistant Professor, Department of Mechanical Engineering, Michigan State University
Access the Event Here: https://tinyurl.com/IRIMVSSspring5
Abstract
While the success of reinforcement learning (RL) in computer games has been an impressive engineering feat, safety-critical settings such as unmanned vehicles, unlike computer games, must operate in the real world, which makes the entire enterprise unpredictable. Standard RL practice generally implants pre-specified performance metrics or objectives into the RL agent to encode the designers’ intentions and preferences in achieving different and sometimes conflicting goals (e.g., cost efficiency, safety, speed of response, accuracy, etc.). Optimizing pre-specified performance metrics, however, cannot provide safety and performance guarantees across the vast variety of circumstances that the system might encounter in non-stationary and hostile environments. In this talk, I will discuss novel metacognitive RL algorithms that learn not only a control policy that optimizes accumulated reward values, but also which reward functions to optimize in the first place, so as to formally assure safety with good enough performance. I will present safe RL algorithms that adapt the focus of attention of the RL algorithm across its performance and safety objectives to resolve conflicts and thus assure the feasibility of the reward function in new circumstances. Moreover, model-free RL algorithms will be presented to solve the risk-averse optimal control (RAOC) problem, which optimizes the expected utility of outcomes while reducing the variance of the cost under aleatory uncertainties (i.e., randomness). This is because performance-critical systems must not only optimize the expected performance, but also reduce its variance to avoid performance fluctuations during the RL agent’s course of operation. To solve the RAOC problem, I will present three variants of RL algorithms and analyze their relative advantages for different situations and systems: 1) a one-shot static convex-program-based RL algorithm, 2) an iterative value iteration algorithm that solves a linear program at each iteration, and 3) an iterative policy iteration algorithm that solves a convex program at each iteration and guarantees the stability of the consecutive control policies.
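To make the RAOC idea concrete, the sketch below illustrates (not the speaker's algorithm, and with hypothetical names such as rollout_cost and risk_weight) a simple mean-variance objective, J(pi) = E[G] + lambda * Var[G], where G is the accumulated cost of a rollout: a risk-averse criterion prefers a policy whose cost fluctuates less even when the expected cost is the same.

```python
# Minimal, hypothetical sketch of a mean-variance (risk-averse) objective.
# This is illustrative only; it is not the RAOC algorithms from the talk.
import random


def rollout_cost(noise_scale, horizon=50):
    """Accumulated cost of one simulated episode under aleatory (random) noise."""
    return sum(1.0 + random.gauss(0.0, noise_scale) for _ in range(horizon))


def risk_averse_objective(costs, risk_weight=0.5):
    """Mean-variance objective: expected cost plus a penalty on its variance."""
    n = len(costs)
    mean = sum(costs) / n
    var = sum((c - mean) ** 2 for c in costs) / n
    return mean + risk_weight * var


if __name__ == "__main__":
    random.seed(0)
    # Two hypothetical policies with (nearly) equal expected cost but
    # different variability in the accumulated cost.
    low_var_costs = [rollout_cost(noise_scale=0.1) for _ in range(200)]
    high_var_costs = [rollout_cost(noise_scale=1.0) for _ in range(200)]
    print("low-variance policy :", risk_averse_objective(low_var_costs))
    print("high-variance policy:", risk_averse_objective(high_var_costs))
    # The risk-averse objective favors the low-variance policy, which is the
    # behavior a performance-critical system wants during operation.
```

The same trade-off parameter (here, risk_weight) is what distinguishes a purely expectation-optimizing controller from a risk-averse one; the talk's three RL variants differ in how they solve the resulting optimization, not in this underlying criterion.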
Speaker
Hamidreza Modares is an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering at the Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems.
-
Mar 24
ML@GT Virtual Seminar: Ellie Pavlick, Brown University
https://primetime.bluejeans.com/a2m/register/esbdzzaf
ML@GT is hosting a virtual seminar featuring Ellie Pavlick from Brown University.
-
Mar 10
ML@GT Virtual Seminar: Csaba Szepesvari, University of Alberta
primetime.bluejeans.com/a2m/register/ddtatyph
ML@GT invites you to a virtual seminar featuring Csaba Szepesvari from the University of Alberta.