While the success of reinforcement learning (RL) in computer games is an impressive engineering feat, safety-critical systems such as unmanned vehicles must, unlike games, operate in the real world, where conditions are unpredictable. Standard RL practice embeds pre-specified performance metrics or objectives into the RL agent to encode the designer's intentions and preferences across different, and sometimes conflicting, goals (e.g., cost efficiency, safety, speed of response, and accuracy). Optimizing pre-specified performance metrics, however, cannot provide safety and performance guarantees across the vast variety of circumstances that the system might encounter in non-stationary and hostile environments.

In this talk, I will discuss novel metacognitive RL algorithms that learn not only a control policy that optimizes accumulated reward, but also which reward function to optimize in the first place, so as to formally assure safety with good enough performance. I will present safe RL algorithms that adapt the focus of attention of the RL algorithm across its various performance and safety objectives to resolve conflicts and thus assure the feasibility of the reward function in a new circumstance. Moreover, I will present model-free RL algorithms that solve the risk-averse optimal control (RAOC) problem: optimizing the expected utility of outcomes while reducing the variance of cost under aleatory uncertainty (i.e., randomness). This matters because performance-critical systems must not only optimize expected performance but also reduce its variance, to avoid performance fluctuations during RL's course of operation.
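The risk-averse objective described above can be sketched in a standard mean-variance form (an illustrative assumption: the abstract does not specify the talk's exact RAOC formulation, stage cost \(c_t\), or trade-off weight \(\lambda\)):

```latex
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c_t \right]
\;+\; \lambda \,\mathrm{Var}_{\pi}\!\left(\sum_{t=0}^{\infty} \gamma^{t} c_t \right),
\qquad \lambda > 0,
```

where larger \(\lambda\) trades expected cost for lower cost variance.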
To solve the RAOC problem, I will present three variants of RL algorithms and analyze their advantages and the situations and systems each is suited to: 1) a one-shot RL algorithm based on a static convex program; 2) an iterative value iteration algorithm that solves a linear program at each iteration; and 3) an iterative policy iteration algorithm that solves a convex optimization at each iteration and guarantees the stability of the consecutive control policies.
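As a rough illustration of the risk-averse idea (not the talk's algorithms: the toy MDP, the per-step variance penalty, and the function name below are all hypothetical), a tabular value iteration can penalize the variance of the random stage cost alongside its mean:

```python
# Tabular value iteration with a per-step variance penalty: a toy stand-in
# for risk-averse optimal control. The talk's convex-program, LP, and
# policy-iteration variants are NOT reproduced here.

def risk_averse_value_iteration(states, actions, P, cost, lam, gamma=0.9,
                                tol=1e-8, max_iter=10_000):
    """Minimize expected stage cost plus lam * (per-step cost variance).

    P[s][a]    -- list of (probability, next_state) pairs
    cost[s][a] -- list of (probability, cost) pairs for the random stage cost
    """
    V = {s: 0.0 for s in states}
    for _ in range(max_iter):
        V_new = {}
        for s in states:
            q = []
            for a in actions:
                mean_c = sum(p * c for p, c in cost[s][a])
                var_c = sum(p * (c - mean_c) ** 2 for p, c in cost[s][a])
                future = sum(p * V[s2] for p, s2 in P[s][a])
                q.append(mean_c + lam * var_c + gamma * future)
            V_new[s] = min(q)  # risk-adjusted, cost-minimizing backup
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
    return V


# Toy 2-state MDP: at state 0, action 0 is "risky" (slightly cheaper mean
# cost, high variance) and action 1 is "safe" (deterministic cost); both
# lead to the absorbing state 1, whose cost is a deterministic 1.0.
states, actions = [0, 1], [0, 1]
P = {0: {0: [(1.0, 1)], 1: [(1.0, 1)]},
     1: {0: [(1.0, 1)], 1: [(1.0, 1)]}}
cost = {0: {0: [(0.5, 0.0), (0.5, 3.8)],   # mean 1.9, variance 3.61
            1: [(1.0, 2.0)]},              # mean 2.0, variance 0
        1: {0: [(1.0, 1.0)], 1: [(1.0, 1.0)]}}

V_neutral = risk_averse_value_iteration(states, actions, P, cost, lam=0.0)
V_averse = risk_averse_value_iteration(states, actions, P, cost, lam=1.0)
```

Because the variance penalty is nonnegative, the risk-averse value can only be larger than or equal to the risk-neutral one; here the penalty flips the optimal choice at state 0 from the risky action (lower mean, high variance) to the safe one.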