Representation-based Reinforcement Learning for Dynamical Systems

Dr. Li, Na (Lina)

Professor
Department of Electrical Engineering and Applied Mathematics
Harvard University


Seminar Information

Seminar Series
Dynamic Systems & Controls

Seminar Date - Time
May 28, 2024, 3:00 pm - 4:00 pm

Seminar Location
EBU II 479, Von Karman-Penner Seminar Room


Abstract

The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet translating these successes to the domain of dynamical physical systems remains a significant challenge. Closing the loop from data to actions in these systems faces many difficulties, stemming from the need for sample efficiency and computational feasibility, along with many other requirements such as verifiability, robustness, and safety. In this talk, we bridge this gap by introducing innovative representations to develop nonlinear stochastic control and reinforcement learning methods. The key idea is to represent the stochastic, nonlinear dynamics linearly in a nonlinear feature space. We present a comprehensive framework for developing control and learning strategies that achieve efficiency, safety, and robustness with provable performance. We also show how the representation can be used to close the sim-to-real gap. Lastly, we briefly present some concrete real-world applications, discussing how domain knowledge is applied in practice to further close the loop from data to actions.
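As a rough illustration of the kind of representation the abstract alludes to (the specific construction used in the talk may differ), one common way to make stochastic, nonlinear dynamics linear in a feature space is to assume the transition kernel factorizes through learned features; the symbols phi, mu, and w below are illustrative placeholders, not notation taken from the talk:

    P(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle,
    \qquad
    Q^{\pi}(s, a) = \langle \phi(s, a), w^{\pi} \rangle,

where the second identity holds when the reward is also linear in the same features phi. Under such an assumption, Bellman backups and policy evaluation reduce to linear operations on the feature representation, which is what makes efficient planning and learning with provable guarantees possible even though the underlying dynamics are nonlinear.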

Speaker Bio

Na Li is the Winokur Family Professor of Electrical Engineering and Applied Mathematics at Harvard University. She received her Bachelor's degree in Mathematics from Zhejiang University in 2007 and her Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 2013. She was a postdoctoral associate at the Massachusetts Institute of Technology from 2013 to 2014. She has held a variety of short-term visiting appointments, including at the Simons Institute for the Theory of Computing, MIT, and Google Brain. Her research lies in the control, learning, and optimization of networked systems, including theory development, algorithm design, and applications to real-world cyber-physical societal systems. She has been an associate editor for IEEE Transactions on Automatic Control, Systems & Control Letters, and IEEE Control Systems Letters, and has served on the organizing committees of several conferences. She received the NSF CAREER Award (2016), the AFOSR Young Investigator Award (2017), the ONR Young Investigator Award (2019), the Donald P. Eckman Award (2019), the McDonald Mentoring Award (2020), and the IFAC Manfred Thoma Medal (2023), along with other awards.