Seminar Information
Our understanding of multi-robot coordination and control has experienced great advances to the point where deploying multi-robot systems in the near future seems to be a
The Dynamic Systems and Control group at UC San Diego integrates, at a fundamental level, system design, modeling, and control disciplines to improve the dynamic response of engineering systems using feedback. As such, the research areas of the Dynamic Systems and Control group form a joint activity spanning systems integration, dynamic system modeling, feedback control design, and the fundamentals of systems theory as applied to linear and nonlinear dynamic systems, mechatronics, structural control, aerospace, and fluid-mechanical systems.
Increased autonomy can have many advantages, including increased safety and reliability, improved reaction time and performance, reduced personnel burden with associated cost savings, and the ability to continue operations in communications-degraded or denied environments. Artificial Intelligence for Small Unit Maneuver (AISUM) envisions a way for future expeditionary tactical maneuver elements to team with intelligent adaptive systems.
In this talk we discuss probabilistic formulations of system identification, with particular focus on handling sparse, noisy, and indirect data. We introduce the problem from a Bayesian perspective and discuss how it provides a principled mechanism for fusing information and data. We can extract estimators of the system from the posterior distribution and compare them to commonly used least-squares optimization objectives in the literature, ranging from Hankel matrix/Markov parameter methods to sparse identification of nonlinear dynamics (SINDy) to dynamic mode decomposition.
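As a minimal illustration of the sparse-identification idea mentioned in the abstract (a sketch under assumed conditions, not the speaker's method), the core SINDy iteration is a sequentially thresholded least-squares fit of derivative data against a library of candidate terms. The toy system, noise level, and threshold below are all assumptions for illustration.

```python
import numpy as np

# Assumed toy 1-D system for illustration: dx/dt = -2*x + 0.5*x**3.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
dxdt = -2.0 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(200)

# Library of candidate terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares (the core SINDy iteration):
# fit, zero out small coefficients, refit on the surviving terms.
xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

print(np.round(xi, 2))  # only the x and x^3 coefficients should survive
```

With low noise, the thresholding recovers the sparse structure of the assumed system: the constant and quadratic coefficients are driven to exactly zero.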
In this presentation we describe an iterative forward-backward algorithm that aims to solve discrete-time stochastic optimal control problems. It is inspired both by the max-plus numerical methods introduced by McEneaney and by the Stochastic Dual Dynamic Programming (SDDP) algorithm of Pereira and Pinto.
At each time step, our algorithm builds two sequences of approximations of the (Bellman) value function: one as max-plus linear combinations of basic functions, the other as min-plus linear combinations of other basic functions.
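The two approximation sequences can be pictured on a toy example (an illustrative sketch, not the algorithm itself): a convex value function is bracketed from below by a max-plus combination of supporting hyperplanes and from above by a min-plus combination of Lipschitz caps. The function V(x) = x^2, the centers, and the Lipschitz bound below are all assumed for illustration.

```python
import numpy as np

# Assumed toy value function on [-1, 1].
def V(x):
    return x**2

centers = np.linspace(-1.0, 1.0, 5)  # points where each basic function is built
L = 2.0                              # Lipschitz bound for V on [-1, 1]

def lower(x):
    # Max-plus combination: max over tangent lines V(c) + V'(c) * (x - c).
    return np.max([c**2 + 2.0 * c * (x - c) for c in centers], axis=0)

def upper(x):
    # Min-plus combination: min over caps V(c) + L * |x - c|.
    return np.min([c**2 + L * np.abs(x - c) for c in centers], axis=0)

xs = np.linspace(-1.0, 1.0, 101)
assert np.all(lower(xs) <= V(xs) + 1e-12)  # lower sequence stays below V
assert np.all(V(xs) <= upper(xs) + 1e-12)  # upper sequence stays above V
gap = float(np.max(upper(xs) - lower(xs)))
```

The gap between the two envelopes shrinks as more basic functions are added, which is the sense in which the two sequences sandwich the value function.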
Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) efficiently learning stability certificates
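To make "stability certificate" concrete, here is a minimal sketch for an assumed toy linear system (not the speakers' learning method): a quadratic Lyapunov function V(x) = x^T P x certifies stability of dx/dt = A x when P is positive definite and A^T P + P A is negative definite.

```python
import numpy as np

# Assumed stable toy system: eigenvalues of A are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Solve the Lyapunov equation A^T P + P A = -I via the Kronecker identity
# vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P), with column-major vec.
n = A.shape[0]
I = np.eye(n)
M = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(M, -I.flatten(order="F")).reshape(n, n, order="F")
P = 0.5 * (P + P.T)  # symmetrize against round-off

# Check the certificate conditions numerically.
assert np.all(np.linalg.eigvalsh(P) > 0)                # P positive definite
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)  # decrease condition
```

For linear systems such a certificate can be computed directly; the learning setting described in the talk concerns finding certificates like V from data when the dynamics are unknown or nonlinear.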