Seminar Information
The Dynamic Systems and Control group at UC San Diego integrates, at a fundamental level, system design, modeling, and control disciplines to improve the dynamic response of engineering systems using feedback. As such, the group's research is a joint activity spanning systems integration, dynamic system modeling, feedback control design, and the fundamentals of systems theory, applied to linear and nonlinear dynamic systems, mechatronics, structural control, aerospace, and fluid-mechanical systems.
Programmatic advertising is at the heart of the business for companies such as Google, Facebook, and Yahoo. A Demand Side Platform is a particular business model for programmatic advertising, and its goal is to optimally spend an advertising budget. The optimization is challenging due to an underlying high-dimensional, nonlinear, time-varying, dynamic, and stochastic plant. In this talk we introduce the optimization problem and demonstrate how techniques from control engineering can be used to analyze and solve the problem.
Standard stochastic control methods assume that the probability distribution of uncertain variables is available. Unfortunately, in practice, obtaining accurate distribution information is a challenging task.
Reinforcement learning (RL) has been widely used to solve sequential decision-making problems in unknown stochastic environments. In this talk we first present a new zeroth-order policy optimization method for Multi-Agent Reinforcement Learning (MARL) with partial state and action observations, and for online learning in non-stationary environments. Zeroth-order optimization methods enable the optimization of black-box models that are available only in the form of input-output data; such models are common in the training of deep neural networks and in RL.
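As a rough illustration of the zeroth-order idea (a minimal sketch, not the speaker's method; the function and parameter names are illustrative), a two-point finite-difference estimator builds a search direction from function evaluations alone:

```python
import numpy as np

def zeroth_order_step(f, theta, step=0.01, smoothing=0.1, n_dirs=10, rng=None):
    """One gradient-ascent step using only evaluations of f (no gradients).

    Averages two-point finite-difference estimates over random directions:
        g ~ (f(theta + mu*u) - f(theta - mu*u)) / (2*mu) * u
    where mu is the smoothing radius and u a random Gaussian direction.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = rng.standard_normal(theta.shape)
        grad += (f(theta + smoothing * u) - f(theta - smoothing * u)) / (2 * smoothing) * u
    grad /= n_dirs
    return theta + step * grad  # ascent step on the black-box reward f
```

Because only evaluations of `f` are needed, the same loop applies whether `f` is a simulated episode reward or any other input-output map.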
System identification has a long history with several well-established methods, in particular for learning linear dynamical systems from input/output data. While the asymptotic properties of these methods are well understood as the number of data points goes to infinity or the noise level tends to zero, their behavior in the finite-data regime is far less studied. This talk will mainly focus on our analysis of the robustness of the classical Ho-Kalman algorithm and how it translates to non-asymptotic estimation error bounds as a function of the number of data samples.
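A minimal sketch of the classical Ho-Kalman idea (noise-free, SISO, illustrative dimensions): arrange the impulse-response (Markov) parameters into a Hankel matrix and factor it by SVD to recover a state-space realization.

```python
import numpy as np

def ho_kalman(markov, n):
    """Recover a realization (A, B, C) of order n from Markov parameters.

    markov: 1-D array with markov[k] = C A^k B, k = 0, 1, ...
    """
    r = len(markov) // 2
    # Hankel matrix H[i, j] = C A^(i+j) B
    H = np.array([[markov[i + j] for j in range(r)] for i in range(r)])
    U, s, Vt = np.linalg.svd(H)
    # Balanced factorization H = O @ R (observability x reachability)
    sqrt_s = np.sqrt(s[:n])
    O = U[:, :n] * sqrt_s           # rows: C, CA, CA^2, ...
    R = (Vt[:n, :].T * sqrt_s).T    # columns: B, AB, A^2 B, ...
    C = O[0:1, :]
    B = R[:, 0:1]
    # Shift invariance: O[:-1] @ A = O[1:]
    A = np.linalg.pinv(O[:-1, :]) @ O[1:, :]
    return A, B, C
```

With exact data the reconstructed realization reproduces the Markov parameters; the talk's analysis concerns how this recovery degrades when the Markov parameters are estimated from finitely many noisy samples.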
The aspiration of modern robotics is to achieve a level of adaptability, robustness, and safety that will allow the wide deployment of robots in unstructured domains and everyday human spaces. This requires progress across multiple components of robotics, from mechanisms to sensing, as well as decision-making and reasoning. This talk starts from robot planning algorithms that achieve asymptotic optimality for systems with significant dynamics.
Modern nonlinear control theory seeks to endow systems with properties such as stability and safety. Despite its successful deployment in various domains, uncertainty remains a significant challenge, while data offers a potential solution. In this talk, I will discuss two settings of data-driven nonlinear control, where uncertainty arises from 1) the dynamics model and 2) the sensing model. I will introduce robust control synthesis procedures based on Control Lyapunov and Control Barrier Functions. Using this framework, we will show data-dependent guarantees.
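To make the Control Barrier Function setting concrete, here is a minimal sketch under simplifying assumptions not taken from the talk (control-affine dynamics, a scalar input, a known barrier function h with Lie derivatives Lfh, Lgh, and a linear class-K term alpha*h): the pointwise safety filter reduces to a closed-form projection onto a half-space.

```python
def cbf_safety_filter(u_des, h, Lfh, Lgh, alpha=1.0):
    """Project a desired input onto the CBF constraint
        Lfh + Lgh * u + alpha * h >= 0,
    which renders the safe set {x : h(x) >= 0} forward invariant.
    For a scalar input the underlying QP has this closed-form solution.
    """
    slack = Lfh + Lgh * u_des + alpha * h
    if slack >= 0 or Lgh == 0:
        # Desired input already satisfies the constraint
        # (or the input does not affect h and nothing can be done).
        return u_des
    return u_des - slack / Lgh  # nearest input on the constraint boundary
```

For example, for a single integrator x' = u with h(x) = x (stay nonnegative), Lfh = 0 and Lgh = 1, so an aggressive desired input near the boundary gets clipped to the largest safe value.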
We consider the problem of optimal and constrained data-driven control for unknown systems. A novel data-enabled predictive control (DeePC) algorithm is presented that computes optimal and safe control policies driving the unknown system along a desired trajectory while satisfying system constraints. Our algorithm uses a finite number of data samples from the unknown system and is grounded in insights from subspace identification and behavioral systems theory. In particular, we use raw, unprocessed time-series data assembled in a Hankel matrix for data-driven estimation and prediction.
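A toy sketch of the underlying data-matrix idea (illustrative only: scalar signals, noise-free data, and no cost or constraints, so this is the prediction step rather than the full DeePC algorithm): stack the raw data in a Hankel matrix and predict by finding a combination of its columns that matches the most recent measurements.

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a 1-D signal w (one column per time shift)."""
    T = len(w)
    return np.array([[w[i + j] for j in range(T - L + 1)] for i in range(L)])

def data_driven_predict(u_data, y_data, u_ini, y_ini, u_future):
    """Predict outputs for u_future given recent (u_ini, y_ini), from raw data only.

    Solves [Up; Yp; Uf] g = [u_ini; y_ini; u_future] in least squares and
    returns Yf @ g, following the behavioral-systems viewpoint: every column
    of the Hankel matrices is a valid trajectory of the unknown system.
    """
    Tini, N = len(u_ini), len(u_future)
    L = Tini + N
    Hu, Hy = hankel(u_data, L), hankel(y_data, L)
    Up, Uf = Hu[:Tini], Hu[Tini:]
    Yp, Yf = Hy[:Tini], Hy[Tini:]
    lhs = np.vstack([Up, Yp, Uf])
    rhs = np.concatenate([u_ini, y_ini, u_future])
    g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return Yf @ g
```

With sufficiently exciting input data, matching the recent past pins down the system's internal state, so the predicted future outputs coincide with the true ones without ever identifying a model.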
Karl Åström once famously called automatic control "the hidden technology" in recognition of the fact that, despite its pervasiveness, it is rarely mentioned. Control is indeed a critical component of many technologies used in industry and in our everyday life. In this talk I want to illustrate the broad reach of control engineering through applications I have worked on over the last forty years.
Autonomous mobile robots are becoming pervasive in everyday life, and hybrid approaches that merge traditional control theory and modern data-driven methods are becoming increasingly important. In the first half of the seminar, we begin with a discussion of safety verification methods, and their computational and practical challenges. In the second half, we examine connections between optimal control and reinforcement learning, and between optimal control and visual navigation.