
The Fifth Bay Area Optimization Meeting

Start Time: 
Friday, May 17, 2019 - 9:00am
End Time: 
Friday, May 17, 2019 - 5:00pm
Location: 
Baskin Engineering Building E2, Room 180
Organizers: 
Yu Zhang and Qi Gong

Agenda details:

9:10 Welcome

9:20 Michael Friedlander, University of British Columbia, Vancouver

Title: Polar duality and atomic alignment

Abstract: The aim of structured optimization is to assemble a solution, using a given set of atoms, to fit a model to data. Polarity, which generalizes the notion of orthogonality from linear sets to general convex sets, plays a special role in a simple and geometric form of convex duality. The atoms and their implicit "duals" share a special relationship, and their participation in the solution assembly depends on a notion of alignment. This geometric perspective leads to practical algorithms for large-scale problems.
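A small numerical illustration of the polarity and alignment idea (my own example, not from the talk): with atoms ±e_i, the atomic gauge is the l1 norm and its polar gauge is the l-infinity norm. A dual vector z is "aligned" with x when the polar inequality ⟨x, z⟩ ≤ ‖x‖₁ ‖z‖∞ holds with equality, which forces x to be supported on atoms where |z_i| is maximal and sign-matched.

```python
import numpy as np

# Atoms ±e_i: atomic gauge = l1 norm, polar gauge = l_inf norm.
# Alignment means <x, z> attains the bound ||x||_1 * ||z||_inf.
x = np.array([2.0, 0.0, -3.0])
z = np.array([4.0, 1.0, -4.0])   # |z_i| maximal and sign-matched on supp(x)

lhs = x @ z                       # inner product
rhs = np.abs(x).sum() * np.abs(z).max()   # polar (Cauchy-Schwarz-type) bound
print(lhs, rhs)                   # equal -> x and z are aligned
```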

10:00 Patrick Combettes, North Carolina State University

Title: The pervasiveness of proximal point iterations

Abstract: The scope of the proximal point algorithm for finding a zero of a monotone operator may seem rather limited. We show that it can actually be used to devise and analyze a surprisingly broad class of algorithms in nonlinear analysis. In particular (joint work with J.-C. Pesquet), it will be seen that the proximal point formalism provides valuable new insights into the static and asymptotic properties of deep neural networks.
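For readers unfamiliar with the iteration in question, here is a minimal sketch (my example, not from the talk): the proximal point method x_{k+1} = prox_{λf}(x_k) applied to f(x) = |x|, whose proximal operator is the soft-thresholding map.

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal operator of f(x) = |x| (soft-thresholding):
    # argmin_x |x| + (1 / (2 * lam)) * (x - v)^2
    return np.sign(v) * max(abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=50):
    # Proximal point iteration: x_{k+1} = prox_{lam * f}(x_k)
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x

print(proximal_point(3.0))   # converges to the minimizer x* = 0
```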

10:40 Break

11:10 Mengdi Wang, Princeton University

Title: Learning to control in metric space

Abstract: We study online reinforcement learning for finite-horizon deterministic control systems with arbitrary state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q functions from experiences. We prove sublinear regret that depends on the doubling dimension of the state space with respect to the given metric, which is intrinsic and typically much smaller than the ambient dimension. We also establish a matching regret lower bound. The proposed method can be adapted to work for more structured systems.
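To make the optimistic-Q idea concrete, here is a toy sketch (my simplification, not the paper's algorithm): a Lipschitz "function approximation oracle" returns, at each step, the tightest upper bound on Q consistent with observed values under the metric, and the agent acts greedily on that optimistic estimate. The horizon H, Lipschitz constant L, and 1-D dynamics are all illustrative assumptions.

```python
import numpy as np

H, L = 3, 2.0                            # horizon and assumed Lipschitz constant
ACTIONS = np.linspace(-1.0, 1.0, 5)      # discretized action set

def step(s, a):
    # Deterministic toy dynamics; reward favors states near 0.
    s_next = float(np.clip(s + 0.5 * a, -1.0, 1.0))
    return s_next, -s_next ** 2

def q_optimistic(data_h, s, a):
    # Tightest Lipschitz upper bound consistent with observed
    # (state, action, value) triples, capped by a trivial bound.
    best = float(H)
    for (si, ai, vi) in data_h:
        best = min(best, vi + L * (abs(s - si) + abs(a - ai)))
    return best

data = [[] for _ in range(H)]            # per-step experience buffers
totals = []
for episode in range(30):
    s, traj = 1.0, []
    for h in range(H):
        a = max(ACTIONS, key=lambda act: q_optimistic(data[h], s, act))
        s_next, r = step(s, a)
        traj.append((h, s, a, r))
        s = s_next
    totals.append(sum(r for (_, _, _, r) in traj))
    v_tail = 0.0                          # exact value backup (deterministic system)
    for (h, si, ai, r) in reversed(traj):
        v_tail = r + v_tail
        data[h].append((si, ai, v_tail))

print(max(totals))                        # best episode return found so far
```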

11:50 Michael Jordan, University of California, Berkeley

Title: On the Theory of Gradient-Based Learning: A View from Continuous Time

Abstract: Gradient-based optimization has provided the theoretical and practical foundations on which recent developments in statistical machine learning have reposed. A complementary set of foundations is provided by Monte Carlo sampling, where gradient-based methods have also been leading the way in recent years. We explore links between gradient-based optimization algorithms and gradient-based sampling algorithms. Although these algorithms are generally studied in discrete time, we find that fundamental insights can be obtained more readily if we work in continuous time. A particularly striking finding is that there is a counterpart of Nesterov acceleration in the world of Langevin diffusion.
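A minimal sketch of the link between the two algorithm families (my example, not from the talk): the unadjusted Langevin algorithm is gradient descent on a potential U plus injected Gaussian noise, and it approximately samples from the density proportional to exp(-U(x)).

```python
import numpy as np

# Unadjusted Langevin algorithm:
#   x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * noise
# With U(x) = x^2 / 2, the target distribution is N(0, 1).
rng = np.random.default_rng(0)
step = 0.01
grad_U = lambda x: x

x, samples = 0.0, []
for k in range(50_000):
    x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
    if k > 5_000:                 # discard burn-in
        samples.append(x)

print(np.mean(samples), np.var(samples))   # approximately 0 and 1
```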

12:30 Lunch

2:10 Matthew Carlyle, Naval Postgraduate School

Title: Defending Maximum Flows from Worst-case Attacks

Abstract: We present a reformulation of the trilevel defender-attacker-defender maximum flow problem as an s-t-cut defense problem, and provide a corresponding decomposition algorithm for solving the problem. On moderate to large problems our reformulation results in significantly smaller master problem instances than the typical flow-based formulation. Our algorithm requires far fewer iterations and has faster solution times than the current standard nested decomposition algorithms. We provide small examples and an instance based on historical data to illustrate the formulation, and we report experimental results on examples of varying sizes and topologies.
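The reformulation rests on max-flow/min-cut duality: the value of a maximum s-t flow equals the capacity of a minimum s-t cut. A tiny illustration of that fact (my example, not the paper's decomposition algorithm), using Edmonds-Karp augmenting paths:

```python
from collections import deque

def max_flow(cap_edges, s, t):
    # cap_edges: dict {(u, v): capacity}. Edmonds-Karp (BFS augmenting paths)
    # on a residual graph with zero-capacity reverse edges.
    res = {}
    for (u, v), c in cap_edges.items():
        res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0) + c
        res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total              # no augmenting path: flow is maximum
        aug, v = float("inf"), t      # bottleneck capacity along BFS path
        while parent[v] is not None:
            aug = min(aug, res[parent[v]][v])
            v = parent[v]
        v = t                         # push flow, update residuals
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= aug
            res[v][u] += aug
            v = u
        total += aug

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(cap, 's', 't'))        # 5, matching the min cut {('s','a'), ('s','b')}
```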

2:50 Amitabh Basu, Johns Hopkins University

Title: Admissibility of solution estimators for stochastic optimization

Abstract: We look at stochastic optimization problems through the lens of statistical decision theory. In particular, we address admissibility, in the statistical decision theory sense, of the natural sample average estimator for a stochastic optimization problem (also known as the empirical risk minimization (ERM) rule in the learning literature). It is well known that for general stochastic optimization problems, the sample average estimator may not be admissible; this is known as Stein's paradox in the statistics literature. We show in this paper that for optimizing stochastic linear functions over compact sets, the sample average estimator is admissible.
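A sketch of the estimator under discussion (my example, with an invented cost vector): for a stochastic linear program min E[c^T x] over the probability simplex, the sample average / ERM rule replaces E[c] with the empirical mean of the observed cost vectors and optimizes that, which here means picking the vertex with smallest estimated cost.

```python
import numpy as np

# Hypothetical setup: minimize E[c^T x] over the probability simplex.
rng = np.random.default_rng(1)
true_c = np.array([1.0, 2.0, 0.5])           # unknown to the decision maker
samples = true_c + rng.standard_normal((200, 3))   # 200 noisy observations of c

c_hat = samples.mean(axis=0)                 # sample average estimator of E[c]
x_saa = np.zeros(3)
x_saa[np.argmin(c_hat)] = 1.0                # optimal vertex for the estimated cost

print(x_saa)                                 # puts all mass on the cheapest coordinate
```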

3:30 Break

4:00 Stephen Wright, University of Wisconsin, Madison

Title: Interactions between Control, Learning, and Optimization

Abstract: There has been cross-fertilization between the fields of control, machine learning, and optimization for many years. These interactions continue to expand into such areas as nonconvexity, reinforcement learning and model predictive control, and the application of neural nets in control contexts. After an overview of these areas, we focus on a topic in which all three areas interact: the use of dissipativity theory from control to analyze convergence of momentum-based optimization algorithms used in learning. This is joint work with Bin Hu and Laurent Lessard.
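The momentum methods that the dissipativity analysis certifies can be sketched in a few lines (my example with hand-picked constants, not the talk's analysis): the heavy-ball iteration x_{k+1} = x_k - α ∇f(x_k) + β (x_k - x_{k-1}) on a simple quadratic.

```python
# Heavy-ball (Polyak momentum) on f(x) = x^2; alpha and beta are
# illustrative step-size and momentum constants chosen for stability.
grad = lambda x: 2 * x
alpha, beta = 0.1, 0.5

x_prev = x = 5.0
for _ in range(200):
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

print(x)   # approaches the minimizer x* = 0
```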