KTH Royal Institute of Technology
June 10-14 2024
Organization
Location and Schedule
Instructors
Lecture Plans
Summary of the Course (tentative)
Gradient-flow dynamics are at the forefront of numerous engineering and machine learning problems. Many algorithms that use gradient dynamics are large-scale, with computations performed in a decentralized fashion over a network, either because information is only available locally or because centralized gradient computations are too costly. This course introduces the primary toolkit for designing and analyzing efficient decentralized optimization algorithms.

We begin with the fundamentals of decentralized optimization problems, in which agents aim to find a common optimizer through local interactions over a network. We study both discrete- and continuous-time dynamics, explore the effects of graph structure through a self-contained introduction to the basics of algebraic graph theory, and examine the impact of uncertainty and randomness. A central focus will be on connecting the performance of distributed optimization methods to the features of the underlying network.

The second part of the course addresses fundamental limits in a broad class of distributed optimization algorithms and establishes state-of-the-art worst-case performance bounds. We will also study robustness and adversarial behavior.

The final part of the course concentrates on decentralized greedy algorithms for submodular optimization problems with limited information. These algorithms have found numerous recent applications, such as data summarization and selection problems in natural language processing, as well as sensor deployment in robotics.
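To make the consensus-based viewpoint concrete, here is a minimal Python sketch of decentralized gradient descent on a toy problem. The ring network, the quadratic local costs, and the Laplacian-based mixing matrix are illustrative assumptions, not part of the course material; the sketch only shows how the graph (through its Laplacian) enters the dynamics and how agents approach a common optimizer through local interactions.

```python
# Minimal sketch of consensus-based decentralized gradient descent (illustrative only).
# Each agent i holds a local quadratic cost f_i(x) = 0.5 * (x - b_i)^2; the network-wide
# minimizer is the average of the b_i. Agents mix their estimates with neighbors on a
# ring graph via a doubly stochastic matrix W built from the graph Laplacian, then take
# a local gradient step with a diminishing step size.
import numpy as np

n = 8                                   # number of agents
b = np.linspace(0.0, 2.0, n)            # local targets; global optimum is mean(b)

# Ring graph: adjacency, Laplacian, and mixing matrix W = I - L/3 (doubly stochastic
# because every node has degree 2).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0
    A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A
W = np.eye(n) - L / 3.0

x = np.zeros(n)                         # each agent's estimate of the optimizer
for k in range(1000):
    step = 1.0 / (k + 1)                # diminishing step size
    x = W @ x - step * (x - b)          # consensus mixing + local gradient step

print("agent estimates:", np.round(x, 3))
print("network optimum:", b.mean())
```

The spectrum of W (equivalently, of the graph Laplacian) controls how quickly the disagreement between agents shrinks, which is the kind of network-dependent performance question the first part of the course develops in detail.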
Outline (tentative)
- Decentralization in optimization and learning: an overview of the state-of-the-art
- Consensus-based methods: deterministic and stochastic, with discrete and continuous-time dynamics
- Worst-case studies and performance estimation for gradient dynamics
- Fundamental limits in distributed computations and optimization
- Robustness and resiliency in distributed optimization
- Distributed submodular optimization (a brief greedy sketch follows this list)
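As a pointer to the last topic, the following is a minimal Python sketch of the classical greedy method for monotone submodular maximization under a cardinality constraint, applied to a toy sensor-coverage problem. The coverage sets and budget are made-up illustrations, not the course's examples; the distributed, limited-information variants covered in the course build on this centralized baseline, which achieves the well-known (1 - 1/e) approximation guarantee.

```python
# Illustrative sketch: greedy maximization of a monotone submodular coverage function.
# Pick `budget` sensors so that the number of covered ground points is (approximately)
# maximized; each greedy step adds the sensor with the largest marginal coverage gain.
import random

random.seed(0)
ground = range(50)                                         # points to be covered
sensors = {s: set(random.sample(range(50), 8))             # each sensor covers 8 points
           for s in range(12)}

def coverage(selected):
    """Submodular objective: number of ground points covered by the selected sensors."""
    covered = set()
    for s in selected:
        covered |= sensors[s]
    return len(covered)

def greedy(budget):
    """Add sensors one at a time, each maximizing the marginal coverage gain."""
    selected = []
    for _ in range(budget):
        best = max((s for s in sensors if s not in selected),
                   key=lambda s: coverage(selected + [s]) - coverage(selected))
        selected.append(best)
    return selected

choice = greedy(budget=4)
print("selected sensors:", choice,
      "covering", coverage(choice), "of", len(ground), "points")
```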