Keynotes
We are honored to invite leading experts to share their cutting-edge research and viewpoints. The talks will cover recent advances, open challenges, and emerging techniques in federated learning.
Title: Federated Target Trial Emulation
Abstract: Target trial emulation (TTE) is the process of simulating a randomized controlled trial (RCT) with real-world data such as electronic health records. Given the huge cost of RCTs in both money and time, TTE is both important and attractive. One crucial factor for a TTE to succeed is sample size. However, because real-world patient data are sensitive, it is not straightforward to construct a data repository with a sample size sufficient to run different TTEs. In this talk, I will present some of our recent preliminary work on federated TTE, demonstrate its promise, and discuss the remaining challenges.
Bio: Fei Wang is a Professor at Weill Cornell Medicine. His research focuses on machine learning and its applications in healthcare. For more details, please visit his website at https://wcm-wanglab.github.io/bio.html
Title: Federated Optimization for Multi-task Learning
Abstract: The extensive existing literature on federated optimization largely considers training a model for a single task. In this talk, I will discuss two recent works on training one or more models for multiple tasks (either independent or related) in a federated manner. The first work addresses the resource allocation and aggregation challenges that arise when training a separate model for each task. The second work considers a multi-objective optimization problem in which we train a single model for all tasks, and designs an algorithm that finds the optimal weighting for each task in an adaptive and communication-efficient manner.
Bio: Gauri Joshi is an associate professor in the ECE department at Carnegie Mellon University. Gauri received her Ph.D. from MIT EECS, and her B.Tech and M.Tech from the Indian Institute of Technology (IIT) Bombay. Her awards include the MIT Technology Review 35 Under 35 Award, the ONR Young Investigator Award, the NSF CAREER Award, Best Paper awards at MobiHoc 2022 and SIGMETRICS 2020, and the Institute Gold Medal of IIT Bombay (2010).
Title: Using Synthetic Data to Learn from Federated, Private Client Data
Abstract: On-device training is currently the most common approach for training machine learning (ML) models on private, distributed user data. Despite its popularity, on-device training has several drawbacks: (1) most user devices are too small to train large models on-device, (2) on-device training is communication- and computation-intensive, and (3) on-device training can be difficult to debug and deploy. To address these problems, we study a pipeline in which models are trained at a central server on differentially private synthetic data derived from client devices. We show how a recent algorithm called Private Evolution can outperform traditional federated learning baselines in accuracy and cost. We further show how the Private Evolution algorithm can be reformulated as a preference optimization problem, thereby significantly improving the performance of private synthetic data relative to on-device baselines and prior synthetic-data baselines.
Bio: Giulia Fanti is the Angel Jordan Associate Professor of Electrical and Computer Engineering at Carnegie Mellon University. Her research interests span the security, privacy, and efficiency of distributed systems. She is a member of the Department of Commerce Information Security and Privacy Advisory Board and a two-time fellow of the World Economic Forum’s Global Future Council on Cybersecurity. Her work has been recognized with several awards, including best paper awards, a Sloan Fellowship, an Intel Rising Star Faculty Award, and an ACM SIGMETRICS Rising Star Award.
Title: Incentive Aligned Federated Learning and Unlearning Methods
Abstract: Federated learning enables machine learning algorithms to be trained over decentralized edge devices without requiring the exchange of local datasets. However, the incentives of the participating agents are often overlooked in the design of such systems. We consider two scenarios in this talk. In the first, we consider strategic agents with different data distributions and analyze how the distribution of data affects agents' incentives to voluntarily participate in, and obediently follow, traditional federated learning algorithms. We design a Faithful Federated Learning (FFL) mechanism, based on the FedAvg method and the VCG mechanism, which achieves (probably approximate) optimality, faithful implementation, voluntary participation, and a balanced budget. In the second scenario, we analyze an alternative approach that aligns individual agents' incentives to participate by allowing an unlearning option. We propose a multi-stage game-theoretic framework and study its equilibrium properties.
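For readers unfamiliar with the FedAvg baseline that the FFL mechanism above builds on, the following is a minimal illustrative sketch (not the FFL mechanism itself, and not code from the talk): each client runs a few local gradient steps on its own data, and the server averages the resulting models, weighted by client dataset size. All names and the least-squares setup are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """One client's local training: a few gradient steps on least-squares loss."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=2):
    """clients: list of (X, y) pairs, one per participating agent."""
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        # Each client trains locally starting from the current global model.
        local_models = [local_update(w, X, y) for X, y in clients]
        # Server aggregates: average of client models, weighted by data size.
        w = np.average(local_models, axis=0, weights=sizes)
    return w

# Toy example: two clients whose data come from the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))
w_hat = fedavg(clients)
```

In this noiseless toy setting the aggregated model `w_hat` recovers `w_true`; the incentive questions studied in the talk arise precisely because, in practice, strategic clients need not run their local updates honestly.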
Bio: Ermin Wei is an Associate Professor in the Electrical and Computer Engineering Department and the Industrial Engineering and Management Sciences Department at Northwestern University. She completed her PhD in Electrical Engineering and Computer Science at MIT in 2014, advised by Professor Asu Ozdaglar. Her team won 2nd place in Challenge 1 of the GO Competition, an electricity grid optimization competition organized by the Department of Energy. Wei's research interests include distributed optimization methods, convex optimization and analysis, smart grids, communication systems, energy networks, and market economic analysis.