Goals

Deep Learning has facilitated various high-stakes applications such as crime detection, urban planning, drug discovery, and healthcare. Its continued success hinges on learning from massive data drawn from diverse sources, ranging from data with independent distributions to graph-structured data capturing intricate inter-sample relationships. Scaling up data access requires global collaboration among distributed data owners. Yet centralizing all data sources on an untrusted server puts users' data at risk of privacy leakage or regulatory violation. Federated Learning (FL) is a de facto decentralized learning framework that enables knowledge aggregation from distributed users without exposing private data. Despite promising advances in FL, new challenges are emerging as FL is integrated with the rising needs and opportunities in data mining, graph analytics, foundation models, generative AI, and new interdisciplinary applications in science. By hosting this workshop, we aim to attract a broad audience, including researchers and practitioners from academia and industry interested in the emergent challenges in FL. As an effort to advance the fundamental development of FL, this workshop will encourage the exchange of ideas on the trustworthiness, scalability, and robustness of distributed data mining and graph analytics and their emergent challenges.

Organizers

Junyuan Hong

General Chair

University of Texas at Austin

Carl Yang

General Chair

Emory University

Zhuangdi Zhu

Program Chair

George Mason University

Zheng Xu

Program Chair

Google

Nathalie Baracaldo

IBM Research

Neil Shah

Snap

Salman Avestimehr

USC & FedML

Jiayu Zhou

Michigan State University

Volunteers

Shuyang Yu

Michigan State University