The success of deep learning stems from the availability of big training data and massive computation power. However, in many applications, training data are generated by individuals or organizations who hesitate to share data that may expose private information. Federated learning has been proposed to enable distributed computing nodes to collaboratively train models without exposing their own data. Its basic idea is to let these computing nodes train local models on their own data and then upload the local models, instead of raw data, to a logically centralized parameter server that synthesizes a global model.
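The aggregation step described above can be sketched as a weighted parameter average, as in the well-known FedAvg rule. The snippet below is a minimal illustration only, not a method proposed by this workshop; the function name `federated_average` and the per-client weighting scheme are assumptions for the example.

```python
import numpy as np

def federated_average(local_models, weights=None):
    """Aggregate local model parameters into a global model by
    weighted averaging (a FedAvg-style rule).

    local_models: list of parameter vectors (np.ndarray), one per client.
    weights: optional per-client weights (e.g. proportional to local
             dataset sizes); defaults to a uniform average.
    """
    if weights is None:
        weights = np.ones(len(local_models))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalize client contributions
    stacked = np.stack(local_models)       # shape: (num_clients, num_params)
    return np.tensordot(weights, stacked, axes=1)

# Each client trains locally and shares only its parameters, never raw data:
client_params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_params = federated_average(client_params, weights=[1, 3])
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

Note that the server sees only model parameters, which is precisely why the inference attacks on shared models discussed below remain a concern.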
Despite its promise of data protection, federated learning faces new security and privacy threats. Recent research has shown that it is possible to infer training data information by observing shared models. In addition, there is a strong desire to protect the models themselves, because model design requires significant investment and trained models are treated as important digital assets; yet models are exposed to all participants in the default design of federated learning. Furthermore, malicious participants may compromise the whole learning process by sharing corrupted models. There may also exist free-riders that benefit from the shared model without contributing to it.
Addressing the above security and privacy challenges of federated learning requires significant research effort on theory, algorithms, architectures, and experience with system deployment and maintenance. Therefore, this workshop aims to offer a platform for researchers from both academia and industry to publish recent research findings and to discuss opportunities, challenges, and solutions related to the security and privacy of federated learning.
Topics of interest include, but are not limited to:
Security and privacy analysis of federated learning
Data privacy enhancement of federated learning
Model protection of federated learning
Secure multi-party computation for federated learning
Homomorphic encryption for federated learning
Differential privacy for federated learning
Tradeoffs between privacy and efficiency in federated learning
Software system security of federated learning
Hardware security of federated learning
Network security of federated learning
Quantum security for federated learning
Blockchain for security and privacy protection of federated learning
Emerging threats and attacks on federated learning
Submission deadline: March 15, 2023
Notification: April 19, 2023
Camera-ready: May 1, 2023
Submissions must be anonymous, with no author names, affiliations, acknowledgments, or obvious self-references.
All submissions must follow the original LNCS format, with a page limit of 18 pages including references. Submissions not meeting these guidelines risk rejection without consideration of their merits.
Authors are invited to submit papers through the EasyChair submission system. The proceedings of the workshop will be published by Springer in the LNCS series. A best paper award will be selected by the committee based on the reviews.
Kouichi Sakurai, Kyushu University, Japan
Peng Li, The University of Aizu, Japan
Albert Cheng, University of Houston, USA
Celimuge Wu, University of Electro-Communications, Japan
Chao Fang, Beijing University of Technology, China
Feng Ye, University of Dayton, USA
Ghassan Karame, Ruhr-University Bochum, Germany
Hiroaki Kikuchi, Meiji University, Japan
Ilsun You, Kookmin University, Korea
Kevin I-Kai Wang, University of Auckland, New Zealand
Karuna P. Joshi, UMBC, USA
Quan Chen, Shanghai Jiao Tong University, China
Raylin Tso, National Chengchi University, Taiwan
Rodrigo Roman, University of Malaga, Spain
Soufiene Djahel, University of Huddersfield, UK
Song Guo, Hong Kong Polytechnic University, China
Sushmita Ruj, UNSW, Australia
Xiaoyan Wang, Ibaraki University, Japan
Yufeng Zhan, Beijing Institute of Technology, China
Xiaokang Zhou, Gifu University, Japan
Yingjiu (Joe) Li, University of Oregon, USA
Zekeriya Erkin, Delft University of Technology, The Netherlands