Abstract
AI is adept at exploiting large quantities of data, sometimes including sensitive personal data, and can therefore adversely affect individuals' privacy. Data privacy concerns significantly shape the course of next-generation AI. Users do not trust any party holding their data and need privacy-preserving intelligent systems. In addition, several regulations mandate that organizations handle users' data in ways that do not compromise their privacy and give users control over their data. Federated Learning (FL) emerged as a privacy-preserving technology for data-intensive machine learning by training models on-site or on-device. However, several concerns about federated learning have emerged due to: (i) the dynamic, distributed, heterogeneous, and collaborative nature of client devices, (ii) membership inference and model inversion attacks affecting the overall privacy and security of FL systems, (iii) the need for strict compliance with data privacy and protection laws, (iv) vulnerabilities at local client devices leading to data leakage, and (v) the diversity and ubiquity of smart devices collecting real-time multimodal data, leading to a lack of standardization efforts for security and privacy management frameworks. In this paper, we discuss (a) how federated learning can help preserve privacy, (b) the need to improve security and privacy in federated learning systems, and (c) privacy regulations and their application to federated learning in various business domains; we also (d) propose a federated recommender system and demonstrate performance that matches the centralized setting.