Federated Learning Approaches for Decentralized Data Privacy in Machine Learning

EasyChair Preprint 14994 • 12 pages • Date: September 22, 2024

Abstract

As data privacy concerns escalate, especially in domains such as healthcare and finance, the need for privacy-preserving machine learning methodologies has become paramount. Federated learning (FL) emerges as a revolutionary paradigm that facilitates collaborative model training across distributed devices, ensuring that raw data remains localized. This paper delves into various federated learning strategies, analyzing their efficacy in preserving privacy while maintaining robust model performance. We examine classical algorithms such as Federated Averaging (FedAvg) and Federated SGD (FedSGD) alongside more recent approaches such as Federated Proximal (FedProx), which addresses data heterogeneity challenges. Through rigorous evaluation on a synthetic dataset mimicking real-world conditions, we provide a comprehensive assessment of these approaches, focusing on critical metrics such as accuracy, communication efficiency, and model convergence. Our findings underscore the potential of federated learning to offer a balanced solution to the trade-offs between privacy, efficiency, and accuracy, paving the way for broader adoption across various sectors.

Keyphrases: Federated Averaging, Federated Learning, communication efficiency, data heterogeneity, decentralized data, privacy-preserving machine learning
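The core aggregation step behind FedAvg, one of the classical algorithms the abstract names, can be sketched as a sample-weighted average of client parameters. This is a minimal illustrative sketch, not the paper's implementation; the function and variable names here are assumptions for illustration.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Sample-weighted average of client model parameters (FedAvg step).

    client_weights: list of per-client parameter lists (one np.ndarray per layer)
    client_sizes:   number of local training samples held by each client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Weight each client's layer parameters by its share of the total data,
        # so clients with more samples contribute proportionally more.
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(agg)
    return averaged

# Toy example: two clients with a single-layer "model" (a 2-vector of weights).
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]  # the second client holds 3x as much data
global_weights = fedavg_aggregate(clients, sizes)
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

The weighting by `client_sizes` is what distinguishes FedAvg from a naive mean; FedProx modifies the client-side objective with a proximal term but reuses the same server-side averaging.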