The Internet of Things (IoT) has revolutionized data-driven applications but has also introduced critical challenges in privacy, scalability, and computation. Federated Learning (FL), a decentralized machine learning paradigm, has emerged as a promising solution to these concerns by enabling local model training on devices without transferring raw data. This review provides a comprehensive comparative analysis of existing FL frameworks and techniques in the context of IoT. We evaluate prominent frameworks such as TensorFlow Federated (TFF), Flower, and FATE, analyzing their architectural design, communication efficiency, security provisions, and performance across diverse IoT scenarios. We further explore various FL algorithms—such as FedAvg and FedProx—and optimization techniques including model compression, differential privacy, and homomorphic encryption. Our findings highlight the trade-offs among accuracy, resource consumption, and scalability, offering insights into framework suitability for applications such as smart homes, healthcare, and industrial IoT. Despite FL's potential, challenges remain in communication overhead, device heterogeneity, security vulnerabilities, and system dynamics. We conclude by identifying key research opportunities, including adaptive personalization, robust privacy-preserving techniques, and scalable FL architectures.
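To make the FedAvg algorithm mentioned above concrete, the sketch below shows its core server-side aggregation step: a data-size-weighted average of client model parameters. The function and variable names are hypothetical illustrations, not taken from any of the frameworks surveyed in this review.

```python
# Minimal sketch of the FedAvg aggregation step (illustrative only;
# real frameworks such as TFF or Flower wrap this in their own APIs).

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size-weighted average.

    client_weights: list of parameter vectors (one list of floats per client)
    client_sizes:   number of local training samples held by each client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its local data size.
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Example: two clients, where the second holds three times as much data.
print(fedavg([[1.0, 2.0], [5.0, 6.0]], [1, 3]))  # → [4.0, 5.0]
```

In a full FL round, each client would first run local SGD on its private data and upload only these parameter vectors, so raw data never leaves the device.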
Monika & Prof. Rishi Pal Singh
114-119
08.2025-49497881