A Survey on Federated Learning for TinyML: Challenges, Techniques, and Future Directions
DOI: https://doi.org/10.5281/zenodo.15240508

Keywords:
Federated Learning, TinyML, Distributed Computing, Model Optimization, Communication Efficiency, Privacy Preservation, Resource Constraints, Data Heterogeneity, Security, IoT, Industrial Automation, Edge AI

Abstract
The convergence of Federated Learning (FL) and Tiny Machine Learning (TinyML) represents a transformative step toward enabling intelligent, privacy-preserving applications on resource-constrained edge devices. TinyML focuses on deploying lightweight machine learning models on microcontrollers and other low-power devices, whereas FL facilitates decentralized learning across distributed datasets without compromising user privacy. This survey provides a comprehensive review of the current state of research at the intersection of FL and TinyML, exploring model optimization techniques such as quantization, pruning, and knowledge distillation, as well as communication-efficient algorithms such as federated averaging and gradient sparsification. Key challenges, including energy efficiency, scalability, and security in FL-TinyML systems, are highlighted. Real-world applications, such as personalized healthcare, smarter IoT devices, and industrial automation, demonstrate the potential of FL-TinyML to drive innovation in edge intelligence. The survey thus offers a timely and essential guide to this emerging field, paving the way for future research and development. Finally, this study identifies open research questions and proposes future directions, including hybrid optimization approaches, standardized evaluation frameworks, and the integration of blockchain for decentralized trust management.
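
To make the communication-efficiency discussion concrete, the sketch below illustrates federated averaging (FedAvg), the aggregation algorithm named above, in plain NumPy: each simulated client trains locally on its private data, and only model weights, never raw samples, are sent to the server for dataset-size-weighted averaging. The toy linear-regression setup and all names (local_update, fedavg_round) are illustrative assumptions for this sketch, not an implementation from the surveyed literature.

```python
# Minimal FedAvg sketch: clients fit a shared linear model on private data;
# the server averages their weights, weighted by local dataset size.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_data):
    """One FedAvg round: clients train locally; the server returns the
    size-weighted average of the uploaded weight vectors."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three simulated edge devices, each with a small private dataset
    clients = []
    for _ in range(3):
        X = rng.normal(size=(20, 2))
        y = X @ true_w + 0.05 * rng.normal(size=20)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(10):
        w = fedavg_round(w, clients)  # raw data never leaves the clients
    print("learned weights:", w)  # approaches [2, -1]
```

In a real FL-TinyML deployment, the per-round upload would additionally be compressed, e.g., via the quantization or gradient sparsification techniques the survey covers, since microcontroller-class devices operate under tight bandwidth and energy budgets.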