Model Protection and Privacy Preservation of Federated Learning using Homomorphic Encryption
Thesis (Open Access)


Manzur Elahi
Masters by Research, Murdoch University
2026
DOI: https://doi.org/10.60867/00000073

Abstract

Modern Machine Learning (ML) faces critical challenges, including fragmented data silos and increasingly strict data privacy regulations. Applications in sensitive fields such as healthcare and finance are particularly affected, since access to extensive datasets is restricted by stringent legal and ethical requirements. Sensitive data, such as pregnancy-related medical records, particularly those involving termination, carries high risks of stigma, domestic violence, and social repercussions if exposed. Secure and privacy-preserving ML methods are therefore essential for enabling collaborative learning without compromising individual privacy. This research is motivated by the need to protect sensitive information while supporting collaborative machine learning across institutions. Traditional centralized approaches risk exposing private data, while decentralized methods such as Federated Learning (FL) still suffer from security vulnerabilities: FL allows model training on decentralized data, but shared model updates remain susceptible to inference attacks, model poisoning, and data leakage. These risks are particularly critical in domains where even indirect exposure can have severe consequences, so stronger model protection and privacy-preserving techniques are crucial for this field to succeed. To address these challenges, this study integrates Homomorphic Encryption (HE), specifically the CKKS scheme, with FL using the Federated Averaging (FedAvg) algorithm. HE enables computations on encrypted data, ensuring that model updates remain confidential during training. Our approach protects both client-side data and model integrity against a range of attacks, including gradient leakage and model poisoning, and it enhances trust in FL environments by eliminating the need to share raw or intermediate data with central servers or peers. Compared to traditional FL, the proposed FL framework integrated with CKKS significantly enhances data privacy.
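The core property the abstract relies on is that HE allows arithmetic on ciphertexts without decryption. The thesis uses the CKKS scheme (approximate arithmetic on real-valued vectors); as a much simpler stand-in illustrating the same principle, the following toy sketch implements the additively homomorphic Paillier scheme with deliberately tiny, insecure demo primes. This is not the CKKS construction used in the work — only an illustration of computing a sum on encrypted values.

```python
import math
import random

def keygen(p=293, q=433):
    # Toy Paillier keypair; primes are tiny demo values (insecure, illustration only)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                               # modular inverse of lambda mod n
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With generator g = n + 1, g^m mod n^2 simplifies to 1 + m*n
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n  # L(x) = (x - 1) / n, then scale by mu

pub, priv = keygen()
# Multiplying ciphertexts adds the underlying plaintexts (10 + 32 = 42)
c_sum = encrypt(pub, 10) * encrypt(pub, 32) % (pub[0] ** 2)
total = decrypt(priv, c_sum)  # 42
```

In a CKKS-based FL pipeline the same idea applies to real-valued weight vectors: the server aggregates encrypted client updates and never sees any individual plaintext update.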
Moreover, it maintains comparable training performance and model accuracy, demonstrating that the added security does not come at the cost of effectiveness, and it aligns with regulatory compliance standards such as HIPAA and GDPR. This research enables privacy-preserving ML in critical sectors such as healthcare, finance, and IoT, and supports secure collaboration across institutions without exposing sensitive data. Ultimately, it advances the development of trustworthy, privacy-aware Federated Learning systems.
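The FedAvg aggregation underlying the framework is a sample-count-weighted mean of client model updates. A minimal plain-Python sketch of that aggregation step (the function name, data shapes, and two-client example are illustrative, and encryption is omitted here):

```python
def fedavg(client_updates):
    """Sample-weighted average of client weight vectors (FedAvg aggregation step).

    client_updates: list of (num_samples, weights) pairs; shapes are illustrative.
    """
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total for i in range(dim)]

# Two hypothetical clients holding 100 and 300 local samples
global_weights = fedavg([(100, [1.0, 2.0]), (300, [3.0, 4.0])])  # [2.5, 3.5]
```

In the thesis's setting this weighted sum would be computed over CKKS ciphertexts, so the server performs the same aggregation without ever decrypting an individual client's update.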
