Flow of Knowledge: Federated Fine-Tuning of LLMs in Healthcare under Non-IID Conditions

Zeyu Chen, Yun Ji, Bowen Wang, Liwen Shi, Zijie Zeng, Sheng Zhang

Published: 2025/10/01

Abstract

Large language models (LLMs) show great promise in healthcare, but their application is hindered by data privacy restrictions and the challenges of cross-institution collaboration. Sensitive medical data cannot be centralized, and non-independent and identically distributed (non-IID) characteristics across institutions further complicate convergence and fairness. To address these issues, we present a federated fine-tuning approach based on Low-Rank Adaptation (LoRA), enabling privacy-preserving knowledge flow across institutions. The method iteratively combines local LoRA adaptation with global parameter aggregation, allowing efficient knowledge sharing without exposing raw data. A blockchain-based identity scheme is used to identify individual LLMs in this distributed network. We evaluate the approach on heterogeneous, highly non-IID medical text datasets; experiments demonstrate that federated LoRA not only enhances cross-client generalization but also improves the performance of the weakest client, achieving stable convergence and fairer outcomes. These findings highlight federated LoRA fine-tuning as a practical and effective paradigm for adapting LLMs in healthcare, offering a new path for multi-center medical AI collaboration.
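The sketch below illustrates one way a federated round of this kind could work: each client takes a local LoRA step on its private data, and a server computes a data-size-weighted average (FedAvg-style) of the adapter matrices only. The helper names (`init_adapter`, `local_lora_step`, `aggregate`), the toy dimensions, and the simplified single-layer loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of federated LoRA aggregation: only adapter weights (A, B)
# are trained and shared; raw client data never leaves the client.
import torch

RANK, D_IN, D_OUT = 8, 256, 256  # assumed LoRA rank and layer dimensions


def init_adapter():
    """One LoRA adapter: the frozen weight W is updated by B @ A; only A and B are trainable."""
    return {
        "A": torch.randn(RANK, D_IN) * 0.01,
        "B": torch.zeros(D_OUT, RANK),  # standard LoRA init: B = 0
    }


def local_lora_step(adapter, client_data, lr=1e-3):
    """Stand-in for a client's local fine-tuning pass over its private data.
    A real client would backpropagate an LLM loss; here a toy regression loss is used."""
    x, y = client_data
    A = adapter["A"].requires_grad_(True)
    B = adapter["B"].requires_grad_(True)
    pred = x @ (B @ A).T                      # low-rank update applied to the input
    loss = torch.nn.functional.mse_loss(pred, y)
    loss.backward()
    with torch.no_grad():
        A -= lr * A.grad
        B -= lr * B.grad
    return {"A": A.detach(), "B": B.detach()}


def aggregate(adapters, weights):
    """Server-side weighted average of LoRA parameters (FedAvg over adapters only)."""
    total = sum(weights)
    return {
        key: sum(w * a[key] for w, a in zip(weights, adapters)) / total
        for key in adapters[0]
    }


# One federated round over three clients with non-IID toy data.
global_adapter = init_adapter()
clients = [(torch.randn(32, D_IN), torch.randn(32, D_OUT)) for _ in range(3)]
sizes = [len(x) for x, _ in clients]

local = [
    local_lora_step({k: v.clone() for k, v in global_adapter.items()}, c)
    for c in clients
]
global_adapter = aggregate(local, sizes)
print({k: v.shape for k, v in global_adapter.items()})
```

Because only the low-rank matrices are exchanged, each round communicates a small fraction of the full model's parameters, which is what makes this style of aggregation practical for cross-institution LLM fine-tuning.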
