Distributed Functional Optimization and Learning on Banach Spaces: Generic Frameworks

Zhan Yu, Zhongjie Shi, Deming Yuan, Daniel W. C. Ho

Published: 2025/9/22

Abstract

In this paper, we establish a distributed functional optimization (DFO) theory over time-varying multi-agent networks. The vast majority of existing distributed optimization theories are developed for Euclidean decision variables. However, in many machine learning and statistical learning scenarios, such as reproducing kernel spaces or probability measure spaces, where the fundamental variables are functions or probability measures, existing distributed optimization theories exhibit obvious theoretical and technical deficiencies. This paper addresses these issues by developing a novel general DFO theory on Banach spaces, so that functional learning problems arising in the aforementioned scenarios can be incorporated into our framework and resolved. We study both convex and nonconvex DFO problems and rigorously establish a comprehensive convergence theory for the distributed functional mirror descent and distributed functional gradient descent algorithms that solve them, deriving satisfactory convergence rates in full. The work provides generic analytical frameworks for distributed optimization, and the established theory is shown to have crucial application value in kernel-based distributed learning theory.
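To make the distributed functional gradient descent concrete, the following is a minimal sketch of one way such an algorithm can look in a reproducing kernel Hilbert space, under simplifying assumptions that are not taken from the paper: four agents on a time-varying ring network, a Gaussian kernel, a shared dictionary of centers pooled from all agents' inputs (so every agent's function lives in one finite-dimensional subspace), and a toy regression target. The step size, mixing weights, and kernel width are illustrative choices, not the paper's.

```python
import numpy as np

# Hypothetical instance: 4 agents learn sin(2*pi*x) in a Gaussian RKHS over a
# time-varying ring network. All constants below are illustrative assumptions.
rng = np.random.default_rng(0)
n_agents, n_local, sigma, eta, T = 4, 25, 0.3, 0.5, 300

def kernel(a, b):
    # Gaussian kernel matrix K[i, j] = exp(-(a_i - b_j)^2 / (2 sigma^2))
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma**2))

# Local data; the pooled inputs Z serve as a shared dictionary (an assumed
# simplification), so agent j's function is f_j = sum_m C[j, m] K(Z_m, .).
X = [rng.uniform(0, 1, n_local) for _ in range(n_agents)]
Y = [np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n_local) for x in X]
Z = np.concatenate(X)
C = np.zeros((n_agents, Z.size))

# Two doubly stochastic mixing matrices, alternated to mimic a time-varying
# network: average with the right neighbor on even steps, left on odd steps.
P = np.roll(np.eye(n_agents), 1, axis=1)          # cyclic shift on the ring
W_even = 0.5 * np.eye(n_agents) + 0.5 * P
W_odd = 0.5 * np.eye(n_agents) + 0.5 * P.T

K_loc = [kernel(X[j], Z) for j in range(n_agents)]  # kernel rows at local inputs

for t in range(T):
    C = (W_even if t % 2 == 0 else W_odd) @ C       # consensus (communication) step
    for j in range(n_agents):
        resid = K_loc[j] @ C[j] - Y[j]              # f_j(x_i) - y_i on local data
        # Functional gradient of the local least-squares risk
        # (1/(2n)) sum_i (f(x_i) - y_i)^2 is (1/n) sum_i resid_i * K(x_i, .),
        # i.e. coefficient mass on agent j's own block of the dictionary Z.
        C[j, j * n_local:(j + 1) * n_local] -= eta * resid / n_local

x_test = np.linspace(0, 1, 200)
f_bar = kernel(x_test, Z) @ C.mean(axis=0)          # network-average estimate
print("test RMSE:", np.sqrt(np.mean((f_bar - np.sin(2 * np.pi * x_test)) ** 2)))
```

Each iteration alternates a communication step (mixing coefficient vectors through a doubly stochastic matrix) with a local functional gradient step; the distributed functional mirror descent variant studied in the paper replaces the plain gradient step with a mirror map update suited to the geometry of the underlying Banach space.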