On Rate-Optimal Partitioning Classification from Observable and from Privatised Data

Balázs Csanád Csáji, László Györfi, Ambrus Tamás, Harro Walk

Published: 2023/12/22

Abstract

In this paper we revisit the classical method of partitioning classification and study its convergence rate under relaxed conditions, both for observable (non-privatised) and for privatised data. We consider the problem of classification in a $d$-dimensional Euclidean space. Previous results on the partitioning classifier worked with the strong density assumption, which is restrictive, as we demonstrate through simple examples. Here, we study the problem under much milder assumptions. We presuppose that the distribution of the inputs is a mixture of an absolutely continuous and a discrete distribution, where the absolutely continuous component is concentrated on a $d_a$-dimensional subspace. In addition to the standard Lipschitz and margin conditions, a novel characteristic of the absolutely continuous component is introduced, by which the exact convergence rate of the classification error probability is computed, both for the binary and for the multi-label cases. Interestingly, this rate of convergence depends only on the intrinsic dimension of the inputs, $d_a$. The privacy constraints mean that the independent, identically distributed data cannot be observed directly, and the classifiers are functions of the randomised outcomes of a suitable local differential privacy mechanism. Concretely, we add Laplace-distributed noise to the cell-membership indicators of the feature vector, i.e., to its discretisation over all possible cell locations, and to its label. Again, tight upper bounds on the rate of convergence of the classification error probability are derived, without the strong density assumption, and this rate depends on $2d_a$.
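
To make the two ingredients of the abstract concrete, below is a minimal, self-contained Python sketch of (i) a cubic partitioning classifier with majority voting per cell and (ii) a schematic Laplace-based local differential privacy mechanism that perturbs the one-hot cell-membership indicator and the $\pm 1$-coded label. All names (`fit_partition_classifier`, `privatise`, etc.), the fixed cell size, and the noise calibration are illustrative assumptions, not the paper's exact construction or constants.

```python
import numpy as np

def cell_of(x, h):
    """Index of the cubic cell of side length h containing point x."""
    return tuple(np.floor(np.asarray(x) / h).astype(int))

def fit_partition_classifier(X, y, h):
    """Non-private partitioning classifier: majority vote within each cell."""
    votes = {}
    for x, label in zip(X, y):
        counts = votes.setdefault(cell_of(x, h), {})
        counts[label] = counts.get(label, 0) + 1
    return {c: max(counts, key=counts.get) for c, counts in votes.items()}

def predict(model, X, h, default=0):
    """Look up the majority label of each query point's cell."""
    return np.array([model.get(cell_of(x, h), default) for x in X])

def privatise(x, label, cells, h, eps, rng):
    """Schematic eps-LDP view of one sample: Laplace noise on the one-hot
    cell indicator and on the +/-1 coded binary label. The scale 4/eps is
    only indicative (L1 sensitivity 2 for each part, privacy budget split
    evenly between features and label); the paper's calibration may differ."""
    onehot = np.array([float(cell_of(x, h) == c) for c in cells])
    z = onehot + rng.laplace(scale=4.0 / eps, size=len(cells))
    t = (2.0 * label - 1.0) + rng.laplace(scale=4.0 / eps)
    return z, t

def fit_private_classifier(X, y, cells, h, eps, rng):
    """Aggregate the noisy per-cell statistics sum_i z_i * t_i; the sign of
    the aggregate decides the label of each cell (binary case)."""
    S = np.zeros(len(cells))
    for x, label in zip(X, y):
        z, t = privatise(x, label, cells, h, eps, rng)
        S += z * t
    return {c: int(S[j] > 0) for j, c in enumerate(cells)}

# Toy usage on the unit square with a linear decision boundary.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
h = 0.2
model = fit_partition_classifier(X, y, h)
cells = sorted({cell_of(x, h) for x in X})  # a fixed grid in practice
private_model = fit_private_classifier(X, y, cells, h, eps=2.0, rng=rng)
print("non-private accuracy:", (predict(model, X, h) == y).mean())
print("private accuracy:   ", (predict(private_model, X, h) == y).mean())
```

Here `h` plays the role of the partition's cell size, the tuning parameter whose choice as a function of the sample size governs the convergence rates analysed in the paper; the sketch simply fixes it for readability.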
