Privacy-Aware Sequential Learning
Yuxin Liu, M. Amin Rahimian
Published: February 26, 2025
Abstract
In settings like vaccination registries, individuals act after observing others, and the resulting public records can expose private information. We study privacy-preserving sequential learning, where agents add endogenous noise to their reported actions to conceal private signals. Efficient social learning relies on information flow, seemingly in conflict with privacy. Surprisingly, with continuous signals and a fixed privacy budget $\varepsilon$, the optimal randomization strategy balances privacy and accuracy, accelerating learning to $\Theta_{\varepsilon}(\log n)$, faster than the nonprivate $\Theta(\sqrt{\log n})$ rate. In the nonprivate baseline, the expected time to the first correct action and the expected number of incorrect actions both diverge; under privacy with sufficiently small $\varepsilon$, both are finite. Privacy helps because, under the false state, agents more often receive signals contradicting the majority; randomization then asymmetrically amplifies the log-likelihood ratio, enhancing aggregation. In heterogeneous populations, an order-optimal $\Theta(\sqrt{n})$ rate is achievable when a subset of agents has low privacy budgets. With binary signals, however, privacy reduces informativeness and impairs learning relative to the nonprivate baseline, though the dependence on $\varepsilon$ is nonmonotone. Our results show how privacy reshapes information dynamics and inform the design of platforms and policies.
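To make the setup concrete, the following is a minimal simulation sketch, not the paper's construction: it assumes Gaussian private signals $N(\pm 1, 1)$, exact Bayesian agents, and a plain randomized-response channel that reports the true action with probability $e^{\varepsilon}/(1+e^{\varepsilon})$. The paper's optimal randomization strategy may differ from this channel, and the function and variable names (`simulate`, `p_truth`, `L`) are illustrative only.

```python
# Sketch of privacy-aware sequential learning (assumptions as stated above).
# State theta in {0,1}; agent n sees signal s ~ N(+1,1) if theta=1, N(-1,1)
# otherwise, so the signal log-likelihood ratio is 2s. Reports pass through
# a randomized-response channel with privacy budget eps.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate(n_agents, eps, theta=1):
    """Run one history; return reports and the public log-likelihood-ratio path."""
    p_truth = np.exp(eps) / (1 + np.exp(eps))  # prob. of reporting truthfully
    L = 0.0                                    # public log-likelihood ratio
    mu = 1.0 if theta == 1 else -1.0
    reports, path = [], []
    for _ in range(n_agents):
        s = rng.normal(mu, 1.0)                     # private signal
        a = int(2 * s > -L)                         # Bayesian action: act 1 iff L + 2s > 0
        r = a if rng.random() < p_truth else 1 - a  # randomized-response report
        # Observers know the decision threshold s > -L/2, so they can compute
        # the likelihood of the noisy report under each state and update L.
        q1 = norm.cdf(1.0 + L / 2)                  # P(a=1 | theta=1)
        q0 = norm.cdf(-1.0 + L / 2)                 # P(a=1 | theta=0)
        pr1 = p_truth * q1 + (1 - p_truth) * (1 - q1)  # P(r=1 | theta=1)
        pr0 = p_truth * q0 + (1 - p_truth) * (1 - q0)  # P(r=1 | theta=0)
        L += np.log(pr1 / pr0) if r == 1 else np.log((1 - pr1) / (1 - pr0))
        reports.append(r)
        path.append(L)
    return reports, path

_, path = simulate(n_agents=2000, eps=1.0)
print("public log-LR after 2000 agents:", round(path[-1], 2))
```

Because $e^{\varepsilon}/(1+e^{\varepsilon}) < 1$, each per-report likelihood ratio is bounded, so the public log-likelihood ratio cannot jump unboundedly on any single report; tracking how fast `path` grows under the true versus false state is one way to probe the learning-rate comparisons stated in the abstract.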