On classical advice, sampling advice and complexity assumptions for learning separations

Jordi Pérez-Guijarro

Published: 2024/8/25

Abstract

In this paper, we study the relationship between advice in the form of a training set and classical advice. We do so by analyzing the class $\mathsf{BPP/samp}$ and certain variants of it. Our main result shows that $\mathsf{BPP/samp}$ is a proper subset of $\mathsf{P/poly}$, which implies that advice in the form of a training set is strictly weaker than classical advice. This separation remains valid when quantum advice and a quantum generalization of the training set are considered. Finally, leveraging the insights from our proofs, we identify sufficient and necessary complexity-theoretic assumptions for the existence of concept classes that exhibit a quantum learning speed-up. We consider both the worst-case setting, where correct outputs are required on all inputs, and the average-case setting.
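As an informal illustration of the main claim, and only as a sketch under the assumptions that $\mathsf{BPP/samp}$ denotes probabilistic polynomial-time computation whose advice is a polynomial-size training set of labeled examples (as the abstract's phrase "advice in the form of a training set" suggests) and that such a machine may simply ignore its samples, the separation can be summarized as
\[
  \mathsf{BPP} \;\subseteq\; \mathsf{BPP/samp} \;\subsetneq\; \mathsf{P/poly},
\]
where the proper inclusion on the right is the main result: every language decidable with training-set advice is decidable with classical advice, but not conversely. The precise definitions of these classes are given in the body of the paper.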