Nearly Optimal Algorithms for Level Set Estimation

Published in Artificial Intelligence and Statistics (AISTATS), 2022

Recommended citation: Mason, B., Camilleri, R., Mukherjee, S., Jamieson, K., Nowak, R., & Jain, L. (2021). Nearly Optimal Algorithms for Level Set Estimation. arXiv preprint arXiv:2111.01768. To appear at AISTATS, 2022. https://arxiv.org/pdf/2111.01768

The level set estimation problem seeks to find all points in a domain $\cal X$ where the value of an unknown function $f:{\cal X}\rightarrow \mathbb{R}$ exceeds a threshold $\alpha$. The estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in $\cal X$. The threshold value $\alpha$ can either be *explicit* and provided a priori, or *implicit* and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown. In this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the Reproducing Kernel Hilbert Space (RKHS) setting. We assume that $f$ can be approximated by a function in the RKHS up to an unknown misspecification, and we provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. Moreover, in the linear (kernel) setting, we show that our bounds are nearly optimal: our upper bounds match existing lower bounds for threshold linear bandits. To our knowledge, this work provides the first instance-dependent, non-asymptotic upper bounds on the sample complexity of level set estimation that match information-theoretic lower bounds.
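To make the explicit-threshold setting concrete, the sketch below illustrates a round-based elimination scheme in the finite-arm linear case: sample, fit a least-squares estimate of $f$, and classify each point as above or below $\alpha$ once its confidence interval clears the threshold. This is only an illustration of the general recipe, not the paper's algorithm: the uniform sampling (the paper instead selects samples via an optimal experimental design and uses a robust estimator to handle misspecification), the heuristic confidence width, the round budget, and the `evaluate` oracle are all assumptions made here for brevity.

```python
import numpy as np

def estimate_level_set(X, evaluate, alpha, delta=0.05, n_rounds=8, reg=1e-6):
    """Classify which arms satisfy f(x) > alpha under a linear model f(x) = <theta, x>.

    X        : (n, d) array of arm feature vectors.
    evaluate : evaluate(x) -> noisy scalar observation of f(x) (hypothetical oracle).
    alpha    : explicit threshold defining the level set {x : f(x) > alpha}.
    """
    n, d = X.shape
    undecided = np.arange(n)   # arms whose side of the threshold is still unknown
    above = []                 # arms confidently classified as above alpha
    for t in range(1, n_rounds + 1):
        # Sample uniformly from the undecided arms with a doubling budget.
        # (The paper instead allocates samples via an optimal experimental design.)
        idx = np.random.choice(undecided, size=2 ** t * d)
        A = X[idx]
        y = np.array([evaluate(x) for x in A])
        # Regularized least-squares estimate of theta and its design matrix.
        V = A.T @ A + reg * np.eye(d)
        theta = np.linalg.solve(V, A.T @ y)
        Vinv = np.linalg.inv(V)
        still_undecided = []
        for i in undecided:
            mu = X[i] @ theta  # estimated f(x_i)
            # Heuristic confidence width, assuming 1-sub-Gaussian noise.
            w = np.sqrt(2 * (X[i] @ Vinv @ X[i]) * np.log(2 * n * t ** 2 / delta))
            if mu - w > alpha:
                above.append(i)            # confidently above the threshold
            elif mu + w >= alpha:
                still_undecided.append(i)  # ambiguous: keep sampling
            # else: confidently below alpha, so the arm is discarded
        undecided = np.array(still_undecided, dtype=int)
        if undecided.size == 0:
            break                          # every arm has been classified
    return sorted(above)
```

For example, with `X = rng.normal(size=(50, 3))` and a noisy oracle `lambda x: x @ theta_star + rng.normal(scale=0.1)`, the routine returns the indices it has confidently placed in the super-level set $\{x : f(x) > \alpha\}$. In the implicit case the threshold itself, $(1-\epsilon)f(x_\ast)$, would additionally have to be estimated on the fly, which is the harder setting the paper also addresses.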

Download paper here: https://arxiv.org/pdf/2111.01768