An Optimal Algorithm for Stochastic and Adversarial Bandits

Julian Ulf Zimmert, Yevgeny Seldin

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › Peer reviewed

Abstract

We derive an algorithm that achieves the optimal (up to constants) pseudo-regret in both adversarial and stochastic multi-armed bandits without prior knowledge of the regime and time horizon. The algorithm is based on online mirror descent with Tsallis entropy regularizer. We provide a complete characterization of such algorithms and show that Tsallis entropy with power α=1/2 achieves the goal. In addition, the proposed algorithm enjoys improved regret guarantees in two intermediate regimes: the moderately contaminated stochastic regime defined by Seldin and Slivkins (2014) and the stochastically constrained adversary studied by Wei and Luo (2018). The algorithm also achieves adversarial and stochastic optimality in the utility-based dueling bandit setting. We provide an empirical evaluation of the algorithm, demonstrating that it outperforms UCB1 and EXP3 in stochastic environments. In certain adversarial regimes the algorithm significantly outperforms UCB1 and Thompson Sampling, which exhibit almost linear regret.
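The mirror-descent scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' reference code: at each round the play distribution is obtained by solving the mirror-descent step for the 1/2-Tsallis regularizer, w_i = 4/(η_t(L̂_i − x))², where the normalization constant x is found by Newton's method, and cumulative losses are estimated by standard importance weighting. The `loss_fn(t, arm)` interface, the function names, and the Newton iteration count are assumptions made for the sketch.

```python
import numpy as np

def omd_weights(L, eta, iters=50):
    """Mirror-descent step for the 1/2-Tsallis entropy regularizer:
    w_i = 4 / (eta * (L_i - x))^2, with the normalization constant x
    found by Newton's method so that the weights sum to one."""
    L = np.asarray(L, dtype=float)
    x = L.min() - 2.0 / eta                  # start where sum(w) >= 1
    for _ in range(iters):
        w = 4.0 / (eta * (L - x)) ** 2
        f = w.sum() - 1.0                    # normalization residual
        fprime = eta * (w ** 1.5).sum()      # d/dx sum(w_i) = eta * sum(w_i^{3/2})
        x -= f / fprime                      # Newton step (monotone for this convex f)
    return w / w.sum()                       # guard against residual numerical error

def tsallis_inf(loss_fn, K, T, seed=0):
    """Sketch of anytime online mirror descent with Tsallis entropy
    (power alpha = 1/2) over K arms and T rounds.
    `loss_fn(t, arm)` should return a loss in [0, 1]; this interface
    is illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    L_hat = np.zeros(K)                      # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)               # anytime learning rate, no horizon needed
        w = omd_weights(L_hat, eta)          # play distribution
        arm = rng.choice(K, p=w)
        loss = loss_fn(t, arm)
        L_hat[arm] += loss / w[arm]          # unbiased importance-weighted estimate
        total += loss
    return total
```

Note the anytime learning rate η_t = 1/√t: the scheme needs neither the time horizon nor knowledge of whether the environment is stochastic or adversarial, which is the point of the paper.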
Original language: English
Title: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)
Editors: Kamalika Chaudhuri, Masashi Sugiyama
Publisher: PMLR
Publication date: 2019
Pages: 467-475
Status: Published - 2019
Event: 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) - Naha, Okinawa, Japan
Duration: 16 Apr 2019 - 18 Apr 2019

Conference

Conference: 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)
Country/Territory: Japan
City: Naha, Okinawa
Period: 16/04/2019 - 18/04/2019
Series: Proceedings of Machine Learning Research
Volume: 89
ISSN: 1938-7228
