TY - GEN
T1 - Delayed Bandits
T2 - 40th International Conference on Machine Learning
AU - Esposito, Emmanuel
AU - Masoudian, Saeed
AU - Qiu, Hao
AU - van der Hoeven, Dirk
AU - Cesa-Bianchi, Nicolò
AU - Seldin, Yevgeny
PY - 2023
Y1 - 2023
AB - We study a K-armed bandit with delayed feedback and intermediate observations. We consider a model where intermediate observations take the form of a finite state, which is observed immediately after taking an action, whereas the loss is observed after an adversarially chosen delay. We show that the regime of the mapping of states to losses determines the complexity of the problem, irrespective of whether the mapping of actions to states is stochastic or adversarial. If the mapping of states to losses is adversarial, then the regret rate is of order √((K+d)T) (within log factors), where T is the time horizon and d is a fixed delay. This matches the regret rate of a K-armed bandit with delayed feedback and without intermediate observations, implying that intermediate observations are not helpful. However, if the mapping of states to losses is stochastic, we show that the regret grows at a rate of √((K+min{|S|,d})T) (within log factors), implying that if the number |S| of states is smaller than the delay, then intermediate observations help. We also provide refined high-probability regret upper bounds for non-uniform delays, together with experimental validation of our algorithms.
M3 - Article in proceedings
T3 - Proceedings of Machine Learning Research
SP - 9374
EP - 9395
BT - Proceedings of the 40th International Conference on Machine Learning
PB - PMLR
Y2 - 23 July 2023 through 29 July 2023
ER -