"Multi-Player Bandits Revisited"
- PDF : BK__ALT_2018.pdf
- HAL notice : BK__ALT_2018
- BibTeX : BK__ALT_2018.bib
- Source code and documentation: http://banditslilian.gforge.inria.fr/MultiPlayers.html
Multi-player Multi-Armed Bandits (MAB) have been extensively studied in the literature, motivated by applications to Cognitive Radio systems. Driven by such applications as well, we motivate the introduction of several levels of feedback for multi-player MAB algorithms. Most existing work assumes that sensing information is available to the algorithm. Under this assumption, we improve the state-of-the-art lower bound on the regret of any decentralized algorithm and introduce two algorithms, RandTopM and MCTopM, that are shown to empirically outperform existing algorithms. Moreover, we provide strong theoretical guarantees for these algorithms, including a notion of asymptotic optimality in terms of the number of selections of bad arms. We then introduce a promising heuristic, called Selfish, that can operate without sensing information, which is crucial for emerging applications to Internet of Things networks. We investigate the empirical performance of this algorithm and provide first theoretical elements toward understanding its behavior.
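To make the setting concrete, here is a minimal simulation sketch of the multi-player model with sensing: several players independently pick arms each round, each player observes the sensing feedback of its chosen arm, and a collision (two or more players on the same arm) yields no reward. The players below run plain UCB1 as a stand-in policy; this is only an illustration of the model, not the paper's RandTopM, MCTopM, or Selfish algorithms, and the arm means and parameters are made up for the example.

```python
import math
import random

def simulate(n_players=2, means=(0.9, 0.7, 0.4), horizon=2000, seed=0):
    """Multi-player Bernoulli bandit with sensing and collisions.

    Each player runs an independent UCB1 index on its own sensing
    feedback; a collision (several players on one arm) gives zero
    reward to the colliding players. Returns the average collective
    reward per round. (Illustrative stand-in policy, not the paper's
    RandTopM/MCTopM.)
    """
    rng = random.Random(seed)
    K = len(means)
    counts = [[0] * K for _ in range(n_players)]   # pulls per (player, arm)
    sums = [[0.0] * K for _ in range(n_players)]   # sensed rewards per (player, arm)
    total_reward = 0.0
    for t in range(1, horizon + 1):
        choices = []
        for p in range(n_players):
            untried = [k for k in range(K) if counts[p][k] == 0]
            if untried:
                k = untried[0]  # initialization: pull each arm once
            else:
                k = max(range(K),
                        key=lambda a: sums[p][a] / counts[p][a]
                        + math.sqrt(2 * math.log(t) / counts[p][a]))
            choices.append(k)
        for p, k in enumerate(choices):
            sensed = 1.0 if rng.random() < means[k] else 0.0
            # With sensing, the player observes the arm's value even on
            # collision, and uses it to update its statistics...
            counts[p][k] += 1
            sums[p][k] += sensed
            # ...but it only collects the reward if it was alone on the arm.
            if choices.count(k) == 1:
                total_reward += sensed
    return total_reward / horizon

print(simulate())
```

Under this model, naive independent UCB1 players tend to collide on the best arm, which is exactly the inefficiency that decentralized algorithms such as RandTopM and MCTopM are designed to avoid by orchestrating players onto the M best arms.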
- Multi-Armed Bandits
- Decentralized Algorithms
- Reinforcement Learning
- Cognitive Radio
- Opportunistic Spectrum Access