Confounded Budgeted Causal Bandits

Fateme Jamshidi, Jalal Etesami, Negar Kiyavash

Publication: Contribution to journal › Conference article › Peer-reviewed

2 citations (Scopus)

Abstract

We study the problem of learning “good” interventions in a stochastic environment modeled by its underlying causal graph. Good interventions refer to interventions that maximize rewards. Specifically, we consider the setting of a pre-specified budget constraint, where interventions can have non-uniform costs. We show that this problem can be formulated as maximizing the expected reward for a stochastic multi-armed bandit with side information. We propose an algorithm to minimize the cumulative regret in general causal graphs. This algorithm trades off observations and interventions based on their costs to achieve the optimal reward, and it generalizes state-of-the-art methods by allowing non-uniform costs and hidden confounders in the causal graph. Furthermore, we develop an algorithm to minimize the simple regret in the budgeted setting with non-uniform costs, again for general causal graphs. We provide theoretical guarantees, including both upper and lower bounds, as well as empirical evaluations of our algorithms. Our empirical results show that our algorithms outperform the state of the art.
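For intuition about the budgeted setting described above, the following is a minimal sketch of a generic cost-aware, UCB-style bandit loop with non-uniform arm costs. It is not the algorithm proposed in the paper and ignores the causal side information; the arm means, costs, and the reward-per-cost index are illustrative assumptions only.

import numpy as np

# Illustrative budgeted bandit with non-uniform arm costs.
# NOT the paper's algorithm: a generic cost-aware UCB-style sketch for intuition.
rng = np.random.default_rng(0)

n_arms = 4
true_means = np.array([0.2, 0.5, 0.7, 0.9])  # unknown expected rewards (assumed)
costs = np.array([1.0, 1.5, 2.0, 4.0])       # non-uniform intervention costs (assumed)
budget = 200.0

counts = np.zeros(n_arms)
reward_sums = np.zeros(n_arms)
spent, t = 0.0, 0

while spent + costs.min() <= budget:
    t += 1
    if t <= n_arms:
        arm = t - 1                          # pull each arm once to initialize
    else:
        means = reward_sums / counts
        bonus = np.sqrt(2 * np.log(t) / counts)
        # Cost-aware index: optimistic reward estimate divided by cost.
        arm = int(np.argmax((means + bonus) / costs))
    if spent + costs[arm] > budget:          # cannot afford the chosen arm
        break
    reward = rng.binomial(1, true_means[arm])  # Bernoulli reward draw
    counts[arm] += 1
    reward_sums[arm] += reward
    spent += costs[arm]

print(f"pulls per arm: {counts}, total spent: {spent:.1f}")

The sketch spends the budget on pulls whose optimistic value per unit cost is highest; the paper's algorithms additionally exploit the causal graph, trade off cheap observations against costly interventions, and handle hidden confounders.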

Original language: English
Pages (from-to): 423-461
Number of pages: 39
Journal: Proceedings of Machine Learning Research
Volume: 236
Publication status: Published - 2024
Event: 3rd Conference on Causal Learning and Reasoning, CLeaR 2024 - Los Angeles, United States
Duration: 1 Apr 2024 - 3 Apr 2024
