DIFFERENTIABLE DAG SAMPLING

Bertrand Charpentier, Simon Kibler, Stephan Günnemann

Publication: Conference contribution › Paper › Peer-reviewed

9 citations (Scopus)

Abstract

We propose a new differentiable probabilistic model over DAGs (DP-DAG). DP-DAG allows fast and differentiable DAG sampling suited to continuous optimization. To this end, DP-DAG samples a DAG by successively (1) sampling a linear ordering of the nodes and (2) sampling edges consistent with the sampled linear ordering. We further propose VI-DP-DAG, a new method for DAG learning from observational data which combines DP-DAG with variational inference. Hence, VI-DP-DAG approximates the posterior probability over DAG edges given the observed data. VI-DP-DAG is guaranteed to output a valid DAG at any time during training and does not require any complex augmented Lagrangian optimization scheme, in contrast to existing differentiable DAG learning approaches. In our extensive experiments, we compare VI-DP-DAG to other differentiable DAG learning baselines on synthetic and real datasets. VI-DP-DAG significantly improves DAG structure and causal mechanism learning while training faster than competitors.
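The two-step sampling scheme described in the abstract can be illustrated with a minimal sketch. Note this is a non-differentiable, illustrative version only: it draws a uniformly random node ordering and independent Bernoulli edges, whereas the paper learns both distributions and uses differentiable relaxations for gradient-based training. The function name `sample_dag` and the `edge_probs` parameterization are hypothetical, not from the paper.

```python
import numpy as np

def sample_dag(edge_probs, rng):
    """Sample a DAG adjacency matrix by (1) sampling a linear
    ordering of the nodes and (2) keeping only sampled edges
    consistent with that ordering (illustrative sketch)."""
    d = edge_probs.shape[0]
    order = rng.permutation(d)                # step 1: random linear ordering
    rank = np.empty(d, dtype=int)
    rank[order] = np.arange(d)                # rank[i] = position of node i
    edges = rng.random((d, d)) < edge_probs   # step 2: Bernoulli edge samples
    # an edge i -> j is kept only if i precedes j in the ordering,
    # so the result is acyclic by construction
    mask = rank[:, None] < rank[None, :]
    return (edges & mask).astype(int)

rng = np.random.default_rng(0)
A = sample_dag(np.full((4, 4), 0.5), rng)
```

Because every retained edge points "forward" in the sampled ordering, any sample is a valid DAG, which is why this construction needs no acyclicity penalty or augmented Lagrangian scheme.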

Original language: English
Publication status: Published - 2022
Event: 10th International Conference on Learning Representations, ICLR 2022 - Virtual, Online
Duration: 25 Apr 2022 – 29 Apr 2022

Conference

Conference: 10th International Conference on Learning Representations, ICLR 2022
Location: Virtual, Online
Period: 25/04/22 – 29/04/22
