Abstract
Simulated DAG models may exhibit properties that, perhaps inadvertently, render their structure identifiable and unexpectedly affect structure learning algorithms. Here, we show that marginal variance tends to increase along the causal order for generically sampled additive noise models. We introduce varsortability as a measure of the agreement between the order of increasing marginal variance and the causal order. For commonly sampled graphs and model parameters, we show that the remarkable performance of some continuous structure learning algorithms can be explained by high varsortability and matched by a simple baseline method. Yet, this performance may not transfer to real-world data where varsortability may be moderate or dependent on the choice of measurement scales. On standardized data, the same algorithms fail to identify the ground-truth DAG or its Markov equivalence class. While standardization removes the pattern in marginal variance, we show that data generating processes that incur high varsortability also leave a distinct covariance pattern that may be exploited even after standardization. Our findings challenge the significance of generic benchmarks with independently drawn parameters. The code is available at https://github.com/Scriddie/Varsortability.
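To make the measure concrete, below is a minimal NumPy sketch of varsortability following the definition in the abstract: the fraction of directed paths in the DAG whose end node has a strictly higher marginal variance than their start node, with ties counted as 1/2. The function name, the absolute tolerance handling, and the two-node example are illustrative choices and not the authors' reference implementation, which is available at the repository linked above.

```python
import numpy as np

def varsortability(X, A, tol=1e-9):
    """Fraction of directed paths i -> ... -> j (lengths 1 to d-1) with
    Var(X_i) < Var(X_j); near-ties within `tol` count as 1/2.

    X: (n, d) data matrix; A: (d, d) adjacency matrix where
    A[i, j] != 0 denotes an edge i -> j.
    """
    E = (A != 0).astype(float)
    Ek = E.copy()  # Ek[i, j]: number of directed paths i -> j of length k
    var = np.var(X, axis=0)
    # increasing[i, j] = 1 if var[i] < var[j], 0.5 on (near-)ties, else 0
    increasing = (var[:, None] < var[None, :] - tol).astype(float)
    increasing += 0.5 * (np.abs(var[:, None] - var[None, :]) <= tol)

    n_paths, n_increasing = 0.0, 0.0
    for _ in range(E.shape[0] - 1):
        n_paths += Ek.sum()
        n_increasing += (Ek * increasing).sum()
        Ek = Ek @ E  # extend every path by one edge
    return n_increasing / n_paths if n_paths else float("nan")

# Example: chain X1 -> X2 with unit edge weight and unit-variance noise,
# so Var(X2) = 2 > Var(X1) = 1 and varsortability should be 1.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
x1 = rng.normal(size=10_000)
x2 = x1 + rng.normal(size=10_000)
print(varsortability(np.column_stack([x1, x2]), A))  # -> 1.0
```

A varsortability of 1 means marginal variances sort the variables into a valid causal order, which is exactly the pattern a variance-sorting baseline can exploit; standardizing the data changes the marginal variances and hence this value.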
| Original language | English |
|---|---|
| Title | Advances in Neural Information Processing Systems 34 (NeurIPS) |
| Publisher | NeurIPS Proceedings |
| Publication date | 2021 |
| Pages | 1-13 |
| Status | Published - 2021 |
| Event | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) - Virtual. Duration: 6 Dec 2021 → 14 Dec 2021 |
Conference

| Conference | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) |
|---|---|
| City | Virtual |
| Period | 06/12/2021 → 14/12/2021 |