TY - GEN
T1 - Compositional Abstraction Error and a Category of Causal Models
AU - Rischel, Eigil F.
AU - Weichwald, Sebastian
N1 - Funding Information:
We thank the anonymous reviewers for their constructive comments that helped improve the interdisciplinary presentation. SW was supported by the Carlsberg Foundation.
Publisher Copyright:
© 2021 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021. All Rights Reserved.
PY - 2021
Y1 - 2021
N2 - Interventional causal models describe several joint distributions over some variables used to describe a system, one for each intervention setting. They provide a formal recipe for how to move between the different joint distributions and make predictions about the variables upon intervening on the system. Yet, it is difficult to formalise how we may change the underlying variables used to describe the system, say moving from fine-grained to coarse-grained variables. Here, we argue that compositionality is a desideratum for such model transformations and the associated errors: When abstracting a reference model M iteratively, first obtaining M' and then further simplifying that to obtain M'', we expect the composite transformation from M to M'' to exist and its error to be bounded by the errors incurred by each individual transformation step. Category theory, the study of mathematical objects via compositional transformations between them, offers a natural language to develop our framework for model transformations and abstractions. We introduce a category of finite interventional causal models and, leveraging the theory of enriched categories, prove the desired compositionality properties for our framework.
UR - http://www.scopus.com/inward/record.url?scp=85124320987&partnerID=8YFLogxK
M3 - Article in proceedings
AN - SCOPUS:85124320987
T3 - Proceedings of Machine Learning Research
SP - 1013
EP - 1023
BT - Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
PB - PMLR
T2 - 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021
Y2 - 27 July 2021 through 30 July 2021
ER -