CausalML Lab
Current Members
Postdoc Researchers
- Lai Wei
PhD Students
Projects
Our group’s research focuses on developing fundamental algorithms for causal discovery and inference from data. Some of the threads we pursue are as follows:
Information Theoretic Causal Inference and Discovery from Observational Data
We aim to push the limits of existing algorithms for causal discovery from observational data. We establish connections with information theory and use these connections to develop efficient algorithms for causal structure discovery.
Related publications:
- Z. Jiang, L. Wei, M. Kocaoglu, “Approximate Causal Effect Identification under Weak Confounding,” to appear in Proc. of ICML’23, Honolulu, HI, USA, July 2023.
- S. Compton, D. Katz, B. Qi, K. Greenewald, M. Kocaoglu, “Minimum-Entropy Coupling Approximation Guarantees Beyond the Majorization Barrier,” in Proc. of AISTATS’23, Valencia, Spain, April 2023.
- S. Compton, K. Greenewald, D. Katz, M. Kocaoglu, “Entropic Causal Inference: Graph Identifiability,” in Proc. of ICML’22, July 2022.
- S. Compton, M. Kocaoglu, K. Greenewald, D. Katz, “Entropic Causal Inference: Identifiability and Finite Sample Results,” in Proc. of NeurIPS’20, Online, Dec. 2020.
- M. Kocaoglu, S. Shakkottai, A. G. Dimakis, C. Caramanis, S. Vishwanath, “Applications of Common Entropy for Causal Inference,” in Proc. of NeurIPS’20, Online, Dec. 2020.
- M. Kocaoglu, A. G. Dimakis, S. Vishwanath, B. Hassibi, “Entropic Causality and Greedy Minimum Entropy Coupling,” in Proc. of ISIT’17, 2017.
- M. Kocaoglu, A. G. Dimakis, S. Vishwanath, B. Hassibi, “Entropic Causal Inference,” in Proc. of AAAI’17, San Francisco, USA, Feb. 2017.
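To give a flavor of this line of work, the greedy minimum-entropy coupling heuristic studied in the ISIT’17 paper can be sketched as follows. This is a simplified illustration, not the paper’s exact algorithm or guarantees: given two marginal distributions, it repeatedly assigns as much joint probability mass as possible to the pair of currently largest marginal entries, which tends to keep the entropy of the resulting joint distribution low.

```python
def greedy_coupling(p, q):
    """Greedily couple marginals p and q (lists of probabilities summing to 1).

    Returns a dict mapping (i, j) -> joint probability mass. At each step,
    the largest remaining masses of p and q are matched, and the smaller of
    the two is assigned to that joint entry. The result is a valid coupling
    (its row/column sums recover p and q) with heuristically low entropy.
    """
    p, q = list(p), list(q)
    coupling = {}
    while True:
        i = max(range(len(p)), key=lambda k: p[k])
        j = max(range(len(q)), key=lambda k: q[k])
        mass = min(p[i], q[j])
        if mass <= 1e-12:  # both marginals exhausted (up to float error)
            break
        coupling[(i, j)] = coupling.get((i, j), 0.0) + mass
        p[i] -= mass
        q[j] -= mass
    return coupling
```

For example, coupling the marginals [0.5, 0.3, 0.2] and [0.6, 0.4] produces a joint distribution supported on only four entries, far fewer than the six of the independent (product) coupling.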
Experimental Design for Causal Discovery
In many settings, further experimentation is possible to aid causal structure discovery. We develop experimental designs that are as efficient as possible, where efficiency can mean different things in different contexts: for example, using the smallest possible number of experiments, or minimizing an arbitrary modular cost function.
Related publications:
- C. Squires, S. Magliacane, K. Greenewald, D. Katz, M. Kocaoglu, K. Shanmugam, “Active Structure Learning of Causal DAGs via Directed Clique Trees,” in Proc. of NeurIPS’20, Online, Dec. 2020.
- K. Greenewald, D. Katz, K. Shanmugam, S. Magliacane, M. Kocaoglu, E. B. Adsera, G. Bresler, “Sample Efficient Active Learning of Causal Trees,” in Proc. of NeurIPS’19, Vancouver, Canada, Dec. 2019.
- E. Lindgren, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Experimental Design for Cost-Aware Learning of Causal Graphs,” in Proc. of NeurIPS’18, Montreal, Canada, Dec. 2018.
- E. Lindgren, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Submodularity and Minimum Cost Intervention Design for Learning Causal Graphs,” in DISCML’17 Workshop in NIPS’17, Dec. 2017.
- M. Kocaoglu, K. Shanmugam, E. Bareinboim, “Experimental Design for Learning Causal Graphs with Latent Variables,” in Proc. of NeurIPS’17, 2017.
- M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Cost-Optimal Learning of Causal Graphs,” in Proc. of ICML’17, 2017.
- M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Learning Causal Graphs with Constraints,” in NeurIPS’16 Workshop: What If? Inference and Learning of Hypothetical and Counterfactual Interventions in Complex Systems, Barcelona, Spain, Dec. 2016.
- K. Shanmugam, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Learning Causal Graphs with Small Interventions,” in Proc. of NeurIPS’15, Montreal, Canada, Dec. 2015.
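To illustrate the flavor of these designs, a classical baseline (which the papers above refine under size and cost constraints) separates every pair of variables using roughly log₂(n) interventions built from binary codewords. The sketch below is a simplified illustration, not the method of any single paper: each variable gets a distinct binary label, and experiment k intervenes on every variable whose k-th bit is 1, so any two variables end up on opposite sides of some experiment.

```python
from math import ceil, log2

def separating_experiments(n):
    """Return a list of intervention sets over variables 0..n-1.

    Each variable's index serves as its binary codeword. Experiment k
    intervenes on the variables whose k-th bit is 1. Since any two distinct
    indices differ in at least one bit, every pair of variables is
    "separated" (exactly one of the two intervened on) in some experiment,
    which is the property needed to orient every edge between them.
    """
    num_experiments = max(1, ceil(log2(n)))
    return [
        [v for v in range(n) if (v >> k) & 1]
        for k in range(num_experiments)
    ]
```

For n = 4 variables this yields just 2 experiments, [1, 3] and [2, 3], and every one of the 6 variable pairs is separated by at least one of them.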
Causal Discovery from Interventional Data
Causal discovery from interventional data is the gold standard, requiring the fewest assumptions. However, interventions are expensive, so we need to make the best use of the available interventional data. Especially for large-scale systems, learning the causal structure exhaustively requires too many experiments. We focus on distilling as much information as possible from a given collection of interventional datasets.
Related publications:
- A. Jaber, M. Kocaoglu, K. Shanmugam, E. Bareinboim, “Causal Discovery from Soft Interventions with Unknown Targets: Characterization and Learning,” in Proc. of NeurIPS’20, Online, Dec. 2020.
- M. Kocaoglu*, A. Jaber*, K. Shanmugam*, E. Bareinboim, “Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions,” in Proc. of NeurIPS’19, Vancouver, Canada, Dec. 2019.
Applications of Causality in Machine Learning
We explore where causal inference and discovery can benefit current machine learning methods.
Related publications:
- K. Lee, M. M. Rahman, M. Kocaoglu, “Finding Invariant Predictors Efficiently via Causal Structure,” to appear in Proc. of UAI’23, Pittsburgh, PA, USA, Aug. 2023.
- M. A. Ikram, S. Chakraborty, S. Mitra, S. Saini, S. Bagchi, M. Kocaoglu, “Root Cause Analysis of Failures in Microservices through Causal Discovery,” in Proc. of NeurIPS’22, Dec. 2022.
- K. Ahuja, P. Sattigeri, K. Shanmugam, D. Wei, K. N. Ramamurthy, M. Kocaoglu, “Conditionally Independent Data Generation,” in Proc. of UAI’21, 2021.
- M. Kocaoglu*, C. Snyder*, A. G. Dimakis, S. Vishwanath, “CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training,” in Proc. of ICLR’18, Vancouver, Canada, May 2018.
- R. Sen, K. Shanmugam, M. Kocaoglu, A. G. Dimakis, S. Shakkottai, “Contextual Bandits with Latent Confounders: An NMF Approach,” in Proc. of AISTATS’17, 2017.
Past Members
Visiting Researcher
- Suyeong Park, July - August 2022
Acknowledgement
Our lab is currently supported by funding from the National Science Foundation (NSF) and Adobe Research.