CausalML Lab

Causal reasoning is essential for artificial intelligence and machine learning. In the CausalML Lab, we develop new theoretical results that yield insights into fundamental causal discovery and inference problems, and we design novel algorithms based on these insights. Our research can be broadly categorized into four pillars:

i) Practical and Approximate Causal Reasoning via Information-theoretic Methods,

ii) Causal Machine Learning,

iii) Fundamentals of Causal Inference and Discovery,

iv) High-dimensional Causal Inference with Deep Generative Models.

Current Members

PhD Students

Projects

Our group’s research focuses on developing fundamental algorithms for causal discovery and inference from data, and on exploring the connections between causality and machine learning, information theory, graph theory, deep learning, and online learning. Some threads we focus on are as follows.

Practical and Approximate Causal Reasoning via Information-theoretic Methods

Information-theoretic tools let us go beyond the boundaries of classical causal reasoning algorithms and draw approximate causal inferences under practical assumptions.
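As a concrete flavor of this line of work, here is a minimal sketch of a greedy minimum-entropy coupling heuristic of the kind studied in the entropic causal inference papers below: given two marginal distributions, repeatedly pair their largest remaining probability masses. The function name and setup are illustrative, not the papers' actual implementations.

```python
import heapq

def greedy_min_entropy_coupling(p, q):
    """Greedily couple marginals p and q by repeatedly matching
    their current largest probability masses, yielding a joint
    distribution with (approximately) low entropy."""
    # Max-heaps via negated probabilities: (-prob, outcome index).
    hp = [(-pi, i) for i, pi in enumerate(p)]
    hq = [(-qj, j) for j, qj in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    coupling = {}
    while hp and hq:
        neg_pi, i = heapq.heappop(hp)
        neg_qj, j = heapq.heappop(hq)
        pi, qj = -neg_pi, -neg_qj
        mass = min(pi, qj)  # assign as much joint mass as both allow
        if mass <= 1e-12:
            break
        coupling[(i, j)] = coupling.get((i, j), 0.0) + mass
        # Return any leftover mass to the corresponding heap.
        if pi - mass > 1e-12:
            heapq.heappush(hp, (-(pi - mass), i))
        if qj - mass > 1e-12:
            heapq.heappush(hq, (-(qj - mass), j))
    return coupling
```

By construction, the row and column sums of the returned joint distribution recover the input marginals; the greedy pairing is a heuristic whose approximation guarantees are the subject of the coupling papers listed below.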

  1. Z. Jiang, L. Wei, M. Kocaoglu, “Approximate Causal Effect Identification under Weak Confounding,” in Proc. of ICML’23, Honolulu, HI, USA, July 2023.
  2. S. Compton, D. Katz, B. Qi, K. Greenewald, M. Kocaoglu, “Minimum-Entropy Coupling Approximation Guarantees Beyond the Majorization Barrier,” in Proc. of AISTATS’23, Valencia, Spain, April 2023.
  3. S. Compton, K. Greenewald, D. Katz, M. Kocaoglu, “Entropic Causal Inference: Graph Identifiability”, in Proc. of ICML’22, July 2022.
  4. S. Compton, M. Kocaoglu, K. Greenewald, D. Katz, “Entropic Causal Inference: Identifiability and Finite Sample Results,” in Proc. of NeurIPS’20, Online, Dec. 2020.
  5. M. Kocaoglu, S. Shakkottai, A. G. Dimakis, C. Caramanis, S. Vishwanath, “Applications of Common Entropy for Causal Inference,” in Proc. of NeurIPS’20, Online, Dec. 2020.
  6. M. Kocaoglu, A. G. Dimakis, S. Vishwanath, B. Hassibi, “Entropic Causality and Greedy Minimum Entropy Coupling,” in Proc. of ISIT’17, 2017.
  7. M. Kocaoglu, A. G. Dimakis, S. Vishwanath, B. Hassibi, “Entropic Causal Inference,” in Proc. of AAAI’17, San Francisco, USA, Feb. 2017.

Causal Machine Learning

We explore ways in which causal inference and discovery can help build more robust and practical machine learning solutions.

  1. S. Kulinski, Z. Zhou, R. Bai, M. Kocaoglu, D. I. Inouye, “Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models,” in Proc. of ICLR’24, Vienna, Austria, May 2024.
  2. L. Wei, M. Q. Elahi, M. Ghasemi, M. Kocaoglu, “Approximate Allocation Matching for Structural Causal Bandits with Unobserved Confounders,” in Proc. of NeurIPS’23, New Orleans, LA, USA, Dec. 2023.
  3. K. Lee, M. M. Rahman, M. Kocaoglu, “Finding Invariant Predictors Efficiently via Causal Structure,” in Proc. of UAI’23, Pittsburgh, PA, USA, Aug. 2023.
  4. M. A. Ikram, S. Chakraborty, S. Mitra, S. Saini, S. Bagchi, M. Kocaoglu, “Root Cause Analysis of Failures in Microservices through Causal Discovery,” in Proc. of NeurIPS’22, Dec. 2022.
  5. K. Ahuja, P. Sattigeri, K. Shanmugam, D. Wei, K. N. Ramamurthy, M. Kocaoglu, “Conditionally Independent Data Generation”, in Proc. of UAI’21, 2021.
  6. R. Sen, K. Shanmugam, M. Kocaoglu, A. G. Dimakis, S. Shakkottai, “Contextual Bandits with Latent Confounders: An NMF Approach,” in Proc. of AISTATS’17, 2017.

Fundamentals of Causal Inference and Discovery

We are interested in developing a fundamental understanding of how much causal knowledge can be extracted from data under well-defined assumptions. The cross-cutting nature of causal inference makes this a challenging problem, with different constraints coming from different fields. For example, we may be able to set up randomized controlled trials, but the number of such experiments needs to be small since interventions are costly. In other domains, interventional data may already have been collected for different purposes, such as average treatment effect estimation, and the goal is to repurpose it for causal discovery. In certain contexts, conducting any experiments may be infeasible, and the observational data may be very noisy or contain only a small number of samples. We develop fundamental bounds on how much causal knowledge is contained in such data, together with the associated sound and complete learning algorithms.
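One classical idea in this space, underlying work such as “Learning Causal Graphs with Small Interventions” below, is that a set of interventions forms a separating system: every pair of nodes must land on opposite sides of some intervention so that the edge between them can be oriented. A minimal sketch of the standard binary-codeword construction, which achieves roughly log2(n) interventions for n nodes (the function name is illustrative):

```python
from math import ceil, log2

def separating_interventions(n):
    """Assign each of n nodes a distinct binary codeword of length
    ceil(log2 n).  Intervention k targets exactly the nodes whose
    k-th bit is 1.  Since any two distinct codewords differ in some
    bit, every pair of nodes is separated by some intervention."""
    m = max(1, ceil(log2(n)))  # number of interventions
    return [{v for v in range(n) if (v >> k) & 1} for k in range(m)]
```

For example, with n = 4 this yields two intervention sets, {1, 3} and {2, 3}, and one can check that every pair of nodes is split by at least one of them.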

  1. M. Kocaoglu, “Characterization and Learning of Causal Graphs with Small Conditioning Sets,” in Proc. of NeurIPS’23, New Orleans, LA, 2023.
  2. A. Shah, K. Shanmugam, M. Kocaoglu, “Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge,” in Proc. of NeurIPS’23, New Orleans, LA, USA, Dec. 2023.
  3. S. Gao, R. Addanki, T. Yu, R. A. Rossi, M. Kocaoglu, “Causal Discovery in Semi-Stationary Time Series,” in Proc. of NeurIPS’23, New Orleans, LA, USA, Dec. 2023.
  4. C. Squires, S. Magliacane, K. Greenewald, D. Katz, M. Kocaoglu, K. Shanmugam, “Active Structure Learning of Causal DAGs via Directed Clique Trees,” in Proc. of NeurIPS’20, Online, Dec. 2020.
  5. K. Greenewald, D. Katz, K. Shanmugam, S. Magliacane, M. Kocaoglu, E. B. Adsera, G. Bresler, “Sample Efficient Active Learning of Causal Trees,” in Proc. of NeurIPS’19, Vancouver, Canada, Dec. 2019.
  6. A. Jaber, M. Kocaoglu, K. Shanmugam, E. Bareinboim, “Causal Discovery from Soft Interventions with Unknown Targets: Characterization and Learning,” in Proc. of NeurIPS’20, Online, Dec. 2020.
  7. M. Kocaoglu*, A. Jaber*, K. Shanmugam*, E. Bareinboim, “Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions,” in Proc. of NeurIPS’19, Vancouver, Canada, Dec. 2019.
  8. E. Lindgren, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Experimental Design for Cost-Aware Learning of Causal Graphs” in Proc. of NeurIPS’18, Montreal, Canada, Dec. 2018.
  9. E. Lindgren, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Submodularity and Minimum Cost Intervention Design for Learning Causal Graphs,” in DISCML’17 Workshop in NIPS’17, Dec. 2017.
  10. M. Kocaoglu, K. Shanmugam, E. Bareinboim, “Experimental Design for Learning Causal Graphs with Latent Variables,” in Proc. of NeurIPS’17, 2017.
  11. M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Cost-Optimal Learning of Causal Graphs,” in Proc. of ICML’17, 2017.
  12. M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Learning Causal Graphs with Constraints,” in NeurIPS’16 Workshop: What If? Inference and Learning of Hypothetical and Counterfactual Interventions in Complex Systems, Barcelona, Spain, Dec. 2016.
  13. K. Shanmugam, M. Kocaoglu, A. G. Dimakis, S. Vishwanath, “Learning Causal Graphs with Small Interventions,” in Proc. of NeurIPS’15, Montreal, Canada, Dec. 2015.

High-dimensional Causal Inference with Deep Generative Models

We are interested in leveraging the representational power of deep neural networks to enable sampling from causal queries in the presence of high-dimensional variables such as images.
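As a toy illustration of the underlying idea, consider a known two-variable structural causal model X → Y instead of a learned deep generative model: to sample from an interventional distribution, we cut the intervened variable's own mechanism and ancestrally sample the rest from their conditionals, which is exactly the role a conditional generative model plays for high-dimensional variables. The SCM and its coefficients here are made up for illustration.

```python
import random

def sample_do_x(x_value, n=10000):
    """Toy SCM: X -> Y with Y = 2*X + N(0, 1).  Under do(X = x),
    X's own mechanism is discarded and Y is drawn from its
    conditional mechanism, mimicking how a trained conditional
    generative model would be used to sample a causal query."""
    samples = []
    for _ in range(n):
        x = x_value                       # intervention: X is set, not sampled
        y = 2.0 * x + random.gauss(0, 1)  # Y's mechanism / learned conditional
        samples.append(y)
    return samples
```

Under do(X = 1), the sample mean of Y concentrates around 2.0; with image-valued variables, the hand-written conditional above is replaced by a deep conditional generative model, as in the papers below.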

  1. M. M. Rahman*, M. Jordan*, M. Kocaoglu, “Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand,” arXiv, 2024.
  2. M. M. Rahman, M. Kocaoglu, “Modular Learning of Deep Causal Generative Models for High-dimensional Causal Inference,” arXiv, 2024.
  3. M. Kocaoglu*, C. Snyder*, A. G. Dimakis, S. Vishwanath, “CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training,” in Proc. of ICLR’18, Vancouver, Canada, May 2018.


Past Members

Postdoctoral Researchers

Visiting Researcher


Acknowledgement

Our lab is currently supported by funding from the National Science Foundation (NSF) and Adobe Research.