Research

My current research focuses on machine learning in general, and on causal inference and learning algorithms from data in particular. Specifically, I develop algorithms for learning causal graphs from observational and experimental data using tools from information theory and graph theory.
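One information-theoretic tool that recurs in this line of work is the greedy minimum-entropy coupling: given several discrete marginal distributions, greedily build a joint distribution consistent with all of them while keeping its entropy small. The sketch below is purely illustrative (the function names and list-based representation are my own, not from any released code): at each step it takes the smallest of the largest remaining masses across the marginals, assigns that mass to one joint outcome, and subtracts it from each marginal.

```python
import math

def greedy_min_entropy_coupling(dists, tol=1e-12):
    """Greedily couple discrete distributions (lists of probabilities)
    into one joint distribution with small entropy.

    Illustrative sketch only. Returns the probabilities of the joint
    outcomes produced by the greedy procedure.
    """
    dists = [list(p) for p in dists]  # work on copies
    joint = []
    while True:
        # Largest remaining mass in each marginal.
        maxima = [max(p) for p in dists]
        e = min(maxima)
        if e <= tol:  # all mass has been assigned
            break
        joint.append(e)
        # Remove mass e from the argmax entry of each marginal.
        for p in dists:
            p[p.index(max(p))] -= e
    return joint

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

For example, coupling two fair coins yields a joint distribution of entropy 1 bit, matching the entropy of each marginal, since identical marginals can be perfectly aligned.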

News

New! I will be starting as an assistant professor in the ECE department at Purdue University in January 2021 and will be looking for motivated Ph.D. students to work on fundamental problems in causal inference with machine learning applications, starting Fall 2021. Please apply here, mention my name in your application package, and email me your resume at mkocaoglu - at - utexas.edu with the subject line “Prospective PhD Student” (without the quotes).
If you already sent me an email, please see this note.

New! Our paper titled “Applications of Common Entropy for Causal Inference” is accepted at NeurIPS’20, available here.

New! Our paper titled “Causal Discovery from Soft Interventions with Unknown Targets: Characterization and Learning” is accepted at NeurIPS’20, available here.

New! Our paper titled “Entropic Causal Inference: Identifiability and Finite Sample Results” is accepted at NeurIPS’20, available here.

New! Our paper titled “Active Structure Learning of Causal DAGs via Directed Clique Trees” is accepted at NeurIPS’20, available here.

Our paper titled “Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions” is accepted at NeurIPS’19, available here.

Our paper titled “Sample Efficient Active Learning of Causal Trees” is accepted at NeurIPS’19, available here.

I gave an invited talk on CausalGAN at the WHY-19 Symposium. The website and slides are here.

I gave a talk on the Shannon Channel about entropic methods for causal inference; you can watch it here.

Our paper titled “Experimental Design for Cost-Aware Learning of Causal Graphs” is accepted at NeurIPS’18. A preprint is available here.

A new preprint titled “Entropic Causal Inference with Latent Variables” is available here.

Our paper titled “CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training” is accepted at ICLR’18. A preprint is available here.

Our paper titled “Experimental Design for Learning Causal Graphs with Latent Variables” is accepted at NIPS’17. A preprint is available here.

A new preprint titled “CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training” is available here. A TensorFlow implementation is available here.

Our paper titled “Cost-Optimal Learning of Causal Graphs” is accepted at ICML’17. A preprint is available here.

Our paper titled “Entropic Causality and Greedy Minimum Entropy Coupling” is accepted at ISIT 2017. A preprint is available here.

A new preprint titled “Sparse Quadratic Logistic Regression in Sub-quadratic Time” is available here.

A new preprint titled “Cost-Optimal Learning of Causal Graphs” is available here.

A new preprint titled “Entropic Causality and Greedy Minimum Entropy Coupling” is available here.

Our paper titled “Contextual Bandits with Latent Confounders: An NMF Approach” is accepted at AISTATS 2017. A preprint is available here.

Python code for the entropy-based causal inference algorithm from our “Entropic Causal Inference” paper is available here.

Our paper titled “Entropic Causal Inference” is accepted at AAAI 2017. A preprint is available here.

Our paper titled “Learning Causal Graphs with Constraints” is accepted for a poster presentation in the NIPS 2016 workshop What if? Inference and Learning of Hypothetical and Counterfactual Interventions in Complex Systems, available here.

We have a preprint available on contextual bandits with unobserved confounders here.

Our paper titled “Learning Causal Graphs with Small Interventions” is accepted at NIPS 2015. A preprint is available here.

We are organizing a student seminar series within WNCG. You can find the schedule here.

Our paper titled “Sparse Polynomial Learning and Graph Sketching” is accepted for oral presentation at NIPS 2014. A preprint is available here.