IEOR-DRO Seminar Series: Nathan Kallus, Cornell Tech
Speaker: Nathan Kallus (Cornell Tech)
Date: Tuesday, November 20, 2018
Time: 1:10pm to 2:00pm
Location: 303 Mudd
Title: Learning to Personalize from Observational Data Under Unobserved Confounding
Abstract: Recent work on counterfactual learning from observational data aims to leverage large-scale data -- much larger than any experiment could ever be -- to learn individual-level causal effects for personalized interventions. The hope is to transform electronic medical records into personalized treatment regimes, transactional records into personalized pricing strategies, and click- and "like"-streams into personalized advertising campaigns. Motivated by the richness of the data, existing approaches (including my own) make the simplifying assumption that there are no unobserved confounders: unobserved variables that affect both treatment and outcome and would induce non-causal correlations that cannot be accounted for. However, all observational data, no matter how rich, lacks experimental manipulation and will inevitably be subject to some level of unobserved confounding. Assuming otherwise can lead to personalized treatment policies that seek to exploit individual-level effects that are not really there, intervene where intervention is unnecessary, and in fact cause net harm rather than net good relative to current, non-personalized practices.
The question is then how to use such powerfully rich data to safely improve upon current practices. In this talk, I will present a novel approach to the problem that calibrates policy learning to realistic violations of the unverifiable assumption of unconfoundedness. Our framework for confounding-robust policy improvement optimizes the minimax regret of a candidate policy against a baseline standard-of-care policy over an uncertainty set for propensity weights motivated by sensitivity analysis in causal inference. By establishing a finite-sample generalization bound, we prove that our robust policy, when applied in practice, is (almost) guaranteed to do no worse than the baseline and to improve upon it whenever improvement is possible. We characterize the adversarial optimization subproblem and use efficient algorithmic solutions to optimize over policy spaces such as hyperplanes, score cards, and decision trees. We assess our methods on a large clinical trial of acute ischaemic stroke treatment, demonstrating that hidden confounding can hinder existing approaches and lead to overeager intervention and unwarranted harm, while our robust approach guarantees safety and focuses on well-evidenced improvement, a necessity for making personalized treatment policies learned from observational data usable in practice.
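The uncertainty-set idea in the abstract can be sketched in a few lines. The snippet below is an illustrative simplification, not the talk's exact formulation: it computes the worst-case (adversarial) value of a fixed policy when each inverse-propensity weight may deviate from its nominal value by a factor of at most `gamma`, a box-constrained stand-in for the sensitivity-analysis-motivated uncertainty set. The function name, the box bounds, and the threshold enumeration are all assumptions for illustration.

```python
import numpy as np

def worst_case_policy_value(outcomes, nominal_weights, gamma):
    """Adversarial IPW estimate of a policy's value under confounding.

    Each inverse-propensity weight w_i is only known up to a factor
    gamma >= 1, so the adversary minimizes the normalized weighted mean
        sum(w_i * y_i) / sum(w_i)
    over w_i in [w0_i / gamma, w0_i * gamma].
    """
    outcomes = np.asarray(outcomes, dtype=float)
    lo = np.asarray(nominal_weights, dtype=float) / gamma
    hi = np.asarray(nominal_weights, dtype=float) * gamma
    # The minimizing weights have a threshold structure: units with the
    # smallest outcomes receive their maximum weight, the rest their
    # minimum. Sorting by outcome and scanning all thresholds finds it.
    order = np.argsort(outcomes)
    y, l, h = outcomes[order], lo[order], hi[order]
    n = len(y)
    best = np.inf
    for k in range(n + 1):
        # k units with the smallest outcomes get the upper bound weight
        w = np.concatenate([h[:k], l[k:]])
        best = min(best, float(np.dot(w, y) / w.sum()))
    return best
```

With `gamma = 1` this reduces to the ordinary normalized IPW estimate; as `gamma` grows, the worst-case value drops, and a policy is only preferred to the baseline if it wins even under this pessimistic evaluation. The actual framework in the talk couples this adversarial evaluation with regret against the baseline policy and optimizes over a policy class, which this sketch does not attempt.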
Bio: Nathan Kallus is a Professor of Operations Research and Information Engineering at Cornell University and Cornell Tech, where he has been since Fall 2016. His research revolves around data-driven decision making in operations, the interplay of optimization and statistics in decision making and inference, and the analytical capacities and challenges of unstructured, large-scale, and web-based data. His work spans basic theory, effective methodology, and novel applications, and has been recognized by awards. Nathan hails from the town of Haifa, Israel. He holds a PhD in Operations Research from MIT as well as a BA in Pure Mathematics and a BS in Computer Science, both from UC Berkeley. Previously, Nathan was a Visiting Scholar at USC's Department of Data Sciences and Operations and a Postdoctoral Associate in MIT's Operations Research and Statistics group. Nathan is currently recruiting highly motivated and talented PhD students for his research group at Cornell Tech in NYC.