Applied Statistics Workshop (Gov 3009)

Date: 

Wednesday, November 18, 2015, 12:00pm to 1:30pm

Location: 

K354, CGIS Knafel, 1737 Cambridge St, Cambridge, MA
The Applied Statistics Workshop (Gov 3009) meets throughout the academic year, Wednesdays, 12pm-1:30pm, in CGIS K354. The workshop is a forum for advanced graduate students, faculty, and visiting scholars to present and discuss methodological or empirical work in progress in an interdisciplinary setting. It features a tour of Harvard's statistical innovations and applications, with weekly stops in different fields and disciplines, and includes occasional presentations by invited speakers. A free lunch is provided.

Presentation by:

Luke Miratrix

Title:

Estimating and assessing treatment effect variation in large-scale randomized trials with randomization inference

Authors:

Peng Ding, Avi Feller, Luke Miratrix

Abstract:

Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, stands in contrast to much of the foundational research on causal inference; Neyman, for example, avoided such variation through his focus on the average treatment effect (ATE) and his definition of the confidence interval. We extend the Neymanian framework to explicitly allow both for treatment effect variation explained by covariates, known as the systematic component, and for unexplained treatment effect variation, known as the idiosyncratic component. This perspective enables estimation and testing of impact variation without imposing a model on the marginal distributions of the potential outcomes, with the workhorse approach of regression with interaction terms being a special case. Our approach leads to two practical results. First, estimates of systematic impact variation give sharp bounds on overall treatment variation, as well as bounds on the proportion of total impact variation explained by a given model; this is essentially an R^2 for treatment effect variation. Second, by using covariates to partially account for the correlation of potential outcomes, we sharpen the bounds on the variance of the unadjusted average treatment effect estimate itself. As long as the treatment effect varies across observed covariates, these bounds are sharper than the current sharp bounds in the literature. We demonstrate these ideas on the Head Start Impact Study, a large randomized evaluation in educational research, showing that these results are meaningful in practice.
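To make the "regression with interaction terms" special case mentioned in the abstract concrete, the following Python lines sketch how the systematic component of treatment effect variation can be estimated from a randomized experiment. This is a minimal illustration only, not the authors' estimator; the simulated data and all variable names are hypothetical.

# A minimal illustrative sketch, not the authors' estimator: it shows the
# "regression with interaction terms" special case mentioned in the abstract.
# All variable names and the simulated data below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                 # one pre-treatment covariate
z = rng.binomial(1, 0.5, size=n)       # randomized treatment indicator
tau = 1.0 + 0.5 * x                    # true effect varies systematically with x
y = 2.0 + x + z * tau + rng.normal(size=n)

# OLS of the outcome on treatment, covariate, and their interaction.
X = np.column_stack([np.ones(n), z, x, z * x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Under the interacted model, the unit-level effect is tau_hat(x) = b_z + b_zx * x;
# its sample variance estimates the systematic component of impact variation.
tau_hat = b[1] + b[3] * x
print("estimated ATE:", tau_hat.mean())
print("systematic impact variance:", tau_hat.var(ddof=1))
print("true systematic variance:", tau.var(ddof=1))

In this toy setup the variance of the fitted unit-level effects recovers the explained (systematic) portion of impact variation; the unexplained (idiosyncratic) portion is not point-identified and, per the abstract, can only be bounded.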