RD or Not RD: Using Experimental Studies to Assess the Performance of the Regression Discontinuity Approach

Journal: Evaluation Review, vol. 42, no.
Published: Feb 28, 2018
Authors: Philip Gleason, Alexandra Resch, and Jillian Berk

Background. This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from the experimental data of two recent evaluations of educational interventions—the Educational Technology Study and the Teach for America Study—and compare the RD impact estimates with the experimental estimates of the same interventions.

Objectives. This article examines the performance of the RD estimator when the design is well implemented, as well as the extent of bias introduced by manipulation of the assignment variable in an RD design.

Research design. We simulate RD analysis files by selectively dropping observations from the original experimental data files. We then compare impact estimates based on this RD design with those from the original experimental study. Finally, we simulate a situation in which some students manipulate the value of the assignment variable to receive treatment and compare RD estimates with and without manipulation.
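The first step of this design can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: it simulates experimental data (with a made-up baseline score, effect size, and noise level), constructs a synthetic sharp RD file by keeping treated units at or above a cutoff and control units below it, and estimates the impact at the cutoff with a simple local linear regression. All variable names, the bandwidth, and the data-generating parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experimental data (parameters are illustrative, not from the study):
# a baseline score serves as the would-be assignment variable.
n, cutoff, true_effect = 20000, 0.0, 5.0
score = rng.normal(0, 1, n)                     # assignment variable
treat = rng.integers(0, 2, n)                   # random experimental assignment
outcome = 50 + 3 * score + true_effect * treat + rng.normal(0, 2, n)

# Build the synthetic RD file by selectively dropping observations:
# keep treated units at/above the cutoff and control units below it.
keep = ((treat == 1) & (score >= cutoff)) | ((treat == 0) & (score < cutoff))
s, y = score[keep], outcome[keep]

def rd_estimate(s, y, cutoff=0.0, h=0.5):
    """Local linear RD: fit a line on each side of the cutoff within
    bandwidth h and take the gap between the two intercepts at the cutoff."""
    lo = (s >= cutoff - h) & (s < cutoff)
    hi = (s >= cutoff) & (s <= cutoff + h)
    b_lo, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(lo.sum()), s[lo] - cutoff]), y[lo], rcond=None)
    b_hi, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(hi.sum()), s[hi] - cutoff]), y[hi], rcond=None)
    return b_hi[0] - b_lo[0]

impact = rd_estimate(s, y)   # should be close to true_effect
```

Because the synthetic RD sample is carved out of randomized data, the RD estimate can be benchmarked directly against the known experimental effect, which is the logic of the within-study comparison.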

Results and conclusion. RD and experimental estimators produce impact estimates that are not significantly different from one another and have a similar magnitude, on average. Manipulation of the assignment variable can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable’s cutoff value.
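The manipulation result can also be illustrated with a small simulation. The sketch below is hypothetical (parameters, names, and the form of manipulation are my assumptions, not the authors' specification): students just below the cutoff whose unobserved outcome component is above average nudge their scores over the cutoff. Because this sorting is related to the outcome and concentrated near the cutoff, the RD estimate is pushed away from the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, true_effect = 20000, 0.0, 5.0

score = rng.normal(0, 1, n)        # assignment variable
u = rng.normal(0, 2, n)            # unobserved outcome component
treated = score >= cutoff          # sharp RD: treatment at/above the cutoff
outcome = 50 + 3 * score + true_effect * treated + u

def rd_estimate(s, y, cutoff=0.0, h=0.5):
    """Local linear RD: gap between side-specific intercepts at the cutoff."""
    lo = (s >= cutoff - h) & (s < cutoff)
    hi = (s >= cutoff) & (s <= cutoff + h)
    b_lo, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(lo.sum()), s[lo] - cutoff]), y[lo], rcond=None)
    b_hi, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(hi.sum()), s[hi] - cutoff]), y[hi], rcond=None)
    return b_hi[0] - b_lo[0]

clean = rd_estimate(score, outcome)

# Outcome-related manipulation near the cutoff: students just below it
# with above-average unobservables (u > 0) shift their scores over the line.
manip = (score > cutoff - 0.2) & (score < cutoff) & (u > 0)
score_m = score.copy()
score_m[manip] = cutoff + rng.uniform(0, 0.05, manip.sum())
outcome_m = outcome.copy()
outcome_m[manip] += true_effect    # manipulators now receive the treatment

biased = rd_estimate(score_m, outcome_m)   # inflated relative to `clean`
```

In this setup the bias is upward: high-`u` students pile up just above the cutoff (raising the right-side intercept) and vanish from just below it (lowering the left-side intercept), so the estimated jump overstates the true effect.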