PUAD 729/PUBP 710: Advanced Research Methods for Policy and Management
Schar School of Policy and Government, George Mason University
Summer 2017

Instructor: John D. Marvel, PhD
Time and Location: Monday and Wednesday, 7:20-10:00pm, Arlington: Founders Hall 477
Contact: office phone: 703-993-4447; email: jmarvel@gmu.edu
Office Hours: Monday and Wednesday, 6:00-7:00pm, Founders Hall

I Overview and Objectives

Do school choice programs increase student performance? Do minimum wage laws increase unemployment? Are adolescents who are exposed to firearm violence more likely to become perpetrators of violence? Does the age-21 drinking limit in the United States result in dangerous drinking behavior among young adults? Each of these is fundamentally a causal question. Each asks whether a policy, program, or some other event or intervention causes changes in an outcome of interest.

The purpose of this course is to familiarize students with research methods that are designed to answer causal questions that are relevant to policy and management. These methods include experimental and observational approaches to causal inference. Coverage includes field experiments, survey experiments, lab experiments, natural experiments, instrumental variables, regression discontinuity designs, propensity score matching, and difference-in-differences designs. The course aims to minimize technical statistical content, but some statistics is unavoidable. Most class meetings will combine lecture with data analysis done in Stata.

Upon completion of the course, you should:
[i] Know how to design and evaluate research whose purpose is to answer causal questions
[ii] Feel comfortable using Stata to analyze data from experimental and observational studies

II Required Materials

[i] Stata 14 IC 6-month license ($75)
[ii] There is no required text.
All assigned readings and other course materials will be posted at the following site (you do not need a Dropbox account to access these materials): https://www.dropbox.com/sh/pczp1chz9iwiitd/aaasacc5xal7gtsxahwng1hma?dl=0
III Assignments

[i] Participation
- Be prepared and willing to discuss assigned readings
- Class exercises
[ii] Homework
- Approximately six take-home exercises, mixing theory and data analysis
- All homework must be submitted in PDF form by responding to Dropbox file requests that I will send to your GMU email addresses
- Title each homework with your last name and homework number. For instance, if your name were George Washington, you would title your first homework washington-hw1, your second homework washington-hw2, etc.
[iii] Take-home final exam
- Use real data (that I provide) to replicate and critique an existing study (or studies)

IV Grading

Participation: 10%
Homework: 60%
Take-home final exam: 30%

V Schedule

- Week I
[1] June 5: Overview, Introduction to Stata
[2] June 7: The potential outcomes framework
- Week II
[3] June 12: Field experiments
[4] June 14: Statistical power and other design issues
- Week III
[5] June 19: Treatment effect heterogeneity
[6] June 21: Survey experiments
- Week IV
[7] June 26: Lab experiments
[8] June 28: Natural experiments
- Week V
* July 3: No class (July 4 observed)
[9] July 5: Regression discontinuity
- Week VI
[10] July 10: Instrumental variables I
[11] July 12: Instrumental variables II
- Week VII
[12] July 17: Matching I
[13] July 19: Matching II
- Week VIII
[14] July 24: Difference-in-differences
[15] July 26: Audit studies/extra

VI Course Outline

- Week I
[1] Overview, Introduction to Stata
Taubes, G. (2007). Do we really know what makes us healthy? New York Times Magazine.
Heckman, J. J. and Smith, J. A. (1995). Assessing the case for social experiments. Journal of Economic Perspectives, 9(2):85-110.
[2] The potential outcomes framework
Duflo et al. (2007). Using randomization in development economics research: A toolkit. Sections 1 & 2 only.
Hernan and Robins (2017). Causal Inference. Sections 1.1-1.3 only.
Mills, J. N., Egalite, A. J., and Wolf, P. J. (2016). How has the Louisiana Scholarship Program affected students?
- Week II
[3] Field experiments
Gerber, A. S., Green, D. P., and Larimer, C. W. (2008). Social pressure and voter turnout: Evidence from a large-scale field experiment. American Political Science Review, 102(1):33-48.
Hodnett, E. D., Lowe, N. K., Hannah, M. E., Willan, A. R., Stevens, B., Weston, J. A., Ohlsson, A., Gafni, A., Muir, H. A., Myhr, T. L., et al. (2002). Effectiveness of nurses as providers of birth labor support in North American hospitals: A randomized controlled trial. JAMA, 288(11):1373-1381.
[4] Statistical power and other design issues
Duflo et al. (2007). Using randomization in development economics research: A toolkit. Sections 4 & 5 only.
Stata Power and Sample-Size Reference Manual, Introduction to power and sample-size analysis.
- Week III
[5] Treatment effect heterogeneity
Duflo et al. (2007). Using randomization in development economics research: A toolkit. Section 7.3 only.
Hernan and Robins (2017). Causal Inference. Section 4.1 only.
Ferraro, P. J. and Price, M. K. (2013). Using nonpecuniary strategies to influence behavior: Evidence from a large-scale field experiment. Review of Economics and Statistics, 95(1):64-73.
[6] Survey experiments
Carnes, N. and Lupu, N. (2016). Do voters dislike working-class candidates? Voter biases and the descriptive underrepresentation of the working class. American Political Science Review, 110(4):832-844.
Barabas, J. and Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104(2):226-242.
Duflo et al. (2007). Using randomization in development economics research: A toolkit. Sections 6 & 7 only.
- Week IV
[7] Lab experiments
Berkowitz, L. and Donnerstein, E. (1982). External validity is more than skin deep: Some answers to criticisms of laboratory experiments. American Psychologist, 37(3):245-257.
Smith, J. (2016). The motivational effects of mission matching: A lab-experimental test of a moderated mediation model. Public Administration Review, 76(4):626-637.
[8] Natural experiments
Dunning, T. (2008). Improving causal inference: Strengths and limitations of natural experiments. Political Research Quarterly, 61(2):282-293.
Kirk, D. S. (2009). A natural experiment on residential change and recidivism: Lessons from Hurricane Katrina. American Sociological Review, 74(3):484-505.
- Week V
* No class: July 4 observed
[9] Regression discontinuity
Duflo et al. (2007). Using randomization in development economics research: A toolkit. Re-read Section 2.3.
Jacob, R., Zhu, P., Somers, M.-A., and Bloom, H. (2012). A practical guide to regression discontinuity. MDRC. Sections 1 & 2 only.
Carpenter, C. and Dobkin, C. (2011). The minimum legal drinking age and public health. The Journal of Economic Perspectives, 25(2):133-156. Read the entire article, but we are particularly interested in the sections on regression discontinuity designs.
- Week VI
[10] Instrumental variables I
Murray, M. P. (2006). Avoiding invalid instruments and coping with weak instruments. The Journal of Economic Perspectives, 20(4):111-132.
Newhouse, J. P. and McClellan, M. (1998). Econometrics in outcomes research: The use of instrumental variables. Annual Review of Public Health, 19(1):17-34.
[11] Instrumental variables II
Acemoglu, D., Johnson, S., and Robinson, J. A. (2001). The colonial origins of comparative development: An empirical investigation. The American Economic Review, 91(5):1369-1401.
Gottfried, M. A. (2010). Evaluating the relationship between student attendance and achievement in urban elementary and middle schools: An instrumental variables approach. American Educational Research Journal, 47(2):434-465.
- Week VII
[12] Matching I
D'Agostino, R. B. (1998). Tutorial in biostatistics: Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17(19):2265-2281.
Bingenheimer, J. B., Brennan, R. T., and Earls, F. J. (2005). Firearm violence exposure and serious violent behavior. Science, 308(5726):1323-1326.
[13] Matching II
O'Keefe, S. (2004). Job creation in California's enterprise zones: A comparison using a propensity score matching model. Journal of Urban Economics, 55(1):131-150.
Foster, E. M. (2003). Propensity score matching: An illustrative analysis of dose response. Medical Care, 41(10):1183-1192.
- Week VIII
[14] Difference-in-differences
Dimick, J. B. and Ryan, A. M. (2014). Methods for evaluating changes in health care policy: The difference-in-differences approach. JAMA, 312(22):2401-2402.
Bogart, W. T. and Cromwell, B. A. (2000). How much is a neighborhood school worth? Journal of Urban Economics, 47(2):280-305.
Card, D. and Krueger, A. B. (1994). Minimum wages and employment: A case study of the fast-food industry in New Jersey and Pennsylvania. American Economic Review, 84:772-793.
[15] Audit studies/extra
Bertrand, M. and Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. The American Economic Review, 94(4):991-1013.
Lahey, J. N. (2008). Age, women, and hiring: An experimental study. Journal of Human Resources, 43(1):30-56.