MIT Alumni Books Podcast: Failing in the Field


[SLICE OF MIT THEME MUSIC]

ANNOUNCER: You're listening to the Slice of MIT podcast, a production of the MIT Alumni Association.

JOE: It's the MIT Alumni Books Podcast. I'm Joe McGonegal, Director of Alumni Education. And joining me is Dean Karlan, PhD class of '02, a Professor of Economics at Yale University. Dean's new book is Failing in the Field: What We Can Learn When Field Research Goes Wrong, co-authored with Jacob Appel and published this fall by Princeton University Press. Karlan is the founder of Innovations for Poverty Action, a research and policy nonprofit that discovers and promotes effective solutions to global poverty problems. Dean Karlan, thanks for joining me. The book collaboration with this co-author was born out of a failure in the field in Africa a decade ago, so why choose 2016 to publish it?

DEAN: It had been an idea that we'd had for a while, but needless to say, you're not typically driven to write up all the failures. To be clear, these are not failed development projects where we did an evaluation and we found that something didn't work, because that's actually an important story. That's generating knowledge, and that should be published as a paper. These are projects that get called off halfway through, where there was really no knowledge generated from an academic perspective on the key question that we set out to ask when we started the project. And so it never saw the light of day. And yet when we are out there constantly in the field, trying new things, trying new ways of collecting data, if we don't collect our failures in the research process, we're not going to learn as well as we could. And so that was basically the motivation behind this: to say, let's document some of these. Let's also try to create a little bit more of a culture of sharing these kinds of stories. Because we're all going to do better in the way we do our research if we're a little bit more open and learning from our failures.

JOE: You say the subject need not be taboo.

DEAN: It shouldn't be. And I mean, obviously some of these are embarrassing. There are some things in here where I scratch my head, and it's embarrassing that we did these things. But at the end of the day, we did them, and we'd like to see that other people don't do them, too. So let's go ahead and be candid and talk about them. One thing that allows us to do this, to be perfectly blunt, is that IPA and the MIT Jameel Poverty Action Lab-- that's our sister organization-- we have had exhilarating successes with some of the studies we've done, in seeing the way we've managed to generate important knowledge. And that knowledge and that evidence has changed the way policies have happened. But too often, people get this idea that everything's rosy, and that's not the way the world really works. And so this is an attempt to document the dark side a little bit and show that there have been some hiccups along the way. And we learn and we improve, and it's part of the journey.

JOE: You've written plenty about your successes, I'm sure. The scale-ups from IPA and JPAL have reached over 200 million people, you say. But then again, two billion people in the world live on $2 a day, and there's plenty more scaling up that can be done.

DEAN: Absolutely.

JOE: In the book, readers will find five chapters on kinds of failure that research projects in randomized controlled trials will encounter. And then you take us through some case studies of these wide-scale experiments on curing the ills of poverty around the world. So tell listeners a story of one of these failures in one of these case studies that you take us through and what was learned from it.

DEAN: One of the earlier ones was in the Philippines. It was actually one of our-- Philippines and Peru were the first two countries that IPA started working in. And the punch-line lesson that we learned is that it's never quite as easy to set up a randomized trial and adhere to the protocols as you think it is. So we were working with a micro-lending organization, and they wanted to know what extra benefits one might get if you include business training or health training as part of the weekly meetings in the microcredit program. And this is a very common approach that you see in microcredit in different parts of the world. And yet we didn't really have good evidence about the value of that add-on. And so we worked with this organization, and we thought it was going to be fairly simple. We took a list of all of their groups, and we randomized them into three piles-- those that get business training, those that get health training, and those that stayed with the same services they were currently getting. And they said, OK, this is great. Make sure to let us know which ones do which, and everything's great.
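
For readers who want to see the mechanics, here is a minimal sketch (in Python, not from the book) of the kind of three-arm assignment Karlan describes, together with the treated-share calculation behind the spot checks he turns to next. The group IDs, seed, and spot-check records are illustrative placeholders, not data from the study.

```python
# Hypothetical sketch: assign borrowing groups to three arms, then compute
# the share of spot-checked groups that actually received any training.
import random

groups = [f"group_{i:03d}" for i in range(120)]  # illustrative list of borrowing groups

random.seed(42)  # fixed seed so the assignment is reproducible
shuffled = random.sample(groups, k=len(groups))
arms = ["business_training", "health_training", "control"]
assignment = {g: arms[i % 3] for i, g in enumerate(shuffled)}  # balanced three-way split

# Spot check: for each visited group, was any training actually delivered?
# (In the episode's telling, roughly 70% of treatment groups and 5% of control
# groups turned out to be "treated" on the first check.)
spot_check = {"group_004": True, "group_017": False, "group_023": True}  # illustrative

def treated_share(in_arm):
    """Share of spot-checked groups in the given arms that received training."""
    visited = [g for g in spot_check if in_arm(assignment[g])]
    return sum(spot_check[g] for g in visited) / len(visited) if visited else float("nan")

treatment_share = treated_share(lambda arm: arm != "control")
control_share = treated_share(lambda arm: arm == "control")
print(f"treated share, treatment arms: {treatment_share:.0%}")
print(f"treated share, control arm:    {control_share:.0%}")
```

The point of the sketch is only that the design is simple on paper; as Karlan explains next, getting the field operation to adhere to that assignment is the hard part.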

DEAN: And we did not actually put a staff member on the ground there, working with them, day in and day out, to help make sure that those assignments were actually adhered to. Instead we just went and spot-checked every so often and would give feedback to them about what was happening. The first spot check we did, we found that something like 70%-- I forget the exact number-- 70% of the treatment group was getting treated, and about 5% of the control group was getting treated. Now, 70% is not such a bad number, because there are a lot of things going on for the group that week. They might ditch the training that week because they need to focus on repayment. The 5% in the control group worried me a little bit more, because that's supposed to be zero. And so that means there is a bit of confusion. Maybe a credit officer moved assignments or something like that. And so maybe you got a little bit of leakage. A little bit of leakage is OK. You can still do the analysis that's needed, as long as you have this big difference. But then we talked a lot about, OK, this is good, but maybe some incentives would be good to put in place to make sure that all the credit officers adhered to the protocols. And this was all done from afar, and that's the theme of the mistakes here. This is all being done from afar. And they said, sure, and they said, great. And three months later, we asked, are the incentives in place? And they said, we're working on it. We go back, we do another spot check, and this time about 35% of the treatment group was getting treated and something like 30% of the control group was getting treated. And that was the end of the study.

Luckily, for what it's worth, we didn't do a baseline. You don't need a baseline when you do a randomized trial. There are some reasons why you might like to have one, but you don't need one. We did actually do this a bit on the [INAUDIBLE] with the hopes that we would have all this nice evidence of a good experiment and then we would be able to raise money for an [INAUDIBLE] But we never were obviously able to raise money for an [INAUDIBLE] when there's really no experiment to evaluate, so it never came to fruition. And the whole thing got called off. And the lesson learned was: put people in the field, and make sure that you're working closely with the organization. Just because it seems simple from your perspective as a researcher does not mean that it's simple from the organizational perspective about how to adhere to the experimental protocols.

JOE: You read about technical design flaws, survey and measurement design flaws, low participation rates, and inappropriate research settings-- all sorts of pitfalls that controlled trials fall into. You write that failing fast on a study and pivoting quickly can be demoralizing for those doing your surveys and your trials.

DEAN: That's right. You do need to be self-aware of what's going on. You do need to make sure that you don't fall prey to sunk costs and just refuse to let go and just keep pushing and pushing on the project, even if the stars are just not aligned and the organizations are not really excited. And then it's going to be a painful process.

JOE: And you express your frustration with the publishing industry, journals that have been less prone in the past to publish no-impact studies and studies of failure. Talk about the response to the book so far. Do you see any change in the publishing industry?

DEAN: There has been change, but I want to make an important distinction here. The grumpiness that I have with journals and publication decisions is not about the stuff I was writing about in this book. There's not a single thing I wrote in this book that I think would have realistically been viable for a refereed journal in economics, because all we ever learned were some methodological points about project management and partner management, and technical issues about procedures for running a randomized trial. The part that's a problem in academia is when there's a good question being asked. I think A is going to cause B. And you go out, you set up a test, and it turns out A did not cause B to happen. You get what's called a null result. And then those do have a problem sometimes getting published. And that's a problem, because while it was a good theory and there was a plausible story as to why A should cause B, if you learn that it didn't, that's important. And it's just as important as documenting that it does cause B. And yet that does happen. I had a project in the Philippines. I'll never forget the line from a referee that said, the fundamental problem with this paper is the null result. And yet we had lots of data explaining the null result, the mechanisms.

We had all sorts of useful ways of understanding why we were not finding-- and it's not to say all journals are like that. There are definitely journals and editors and referees that do share my sentiments on this, but it is a historical problem across science, not just economics. What ends up happening is, when you get a statistically significant result, a lot of times we say, OK, I want to know why. And that's an important part of generating the knowledge, to understand why that result happened. But for some reason, when we get a null result, the instinct of our profession is to require a higher bar for understanding why. And it shouldn't be a higher bar. In both cases, we want some understanding as to why the result happened or didn't happen. So understanding the mechanism, the why, is always important, but it's not more important when you get stars on your coefficients for statistical significance than when you don't. It's always an important question, but it is the case that, for some reason, our profession tends to, in my perception, require a higher bar on the null result.

JOE: Talk about how your MIT education is alive and well in the writing in this book.

DEAN: I got started running randomized trials and doing fieldwork when I was a grad student at MIT. I'm about to go give a talk at this gathering called Fail Test about a project that I did when I was at MIT that failed. And luckily, some did succeed, too, so I did get out of MIT. I did graduate [INAUDIBLE]. But MIT is where I got my start doing all of this research. And to this day, I still collaborate and am very close to my advisors at MIT. And a big part of this whole movement is the Jameel Poverty Action Lab, which has been instrumental in seeing this incredible expansion of people out there in the world gathering evidence to figure out what's working and what's not. It's exhilarating to be a part of this movement.

JOE: You sit on the board of JPAL now. Tell alumni something about JPAL they might not understand or appreciate about its impact.

DEAN: JPAL's vision is to develop a network of researchers that are like-minded, who want to go out in developing countries, and the United States as well, and conduct randomized trials on social policy, whether it's run by a nonprofit, a business, or a government. That's the first part of its mission, and the second part is to do the right communication and the right scale-up work. Obviously it's a university. MIT is not going to go and get involved in direct scale-up, but we need to be able to communicate the results of the research to the right players, to figure out who needs to do what differently and get that information to them. And that policy and communications work is a big part of what we're doing at JPAL. And, of course, because we are part of a university, education is a big part of what JPAL is doing as well-- executive education in developing countries, here in the United States, online courses, things of that nature.

JOE: Talk about what else needs to be written on the topic of failure in the field.

DEAN: Well, what we're hoping to see happen is-- we collaborated with the World Bank's Development Impact Blog so that people can write in their own failures, put in some keywords. And our hope is that enough people will start writing some in that it'll become its own little blog. And people can have a bit of a living history of them, so that when you're doing a project and someone says, I know someone who tried a project like that, it failed, you ought to go read the blog post-- and there it is. And it has a nice little home where they can all live together. So we're really hoping some others do. I know we've seen a few posts, and that's one of the reasons why I'm giving this talk today, to try to generate some more enthusiasm for people. We don't want it to be burdensome. That's why we're not making it into a big thing. It's just a blog. But just write it up. Get it out there. Let people know what happened.

JOE: And tell me what else you're reading right now.

DEAN: Well, literally at this very moment-- well, not literally, I mean, but just before the phone call, I was reading Underground Airlines. It's a novel. It's kind of scary. It's a depiction of America if the Civil War had been won by the South, where slavery is still in place in four states. And yet it's set in today's time with all of the social media, et cetera.

JOE: The book is Failing in the Field: What We Can Learn When Field Research Goes Wrong. It's by Dean Karlan, PhD class of '02, and it's available from Princeton University Press or at your favorite local bookstore. Dean Karlan, thanks for patching in and telling us about it.

DEAN: Great. Thank you very much.

[SLICE OF MIT THEME MUSIC]