University of Wisconsin, and NBER, USA, and IZA, Germany
IZA World of Labor role
Author
Current position
Professor of Economics, University of Wisconsin, USA
Research interest
Experimental and non-experimental methods for the evaluation of interventions, with particular application to social and educational programs
Positions/functions as a policy advisor
Consultant to governments in the US, Canada, the UK, and Australia on evaluation issues for labor market and educational interventions
Past positions
Faculty at University of Western Ontario (1994–2001); faculty at University of Maryland (2001–2005); faculty at University of Michigan (2005–2017)
Qualifications
PhD Economics, University of Chicago, 1996
"Is the threat of reemployment services more effective than the services themselves?" American Economic Review 93:4 (2003): 1313–1327 (with D. Black, M. Berger, and B. Noel).
"The economics and econometrics of active labor market programmes." In: Ashenfelter, O. C., and D. Card (eds.). Handbook of Labor Economics, Volume 3A. Amsterdam: Elsevier, 1999: 1865–2097 (with J. Heckman and R. LaLonde).
"Does matching overcome LaLonde's critique of nonexperimental methods?" Journal of Econometrics 125:1–2 (2005): 305–353 (with P. Todd).
"Heterogeneous program impacts: Experimental evidence from the PROGRESA Program." Journal of Econometrics 64 (2008): 487–535 (with H. Djebbari).
"Government-sponsored vocational education." In: Hanushek, E. A., S. Machin, and L. Woessmann (eds). Handbook of the Economics of Education, Volume 5. Amsterdam: Elsevier, 2016; 479–652 (with B. McCall and C. Wunsch).
Are experiments the gold standard or just over-hyped?
Jeffrey A. Smith, May 2018

Non-experimental evaluations of programs compare individuals who choose to participate in a program to individuals who do not. Such comparisons run the risk of conflating non-random selection into the program with its causal effects. By randomly assigning individuals to participate in the program or not, experimental evaluations remove the potential for non-random selection to bias comparisons of participants and non-participants. In so doing, they provide compelling causal evidence of program effects. At the same time, experiments are not a panacea, and require careful design and interpretation.
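The selection problem described above can be made concrete with a short simulation. The sketch below is illustrative only and is not taken from the article: the outcome equation, the effect size, and the selection rule are all assumptions chosen for clarity. An unobserved trait raises both earnings and the chance of opting into the program, so the naive participant/non-participant gap overstates the true effect, while a randomized "coin flip" recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved "ability" raises earnings and also makes participation
# more likely (non-random selection). All magnitudes are assumptions.
ability = rng.normal(0, 1, n)
true_effect = 2.0  # assumed causal effect of the program on earnings

# Non-experimental world: higher-ability people opt in more often.
selects_in = rng.random(n) < 1 / (1 + np.exp(-ability))
earnings_obs = 10 + 3 * ability + true_effect * selects_in + rng.normal(0, 1, n)
naive = earnings_obs[selects_in].mean() - earnings_obs[~selects_in].mean()

# Experimental world: random assignment breaks the link between
# ability and treatment status.
assigned = rng.random(n) < 0.5
earnings_exp = 10 + 3 * ability + true_effect * assigned + rng.normal(0, 1, n)
experimental = earnings_exp[assigned].mean() - earnings_exp[~assigned].mean()

print(f"true effect:             {true_effect:.2f}")
print(f"naive comparison:        {naive:.2f}")         # inflated by selection on ability
print(f"experimental comparison: {experimental:.2f}")  # close to the true effect
```

Running this, the naive comparison conflates the program's effect with the higher ability of those who select in, while the experimental comparison lands near the true effect because randomization balances ability across the two groups.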