Assessing Inference Methods
Bruno Ferman
Published: 2019/12/18
Abstract
We analyze different types of simulations that applied researchers can use to assess whether their inference methods reliably control false-positive rates. We show that different assessments involve trade-offs, varying in the types of problems they can detect, in finite-sample performance, in susceptibility to sequential-testing distortions and to cherry-picking, and in implementation complexity. We also show that a commonly used simulation for assessing inference methods in shift-share designs can lead to misleading conclusions, and we propose alternatives. Overall, we provide novel insights and recommendations for applied researchers on how to choose, implement, and interpret inference assessments in their empirical applications.