Journal Article
Many published papers in the management field have used statistical methods that, according to the latest insights in econometrics, can lead to elevated rates of false positives: results that appear “significant” when in fact they are not. The question is how problematic these less robust econometric analyses are in practice for management research.
This paper presents simulations and an empirical replication to investigate two widespread but now largely discredited practices in panel data analysis: nonclustered standard errors and random effects (RE). The simulations indicate that both practices can lead to substantially elevated rates of false positives in empirical settings typical of management research.
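For intuition, the sketch below is not the paper's actual simulation design but illustrates the first practice: a persistent regressor, errors that are serially correlated within units, a true coefficient of zero, and classic versus clustered standard errors compared over Monte Carlo replications. The panel dimensions, AR(1) coefficient, and replication count are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Monte Carlo: regress noise with serially correlated errors on a persistent
# regressor; the true coefficient is zero, so any rejection at the 5% level
# is a false positive.
rng = np.random.default_rng(0)
N, T, reps, rho = 50, 10, 500, 0.8          # illustrative panel dimensions
groups = np.repeat(np.arange(N), T)          # unit identifier per observation

classic_rej = clustered_rej = 0
for _ in range(reps):
    x = np.vstack([rng.standard_normal(T).cumsum() for _ in range(N)])
    eps = rng.standard_normal((N, T))
    e = np.zeros((N, T))
    e[:, 0] = eps[:, 0]
    for t in range(1, T):
        e[:, t] = rho * e[:, t - 1] + eps[:, t]   # AR(1) errors within unit
    y = e.ravel()                                  # no true effect of x
    X = sm.add_constant(x.ravel())
    classic = sm.OLS(y, X).fit()                   # classic (nonclustered) SEs
    clustered = sm.OLS(y, X).fit(cov_type="cluster",
                                 cov_kwds={"groups": groups})
    classic_rej += classic.pvalues[1] < 0.05
    clustered_rej += clustered.pvalues[1] < 0.05

print(f"False-positive rate, classic SEs:   {classic_rej / reps:.2f}")
print(f"False-positive rate, clustered SEs: {clustered_rej / reps:.2f}")
```

With classic standard errors the rejection rate far exceeds the nominal 5%, while clustering by unit brings it back close to the nominal level.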
The often-advocated Hausman test does not always prevent false positives in RE regressions. A replication of a published regression that used RE and classic standard errors shows that many of the coefficients reported as significant in the original analysis become insignificant when the model is re-estimated with fixed effects and clustered standard errors on a slightly different sample.
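For reference, the standard textbook Hausman construction compares the FE and RE estimates of the same coefficient and rejects RE when they diverge. The sketch below shows the mechanics of the test, not the paper's replication code or its finding that the test can fail; the data-generating process, in which the unit effect is correlated with the regressor so that RE is inconsistent, is a hypothetical assumption.

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical panel in which the unobserved unit effect is correlated with
# the regressor, so RE is inconsistent and FE is the appropriate estimator.
rng = np.random.default_rng(0)
N, T = 100, 8
unit = np.repeat(np.arange(N), T)
alpha = rng.standard_normal(N)[unit]            # unobserved unit effect
x = 0.7 * alpha + rng.standard_normal(N * T)    # regressor correlated with it
y = 1.0 * x + alpha + rng.standard_normal(N * T)
idx = pd.MultiIndex.from_arrays([unit, np.tile(np.arange(T), N)],
                                names=["unit", "time"])
df = pd.DataFrame({"y": y, "x": x}, index=idx)

fe = PanelOLS.from_formula("y ~ x + EntityEffects", data=df).fit()
re = RandomEffects.from_formula("y ~ 1 + x", data=df).fit()

# Hausman statistic: squared FE-RE coefficient gap over the variance gap
# (positive under the test's assumptions); compare to chi-squared with 1 df.
b_diff = fe.params["x"] - re.params["x"]
v_diff = fe.cov.loc["x", "x"] - re.cov.loc["x", "x"]
h = b_diff**2 / v_diff
print(f"Hausman H = {h:.2f}, p = {stats.chi2.sf(h, df=1):.4f}")
```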
Based on the findings in this paper, published results that use nonclustered standard errors or RE estimates for panel data should be interpreted with great care, because the probability that they are false positives can be far higher than the reported significance levels suggest. Going forward, empirical researchers should cluster standard errors to account for serial correlation and use fixed rather than random effects to account for unobserved heterogeneity.
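A minimal sketch of that recommended specification, fixed effects with standard errors clustered by unit, using the linearmodels package; the firm-year panel and the true effect of 0.5 are hypothetical.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical firm-year panel; firm count, years, and the true effect of 0.5
# are illustrative assumptions.
rng = np.random.default_rng(1)
n_firms, n_years = 100, 8
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_firms),
    "x": rng.standard_normal(n_firms * n_years),
})
df["y"] = 0.5 * df["x"] + rng.standard_normal(len(df))
df = df.set_index(["firm", "year"])

# Firm fixed effects absorb time-invariant unobserved heterogeneity;
# clustering by firm makes the standard errors robust to within-firm
# serial correlation.
res = PanelOLS.from_formula("y ~ x + EntityEffects", data=df).fit(
    cov_type="clustered", cluster_entity=True
)
print(res.summary)
```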