Inference for Losers (with Isaiah Andrews, Dillon Bowen and Toru Kitagawa), American Economic Association Papers and Proceedings, 112 (2022), 635-640.
Inference After Estimation of Breaks (with Isaiah Andrews and Toru Kitagawa), Journal of Econometrics, 224 (2021), 39-59.
Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model, Journal of Business and Economic Statistics, 38 (2020), 810-825.
Estimation and Inference with a (Nearly) Singular Jacobian (with Sukjin Han), Quantitative Economics, 10 (2019), 1019-1068.
Bonferroni-Based Size-Correction for Nonstandard Testing Problems, Journal of Econometrics, 200 (2017), 17-35.
Parameter Estimation Robust to Low-Frequency Contamination (with Jonathan B. Hill), Journal of Business and Economic Statistics, 35 (2017), 598-610.
Memory Parameter Estimation in the Presence of Level Shifts and Deterministic Trends (with Pierre Perron), Econometric Theory, 29 (2013), 1196-1237.
Estimation of the Long-Memory Stochastic Volatility Model Parameters that is Robust to Level Shifts and Deterministic Trends, Journal of Time Series Analysis, 34 (2013), 285-301.
Short and Simple Confidence Intervals when the Directions of Some Effects are Known (with Philipp Ketz), revise and resubmit at Review of Economics and Statistics
Stata code available from SSC archive: type "ssc install ssci"
We introduce adaptive confidence intervals for a parameter of interest in the presence of nuisance parameters with known signs, such as coefficients on control variables. Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard ones when the nuisance parameters are small, while entailing minimal length increases at any parameter values. We apply our confidence intervals to the linear regression model, prove their uniform validity, and illustrate their length properties in an empirical application to a factorial design field experiment and a Monte Carlo study calibrated to the empirical application.
Hybrid Confidence Intervals for Informative Uniform Asymptotic Inference After Model Selection, revise and resubmit at Biometrika
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest, without assuming any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions that do not bound scaled model parameters. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest. Through a set of Monte Carlo experiments spanning a variety of data distributions, as well as an empirical application to the predictors of diabetes disease progression, I provide evidence of their desirable length and coverage properties in small samples.
Critical Values Robust to P-hacking (with Pascal Michaillat)
(previously titled "Incentive-Compatible Critical Values")
P-hacking occurs when researchers engage in various behaviors that increase their chances of reporting statistically significant results. P-hacking is problematic because it reduces the informativeness of hypothesis tests: it makes significant results much more common than they are supposed to be in the absence of true significance. Despite its prevalence, p-hacking is not taken into account in hypothesis testing theory; the critical values used to determine significance assume no p-hacking. To address this problem, we build a model of p-hacking and use it to construct critical values such that, if these values are used to determine significance, and if researchers adjust their behavior to these new significance standards, then significant results occur with the desired frequency. Because such robust critical values allow for p-hacking, they are larger than classical critical values. As an illustration, we calibrate the model with evidence from the social and medical sciences. We find that the robust critical value for any test is the classical critical value for the same test with one fifth of the significance level, a form of Bonferroni correction. For instance, for a z-test with a significance level of 5%, the robust critical value is 2.31 instead of 1.65 if the test is one-sided and 2.57 instead of 1.96 if the test is two-sided.
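The one-fifth rule above can be sketched numerically. The snippet below (a minimal illustration, not the paper's calibration code; the function name `robust_critical_value` is hypothetical) computes the classical z critical value at one fifth of the nominal significance level using only the Python standard library. Note the plain rule yields values of roughly 2.33 (one-sided) and 2.58 (two-sided), close to but not exactly the calibrated 2.31 and 2.57 reported above.

```python
# Sketch of the "one fifth of the significance level" adjustment described
# above, a Bonferroni-style correction. This is an illustrative helper, not
# the authors' calibration: the paper's calibrated values (2.31, 2.57)
# differ slightly from this simple rule.
from statistics import NormalDist

def robust_critical_value(alpha, two_sided=False):
    """Classical z critical value at one fifth of the significance level."""
    a = alpha / 5
    if two_sided:
        a /= 2  # split the adjusted level across both tails
    return NormalDist().inv_cdf(1 - a)

if __name__ == "__main__":
    alpha = 0.05
    print(f"one-sided: {robust_critical_value(alpha):.2f}")
    print(f"two-sided: {robust_critical_value(alpha, two_sided=True):.2f}")
```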
Inference on Winners (with Isaiah Andrews and Toru Kitagawa), revise and resubmit at Quarterly Journal of Economics
2019 Version (referenced in "Inference After Estimation of Breaks")
Many empirical questions concern target parameters selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner's curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner's curse. For settings in which validity is required only on average over the targets that might have been selected, we develop hybrid procedures that combine conditional and projection confidence intervals, offering further performance gains relative to existing alternatives.