Inference After Estimation of Breaks (with Isaiah Andrews and Toru Kitagawa), forthcoming in Journal of Econometrics
Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model, Journal of Business and Economic Statistics, 38 (2020), 810-825.
Estimation and Inference with a (Nearly) Singular Jacobian (with Sukjin Han), Quantitative Economics, 10 (2019), 1019-1068.
Bonferroni-Based Size-Correction for Nonstandard Testing Problems, Journal of Econometrics, 200 (2017), 17-35.
Parameter Estimation Robust to Low-Frequency Contamination (with Jonathan B. Hill), Journal of Business and Economic Statistics, 35 (2017), 598-610.
Memory Parameter Estimation in the Presence of Level Shifts and Deterministic Trends (with Pierre Perron), Econometric Theory, 29 (2013), 1196-1237.
Estimation of the Long-Memory Stochastic Volatility Model Parameters that is Robust to Level Shifts and Deterministic Trends, Journal of Time Series Analysis, 34 (2013), 285-301.
We provide adaptive confidence intervals for a parameter of interest in the presence of nuisance parameters when some of the nuisance parameters have known signs. The confidence intervals are adaptive in the sense that they tend to be short at and near the points where the nuisance parameters are equal to zero. We focus our results primarily on the practical problem of inference on a coefficient of interest in the linear regression model when it is unclear whether it is necessary to include a subset of control variables whose partial effects on the dependent variable have known directions (signs). Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard confidence intervals when the control variables do not have large effects. At the same time, they entail minimal length increases at any parameter values. We prove that our confidence intervals are asymptotically valid uniformly over the parameter space and illustrate their length properties in an empirical application to a factorial design field experiment and a Monte Carlo study calibrated to the empirical application.
Hybrid Confidence Intervals for Informative Uniform Asymptotic Inference After Model Selection, reject and resubmit at Journal of the American Statistical Association
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest without assuming any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest and provide evidence of their desirable length properties in finite samples via a set of Monte Carlo exercises that is calibrated to real-world data.
Incentive-Compatible Critical Values (with Pascal Michaillat)
Statistical hypothesis tests are a cornerstone of scientific research. The tests are informative when their size is properly controlled, so the frequency of rejecting true null hypotheses (type I error) stays below a pre-specified nominal level. Publication bias exaggerates test sizes, however. Since scientists can typically only publish results that reject the null hypothesis, they have the incentive to continue conducting studies until attaining rejection. Such p-hacking takes many forms: from collecting additional data to examining multiple regression specifications, all in search of statistical significance. The process inflates test sizes above their nominal levels because the critical values used to determine rejection assume that test statistics are constructed from a single study---abstracting from p-hacking. This paper addresses the problem by constructing critical values that are compatible with scientists' behavior given their incentives. We assume that researchers conduct studies until finding a test statistic that exceeds the critical value, or until the benefit from conducting an extra study falls below the cost. We then solve for the incentive-compatible critical value (ICCV). When the ICCV is used to determine rejection, readers can be confident that size is controlled at the desired significance level and that the researcher's response to the incentives delineated by the critical value is accounted for. Since they allow researchers to search for significance among multiple studies, ICCVs are larger than classical critical values. Yet, for a broad range of researcher behaviors and beliefs, ICCVs lie in a fairly narrow range.
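The size-inflation mechanism described above can be seen in a small simulation. This is a stylized sketch only: it assumes a researcher who runs at most a fixed number of independent studies (`max_studies` is an illustrative parameter; the paper instead derives the stopping rule from a cost-benefit tradeoff), and it size-corrects by solving for the per-study level under that assumed rule rather than by the paper's ICCV construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05
max_studies = 3      # illustrative cap on the number of studies (assumption;
                     # the paper derives stopping from costs and benefits)
n_sim = 200_000

def rejection_rate(crit):
    # Researcher runs up to max_studies independent studies under a true null
    # and reports a rejection if any one-sided z-statistic exceeds crit.
    z = rng.standard_normal((n_sim, max_studies))
    return np.mean(z.max(axis=1) > crit)

classical = norm.ppf(1 - alpha)             # ~1.645, assumes a single study
size_classical = rejection_rate(classical)  # inflated well above 0.05

# Size-correct for this stylized stopping rule: choose crit so that
# 1 - (1 - per_study_level)^max_studies = alpha.
corrected = norm.ppf((1 - alpha) ** (1 / max_studies))  # ~2.12
size_corrected = rejection_rate(corrected)              # close to 0.05

print(size_classical, size_corrected)
```

As the abstract notes, the corrected critical value exceeds the classical one, and using the classical value when multiple studies are possible yields a true size near 1 - (1 - alpha)^3, roughly 0.14 here.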
Inference on Winners (with Isaiah Andrews and Toru Kitagawa), revise and resubmit at Quarterly Journal of Economics
2019 Version (referenced in "Inference After Estimation of Breaks")
Many empirical questions concern target parameters selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner's curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner's curse. For settings where validity is required only on average over the targets that might have been selected, we develop hybrid procedures that combine conditional and projection confidence intervals to offer further performance gains relative to existing alternatives.
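The winner's curse itself is easy to reproduce in a toy simulation. The sketch below is an illustration of the problem the paper addresses, not of its procedures: it assumes ten policies with identical true effects of zero and known unit standard errors, selects the one with the largest estimated effect, and shows that the conventional estimate is biased upward and the naive 95% interval undercovers.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_sim = 10, 100_000   # illustrative: 10 candidate policies, 100k trials
true_effect = 0.0        # all policies are truly equally (in)effective
se = 1.0                 # known standard error of each estimate

# Estimated effects are true effect plus noise; report the "winner" each time.
est = true_effect + se * rng.standard_normal((n_sim, K))
winner = est.max(axis=1)

bias = winner.mean() - true_effect               # positive: winner's curse
# Conventional 95% CI for the winner's effect, ignoring selection:
covers = np.abs(winner - true_effect) <= 1.96 * se
print(bias, covers.mean())
```

With these assumed values the reported effect of the winner is biased upward by roughly 1.5 standard errors, and the conventional interval covers the true effect far less often than 95% of the time, which is the failure the conditional and hybrid intervals are designed to fix.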