Inference After Estimation of Breaks (with Isaiah Andrews and Toru Kitagawa), forthcoming in Journal of Econometrics
Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model, forthcoming in Journal of Business and Economic Statistics
Estimation and Inference with a (Nearly) Singular Jacobian (with Sukjin Han), Quantitative Economics, 10 (2019), 1019-1068.
Bonferroni-Based Size-Correction for Nonstandard Testing Problems, Journal of Econometrics, 200 (2017), 17-35.
Parameter Estimation Robust to Low-Frequency Contamination (with Jonathan B. Hill), Journal of Business and Economic Statistics, 35 (2017), 598-610.
Memory Parameter Estimation in the Presence of Level Shifts and Deterministic Trends (with Pierre Perron), Econometric Theory, 29 (2013), 1196-1237.
Estimation of the Long-Memory Stochastic Volatility Model Parameters that is Robust to Level Shifts and Deterministic Trends, Journal of Time Series Analysis, 34 (2013), 285-301.
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest without assuming any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest and provide evidence of their desirable length properties in finite samples via a set of Monte Carlo exercises that is calibrated to real-world data.
Incentive-Compatible Critical Values (with Pascal Michaillat)
Statistical hypothesis tests are a cornerstone of scientific research. The tests are informative when their size is properly controlled, so that the frequency of rejecting true null hypotheses (type I error) stays below a pre-specified nominal level. Publication bias inflates test sizes, however. Since scientists can typically only publish results that reject the null hypothesis, they have an incentive to continue conducting studies until attaining rejection. Such p-hacking takes many forms, from collecting additional data to examining multiple regression specifications, all in search of statistical significance. The process inflates test sizes above their nominal levels because the critical values used to determine rejection assume that test statistics are constructed from a single study---abstracting from p-hacking. This paper addresses the problem by constructing critical values that are compatible with scientists' behavior given their incentives. We assume that researchers conduct studies until finding a test statistic that exceeds the critical value, or until the benefit from conducting an extra study falls below the cost. We then solve for the incentive-compatible critical value (ICCV). When the ICCV is used to determine rejection, readers can be confident that size is controlled at the desired significance level and that the researcher's response to the incentives delineated by the critical value is accounted for. Since they allow researchers to search for significance among multiple studies, ICCVs are larger than classical critical values. Yet, for a broad range of researcher behaviors and beliefs, ICCVs lie in a fairly narrow range.
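The size-inflation mechanism described in the abstract can be seen in a minimal stand-alone simulation (my own illustration, not taken from the paper; the five-study cap and the one-sided 5% critical value are assumptions made purely for the example). Under the null, each study's z-statistic is standard normal, so a researcher who keeps running studies until one clears the conventional critical value rejects far more often than 5% of the time:

```python
import random

random.seed(0)
CRIT = 1.645        # conventional one-sided 5% critical value
MAX_STUDIES = 5     # assumed cap on how many studies the researcher will run
TRIALS = 100_000

rejections = 0
for _ in range(TRIALS):
    # Under the null, each study's z-statistic is N(0, 1).
    # The researcher stops and "publishes" at the first rejection.
    for _ in range(MAX_STUDIES):
        if random.gauss(0.0, 1.0) > CRIT:
            rejections += 1
            break

size = rejections / TRIALS
# With up to 5 independent tries, the true size is 1 - 0.95**5, about 0.23,
# not the nominal 0.05 -- the inflation the ICCV is designed to undo.
print(f"empirical size: {size:.3f}")
```

Raising the critical value until the rejection frequency in this search process falls back to 5% is, in spirit, what an incentive-compatible critical value does.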
Inference on Winners (with Isaiah Andrews and Toru Kitagawa)
2019 Version (referenced in "Inference After Estimation of Breaks")
Many empirical questions concern target parameters selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner's curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner's curse. If one requires validity only on average over targets that might have been selected, we develop hybrid procedures that combine conditional and projection confidence intervals to offer further performance gains relative to existing alternatives.
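The winner's curse for conventional confidence intervals can be illustrated with a short stand-alone simulation (my own sketch, not the paper's procedure; the ten candidate policies, zero true effects, and unit standard errors are assumptions made for the example). Selecting the best-looking estimate and then reporting the usual two-sided 95% interval for it yields coverage well below 95%:

```python
import random

random.seed(1)
J = 10            # assumed number of candidate policies, all with true effect 0
TRIALS = 50_000

covered = 0
for _ in range(TRIALS):
    # Each policy's estimate is its true effect (0) plus N(0, 1) noise.
    estimates = [random.gauss(0.0, 1.0) for _ in range(J)]
    winner = max(estimates)          # select the best-performing policy
    # Conventional 95% CI for the selected policy: winner +/- 1.96.
    if winner - 1.96 <= 0.0 <= winner + 1.96:
        covered += 1

coverage = covered / TRIALS
# Coverage is roughly Phi(1.96)**10, about 0.78, well short of the
# nominal 0.95 -- because the max of ten noisy estimates is biased upward.
print(f"coverage of conventional 95% CI for the winner: {coverage:.3f}")
```

The conditional and hybrid intervals developed in the paper are designed to restore correct coverage in exactly this kind of post-selection setting.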
On the Computation of Size-Correct Power-Directed Tests with Null Hypotheses Characterized by Inequalities, revise and resubmit at Journal of Econometrics
This paper presents theoretical results and a computational algorithm that allow a practitioner to conduct hypothesis tests in nonstandard contexts under which the null hypothesis is characterized by a finite number of inequalities on a vector of parameters. The algorithm allows one to obtain a test with uniformly correct asymptotic size, while directing power towards alternatives of interest, by maximizing a user-chosen local weighted average power criterion. Existing feasible methods for size control in this context do not allow the user to direct the power of the test toward alternatives of interest while controlling size. This is because presently available theoretical results require the user to search for a maximal empirical quantile over a potentially high-dimensional Euclidean space via repeated Monte Carlo simulation. The theoretical results I establish here reduce the space required for this search to a finite number of points for a large class of test statistics and data-dependent critical values, making power direction computationally feasible. The results apply to a wide variety of testing contexts including tests on parameters in partially-identified moment inequality models and tests for the superior predictive ability of a benchmark forecasting model. I briefly analyze the asymptotic power properties of the new testing algorithm relative to existing feasible tests in a Monte Carlo study.
Heavy Tail Robust Frequency Domain Estimation (with Jonathan B. Hill)
We develop heavy tail robust frequency domain estimators for covariance stationary time series with a parametric spectrum, including ARMA, GARCH and stochastic volatility. We use robust techniques to reduce the moment requirement to a finite variance. In particular, we negligibly trim the data, permitting both identification of the parameter for the candidate model and asymptotically normal frequency domain estimators, while leading to a classic limit theory when the data have a finite fourth moment. The transform itself can lead to asymptotic bias in the limit distribution of our estimators when the fourth moment does not exist, hence we correct the bias using extreme value theory that applies whether or not tails decay according to a power law. In the case of symmetrically distributed data, we compute the mean-squared-error of our biased estimator and characterize the mean-squared-error-minimizing number of sample extremes. A simulation experiment shows our QML estimator works well and in general has lower bias than the standard estimator, even when the process is Gaussian, suggesting robust methods have merit even for thin-tailed processes.
Many economic and financial time series are thought to exhibit long-memory behavior while nevertheless remaining covariance stationary. Changes in persistence have been widely documented, though little formal analysis has been undertaken in the case of otherwise covariance stationary series. Minimal work has been done on detecting change in the memory parameter d (or the Hurst parameter H = d + 1/2) of such series, even though the potential presence of such change has important implications for inference, forecasting and model building. I propose here a semiparametric test for change in d, which I dub the Range-Ratio Test (RRT). It detects changes in d when d remains in the region of stationarity [0, 1/2), rather than testing against I(0) or I(1) alternatives. This new test's main advantage over the few existing tests for similar change in this persistence parameter is that it does not require specification of parameters affecting the spectral density at frequencies distant from zero. Asymptotic results show the RRT to be consistent with a simple null limiting distribution that is well approximated by a nuisance-parameter-free distribution for a wide range of null and alternative hypotheses. Monte Carlo simulations show that it performs well in moderately sized samples, though care should be taken when interpreting the test statistic for initial estimates of d near the null hypothesis boundary of stationarity. The simulations also shed light on the trimming parameter that should be used for each sample size/d estimate pair. Finally, a short empirical application of the RRT provides evidence that S&P 500 stock market volatility is not adequately characterized as a stationary long (or short) memory process.