Working papers

Our working paper series publishes academic papers by IFS staff and associates.

Working papers: all content

Nonparametric analysis of random utility models

Working Paper

This paper develops and implements a nonparametric test of Random Utility Models. The motivating application is to test the null hypothesis that a sample of cross-sectional demand distributions was generated by a population of rational consumers.

14 June 2016

Estimation of a Multiplicative Covariance Structure

Working Paper

We consider a Kronecker product structure for large covariance matrices, which has the feature that the number of free parameters increases logarithmically with the dimensions of the matrix. We propose an estimation method of the free parameters based on the log-linear property of this structure, and also a Quasi-Likelihood method. We establish the rate of convergence of the estimated parameters when the size of the matrix diverges. We also establish a CLT for our method. We apply the method to portfolio choice for S&P 500 daily returns and compare with sample-covariance-based methods and with the recent Fan et al. (2013) method.
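
The parameter economy of a Kronecker covariance structure can be illustrated numerically. The following sketch (hypothetical dimensions and factors, not the paper's estimator) builds a 16×16 covariance matrix from four 2×2 factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each symmetric positive-definite 2x2 factor has 3 free parameters, so the
# full 16x16 matrix (136 free parameters if unrestricted) is described here
# by only 4 * 3 = 12 -- growth is logarithmic in the matrix dimension.
def random_spd_2x2(rng):
    a = rng.standard_normal((2, 2))
    return a @ a.T + 2 * np.eye(2)  # symmetric positive definite

factors = [random_spd_2x2(rng) for _ in range(4)]

sigma = factors[0]
for f in factors[1:]:
    sigma = np.kron(sigma, f)

print(sigma.shape)                            # (16, 16)
print(np.allclose(sigma, sigma.T))            # symmetric
print(np.all(np.linalg.eigvalsh(sigma) > 0))  # positive definite
```

Kronecker products of symmetric positive-definite matrices are themselves symmetric positive definite, so the construction always yields a valid covariance matrix.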

17 May 2016

The value of private schools: evidence from Pakistan

Working Paper

Using unique data from Pakistan, we estimate a model of demand for differentiated products in 112 rural education markets with significant choice among public and private schools. Our model accounts for the endogeneity of school fees and the characteristics of students attending the school.

13 May 2016

Inference under Covariate-Adaptive Randomization

Working Paper

This paper studies inference for the average treatment effect in randomized controlled trials with covariate-adaptive randomization. Here, by covariate-adaptive randomization, we mean randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. Such schemes include, for example, Efron's biased-coin design and stratified block randomization. When testing the null hypothesis that the average treatment effect equals a pre-specified value in such settings, we first show that the usual two-sample t-test is conservative in the sense that it has limiting rejection probability under the null hypothesis no greater than and typically strictly less than the nominal level. In a simulation study, we find that the rejection probability may in fact be dramatically less than the nominal level. We show further that these same conclusions remain true for a naïve permutation test, but that a modified version of the permutation test yields a test that is non-conservative in the sense that its limiting rejection probability under the null hypothesis equals the nominal level for a wide variety of randomization schemes. The modified version of the permutation test has the additional advantage that it has rejection probability exactly equal to the nominal level for some distributions satisfying the null hypothesis and some randomization schemes. Finally, we show that the usual t-test (on the coefficient on treatment assignment) in a linear regression of outcomes on treatment assignment and indicators for each of the strata yields a non-conservative test as well under even weaker assumptions on the randomization scheme. In a simulation study, we find that the non-conservative tests have substantially greater power than the usual two-sample t-test.
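
The strata-adjusted regression described in the abstract can be sketched as follows (simulated data and hypothetical parameter values, not the paper's study): regress the outcome on treatment assignment plus one indicator per stratum, and read off the t-statistic on treatment.

```python
import numpy as np

rng = np.random.default_rng(1)

n_strata, per_stratum = 4, 50
strata = np.repeat(np.arange(n_strata), per_stratum)

# Stratified block randomization: exactly half treated within each stratum.
d = np.concatenate([rng.permutation([0, 1] * (per_stratum // 2))
                    for _ in range(n_strata)])

tau = 1.0  # true average treatment effect
y = tau * d + 0.5 * strata + rng.standard_normal(strata.size)

# Design matrix: treatment plus one dummy per stratum (no separate intercept).
x = np.column_stack([d] + [(strata == s).astype(float) for s in range(n_strata)])
beta, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ beta
dof = y.size - x.shape[1]
var_beta = (resid @ resid / dof) * np.linalg.inv(x.T @ x)
t_stat = beta[0] / np.sqrt(var_beta[0, 0])
print(round(beta[0], 2), round(t_stat, 1))  # treatment estimate and its t-statistic
```

Including the stratum indicators absorbs the between-stratum outcome variation, which is what makes the resulting t-test non-conservative under the conditions the paper describes.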

10 May 2016

Bounds on Treatment Effects on Transitions

Working Paper

This paper considers identification of treatment effects on conditional transition probabilities. We show that even under random assignment only the instantaneous average treatment effect is point identified. Because treated and control units drop out at different rates, randomization only ensures the comparability of treatment and controls at the time of randomization, so that long-run average treatment effects are not point identified. Instead we derive informative bounds on these average treatment effects. Our bounds do not impose (semi)parametric restrictions, such as proportional hazards. We also explore various assumptions, such as monotone treatment response, common shocks and positively correlated outcomes, that tighten the bounds.

22 April 2016

Taxing high-income earners: tax avoidance and mobility

Working Paper

The taxation of high-income earners is of importance to every country and is the subject of a considerable amount of recent academic research. Such high-income earners contribute substantial amounts of tax and generate significant positive spillovers, but are also highly mobile: a 1% increase in the top marginal income tax rate increases out-migrations by around 1.5 to 3%. We review research into taxation of high-income earners to provide a synthesis of existing theoretical and empirical understanding. We offer various avenues for potential future theoretical and empirical research.

22 April 2016

Homophily and transitivity in dynamic network formation

Working Paper

In social and economic networks linked agents often share additional links in common. There are two competing explanations for this phenomenon. First, agents may have a structural taste for transitive links – the returns to linking may be higher if two agents share links in common. Second, agents may assortatively match on unobserved attributes, a process called homophily. I study parameter identifiability in a simple model of dynamic network formation with both effects. Agents form, maintain, and sever links over time in order to maximize utility.

15 April 2016

Optimal data collection for randomized control trials

Working Paper

In a randomized control trial, the precision of an average treatment effect estimator can be improved either by collecting data on additional individuals, or by collecting additional covariates that predict the outcome variable. We propose the use of pre-experimental data such as a census, or a household survey, to inform the choice of both the sample size and the covariates to be collected.
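
The trade-off described in the abstract can be sketched numerically (simulated "census" data and hypothetical parameter values, not the paper's procedure): pre-experimental data reveal how predictive a covariate is, and hence how much residual outcome variance remains after adjusting for it, so collecting that covariate can substitute for extra sample size.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pre-experimental "census": a single covariate that partly
# predicts the outcome (population R^2 of roughly 0.64 / 1.64 here).
census_x = rng.standard_normal(100_000)
census_y = 0.8 * census_x + rng.standard_normal(100_000)

var_y = census_y.var()
fitted = np.polyval(np.polyfit(census_x, census_y, 1), census_x)
var_resid = (census_y - fitted).var()

# The variance of a difference-in-means ATE estimator is proportional to the
# (residual) outcome variance over n, so matching precision requires:
n_without_covariate = 1000
n_with_covariate = int(np.ceil(n_without_covariate * var_resid / var_y))
print(n_without_covariate, n_with_covariate)  # covariate collection shrinks the required n
```

The ratio `var_resid / var_y` is one minus the covariate's predictive R², which is exactly the information a census or household survey can supply before the experiment is fielded.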

1 April 2016

Estimating Matching Games with Transfers

Working Paper

I explore the estimation of transferable utility matching games, encompassing many-to-many matching, marriage and matching with trading networks (trades). I introduce a matching maximum score estimator that does not suffer from a computational curse of dimensionality in the number of agents in a matching market. I apply the estimator to data on the car parts supplied by automotive suppliers to estimate the returns from different portfolios of parts to suppliers and automotive assemblers.

24 March 2016

Education policy and intergenerational transfers in equilibrium

Working Paper

This paper examines the equilibrium effects of alternative financial aid policies intended to promote college participation. We build an overlapping generations life-cycle, heterogeneous-agent, incomplete-markets model with education, labor supply, and consumption/saving decisions.

21 March 2016

Program evaluation and causal inference with high-dimensional data

Working Paper

In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets.

The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment condition framework, which arises from structural equation models in econometrics. Here too the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, modern machine learning methods can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
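
The orthogonal (doubly robust) moment highlighted in the abstract can be sketched in a simple randomized setting (simulated data, OLS nuisance fits, and a known propensity score of 0.5 — all hypothetical choices, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated randomized trial with one covariate and a constant effect.
n = 5000
x = rng.standard_normal(n)
d = rng.integers(0, 2, n)
tau = 2.0
y = tau * d + x + rng.standard_normal(n)

def ols_fit(xs, ys):
    coef = np.polyfit(xs, ys, 1)
    return lambda z: np.polyval(coef, z)

m1 = ols_fit(x[d == 1], y[d == 1])  # outcome regression under treatment
m0 = ols_fit(x[d == 0], y[d == 0])  # outcome regression under control
e = 0.5  # propensity score, known here by randomization

# AIPW / doubly robust score: first-order insensitive to small errors
# in either nuisance estimate (m1, m0 or e).
psi = (m1(x) - m0(x)
       + d * (y - m1(x)) / e
       - (1 - d) * (y - m0(x)) / (1 - e))
ate_hat = psi.mean()
print(round(ate_hat, 2))  # close to the true effect of 2.0
```

This first-order insensitivity to nuisance-estimation error is what lets regularized or machine-learned fits be plugged in without invalidating inference, which is the mechanism the abstract describes.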

19 March 2016

Simple Nonparametric Estimators for the Bid-Ask Spread in the Roll Model

Working Paper

We propose new methods for estimating the bid-ask spread from observed transaction prices alone. Our methods are based on the empirical characteristic function rather than the sample autocovariance function used by Roll (1984). As in Roll (1984), we have a closed-form expression for the spread, but it exploits only a subset of the model-implied identification restrictions. We also provide methods that take account of more identification information. We compare our methods theoretically and numerically with the Roll method as well as with its best known competitor, the Hasbrouck (2004) method, which uses a Bayesian Gibbs methodology under a Gaussian assumption. Our estimators are competitive with Roll's and Hasbrouck's when the latent true fundamental return distribution is Gaussian, and perform much better when this distribution is far from Gaussian. Our methods are applied to the E-mini futures contract on the S&P 500 during the Flash Crash of May 6, 2010. Extensions to models allowing for unbalanced order flow or Hidden Markov trade direction indicators or trade direction indicators having general asymmetric support or adverse selection are also presented, without requiring additional data.
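
The classic Roll (1984) autocovariance estimator that the paper takes as its baseline can be sketched on simulated data (hypothetical parameter values): observed prices bounce between bid and ask, p_t = m_t + (s/2)q_t with q_t = ±1, which makes the first-order autocovariance of price changes equal to -s²/4, so s = 2√(-autocov).

```python
import numpy as np

rng = np.random.default_rng(4)

true_spread = 0.10
n = 200_000
m = np.cumsum(0.01 * rng.standard_normal(n))  # latent efficient (fundamental) price
q = rng.choice([-1.0, 1.0], size=n)           # i.i.d. trade direction (buy/sell)
p = m + 0.5 * true_spread * q                 # observed transaction price

dp = np.diff(p)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]       # negative under bid-ask bounce
spread_hat = 2.0 * np.sqrt(max(-autocov, 0.0))
print(round(spread_hat, 3))  # recovers roughly the true spread of 0.10
```

The random-walk fundamental adds noise but no bias to the autocovariance, which is why the estimator works from transaction prices alone; the characteristic-function methods in the paper exploit more of the model's structure than this single moment.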

18 March 2016