Nonlinear Nonadditive Regression

A regression of the form $r(y_i, \mathbf{x}_i, \boldsymbol\beta) = u_i$, where $r$ is a known continuous nonlinear function and $u_i$ is the error term, is a nonlinear nonadditive regression whenever the model cannot be rewritten in the additive form $y_i - g(\mathbf{x}_i, \boldsymbol\beta) = u_i$. While working on the estimation of index numbers we encountered such regressions, but we could not find everything we wanted about their inference in one place, so we decided to write an appendix on the estimation of these models. I have put an updated version of it as a separate note for the interested reader here.

There are several points worth mentioning:

1. Nonlinear least squares does not provide a consistent estimator for these models; inference is instead based on nonlinear GMM theory (a minimal simulated sketch follows this list).
2. There are other closely related nonlinear nonadditive models, for example the transformation model $T(y_i, \boldsymbol\alpha) = \mathbf{x}_i \boldsymbol\beta + u_i$ and the nonadditive model $y_i = g(\mathbf{x}_i, \boldsymbol\beta, u_i)$. The former is a special case of our model and has been discussed in Horowitz (2009). For the latter, if one can solve for $u_i$ in terms of the other elements then we are back to our model, but it does not seem as straightforward to estimate $y_i = g(\mathbf{x}_i, \boldsymbol\beta, u_i)$ directly. A nonparametric version of the latter model, where $g$ is unknown, has been discussed in Matzkin (2003), but I am not aware of any reference focusing on the parametric version.
3. It seems that, at least for our version, the model is identified under general conditions.
4. It also seems that a similar theory applies when $r$ and $y_i$ are vectors.
5. In index number analysis we often have exogenous weights; we show in the paper how to incorporate them.
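To make the GMM point concrete, here is a minimal simulated sketch of two-step GMM for a model of this form. Everything in it is made up for illustration: the residual function $r$, the instruments, and the data-generating process are hypothetical and are not taken from our paper.

```python
# Two-step GMM for a nonlinear nonadditive model r(y_i, x_i, beta) = u_i,
# using the moment conditions E[z_i * u_i] = 0 with instruments z_i = (1, x_i).
import numpy as np
from scipy.optimize import minimize

# Simulate data from a hypothetical residual function
# r(y, x, beta) = y + beta[0]*y**2 - beta[1]*x, with true beta = (0.5, 2.0).
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 3.0, n)
u = rng.normal(0.0, 0.1, n)
rhs = 2.0 * x + u
y = (-1.0 + np.sqrt(1.0 + 4.0 * 0.5 * rhs)) / (2.0 * 0.5)  # solve r(...) = u for y

def r(y, x, beta):
    return y + beta[0] * y**2 - beta[1] * x

Z = np.column_stack([np.ones(n), x])                # instruments

def gmm_objective(beta, W):
    g = Z * r(y, x, beta)[:, None]                  # moment contributions, n x 2
    gbar = g.mean(axis=0)                           # sample moments
    return gbar @ W @ gbar

# Step 1: identity weight matrix; step 2: efficient weight from step-1 residuals.
step1 = minimize(gmm_objective, x0=np.array([0.1, 1.0]), args=(np.eye(2),))
g1 = Z * r(y, x, step1.x)[:, None]
W2 = np.linalg.inv(g1.T @ g1 / n)
step2 = minimize(gmm_objective, x0=step1.x, args=(W2,))
print("two-step GMM estimate:", step2.x)            # should be near (0.5, 2.0)
```

Since this toy example is exactly identified (two moments, two parameters), the weight matrix hardly matters; the second step is shown only to illustrate the usual two-step recipe.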

4 responses to this post.

  1. Posted by Sriram on September 7, 2015 at 4:26 am

    Yes, GMM would give consistent estimates. What about using a Bayesian approach for estimating nonlinear nonadditive regressions? If there is no issue with the Bayesian approach, it may be easier, provided you know how to do Gibbs sampling (an MH algorithm may be needed). There is no need to compute the Hessian matrix and other quantities that require calculus. I would appreciate your comments in this regard.

  2. Sriram,
    A Bayesian approach is fine in this context and straightforward to implement. If you look at the notes, you can see that we show how to derive the likelihood. Once you have the likelihood, you can multiply it by some priors to obtain the posterior, and the rest is a matter of drawing from that posterior (a rough sketch of this route appears after the comments). There is a catch, though: you have to assume a specific distribution for “u”, whereas with GMM you don’t need that. A second problem with the Bayesian approach, specific to index numbers, is that we have exogenous weights and it is not clear how to incorporate such weights in a Bayesian framework. This last bit is of interest to me and I have thought about it, but I don’t know how to do it.

  3. Posted by Sriram on September 7, 2015 at 6:52 am

    If the weights are exogenous, why can’t one do Bayesian inference with f^{w_i} as the density instead of f?

    • Posted by Reza on September 7, 2015 at 7:24 am

      I think the problem with f^{w_i} is that it is not a proper density, so you cannot apply Bayes’ theorem. We have done this trick with our maximum likelihood, but we treat it as an M-estimator and use the sandwich variance formula (a small numerical illustration appears after the comments).

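Following up on the Bayesian route described in the second comment, here is a minimal random-walk Metropolis sketch. It reuses the same hypothetical residual function and simulated data as the GMM sketch in the post, and it makes exactly the distributional assumption the comment warns about (Gaussian $u$), plus flat priors on $(\boldsymbol\beta, \log\sigma)$ for simplicity; none of this is from the paper.

```python
# A random-walk Metropolis sampler for r(y, x, beta) = u with u ~ N(0, sigma^2).
# By change of variables, log f(y|x) = log phi(r/sigma) - log(sigma) + log|dr/dy|.
# The residual function mirrors the hypothetical GMM example:
# r(y, x, beta) = y + beta[0]*y**2 - beta[1]*x, so dr/dy = 1 + 2*beta[0]*y.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1.0, 3.0, n)
u = rng.normal(0.0, 0.1, n)
rhs = 2.0 * x + u                                    # true beta = (0.5, 2.0)
y = (-1.0 + np.sqrt(1.0 + 4.0 * 0.5 * rhs)) / (2.0 * 0.5)

def log_post(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    resid = y + b0 * y**2 - b1 * x                   # r(y, x, beta)
    jac = 1.0 + 2.0 * b0 * y                         # dr/dy
    if np.any(jac <= 0):                             # r must be monotone in y
        return -np.inf
    # flat priors on (beta, log sigma): posterior proportional to likelihood
    return np.sum(-0.5 * (resid / s) ** 2 - log_s + np.log(jac))

theta = np.array([0.1, 1.0, np.log(0.5)])
lp, draws = log_post(theta), []
for _ in range(20000):
    prop = theta + 0.02 * rng.normal(size=3)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept/reject
        theta, lp = prop, lp_prop
    draws.append(theta)
draws = np.array(draws)[5000:]                       # discard burn-in
print("posterior means:", draws.mean(axis=0))        # near (0.5, 2.0, log 0.1)
```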
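And here is a small numerical illustration of the M-estimator idea in the last reply: maximize a weighted pseudo-likelihood $\sum_i w_i \log f(y_i;\theta)$ and correct the standard errors with the sandwich formula $A^{-1} B A^{-1}$. The model is a made-up Gaussian location-scale example with hypothetical exogenous weights, not the index-number setting of the paper.

```python
# Weighted pseudo-likelihood as an M-estimator with sandwich variance.
# Made-up model: y_i ~ N(mu, sigma^2), weights w_i, theta = (mu, log sigma).
import numpy as np

rng = np.random.default_rng(2)
n = 1000
y = rng.normal(1.5, 0.8, n)
w = rng.uniform(0.5, 1.5, n)          # hypothetical exogenous weights

# The weighted MLE has a closed form in this example.
mu_hat = np.sum(w * y) / np.sum(w)
sig2_hat = np.sum(w * (y - mu_hat) ** 2) / np.sum(w)
sig_hat = np.sqrt(sig2_hat)

# Per-observation scores s_i of w_i * log f(y_i; theta), and the matrices
# A = -(1/n) sum_i Hessian_i and B = (1/n) sum_i s_i s_i'.
e = (y - mu_hat) / sig_hat
s = np.column_stack([w * e / sig_hat, w * (e**2 - 1.0)])
A = np.array([[np.mean(w) / sig2_hat, 2 * np.mean(w * e) / sig_hat],
              [2 * np.mean(w * e) / sig_hat, 2 * np.mean(w * e**2)]])
B = s.T @ s / n
V = np.linalg.inv(A) @ B @ np.linalg.inv(A) / n   # sandwich covariance
print("estimates:", mu_hat, sig_hat)
print("sandwich std errors:", np.sqrt(np.diag(V)))
```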