# Sampling a variable based on a regression function

Dear all,

I have two questions regarding sampling a variable based on a regression function.

I am using a power function as the regression model, y = A x_1^B x_2^C. When I use MATLAB's nlinfit function, I receive the parameters A, B and C, the residual vector R, the Jacobian J, the covariance matrix CovB, the mean squared error MSE, and some information on the error model.
I understand the residual R(i) as an additive term, i.e. y(i) = A x_1(i)^B x_2(i)^C + R(i), where R(i) is the residual for each data triplet x_1(i), x_2(i), y(i).
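To make the additive interpretation concrete, here is a minimal sketch in Python (the original post uses MATLAB's nlinfit; scipy's curve_fit plays an analogous role here). All data values and the noise level are made-up illustrations, not taken from the post.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic data from a known power law y = A * x1^B * x2^C + noise
# (A, B, C and the noise level are illustrative assumptions)
A_true, B_true, C_true = 2.0, 0.5, 1.5
x1 = rng.uniform(1.0, 10.0, 200)
x2 = rng.uniform(1.0, 10.0, 200)
y = A_true * x1**B_true * x2**C_true + rng.normal(0.0, 0.5, 200)

def power_model(X, A, B, C):
    x1, x2 = X
    return A * x1**B * x2**C

# popt corresponds to nlinfit's coefficient estimates, pcov to CovB
popt, pcov = curve_fit(power_model, (x1, x2), y, p0=[1.0, 1.0, 1.0])

# Residuals in the original (non-logarithmic) scale:
# R(i) = y(i) - A*x1(i)^B*x2(i)^C, matching the additive interpretation
residuals = y - power_model((x1, x2), *popt)
print(popt)
print(residuals.mean())
```

Note that these residuals live on the original scale of y; if the fit had been done on the log-transformed equation instead, the residuals would be additive in log space.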

1. Question: Do I understand R correctly, or is it a term in the logarithmic equation y(i) = exp(A^* + B * ln(x_1(i)) + C * ln(x_2(i)) + R(i)), where A^* = ln(A)?

I can fit a normal distribution with parameters \mu and \sigma to the residual vector R, where \mu \approx 0. When I use y in my probabilistic framework, I sample x_1, x_2 and the error term \epsilon.

2. Question: Is this correct, or do I need to sample A, B and C as well?
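The "fixed coefficients" scheme described above could be sketched as follows; A, B, C, \sigma and the input distributions of x_1 and x_2 are illustrative assumptions, not values from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Point estimates from the regression (example values)
A, B, C = 2.0, 0.5, 1.5
sigma_eps = 0.5                      # std of the normal fitted to R, mu ~ 0

n = 10_000
x1 = rng.uniform(1.0, 10.0, n)       # assumed input distribution for x1
x2 = rng.uniform(1.0, 10.0, n)       # assumed input distribution for x2
eps = rng.normal(0.0, sigma_eps, n)  # sampled residual scatter epsilon

# Monte Carlo sample of y with A, B, C held fixed at their estimates
y = A * x1**B * x2**C + eps
print(y.mean(), y.std())
```

Here only the inputs and the error term carry uncertainty; the parameter uncertainty in A, B and C is ignored.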

For A, B and C I could use the information from the covariance matrix CovB: its main diagonal contains the variances \sigma^2, so I could assign normal distributions with the fitted values of A, B and C as the means \mu and with \sigma = sqrt(\sigma^2). In addition, I can derive the correlation matrix for A, B and C from CovB. I could then sample A, B, C, \epsilon, x_1 and x_2 to determine y.
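Sampling A, B and C jointly from CovB can be done with a multivariate normal draw; the coefficient estimates and covariance matrix below are made-up illustrations of what nlinfit's beta and CovB could look like.

```python
import numpy as np

rng = np.random.default_rng(2)

beta = np.array([2.0, 0.5, 1.5])      # point estimates of A, B, C (example)
CovB = np.array([[0.040, 0.0020, 0.0010],
                 [0.002, 0.0100, 0.0005],
                 [0.001, 0.0005, 0.0100]])  # example covariance matrix

# Jointly drawing from N(beta, CovB) preserves the correlations between
# A, B and C; drawing three independent normals from the diagonal would not.
samples = rng.multivariate_normal(beta, CovB, size=10_000)
A_s, B_s, C_s = samples.T

# Empirical correlations should match those implied by CovB
corr = np.corrcoef(samples.T)
print(corr)
```

The key point is that using only the diagonal of CovB discards the parameter correlations, which is exactly what the correlation matrix mentioned above is meant to retain.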

In the literature, I have mostly found that sampling only x_1, x_2 and \epsilon is considered sufficient. However, there is one important paper in my field where A, B and C were also sampled, with the corresponding correlations.

Many thanks in advance and best regards,
Stephan