Dear Olaf,
Thanks for your reply. I didn’t want to post an incomplete answer, so I’ve thought a lot about this.
You perfectly understood my Bayesian inversion and the compensation between E and \lambda. I therefore built a fictitious analytical example with the model f(x,y) = x - y. Here, an observation f_i leads to a huge correlation between X and Y.
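To illustrate, here is a minimal sketch of that example in Python (assuming standard normal priors on X and Y and a Gaussian observation noise of standard deviation sigma; the numbers are illustrative, not taken from our real study):

```python
import numpy as np

# Independent standard normal priors on X and Y, and a noisy
# observation S = X - Y + eps, with eps ~ N(0, sigma^2).
sigma = 0.1  # illustrative observation-noise standard deviation

# Joint covariance of (X, Y, S): X and Y are independent a priori,
# Cov(X, S) = 1, Cov(Y, S) = -1, Var(S) = 2 + sigma^2.
cov = np.array([
    [1.0,  0.0,  1.0],
    [0.0,  1.0, -1.0],
    [1.0, -1.0,  2.0 + sigma**2],
])

# Gaussian conditioning: Sigma_post = Sigma_XY - Sigma_XS Var(S)^{-1} Sigma_SX
sigma_xy = cov[:2, :2]
sigma_xs = cov[:2, 2:]
post_cov = sigma_xy - sigma_xs @ sigma_xs.T / cov[2, 2]

post_corr = post_cov[0, 1] / np.sqrt(post_cov[0, 0] * post_cov[1, 1])
print(f"correlation(X, Y | S) = {post_corr:.3f}")
# prints about 0.990: the observation alone creates a strong
# positive posterior correlation between X and Y
```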
That being said, our main problem is the posterior correlation between a priori independent inputs. In our former example, the correlation between E and \lambda should not exist a priori, because it has no physical meaning. It may seem strange that a correlation appears because of an observation.
I think there is some confusion here: after the observation, we focus on the distribution given the observation, which is a different pair of variables from the prior one. In other words, we no longer consider (E,\lambda), but (E,\lambda \vert S_{observation}), which has different properties.
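Under the same Gaussian assumptions as in the sketch above, this can be made explicit: with X, Y \sim N(0,1) independent and S = X - Y + \epsilon, \epsilon \sim N(0,\sigma^2), Gaussian conditioning gives Cov(X, Y \vert S) = 1/(2+\sigma^2) and Var(X \vert S) = Var(Y \vert S) = 1 - 1/(2+\sigma^2), hence a posterior correlation of 1/(1+\sigma^2), which tends to 1 as the noise vanishes, even though the prior correlation is exactly zero.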
To conclude, I think the correlation between the posterior inputs is relevant, even when the prior inputs are assumed to be independent.
Finally, we do not use an unknown discrepancy term for now. My feeling is that it does not help in understanding the inversion, and can sometimes hide issues. Do you recommend using an unknown discrepancy term, especially when you can’t put numbers on its value (see my former post Bayesian inversion : discrepancy)?
Thanks again,
Marc