Hi Paul
Thanks a lot for your reply.
As for your first reply, I can understand why this functionality hasn’t been implemented in UQLab. Thanks for the answer.
As for your replies to the second question: firstly, thanks a lot for pointing out the mistake in line 31. It explains why I was getting reasonable results when updating sequentially, but not when trying to update with a single custom log-likelihood function.
Secondly, I thought it wouldn’t make too much of a difference as I wasn’t trying to maximize the likelihood here. But I can see your point and I’ll make the change in the log-likelihood function too.
Thirdly, that lonely y(j) was there by mistake; I had added it while doing some debugging.
Fourthly, I’ll definitely vectorize the LL code. Thanks a lot for the suggestion.
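For anyone else reading along, here is a rough sketch of what I mean by vectorizing: evaluating the log-likelihood for all MCMC candidate points at once via broadcasting instead of looping over them. This is illustrative Python with a simple Gaussian model and made-up argument names, not the actual UQLab custom LL interface.

```python
import numpy as np

def log_likelihood(theta, y, sigma):
    """Vectorized Gaussian log-likelihood (illustrative sketch).

    theta: (n_samples, 1) array of candidate means (one row per MCMC point)
    y:     (n_data,) array of i.i.d. observations
    sigma: known observation standard deviation (assumed here)
    Returns an (n_samples,) array, one log-likelihood value per candidate.
    """
    # Broadcasting: (1, n_data) - (n_samples, 1) -> (n_samples, n_data)
    resid = y[None, :] - theta
    return (-0.5 * len(y) * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(resid**2, axis=1) / sigma**2)

# Usage: three candidate points evaluated in one call, no loop
theta = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.1, -0.2, 0.3])
print(log_likelihood(theta, y, 1.0))  # shape (3,)
```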
Some further points and queries:
Just by changing line 31 to use sum, I've started getting reasonable results: no more -Inf in the LL outputs.
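That behaviour makes sense numerically. With many i.i.d. observations, multiplying the individual likelihood values underflows to zero, so taking the log of the product gives -Inf, whereas summing the per-point log-densities stays finite. A minimal Python illustration (not the original MATLAB/UQLab code; data and sigma are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 1000)  # hypothetical i.i.d. data
sigma = 1.0                      # assumed known

# Per-observation Gaussian log-densities
logpdf = -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y / sigma) ** 2

with np.errstate(divide="ignore"):
    # Product of ~1000 values each < 0.4 underflows to 0.0, log(0) = -inf
    unstable = np.log(np.prod(np.exp(logpdf)))

# Sum of logs: same quantity mathematically, but numerically stable
stable = np.sum(logpdf)

print(unstable, stable)  # -inf and a finite value
```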
I have a query that relates more to Bayesian updating than to UQLab. As I understand it, under the i.i.d. assumption the posterior obtained from a single LL function over all the data should match the one obtained by updating sequentially with two separate LL functions (using the first posterior as the prior for the second update). With the same priorscale and number of steps, the single LL function gives me a posterior of \ln R_i''\sim\mathcal{N}(0.933, 0.0187), whereas sequential updating gives \ln R_i''\sim\mathcal{N}(0.918, 0.0291). The prior was \ln R_i'\sim\mathcal{N}(0.911, 0.099). Can I attribute this difference to the numerical approximations in the sampler? The difference in the standard deviations seems somewhat large. In both cases the acceptance rate is between 0.4 and 0.5.
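For what it's worth, the mathematical equivalence of batch and sequential updating can be checked exactly in a conjugate normal-normal toy model (known data variance), where the posterior is available in closed form. This is a sketch with made-up numbers, not my actual problem; with MCMC the two routes agree only up to sampling error.

```python
import numpy as np

def update(m, s2, y, sigma2):
    """Conjugate update of a N(m, s2) prior on the mean,
    given i.i.d. data y ~ N(mu, sigma2) with sigma2 known."""
    n = len(y)
    post_s2 = 1.0 / (1.0 / s2 + n / sigma2)
    post_m = post_s2 * (m / s2 + np.sum(y) / sigma2)
    return post_m, post_s2

rng = np.random.default_rng(0)
sigma2 = 0.05
y1 = rng.normal(0.93, np.sqrt(sigma2), 20)  # hypothetical first dataset
y2 = rng.normal(0.93, np.sqrt(sigma2), 20)  # hypothetical second dataset

m0, s0 = 0.911, 0.099  # prior from the post, values otherwise illustrative

# Route 1: single update with all the data at once
m_all, s2_all = update(m0, s0**2, np.concatenate([y1, y2]), sigma2)

# Route 2: sequential updates, first posterior becomes the second prior
m_seq, s2_seq = update(*update(m0, s0**2, y1, sigma2), y2, sigma2)

print(np.isclose(m_all, m_seq), np.isclose(s2_all, s2_seq))
```

So any discrepancy in the MCMC results should indeed come from sampling error, not from the updating scheme itself.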