MCMCglmm gives very different results from lme4 - how to diagnose the issue?
I am taking the plunge into Bayesian analysis for some new projects. I have some yes/no data and three fixed effects, and for the time being I'm simply using random intercepts (I'll worry about random slopes later). I'm also sticking with the default prior settings, since I'll worry about priors once I've settled on which library I'm going to use.
If I fit a GLMM using lme4, I get:

library(lme4)
summary(glmer(gesture ~ vis * comm * task + (1 | subject), data = dat, family = "binomial"))
Generalized linear mixed model fit by maximum likelihood (Laplace
  Approximation) [glmerMod]
 Family: binomial  ( logit )
Formula: gesture ~ vis * comm * task + (1 | subject)
   Data: dat

     AIC      BIC   logLik deviance df.resid
   513.0    569.6   -243.5    487.0      562

Scaled residuals:
    Min      1Q  Median      3Q     Max
-3.9970 -0.4008 -0.0662  0.3445  3.9459

Random effects:
 Groups  Name        Variance Std.Dev.
 subject (Intercept) 7.527    2.744
Number of obs: 575, groups:  subject, 48

Fixed effects:
                            Estimate Std. Error z value Pr(>|z|)
(Intercept)                 -3.41133    0.83651  -4.078 4.54e-05 ***
visvisible                  -0.03572    0.73199  -0.049   0.9611
commtape                     0.87192    1.13579   0.768   0.4427
taskact                      5.75031    0.91292   6.299 3.00e-10 ***
taskani                      3.27341    0.76841   4.260 2.04e-05 ***
visvisible:commtape          0.02226    1.03044   0.022   0.9828
visvisible:taskact          -1.27673    1.00445  -1.271   0.2037
visvisible:taskani           0.42952    0.95662   0.449   0.6534
commtape:taskact            -1.91736    1.13234  -1.693   0.0904 .
commtape:taskani            -0.86535    1.02607  -0.843   0.3990
visvisible:commtape:taskact  1.13058    1.36033   0.831   0.4059
visvisible:commtape:taskani -0.08530    1.32874  -0.064   0.9488
So now I try using MCMCglmm:

library(MCMCglmm)
m <- MCMCglmm(
  gesture ~ vis * comm * task,
  random = ~subject,
  data = dat,
  family = "categorical",
  verbose = FALSE)
And get the following output:
 Iterations = 3001:12991
 Thinning interval = 10
 Sample size = 1000

 DIC: 24.22676

 G-structure:  ~subject

        post.mean l-95% CI u-95% CI eff.samp
subject     25172    487.6    53908    2.488

 R-structure:  ~units

      post.mean l-95% CI u-95% CI eff.samp
units      9486    295.7    17854    2.103

 Location effects: gesture ~ vis * comm * task

                            post.mean  l-95% CI  u-95% CI eff.samp  pMCMC
(Intercept)                 -173.2727 -334.1856  -30.9605    4.813 <0.001 ***
visvisible                     4.3220  -67.1333   82.1748   58.509  0.910
commtape                      47.8088  -64.3913  194.5895   46.842  0.380
taskact                      282.8265   49.6004  465.3049    2.952 <0.001 ***
taskani                      169.3790   23.3087  303.2590    6.197 <0.001 ***
visvisible:commtape           -7.5823 -130.7870   89.6066   54.957  0.878
visvisible:taskact           -52.1963 -158.5566   67.1533   44.944  0.252
visvisible:taskani            18.4001  -86.3001  136.4961   48.813  0.708
commtape:taskact             -94.0063 -234.2362    8.6195   15.732  0.062 .
commtape:taskani             -52.2950 -179.9766   37.5422   32.639  0.320
visvisible:commtape:taskact   48.8714  -97.5667  216.0025   30.265  0.434
visvisible:commtape:taskani    0.2211 -139.2106  157.8038   71.036  0.990
So the "good news" is that the pattern of statistical significance is the same under both methods. But I was under the impression that the advantage of moving to these new methods was to provide more robust estimations of effect sizes. And if we look at the estimates in both cases, there's a huge difference.
So, something is going wrong.
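One thing I have since read (e.g. in Jarrod Hadfield's MCMCglmm course notes) that may account for part of the scale difference: with family="categorical" the residual ("units") variance is not identified and gets absorbed into the latent scale, so the raw location effects are not directly comparable to glmer's logit-scale estimates. A minimal rescaling sketch, assuming m is the fitted model above:

# Shrink each posterior sample by the corresponding residual variance
# (logit-link approximation from Hadfield's course notes)
c2 <- ((16 * sqrt(3)) / (15 * pi))^2
sol.adj <- m$Sol / sqrt(1 + c2 * m$VCV[, "units"])
colMeans(sol.adj)  # posterior means, now on roughly the glmer logit scale

Even after rescaling, though, the tiny effective sample sizes above suggest the chains are mixing very poorly.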
Questions:

1. If I'm just looking at the MCMCglmm output, what should I be looking out for as a warning sign?
2. More generally, is there a good source of information about this package and how to interpret its output?
3. What should my next step in the analysis be? I tried increasing nitt, but that, if anything, makes the problem worse. I'm guessing this means I'm nowhere close to convergence and need to increase the burn-in parameter, and possibly thin as well? (The basic checks I know of are sketched below.)
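For reference, these are the MCMC health checks as I understand them; a minimal sketch using the coda package (MCMCglmm's $Sol and $VCV chains are coda mcmc objects), with m being the fitted model above:

library(coda)
plot(m$Sol)           # trace + density plots of the fixed effects; drifting or
                      # step-like traces indicate the chain is not mixing
plot(m$VCV)           # the same for the variance components
effectiveSize(m$VCV)  # single-digit values (cf. eff.samp above) are a red flag
autocorr.diag(m$Sol)  # autocorrelation that stays high at long lags suggests
                      # more thinning (and more iterations) are needed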
Thank you for your help.
Solution 1:[1]
For family="categorical" in MCMCglmm, the response variable must be a factor. You don't mention converting your response to a factor when you switched from the lme4 binomial model to the MCMCglmm categorical model.
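A minimal sketch of that fix, assuming gesture is currently stored as a 0/1 (or character) column and dat is as in the question:

dat$gesture <- factor(dat$gesture)  # "categorical" expects a factor response
m <- MCMCglmm(
  gesture ~ vis * comm * task,
  random = ~subject,
  data = dat,
  family = "categorical",
  verbose = FALSE)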
Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow

Solution   | Source
-----------|---------
Solution 1 | RichardB