Parallel processing cluster not used when running dredge on an MCMCglmm model with the parallel package
I am using MCMCglmm to run a PGLMM model. Since the aim is not to make predictions, I'm using dredge (from MuMIn) to calculate model-weighted parameter values and confidence intervals. Due to the large number of fixed effects, I thought it would be a good idea to implement a parallel processing workflow so that model selection doesn't take a day or two. I am on fully updated Windows 10 with RStudio 2021.09.2.
The model and model selection code for reference:
# upd.MCMCglmm is a custom wrapper around MCMCglmm, defined elsewhere
fullmod <- upd.MCMCglmm(occ ~ den*month + year + diet + dssi + eu_trend +
                          hssi*migration + mass + latitude,
                        random = ~ phylo + spp, family = "gaussian",
                        ginverse = list(phylo = inv.mat), prior = prior,
                        data = merged_full, nitt = nitt, burnin = burnin,
                        thin = thin, verbose = FALSE)
all_mods <- dredge(fullmod, trace = 2)
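For context, once the model selection table is built, the model-averaged estimates come from MuMIn's model.avg (a minimal sketch; the delta < 4 subset is an illustrative cutoff, not necessarily the one I use):
# Model-average coefficients across the top models
avg_mod <- model.avg(all_mods, subset = delta < 4)
summary(avg_mod)   # model-averaged parameter estimates
confint(avg_mod)   # corresponding confidence intervals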
This model works perfectly, and using dredge without a cluster also runs with no issues, so I know the problem lies solely in my implementation of parallel processing. According to the relevant documentation, pdredge is deprecated and dredge can now be used directly with a cluster argument; I tried pdredge regardless, based on the code in this question, and got the exact same result.
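For completeness, the pdredge variant I tried looks like this (same model, same cluster object cl as defined below):
all_mods <- MuMIn::pdredge(fullmod, trace = 2, cluster = cl)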
Clustering code:
library(parallel)
library(snow)

# Detect the number of cores and create the cluster
# (leave one core free to avoid overwhelming the PC)
nCores <- detectCores() - 1
cl <- makeCluster(nCores, type = "SOCK")

# Export all objects needed by the worker processes
clusterExport(cl, list("merged_full", "inv.mat", "prior", "nitt", "burnin",
                       "thin", "fullmod", "upd.MCMCglmm"))

# Load the required packages on each worker
clusterEvalQ(cl, library(MuMIn, logical.return = TRUE))
clusterEvalQ(cl, library(MCMCglmm, logical.return = TRUE))

fullmod <- upd.MCMCglmm(occ ~ den*month + year + diet + dssi + eu_trend +
                          hssi*migration + mass + latitude,
                        random = ~ phylo + spp, family = "gaussian",
                        ginverse = list(phylo = inv.mat), prior = prior,
                        data = merged_full, nitt = nitt, burnin = burnin,
                        thin = thin, verbose = FALSE)

all_mods <- MuMIn::dredge(fullmod, trace = 2, cluster = cl)
And the output:
> all_mods <- MuMIn::dredge(fullmod, trace=2, cluster=cl)
Not using cluster.
Fixed term is "(Intercept)"
And of course, the code runs fine, just without clustering. Checking Task Manager, I can see that the cluster is up and running; it's just that the other cores are never used.
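A basic check like the following (illustrative only, not part of the dredge workflow) can be used to confirm the workers respond and have the packages loaded:
clusterEvalQ(cl, Sys.getpid())                        # each worker returns its own PID
clusterEvalQ(cl, "MCMCglmm" %in% loadedNamespaces())  # packages loaded on the workers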
I did not provide a reproducible example as the model runs with no issues, but I can do so if required. Does anyone know what could be causing this?