Why do Anova() and drop1() give different answers for a GLMM?
I have a GLMM of the following form:

lmer(present? ~ factor1 + factor2 + continuous + factor1*continuous + (1 | factor3), family=binomial)

When I use drop1(model, test="Chi"), I get different results from Anova(model, type="III") from the car package, or from summary(model). The latter two give the same answers.

Using a lot of made-up data, I have found that there is usually no difference between the two methods. They give the same answers for balanced linear models, unbalanced linear models (with unequal n in different groups), and balanced generalized linear models, but not for balanced generalized linear mixed models. So it seems that the discrepancy only shows up when a random factor is included.

- Why is there a discrepancy between the two methods?
- When using a GLMM, should I use Anova() or drop1()?
- The difference between them is fairly small, at least for my data. Does it matter at all which one I use?
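For anyone wanting to reproduce the comparison, here is a minimal sketch with simulated data (all variable names are made up, not from the original question). It assumes a current version of lme4, where binomial GLMMs are fit with glmer() rather than lmer(..., family=). Note that drop1() refits the model without each term and reports likelihood-ratio tests, while car's Anova() computes Wald chi-square tests from the fitted model, so the two need not agree:

```r
library(lme4)   # glmer() for binomial GLMMs
library(car)    # Anova() with type-III Wald tests

set.seed(1)
# Hypothetical balanced data: one factor, one continuous covariate,
# and a random grouping factor
d <- data.frame(
  f1 = factor(rep(c("a", "b"), each = 100)),
  x  = rnorm(200),
  g  = factor(rep(1:10, times = 20))
)
d$y <- rbinom(200, 1, plogis(0.5 * (d$f1 == "b") + 0.3 * d$x))

m <- glmer(y ~ f1 * x + (1 | g), data = d, family = binomial)

drop1(m, test = "Chisq")   # likelihood-ratio tests (model is refit per term)
Anova(m, type = "III")     # Wald chi-square tests (no refit)
```

With a plain GLM the two kinds of test tend to be close; the asymptotic approximations involved can diverge more noticeably once random effects enter the model.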
10
r
anova
glmm
mixed-model