[r-cran-lsmeans] 01/03: Imported Upstream version 2.21
Jonathon Love
jon at thon.cc
Sat Dec 5 11:28:37 UTC 2015
This is an automated email from the git hooks/post-receive script.
jonathon-guest pushed a commit to branch master
in repository r-cran-lsmeans.
commit 733a4fe516e18d12d75c3bef88f5babd646b85fd
Author: Jonathon Love <jon at thon.cc>
Date: Sat Dec 5 12:19:20 2015 +0100
Imported Upstream version 2.21
---
DESCRIPTION | 11 +-
MD5 | 68 ++++++------
NAMESPACE | 23 ++++-
R/MCMC-support.R | 4 +-
R/aovlist-support.R | 26 +++--
R/countreg-support.R | 246 ++++++++++++++++++++++++++++++++++++++++++++
R/helpers.R | 41 +++++++-
R/lsmeans.R | 26 ++---
R/lsmip.R | 10 +-
R/multinom-support.R | 2 +-
R/nonlin-support.R | 2 +-
R/ordinal-support.R | 6 +-
R/plot.lsm.R | 5 +
R/pmmeans.R | 47 +++++++++
R/rbind.R | 4 +-
R/ref.grid.R | 67 ++++++++----
R/summary.R | 31 ++----
inst/NEWS | 32 ++++++
inst/doc/extending.R | 19 ++--
inst/doc/extending.pdf | Bin 250300 -> 251785 bytes
inst/doc/extending.rnw | 33 +++++-
inst/doc/using-lsmeans.R | 59 ++++++-----
inst/doc/using-lsmeans.pdf | Bin 406568 -> 407331 bytes
inst/doc/using-lsmeans.rnw | 8 +-
man/extending.Rd | 23 ++---
man/glht.Rd | 4 +-
man/lsmeans-package.Rd | 1 +
man/lsmeans.Rd | 26 +++--
man/lsmip.Rd | 9 +-
man/models.Rd | 23 ++++-
man/ref.grid.Rd | 2 +-
man/summary.Rd | 7 +-
man/update.Rd | 21 +++-
tests/tests1.Rout.save | 22 ++--
vignettes/extending.rnw | 33 +++++-
vignettes/using-lsmeans.rnw | 8 +-
36 files changed, 735 insertions(+), 214 deletions(-)
diff --git a/DESCRIPTION b/DESCRIPTION
index a44d30d..5e4f0a1 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,30 +1,29 @@
Package: lsmeans
Type: Package
Title: Least-Squares Means
-Version: 2.20
-Date: 2015-08-28
+Version: 2.21
+Date: 2015-12-04
Authors@R: c(person("Russell", "Lenth", role = c("aut", "cre"),
email = "russell-lenth@uiowa.edu"),
person("Maxime", "Herv\\'e", role = "ctb"))
Depends: estimability, methods, R (>= 3.0)
Suggests: car, lattice, MCMCpack, mediation, multcompView, ordinal,
pbkrtest (>= 0.4-1), CARBayes, coxme, gee, geepack, glmmADMB,
- lme4.0, lme4, MASS, MCMCglmm, nnet, survival
+ lme4.0, lme4, MASS, MCMCglmm, nnet, pscl, survival
Imports: graphics, stats, utils, nlme, coda (>= 0.17), multcomp, plyr,
mvtnorm
Additional_repositories: http://glmmadmb.r-forge.r-project.org/repos,
http://lme4.0.r-forge.r-project.org/repos
LazyData: yes
ByteCompile: yes
-Encoding: latin1
Description: Obtain least-squares means for many linear, generalized linear,
and mixed models. Compute contrasts or linear functions of least-squares
means, and comparisons of slopes. Plots and compact letter displays.
License: GPL-2
NeedsCompilation: no
-Packaged: 2015-08-28 19:15:19 UTC; rlenth
+Packaged: 2015-12-04 17:59:13 UTC; rlenth
Author: Russell Lenth [aut, cre],
Maxime Herv\'e [ctb]
Maintainer: Russell Lenth <russell-lenth@uiowa.edu>
Repository: CRAN
-Date/Publication: 2015-08-29 10:01:15
+Date/Publication: 2015-12-04 21:43:52
diff --git a/MD5 b/MD5
index 725acb5..3576af1 100644
--- a/MD5
+++ b/MD5
@@ -1,23 +1,25 @@
-61e51a4ad0e5c1d162ac6225fe897efe *DESCRIPTION
-991abc021158c5a1b9663e7eee2538cc *NAMESPACE
-8d8236ac0bc973e198ee9fab9d08e84b *R/MCMC-support.R
+a7d99afb8e29b4f0c6d857baaecdfbc2 *DESCRIPTION
+6a4bd5a9b9d1bc04d592e734c4e44d88 *NAMESPACE
+81b8d6ed7fa4911183eba1c47d938ce3 *R/MCMC-support.R
0c1e1e67b366e333c59ca1d4bc906e80 *R/S4-classes.R
-5b3bcb5bec07fe9e89c8473ee7cb8024 *R/aovlist-support.R
+95b9f9ba69b8bc1055d2071ad50e217b *R/aovlist-support.R
fc4a9c967069dafcae6245b8737cc350 *R/cld.lsm.R
+e5bc0c0106fee5f8b60eabfa29b5e9ef *R/countreg-support.R
54b63f31e3bea39d3e37ec09836161b5 *R/glht-support.R
-20742d04ae1d5f32849360e5086f63ac *R/helpers.R
+31e006455401777b6ed297e68bf43319 *R/helpers.R
e9e4975adf11cfefe004844e2546b94a *R/lsm-contr.R
9bdcfa0b20edb9d2498139e44f909094 *R/lsm.list.R
-2dd462bae62457cbab27d2fa2f69629e *R/lsmeans.R
-7877674094e31d7fdaea0cf6f2f49621 *R/lsmip.R
-a0107ecfff9b8b2bf2a8950ee2a770e3 *R/multinom-support.R
-283e0028401663e58f415c12dfed066f *R/nonlin-support.R
-6313366cc27e687c591138124080c23d *R/ordinal-support.R
-580ddcaa84007964a3ca17fee5aaa0f6 *R/plot.lsm.R
-6f234afff78f381a1657c042fa0862e1 *R/rbind.R
-f34a944e36be8257300f6dc671439e99 *R/ref.grid.R
+682cdc1f5acd2945616e36c6f8c9b439 *R/lsmeans.R
+023c0a5cd47917faa86b333d1e9be683 *R/lsmip.R
+a20bea4c3cb0e7b22d8640ad7ef19dc1 *R/multinom-support.R
+aecd5a482db0bec6c0cb67caa7263b6f *R/nonlin-support.R
+5f1cce2820fb04f54dda2b3b181c0ff4 *R/ordinal-support.R
+ece38aedcad99b64b4f5a74c9f7db3f2 *R/plot.lsm.R
+2a117fe733baeb21a33e2ae077b91fed *R/pmmeans.R
+b229c8767dbf09f7c562d7beef50cc9c *R/rbind.R
+bb480e87e35f599aded6c27e24855616 *R/ref.grid.R
47170af1f5012f10b5a3ba0fa84021e0 *R/rms-support.R
-4d6f0d6359826960124d29836276dc8b *R/summary.R
+aae4082048d1a9515a8d872869053e83 *R/summary.R
d4c76e01b4d7f759205d1229f2b110d1 *build/vignette.rds
dd8a9e9e6180f65bff84a40f25943636 *data/MOats.RData
2d7a1d217def5e4ab094c7f019a7d27d *data/autonoise.RData
@@ -25,34 +27,34 @@ dd8a9e9e6180f65bff84a40f25943636 *data/MOats.RData
f204c1431e151e2459178e8f4fc8f0e8 *data/fiber.RData
298a5159a77e201ffe1410cf80282eff *data/nutrition.RData
2b03398ef2214b51371430e875d964cc *data/oranges.RData
-60b5a9d71c9dc74fc421bc952da2aac9 *inst/NEWS
-9696ff5c71ce6b224f062420f3ece896 *inst/doc/extending.R
-5cd2996a6d64bf5522cfa238405435c9 *inst/doc/extending.pdf
-0d2e8b24c04ee245fe0c1eaa2d7d3f73 *inst/doc/extending.rnw
-c1a8e56bf451a8a73a900ab0b94ccd67 *inst/doc/using-lsmeans.R
-f7fc6699d7a758d406766d80932d1fc3 *inst/doc/using-lsmeans.pdf
-5c8d6c6c87670f8d71e6272a89d87ee0 *inst/doc/using-lsmeans.rnw
+d8d44ae0f7cd9e76d475345072a01441 *inst/NEWS
+35d101d38e7efda82971d573740a4731 *inst/doc/extending.R
+70e2fe2f4c6163ca81d25296143c8b72 *inst/doc/extending.pdf
+8ba55dabcb58b91cba0b35490fa0bf2b *inst/doc/extending.rnw
+625c2a99db7b44020626904a438f80a6 *inst/doc/using-lsmeans.R
+acdf305314e108df82505048fa7e7203 *inst/doc/using-lsmeans.pdf
+63ee40c1977b64cb47fcf7e31df89e6f *inst/doc/using-lsmeans.rnw
973374a99676131f00cabe705e5fde34 *man/MOats.Rd
7073c84580b0826caf2d10479596ac4b *man/auto.noise.Rd
7f6d7494796dc12d1307d9adfb506574 *man/cld.Rd
98ef9f484be985fc10f4a3ef168c2364 *man/contrast.Rd
-3d8d4689b4c70ad661869bf7bb1b5440 *man/extending.Rd
+b4e5397f0f1d90e0ed0f4e6d0bd01976 *man/extending.Rd
8acf14c497ccb2cb78366a461bb5856d *man/feedlot.Rd
76e8383265d34132ecef76141aa6fd20 *man/fiber.Rd
-31bfc7525c953cf6db2d830f3f95e109 *man/glht.Rd
-8c746675be4e85cf33f4fb568b710dcb *man/lsmeans-package.Rd
-ac6e600a0f16d30dafd691764ba1fde1 *man/lsmeans.Rd
-566b9796c83af6dd3a7abbd8dc71f027 *man/lsmip.Rd
-a81b565388613769f5dc795ff34fd443 *man/models.Rd
+fc19727e78e1b78e6b06d68f495ad125 *man/glht.Rd
+aad911cfedb8b6182a08a85c6ce42243 *man/lsmeans-package.Rd
+8222b5e279641e7af4b1b8364a7dc626 *man/lsmeans.Rd
+09168fb853b1ef7a4a7885f22d83848e *man/lsmip.Rd
+9f1a977da91eb0a23c5d05cbd8d395e1 *man/models.Rd
14561bd752782024533ef1ef5e06f0bb *man/nutrition.Rd
b8e3e18b69fb207939f7e42497519d6c *man/oranges.Rd
8bf9ba39266c94b5bc16f71a81a1e51e *man/pairwise.lsmc.Rd
-cbb0494921d46e8571c30ba1bba90968 *man/ref.grid.Rd
+473a2369b6c504d740478574e292494e *man/ref.grid.Rd
d55dcaa810dbe202b27234b523019436 *man/ref.grid.class.Rd
-bb7aa6ce866f4c555a653b5b4c4d5bde *man/summary.Rd
-c37f774093d91d21f9b113e63ec47d90 *man/update.Rd
+8e7775fd376e8535de7ec2513b000ae3 *man/summary.Rd
+4fe8f946d6d43056fd7c183e6360e939 *man/update.Rd
d36fdf54f5ba0150f90a567b23b5225b *tests/tests1.R
-3ff9a9b33b0ecaa7ef0c2694961886ac *tests/tests1.Rout.save
-0d2e8b24c04ee245fe0c1eaa2d7d3f73 *vignettes/extending.rnw
+9346d4a5569d69c0d49e3569570f98bc *tests/tests1.Rout.save
+8ba55dabcb58b91cba0b35490fa0bf2b *vignettes/extending.rnw
57809d96b40c92d2b432152eaa8dc914 *vignettes/lsmeans.bib
-5c8d6c6c87670f8d71e6272a89d87ee0 *vignettes/using-lsmeans.rnw
+63ee40c1977b64cb47fcf7e31df89e6f *vignettes/using-lsmeans.rnw
diff --git a/NAMESPACE b/NAMESPACE
index a085ba2..a176acd 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -4,7 +4,7 @@
# From default packages (new req from CRAN, July 2015)
importFrom("graphics", "pairs")
importFrom("stats", "as.formula", "coef", "complete.cases", "confint", "cov", "cov2cor", "delete.response", "deriv", "family", "lm", "make.link", "model.frame", "model.matrix", "na.omit", "na.pass", "p.adjust", "p.adjust.methods", "pchisq", "pf", "poly", "predict", "pt", "ptukey", "qf", "qt", "qtukey", "reformulate", "terms", "uniroot", "update", "vcov")
-importFrom("utils", "str")
+importFrom("utils", "head", "installed.packages", "str")
importFrom("nlme", "fixef")
# Others
@@ -40,9 +40,11 @@ if (requireNamespace("coda", quietly = TRUE)) {
# Visible exports
exportPattern("*.lsmc") #all contrast fcns
export(
- All.vars,
as.glht,
contrast,
+ defaults,
+ get.lsm.option,
+ get.pmm.option,
lsm,
lsm.basis,
lsmeans,
@@ -51,12 +53,29 @@ export(
lsmip,
lsm.options,
lstrends,
+ pmm,
+ pmmeans,
+ pmmobj,
+ pmmip,
+ pmm.options,
+ pmtrends,
+ ###rbind, # my own version that overrides one in methods
recover.data,
ref.grid,
regrid,
test
)
+# hidden functions of possible use to other package developers
+export(
+ .all.vars,
+ .aovlist.dffun,
+ .diag,
+ .get.offset,
+ .my.vcov
+)
+
+
# S3 methods for recover.data and lsm.basis that are needed by other packages
S3method(recover.data, call)
diff --git a/R/MCMC-support.R b/R/MCMC-support.R
index b83af6a..e85efc7 100644
--- a/R/MCMC-support.R
+++ b/R/MCMC-support.R
@@ -20,7 +20,7 @@ as.mcmc.ref.grid = function(x, names = TRUE, ...) {
# Currently, data is required, as call is not stored
recover.data.MCMCglmm = function(object, data, ...) {
# if a multivariate response, stack the data with `trait` variable
- yvars = All.vars(update(object$Fixed$formula, ". ~ 1"))
+ yvars = .all.vars(update(object$Fixed$formula, ". ~ 1"))
if(length(yvars) > 1) {
# for (v in yvars) data[[v]] = NULL
dat = data
@@ -30,7 +30,7 @@ recover.data.MCMCglmm = function(object, data, ...) {
}
attr(data, "call") = object$Fixed
attr(data, "terms") = trms = delete.response(terms(object$Fixed$formula))
- attr(data, "predictors") = All.vars(delete.response(trms))
+ attr(data, "predictors") = .all.vars(delete.response(trms))
attr(data, "responses") = yvars
data
}
diff --git a/R/aovlist-support.R b/R/aovlist-support.R
index 2ba6a5c..f6eb0b3 100644
--- a/R/aovlist-support.R
+++ b/R/aovlist-support.R
@@ -88,16 +88,10 @@ lsm.basis.aovlist = function (object, trms, xlev, grid, vcov., ...) {
dfargs = list()
dffun = function(k, dfargs) NA
}
- else{
+ else {
dfargs = list(Vmats=Vmats, Vidx=Vidx, Vdf=unlist(Vdf), wts = wts)
dffun = function(k, dfargs) {
- v = sapply(seq_along(dfargs$Vdf), function(j) {
- ii = dfargs$Vidx[[j]]
- kk = (k * dfargs$wts[j, ])[ii]
- #sum(kk * .mat.times.vec(dfargs$Vmats[[j]], kk))
- .qf.non0(dfargs$Vmats[[j]], kk)
- })
- sum(v)^2 / sum(v^2 / dfargs$Vdf) # Good ole Satterthwaite
+ lsmeans::.aovlist.dffun(k, dfargs)
}
}
nbasis = estimability::all.estble # Consider this further?
@@ -105,4 +99,20 @@ lsm.basis.aovlist = function (object, trms, xlev, grid, vcov., ...) {
list(X = X, bhat = bhat, nbasis = nbasis, V = V, dffun = dffun,
dfargs = dfargs, misc = misc)
+}
+
+.aovlist.dffun = function(k, dfargs) {
+ if(is.matrix(k) && (nrow(k) > 1)) {
+ dfs = apply(k, 1, .aovlist.dffun, dfargs)
+ min(dfs)
+ }
+ else {
+ v = sapply(seq_along(dfargs$Vdf), function(j) {
+ ii = dfargs$Vidx[[j]]
+ kk = (k * dfargs$wts[j, ])[ii]
+ #sum(kk * .mat.times.vec(dfargs$Vmats[[j]], kk))
+ .qf.non0(dfargs$Vmats[[j]], kk)
+ })
+ sum(v)^2 / sum(v^2 / dfargs$Vdf) # Good ole Satterthwaite
+ }
}
\ No newline at end of file
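As an aside, the refactored .aovlist.dffun applies the classical Satterthwaite approximation, df = (sum v)^2 / sum(v_j^2 / df_j), to the per-stratum quadratic forms. A minimal standalone sketch of that combination, with made-up values for v and the stratum d.f. (illustrative only, not taken from this commit):

    v  <- c(1.8, 0.6)   # hypothetical quadratic forms from two error strata
    df <- c(10, 27)     # hypothetical degrees of freedom for those strata
    sum(v)^2 / sum(v^2 / df)   # Satterthwaite approximate denominator d.f.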
diff --git a/R/countreg-support.R b/R/countreg-support.R
new file mode 100644
index 0000000..fd0bea0
--- /dev/null
+++ b/R/countreg-support.R
@@ -0,0 +1,246 @@
+# Support for zeroinfl and hurdle models (pscl [& future countreg package?])
+
+# We'll support two optional arguments:
+# mode -- type of result required
+# lin.pred -- TRUE: keep linear predictor and link
+# FALSE: back-transform (default)
+#
+# With lin.pred = FALSE and mode %in% c("response", "count", "zero"), we
+# will return comparable results to predict(..., type = mode)
+# with mode = "prob0", same results as predict(..., type = "prob")[, 1]
+#
+# lin.pred only affects results for mode %in% c("count", "zero").
+# When lin.pred = TRUE, we get the actual linear predictor and link function
+# for that part of the model.
+
+
+
+# ----- zeroinfl objects -----
+
+recover.data.zeroinfl = function(object, mode = c("response", "count", "zero", "prob0"), ...) {
+ fcall = object$call
+ mode = match.arg(mode)
+ if (mode %in% c("count", "zero"))
+ trms = delete.response(terms(object, model = mode))
+ else ### mode %in% c("response", "prob0")
+ trms = delete.response(object$terms$full)
+ # Make sure there's an offset function available
+ env = new.env(parent = attr(trms, ".Environment"))
+ env$offset = function(x) x
+ attr(trms, ".Environment") = env
+ recover.data(fcall, trms, object$na.action, ...)
+}
+
+
+lsm.basis.zeroinfl = function(object, trms, xlev, grid,
+ mode = c("response", "count", "zero", "prob0"), lin.pred = FALSE, ...)
+{
+ mode = match.arg(mode)
+ m = model.frame(trms, grid, na.action = na.pass, xlev = xlev)
+ if (mode %in% c("count", "zero")) {
+ contr = object$contrasts[[mode]]
+ X = model.matrix(trms, m, contrasts.arg = contr)
+ bhat = coef(object, model = mode)
+ V = .pscl.vcov(object, model = mode, ...)
+ if (mode == "count")
+ misc = list(tran = "log", inv.lbl = "count")
+ else
+ misc = list(tran = object$link, inv.lbl = "prob")
+ if (!lin.pred) { # back-transform the results
+ lp = as.numeric(X %*% bhat + .get.offset(trms, grid))
+ lnk = make.link(misc$tran)
+ bhat = lnk$linkinv(lp)
+ delta = .diag(lnk$mu.eta(lp)) %*% X
+ V = delta %*% tcrossprod(V, delta)
+ X = diag(1, length(bhat))
+ misc = list(offset.mult = 0)
+ }
+ }
+ else { ## "response", "prob0"
+ trms1 = delete.response(terms(object, model = "count"))
+ off1 = .get.offset(trms1, grid)
+ contr1 = object$contrasts[["count"]]
+ X1 = model.matrix(trms1, m, contrasts.arg = contr1)
+ b1 = coef(object, model = "count")
+ lp1 = as.numeric(X1 %*% b1 + off1)
+ mu1 = exp(lp1)
+
+ trms2 = delete.response(terms(object, model = "zero"))
+ off2 = .get.offset(trms2, grid)
+ contr2 = object$contrasts[["zero"]]
+ X2 = model.matrix(trms2, m, contrasts.arg = contr2)
+ b2 = coef(object, model = "zero")
+ lp2 = as.numeric(X2 %*% b2) + off2
+ mu2 = object$linkinv(lp2)
+ mu2prime = stats::make.link(object$link)$mu.eta(lp2)
+
+ if(mode == "response") {
+ delta = .diag(mu1) %*% cbind(.diag(1 - mu2) %*% X1, .diag(-mu2prime) %*% X2)
+ bhat = (1 - mu2) * mu1
+ }
+ else { # mode = "prob0"
+ p0 = 1 - .prob.gt.0(object$dist, mu1, object$theta)
+ dp0 = - .dprob.gt.0(object$dist, mu1, object$theta, "log", lp1)
+ bhat = (1 - mu2) * p0 + mu2
+ delta = cbind(.diag((1 - mu2) * dp0) %*% X1, .diag(mu2prime * (1 - p0)) %*% X2)
+ }
+ V = delta %*% tcrossprod(.pscl.vcov(object, model = "full", ...), delta)
+ X = diag(1, length(bhat))
+ misc = list(offset.mult = 0)
+ }
+ nbasis = estimability::all.estble
+ dffun = function(k, dfargs) NA
+ dfargs = list()
+ list(X = X, bhat = bhat, nbasis = nbasis, V = V,
+ dffun = dffun, dfargs = dfargs, misc = misc)
+}
+
+
+
+#### Support for hurdle models
+
+recover.data.hurdle = function(object, mode = c("response", "count", "zero", "prob0"), ...) {
+ fcall = object$call
+ mode = match.arg(mode)
+ if (mode %in% c("count", "zero"))
+ trms = delete.response(terms(object, model = mode))
+ else ### mode %in% c("response", "prob0")
+ trms = delete.response(object$terms$full)
+ # Make sure there's an offset function available
+ env = new.env(parent = attr(trms, ".Environment"))
+ env$offset = function(x) x
+ attr(trms, ".Environment") = env
+ recover.data(fcall, trms, object$na.action, ...)
+}
+
+# see expl notes afterward for notations in some of this
+lsm.basis.hurdle = function(object, trms, xlev, grid,
+ mode = c("response", "count", "zero", "prob0"),
+ lin.pred = FALSE, ...)
+{
+ mode = match.arg(mode)
+ m = model.frame(trms, grid, na.action = na.pass, xlev = xlev)
+ if ((lin.pred && mode %in% c("count", "zero")) || (!lin.pred && mode %in% c("count", "prob0"))) {
+ model = ifelse(mode == "count", "count", "zero")
+ contr = object$contrasts[[model]]
+ X = model.matrix(trms, m, contrasts.arg = contr)
+ bhat = coef(object, model = model)
+ V = .pscl.vcov(object, model = model, ...)
+ misc = switch(object$dist[[model]],
+ binomial = list(tran = object$link, inv.lbl = "prob"),
+ list(tran = "log", inv.lbl = "count"))
+ if (!lin.pred) { # back-transform
+ lp = as.numeric(X %*% bhat + .get.offset(trms, grid))
+ lnk = make.link(misc$tran)
+ bhat = lnk$linkinv(lp)
+ if (mode != "prob0") {
+ delta = .diag(lnk$mu.eta(lp)) %*% X
+ }
+ else {
+ bhat = 1 - .prob.gt.0(object$dist$zero, bhat, object$theta["zero"])
+ db = - .dprob.gt.0(object$dist$zero, bhat, object$theta["zero"], misc$tran, lp)
+ delta = .diag(db) %*% X
+ }
+ V = delta %*% tcrossprod(V, delta)
+ X = diag(1, length(bhat))
+ misc = list(offset.mult = 0)
+ }
+ }
+ else { ### "zero" or "response" with implied lin.pred = FALSE
+ trms1 = delete.response(terms(object, model = "count"))
+ off1 = .get.offset(trms1, grid)
+ contr1 = object$contrasts[["count"]]
+ X1 = model.matrix(trms1, m, contrasts.arg = contr1)
+ b1 = coef(object, model = "count")
+ mu1 = as.numeric(exp(X1 %*% b1 + off1))
+ theta1 = object$theta["count"]
+ p1 = .prob.gt.0(object$dist$count, mu1, theta1)
+ dp1 = .dprob.gt.0(object$dist$count, mu1, theta1, "", 0) # binomial won't happen
+
+ trms2 = delete.response(terms(object, model = "zero"))
+ off2 = .get.offset(trms2, grid)
+ contr2 = object$contrasts[["zero"]]
+ X2 = model.matrix(trms2, m, contrasts.arg = contr2)
+ b2 = coef(object, model = "zero")
+ lp2 = as.numeric(X2 %*% b2 + off2)
+ mu2 = switch(object$dist$zero,
+ binomial = object$linkinv(lp2),
+ exp(lp2) )
+ theta2 = object$theta["zero"]
+ p2 = .prob.gt.0(object$dist$zero, mu2, theta2)
+ dp2 = .dprob.gt.0(object$dist$zero, mu2, theta2, object$link, lp2)
+
+ if (mode == "response") {
+ bhat = p2 * mu1 / p1
+ delta = cbind(.diag(bhat*(1 - mu1 * dp1 / p1)) %*% X1,
+ .diag(mu1 * dp2 / p1) %*% X2)
+ }
+ else { ## mode == "zero"
+ bhat = p2 / p1
+ delta = cbind(.diag(-p2 * dp1 / p1^2) %*% X1,
+ .diag(dp2 / p1) %*% X2)
+ }
+ V = delta %*% tcrossprod(.pscl.vcov(object, model = "full", ...), delta)
+ X = .diag(1, length(bhat))
+
+ misc = list(estName = mode, offset.mult = 0)
+ }
+ nbasis = estimability::all.estble
+ dffun = function(k, dfargs) object$df.residual
+ dfargs = list()
+ list(X = X, bhat = bhat, nbasis = nbasis, V = V,
+ dffun = dffun, dfargs = dfargs, misc = misc)
+}
+
+# utility for prob (Y > 0 | dist, mu, theta)
+.prob.gt.0 = function(dist, mu, theta) {
+ switch(dist,
+ binomial = mu,
+ poisson = 1 - exp(-mu),
+ negbin = 1 - (theta / (mu + theta))^theta,
+ geometric = 1 - 1 / (1 + mu)
+ )
+}
+
+# utility for d/d(eta) prob (Y > 0 | dist, mu, theta)
+.dprob.gt.0 = function(dist, mu, theta, link, lp) {
+ switch(dist,
+ binomial = stats::make.link(link)$mu.eta(lp),
+ poisson = mu * exp(-mu),
+ negbin = mu * (theta /(mu + theta))^(1 + theta),
+ geometric = mu / (1 + mu)^2
+ )
+}
+
+# special version of .my.vcov that accepts (and requires!) model argument
+.pscl.vcov = function(object, model, vcov. = stats::vcov, ...) {
+ if (is.function(vcov.))
+ vcov. = vcov.(object, model = model)
+ else if (!is.matrix(vcov.))
+ stop("vcov. must be a function or a square matrix")
+ vcov.
+}
+
+# Explanatory notes for hurdle models
+# -----------------------------------
+# We have a linear predictor eta = X%*%beta + offset
+# mu = h(eta) where h is inverse link (usually exp but not always)
+# Define p = P(Y > 0 | mu). This comes out to...
+# binomial: mu
+# poisson: 1 - exp(-mu)
+# negbin: 1 - (theta/(mu+theta))^theta
+# geometric: 1 - 1/(mu+1)
+# Define dp = dp/d(eta). Note - when h(eta)=exp(eta) we have dp = mu*dp/d(mu)
+# binomial: h'(eta)
+# poisson: mu*exp(-mu)
+# negbin: mu*(theta/(mu+theta))^(theta+1)
+# geometric: mu/(mu+1)^2
+#
+# This gives us what we need to find the estimates and apply the delta method
+# In the code we index these notations with 1 (count model) and 2 (zero model)
+# And we treat theta1 and theta2 as constants
+#
+#!!! In theory, above seems correct, and estimates match those from predict.hurdle.
+#!!! But SEs don't seem right.
+#!!! They do seem right though if I omit the factor of mu in dp
+#!!! when link is log
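A brief usage sketch of the new zeroinfl/hurdle support, assuming the pscl package and its bioChemists data set (the example itself is not part of this commit; mode and lin.pred behave as described in the comments at the top of the file):

    library(pscl)      # provides zeroinfl(), hurdle(), and bioChemists
    library(lsmeans)
    zip.mod <- zeroinfl(art ~ fem + mar | fem, data = bioChemists)
    # Mean response, comparable to predict(zip.mod, type = "response")
    lsmeans(zip.mod, ~ fem, mode = "response")
    # Count-model results kept on the log (linear-predictor) scale
    lsmeans(zip.mod, ~ fem, mode = "count", lin.pred = TRUE)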
diff --git a/R/helpers.R b/R/helpers.R
index af20f5b..90970cc 100644
--- a/R/helpers.R
+++ b/R/helpers.R
@@ -73,7 +73,7 @@ lsm.basis.default = function(object, trms, xlev, grid, ...) {
# Later addition: na.action arg req'd - vector of row indexes removed due to NAs
recover.data.call = function(object, trms, na.action, data = NULL, params = NULL, ...) {
fcall = object # because I'm easily confused
- vars = setdiff(All.vars(trms), params)
+ vars = setdiff(.all.vars(trms), params)
if (length(vars) == 0)
return("Model must have at least one predictor")
tbl = data
@@ -104,7 +104,7 @@ recover.data.call = function(object, trms, na.action, data = NULL, params = NULL
attr(tbl, "call") = object # the original call
attr(tbl, "terms") = trms
- attr(tbl, "predictors") = setdiff(All.vars(delete.response(trms)), params)
+ attr(tbl, "predictors") = setdiff(.all.vars(delete.response(trms)), params)
attr(tbl, "responses") = setdiff(vars, union(attr(tbl, "predictors"), params))
tbl
}
@@ -181,7 +181,10 @@ lsm.basis.merMod = function(object, trms, xlev, grid, vcov., ...) {
dfargs = misc = list()
if (lme4::isLMM(object)) {
pbdis = .lsm.is.true("disable.pbkrtest")
- if (!pbdis && requireNamespace("pbkrtest") && missing(vcov.)) {
+ Nlim = get.lsm.option("pbkrtest.limit")
+ objN = lme4::getME(object, "N")
+ toobig = objN > Nlim
+ if (!pbdis && !toobig && requireNamespace("pbkrtest") && missing(vcov.)) {
dfargs = list(unadjV = V,
adjV = pbkrtest::vcovAdj.lmerMod(object, 0))
V = as.matrix(dfargs$adjV)
@@ -194,7 +197,14 @@ lsm.basis.merMod = function(object, trms, xlev, grid, vcov., ...) {
}
}
else {
- if(!pbdis) message("Install package 'pbkrtest' to obtain bias corrections and degrees of freedom")
+ if(!pbdis && !("pbkrtest" %in% row.names(installed.packages())))
+ message("Install package 'pbkrtest' to obtain bias corrections and degrees of freedom")
+ else if(toobig)
+ message("Note: Adjusted covariance and degrees-of-freedom calculations have been\n",
+ "disabled because the number of observations exceeds ", Nlim, ".\n",
+ "Standard errors and tests may be more biased than if they were adjusted.\n",
+ "To enable adjustments, set lsm.options(pbkrtest.limit = ", objN, ") or larger,\n",
+ "but be warned that this may result in large computation time and memory use.")
dffun = function(k, dfargs) NA
}
}
@@ -685,7 +695,13 @@ lsm.basis.gam = function(object, trms, xlev, grid, ...) {
## Alternative to all.vars, but keeps vars like foo$x and foo[[1]] as-is
## Passes ... to all.vars
-All.vars = function(expr, retain = c("\\$", "\\[\\[", "\\]\\]"), ...) {
+.all.vars = function(expr, retain = c("\\$", "\\[\\[", "\\]\\]"), ...) {
+ if (!inherits(expr, "formula")) {
+ expr = try(eval(expr), silent = TRUE)
+ if(inherits(expr, "try-error")) {
+ return(character(0))
+ }
+ }
repl = paste("_Av", seq_along(retain), "_", sep = "")
for (i in seq_along(retain))
expr = gsub(retain[i], repl[i], expr)
@@ -696,3 +712,18 @@ All.vars = function(expr, retain = c("\\$", "\\[\\[", "\\]\\]"), ...) {
vars = gsub(repl[i], retain[i], vars)
vars
}
+
+
+### Not-so-damn-smart replacement of diag() that will
+### not be so quick to assume I want an identity matrix
+### returns matrix(x) when x is a scalar
+.diag = function(x, nrow, ncol) {
+ if(is.matrix(x))
+ diag(x)
+ else if((length(x) == 1) && missing(nrow) && missing(ncol))
+ matrix(x)
+ else
+ diag(x, nrow, ncol)
+}
+
+
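Both helpers are small but easy to misuse, so here is a quick illustration of their behavior (mirroring the examples added to the extending vignette below):

    form <- ~ data$x + data[[5]]
    base::all.vars(form)        # "data"
    lsmeans::.all.vars(form)    # "data$x"  "data[[5]]"
    diag(3)                     # 3 x 3 identity matrix
    lsmeans::.diag(3)           # 1 x 1 matrix containing 3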
diff --git a/R/lsmeans.R b/R/lsmeans.R
index a4ede5b..49b1cc5 100644
--- a/R/lsmeans.R
+++ b/R/lsmeans.R
@@ -44,11 +44,11 @@ function(object, specs, contr.list, trend, ...) {
if(length(specs) == 2) { # just a rhs
by = .find.by(as.character(specs[2]))
- lsmeans(object, All.vars(specs), by = by, ...)
+ lsmeans(object, .all.vars(specs), by = by, ...)
}
else {
-# lsms = lsmeans(object, All.vars(specs[-2]), ...)
- contr.spec = All.vars(specs[-3])[1]
+# lsms = lsmeans(object, .all.vars(specs[-2]), ...)
+ contr.spec = .all.vars(specs[-3])[1]
by = .find.by(as.character(specs[3]))
# Handle old-style case where contr is a list of lists
if (!missing(contr.list)) {
@@ -56,7 +56,7 @@ function(object, specs, contr.list, trend, ...) {
if (!is.null(cmat))
contr.spec = cmat
}
- lsmeans(object, specs = All.vars(specs[-2]),
+ lsmeans(object, specs = .all.vars(specs[-2]),
by = by, contr = contr.spec, ...)
}
}
@@ -123,6 +123,7 @@ lsmeans.character.ref.grid = function(object, specs, by = NULL,
# Figure out the structure of the grid
wgt = RG@grid[[".wgt."]]
+ if(all(zapsmall(wgt) == 0)) wgt = wgt + 1 ### repl all zero wgts with 1
dims = sapply(RG@levels, length)
row.idx = array(seq_len(nrow(RG@linfct)), dims)
use.mars = match(facs, names(RG@levels)) # which margins to use
@@ -159,14 +160,14 @@ lsmeans.character.ref.grid = function(object, specs, by = NULL,
if (is.matrix(weights)) {
wtrow = 0
fac.reduce = function(coefs) {
- wtmat = diag(weights[wtrow+1, ]) / sum(weights[wtrow+1, ])
+ wtmat = .diag(weights[wtrow+1, ]) / sum(weights[wtrow+1, ])
ans = apply(wtmat %*% coefs, 2, sum)
wtrow <<- (1 + wtrow) %% nrow(weights)
ans
}
}
else if (is.numeric(weights)) {
- wtmat = diag(weights)
+ wtmat = .diag(weights)
wtsum = sum(weights)
if (wtsum <= 1e-8) wtsum = NA
fac.reduce = function(coefs) {
@@ -188,7 +189,7 @@ lsmeans.character.ref.grid = function(object, specs, by = NULL,
if (!missing(weights) && (weights == "fq"))
K = plyr::alply(row.idx, use.mars, function(idx) {
fq = RG@grid[[".wgt."]][idx]
- apply(diag(fq) %*% RG@linfct[idx, , drop=FALSE], 2, sum) / sum(fq)
+ apply(.diag(fq) %*% RG@linfct[idx, , drop=FALSE], 2, sum) / sum(fq)
})
else
K = plyr::alply(row.idx, use.mars, function(idx) {
@@ -217,7 +218,8 @@ lsmeans.character.ref.grid = function(object, specs, by = NULL,
RG@roles$responses = character()
RG@misc$famSize = nrow(linfct)
- RG@misc$estName = "lsmean"
+ if(RG@misc$estName == "prediction")
+ RG@misc$estName = "lsmean"
RG@misc$adjust = "none"
RG@misc$infer = c(TRUE,FALSE)
RG@misc$pri.vars = setdiff(facs, by)
@@ -252,7 +254,7 @@ lsmeans.character.ref.grid = function(object, specs, by = NULL,
.find.by = function(rhs) {
b = strsplit(rhs, "\\|")[[1]]
if (length(b) > 1)
- All.vars(as.formula(paste("~",b[2])))
+ .all.vars(as.formula(paste("~",b[2])))
else NULL
}
@@ -313,7 +315,7 @@ contrast.ref.grid = function(object, method = "eff", by, adjust, offset = NULL,
# If you ever want to expand to irregular grids, this block will
# have to change, but everything else is probably OK.
else {
- tcmat = kronecker(diag(rep(1,length(by.rows))), t(cmat))
+ tcmat = kronecker(.diag(rep(1,length(by.rows))), t(cmat))
linfct = tcmat %*% object@linfct[unlist(by.rows), ]
tmp = expand.grid(con= names(cmat), by = seq_len(length(by.rows)))###unique(by.id))
grid = data.frame(.contrast. = tmp$con)
@@ -511,7 +513,7 @@ lstrends = function(model, specs, var, delta.var=.01*rng, data, ...) {
fcn = NULL # differential
if (is.null(x)) {
fcn = var
- var = All.vars(as.formula(paste("~",var)))
+ var = .all.vars(as.formula(paste("~",var)))
if (length(var) > 1)
stop("Can only support a function of one variable")
else {
@@ -584,7 +586,7 @@ lstrends = function(model, specs, var, delta.var=.01*rng, data, ...) {
.some.term.contains = function(facs, terms) {
for (trm in attr(terms, "term.labels")) {
if(all(sapply(facs, function(f) length(grep(f,trm))>0)))
- if (length(All.vars(as.formula(paste("~",trm)))) > length(facs))
+ if (length(.all.vars(as.formula(paste("~",trm)))) > length(facs))
return(TRUE)
}
return(FALSE)
diff --git a/R/lsmip.R b/R/lsmip.R
index 9049d3d..372b15f 100644
--- a/R/lsmip.R
+++ b/R/lsmip.R
@@ -28,12 +28,12 @@ lsmip.default = function(object, formula, type,
}
}
- allvars = setdiff(All.vars(formula), ".single.")
+ allvars = setdiff(.all.vars(formula), ".single.")
lsmopts$object = object
lsmopts$specs = reformulate(allvars)
lsmo = do.call("lsmeans", lsmopts)
if(missing(type)) {
- type = lsm.options()$summary$predict.type
+ type = get.lsm.option("summary")$predict.type
if (is.null(type))
type = .get.predict.type(lsmo@misc)
}
@@ -43,7 +43,7 @@ lsmip.default = function(object, formula, type,
lsms = cbind(lsmo@grid, lsmean = lsm)
# Set up trace vars and key
- tvars = All.vars(update(formula, . ~ 1))
+ tvars = .all.vars(update(formula, . ~ 1))
if (all(tvars == ".single.")) {
lsms$.single. = 1
my.key = function(tvars) list()
@@ -61,7 +61,7 @@ lsmip.default = function(object, formula, type,
# figure out 'x' and 'by' vars
rhs = strsplit(as.character(formula[3]), "\\|")[[1]]
- xvars = All.vars(reformulate(rhs[[1]]))
+ xvars = .all.vars(reformulate(rhs[[1]]))
xv = do.call(paste, lsms[xvars])
lsms$xvar = factor(xv, levels = unique(xv))
lsms = lsms[order(lsms$xvar), ]
@@ -69,7 +69,7 @@ lsmip.default = function(object, formula, type,
# see if we have any 'by' vars
if (length(rhs) > 1) {
- byvars = All.vars(reformulate(rhs[[2]]))
+ byvars = .all.vars(reformulate(rhs[[2]]))
plotform = as.formula(paste("lsmean ~ xvar |", paste(byvars, collapse="*")))
}
diff --git a/R/multinom-support.R b/R/multinom-support.R
index 58f33da..b05f011 100644
--- a/R/multinom-support.R
+++ b/R/multinom-support.R
@@ -55,7 +55,7 @@ lsm.basis.multinom = function(object, trms, xlev, grid,
exp.psi = exp(linfct[rows, , drop = FALSE] %*% object@bhat)
p = as.numeric(exp.psi / sum(exp.psi))
bhat[rows] = p
- A = diag(p) - outer(p, p) # partial derivs
+ A = .diag(p) - outer(p, p) # partial derivs
linfct[rows, ] = A %*% linfct[rows, ]
}
misc$postGridHook = misc$tran = misc$inv.lbl = NULL
diff --git a/R/nonlin-support.R b/R/nonlin-support.R
index a318da5..1630780 100644
--- a/R/nonlin-support.R
+++ b/R/nonlin-support.R
@@ -44,7 +44,7 @@ recover.data.nlme = function(object, param, ...) {
fcall$weights = NULL
#trms = delete.response(terms(update(terms(object), form)))
trms = delete.response(terms(form))
- if (length(All.vars(trms)) == 0)
+ if (length(.all.vars(trms)) == 0)
return(paste("No predictors for '", param, "' in fixed model", sep = ""))
recover.data(fcall, trms, object$na.action, ...)
}
diff --git a/R/ordinal-support.R b/R/ordinal-support.R
index f1fec17..5b6ec72 100644
--- a/R/ordinal-support.R
+++ b/R/ordinal-support.R
@@ -12,8 +12,8 @@ recover.data.clm = function(object, mode = "latent", ...) {
recover.data.lm(object, ...)
else { # bring-in predictors from loc, scale, and nom models
trms = delete.response(object$terms)
- #preds = union(All.vars(trms), union(All.vars(object$S.terms), All.vars(object$nom.terms)))
- x.preds = union(All.vars(object$S.terms), All.vars(object$nom.terms))
+ #preds = union(.all.vars(trms), union(.all.vars(object$S.terms), .all.vars(object$nom.terms)))
+ x.preds = union(.all.vars(object$S.terms), .all.vars(object$nom.terms))
#x.trms = update(trms, reformulate(preds))
x.trms = terms(update(trms, reformulate(c(".", x.preds))))
recover.data(object$call, x.trms, object$na.action, ...)
@@ -279,7 +279,7 @@ lsm.basis.clm = function (object, trms, xlev, grid,
if (!is.null(object@grid$.offset.))
eta = eta + object@grid$.offset.
for (j in scols) linfct[, j] = eta * linfct[, j]
- linfct = (diag(rsigma) %*% linfct) [, active, drop = FALSE]
+ linfct = (.diag(rsigma) %*% linfct) [, active, drop = FALSE]
list(est = eta * rsigma, V = linfct %*% tcrossprod(object@V, linfct))
}
diff --git a/R/plot.lsm.R b/R/plot.lsm.R
index 6feca5a..5177023 100644
--- a/R/plot.lsm.R
+++ b/R/plot.lsm.R
@@ -8,6 +8,11 @@ plot.lsmobj = function(x, y, type, intervals = TRUE, comparisons = FALSE,
object = update(x, predict.type = type, ..., silent = TRUE)
else
object = update(x, ..., silent = TRUE)
+ if (missing(int.adjust)) {
+ int.adjust = object@misc$adjust
+ if (is.null(int.adjust))
+ int.adjust = "none"
+ }
summ = summary(object, infer = c(TRUE, FALSE), adjust = int.adjust)
estName = attr(summ, "estName")
extra = NULL
diff --git a/R/pmmeans.R b/R/pmmeans.R
new file mode 100644
index 0000000..a11e3b4
--- /dev/null
+++ b/R/pmmeans.R
@@ -0,0 +1,47 @@
+### Support for overriding "ls" with "pm" in names
+
+### general-purpose wrapper for creating pmxxxxx functions
+.pmwrap = function(lsfcn, ...) {
+ result = lsfcn(...)
+
+ if (inherits(result, "ref.grid"))
+ result = .sub.ls.pm(result)
+ else if(inherits(result, "lsm.list")) {
+ for (i in seq_along(result))
+ result[[i]] = .sub.ls.pm(result[[i]])
+ names(result) = gsub("^ls", "pm", names(result))
+ }
+ result
+}
+
+# returns an updated ref.grid or lsmobj with setName "ls..." replaced by "pm..."
+.sub.ls.pm = function(object) {
+ nm = object@misc$estName
+ update(object, estName = gsub("^ls", "pm", nm))
+}
+
+### Exported implementations
+
+pmmeans = function(...)
+ .pmwrap(lsmeans, ...)
+
+# uh, maybe not... pmms = pmmeans
+
+pmtrends = function(...)
+ .pmwrap(lstrends, ...)
+
+
+pmmip = function(...)
+ lsmip(...)
+
+pmm = function(...)
+ lsm(...)
+
+pmmobj = function(...)
+ .pmwrap(lsmobj, ...)
+
+pmm.options = function(...)
+ lsm.options(...)
+
+get.pmm.option = function(...)
+ get.lsm.option(...)
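The pm* wrappers are plain synonyms for their ls* counterparts, differing only in how the estimates are labeled. A minimal sketch (model and data are illustrative, not from this commit):

    warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
    lsmeans(warp.lm, ~ tension)   # column labeled "lsmean"
    pmmeans(warp.lm, ~ tension)   # same numbers, column labeled "pmmean"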
diff --git a/R/rbind.R b/R/rbind.R
index eec0d08..29ed95f 100644
--- a/R/rbind.R
+++ b/R/rbind.R
@@ -19,6 +19,7 @@ rbind.ref.grid = function(..., deparse.level = 1, adjust = "mvt") {
bnms = unlist(lapply(objs, function(o) o@misc$by.vars))
grids = lapply(objs, function(o) o@grid)
gnms = unique(c(bnms, unlist(lapply(grids, names))))
+ gnms = setdiff(gnms, c(".wgt.", ".offset.")) # exclude special names
grid = data.frame(.tmp. = seq_len(n <- nrow(obj@linfct)))
for (g in gnms)
grid[[g]] = rep(".", n)
@@ -28,7 +29,7 @@ rbind.ref.grid = function(..., deparse.level = 1, adjust = "mvt") {
rows = n.before + seq_along(g[[1]])
n.before = max(rows)
for (nm in names(g))
- grid[rows, nm] = as.character(g[[nm]])
+ grid[rows, nm] = g[[nm]] ####as.character(g[[nm]])
}
obj@grid = grid
update(obj, pri.vars = gnms, by.vars = NULL, adjust = adjust,
@@ -47,3 +48,4 @@ rbind.ref.grid = function(..., deparse.level = 1, adjust = "mvt") {
x@misc$by.vars = NULL
x
}
+
diff --git a/R/ref.grid.R b/R/ref.grid.R
index 3d84b88..a3d131f 100644
--- a/R/ref.grid.R
+++ b/R/ref.grid.R
@@ -8,7 +8,7 @@
# FALSE - same as function(x) sort(unique(x))
ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
- options = lsm.options()$ref.grid, data, type, ...) {
+ options = get.lsm.option("ref.grid"), data, type, ...) {
# recover the data
if (missing(data)) {
data = try(recover.data (object, data = NULL, ...))
@@ -42,7 +42,7 @@ ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
else if (inherits(cvr, "formula")) {
if (length(cvr) < 3)
stop("Formulas in 'cov.reduce' must be two-sided")
- lhs = All.vars(cvr)[1]
+ lhs = .all.vars(cvr)[1]
dep.x[[lhs]] <<- cvr
cvr = mean
}
@@ -130,7 +130,7 @@ ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
# resolve any covariate formulas
for (xnm in names(dep.x)) {
- if (!all(All.vars(dep.x[[xnm]]) %in% names(grid)))
+ if (!all(.all.vars(dep.x[[xnm]]) %in% names(grid)))
stop("Formulas in 'cov.reduce' must predict covariates actually in the model")
xmod = lm(dep.x[[xnm]], data = data)
grid[[xnm]] = predict(xmod, newdata = grid)
@@ -144,9 +144,11 @@ ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
form = attr(data, "call")$formula
if (is.null(misc$tran) && (length(form) > 2)) { # No link fcn, but response may be transformed
lhs = form[-3] ####form[[2]]
- tran = setdiff(All.vars(lhs, functions = TRUE), c(All.vars(lhs), "~", "cbind"))
- if(length(tran) == 1)
+ tran = setdiff(.all.vars(lhs, functions = TRUE), c(.all.vars(lhs), "~", "cbind"))
+ if(length(tran) == 1) {
misc$tran = tran
+ misc$inv.lbl = "response"
+ }
}
# Take care of multivariate response
@@ -291,7 +293,7 @@ ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
covs.d = names(data)[!isfac]
lbls = attr(trms, "term.labels")
- M = model.frame(trms, data)
+ M = model.frame(trms, utils::head(data, 2)) #### just need a couple rows
isfac = sapply(M, function(x) inherits(x, "factor"))
# Character vector of terms in the model frame that are factors ...
@@ -304,7 +306,7 @@ ref.grid <- function(object, at, cov.reduce = mean, mult.name, mult.levs,
if(length(cterms) == 0)
return(cterms)
# (else) Strip off the function calls
- cvars = lapply(cterms, function(x) All.vars(reformulate(x)))
+ cvars = lapply(cterms, function(x) .all.vars(reformulate(x)))
# Exclude any variables that are already factors
intersect(unique(unlist(cvars)), covs.d)
@@ -370,9 +372,7 @@ print.ref.grid = function(x,...)
# vcov method
vcov.ref.grid = function(object, ...) {
- tol = lsm.options()$estble.tol
- if(is.null(tol))
- tol = 1e-8
+ tol = get.lsm.option("estble.tol")
if (!is.null(hook <- object@misc$vcovHook)) {
if (is.character(hook))
hook = get(hook)
@@ -382,7 +382,7 @@ vcov.ref.grid = function(object, ...) {
X = object@linfct
estble = estimability::is.estble(X, object@nbasis, tol) ###apply(X, 1, .is.estble, object@nbasis, tol)
X[!estble, ] = NA
- X = X[, !is.na(object@bhat)]
+ X = X[, !is.na(object@bhat), drop = FALSE]
X %*% tcrossprod(object@V, X)
}
}
@@ -420,8 +420,8 @@ update.ref.grid = function(object, ..., silent = FALSE) {
### set or change lsmeans options
lsm.options = function(...) {
- opts = getOption("lsmeans")
- if (is.null(opts)) opts = list()
+ opts = getOption("lsmeans", list())
+# if (is.null(opts)) opts = list()
newopts = list(...)
for (nm in names(newopts))
opts[[nm]] = newopts[[nm]]
@@ -429,11 +429,26 @@ lsm.options = function(...) {
invisible(opts)
}
+# equivalent of getOption()
+get.lsm.option = function(x, default = lsmeans::defaults[[x]]) {
+ opts = getOption("lsmeans", list())
+ if(is.null(default) || x %in% names(opts))
+ opts[[x]]
+ else
+ default
+}
+
+### Exported defaults for certain options
+defaults = list(
+ estble.tol = 1e-8, # tolerance for estimability checks
+ disable.pbkrtest = FALSE, # whether to bypass pbkrtest routines for lmerMod
+ pbkrtest.limit = 3000 # limit on N for enabling adj V
+)
+
# Utility that returns TRUE if getOption("lsmeans")[[opt]] is TRUE
.lsm.is.true = function(opt) {
- x = getOption("lsmeans")[[opt]]
- if (is.null(x)) FALSE
- else if (is.logical(x)) x
+ x = get.lsm.option(opt, FALSE)
+ if (is.logical(x)) x
else FALSE
}
@@ -443,7 +458,7 @@ lsm.options = function(...) {
### Primary reason to do this is with transform = TRUE, then can
### work with linear functions of the transformed predictions
regrid = function(object, transform = TRUE) {
- est = .est.se.df(object, do.se = FALSE)
+ est = .est.se.df(object, do.se = TRUE) ###FALSE)
estble = !(is.na(est[[1]]))
object@V = vcov(object)[estble, estble, drop=FALSE]
object@bhat = est[[1]]
@@ -454,13 +469,26 @@ regrid = function(object, transform = TRUE) {
object@nbasis = object@linfct[, !estble, drop = FALSE]
if(transform && !is.null(object@misc$tran)) {
link = attr(est, "link")
- D = diag(link$mu.eta(object@bhat[estble]))
+ D = .diag(link$mu.eta(object@bhat[estble]))
object@bhat = link$linkinv(object@bhat)
object@V = D %*% tcrossprod(object@V, D)
inm = object@misc$inv.lbl
if (!is.null(inm))
object@misc$estName = inm
object@misc$tran = object@misc$inv.lbl = NULL
+ # override the df function
+ df = est$df
+ if (length(unique(df)) == 1) {
+ object@dfargs = list(df = df[1])
+ object@dffun = function(k, dfargs) dfargs$df
+ }
+ else { # use containment df
+ object@dfargs = list(df = df)
+ object@dffun = function(k, dfargs) {
+ idx = which(zapsmall(k) != 0)
+ ifelse(length(idx) == 0, NA, min(dfargs$df[idx]))
+ }
+ }
}
# Nix out things that are no longer needed or valid
object@grid$.offset. = object@misc$offset.mult =
@@ -472,3 +500,6 @@ regrid = function(object, transform = TRUE) {
### S4 methods
## use S3 for this setMethod("summary", "ref.grid", summary.ref.grid)
setMethod("show", "ref.grid", function(object) str.ref.grid(object))
+
+
+
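The new get.lsm.option() looks in getOption("lsmeans") first and falls back to the exported lsmeans::defaults list, so the pbkrtest.limit introduced above can be inspected and overridden like any other option. A short sketch (values shown assume no prior lsm.options() calls):

    get.lsm.option("pbkrtest.limit")    # 3000, the packaged default
    lsm.options(pbkrtest.limit = 5000)  # allow K-R adjustments for larger fits
    get.lsm.option("pbkrtest.limit")    # now 5000
    lsm.options(pbkrtest.limit = NULL)  # revert to the default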
diff --git a/R/summary.R b/R/summary.R
index b8a3e82..806207c 100644
--- a/R/summary.R
+++ b/R/summary.R
@@ -8,21 +8,6 @@
else 0
}
-# utility to check estimability of x'beta, given nonest.basis
-### Moved this to separate 'estimability' package
-# is.estble = function(x, nbasis, tol=1e-8) {
-# if (is.matrix(x))
-# return(apply(x, 1, is.estble, nbasis, tol))
-# if(is.na(nbasis[1]))
-# TRUE
-# else {
-# chk = as.numeric(crossprod(nbasis, x))
-# ssqx = sum(x*x) # BEFORE subsetting x
-# # If x really small, don't scale chk'chk
-# if (ssqx < tol) ssqx = 1
-# sum(chk*chk) < tol * ssqx
-# }
-# }
# utility fcn to get est's, std errors, and df
# new arg: do.se -- if FALSE, just do the estimates and return 0 for se and df
@@ -31,9 +16,7 @@
# tol=getOption("lsmeans")$estble.tol) {
# 2.13: Revised to call w/ just object instead of all those args (except linfct)
# Also moved offset comps to here, and provided for misc$estHook
-.est.se.df = function(object, do.se=TRUE, tol=lsm.options()$estble.tol) {
- if (is.null(tol))
- tol = 1e-8
+.est.se.df = function(object, do.se=TRUE, tol = get.lsm.option("estble.tol")) {
misc = object@misc
if (!is.null(hook <- misc$estHook)) {
if (is.character(hook)) hook = get(hook)
@@ -321,7 +304,7 @@
# S3 predict method
predict.ref.grid <- function(object, type, ...) {
# update with any "summary" options
- opt = lsm.options()$summary
+ opt = get.lsm.option("summary")
if(!is.null(opt)) {
opt$object = object
object = do.call("update.ref.grid", opt)
@@ -349,7 +332,7 @@ predict.ref.grid <- function(object, type, ...) {
summary.ref.grid <- function(object, infer, level, adjust, by, type, df,
null = 0, delta = 0, side = 0, ...) {
# update with any "summary" options
- opt = lsm.options()$summary
+ opt = get.lsm.option("summary")
if(!is.null(opt)) {
opt$object = object
object = do.call("update.ref.grid", opt)
@@ -406,6 +389,14 @@ summary.ref.grid <- function(object, infer, level, adjust, by, type, df,
mesg = object@misc$initMesg
+ ### Add an annotation when we show results on lp scale and
+ ### there is a transformation
+ if (!inv && !is.null(linkName <- object@misc$tran)) {
+ if (!is.character(linkName))
+ linkName = "linear predictor (not response)"
+ mesg = c(mesg, paste("Results are given on the", linkName, "scale."))
+ }
+
# et = 1 if a prediction, 2 if a contrast (or unmatched or NULL), 3 if pairs
et = pmatch(c(object@misc$estType, "c"), c("prediction", "contrast", "pairs"), nomatch = 2)[1]
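The effect of the new annotation can be seen with any link-function model; a sketch using a Poisson fit (illustrative, not from this commit):

    pois.glm <- glm(breaks ~ wool * tension, family = poisson, data = warpbreaks)
    summary(lsmeans(pois.glm, ~ tension))
    # ... now annotated: "Results are given on the log scale."
    summary(lsmeans(pois.glm, ~ tension), type = "response")   # back-transformed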
diff --git a/inst/NEWS b/inst/NEWS
index 84496e0..2d4955e 100644
--- a/inst/NEWS
+++ b/inst/NEWS
@@ -1,5 +1,37 @@
Changelog for 'lsmeans' package
+2.21
+ Changed implementation of pbkrtest.limit - now based on
+ number of rows of design matrix.
+ Added support for "hurdle" and "zeroinfl" classes (pscl package)
+ In 'regrid', dffun now implements a containment method
+ Additional annotation to clarify when results are on the
+ linear-predictor scale.
+ Added courtesy wrappers pmmeans, pmtrends, pmmip, ... for
+ those who prefer "predicted marginal means" to
+ "least-squares means". Output is re-labeled as well.
+ Export a few hidden fcns of possible use to package developers
+ Updates to "extending" vignette
+ Miscellaneous bug fixes, including repair to d.f. method from `regrid`
+
+
+2.20-23
+ Added 'get.lsm.option()', works like 'getOption()' but with defaults
+ obtained from 'lsmeans::defaults'
+ Added pbkrtest.limit option -- to limit object size of 'lmerMod' objects
+ for which the K-R method is used, to prevent excessive time/memory
+ Fixed bug that occurs with models where the response is not
+ discernible from the call.
+ In plot.lsmobj, int.adjust now defaults to object's 'adjust' setting
+ (as it should have originally)
+
+
+2.20-2
+ Fixed error in df for joint tests with aovlist objects (but existing
+ ref.grid objects need to be rebuilt for it to work)
+ Patched problem with rbind with R < 3.2
+
+
2.20
Added 'rbind' function for combining ref.grids into one family
Added "[.ref.grid" method for subsetting a reference grid
diff --git a/inst/doc/extending.R b/inst/doc/extending.R
index 8d24b52..74d11a7 100644
--- a/inst/doc/extending.R
+++ b/inst/doc/extending.R
@@ -1,5 +1,4 @@
### R code from vignette source 'extending.rnw'
-### Encoding: ISO8859-1
###################################################
### code chunk number 1: extending.rnw:40-42
@@ -99,22 +98,16 @@ lsmeans(fake.lts, ~ B | A)
###################################################
-### code chunk number 14: extending.rnw:165-174
+### code chunk number 14: extending.rnw:194-197
###################################################
-lsm.basis.lqs = function(object, trms, xlev, grid, ...) {
- m = model.frame(trms, grid, na.action = na.pass, xlev = xlev)
- X = model.matrix(trms, m, contrasts.arg = object$contrasts)
- bhat = coef(object)
- V = diag(rep(NA, length(bhat)))
- nbasis = matrix(NA)
- dffun = function(k, dfargs) NA
- list(X=X, bhat=bhat, nbasis=nbasis, V=V, dffun=dffun, dfargs=list())
-}
+form = ~ data$x + data[[5]]
+base::all.vars(form)
+lsmeans::.all.vars(form)
###################################################
-### code chunk number 15: extending.rnw:177-178
+### code chunk number 15: extending.rnw:204-205
###################################################
-lsmeans(fake.lts, pairwise ~ B)
+.get.offset(terms(~ speed + offset(.03*breaks)), head(warpbreaks))
diff --git a/inst/doc/extending.pdf b/inst/doc/extending.pdf
index 345f0e8..ba704a2 100644
Binary files a/inst/doc/extending.pdf and b/inst/doc/extending.pdf differ
diff --git a/inst/doc/extending.rnw b/inst/doc/extending.rnw
index c527b5d..118842e 100644
--- a/inst/doc/extending.rnw
+++ b/inst/doc/extending.rnw
@@ -141,7 +141,7 @@ Before explaining it, let's verify that it works:
<<>>=
lsmeans(fake.lts, ~ B | A)
@
-Hooray! Note the results are comparable to those we had for "fake.rlm", albeit the standard errors are quite a bit smaller.
+Hooray! Note the results are comparable to those we had for "fake.rlm", although the standard errors are quite a bit smaller. (In fact, the SEs could be misleading; a better method for estimating covariances should probably be implemented, but that is beyond the scope of this vignette.)
\subsection{Dissecting \code{lsm.basis.lqs}}
Let's go through the listing of this method, by line numbers.
@@ -160,9 +160,10 @@ There is a subtlety you need to know regarding estimability. Suppose the model i
\item[10:] Return these results in a named list.
\end{itemize}
+\ifx %---------- This subsection removed as it's extra fluff -------------
\subsection{The ``honest'' version}
Because of the inadequacies mentioned above for estimating the covariance matrix---lacking any better estimate---I think it's probably better to set it and the degrees of freedom to "NA"s. We will still be able to get the LS~means and contrasts thereof, but no standard errors or tests. With that in mind, here's a replacement version:
-<<>>=
+%<<>>=
lsm.basis.lqs = function(object, trms, xlev, grid, ...) {
m = model.frame(trms, grid, na.action = na.pass, xlev = xlev)
X = model.matrix(trms, m, contrasts.arg = object$contrasts)
@@ -172,11 +173,12 @@ lsm.basis.lqs = function(object, trms, xlev, grid, ...) {
dffun = function(k, dfargs) NA
list(X=X, bhat=bhat, nbasis=nbasis, V=V, dffun=dffun, dfargs=list())
}
-@
+%@
And here is a test:
-<<>>=
+%<<>>=
lsmeans(fake.lts, pairwise ~ B)
-@
+%@
+\fi % ----------- end of cut ---------------------------
\section{Hook functions}
Most linear models supported by \lsm{} have straightforward structure: Regression coefficients, their covariance matrix, and a set of linear functions that define the reference grid. However, a few are more complex. An example is the \dqt{clm} class in the \pkg{ordinal} package, which allows a scale model in addition to the location model. When a scale model is used, the scale parameters are included in the model matrix, regression coefficients, and covariance matrix, and we can't just u [...]
@@ -186,6 +188,27 @@ In addition, you may want to apply some form of special post-processing after th
\section{Exported methods}
For package developers' convenience, \pkg{lsmeans} exports some of its S3 methods for "recover.data" and/or "lsm.basis"---use \code{methods(\dqt{recover.data})} and \code{methods(\dqt{lsm.basis})} to discover which ones. It may be that all you need is to invoke one of those methods and perhaps make some small changes---especially if your model-fitting algorithm makes heavy use of an existing model type supported by \pkg{lsmeans}. Contact me if you need \pkg{lsmeans} to export some additi [...]
+A few additional functions are exported because they may be useful to developers. They are as follows:
+\begin{description}
+\item[\code{.all.vars(expr, retain)}] Some users of your package may include "$" or "[[]]" operators in their model formulas. If you need to get the variable names, "base::all.vars" will probably not give you what you need. Here is an example:
+<<>>=
+form = ~ data$x + data[[5]]
+base::all.vars(form)
+lsmeans::.all.vars(form)
+@
+The "retain" argument may be used to specify regular expressions for patterns to retain as parts of variable names.
+\item[\code{.diag(x, nrow, ncol)}]
+The base "diag" function has a booby trap whereby, for example, "diag(57.6)" returns a $57\times 57$ identity matrix rather than a $1\times1$ matrix with $57.6$ as its only element. But "lsmeans::.diag(57.6)" will return the latter. The function works identically to "diag" except for the identity-matrix trap.
+\item[\code{.aovlist.dffun(k, dfargs)}] This function is exported because it is needed for computing degrees of freedom for models fitted using "aov", but it may be useful for other cases where Satterthwaite degrees-of-freedom calculations are needed. It requires the "dfargs" slot to contain the same elements ("Vmats", "Vidx", "Vdf", and "wts") that the "aovlist" support sets up.
+\item[\code{.get.offset(terms, grid)}] If "terms" is a model formula containing an "offset" call, this will compute that offset in the context of "grid" (a "data.frame").
+<<>>=
+.get.offset(terms(~ speed + offset(.03*breaks)), head(warpbreaks))
+@
+\item[\code{.my.vcov(object, ...)}]
+In a call to "ref.grid", "lsmeans", etc., the user may use "vcov." to specify an alternative function or matrix to use as the covariance matrix of the fixed-effects coefficients. This function supports that feature. Calling ".my.vcov" in place of the "vcov" method will substitute the user's "vcov." when it is present in "...".
+
+\end{description}
+
\section{Conclusions}
It is relatively simple to write appropriate methods that work with \lsm{} for model objects it does not support. I hope this vignette is helpful for understanding how. Furthermore, if you are the developer of a package that fits linear models, I encourage you to include "recover.data" and "lsm.basis" methods for those classes of objects, and to remember to export them in your "NAMESPACE" file as follows:
diff --git a/inst/doc/using-lsmeans.R b/inst/doc/using-lsmeans.R
index 4865013..1c53ecc 100644
--- a/inst/doc/using-lsmeans.R
+++ b/inst/doc/using-lsmeans.R
@@ -1,5 +1,4 @@
### R code from vignette source 'using-lsmeans.rnw'
-### Encoding: ISO8859-1
###################################################
### code chunk number 1: using-lsmeans.rnw:49-51
@@ -347,23 +346,29 @@ cld (Chick.lst)
###################################################
lsm.options(ref.grid = list(level = .90),
lsmeans = list(),
- contrast = list(infer = c(TRUE,TRUE)))
+ contrast = list(infer = c(TRUE, TRUE)))
###################################################
-### code chunk number 54: using-lsmeans.rnw:559-560
+### code chunk number 54: using-lsmeans.rnw:551-552
+###################################################
+get.lsm.option("estble.tol")
+
+
+###################################################
+### code chunk number 55: using-lsmeans.rnw:563-564
###################################################
lsmeans(Oats.lmer2, pairwise ~ Variety)
###################################################
-### code chunk number 55: using-lsmeans.rnw:564-565
+### code chunk number 56: using-lsmeans.rnw:568-569
###################################################
lsm.options(ref.grid = NULL, contrast = NULL)
###################################################
-### code chunk number 56: using-lsmeans.rnw:575-578
+### code chunk number 57: using-lsmeans.rnw:579-582
###################################################
nutr.lm <- lm(gain ~ (age + group + race)^2, data = nutrition)
library("car")
@@ -371,21 +376,21 @@ Anova(nutr.lm)
###################################################
-### code chunk number 57: nutr-intplot
+### code chunk number 58: nutr-intplot
###################################################
lsmip(nutr.lm, race ~ age | group)
lsmeans(nutr.lm, ~ group*race)
###################################################
-### code chunk number 58: using-lsmeans.rnw:595-597
+### code chunk number 59: using-lsmeans.rnw:599-601
###################################################
nutr.lsm <- lsmeans(nutr.lm, ~ group * race, weights = "proportional",
at = list(age = c("2","3"), race = c("Black","White")))
###################################################
-### code chunk number 59: using-lsmeans.rnw:600-603
+### code chunk number 60: using-lsmeans.rnw:604-607
###################################################
nutr.lsm
summary(pairs(nutr.lsm, by = "race"), by = NULL)
@@ -393,7 +398,7 @@ summary(pairs(nutr.lsm, by = "group"), by = NULL)
###################################################
-### code chunk number 60: using-lsmeans.rnw:616-620
+### code chunk number 61: using-lsmeans.rnw:620-624
###################################################
lsmeans(nutr.lm, "race", weights = "equal")
lsmeans(nutr.lm, "race", weights = "prop")
@@ -402,20 +407,20 @@ lsmeans(nutr.lm, "race", weights = "cells")
###################################################
-### code chunk number 61: using-lsmeans.rnw:629-631
+### code chunk number 62: using-lsmeans.rnw:633-635
###################################################
temp = lsmeans(nutr.lm, c("group","race"), weights = "prop")
lsmeans(temp, "race", weights = "prop")
###################################################
-### code chunk number 62: using-lsmeans.rnw:636-637
+### code chunk number 63: using-lsmeans.rnw:640-641
###################################################
with(nutrition, tapply(gain, race, mean))
###################################################
-### code chunk number 63: using-lsmeans.rnw:644-648
+### code chunk number 64: using-lsmeans.rnw:648-652
###################################################
library("mediation")
levels(framing$educ) = c("NA","Ref","< HS", "HS", "> HS","Coll +")
@@ -424,34 +429,34 @@ framing.glm = glm(cong_mesg ~ age + income + educ + emo + gender * factor(treat)
###################################################
-### code chunk number 64: framinga
+### code chunk number 65: framinga
###################################################
lsmip(framing.glm, treat ~ educ | gender, type = "response")
###################################################
-### code chunk number 65: framingb
+### code chunk number 66: framingb
###################################################
lsmip(framing.glm, treat ~ educ | gender, type = "response",
cov.reduce = emo ~ treat*gender + age + educ + income)
###################################################
-### code chunk number 66: using-lsmeans.rnw:675-677
+### code chunk number 67: using-lsmeans.rnw:679-681
###################################################
ref.grid(framing.glm,
cov.reduce = emo ~ treat*gender + age + educ + income)@grid
###################################################
-### code chunk number 67: using-lsmeans.rnw:708-710 (eval = FALSE)
+### code chunk number 68: using-lsmeans.rnw:712-714 (eval = FALSE)
###################################################
## rg <- ref.grid(my.model, at = list(x1 = c(5,10,15)),
## cov.reduce = list(x2 ~ x1, x3 ~ x1 + x2))
###################################################
-### code chunk number 68: housing-plot
+### code chunk number 69: housing-plot
###################################################
library("ordinal")
data(housing, package = "MASS")
@@ -461,39 +466,39 @@ lsmip(housing.clm, Cont ~ Infl | Type, layout = c(4,1))
###################################################
-### code chunk number 69: using-lsmeans.rnw:755-756
+### code chunk number 70: using-lsmeans.rnw:759-760
###################################################
test(pairs(lsmeans(housing.clm, ~ Infl | Type)), joint = TRUE)
###################################################
-### code chunk number 70: using-lsmeans.rnw:759-760
+### code chunk number 71: using-lsmeans.rnw:763-764
###################################################
test(pairs(lsmeans(housing.clm, ~ Cont | Type)), joint = TRUE)
###################################################
-### code chunk number 71: using-lsmeans.rnw:765-766
+### code chunk number 72: using-lsmeans.rnw:769-770
###################################################
ref.grid(housing.clm, mode = "cum.prob")
###################################################
-### code chunk number 72: using-lsmeans.rnw:769-771
+### code chunk number 73: using-lsmeans.rnw:773-775
###################################################
lsmeans(housing.clm, ~ Infl, at = list(cut = "Medium|High"),
mode = "cum.prob")
###################################################
-### code chunk number 73: using-lsmeans.rnw:774-776
+### code chunk number 74: using-lsmeans.rnw:778-780
###################################################
summary(lsmeans(housing.clm, ~ Infl, at = list(cut = "Medium|High"),
mode = "linear.predictor"), type = "response")
###################################################
-### code chunk number 74: using-lsmeans.rnw:784-792
+### code chunk number 75: using-lsmeans.rnw:788-796
###################################################
require("nlme")
options(contrasts = c("contr.treatment", "contr.poly"))
@@ -506,14 +511,14 @@ Chick.nlme
###################################################
-### code chunk number 75: using-lsmeans.rnw:795-797
+### code chunk number 76: using-lsmeans.rnw:799-801
###################################################
cld(lsmeans(Chick.nlme, ~ Diet, param = "asym"))
cld(lsmeans(Chick.nlme, ~ Diet, param = "xmid"))
###################################################
-### code chunk number 76: using-lsmeans.rnw:809-814
+### code chunk number 77: using-lsmeans.rnw:813-818
###################################################
library("MCMCpack")
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
@@ -523,13 +528,13 @@ posterior <- MCMCpoisson(counts ~ outcome + treatment, mcmc = 1000)
###################################################
-### code chunk number 77: using-lsmeans.rnw:817-818
+### code chunk number 78: using-lsmeans.rnw:821-822
###################################################
( post.lsm <- lsmeans(posterior, "treatment") )
###################################################
-### code chunk number 78: using-lsmeans.rnw:821-823
+### code chunk number 79: using-lsmeans.rnw:825-827
###################################################
library("coda")
summary(as.mcmc(post.lsm))
diff --git a/inst/doc/using-lsmeans.pdf b/inst/doc/using-lsmeans.pdf
index 57812df..a92f2c9 100644
Binary files a/inst/doc/using-lsmeans.pdf and b/inst/doc/using-lsmeans.pdf differ
diff --git a/inst/doc/using-lsmeans.rnw b/inst/doc/using-lsmeans.rnw
index c24f98d..9010662 100644
--- a/inst/doc/using-lsmeans.rnw
+++ b/inst/doc/using-lsmeans.rnw
@@ -543,11 +543,15 @@ Note: "lstrends" computes a difference quotient based on two slightly different
<<>>=
lsm.options(ref.grid = list(level = .90),
lsmeans = list(),
- contrast = list(infer = c(TRUE,TRUE)))
+ contrast = list(infer = c(TRUE, TRUE)))
@
This requests that any object created by "ref.grid" be set to have confidence levels default to $90$\%, and that "contrast" results are displayed with both intervals and tests. No new options are set for "lsmeans" results, and the "lsmeans" part could have been omitted. These options are stored with objects created by "ref.grid", "lsmeans", and "contrast". For example, even though no new defaults are set for "lsmeans", future calls to "lsmeans" \emph{on a model object} will be displayed [...]
-
+Certain other options are available; for example, the \dqt{estble.tol} option sets the tolerance for determining estimability of linear contrasts. To see its current value:
+<<>>=
+get.lsm.option("estble.tol")
+@
+Defaults for this and some other parameters are saved in "lsmeans::defaults".
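For illustration, here is a minimal sketch (using only the option names documented above; not part of the diff) of changing and then restoring that tolerance:
<<>>=
old.tol <- get.lsm.option("estble.tol")  # save the current tolerance
lsm.options(estble.tol = 1e-6)           # loosen it, e.g. for a rank-deficient fit
lsm.options(estble.tol = old.tol)        # restore the previous value
@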
diff --git a/man/extending.Rd b/man/extending.Rd
index db584a1..6d0dbf2 100644
--- a/man/extending.Rd
+++ b/man/extending.Rd
@@ -2,7 +2,7 @@
\alias{recover.data}
\alias{recover.data.call}
\alias{lsm.basis}
-\alias{All.vars}
+%%\alias{.all.vars} % documented in the vignette
%%\alias{nonest.basis} % moved to estimability package
%%\alias{is.estble}
@@ -20,8 +20,8 @@ recover.data(object, ...)
data = NULL, params = NULL, ...)
lsm.basis(object, trms, xlev, grid, ...)
-
-All.vars(expr, retain = c("\\\\$", "\\\\[\\\\[", "\\\\]\\\\]"), ...)
+%%%
+%%%.all.vars(expr, retain = c("\\\\$", "\\\\[\\\\[", "\\\\]\\\\]"), ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
@@ -36,8 +36,8 @@ An object returned from a model-fitting function.}
model formula that are \emph{not} predictors. An example would be a
variable \code{knots} specifying the knots to use in a spline model.
}
- \item{expr}{A formula}
- \item{retain}{Character vector of operators to retain (escaped as for \code{\link{gsub}})}
+% \item{expr}{A formula}
+% \item{retain}{Character vector of operators to retain (escaped as for \code{\link{gsub}})}
\item{\dots}{Additional arguments passed to other methods.}
}
@@ -65,7 +65,7 @@ The models already supported are detailed in \code{\link{models}}. Some packages
\item{dffun}{A function of \code{(k, dfargs)} that returns the degrees of freedom associated with \code{sum(k * bhat)}.}
\item{dfargs}{A \code{list} containing additional arguments needed for \code{dffun}.}
-\code{All.vars} is an enhancement of \code{\link{all.vars}}, whereby the operators specified in \code{retain} are left intact. Thus, \code{All.vars(foo$y ~ bar[[2]])} returns \code{"foo$y", "bar[[2]]"}, whereas \code{all.vars} returns \code{"foo", "y", "bar"}
+%%%\code{.all.vars} is an enhancement of \code{\link{all.vars}}, whereby the operators specified in \code{retain} are left intact. Thus, \code{.all.vars(foo$y ~ bar[[2]])} returns \code{"foo$y", "bar[[2]]"}, whereas \code{all.vars} returns \code{"foo", "y", "bar"}
}
%\references{}
@@ -76,6 +76,10 @@ Some models may need something other than standard linear estimates and standard
The \code{estHook} function should have arguments \samp{(object, do.se, tol, ...)} where \code{object} is the \code{ref.grid} or \code{lsmobj} object, \code{do.se} is a logical flag for whether to return the standard error, and \code{tol} is the tolerance for assessing estimability. It should return a matrix with 3 columns: the estimates, standard errors (\code{NA} when \code{do.se==FALSE}), and degrees of freedom (\code{NA} for asymptotic). The number of rows should be the same as \samp [...]
}
+\section{Additional functions}{
+A few additional functions used in the \pkg{lsmeans} codebase are exported as they may be useful to package developers. See details near the end of the vignette \code{"extending"}.
+}
+
\author{
Russell V. Lenth
}
@@ -101,13 +105,6 @@ Russell V. Lenth
warpsing.lm$xlevels, grid)
} % end dontrun
-# Compare results of all.vars and All.vars
-form = log(cows$weight) ~ factor(bulls[[3]]) * herd$breed
-all.vars(form)
-All.vars(form)
-
-all.vars(form, functions = TRUE) ## same as all.names(form, unique = TRUE)
-All.vars(form, functions = TRUE)
}
diff --git a/man/glht.Rd b/man/glht.Rd
index 310d9cd..5f185c8 100644
--- a/man/glht.Rd
+++ b/man/glht.Rd
@@ -6,6 +6,7 @@
\alias{as.glht.ref.grid}
\alias{as.glht.lsm.list}
\alias{summary.glht.list}
+\alias{pmm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
@@ -21,6 +22,7 @@ These functions and methods provide an interface between \pkg{lsmeans} and the \
\method{summary}{glht.list}(object, ...)
lsm(...)
+pmm(...)
}
\arguments{
\item{object}{
@@ -32,7 +34,7 @@ Additional arguments to other methods.
}
}
\details{
-\code{lsm} is meant to be called only \emph{from} \code{"glht"} as its second (\code{linfct}) argument. It works similarly to \code{\link[multcomp]{mcp}} except with \code{specs} (and optionally \code{by} and \code{contr} arguments) provided as in a call to \code{\link{lsmeans}}.
+\code{lsm} (and \code{pmm}, which is identical) is meant to be called only \emph{from} \code{"glht"} as its second (\code{linfct}) argument. It works similarly to \code{\link[multcomp]{mcp}} except with \code{specs} (and optionally \code{by} and \code{contr} arguments) provided as in a call to \code{\link{lsmeans}} or \code{pmmeans}.
When there is a non-\code{NULL} \code{by} variable (either explicitly or implicitly), each ``by'' group is passed separately to \code{glht} and returned as a \code{list} of \code{"glht"} objects. For convenience, this is classed as \code{"glht.list"} and a \code{summary} method is provided.
}
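A minimal sketch of this calling pattern, assuming the usual warpbreaks fit used in this package's examples:

library("multcomp")
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
summary(glht(warp.lm, lsm(pairwise ~ tension | wool)))  # a "glht.list", one element per wool
summary(glht(warp.lm, pmm(pairwise ~ tension | wool)))  # pmm() gives identical results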
diff --git a/man/lsmeans-package.Rd b/man/lsmeans-package.Rd
index 3ecf4c0..88b94d9 100644
--- a/man/lsmeans-package.Rd
+++ b/man/lsmeans-package.Rd
@@ -1,6 +1,7 @@
\name{lsmeans-package}
\alias{lsmeans-package}
\docType{package}
+\encoding{utf-8}
\title{
Least-squares means
}
diff --git a/man/lsmeans.Rd b/man/lsmeans.Rd
index 8d2b58e..788981c 100644
--- a/man/lsmeans.Rd
+++ b/man/lsmeans.Rd
@@ -6,10 +6,13 @@
\alias{lsmeans.character.ref.grid}
\alias{lstrends}
\alias{lsmobj}
+\alias{pmmeans}
+\alias{pmtrends}
+\alias{pmmobj}
-\title{Least-squares means}
+\title{Least-squares means (or predicted marginal means)}
\description{
-Compute least-squares means for specified factors or factor combinations in a linear model,
+Compute least-squares means (predicted marginal means) for specified factors or factor combinations in a linear model,
and optionally comparisons or contrasts among them.
}
\usage{
@@ -30,6 +33,10 @@ and optionally comparisons or contrasts among them.
lstrends(model, specs, var, delta.var = 0.01 * rng, data, ...)
lsmobj(bhat, V, levels, linfct, df = NA, post.beta = matrix(NA), ...)
+
+pmmeans(...)
+pmtrends(...)
+pmmobj(...)
}
\arguments{
\item{object}{
@@ -89,7 +96,9 @@ The value of \emph{h} to use in forming the difference quotient \emph{(f(x+h) -
\details{
-Least-squares means are predictions from a linear model over a \emph{reference grid}, or marginal averages thereof. They have been popularized by \pkg{SAS} (SAS Institute, 2012). The \code{\link{ref.grid}} function identifies/creates the reference grid upon which \code{lsmeans} is based.
+Least-squares means (also called predicted marginal means) are predictions from a linear model over a \emph{reference grid}, or marginal averages thereof. They have been popularized by \pkg{SAS} (SAS Institute, 2012). The \code{\link{ref.grid}} function identifies/creates the reference grid upon which \code{lsmeans} is based.
+
+For those who prefer the term \dQuote{predicted marginal means}, courtesy wrappers \code{pmmeans}, \code{pmtrends}, and \code{pmmobj} are provided that behave identically to those that start with \code{ls}, except that estimates are relabeled accordingly (e.g., \code{lsmean} becomes \code{pmmean}).
If \code{specs} is a \code{formula}, it should be of the form \code{contr ~ specs | by}. The formula is parsed and then used as the arguments \code{contr}, \code{specs}, and \code{by} as indicated. The left-hand side is optional, but if specified it should be the name of a contrast family (e.g., \code{pairwise}) or of a sub-list of \code{contr.list}. Operators like \code{*} or \code{:} are necessary to delineate names in the formulas, but otherwise are ignored.
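For example, assuming a fitted model such as \code{warp.lm} (the warpbreaks fit used in the examples below), the two-sided formula is split into its three parts:

# parsed as contr = "pairwise", specs = ~ tension, by = "wool"
lsmeans(warp.lm, pairwise ~ tension | wool)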
@@ -165,9 +174,10 @@ Oats.lme <- lme(yield ~ factor(nitro) + Variety,
test(Oats.anal) # Uses 1st element by default
confint(Oats.anal, which = 4) # or confint(Oats.anal[[4]])
+# Using 'pmmeans' wrapper ...
# These two calls apply the "bonf" adjustment to different elements...
-lsmeans(warp.lm, eff ~ wool, adjust = "bonf")
-lsmeans(warp.lm, eff ~ wool, options = list(adjust = "bonf"))
+pmmeans(warp.lm, eff ~ wool, adjust = "bonf")
+pmmeans(warp.lm, eff ~ wool, options = list(adjust = "bonf"))
# Advice: Do one thing at a time if you want non-default adjustments
# e.g., wool.lsm <- lsmeans(warp.lm, ~ wool)
# summary(wool.lsm, adjust = "bonf")
@@ -210,11 +220,11 @@ n <- c(44, 11, 37, 24)
se2 = s^2 / n
Satt.df <- function(x, dfargs)
sum(x * dfargs$v)^2 / sum((x * dfargs$v)^2 / (dfargs$n - 1))
-city.lsm <- lsmobj(bhat = ybar, V = diag(se2),
+city.pmm <- pmmobj(bhat = ybar, V = diag(se2),
levels = list(city = LETTERS[1:4]), linfct = diag(c(1,1,1,1)),
df = Satt.df, dfargs = list(v = se2, n = n), estName = "mean")
-city.lsm
-contrast(city.lsm, "revpairwise")
+city.pmm
+contrast(city.pmm, "revpairwise")
# See also many other examples in documentation for
diff --git a/man/lsmip.Rd b/man/lsmip.Rd
index 368bb5b..18b6537 100644
--- a/man/lsmip.Rd
+++ b/man/lsmip.Rd
@@ -1,9 +1,10 @@
\name{lsmip}
\alias{lsmip}
\alias{lsmip.default}
-%- Also NEED an '\alias' for EACH other topic documented here.
+\alias{pmmip}
+
\title{
-Least-squares means interaction plot
+Least-squares (predicted marginal) means interaction plot
}
\description{
This function creates an interaction plot of the least-squares means based on a fitted model and a simple formula specification.
@@ -12,6 +13,8 @@ This function creates an interaction plot of the least-squares means based on a
\method{lsmip}{default}(object, formula, type,
pch = c(1,2,6,7,9,10,15:20),
lty = 1, col = NULL, plotit = TRUE, ...)
+
+pmmip(...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
@@ -44,6 +47,8 @@ Additional arguments passed to \code{\link{lsmeans}} or to \code{\link[lattice]{
\details{
If \code{object} is a fitted model, \code{\link{lsmeans}} is called with an appropriate specification to obtain least-squares means for each combination of the factors present in \code{formula} (in addition, any arguments in \code{\dots} that match \code{at}, \code{trend}, \code{cov.reduce}, or \code{fac.reduce} are passed to \code{lsmeans}).
Otherwise, if \code{object} is an \code{lsmobj} object, its first element is used, and it must contain one \code{lsmean} value for each combination of the factors present in \code{formula}.
+
+The wrapper \code{pmmip} is provided for those who prefer the term \sQuote{predicted marginal means}.
}
\value{
(Invisibly), a \code{\link{data.frame}} with the table of least-squares means that were plotted, with an additional \code{"lattice"} attribute containing the \code{trellis} object for the plot.
diff --git a/man/models.Rd b/man/models.Rd
index 5e224b8..7a2cea3 100644
--- a/man/models.Rd
+++ b/man/models.Rd
@@ -69,7 +69,9 @@ If a matrix or function is supplied as \code{vcov.method}, it is interpreted as
\item{lmerMod}{If the \pkg{pbkrtest} package is installed, degrees of freedom for confidence intervals and tests are obtained using its \code{\link[pbkrtest]{ddf_Lb}} function, and the covariance matrix is adjusted using \code{\link[pbkrtest]{vcovAdj}}.
If \pkg{pbkrtest} is not installed, the covariance matrix is not adjusted, degrees of freedom are set to \code{NA}, and asymptotic results are displayed.
-The user may disable the use of \pkg{pbkrtest} via \samp{lsm.options(disable.pbkrtest=TRUE)} (this does not disable the \pkg{pbkrtest} package entirely, just its use in \pkg{lsmeans}). The \code{df} argument may be used to specify some other degrees of freedom. Specifying \code{df} is \emph{not} equivalent to disabling \pkg{pbkrtest}, because if not disabled, the covariance matrix is still adjusted.}
+The user may disable the use of \pkg{pbkrtest} via \samp{lsm.options(disable.pbkrtest=TRUE)} (this does not disable the \pkg{pbkrtest} package entirely, just its use in \pkg{lsmeans}). The \code{df} argument may be used to specify some other degrees of freedom. Specifying \code{df} is \emph{not} equivalent to disabling \pkg{pbkrtest}, because if not disabled, the covariance matrix is still adjusted.
+
+On a related matter: for very large objects, computation time or memory use may be excessive. The amount required depends roughly on the number of observations, \emph{N}, in the design matrix (because a major part of the computation involves inverting an \emph{N x N} matrix). Thus, \pkg{pbkrtest} is automatically disabled if \emph{N} exceeds the value of \code{get.lsm.option("pbkrtest.limit")}. If desired, the user may use \code{lsm.options} to adjust this limit from the default of 3000 (a short sketch follows this section).}
\item{glmerMod}{No degrees of freedom are available for these objects, so tests and confidence intervals are asymptotic.}
}}% lme4
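A short sketch of the \pkg{pbkrtest}-related options described above, using the option names as documented:

get.lsm.option("pbkrtest.limit")      # current limit, 3000 by default
lsm.options(pbkrtest.limit = 5000)    # raise the limit for a larger model
lsm.options(disable.pbkrtest = TRUE)  # or bypass pbkrtest entirely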
@@ -115,8 +117,11 @@ Tests and confidence intervals are asymptotic.}
\describe{
\item{multinom}{
The reference grid includes a pseudo-factor with the same name and levels as the multinomial response. There is an optional \code{mode} argument which should match \code{"prob"} or \code{"latent"}. With \code{mode = "prob"}, the reference-grid predictions consist of the estimated multinomial probabilities. The \code{"latent"} mode returns the linear predictor, recentered so that it averages to zero over the levels of the response variable (similar to sum-to-zero contrasts). Thus each lat [...]
+
+Please note that, because the probabilities sum to 1 (and the latent values sum to 0) over the multivariate-response levels, all sensible results from \code{lsmeans} must involve that response as one of the factors. For example, if \code{resp} is a response with \eqn{k} levels, \code{lsmeans(model, ~ resp | trt)} will yield the estimated multinomial distribution for each \code{trt}; but \code{lsmeans(model, ~ trt)} will just yield the average probability of \eqn{1/k} for each \code{trt}.
}}}% nnet, multinom
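As a sketch of the point above, consider a hypothetical multinomial fit to the housing data (also used elsewhere in this document):

library("nnet")
data(housing, package = "MASS")
hous.mult <- multinom(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
lsmeans(hous.mult, ~ Sat | Infl, mode = "prob")  # a distribution of Sat for each Infl
# lsmeans(hous.mult, ~ Infl) would just give a constant 1/3 for every Infl level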
+
\section{\pkg{ordinal} package}{
\describe{
\item{clm,clmm}{The reference grid will include all variables that appear in the main model as well as those in the \code{scale} or \code{nominal} models. There are two optional arguments: \code{mode} (a character string) and \code{rescale} (which defaults to \samp{c(0,1)}). \code{mode} should match one of \code{"latent"} (the default), \code{"linear.predictor"}, \code{"cum.prob"}, \code{"exc.prob"}, \code{"prob"}, \code{"mean.class"}, or \code{"scale"}.
@@ -134,6 +139,22 @@ Any grid point that is non-estimable by either the location or the scale model (
Tests and confidence intervals are asymptotic.}
}}% ordinal
+
+\section{\pkg{pscl} package}{
+\describe{
+\item{hurdle, zeroinfl}{
+Two optional arguments -- \code{mode} and \code{lin.pred} -- are provided. The \code{mode} argument has possible values \code{"response"} (the default), \code{"count"}, \code{"zero"}, or \code{"prob0"}. \code{lin.pred} is logical and defaults to \code{FALSE}.
+
+With \code{lin.pred = FALSE}, the results are comparable to those returned by \code{predict(..., type = "response")}, \code{predict(..., type = "count")}, \code{predict(..., type = "zero")}, or \code{predict(..., type = "prob")[, 1]}. See the documentation for \code{\link[pscl]{predict.hurdle}} and \code{\link[pscl]{predict.zeroinfl}}.
+
+The option \code{lin.pred = TRUE} only applies to \code{mode = "count"} and \code{mode = "zero"}. The results returned are on the linear-predictor scale, with the same transformation as the link function in that part of the model. The predictions for a reference grid with \code{mode = "count"}, \code{lin.pred = TRUE}, and \code{type = "response"} will be the same as those obtained with \code{lin.pred = FALSE} and \code{mode = "count"}; however, any LS means derived from these grids will [...]
+
+If the \code{vcov.} argument is used (see details in \code{\link{ref.grid}}), it must yield a matrix of the same size as would be obtained using \code{\link[pscl]{vcov.hurdle}} or \code{\link[pscl]{vcov.zeroinfl}} with its \code{model} argument set to \code{("full", "count", "zero")} in respective correspondence with \code{mode} of \code{("mean", "count", "zero")}.
+If \code{vcov.} is a function, it must support the \code{model} argument.
+}
+}}% pscl
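To make the \code{mode} and \code{lin.pred} choices concrete, here is a hypothetical sketch with a zero-inflated Poisson model (the bioChemists data ship with \pkg{pscl}):

library("pscl")
data("bioChemists", package = "pscl")
zip.fit <- zeroinfl(art ~ fem + mar + kid5 | fem, data = bioChemists)
lsmeans(zip.fit, ~ fem, mode = "response")                # mean article counts
lsmeans(zip.fit, ~ fem, mode = "zero")                    # zero-inflation part
lsmeans(zip.fit, ~ fem, mode = "count", lin.pred = TRUE)  # count part, log scale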
+
+
\section{\pkg{rms} package}{
\describe{
\item{Potential masking issue}{
diff --git a/man/ref.grid.Rd b/man/ref.grid.Rd
index 3b51561..8c85a4b 100644
--- a/man/ref.grid.Rd
+++ b/man/ref.grid.Rd
@@ -14,7 +14,7 @@ Using a fitted model object, determine a reference grid for which least-squares
}
\usage{
ref.grid(object, at, cov.reduce = mean, mult.name, mult.levs,
- options = lsm.options()$ref.grid, data, type, ...)
+ options = get.lsm.option("ref.grid"), data, type, ...)
}
\arguments{
diff --git a/man/summary.Rd b/man/summary.Rd
index d2ffa51..273f18a 100644
--- a/man/summary.Rd
+++ b/man/summary.Rd
@@ -39,7 +39,7 @@ Use these methods to summarize, print, plot, or examine objects of class \code{"
\method{print}{summary.ref.grid}(x, ..., digits = NULL, quote = FALSE, right = TRUE)
\method{plot}{lsmobj}(x, y, type, intervals = TRUE, comparisons = FALSE,
- alpha = 0.05, adjust = "tukey", int.adjust = "none", ...)
+ alpha = 0.05, adjust = "tukey", int.adjust, ...)
\method{plot}{summary.ref.grid}(x, y, horizontal = TRUE,
xlab, ylab, layout, ...)
@@ -84,7 +84,8 @@ The object to be subsetted, printed, plotted, or converted.
\item{horizontal}{Determines orientation of plotted confidence intervals.}
\item{intervals}{If \code{TRUE}, confidence intervals are plotted for each estimate}
\item{comparisons}{If \code{TRUE}, \dQuote{comparison arrows} are added to the plot, in such a way that the degree to which arrows overlap reflects as much as possible the significance of the comparison of the two estimates.}
- \item{alpha, int.adjust}{The \code{alpha} argument to use in constructing comparison arrows. \code{int.adjust} may be used to set the \code{adjust} argument for the confidence intervals (use \code{adjust} to set the adjust method for the comparison arrows).}
+ \item{alpha}{The \code{alpha} argument to use in constructing comparison arrows.}
+  \item{int.adjust}{The multiplicity adjustment method for the plotted confidence intervals; if missing, it defaults to the object's internal \code{adjust} setting (see \code{\link{update}}). (Note: the \code{adjust} argument in \code{plot} sets the adjust method for the comparison arrows, not the confidence intervals. A sketch illustrating the distinction follows this argument list.)}
\item{transform}{Logical value; if true, the inverse transformation is applied to the estimates in the grid}
\item{names}{Logical scalar or vector specifying whether variable names are appended to levels in the column labels for the \code{as.mcmc} result -- e.g., column names of \code{treat A} and \code{treat B} versus just \code{A} and \code{B}. When there is more than one variable involved, the elements of \code{names} are used cyclically.}
\item{\dots, digits, quote, right, xlab, ylab, layout}{For summaries, these are additional arguments passed to other methods including \code{\link{print.data.frame}}, \code{\link{update}}, or \code{\link{dotplot}} as appropriate. If not specified, appropriate defaults are used. For example, the default \code{layout} is one column of horizontal panels or one row of vertical panels.}
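A sketch of how the two adjustment arguments interact in a plot, again assuming the warpbreaks fit \code{warp.lm}:

wool.lsm <- lsmeans(warp.lm, ~ tension | wool)
plot(wool.lsm, comparisons = TRUE, alpha = 0.05,
     adjust = "tukey",     # governs the comparison arrows
     int.adjust = "none")  # governs the plotted confidence intervals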
@@ -138,7 +139,7 @@ The \code{plot} method for \code{"lsmobj"} or \code{"summary.ref.grid"} objects
In plots with \code{comparisons = TRUE}, the resulting arrows are only approximate, and in some cases may fail to accurately reflect the pairwise comparisons of the estimates -- especially when estimates having large and small standard errors are intermingled in just the wrong way.
\bold{Re-gridding:}
-The \code{regrid} function reparameterizes an existing \code{ref.grid} so that its \code{linfct} slot is the identity matrix and its \code{bhat} slot consists of the estimates at the grid points. If \code{transform} is \code{TRUE}, the inverse transform is applied to the estimates. Outwardly, the \code{summary} after applying \code{regrid} is identical to what it was before (using \samp{type="response"} if \code{transform} is \code{TRUE}). But subsequent contrasts will be conducted on th [...]
+The \code{regrid} function reparameterizes an existing \code{ref.grid} so that its \code{linfct} slot is the identity matrix and its \code{bhat} slot consists of the estimates at the grid points. If \code{transform} is \code{TRUE}, the inverse transform is applied to the estimates. Outwardly, the \code{summary} after applying \code{regrid} is identical to what it was before (using \samp{type="response"} if \code{transform} is \code{TRUE}). But subsequent contrasts will be conducted on th [...]
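As an illustration, consider a hypothetical Poisson fit; after \code{regrid} with \code{transform = TRUE}, subsequent averaging and contrasts are carried out on the rate scale rather than the log scale:

counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9); treatment <- gl(3, 3)
D93.glm <- glm(counts ~ outcome + treatment, family = poisson())
D93.rg <- regrid(ref.grid(D93.glm), transform = TRUE)
rate.lsm <- lsmeans(D93.rg, ~ treatment)  # means of rates, not log rates
contrast(rate.lsm, "trt.vs.ctrl")         # differences of rates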
\bold{MCMC samplers:}
When the object's \code{post.beta} slot is non-trivial, \code{as.mcmc} will return an \code{\link[coda]{mcmc}} object that can be summarized or plotted using methods in the \pkg{coda} package. Specifically, \code{post.beta} is transformed by post-multiplying by \code{t(linfct)}, creating a sample from the posterior distribution of LS means.
diff --git a/man/update.Rd b/man/update.Rd
index f80850a..8f84307 100644
--- a/man/update.Rd
+++ b/man/update.Rd
@@ -2,20 +2,28 @@
\alias{update}
\alias{update.ref.grid}
\alias{lsm.options}
+\alias{get.lsm.option}
+\alias{defaults}
+\alias{pmm.options}
+\alias{get.pmm.option}
\title{
-Set options for \code{ref.grid} or \code{lsmobj} objects
+Set or retrieve options for objects and summaries in \pkg{lsmeans}
}
\description{
Objects of class \code{ref.grid} or \code{lsmobj} contain several settings in their \code{"misc"} slot that affect primarily the
defaults used by \code{\link{summary}}. This \code{update} method allows them to be changed more safely than by modifying this slot directly.
-In addition, the user may set defaults for all objects using \samp{options(lsmeans = ...)}, or more conveniently using the \code{lsm.options} function documented here.
+In addition, the user may set defaults for all objects using \samp{options(lsmeans = ...)}, or more conveniently using the \code{lsm.options} and \code{get.lsm.option} functions documented here (or their courtesy wrappers, \code{pmm.options} and \code{get.pmm.option}, for those who dislike the \sQuote{least-squares means} terminology).
}
\usage{
\S3method{update}{ref.grid}(object, ..., silent = FALSE)
lsm.options(...)
+get.lsm.option(x, default = lsmeans::defaults[[x]])
+
+pmm.options(...)
+get.pmm.option(...)
}
\arguments{
\item{object}{An object of class \code{ref.grid} (or its extension, \code{lsmobj})
@@ -24,6 +32,8 @@ lsm.options(...)
Arguments specifying elements' names and their new values.
}
\item{silent}{If \code{FALSE}, a message is displayed for any unmatched names.}
+ \item{x}{Character string holding an option name for \code{lsm.options}.}
+ \item{default}{Return value if \code{x} is not found.}
}
\details{
In \code{update}, the names in \code{\dots} are partially matched against those that are valid, and if a match is found, it adds or replaces the current setting. The valid names are
@@ -69,7 +79,7 @@ In \code{lsm.options}, we may set or change the default values for the above att
\item{\code{lsmeans}}{A named \code{list} of defaults for objects created by \code{\link{lsmeans}} (or \code{\link{lstrends}}).}
\item{\code{contrast}}{A named \code{list} of defaults for objects created by \code{\link{contrast}} (or \code{\link{pairs}}).}
\item{\code{summary}}{A named \code{list} of defaults used by the methods \code{\link{summary}}, \code{\link{predict}}, and \code{\link{lsmip}}. The only option that can affect the latter two is \code{"predict.method"}.}
-\item{\code{estble.tol}}{Tolerance for determining estimability in rank-deficient cases. If absent, \code{1e-8} is used.}
+\item{\code{estble.tol}}{Tolerance for determining estimability in rank-deficient cases. If absent, the value in \code{lsmeans::defaults$estble.tol} is used.}
} % end \describe
} % end details
\value{
@@ -108,11 +118,14 @@ lsm.options(ref.grid = list(level = .90),
# reference grid). In addition, when we call 'contrast', 'pairs', etc.,
# confidence intervals rather than tests are displayed by default.
+\dontrun{
lsm.options(disable.pbkrtest = TRUE)
# This forces use of asymptotic methods for lmerMod objects.
# Set to FALSE or NULL to re-enable using pbkrtest.
+}
-print(lsm.options()) # see the current settings
+# See tolerance being used for determining estimability
+get.lsm.option("estble.tol")
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
diff --git a/tests/tests1.Rout.save b/tests/tests1.Rout.save
index a1ebc0c..c1005b3 100644
--- a/tests/tests1.Rout.save
+++ b/tests/tests1.Rout.save
@@ -1,5 +1,5 @@
-R version 3.2.0 (2015-04-16) -- "Full of Ingredients"
+R version 3.2.2 (2015-08-14) -- "Fire Safety"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)
@@ -426,19 +426,20 @@ wool = B:
M 3.332706 0.1206070 38 3.088550 3.576862
H 2.933453 0.1229958 38 2.684461 3.182445
+Results are given on the log scale.
Confidence level used: 0.95
> summary(warp.lsm5, type = "resp")
wool = A:
- tension lsresponse SE df lower.CL upper.CL
- L 45.32245 8.277551 38 31.31426 65.59711
- M 22.17601 2.883144 38 17.04430 28.85278
- H 21.25321 2.756618 38 16.34524 27.63489
+ tension response SE df lower.CL upper.CL
+ L 45.32245 8.277551 38 31.31426 65.59711
+ M 22.17601 2.883144 38 17.04430 28.85278
+ H 21.25321 2.756618 38 16.34524 27.63489
wool = B:
- tension lsresponse SE df lower.CL upper.CL
- L 26.63919 3.467596 38 20.46816 34.67077
- M 28.01405 3.378691 38 21.94524 35.76116
- H 18.79240 2.311387 38 14.65030 24.10561
+ tension response SE df lower.CL upper.CL
+ L 26.63919 3.467596 38 20.46816 34.67077
+ M 28.01405 3.378691 38 21.94524 35.76116
+ H 18.79240 2.311387 38 14.65030 24.10561
Confidence level used: 0.95
>
@@ -455,6 +456,7 @@ Confidence level used: 0.95
3 2.751535 0.1458650 NA 2.465645 3.037425
Results are averaged over the levels of: treatment
+Results are given on the log scale.
Confidence level used: 0.95
> # un-log the results to obtain rates
> summary(lsm.D93, type = "resp")
@@ -542,4 +544,4 @@ Confidence level used: 0.95
>
> proc.time()
user system elapsed
- 1.02 0.09 1.26
+ 1.07 0.14 1.25
diff --git a/vignettes/extending.rnw b/vignettes/extending.rnw
index c527b5d..118842e 100644
--- a/vignettes/extending.rnw
+++ b/vignettes/extending.rnw
@@ -141,7 +141,7 @@ Before explaining it, let's verify that it works:
<<>>=
lsmeans(fake.lts, ~ B | A)
@
-Hooray! Note the results are comparable to those we had for "fake.rlm", albeit the standard errors are quite a bit smaller.
+Hooray! Note the results are comparable to those we had for "fake.rlm", albeit the standard errors are quite a bit smaller. (In fact, the SEs could be misleading; a better method for estimating covariances should probably be implemented, but that is beyond the scope of this vignette.)
\subsection{Dissecting \code{lsm.basis.lqs}}
Let's go through the listing of this method, by line numbers.
@@ -160,9 +160,10 @@ There is a subtlety you need to know regarding estimability. Suppose the model i
\item[10:] Return these results in a named list.
\end{itemize}
+\ifx %---------- This subsection removed as it's extra fluff -------------
\subsection{The ``honest'' version}
Because of the inadequacies mentioned above for estimating the covariance matrix, then---lacking any better estimate---I think it's probably better to set it and the degrees of freedom to "NA"s. We will still be able to get the LS~means and contrasts thereof, but no standard errors or tests. With that in mind, here's a replacement version:
-<<>>=
+%<<>>=
lsm.basis.lqs = function(object, trms, xlev, grid, ...) {
m = model.frame(trms, grid, na.action = na.pass, xlev = xlev)
X = model.matrix(trms, m, contrasts.arg = object$contrasts)
@@ -172,11 +173,12 @@ lsm.basis.lqs = function(object, trms, xlev, grid, ...) {
dffun = function(k, dfargs) NA
list(X=X, bhat=bhat, nbasis=nbasis, V=V, dffun=dffun, dfargs=list())
}
-@
+%@
And here is a test:
-<<>>=
+%<<>>=
lsmeans(fake.lts, pairwise ~ B)
-@
+%@
+\fi % ----------- end of cut ---------------------------
\section{Hook functions}
Most linear models supported by \lsm{} have straightforward structure: Regression coefficients, their covariance matrix, and a set of linear functions that define the reference grid. However, a few are more complex. An example is the \dqt{clm} class in the \pkg{ordinal} package, which allows a scale model in addition to the location model. When a scale model is used, the scale parameters are included in the model matrix, regression coefficients, and covariance matrix, and we can't just u [...]
@@ -186,6 +188,27 @@ In addition, you may want to apply some form of special post-processing after th
\section{Exported methods}
For package developers' convenience, \pkg{lsmeans} exports some of its S3 methods for "recover.data" and/or "lsm.basis"---use \code{methods(\dqt{recover.data})} and \code{methods(\dqt{lsm.basis})} to discover which ones. It may be that all you need is to invoke one of those methods and perhaps make some small changes---especially if your model-fitting algorithm makes heavy use of an existing model type supported by \pkg{lsmeans}. Contact me if you need \pkg{lsmeans} to export some additi [...]
+A few additional functions are exported because they may be useful to developers. They are as follows:
+\begin{description}
+\item[\code{.all.vars(expr, retain)}] Some users of your package may include "$" or "[[]]" operators in their model formulas. If you need to get the variable names, "base::all.vars" will probably not give you what you need. Here is an example:
+<<>>=
+form = ~ data$x + data[[5]]
+base::all.vars(form)
+lsmeans::.all.vars(form)
+@
+The "retain" argument may be used to specify regular expressions for patterns to retain as parts of variable names.
+\item[\code{.diag(x, nrow, ncol)}]
+The base "diag" function has a booby trap whereby, for example, "diag(57.6)" returns a $57\times 57$ identity matrix rather than a $1\times1$ matrix with $57.6$ as its only element. But "lsmeans::.diag(57.6)" will return the latter. The function works identically to "diag" except for the identity-matrix trap.
+\item[\code{.aovlist.dffun(k, dfargs)}] This function is exported because it is needed for computing degrees of freedom for models fitted using "aov", but it may be useful for other cases where Satterthwaite degrees-of-freedom calculations are needed. It requires the "dfargs" slot to contain analogous contents.
+\item[\code{.get.offset(terms, grid)}] If "terms" is a model formula containing an "offset" call, this will compute that offset in the context of "grid" (a "data.frame").
+<<>>=
+.get.offset(terms(~ speed + offset(.03*breaks)), head(warpbreaks))
+@
+\item[\code{.my.vcov(object, ...)}]
+In a call to "ref.grid", "lsmeans", etc., the user may use "vcov." to specify an alternative function or matrix to use as the covariance matrix of the fixed-effects coefficients. This function supports that feature. Calling ".my.vcov" in place of the "vcov" method will substitute the user's "vcov." when it is present in "...".
+
+\end{description}
+
\section{Conclusions}
It is relatively simple to write appropriate methods that work with \lsm{} for model objects it does not support. I hope this vignette is helpful for understanding how. Furthermore, if you are the developer of a package that fits linear models, I encourage you to include "recover.data" and "lsm.basis" methods for those classes of objects, and to remember to export them in your "NAMESPACE" file as follows:
diff --git a/vignettes/using-lsmeans.rnw b/vignettes/using-lsmeans.rnw
index c24f98d..9010662 100644
--- a/vignettes/using-lsmeans.rnw
+++ b/vignettes/using-lsmeans.rnw
@@ -543,11 +543,15 @@ Note: "lstrends" computes a difference quotient based on two slightly different
<<>>=
lsm.options(ref.grid = list(level = .90),
lsmeans = list(),
- contrast = list(infer = c(TRUE,TRUE)))
+ contrast = list(infer = c(TRUE, TRUE)))
@
This requests that any object created by "ref.grid" be set to have confidence levels default to $90$\%, and that "contrast" results are displayed with both intervals and tests. No new options are set for "lsmeans" results, and the "lsmeans" part could have been omitted. These options are stored with objects created by "ref.grid", "lsmeans", and "contrast". For example, even though no new defaults are set for "lsmeans", future calls to "lsmeans" \emph{on a model object} will be displayed [...]
-
+Certain other options are available; for example, the \dqt{estble.tol} option sets the tolerance for determining estimability of linear contrasts. To see its current value:
+<<>>=
+get.lsm.option("estble.tol")
+@
+Defaults for this and some other parameters are saved in "lsmeans::defaults".