blm_star_exact {countSTAR} | R Documentation
Monte Carlo sampler for STAR linear regression with a g-prior
Description
Compute direct Monte Carlo samples from the posterior and predictive distributions of a STAR linear regression model with a g-prior.
Usage
blm_star_exact(
  y,
  X,
  X_test = X,
  transformation = "np",
  y_max = Inf,
  psi = NULL,
  method_sigma = "mle",
  approx_Fz = FALSE,
  approx_Fy = FALSE,
  nsave = 5000,
  compute_marg = FALSE
)
Arguments
y: vector of observed count data

X: matrix of predictors for the latent linear regression

X_test: matrix of predictors at which to evaluate the posterior predictive distribution; default is the observed predictors X

transformation: transformation to use for the latent data; must be one of 'identity', 'log', 'sqrt', 'np', 'pois', 'neg-bin', or 'bnp' (see Details); default is 'np'

y_max: a fixed and known upper bound for all observations; default is Inf

psi: prior variance (g-prior)

method_sigma: method to estimate the latent data standard deviation; default is 'mle'

approx_Fz: logical; in the BNP transformation, apply a (fast and stable) normal approximation for the marginal CDF of the latent data

approx_Fy: logical; in the BNP transformation, approximate the marginal CDF of y (faster; see Note)

nsave: number of Monte Carlo simulations

compute_marg: logical; if TRUE, compute and return the marginal likelihood
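For orientation, here is a minimal usage sketch; the simulated data below are illustrative and not part of the package documentation.

library(countSTAR)

# Illustrative simulated count data (not from the package): intercept plus two predictors
set.seed(123)
n <- 200
X <- cbind(1, rnorm(n), rnorm(n))
y <- rpois(n, exp(1 + 0.5 * X[, 2]))

# Direct Monte Carlo sampling with the default nonparametric ('np') transformation
fit <- blm_star_exact(y = y, X = X, nsave = 1000)
fit$coefficients  # posterior means of the regression coefficients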
Details
STAR defines a count-valued probability model by (1) specifying a Gaussian model for continuous *latent* data and (2) connecting the latent data to the observed data via a *transformation and rounding* operation. Here, the continuous latent data model is a linear regression.
There are several options for the transformation. First, the transformation can belong to the *Box-Cox* family, which includes the known transformations 'identity', 'log', and 'sqrt'. Second, the transformation can be estimated (before model fitting) using the empirical distribution of the data y. Options in this case include the empirical cumulative distribution function (CDF), which is fully nonparametric ('np'), or the parametric alternatives based on Poisson ('pois') or Negative-Binomial ('neg-bin') distributions. For the parametric distributions, the parameters of the distribution are estimated using moments (means and variances) of y. The distribution-based transformations approximately preserve the mean and variance of the count data y on the latent data scale, which lends interpretability to the model parameters. Lastly, the transformation can be modeled using the Bayesian bootstrap ('bnp'), which is a Bayesian nonparametric model and incorporates the uncertainty about the transformation into posterior and predictive inference.
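As a sketch of these options, reusing y and X from the earlier example (the specific comparison is illustrative):

fit_np   <- blm_star_exact(y, X, transformation = "np")    # empirical CDF (default)
fit_pois <- blm_star_exact(y, X, transformation = "pois")  # Poisson-based parametric CDF
fit_sqrt <- blm_star_exact(y, X, transformation = "sqrt")  # Box-Cox family
fit_bnp  <- blm_star_exact(y, X, transformation = "bnp")   # Bayesian bootstrap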
The Monte Carlo sampler produces direct, discrete, and joint draws from the posterior distribution and the posterior predictive distribution of the linear regression model with a g-prior.
Value
a list with the following elements:
- coefficients: the posterior mean of the regression coefficients
- post.beta: nsave x p samples from the posterior distribution of the regression coefficients
- post.pred: draws from the posterior predictive distribution of y
- post.pred.test: nsave x n0 samples from the posterior predictive distribution at the test points X_test (if given, otherwise NULL)
- sigma: the estimated latent data standard deviation
- post.g: nsave posterior samples of the transformation evaluated at the unique y values (only applies for 'bnp' transformations)
- marg.like: the marginal likelihood (if requested; otherwise NULL)
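A short sketch of summarizing these elements; the quantile-based intervals are illustrative choices, not part of the returned object:

# Posterior means and 95% credible intervals for the regression coefficients
colMeans(fit$post.beta)
apply(fit$post.beta, 2, quantile, probs = c(0.025, 0.975))

# Pointwise 95% posterior predictive intervals at the test points (X_test defaults to X)
apply(fit$post.pred.test, 2, quantile, probs = c(0.025, 0.975))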
Note
The 'bnp' transformation (without the Fy approximation) is slower than the other transformations because of the way the TruncatedNormal sampler must be updated as the lower and upper limits change (due to the sampling of g). Thus, computational improvements are likely available.
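When run time matters, the approximation flags documented above can be turned on for the 'bnp' transformation; a hedged sketch:

# Faster 'bnp' fit: normal approximation for the latent CDF (approx_Fz)
# and approximation for the marginal CDF of y (approx_Fy)
fit_bnp_fast <- blm_star_exact(y, X,
                               transformation = "bnp",
                               approx_Fz = TRUE,
                               approx_Fy = TRUE)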