
This returns a unique number corresponding to the Bayes factor associated with the test \(M_0: u_{obs} = u_{ref}\) versus \(M_1: u_{obs} \neq u_{ref}\) (with all other \(u_j\), \(j \neq obs\), free). The value of \(u_{ref}\) is required as input. The user should expect long running times for the log-exponential power model, in which case a reduced chain given \(u_{obs} = u_{ref}\) needs to be generated.
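
As a reminder (this is the standard definition of a Bayes factor, not something specific to this function), the value returned is the ratio of marginal likelihoods \(BF_{01} = m(\mathrm{data} \mid M_0) / m(\mathrm{data} \mid M_1)\); small values of \(BF_{01}\) provide evidence against \(u_{obs} = u_{ref}\) and therefore flag the corresponding observation as a potential outlier (hence the name LEP.Outlier in the example below).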

Usage

BF_u_obs_LEP(
  N,
  thin,
  burn,
  ref,
  obs,
  Time,
  Cens,
  X,
  chain,
  prior = 2,
  set = TRUE,
  eps_l = 0.5,
  eps_r = 0.5,
  ar = 0.44
)

Arguments

N

Total number of iterations. Must be a multiple of thin.

thin

Thinning period.

burn

Burn-in period.

ref

Reference value \(u_{ref}\). Vallejos & Steel recommend setting this value to \(1.6 + 1/\alpha\) for the LEP model.

obs

Index of the observation under analysis.

Time

Vector containing the survival times.

Cens

Censoring indication (1: observed, 0: right-censored).

X

Design matrix with dimensions \(n\) x \(k\) where \(n\) is the number of observations and \(k\) is the number of covariates (including the intercept).

chain

MCMC chains generated by a BASSLINE MCMC function.

prior

Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys).

set

Indicator for the use of set observations (TRUE) rather than point observations (FALSE). The former is strongly recommended, as point observations cause problems for Bayesian inference: a continuous sampling model assigns zero probability to any individual point (see the note after this argument list).

eps_l

Lower imprecision \((\epsilon_l)\) for set observations (default value: 0.5).

eps_r

Upper imprecision \((\epsilon_r)\) for set observations (default value: 0.5).

ar

Optimal acceptance rate for the adaptive Metropolis-Hastings updates.
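
Note on set observations (an illustrative reading of the eps_l and eps_r arguments, not taken verbatim from the package): because \(\Pr(T_i = t_i) = 0\) under any continuous sampling model, a set observation presumably replaces the exact time \(t_i\) by the interval it implies, so its likelihood contribution is of the form \(\Pr(t_i - \epsilon_l \le T_i \le t_i + \epsilon_r) = F_i(t_i + \epsilon_r) - F_i(t_i - \epsilon_l)\), which is strictly positive.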

Examples

library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimates (especially for the log-exponential power model).

LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 0.47 
#>  Initial beta 2 : -0.28 
#>  Initial beta 3 : -0.84 
#>  Initial beta 4 : 0.27 
#>  Initial beta 5 : 1.55 
#>  Initial beta 6 : 0.11 
#>  Initial beta 7 : -0.96 
#>  Initial beta 8 : -0.02 
#>  Initial beta 9 : 0.08 
#> 
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 0.6 
#> 
#> Sampling initial alpha from a Uniform(1, 2) distribution
#> Initial alpha : 1.33 
#> 
#> AR beta 1 : 0.19 
#>  AR beta 2 : 0.26 
#>  AR beta 3 : 0.37 
#>  AR beta 4 : 0.34 
#>  AR beta 5 : 0.39 
#>  AR beta 6 : 0 
#>  AR beta 7 : 0.02 
#>  AR beta 8 : 0 
#>  AR beta 9 : 0.35 
#> AR sigma2 : 0.6 
#> AR alpha : 0.14 
alpha <- mean(LEP[, 11])
uref <- 1.6 + 1 / alpha
LEP.Outlier <- BF_u_obs_LEP(N = 100, thin = 20, burn = 1, ref = uref,
                            obs = 1, Time = cancer[, 1], Cens = cancer[, 2],
                            X = cancer[, 3:11], chain = LEP)
#> AR beta 1 : 0.26 
#>  AR beta 2 : 0.21 
#>  AR beta 3 : 0.27 
#>  AR beta 4 : 0.29 
#>  AR beta 5 : 0.28 
#>  AR beta 6 : 0 
#>  AR beta 7 : 0 
#>  AR beta 8 : 0.01 
#>  AR beta 9 : 0.27 
#> AR sigma2 : 0.58 
#> AR alpha : 0.12
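
A possible follow-up (a minimal sketch, not part of the original example, assuming as described above that BF_u_obs_LEP() returns a single numeric value): compute the Bayes factor for every observation and rank them. Each call runs its own short MCMC, so the full loop can take a while.

# Sketch: Bayes factors for all observations in the cancer data set.
n <- nrow(cancer)
BF.all <- numeric(n)
for (i in seq_len(n)) {
  BF.all[i] <- BF_u_obs_LEP(N = 100, thin = 20, burn = 1, ref = uref,
                            obs = i, Time = cancer[, 1], Cens = cancer[, 2],
                            X = cancer[, 3:11], chain = LEP)
}
# Observations with the smallest Bayes factors in favour of u_ref are the
# strongest outlier candidates.
order(BF.all)[1:5]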