
Fits a Bayesian time-course model for model-based network meta-analysis (MBNMA) that can account for repeated measures over time within studies by applying a desired time-course function. Follows the methods of Pedder et al. (2019).

Usage

mb.run(
  network,
  fun = tpoly(degree = 1),
  positive.scale = FALSE,
  intercept = NULL,
  link = "identity",
  sdscale = FALSE,
  parameters.to.save = NULL,
  rho = 0,
  covar = "varadj",
  omega = NULL,
  corparam = FALSE,
  class.effect = list(),
  UME = FALSE,
  pd = "pv",
  parallel = FALSE,
  priors = NULL,
  n.iter = 20000,
  n.chains = 3,
  n.burnin = floor(n.iter/2),
  n.thin = max(1, floor((n.iter - n.burnin)/1000)),
  model.file = NULL,
  jagsdata = NULL,
  ...
)

Arguments

network

An object of class "mb.network".

fun

An object of class "timefun" generated (see Details) using any of tloglin(), tpoly(), titp(), temax(), tfpoly(), tspline() or tuser()

positive.scale

A boolean object that indicates whether all continuous mean responses (y) are positive and therefore whether the baseline response should be given a prior that constrains it to be positive (e.g. for scales that cannot be <0).

intercept

A boolean object that indicates whether an intercept (written as alpha in the model) is to be included. If left as NULL (the default), an intercept will be included only for studies reporting absolute means, and will be excluded for studies reporting change from baseline (as indicated in network$cfb).

link

Can take either "identity" (the default), "log" (for modelling Ratios of Means (Friedrich et al. 2011) ) or "smd" (for modelling Standardised Mean Differences - although this also corresponds to an identity link function).

sdscale

Logical object to indicate whether to write a model that specifies a reference SD for standardising when modelling using Standardised Mean Differences. Specifying sdscale=TRUE will therefore only modify the model if the link function is set to SMD (link="smd").
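A hedged sketch of this option (not run; it assumes an "mb.network" object named network as created in the Examples below, and that arm-level SDs are available in the data):

# Sketch: model Standardised Mean Differences, standardising by a reference SD
# (sdscale=TRUE only modifies the model when link="smd")
mb.run(network, fun=tpoly(degree=1, pool.1="rel", method.1="random"),
       link="smd", sdscale=TRUE)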

parameters.to.save

A character vector containing names of parameters to monitor in JAGS

rho

The correlation coefficient used when modelling within-study correlation between time points. rho can be given as a string representing a prior distribution in JAGS (e.g. rho="dunif(0,1)"), indicating that it should be estimated from the data, or as a numeric value (e.g. rho=0.7), which fixes rho in the model (e.g. for use in a deterministic sensitivity analysis). If set to rho=0 (the default) then no correlation between time points is modelled.

covar

A character specifying the covariance structure to use for modelling within-study correlation between time-points. This can be done by specifying one of the following:

  • "varadj" - a univariate likelihood with a variance adjustment to assume a constant correlation between subsequent time points (Jansen et al. 2015) . This is the default.

  • "CS" - a multivariate normal likelihood with a compound symmetry structure

  • "AR1" - a multivariate normal likelihood with an autoregressive AR1 structure

omega

DEPRECATED IN VERSION 0.2.3 ONWARDS (~uniform(-1,1) now used for correlation between parameters rather than a Wishart prior). A scale matrix for the inverse-Wishart prior for the covariance matrix used to model the correlation between time-course parameters (see Details for time-course functions). omega must be a symmetric positive definite matrix with dimensions equal to the number of time-course parameters modelled using relative effects (pool="rel"). If left as NULL (the default) a diagonal matrix with elements equal to 1 is used.

corparam

A boolean object that indicates whether correlation should be modelled between relative effect time-course parameters. The default is FALSE, and it is automatically set to FALSE if class effects are modelled. Setting it to TRUE models correlation between time-course parameters, which can help identify parameters that are poorly estimated for some treatments by sharing information between parameters for different treatments in the network, but may also cause some shrinkage.
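A hedged sketch of this option (not run; the quadratic specification and per-coefficient pool/method arguments are illustrative assumptions):

# Sketch: model correlation between the two relative-effect coefficients of a
# quadratic time-course (per-coefficient arguments assumed for illustration)
mb.run(network, fun=tpoly(degree=2, pool.1="rel", method.1="random",
                          pool.2="rel", method.2="common"),
       corparam=TRUE)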

class.effect

A list of named strings that determines which time-course parameters to model with a class effect and what that effect should be ("common" or "random"). For example: list(emax="common", et50="random").

UME

Can take either TRUE or FALSE (for an unrelated mean effects model on all or no time-course parameters respectively) or can be a vector of parameter name strings to model as UME. For example: c("beta.1", "beta.2").

pd

Can take either:

  • "pv" - only pV will be reported (as automatically output by R2jags). This is the default.

  • "plugin" - calculates pD by the plug-in method (Spiegelhalter et al. 2002). It is faster, but may output nonsensical negative values due to the skewed deviances that can arise with non-linear models.

  • "pd.kl" - calculates pD by the Kullback-Leibler divergence (Plummer 2008). This requires running the model for additional iterations but will always produce a sensible result.

  • "popt" - calculates pD using an optimism adjustment, which allows calculation of the penalized expected deviance (Plummer 2008).

parallel

A boolean value that indicates whether JAGS should be run in parallel (TRUE) or not (FALSE). If TRUE then the number of cores to use is automatically calculated. Functions that involve updating the model (e.g. devplot(), fitplot()) cannot be used with models implemented in parallel.

priors

A named list of parameter values (without indices) and replacement prior distributions given as strings using distributions specified in JAGS syntax (see Plummer (2017)).
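A hedged sketch (not run; the parameter name and replacement prior are illustrative assumptions - names must match parameters present in the model, e.g. those monitored in the Examples below):

# Sketch: replace the default prior for the between-study SD on the slope
# ("sd.beta.1" and the uniform prior are chosen purely for illustration)
mb.run(network, fun=tpoly(degree=1, pool.1="rel", method.1="random"),
       priors=list(sd.beta.1="dunif(0,5)"))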

n.iter

number of total iterations per chain (including burn in; default: 20000)

n.chains

number of Markov chains (default: 3)

n.burnin

length of burn in, i.e. the number of iterations to discard at the beginning. Default is n.iter/2, that is, discarding the first half of the simulations. If n.burnin is 0, jags() will run 100 iterations for adaptation.

n.thin

thinning rate. Must be a positive integer. Set n.thin > 1 to save memory and computation time if n.iter is large. Default is max(1, floor((n.iter - n.burnin)/1000)), which will only thin if there are at least 2000 post-burn-in iterations per chain.

model.file

The file path to a JAGS model (.jags file) that can be used to overwrite the JAGS model that is automatically written based on the specified options in MBNMAtime. Useful for adding further model flexibility.

jagsdata

A named list of the data objects to be used in the JAGS model. Only required if users are defining their own JAGS model using model.file. Format should match that of standard models fitted in MBNMAtime (see mbnma$model.arg$jagsdata)

...

Arguments to be sent to R2jags.

Value

An object of S3 class c("mbnma", "rjags") containing parameter results from the model. Results can be summarized using print(), and traceplots can be examined using R2jags::traceplot() or various functions from the mcmcplots package.

If there are errors in the JAGS model code then the object will be a list consisting of two elements: an error message from JAGS that can help with debugging, and model.arg, a list of arguments provided to mb.run(), which includes jagscode, the JAGS code for the model, to help users identify the source of the error.

Time-course parameters

Nodes that are automatically monitored (if present in the model) have the same name as in the time-course function for named time-course parameters (e.g. emax). However, parameters named only as beta.1, beta.2, beta.3 or beta.4 may have an alternative interpretation.

Details of the interpretation and model specification of different parameters can be shown by using the summary() method on an "mbnma" object generated by mb.run().

Parameters modelled using relative effects

  • If pooling is relative (e.g. pool.1="rel") for a given parameter then the named parameter (e.g. emax) or a numbered d parameter (e.g. d.1) corresponds to the pooled relative effect for a given treatment compared to the network reference treatment for this time-course parameter.

  • sd. followed by a named parameter (e.g. sd.emax, sd.beta.1) is the between-study SD (heterogeneity) for relative effects, reported if pooling for a time-course parameter is relative (e.g. pool.1="rel") and the method for synthesis is random (e.g. method.1="random").

  • If class effects are modelled, parameters for classes are represented by the upper case name of the time-course parameter they correspond to. For example if class.effect=list(emax="random"), relative class effects will be represented by EMAX. The SD of the class effect (e.g. sd.EMAX, sd.BETA.1) is the SD of treatments within a class for the time-course parameter they correspond to.

Parameters modelled using absolute effects

  • If pooling is absolute (e.g. pool.1="abs") for a given parameter then the named parameter (e.g. emax) or a numbered beta parameter (e.g. beta.1) corresponds to the estimated absolute effect for this time-course parameter.

  • For an absolute time-course parameter, if the corresponding method is common (e.g. method.1="common") the parameter corresponds to a single common parameter estimated across all studies and treatments. If the corresponding method is random (e.g. method.1="random") then the parameter is a mean effect around which the study-level absolute effects vary, with SD given by sd. followed by the named parameter (e.g. sd.emax, sd.beta.1).

Other model parameters

  • rho The correlation coefficient for correlation between time-points. Its interpretation will differ depending on the covariance structure specified in covar

  • totresdev The residual deviance of the model

  • deviance The deviance of the model

Time-course function

Several general time-course functions with up to 4 time-course parameters are provided, but a user-defined time-course relationship can instead be used. Details can be found in the respective help files for each function.

Available time-course functions are: tloglin(), tpoly(), titp(), temax(), tfpoly(), tspline() and tuser().

Correlation between observations

When modelling correlation between observations using rho, values for rho must imply a positive semidefinite covariance matrix.
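A hedged sketch of fixing rho at a chosen value rather than estimating it (not run; values are illustrative - estimating rho from the data via rho="dunif(0,1)" is shown in the commented-out example in the Examples section):

# Sketch: compound symmetry covariance structure with rho fixed at 0.5
# (e.g. as part of a deterministic sensitivity analysis)
mb.run(network, fun=tpoly(degree=1, pool.1="rel", method.1="random"),
       covar="CS", rho=0.5)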

Advanced options

model.file and jagsdata can be used to run an edited JAGS model and dataset. This allows users considerably more modelling flexibility than is possible using the basic MBNMAtime syntax, though requires strong understanding of JAGS and the MBNMA modelling framework. Treatment-specific priors, meta-regression and bias-adjustment are all possible in this way, and it allows users to make use of the subsequent functions in MBNMAtime (plotting, prediction, ranking) whilst fitting these more complex models.
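A hedged sketch of this workflow (not run; it assumes the model code is stored in model.arg$jagscode, as referenced in the Value section, and exact element names may differ between versions):

# Sketch: fit an initial model, export its JAGS code, edit it by hand,
# then refit using the edited model file and the original JAGS data
mbnma <- mb.run(network, fun=tpoly(degree=1))
cat(mbnma$model.arg$jagscode, sep="\n", file="custom_model.jags")
# ...edit custom_model.jags (e.g. to add treatment-specific priors)...
mb.run(network, fun=tpoly(degree=1),
       model.file="custom_model.jags",
       jagsdata=mbnma$model.arg$jagsdata)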

References

Friedrich JO, Adhikari NKJ, Beyene J (2011). “Ratio of means for analyzing continuous outcomes in meta-analysis performed as well as mean difference methods.” Journal of Clinical Epidemiology, 64(5), 556-564. doi:10.1016/j.jclinepi.2010.09.016 .

Jansen JP, Vieira MC, Cope S (2015). “Network meta-analysis of longitudinal data using fractional polynomials.” Stat Med, 34(15), 2294-311. ISSN 1097-0258 (Electronic) 0277-6715 (Linking), doi:10.1002/sim.6492 , https://pubmed.ncbi.nlm.nih.gov/25877808/.

Pedder H, Dias S, Bennetts M, Boucher M, Welton NJ (2019). “Modelling time-course relationships with multiple treatments: Model-Based Network Meta-Analysis for continuous summary outcomes.” Res Synth Methods, 10(2), 267-286.

Plummer M (2008). “Penalized loss functions for Bayesian model comparison.” Biostatistics, 9(3), 523-39. ISSN 1468-4357 (Electronic) 1465-4644 (Linking), https://pubmed.ncbi.nlm.nih.gov/18209015/.

Plummer M (2017). JAGS user manual. https://people.stat.sc.edu/hansont/stat740/jags_user_manual.pdf.

Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A (2002). “Bayesian measures of model complexity and fit.” J R Statist Soc B, 64(4), 583-639.

Examples

# \donttest{
# Create mb.network object
network <- mb.network(osteopain)
#> Reference treatment is `Pl_0`
#> Studies reporting change from baseline automatically identified from the data

# Fit a linear time-course MBNMA with:
# random relative treatment effects on the slope
mb.run(network, fun=tpoly(degree=1, pool.1="rel", method.1="random"))
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 417
#>    Unobserved stochastic nodes: 163
#>    Total graph size: 7701
#> 
#> Initializing model
#> 
#> Inference for Bugs model at "/tmp/RtmpfcAta0/file1a401d3d62b2", fit using jags,
#>  3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
#>  n.sims = 3000 iterations saved
#>            mu.vect sd.vect     2.5%      25%      50%      75%    97.5%  Rhat
#> d.1[1]       0.000   0.000    0.000    0.000    0.000    0.000    0.000 1.000
#> d.1[2]      -0.064   0.044   -0.151   -0.092   -0.065   -0.033    0.022 1.002
#> d.1[3]      -0.094   0.018   -0.128   -0.105   -0.094   -0.082   -0.059 1.001
#> d.1[4]      -0.092   0.044   -0.180   -0.122   -0.093   -0.063   -0.004 1.001
#> d.1[5]      -0.045   0.051   -0.147   -0.078   -0.045   -0.012    0.054 1.001
#> d.1[6]       0.003   0.068   -0.135   -0.042    0.004    0.049    0.131 1.001
#> d.1[7]      -0.147   0.036   -0.220   -0.170   -0.147   -0.122   -0.079 1.001
#> d.1[8]      -0.026   0.071   -0.170   -0.073   -0.026    0.019    0.112 1.001
#> d.1[9]      -0.289   0.050   -0.387   -0.322   -0.289   -0.255   -0.193 1.001
#> d.1[10]     -0.302   0.071   -0.442   -0.349   -0.303   -0.254   -0.168 1.001
#> d.1[11]     -0.076   0.037   -0.150   -0.101   -0.076   -0.052   -0.003 1.001
#> d.1[12]     -0.070   0.037   -0.144   -0.095   -0.070   -0.044    0.004 1.001
#> d.1[13]     -0.083   0.045   -0.174   -0.113   -0.083   -0.053    0.005 1.001
#> d.1[14]     -0.082   0.061   -0.206   -0.123   -0.082   -0.042    0.038 1.001
#> d.1[15]     -0.128   0.026   -0.179   -0.145   -0.128   -0.111   -0.077 1.001
#> d.1[16]     -0.135   0.039   -0.212   -0.160   -0.134   -0.109   -0.056 1.003
#> d.1[17]      0.113   0.065   -0.013    0.070    0.113    0.157    0.243 1.001
#> d.1[18]     -0.128   0.047   -0.220   -0.159   -0.127   -0.096   -0.034 1.001
#> d.1[19]     -0.119   0.077   -0.269   -0.172   -0.118   -0.070    0.039 1.001
#> d.1[20]     -0.115   0.049   -0.210   -0.148   -0.115   -0.083   -0.017 1.002
#> d.1[21]     -0.426   0.074   -0.573   -0.474   -0.425   -0.377   -0.281 1.001
#> d.1[22]     -0.170   0.042   -0.255   -0.197   -0.169   -0.143   -0.085 1.002
#> d.1[23]     -0.030   0.040   -0.105   -0.057   -0.031   -0.005    0.049 1.001
#> d.1[24]     -0.041   0.041   -0.121   -0.068   -0.040   -0.013    0.040 1.001
#> d.1[25]     -0.067   0.041   -0.147   -0.093   -0.067   -0.040    0.013 1.001
#> d.1[26]     -0.084   0.063   -0.205   -0.127   -0.085   -0.043    0.043 1.001
#> d.1[27]     -0.065   0.066   -0.197   -0.106   -0.064   -0.022    0.060 1.001
#> d.1[28]     -0.091   0.065   -0.217   -0.136   -0.091   -0.048    0.035 1.001
#> d.1[29]     -0.078   0.066   -0.207   -0.120   -0.077   -0.035    0.051 1.001
#> rho          0.000   0.000    0.000    0.000    0.000    0.000    0.000 1.000
#> sd.beta.1    0.072   0.009    0.056    0.066    0.071    0.078    0.092 1.002
#> totresdev 9298.840  16.978 9266.057 9287.269 9298.213 9309.400 9333.896 1.001
#> deviance  8418.801  16.978 8386.017 8407.229 8418.173 8429.361 8453.857 1.001
#>           n.eff
#> d.1[1]        1
#> d.1[2]     1300
#> d.1[3]     2600
#> d.1[4]     3000
#> d.1[5]     2200
#> d.1[6]     2800
#> d.1[7]     3000
#> d.1[8]     3000
#> d.1[9]     3000
#> d.1[10]    3000
#> d.1[11]    3000
#> d.1[12]    3000
#> d.1[13]    3000
#> d.1[14]    3000
#> d.1[15]    3000
#> d.1[16]     840
#> d.1[17]    3000
#> d.1[18]    3000
#> d.1[19]    3000
#> d.1[20]    1900
#> d.1[21]    3000
#> d.1[22]    1800
#> d.1[23]    3000
#> d.1[24]    3000
#> d.1[25]    3000
#> d.1[26]    3000
#> d.1[27]    3000
#> d.1[28]    3000
#> d.1[29]    3000
#> rho           1
#> sd.beta.1  1900
#> totresdev  3000
#> deviance   3000
#> 
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#> 
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 144.2 and DIC = 8562.4
#> DIC is an estimate of expected predictive error (lower deviance is better).

# Fit an emax time-course MBNMA with:
# fixed relative treatment effects on emax
# a common et50 parameter estimated independently of treatment
# a common Hill parameter estimated independently of treatment
# a user-defined prior for the Hill parameter (uniform between 0.5 and 2)
# an intercept included in the model
result <- mb.run(network, fun=temax(pool.emax="rel", method.emax="common",
                                    pool.et50="abs", method.et50="common",
                                    pool.hill="abs", method.hill="common"),
                 priors=list(hill="dunif(0.5, 2)"),
                 intercept=TRUE)
#> 'et50' parameters must take positive values.
#>  Default half-normal prior restricts posterior to positive values.
#> 'hill' parameters must take positive values.
#>  Default half-normal prior restricts posterior to positive values.
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 417
#>    Unobserved stochastic nodes: 90
#>    Total graph size: 7740
#> 
#> Initializing model
#> 


#### commented out to prevent errors from JAGS version in github actions build ####
# Fit a log-linear MBNMA with:
# random relative treatment effects on the rate
# an autoregressive AR1 covariance structure
# modelled as standardised mean differences
# copdnet <- mb.network(copd)
# result <- mb.run(copdnet, fun=tloglin(pool.rate="rel", method.rate="random"),
#                covar="AR1", rho="dunif(0,1)", link="smd")



####### Examine MCMC diagnostics (using mcmcplots package) #######

# Traceplots
# mcmcplots::traplot(result)

# Plots for assessing convergence
# mcmcplots::mcmcplot(result, c("rate", "sd.rate", "rho"))

########## Output ###########

# Print R2jags output and summary
print(result)
#> Inference for Bugs model at "/tmp/RtmpfcAta0/file1a402531b3e0", fit using jags,
#>  3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
#>  n.sims = 3000 iterations saved
#>           mu.vect sd.vect    2.5%     25%     50%     75%   97.5%  Rhat n.eff
#> emax[1]     0.000   0.000   0.000   0.000   0.000   0.000   0.000 1.000     1
#> emax[2]    -0.632   0.105  -0.840  -0.700  -0.631  -0.560  -0.431 1.001  3000
#> emax[3]    -0.919   0.048  -1.011  -0.953  -0.920  -0.885  -0.826 1.003   670
#> emax[4]    -1.024   0.103  -1.228  -1.094  -1.023  -0.950  -0.824 1.002  1600
#> emax[5]    -0.628   0.191  -1.015  -0.752  -0.621  -0.500  -0.266 1.001  3000
#> emax[6]    -0.112   0.145  -0.395  -0.211  -0.110  -0.016   0.175 1.002  2000
#> emax[7]    -1.212   0.075  -1.356  -1.263  -1.212  -1.160  -1.061 1.002  1200
#> emax[8]    -0.108   0.144  -0.397  -0.206  -0.108  -0.011   0.176 1.001  2500
#> emax[9]    -1.867   0.113  -2.091  -1.945  -1.867  -1.788  -1.655 1.001  3000
#> emax[10]   -1.869   0.165  -2.186  -1.981  -1.870  -1.755  -1.551 1.002  1600
#> emax[11]   -0.864   0.061  -0.987  -0.903  -0.863  -0.823  -0.746 1.002  1200
#> emax[12]   -0.848   0.067  -0.979  -0.894  -0.847  -0.803  -0.715 1.002  1500
#> emax[13]   -1.049   0.078  -1.206  -1.102  -1.048  -0.995  -0.900 1.001  2700
#> emax[14]   -0.980   0.126  -1.227  -1.066  -0.978  -0.896  -0.735 1.001  2900
#> emax[15]   -1.301   0.069  -1.431  -1.349  -1.300  -1.252  -1.170 1.001  3000
#> emax[16]   -1.207   0.079  -1.360  -1.260  -1.205  -1.150  -1.054 1.001  2700
#> emax[17]    0.350   0.136   0.080   0.260   0.350   0.437   0.620 1.001  2200
#> emax[18]   -0.915   0.091  -1.093  -0.973  -0.913  -0.855  -0.738 1.001  3000
#> emax[19]   -1.272   0.237  -1.725  -1.437  -1.272  -1.107  -0.804 1.001  3000
#> emax[20]   -0.869   0.103  -1.067  -0.937  -0.868  -0.799  -0.673 1.001  2700
#> emax[21]   -2.607   0.209  -3.016  -2.750  -2.608  -2.458  -2.207 1.002  1300
#> emax[22]   -1.296   0.116  -1.520  -1.371  -1.296  -1.218  -1.067 1.002  1100
#> emax[23]   -0.209   0.070  -0.344  -0.255  -0.211  -0.163  -0.073 1.002  1200
#> emax[24]   -0.327   0.069  -0.467  -0.374  -0.326  -0.280  -0.191 1.001  2800
#> emax[25]   -0.755   0.075  -0.908  -0.806  -0.754  -0.707  -0.607 1.003   750
#> emax[26]   -0.819   0.095  -1.007  -0.883  -0.818  -0.754  -0.635 1.001  3000
#> emax[27]   -0.779   0.119  -1.024  -0.858  -0.776  -0.700  -0.547 1.001  3000
#> emax[28]   -1.011   0.122  -1.262  -1.090  -1.008  -0.927  -0.777 1.001  3000
#> emax[29]   -0.827   0.123  -1.069  -0.911  -0.826  -0.744  -0.589 1.001  2500
#> et50        0.459   0.041   0.385   0.431   0.457   0.485   0.550 1.003   750
#> hill        0.614   0.096   0.503   0.537   0.589   0.667   0.850 1.004   610
#> rho         0.000   0.000   0.000   0.000   0.000   0.000   0.000 1.000     1
#> totresdev 836.911  13.880 811.626 827.095 835.868 845.967 865.224 1.001  3000
#> deviance  -43.129  13.880 -68.413 -52.944 -44.171 -34.072 -14.815 1.001  3000
#> 
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#> 
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 96.4 and DIC = 52.2
#> DIC is an estimate of expected predictive error (lower deviance is better).
summary(result)
#> ========================================
#> Time-course MBNMA
#> ========================================
#> 
#> Time-course function: emax
#> Data modelled with intercept
#> 
#> emax parameter
#> Pooling: relative effects
#> Method: common treatment effects
#> 
#> |Treatment |Parameter |  Median|    2.5%|   97.5%|
#> |:---------|:---------|-------:|-------:|-------:|
#> |Pl_0      |emax[1]   |  0.0000|  0.0000|  0.0000|
#> |Ce_100    |emax[2]   | -0.6312| -0.8404| -0.4314|
#> |Ce_200    |emax[3]   | -0.9196| -1.0115| -0.8261|
#> |Ce_400    |emax[4]   | -1.0234| -1.2285| -0.8241|
#> |Du_90     |emax[5]   | -0.6212| -1.0153| -0.2662|
#> |Et_10     |emax[6]   | -0.1104| -0.3946|  0.1750|
#> |Et_30     |emax[7]   | -1.2116| -1.3561| -1.0614|
#> |Et_5      |emax[8]   | -0.1080| -0.3970|  0.1759|
#> |Et_60     |emax[9]   | -1.8670| -2.0906| -1.6547|
#> |Et_90     |emax[10]  | -1.8702| -2.1856| -1.5506|
#> |Lu_100    |emax[11]  | -0.8633| -0.9868| -0.7460|
#> |Lu_200    |emax[12]  | -0.8474| -0.9791| -0.7152|
#> |Lu_400    |emax[13]  | -1.0481| -1.2056| -0.8999|
#> |Lu_NA     |emax[14]  | -0.9778| -1.2271| -0.7348|
#> |Na_1000   |emax[15]  | -1.3004| -1.4315| -1.1695|
#> |Na_1500   |emax[16]  | -1.2053| -1.3600| -1.0539|
#> |Na_250    |emax[17]  |  0.3499|  0.0804|  0.6200|
#> |Na_750    |emax[18]  | -0.9125| -1.0933| -0.7384|
#> |Ox_44     |emax[19]  | -1.2715| -1.7250| -0.8043|
#> |Ro_12     |emax[20]  | -0.8683| -1.0672| -0.6733|
#> |Ro_125    |emax[21]  | -2.6081| -3.0155| -2.2072|
#> |Ro_25     |emax[22]  | -1.2956| -1.5205| -1.0674|
#> |Tr_100    |emax[23]  | -0.2106| -0.3440| -0.0726|
#> |Tr_200    |emax[24]  | -0.3264| -0.4672| -0.1911|
#> |Tr_300    |emax[25]  | -0.7540| -0.9075| -0.6073|
#> |Tr_400    |emax[26]  | -0.8185| -1.0069| -0.6350|
#> |Va_10     |emax[27]  | -0.7757| -1.0240| -0.5474|
#> |Va_20     |emax[28]  | -1.0084| -1.2617| -0.7767|
#> |Va_5      |emax[29]  | -0.8257| -1.0693| -0.5886|
#> 
#> 
#> et50 parameter
#> Pooling: absolute effects
#> Method: common treatment effects
#> 
#> |Treatment |Parameter | Median|  2.5%|  97.5%|
#> |:---------|:---------|------:|-----:|------:|
#> |Pl_0      |et50      | 0.4571| 0.385| 0.5503|
#> |Ce_100    |et50      | 0.4571| 0.385| 0.5503|
#> |Ce_200    |et50      | 0.4571| 0.385| 0.5503|
#> |Ce_400    |et50      | 0.4571| 0.385| 0.5503|
#> |Du_90     |et50      | 0.4571| 0.385| 0.5503|
#> |Et_10     |et50      | 0.4571| 0.385| 0.5503|
#> |Et_30     |et50      | 0.4571| 0.385| 0.5503|
#> |Et_5      |et50      | 0.4571| 0.385| 0.5503|
#> |Et_60     |et50      | 0.4571| 0.385| 0.5503|
#> |Et_90     |et50      | 0.4571| 0.385| 0.5503|
#> |Lu_100    |et50      | 0.4571| 0.385| 0.5503|
#> |Lu_200    |et50      | 0.4571| 0.385| 0.5503|
#> |Lu_400    |et50      | 0.4571| 0.385| 0.5503|
#> |Lu_NA     |et50      | 0.4571| 0.385| 0.5503|
#> |Na_1000   |et50      | 0.4571| 0.385| 0.5503|
#> |Na_1500   |et50      | 0.4571| 0.385| 0.5503|
#> |Na_250    |et50      | 0.4571| 0.385| 0.5503|
#> |Na_750    |et50      | 0.4571| 0.385| 0.5503|
#> |Ox_44     |et50      | 0.4571| 0.385| 0.5503|
#> |Ro_12     |et50      | 0.4571| 0.385| 0.5503|
#> |Ro_125    |et50      | 0.4571| 0.385| 0.5503|
#> |Ro_25     |et50      | 0.4571| 0.385| 0.5503|
#> |Tr_100    |et50      | 0.4571| 0.385| 0.5503|
#> |Tr_200    |et50      | 0.4571| 0.385| 0.5503|
#> |Tr_300    |et50      | 0.4571| 0.385| 0.5503|
#> |Tr_400    |et50      | 0.4571| 0.385| 0.5503|
#> |Va_10     |et50      | 0.4571| 0.385| 0.5503|
#> |Va_20     |et50      | 0.4571| 0.385| 0.5503|
#> |Va_5      |et50      | 0.4571| 0.385| 0.5503|
#> 
#> 
#> hill parameter
#> Pooling: absolute effects
#> Method: common treatment effects
#> 
#> |Treatment |Parameter | Median|   2.5%|  97.5%|
#> |:---------|:---------|------:|------:|------:|
#> |Pl_0      |hill      |  0.589| 0.5028| 0.8495|
#> |Ce_100    |hill      |  0.589| 0.5028| 0.8495|
#> |Ce_200    |hill      |  0.589| 0.5028| 0.8495|
#> |Ce_400    |hill      |  0.589| 0.5028| 0.8495|
#> |Du_90     |hill      |  0.589| 0.5028| 0.8495|
#> |Et_10     |hill      |  0.589| 0.5028| 0.8495|
#> |Et_30     |hill      |  0.589| 0.5028| 0.8495|
#> |Et_5      |hill      |  0.589| 0.5028| 0.8495|
#> |Et_60     |hill      |  0.589| 0.5028| 0.8495|
#> |Et_90     |hill      |  0.589| 0.5028| 0.8495|
#> |Lu_100    |hill      |  0.589| 0.5028| 0.8495|
#> |Lu_200    |hill      |  0.589| 0.5028| 0.8495|
#> |Lu_400    |hill      |  0.589| 0.5028| 0.8495|
#> |Lu_NA     |hill      |  0.589| 0.5028| 0.8495|
#> |Na_1000   |hill      |  0.589| 0.5028| 0.8495|
#> |Na_1500   |hill      |  0.589| 0.5028| 0.8495|
#> |Na_250    |hill      |  0.589| 0.5028| 0.8495|
#> |Na_750    |hill      |  0.589| 0.5028| 0.8495|
#> |Ox_44     |hill      |  0.589| 0.5028| 0.8495|
#> |Ro_12     |hill      |  0.589| 0.5028| 0.8495|
#> |Ro_125    |hill      |  0.589| 0.5028| 0.8495|
#> |Ro_25     |hill      |  0.589| 0.5028| 0.8495|
#> |Tr_100    |hill      |  0.589| 0.5028| 0.8495|
#> |Tr_200    |hill      |  0.589| 0.5028| 0.8495|
#> |Tr_300    |hill      |  0.589| 0.5028| 0.8495|
#> |Tr_400    |hill      |  0.589| 0.5028| 0.8495|
#> |Va_10     |hill      |  0.589| 0.5028| 0.8495|
#> |Va_20     |hill      |  0.589| 0.5028| 0.8495|
#> |Va_5      |hill      |  0.589| 0.5028| 0.8495|
#> 
#> 
#> 
#> Correlation between time points
#> Covariance structure: varadj 
#> Rho assigned a numeric value: 0
#> 
#> #### Model Fit Statistics ####
#> 
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 96
#> Deviance = -44
#> Residual deviance = 836
#> Deviance Information Criterion (DIC) = 52 
#> 

# Plot forest plot of results
plot(result)



###### Additional model arguments ######

# Use gout dataset
goutnet <- mb.network(goutSUA_CFBcomb)
#> Reference treatment is `Plac`
#> Studies reporting change from baseline automatically identified from the data

# Define a user-defined time-course relationship for use in mb.run
timecourse <- ~ exp(beta.1 * time) + (time^beta.2)

# Run model with:
# user-defined time-course function
# random relative effects on beta.1
# default common effects on beta.2
# default relative pooling on beta.1 and beta.2
# common class effect on beta.1
mb.run(goutnet, fun=tuser(fun=timecourse, method.1="random"),
       class.effect=list(beta.1="common"))
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 224
#>    Unobserved stochastic nodes: 121
#>    Total graph size: 4851
#> 
#> Initializing model
#> 
#> Inference for Bugs model at "/tmp/RtmpfcAta0/file1a403890b484", fit using jags,
#>  3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
#>  n.sims = 3000 iterations saved
#>               mu.vect sd.vect        2.5%         25%         50%         75%
#> D.1[1]          0.000   0.000       0.000       0.000       0.000       0.000
#> D.1[2]          4.186  16.794     -25.809      -6.892       2.711      13.602
#> D.1[3]        -19.589  22.952     -68.828     -34.279     -16.812      -5.762
#> D.1[4]         -4.779  27.642     -47.262     -27.319      -6.234      13.468
#> D.1[5]         -7.078  26.053     -62.403     -24.211      -5.091      10.046
#> D.1[6]        -18.563  14.502     -54.146     -26.575     -17.413      -9.429
#> D.1[7]        -10.747  18.371     -45.813     -22.630     -11.058       0.758
#> d.1[1]          0.000   0.000       0.000       0.000       0.000       0.000
#> d.1[2]          4.186  16.794     -25.809      -6.892       2.711      13.602
#> d.1[3]          4.186  16.794     -25.809      -6.892       2.711      13.602
#> d.1[4]          4.186  16.794     -25.809      -6.892       2.711      13.602
#> d.1[5]          4.186  16.794     -25.809      -6.892       2.711      13.602
#> d.1[6]        -19.589  22.952     -68.828     -34.279     -16.812      -5.762
#> d.1[7]         -4.779  27.642     -47.262     -27.319      -6.234      13.468
#> d.1[8]         -4.779  27.642     -47.262     -27.319      -6.234      13.468
#> d.1[9]         -4.779  27.642     -47.262     -27.319      -6.234      13.468
#> d.1[10]        -4.779  27.642     -47.262     -27.319      -6.234      13.468
#> d.1[11]        -7.078  26.053     -62.403     -24.211      -5.091      10.046
#> d.1[12]       -18.563  14.502     -54.146     -26.575     -17.413      -9.429
#> d.1[13]       -18.563  14.502     -54.146     -26.575     -17.413      -9.429
#> d.1[14]       -18.563  14.502     -54.146     -26.575     -17.413      -9.429
#> d.1[15]       -18.563  14.502     -54.146     -26.575     -17.413      -9.429
#> d.1[16]       -10.747  18.371     -45.813     -22.630     -11.058       0.758
#> d.1[17]       -10.747  18.371     -45.813     -22.630     -11.058       0.758
#> d.1[18]       -10.747  18.371     -45.813     -22.630     -11.058       0.758
#> d.1[19]       -10.747  18.371     -45.813     -22.630     -11.058       0.758
#> d.2[1]          0.000   0.000       0.000       0.000       0.000       0.000
#> d.2[2]         -3.294  29.492     -62.440     -23.279      -2.908      16.157
#> d.2[3]          2.978  31.115     -56.975     -17.229       2.731      23.776
#> d.2[4]        -16.536  10.220     -42.385     -21.629     -13.967      -8.782
#> d.2[5]          2.932  30.134     -57.203     -16.744       2.285      23.644
#> d.2[6]        -15.275  27.359     -67.448     -32.583     -16.282       0.848
#> d.2[7]         -2.773  31.121     -64.331     -23.080      -2.612      17.512
#> d.2[8]         -7.929  27.732     -63.667     -25.692      -6.992      11.507
#> d.2[9]         -2.600  29.824     -63.607     -22.560      -2.521      16.844
#> d.2[10]        -5.867  28.975     -64.806     -24.229      -5.253      13.064
#> d.2[11]       -17.768  26.347     -72.225     -33.948     -17.365       0.220
#> d.2[12]       -11.508  26.601     -65.353     -29.566     -10.813       6.381
#> d.2[13]       -20.029  26.080     -70.816     -35.831     -20.003      -8.006
#> d.2[14]       -16.466  22.054     -61.256     -30.848     -15.150      -1.513
#> d.2[15]       -14.714  25.195     -67.019     -30.330     -13.297       1.710
#> d.2[16]         3.595  30.271     -54.399     -16.921       3.597      23.268
#> d.2[17]       -25.955  20.462     -71.966     -37.650     -22.766     -12.295
#> d.2[18]       -25.299  19.313     -69.438     -36.856     -22.818     -12.267
#> d.2[19]       -26.485  20.479     -70.268     -38.606     -24.411     -12.657
#> rho             0.000   0.000       0.000       0.000       0.000       0.000
#> sd.beta.1       3.225   2.496       0.147       1.107       2.744       4.700
#> totresdev 2013067.238   7.530 2013056.691 2013062.690 2013065.242 2013069.484
#> deviance  2012373.049   7.530 2012362.502 2012368.501 2012371.053 2012375.296
#>                 97.5%  Rhat n.eff
#> D.1[1]          0.000 1.000     1
#> D.1[2]         41.517 1.024    88
#> D.1[3]         23.513 1.016   870
#> D.1[4]         51.227 1.084    60
#> D.1[5]         38.822 1.046    65
#> D.1[6]          8.421 1.103    27
#> D.1[7]         29.297 1.015   240
#> d.1[1]          0.000 1.000     1
#> d.1[2]         41.517 1.024    88
#> d.1[3]         41.517 1.024    88
#> d.1[4]         41.517 1.024    88
#> d.1[5]         41.517 1.024    88
#> d.1[6]         23.513 1.016   870
#> d.1[7]         51.227 1.084    60
#> d.1[8]         51.227 1.084    60
#> d.1[9]         51.227 1.084    60
#> d.1[10]        51.227 1.084    60
#> d.1[11]        38.822 1.046    65
#> d.1[12]         8.421 1.103    27
#> d.1[13]         8.421 1.103    27
#> d.1[14]         8.421 1.103    27
#> d.1[15]         8.421 1.103    27
#> d.1[16]        29.297 1.015   240
#> d.1[17]        29.297 1.015   240
#> d.1[18]        29.297 1.015   240
#> d.1[19]        29.297 1.015   240
#> d.2[1]          0.000 1.000     1
#> d.2[2]         54.398 1.001  3000
#> d.2[3]         63.776 1.001  3000
#> d.2[4]         -4.312 1.001  2700
#> d.2[5]         62.751 1.001  3000
#> d.2[6]         43.322 1.047    51
#> d.2[7]         57.555 1.001  3000
#> d.2[8]         42.915 1.001  3000
#> d.2[9]         56.667 1.001  3000
#> d.2[10]        48.676 1.004   540
#> d.2[11]        31.338 1.001  3000
#> d.2[12]        40.565 1.001  3000
#> d.2[13]        38.736 1.095    28
#> d.2[14]        24.335 1.001  3000
#> d.2[15]        33.025 1.001  2900
#> d.2[16]        65.542 1.001  2100
#> d.2[17]        10.519 1.027   130
#> d.2[18]        10.845 1.041   110
#> d.2[19]        10.690 1.061    64
#> rho             0.000 1.000     1
#> sd.beta.1       8.970 1.030   110
#> totresdev 2013085.089 1.000     1
#> deviance  2012390.900 1.000     1
#> 
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#> 
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 27.5 and DIC = 2012398.6
#> DIC is an estimate of expected predictive error (lower deviance is better).

# Fit a log-linear MBNMA
# with variance adjustment for correlation between time-points
result <- mb.run(network, fun=tloglin(),
                 rho="dunif(0,1)", covar="varadj")
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 417
#>    Unobserved stochastic nodes: 89
#>    Total graph size: 7303
#> 
#> Initializing model
#> 
# }