Run an NMA model

Usage

nma.run(
  data.ab,
  treatments = NULL,
  method = "common",
  link = "identity",
  sdscale = FALSE,
  ...
)

Arguments

data.ab

A data frame of arm-level data in "long" format containing the following columns (a minimal sketch follows this list):

  • studyID Study identifiers

  • time Numeric data indicating follow-up times

  • y Numeric data indicating the aggregate response for a given observation (e.g. mean)

  • se Numeric data indicating the standard error for a given observation

  • treatment Treatment identifiers (can be numeric, factor or character)

  • class An optional column indicating a particular class identifier. Observations with the same treatment identifier must also have the same class identifier.

  • n An optional column indicating the number of participants used to calculate the response at a given observation (required if modelling using Standardised Mean Differences)

  • standsd An optional column of numeric data indicating reference SDs used to standardise treatment effects when modelling using Standardised Mean Differences (SMD).
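For illustration, a minimal sketch of this structure (the study names, treatment labels other than Pl_0, and all values below are hypothetical):

data.ab <- data.frame(
  studyID = c("Study1", "Study1", "Study2", "Study2"),
  time = c(12, 12, 24, 24),           # follow-up times
  y = c(-0.5, -1.2, -0.3, -0.9),      # aggregate responses (e.g. means)
  se = c(0.15, 0.18, 0.12, 0.20),     # standard errors of each response
  treatment = c("Pl_0", "Drug_A", "Pl_0", "Drug_A")
)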

treatments

A vector of treatment names. If left as NULL, the treatment coding given in data.ab will be used.

method

Can take "common" or "random" to indicate the type of NMA model used to synthesise data points given in overlay.nma. The default is "random" since this assumes different time-points in overlay.nma have been lumped together to estimate the NMA.

link

Can take either "identity" (the default), "log" (for modelling Ratios of Means (Friedrich et al. 2011) ) or "smd" (for modelling Standardised Mean Differences - although this also corresponds to an identity link function).

sdscale

Logical object indicating whether the model should specify a reference SD for standardisation when modelling using Standardised Mean Differences. Setting sdscale=TRUE will therefore only modify the model if the link function is set to SMD (link="smd").
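A sketch of requesting such a model, assuming data.ab also includes the n and standsd columns described above:

nma.run(data.ab, link="smd", sdscale=TRUE)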

...

Arguments to be sent to R2jags.

Value

An object of class c("nma", "rjags").

Examples

network <- mb.network(osteopain)
#> Reference treatment is `Pl_0`
#> Studies reporting change from baseline automatically identified from the data

# Get the latest time point
late.time <- get.latest.time(network)

# Get the closest time point to a given value (t)
early.time <- get.closest.time(network, t=7)

# Run NMA on the data
nma.run(late.time$data.ab, treatments=late.time$treatments,
  method="random")
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 104
#>    Unobserved stochastic nodes: 133
#>    Total graph size: 1452
#> 
#> Initializing model
#> 
#> Inference for Bugs model at "/tmp/RtmpAXd0fj/file176e6c664ddf", fit using jags,
#>  3 chains, each with 2000 iterations (first 1000 discarded)
#>  n.sims = 3000 iterations saved
#>            mu.vect sd.vect     2.5%      25%      50%     75%   97.5%  Rhat
#> d[1]         0.000   0.000    0.000    0.000    0.000   0.000   0.000 1.000
#> d[2]        -0.411   0.196   -0.798   -0.537   -0.415  -0.282  -0.017 1.004
#> d[3]        -0.686   0.075   -0.836   -0.735   -0.685  -0.634  -0.538 1.002
#> d[4]        -0.718   0.187   -1.100   -0.842   -0.719  -0.595  -0.356 1.003
#> d[5]        -0.552   0.246   -1.032   -0.723   -0.553  -0.389  -0.055 1.001
#> d[6]        -0.062   0.318   -0.679   -0.277   -0.058   0.157   0.542 1.001
#> d[7]        -1.008   0.152   -1.314   -1.114   -1.006  -0.904  -0.714 1.006
#> d[8]        -0.457   0.312   -1.073   -0.665   -0.453  -0.243   0.144 1.001
#> d[9]        -1.594   0.215   -2.033   -1.738   -1.585  -1.451  -1.189 1.004
#> d[10]       -1.483   0.315   -2.083   -1.702   -1.486  -1.276  -0.862 1.001
#> d[11]       -0.678   0.143   -0.957   -0.774   -0.676  -0.584  -0.400 1.002
#> d[12]       -0.608   0.150   -0.905   -0.706   -0.609  -0.509  -0.319 1.006
#> d[13]       -0.754   0.175   -1.099   -0.870   -0.753  -0.639  -0.396 1.010
#> d[14]       -0.797   0.239   -1.284   -0.951   -0.798  -0.636  -0.324 1.001
#> d[15]       -0.851   0.107   -1.071   -0.918   -0.849  -0.778  -0.641 1.003
#> d[16]       -0.849   0.170   -1.187   -0.961   -0.849  -0.732  -0.526 1.003
#> d[17]        0.338   0.293   -0.244    0.141    0.342   0.531   0.918 1.003
#> d[18]       -0.679   0.198   -1.062   -0.812   -0.677  -0.548  -0.290 1.002
#> d[19]       -1.049   0.471   -1.984   -1.348   -1.045  -0.734  -0.116 1.002
#> d[20]       -0.609   0.214   -1.029   -0.745   -0.609  -0.468  -0.190 1.001
#> d[21]       -1.957   0.354   -2.667   -2.192   -1.960  -1.720  -1.253 1.003
#> d[22]       -0.882   0.201   -1.277   -1.010   -0.882  -0.749  -0.497 1.003
#> d[23]       -0.330   0.183   -0.686   -0.449   -0.331  -0.205   0.024 1.003
#> d[24]       -0.390   0.184   -0.747   -0.508   -0.387  -0.268  -0.036 1.001
#> d[25]       -0.588   0.183   -0.958   -0.708   -0.586  -0.468  -0.235 1.003
#> d[26]       -0.660   0.279   -1.210   -0.840   -0.656  -0.473  -0.103 1.002
#> d[27]       -0.406   0.277   -0.948   -0.589   -0.405  -0.226   0.152 1.007
#> d[28]       -0.660   0.278   -1.225   -0.842   -0.658  -0.475  -0.114 1.001
#> d[29]       -0.557   0.285   -1.133   -0.733   -0.560  -0.372   0.019 1.001
#> sd           0.249   0.050    0.159    0.213    0.248   0.282   0.354 1.010
#> totresdev  109.047  15.506   80.682   98.032  108.187 119.134 141.564 1.001
#> deviance  -105.740  15.506 -134.106 -116.756 -106.601 -95.654 -73.224 1.002
#>           n.eff
#> d[1]          1
#> d[2]        520
#> d[3]       1500
#> d[4]        690
#> d[5]       2800
#> d[6]       3000
#> d[7]        400
#> d[8]       2300
#> d[9]        550
#> d[10]      3000
#> d[11]      1500
#> d[12]       340
#> d[13]       230
#> d[14]      3000
#> d[15]      1100
#> d[16]       860
#> d[17]       680
#> d[18]      1800
#> d[19]      1200
#> d[20]      2500
#> d[21]      1500
#> d[22]      1300
#> d[23]       710
#> d[24]      3000
#> d[25]       730
#> d[26]      1900
#> d[27]       310
#> d[28]      3000
#> d[29]      2600
#> sd         1100
#> totresdev  3000
#> deviance   3000
#> 
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#> 
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 120.3 and DIC = 14.6
#> DIC is an estimate of expected predictive error (lower deviance is better).
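The corresponding model at the earlier time point can be run in the same way (a sketch, assuming get.closest.time() returns the same structure as get.latest.time(); output not shown):

# Run NMA at the earlier time point for comparison
nma.run(early.time$data.ab, treatments=early.time$treatments,
  method="random")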