Run an NMA model

Usage

nma.run(
  data.ab,
  treatments = NULL,
  method = "common",
  link = "identity",
  sdscale = FALSE,
  ...
)

Arguments

data.ab

A data frame of arm-level data in "long" format containing the columns:

  • studyID Study identifiers

  • time Numeric data indicating follow-up times

  • y Numeric data indicating the aggregate response for a given observation (e.g. mean)

  • se Numeric data indicating the standard error for a given observation

  • treatment Treatment identifiers (can be numeric, factor or character)

  • class An optional column indicating a particular class identifier. Observations with the same treatment identifier must also have the same class identifier.

  • n An optional column indicating the number of participants used to calculate the response at a given observation (required if modelling using Standardised Mean Differences)

  • standsd An optional column of numeric data indicating reference SDs used to standardise treatment effects when modelling using Standardised Mean Differences (SMD).
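As a sketch, a minimal long-format data frame with these columns might be constructed as follows (study and treatment labels here are hypothetical, not from any bundled dataset):

```r
# Hypothetical two-study, arm-level dataset in "long" format.
# Column names match those required by nma.run().
data.ab <- data.frame(
  studyID   = c("Study1", "Study1", "Study2", "Study2"),
  time      = c(12, 12, 24, 24),            # follow-up time of each observation
  y         = c(-1.2, -0.4, -0.9, -0.1),    # aggregate response (e.g. mean change)
  se        = c(0.15, 0.18, 0.20, 0.22),    # standard error of each response
  treatment = c("Active", "Placebo", "Active", "Placebo"),
  n         = c(50, 48, 60, 62)             # optional; required for SMD models
)
```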

treatments

A vector of treatment names. If left as NULL it will use the treatment coding given in data.ab

method

Can take "common" or "random" to indicate the type of NMA model used to synthesise the data points given in data.ab. The default is "common" (see Usage). "random" may be more appropriate where different time-points within studies have been lumped together to estimate the NMA.

link

Can take either "identity" (the default), "log" (for modelling Ratios of Means (Friedrich et al. 2011)) or "smd" (for modelling Standardised Mean Differences, although this also corresponds to an identity link function).

sdscale

Logical value indicating whether the model should specify a reference SD for standardisation when modelling using Standardised Mean Differences. sdscale=TRUE therefore only modifies the model if the link function is set to SMD (link="smd").
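To illustrate, a hypothetical SMD analysis with a reference SD could be set up as below. The dataset and its values are assumptions for illustration; the model call itself requires JAGS, so it is shown but not run:

```r
# Hypothetical dataset including the optional `n` and `standsd` columns
# needed for SMD modelling with a reference SD:
smd.data <- data.frame(
  studyID   = c("Study1", "Study1", "Study2", "Study2"),
  time      = c(12, 12, 12, 12),
  y         = c(-1.2, -0.4, -0.9, -0.1),
  se        = c(0.15, 0.18, 0.20, 0.22),
  treatment = c("Active", "Placebo", "Active", "Placebo"),
  n         = c(50, 48, 60, 62),
  standsd   = c(1.1, 1.1, 1.3, 1.3)   # reference SD for each study
)

# Not run (requires JAGS). sdscale=TRUE only takes effect with link="smd":
# nma.smd <- nma.run(smd.data, method = "random", link = "smd", sdscale = TRUE)
```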

...

Arguments to be sent to R2jags.

Value

Returns an object of class c("nma", "rjags")

Examples

network <- mb.network(osteopain)
#> Reference treatment is `Pl_0`
#> Studies reporting change from baseline automatically identified from the data

# Get the latest time point
late.time <- get.latest.time(network)

# Get the closest time point to a given value (t)
early.time <- get.closest.time(network, t=7)

# Run NMA on the data
nma.run(late.time$data.ab, treatments=late.time$treatments,
  method="random")
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 104
#>    Unobserved stochastic nodes: 133
#>    Total graph size: 1452
#> 
#> Initializing model
#> 
#> Inference for Bugs model at "/tmp/RtmpfcAta0/file1a40a5e4536", fit using jags,
#>  3 chains, each with 2000 iterations (first 1000 discarded)
#>  n.sims = 3000 iterations saved
#>            mu.vect sd.vect     2.5%      25%      50%     75%   97.5%  Rhat
#> d[1]         0.000   0.000    0.000    0.000    0.000   0.000   0.000 1.000
#> d[2]        -0.408   0.191   -0.788   -0.531   -0.411  -0.282  -0.042 1.002
#> d[3]        -0.687   0.077   -0.838   -0.738   -0.686  -0.635  -0.539 1.001
#> d[4]        -0.712   0.191   -1.095   -0.843   -0.710  -0.584  -0.345 1.004
#> d[5]        -0.565   0.252   -1.055   -0.729   -0.565  -0.400  -0.081 1.003
#> d[6]        -0.086   0.309   -0.702   -0.293   -0.084   0.114   0.502 1.001
#> d[7]        -1.027   0.156   -1.344   -1.130   -1.026  -0.921  -0.729 1.003
#> d[8]        -0.484   0.322   -1.127   -0.699   -0.484  -0.273   0.140 1.002
#> d[9]        -1.620   0.223   -2.059   -1.769   -1.620  -1.474  -1.177 1.002
#> d[10]       -1.491   0.318   -2.133   -1.696   -1.482  -1.277  -0.882 1.003
#> d[11]       -0.673   0.150   -0.967   -0.772   -0.672  -0.572  -0.377 1.002
#> d[12]       -0.604   0.151   -0.899   -0.701   -0.604  -0.503  -0.299 1.002
#> d[13]       -0.754   0.182   -1.096   -0.880   -0.752  -0.636  -0.391 1.001
#> d[14]       -0.788   0.255   -1.282   -0.955   -0.791  -0.624  -0.290 1.002
#> d[15]       -0.856   0.114   -1.080   -0.932   -0.853  -0.780  -0.634 1.003
#> d[16]       -0.850   0.170   -1.185   -0.957   -0.847  -0.739  -0.508 1.001
#> d[17]        0.321   0.301   -0.285    0.119    0.324   0.529   0.900 1.005
#> d[18]       -0.689   0.211   -1.099   -0.831   -0.694  -0.549  -0.267 1.002
#> d[19]       -1.055   0.494   -2.005   -1.386   -1.070  -0.737  -0.030 1.005
#> d[20]       -0.618   0.212   -1.022   -0.764   -0.618  -0.480  -0.179 1.001
#> d[21]       -1.937   0.353   -2.643   -2.172   -1.930  -1.703  -1.250 1.011
#> d[22]       -0.872   0.208   -1.278   -1.011   -0.871  -0.731  -0.453 1.002
#> d[23]       -0.337   0.190   -0.708   -0.459   -0.337  -0.216   0.039 1.002
#> d[24]       -0.392   0.193   -0.765   -0.523   -0.396  -0.260  -0.009 1.002
#> d[25]       -0.584   0.187   -0.969   -0.709   -0.583  -0.455  -0.228 1.001
#> d[26]       -0.658   0.285   -1.228   -0.849   -0.655  -0.471  -0.104 1.002
#> d[27]       -0.406   0.293   -0.996   -0.606   -0.402  -0.201   0.141 1.004
#> d[28]       -0.655   0.295   -1.251   -0.845   -0.657  -0.456  -0.079 1.012
#> d[29]       -0.555   0.293   -1.127   -0.750   -0.555  -0.363   0.027 1.012
#> sd           0.261   0.044    0.184    0.230    0.258   0.288   0.354 1.002
#> totresdev  107.476  14.770   80.922   97.153  106.607 116.693 139.134 1.009
#> deviance  -107.312  14.770 -133.866 -117.635 -108.181 -98.095 -75.654 1.009
#>           n.eff
#> d[1]          1
#> d[2]       1400
#> d[3]       3000
#> d[4]        650
#> d[5]       1300
#> d[6]       2300
#> d[7]       1200
#> d[8]       1400
#> d[9]       1200
#> d[10]       890
#> d[11]      1700
#> d[12]      1600
#> d[13]      3000
#> d[14]      1200
#> d[15]       780
#> d[16]      2100
#> d[17]       480
#> d[18]      1400
#> d[19]       440
#> d[20]      3000
#> d[21]       190
#> d[22]      2400
#> d[23]      1100
#> d[24]      1000
#> d[25]      3000
#> d[26]      1400
#> d[27]       520
#> d[28]       170
#> d[29]       170
#> sd         1100
#> totresdev   230
#> deviance    240
#> 
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#> 
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 108.2 and DIC = 0.9
#> DIC is an estimate of expected predictive error (lower deviance is better).