Run an NMA model
nma.run.Rd
Run an NMA model
Usage
nma.run(
data.ab,
treatments = NULL,
method = "common",
link = "identity",
sdscale = FALSE,
...
)
Arguments
- data.ab
A data frame of arm-level data in "long" format containing the columns:
  - studyID: Study identifiers
  - time: Numeric data indicating follow-up times
  - y: Numeric data indicating the aggregate response for a given observation (e.g. mean)
  - se: Numeric data indicating the standard error for a given observation
  - treatment: Treatment identifiers (can be numeric, factor or character)
  - class: An optional column indicating a particular class identifier. Observations with the same treatment identifier must also have the same class identifier.
  - n: An optional column indicating the number of participants used to calculate the response at a given observation (required if modelling using Standardised Mean Differences)
  - standsd: An optional column of numeric data indicating reference SDs used to standardise treatment effects when modelling using Standardised Mean Differences (SMD)

A minimal sketch of this format is shown after the argument list.
- treatments
A vector of treatment names. If left as NULL it will use the treatment coding given in data.ab.
- method
Can take "common" or "random" to indicate the type of NMA model used to synthesise the data points given in data.ab. A "random" model may be preferable if observations from different time-points in data.ab have been lumped together to estimate the NMA.
- link
Can take either "identity" (the default), "log" (for modelling Ratios of Means; Friedrich et al. 2011) or "smd" (for modelling Standardised Mean Differences, although this also corresponds to an identity link function). Illustrative calls using these options are sketched after the Examples.
- sdscale
Logical object to indicate whether to write a model that specifies a reference SD for standardising when modelling using Standardised Mean Differences. Specifying sdscale=TRUE will therefore only modify the model if the link function is set to SMD (link="smd").
- ...
Options for plotting in igraph.
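
As a minimal sketch of the long format expected for data.ab, the data frame below shows the main columns described above. The study names, treatment labels and numeric values are invented purely for illustration; they are not taken from any dataset in the package.

# Hypothetical arm-level data in "long" format: one row per arm per observation.
# Values are made up purely to illustrate the expected column structure.
data.ab.example <- data.frame(
  studyID = c("Study1", "Study1", "Study2", "Study2"),
  time = c(12, 12, 8, 8),                            # follow-up time of each observation
  y = c(-0.5, -1.2, -0.3, -0.9),                     # aggregate response (e.g. mean change)
  se = c(0.21, 0.24, 0.18, 0.20),                    # standard error of the response
  treatment = c("Pl_0", "Ce_200", "Pl_0", "Du_90"),  # treatment identifiers
  n = c(120, 118, 95, 97)                            # optional; needed for SMD models
)

In practice this data frame is usually extracted from an mb.network object (e.g. via get.latest.time(), as in the Examples) rather than built by hand.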
Examples
network <- mb.network(osteopain)
#> Reference treatment is `Pl_0`
#> Studies reporting change from baseline automatically identified from the data
# Get the latest time point
late.time <- get.latest.time(network)
# Get the closest time point to a given value (t)
early.time <- get.closest.time(network, t=7)
# Run NMA on the data
nma.run(late.time$data.ab, treatments=late.time$treatments,
        method="random")
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 104
#> Unobserved stochastic nodes: 133
#> Total graph size: 1452
#>
#> Initializing model
#>
#> Inference for Bugs model at "/tmp/Rtmp7mZJmf/file1eec163b14b0", fit using jags,
#> 3 chains, each with 2000 iterations (first 1000 discarded)
#> n.sims = 3000 iterations saved. Running time = 0.409 secs
#> mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat
#> d[1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d[2] -0.405 0.197 -0.797 -0.537 -0.405 -0.277 -0.014 1.002
#> d[3] -0.686 0.076 -0.832 -0.735 -0.687 -0.636 -0.539 1.011
#> d[4] -0.706 0.195 -1.101 -0.833 -0.703 -0.576 -0.336 1.003
#> d[5] -0.577 0.246 -1.067 -0.743 -0.579 -0.411 -0.099 1.007
#> d[6] -0.042 0.320 -0.689 -0.253 -0.035 0.167 0.597 1.001
#> d[7] -1.010 0.158 -1.332 -1.112 -1.008 -0.904 -0.703 1.007
#> d[8] -0.456 0.320 -1.094 -0.673 -0.455 -0.235 0.164 1.001
#> d[9] -1.589 0.223 -2.051 -1.733 -1.589 -1.440 -1.151 1.002
#> d[10] -1.473 0.311 -2.096 -1.677 -1.481 -1.266 -0.866 1.001
#> d[11] -0.671 0.147 -0.968 -0.767 -0.669 -0.575 -0.385 1.005
#> d[12] -0.606 0.149 -0.910 -0.700 -0.608 -0.511 -0.306 1.002
#> d[13] -0.750 0.177 -1.100 -0.869 -0.751 -0.632 -0.404 1.001
#> d[14] -0.784 0.245 -1.274 -0.944 -0.782 -0.620 -0.313 1.007
#> d[15] -0.850 0.111 -1.064 -0.926 -0.848 -0.774 -0.630 1.001
#> d[16] -0.845 0.163 -1.162 -0.954 -0.846 -0.737 -0.528 1.003
#> d[17] 0.330 0.292 -0.247 0.138 0.321 0.520 0.922 1.005
#> d[18] -0.680 0.198 -1.074 -0.806 -0.675 -0.543 -0.304 1.002
#> d[19] -1.020 0.504 -1.999 -1.373 -1.018 -0.680 -0.042 1.002
#> d[20] -0.623 0.212 -1.033 -0.762 -0.626 -0.484 -0.205 1.002
#> d[21] -1.933 0.366 -2.652 -2.174 -1.935 -1.679 -1.233 1.018
#> d[22] -0.871 0.198 -1.251 -1.006 -0.871 -0.739 -0.474 1.006
#> d[23] -0.357 0.186 -0.718 -0.482 -0.354 -0.227 0.001 1.002
#> d[24] -0.398 0.178 -0.757 -0.517 -0.396 -0.282 -0.050 1.001
#> d[25] -0.600 0.184 -0.957 -0.728 -0.601 -0.478 -0.230 1.001
#> d[26] -0.674 0.286 -1.217 -0.866 -0.677 -0.494 -0.093 1.003
#> d[27] -0.403 0.281 -0.957 -0.592 -0.396 -0.225 0.139 1.001
#> d[28] -0.655 0.286 -1.210 -0.842 -0.657 -0.470 -0.070 1.003
#> d[29] -0.555 0.292 -1.132 -0.752 -0.554 -0.362 0.016 1.002
#> sd 0.254 0.050 0.163 0.218 0.250 0.286 0.361 1.028
#> totresdev 108.748 15.127 81.597 97.893 108.093 118.839 139.488 1.005
#> deviance -106.040 15.127 -133.191 -116.895 -106.695 -95.949 -75.300 1.005
#> n.eff
#> d[1] 1
#> d[2] 1100
#> d[3] 220
#> d[4] 900
#> d[5] 300
#> d[6] 3000
#> d[7] 320
#> d[8] 3000
#> d[9] 3000
#> d[10] 3000
#> d[11] 440
#> d[12] 3000
#> d[13] 3000
#> d[14] 580
#> d[15] 3000
#> d[16] 830
#> d[17] 470
#> d[18] 1800
#> d[19] 2300
#> d[20] 1100
#> d[21] 120
#> d[22] 340
#> d[23] 3000
#> d[24] 3000
#> d[25] 3000
#> d[26] 1500
#> d[27] 3000
#> d[28] 890
#> d[29] 3000
#> sd 76
#> totresdev 450
#> deviance 480
#>
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#>
#> DIC info (using the rule: pV = var(deviance)/2)
#> pV = 114.0 and DIC = 8.0
#> DIC is an estimate of expected predictive error (lower deviance is better).
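
The example above fits a random-effects model with the default identity link. As a further hedged sketch of the method, link and sdscale arguments, the call below refits the same extracted data with a common-effects model; the commented calls indicate how Ratio of Means (link="log") or Standardised Mean Difference (link="smd", sdscale=TRUE) models would be requested, assuming data.ab contains the columns those models require (n, and standsd when sdscale=TRUE). These calls are illustrative and their output is not shown.

# Common-effects NMA of the same latest time point data
nma.run(late.time$data.ab, treatments=late.time$treatments,
        method="common")

# Ratio of Means or SMD models are requested via the link argument,
# provided the necessary columns are present in data.ab:
# nma.run(late.time$data.ab, treatments=late.time$treatments,
#         method="random", link="log")
# nma.run(late.time$data.ab, treatments=late.time$treatments,
#         method="random", link="smd", sdscale=TRUE)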