Statistics

Inappropriate meta-analyses

Systematic reviews may, or may not, contain meta-analyses. Where participants, interventions, or outcomes are clearly too different to summate, reviewers must resist the temptation to use powerful statistics on inappropriate data.

Summary measures

Much has been written about the statistics for meta-analysis, (36) but if their meaning is not conveyed to the user of the review, they are of little value. Summary measures such as odds ratios or relative risks are frequently employed for dichotomous outcomes, and weighted or standardised mean differences for continuous data. Where continuous data are presented from different scales measuring similar phenomena, an effect size may be calculated. The effect size has statistical integrity, but is even more problematic to interpret clinically than the weighted or standardised mean difference. In each of these summary measures an individual trial contributes to the final statistic in proportion to the precision of its result.
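The precision weighting described above can be sketched for a dichotomous outcome. The trial data below are entirely hypothetical; the calculation follows the standard fixed-effect, inverse-variance pooling of log odds ratios, where a trial's weight is the reciprocal of its variance, so more precise trials contribute more.

```python
import math

# Hypothetical 2x2 trial data: (events_treat, n_treat, events_ctrl, n_ctrl)
trials = [(12, 100, 20, 100), (8, 50, 15, 50), (30, 200, 45, 200)]

def log_odds_ratio(a, n1, c, n2):
    """Log odds ratio and its variance for one trial (dichotomous outcome)."""
    b, d = n1 - a, n2 - c
    lor = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d  # Woolf's variance estimate
    return lor, var

# Fixed-effect pooling: each trial is weighted by the inverse of its
# variance, i.e. in proportion to the precision of its result.
weights, lors = [], []
for a, n1, c, n2 in trials:
    lor, var = log_odds_ratio(a, n1, c, n2)
    lors.append(lor)
    weights.append(1 / var)

pooled_lor = sum(w * l for w, l in zip(weights, lors)) / sum(weights)
pooled_or = math.exp(pooled_lor)  # pooled odds ratio on the natural scale
```

Note how the largest trial (200 per arm) dominates the pooled estimate: its weight is more than double that of the two smaller trials combined.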

Currently, in this new discipline, statistics for meta-analysis are powerful but limited. However, better understanding of how to summate data from cluster randomized trials and non-parametric data can confidently be expected in the next few years.

Sensitivity analyses

Systematic reviewers may not only summate data from similar studies but may also investigate specific hypotheses. For example, the reviewers may state, again a priori, that they wish to compare the size of effect of industry-sponsored trials versus those undertaken independently of the manufacturer of the experimental drug. (6) The sensitivity of the final result to adding and subtracting sets of trials is then tested. Sensitivity analyses can be proposed on many variables, such as severity of illness, age of participant, means of diagnosis, subtype of intervention, and quality of trial. This can easily lead to the problem of multiple testing, although for meta-analyses of published data the quality and extent of trial reporting severely restrict the number of sensitivity analyses that are possible.
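A sensitivity analysis of the kind described, comparing the pooled result with and without industry-sponsored trials, can be sketched as follows. The effect estimates, variances, and sponsorship flags are invented for illustration; the pooling is standard fixed-effect inverse-variance weighting.

```python
# Hypothetical trials: (log effect estimate, variance, industry_sponsored)
trials = [(-0.60, 0.16, True), (-0.45, 0.25, False),
          (-0.80, 0.12, True), (-0.30, 0.20, False)]

def pooled(subset):
    """Fixed-effect inverse-variance pooled log effect for a set of trials."""
    weights = [1 / var for _, var, _ in subset]
    estimates = [est for est, _, _ in subset]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

all_trials = pooled(trials)
independent_only = pooled([t for t in trials if not t[2]])

# How sensitive is the result to subtracting the sponsored trials?
shift = all_trials - independent_only
```

In this made-up data set the sponsored trials report larger effects, so dropping them moves the pooled estimate noticeably towards the null; a reviewer would report both figures rather than choose between them.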

Heterogeneity

In systematic reviews heterogeneity refers to variability or differences between studies' estimates of effects. Despite rigorous definition and application of inclusion criteria, the trials eventually selected may not be homogeneous enough to summate. Statistical tests of heterogeneity are used to assess whether the observed variability in study results (effect sizes) is greater than that expected to occur by chance. These tests, however, have low statistical power and careful inspection of results for outlying findings is just as valuable. Heterogeneity can be caused by various factors, and its presence usually generates debate about differences in study design (methodological heterogeneity) and differences between studies in key characteristics of the participants, interventions, or outcome measures (clinical heterogeneity).
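The most common formal checks are Cochran's Q test and the derived I² statistic, which expresses the proportion of observed variability beyond what chance alone would produce. A minimal sketch, again on invented trial results:

```python
# Hypothetical trial results: (log effect estimate, variance)
trials = [(-0.60, 0.16), (-0.10, 0.25), (-0.95, 0.12), (0.20, 0.20)]

weights = [1 / var for _, var in trials]
estimates = [est for est, _ in trials]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Cochran's Q: weighted squared deviations of each trial from the
# pooled estimate; under homogeneity it follows chi-squared on k-1 df.
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
df = len(trials) - 1

# I^2 (Higgins & Thompson): percentage of variability beyond chance.
I2 = max(0.0, (Q - df) / Q) * 100
```

Because Q has low power with the handful of trials typical of a review, a non-significant result does not rule out heterogeneity, which is why inspection for outlying findings remains just as valuable.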

Publication bias

There are several ways to assess whether publication bias (see above) is operating within a review. Reviewers may use a funnel plot technique, (37) where the result of each trial is plotted against its size. Large studies with any result, positive or negative, tend to be published. Small positive studies are also usually easily identified, but it quickly becomes apparent if small 'negative' studies have not been found.
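The asymmetry a funnel plot reveals can also be probed numerically. The sketch below uses made-up trial results chosen to mimic the telltale gap: the small trials all report favourable effects, with no small near-null or 'negative' trials to balance them.

```python
# Hypothetical trials: (log odds ratio, sample size). Small 'negative'
# (near-null or unfavourable) trials are conspicuously absent here.
trials = [(-0.55, 400), (-0.50, 350), (-0.70, 60), (-0.90, 45), (-0.85, 30)]

# Funnel-plot coordinates: effect estimate on the x-axis, size on the y-axis.
points = [(effect, n) for effect, n in trials]

# Crude asymmetry check: among the small trials (n < 100), do any report
# effects near or beyond the null (log OR >= -0.2)?
small_effects = [effect for effect, n in trials if n < 100]
small_negative_missing = all(effect < -0.2 for effect in small_effects)
```

If `small_negative_missing` is true, one corner of the funnel is empty, the pattern that suggests small unfavourable studies were run but never published; formal tests such as Egger's regression extend this idea, though they share the low power of other meta-analytic diagnostics.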
