Data management

A priori primary analysis

As with any quantitative research, a systematic review generates the potential for multiple analyses. As roughly one in 20 tests will be statistically significant by chance at the conventional 5 per cent level, it is important to state a priori the primary analyses to be undertaken. Although multiple secondary analyses are often undertaken, these are only hypothesis generating, as the data have been multiply tested (so-called data-dredging).
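The one-in-20 arithmetic above can be sketched briefly. This is an illustrative calculation only (the function name is ours, and it assumes independent tests), showing why the false-positive risk grows as more analyses are run:

```python
# Probability of at least one spuriously "significant" result when
# several independent analyses are each run at the 5 per cent level.

def family_wise_error(n_tests: int, alpha: float = 0.05) -> float:
    """Chance of one or more false positives across n_tests tests."""
    return 1 - (1 - alpha) ** n_tests

# One pre-stated primary analysis versus 20 exploratory ones:
print(round(family_wise_error(1), 3))   # 0.05
print(round(family_wise_error(20), 3))  # 0.642
```

With 20 analyses, the chance of at least one chance 'positive' finding is nearly two in three, which is why only the pre-stated primary analysis carries confirmatory weight.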

Unacceptable loss to follow-up

In every study there is a level of attrition beyond which the data become meaningless. For example, in a trial of tacrine for people with Alzheimer's disease, 68 per cent of those taking the experimental compound were withdrawn or lost to follow-up. (31) Drawing conclusions from the data provided by the 32 per cent of 'completers' is problematic, as the selection bias originally addressed by randomization is likely to be great. Trial attrition may not be apparent at first glance. For example, a meta-analysis of studies comparing the antipsychotic quetiapine with chlorpromazine and haloperidol for schizophrenia shows a 58 per cent loss to follow-up at 6 weeks. (32) The last observation of those leaving was carried forward to the results, so that the data presented in the trials were on the numbers originally randomized. The trialists made the assumption that data collected just before leaving the study would reflect the situation at the end of the trial. Reviewers must also make such judgements, before seeing the data, and make them explicit.

The limit at which data become meaningless may differ depending on the question addressed. For example, in the situation of trialling a new oral drug for schizophrenia, clearly a loss of nearly 60 per cent of people at 6 weeks is clinically untenable. The reviewer may judge that the unfortunate clinician may lose up to 30 per cent of people by 6 weeks but that any greater loss would reflect more than misfortune and render data of little use. In different circumstances, such as the acute care of very disturbed people in closed wards, the loss of even 10 per cent of participants could be seen as a threat to the value of the data presented.
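One way to make such pre-specified limits operational is a simple screen like the sketch below. The setting names and thresholds are invented for illustration, mirroring the examples above; a real review protocol would set its own:

```python
# Pre-specified attrition limits by clinical setting (invented values
# echoing the text's examples; reviewers would fix their own a priori).
ATTRITION_LIMITS = {
    "oral_drug_6_weeks": 0.30,   # up to 30% loss judged tolerable
    "acute_closed_ward": 0.10,   # stricter limit for inpatient care
}

def data_usable(setting: str, randomized: int, completed: int) -> bool:
    """True if loss to follow-up is within the pre-set limit."""
    loss = (randomized - completed) / randomized
    return loss <= ATTRITION_LIMITS[setting]

# The quetiapine example: 58% loss at 6 weeks exceeds a 30% limit.
print(data_usable("oral_drug_6_weeks", randomized=100, completed=42))
```

The point is not the particular numbers but that the limit is fixed, and applied mechanically, before any trial data are seen.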

Intention-to-treat analysis

Interventions themselves are not randomized in trials; it is the intention to give treatments that is randomly allocated. Once people are lost to follow-up, the ability of the randomization to distribute known and unknown confounding variables evenly is under threat. The randomization has, in effect, been broken. The real threat of introducing selection bias has led to the dictum 'once randomized, always analyse'. (3)

Once a limit to trial attrition has been set, reviewers must, before seeing trial data, exercise further judgement on what outcome is to be attributed to those who were lost. It is impossible to avoid assumptions, but these should be based on common sense if not on evidence. For example, when presenting data for the outcome of 'clinically improved', reviewers could assume, unless contrary information is provided in the trials, that those who left early did not have an important recovery. If good-quality sources of information are available, this assumption can become more evidence based. Perhaps an exemplary trial within a systematic review managed 100 per cent follow-up, even of those who left the study early. If this trial found that 90 per cent of those who had not complied with the study protocol were not 'clinically improved', it would provide a rationale for applying this figure to the other trials in the meta-analysis. Unless individual patient data are available, this process is impossible for continuous outcomes, and only 'completer' data can be presented.

Continuous data

Data on continuous outcomes are frequently skewed, with the mean not being the centre of the distribution. The statistics for meta-analysis are thought to be able to cope with some skew, but were formulated for parametric, normally distributed data. Reviewers may wish to build in simple rules to avoid the potential pitfall of applying parametric tests to very skewed data. For example, for scale data where a mean endpoint score is reported with a standard deviation, if the standard deviation multiplied by 2 is greater than the mean, the data could be declared too skewed to summate.(33) This rule cannot be applied to scale data reporting change, rather than endpoint, scores.
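The 2 x SD rule above can be written as a one-line check; the example values below are invented:

```python
# Rule of thumb from the text: for endpoint scale scores, if twice the
# standard deviation exceeds the mean, treat the data as too skewed to
# pool parametrically. (Not applicable to change scores.)

def too_skewed(mean: float, sd: float) -> bool:
    """Apply the 2 * SD > mean screen for endpoint scale data."""
    return 2 * sd > mean

print(too_skewed(mean=24.0, sd=14.0))  # True: 28 > 24
print(too_skewed(mean=24.0, sd=8.0))   # False: 16 < 24
```

Such a screen is deliberately crude, but because it uses only the summary statistics trials already report, it can be applied uniformly across every study in a review.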

A wide range of rating scales are available to measure outcomes in mental health trials. These scales vary in quality and many are poorly validated. It is generally accepted that measuring instruments should have the properties of reliability (the extent to which a test effectively measures anything at all) and validity (the extent to which a test measures that which it is supposed to measure). Before publication of an instrument, most scientific journals insist that both reliability and validity be demonstrated to the satisfaction of referees. Reviewers may well decide, as a minimum standard, to exclude data from unpublished rating scales.

Individual patient data

Most mental health meta-analyses are of aggregate data from published reports. Other specialties have set a 'gold standard' for systematic reviews by acquiring, checking, and reanalysing each person's data from the original trialists. (34) Collecting individual patient data allows reviewers to undertake time-to-event analyses and subgroup analyses, to ensure the quality of the randomization and data through detailed checking and correction of errors by communication with trialists, and finally to update follow-up information through patient record systems (such as mortality registers).

Limited empirical evidence exists for some of the advantages of individual patient data reviews over other types of review. The former does help to control publication bias, to ensure use of the intention-to-treat principle in the analysis, and to obtain a fuller picture of the effects of different treatments over time. Undertaking individual patient data reviews requires considerable additional skills, time, and effort on the part of the reviewers when compared to meta-analyses of published aggregate data. (35)
