ORBIT II
ORBIT II: Understanding the process and impact of within-study selective reporting bias for harm outcomes (ORBIT: Outcome Reporting Bias In Trials).
The ORBIT II study has recently been funded by the MRC Methodology Research Panel (February 2012) and will be led by Dr Jamie Kirkham, working alongside co-investigators Professor Paula Williamson, Dr Carrol Gamble and Professor Doug Altman. ORBIT II extends the original ORBIT study, shifting the focus to the problem of outcome reporting bias (ORB) in harm outcomes. Empirical evidence suggests that the reporting of harms data is likely to be less complete than that of efficacy measures: Chan et al. (2004) reported that a median of 31% of efficacy outcomes per trial were incompletely reported, compared with 59% of harm outcomes per trial. Chan et al. (2004) also classified the reasons trialists gave for not reporting harms as either ‘lack of clinical importance’ or ‘lack of statistical significance’.
In this new project, the aim is to use the existing ORBIT classification system (developed in the original ORBIT study) and to adapt the associated sensitivity analysis so that the impact of ORB on harm outcomes can be assessed. Our interviews with trialists showed that “undesirable” data on harms can go unreported (Smyth et al., 2011). Referring to a harm outcome, one trialist stated:
“When we looked at that data, it actually showed an increase in harm amongst those who got the active treatment, and we ditched it because we weren’t expecting it and we were concerned that the presentation of these data would have an impact on people’s understanding of the study findings”.
These interviews with trialists were limited in that questions about the reporting of harms data were not asked systematically. Unlike efficacy outcomes, which are defined and measured in a particular way, harms can be measured in trials by specific testing or questioning for a particular harm, by open questioning (e.g. ‘have you experienced any adverse event?’, followed by some categorisation of the responses), or by a combination of both approaches. The distinction between methods of data collection for efficacy measures and for harms may be important. For example, if the trial report (e.g. its methods section) shows that the trialists explicitly asked participants about specific harms that then went unreported, the risk of bias will differ from a situation where the report indicates that only general questions about harms were asked.
Limited data exist on the impact of ORB on harms in systematic reviews. This is the motivation for the ORBIT II study, in which outcome reporting bias will be investigated in relation to harms. The project also aims to gain an understanding of how trialists collect and report harms data. This is essential to improve the ability of decision makers to make informed decisions that consider both the benefits and the harms of an intervention in an unbiased way.
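To illustrate the general idea behind the kind of sensitivity analysis described above, the following sketch (in Python) compares a pooled harm estimate from fully reporting trials with one in which trials suspected of ORB are re-included under a simple null-effect imputation. This is not the ORBIT method itself, and all trial data below are hypothetical; it is intended only to show how a pooled estimate can shift once suspected unreported harm data are taken into account.

    # Minimal illustrative sketch of a sensitivity analysis for ORB on a harm
    # outcome (not the ORBIT method itself); all trial data are hypothetical.
    import math

    def pooled_log_or(log_ors, variances):
        # Fixed-effect (inverse-variance) pooled log odds ratio and its standard error.
        weights = [1.0 / v for v in variances]
        estimate = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return estimate, se

    # Trials that fully reported the harm outcome: (log odds ratio, variance).
    reported = [(0.35, 0.04), (0.20, 0.09), (0.50, 0.06)]

    # Trials judged (e.g. from their methods sections) to have measured the harm
    # but not reported it; imputed here with a null effect and a conservative
    # variance -- one of many possible imputation choices.
    suspected_orb = [(0.0, 0.10), (0.0, 0.12)]

    for label, trials in [("Reported trials only", reported),
                          ("Including imputed ORB trials", reported + suspected_orb)]:
        est, se = pooled_log_or(*zip(*trials))
        lo, hi = est - 1.96 * se, est + 1.96 * se
        print(f"{label}: OR = {math.exp(est):.2f} "
              f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")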
References
Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 2004; 171(7): 735–740.
Smyth R, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ 2011; 342: c7153.
Aims and objectives
1) To estimate the prevalence and assess the impact of selective outcome reporting in trials within a cohort of published meta-analyses where the outcome is a harm.
2) To estimate the sensitivity and specificity of a method for assessing the likelihood of outcome reporting bias within trial reports in relation to harms (see the illustrative sketch after this list).
3) To compute the benefit-harm ratio in a selection of reviews that contain a meta-analysis of the primary harm outcome where ORB is suspected and look at the effect of ORB on this ratio.
4) To understand the mechanisms that may lead to incomplete reporting of harms data, in particular the selective or incomplete reporting of either significant or non-significant findings.
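For objective 2, the calculation itself is straightforward once each trial's ORB classification has been compared against a reference standard (for example, information obtained directly from trialists or from trial protocols). A minimal sketch follows, using invented counts purely to show the arithmetic; the 2x2 layout and the figures are assumptions, not study data.

    # Hypothetical sketch for objective 2: sensitivity and specificity of an ORB
    # risk classification against a reference standard. All counts are invented.

    def sens_spec(tp, fn, tn, fp):
        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
        return tp / (tp + fn), tn / (tn + fp)

    # 2x2 table: classification (high/low risk of ORB) versus reference standard
    # (harm outcome actually measured but suppressed, or not).
    tp, fn, tn, fp = 18, 4, 53, 7

    sensitivity, specificity = sens_spec(tp, fn, tn, fp)
    print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")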