New meta-analytical paper: Identifying who benefits most from treatments
20 Mar 2017
Meta-analysis is often the only way to reliably detect whether treatment benefit differs between groups of participants. When testing for such participant-level interactions in a meta-analysis, a 'deft' approach to analysis and presentation is preferable, according to a recent paper published in the BMJ’s Research Methods and Reporting section.
The paper, penned by David Fisher and colleagues, identifies and critiques three general approaches: deft, daft and deluded. It concludes that only the deft approach avoids ecological bias and hence provides robust results. Because it offers greater statistical power than any single trial, meta-analysis is often the only practical way to detect such interactions and hence to inform stratified medicine. And since systematic reviews are widely held to be the gold standard of evidence, it is crucial to avoid methodology that could lead to misleading conclusions.
The paper is underpinned by a published case study that describes the three approaches and details why the daft and deluded methods may be at risk of bias. The paper also indicates that the vast majority of recent reviews either used a deluded approach or gave inadequate methodological information. As the paper demonstrates, this is a problem, because deluded analyses are more likely to show a statistically significant result. Since true, clinically meaningful interactions are rare, reviewers using deluded methods may be tempted to give undue prominence to weak results, with potential consequences for clinical decision making.
Good presentation is fundamental to accurate interpretation, especially for less familiar approaches. A deft analysis can be clearly presented and readily interpreted using familiar techniques such as forest plots and heterogeneity statistics. Author David Fisher has provided a Stata package that makes such analysis and presentation straightforward. If you have any questions or thoughts about the deft approach or the use of the Stata package, please get in touch.
An example based on real data, showing how a deft analysis can give a substantially different conclusion from the equivalent deluded or daft analysis.
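For readers who would like to see what the deft calculation involves in practice, the sketch below is a minimal, purely illustrative Python example; it is not the authors' Stata package. It pools within-trial interaction estimates with fixed-effect inverse-variance weights (the deft approach) and contrasts this with pooling each subgroup separately across all trials and then comparing the pooled results (the deluded approach). The trial effect sizes and standard errors are invented for demonstration only.

import math

# Hypothetical log odds ratios and standard errors for the treatment effect in two
# participant subgroups (A and B) from four invented trials. The numbers are made up
# purely to demonstrate the arithmetic; they are not taken from the BMJ paper.
trials = [
    # (logOR_A, se_A, logOR_B, se_B)
    (-0.40, 0.20, -0.10, 0.22),
    (-0.35, 0.25, -0.05, 0.24),
    (-0.50, 0.30,  0.00, 0.28),
    (-0.30, 0.18, -0.15, 0.20),
]

def pool_fixed(estimates, ses):
    """Fixed-effect inverse-variance pooling: returns (pooled estimate, pooled SE)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

def z_and_p(estimate, se):
    """Normal-approximation z statistic and two-sided p value."""
    z = estimate / se
    return z, math.erfc(abs(z) / math.sqrt(2.0))

# Deft: estimate the subgroup interaction WITHIN each trial, then pool those estimates.
inter_est = [a - b for a, _, b, _ in trials]
inter_se = [math.sqrt(sa ** 2 + sb ** 2) for _, sa, _, sb in trials]
deft_est, deft_se = pool_fixed(inter_est, inter_se)

# Heterogeneity of the within-trial interaction estimates (Cochran's Q and I-squared),
# the statistics normally displayed alongside a forest plot of the interactions.
w = [1.0 / se ** 2 for se in inter_se]
q = sum(wi * (e - deft_est) ** 2 for wi, e in zip(w, inter_est))
i2 = 100.0 * max(0.0, (q - (len(trials) - 1)) / q) if q > 0 else 0.0

# Deluded: pool each subgroup separately across all trials, then compare the pooled results.
pooled_a, se_a = pool_fixed([t[0] for t in trials], [t[1] for t in trials])
pooled_b, se_b = pool_fixed([t[2] for t in trials], [t[3] for t in trials])
deluded_est, deluded_se = pooled_a - pooled_b, math.sqrt(se_a ** 2 + se_b ** 2)

for name, est, se in (("deft", deft_est, deft_se), ("deluded", deluded_est, deluded_se)):
    z, p = z_and_p(est, se)
    print(f"{name:8s} interaction = {est:+.3f} (SE {se:.3f}), z = {z:+.2f}, p = {p:.3f}")
print(f"deft heterogeneity: Q = {q:.2f} on {len(trials) - 1} df, I2 = {i2:.0f}%")

With complete, balanced data such as these, the two approaches give similar answers; the deluded comparison can diverge, and risks ecological bias, when subgroup composition or other trial-level characteristics vary between trials, because it mixes within-trial and across-trial information. The deft estimate is built from within-trial comparisons only, which is also why it can be displayed on an ordinary forest plot alongside the usual heterogeneity statistics.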
Read the BMJ paper