A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrated many of the pitfalls facing researchers on the topic – and those, including policy-makers, who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: quite simply, to find out whether use of e-cigs is correlated with success in quitting, which might suggest that vaping can help smokers quit. To do this they performed a meta-analysis of 20 previously published papers. That is, they did not conduct any new research on actual smokers or vapers, but instead tried to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted method of extracting truth from statistics in many fields, although – as we’ll see – it is one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as through the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not merely ineffective in smoking cessation, but actually counterproductive.
The result has, predictably, been uproar from the supporters of e-cigarettes in the scientific and public health community, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the U.S., who wrote that “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of the study represents a significant failure of the peer review system at this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested its “conclusions are tentative and sometimes incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not really scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it is necessary to go beneath the sensational 28% figure, and examine what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be much less susceptible to any distortions that might have crept into an individual investigation?
(This might happen, for example, by inadvertently selecting participants with a greater or lesser propensity to quit smoking because of some factor not considered by the researchers – an instance of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than just averaging out the totals, but that is the general idea. And even from that simplistic outline, it is immediately apparent where problems can arise.
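To give a flavour of what “more sophisticated than averaging” means in practice, a common approach is to pool each study’s odds ratio on the log scale, weighting by the inverse of its variance so that more precise studies count for more. The sketch below uses invented numbers purely for illustration – they are not drawn from the Kalkhoran/Glantz paper or any of the 20 studies it covers.

```python
import math

# Hypothetical per-study results: (odds ratio, 95% CI lower, 95% CI upper).
# These figures are invented for illustration only.
studies = [
    (0.8, 0.5, 1.3),
    (1.1, 0.7, 1.7),
    (0.6, 0.4, 0.9),
]

weights = []
effects = []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    # Standard error recovered from the 95% CI width on the log scale.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)  # inverse-variance weight: precise studies count more
    effects.append(log_or)

# Fixed-effect pooled estimate: weighted mean of the log odds ratios.
pooled_log_or = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_or = math.exp(pooled_log_or)
print(f"pooled odds ratio: {pooled_or:.2f}")
```

A pooled odds ratio below 1 would, on this reading, mean vapers were less likely to quit – which is exactly why everything hinges on whether the individual studies were measuring comparable things in the first place.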
If its results are to be meaningful, the meta-analysis must somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it is introducing distortions of its own.
Moreover, if the studies it is based on are themselves flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
That is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unfavourable view of e-cigarettes, against a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s call for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of them have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
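The apples-and-oranges problem is not just rhetoric; meta-analysts have standard diagnostics for it. Cochran’s Q and the derived I² statistic measure how much the individual study results disagree beyond what chance alone would produce. The sketch below uses invented log odds ratios and standard errors, purely to illustrate the calculation – a high I² is the statistical signature of the heterogeneity the Truth Initiative is complaining about.

```python
import math

# Hypothetical log odds ratios and standard errors for four studies.
# Invented numbers, not taken from any real study.
log_ors = [-0.22, 0.10, -0.51, 0.30]
ses = [0.24, 0.23, 0.21, 0.25]

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled effect.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_ors))
df = len(log_ors) - 1

# I^2: the share of total variation attributable to between-study
# heterogeneity rather than chance (0% = consistent, >50% = substantial).
i2 = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```

When I² is substantial, pooling the studies into a single number is exactly the kind of “quantitative synthesis of heterogeneous studies” the submission calls scientifically inappropriate.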
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often without control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it is based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also affect meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review of late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions asked by the San Francisco researchers and the ways they tried to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus the analysis by its nature excluded those who had taken up vaping and quickly quit smoking; if such people exist in large numbers, counting them would have made e-cigarettes look a much more successful route to quitting smoking.
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke want to give up combustibles. Naturally enough, those who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some who did manage to quit – while including people who have no intention of quitting anyway – would certainly seem likely to alter the results of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the analysis population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is also a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ publicity departments.