Economic & Political Weekly EPW april 2, 2011 vol xlvi no 14. Page 23-26 http://epw.in/epw/uploads/articles/15901.pdf
Evidence Based Medicine: Making It Better
Jacob M Puliyel MD MPhil
Head of Pediatrics
St Stephens Hospital
Evidence Based Medicine (EBM) began as a 'bottom-up' paradigm that taught residents to search the literature for the best available evidence and to critically appraise it when making patient care decisions. As its popularity increased, a huge market evolved for readymade EBM summaries and reviews, and there is now a scramble to provide this service. Those who provide it come to wield tremendous influence and power. This article traces the evolution of this important tool and describes the pitfalls in how it is practiced. People in the health care field need to understand all these aspects of EBM if they are to exploit its potential for public health.
Evidence Based Medicine (EBM) is today a buzzword: it is used in the wider society outside its original, narrow technical context, often pretentiously and inappropriately, to impress and to make discourse appear esoteric and technically sound. EBM has an enchanting image that reaches out to researchers and scholars (Holmes et al 2006). It also has a ring of scientific authority that mesmerizes decision-makers and government officials, that health planners value, and that purchasers and payers find reassuring. This paper looks at the evolution of EBM and describes the pitfalls in how it is practiced. People in the health care field need to understand all these aspects of EBM if they are to exploit its potential for public health.
How It Started
EBM was originally developed as a method for teaching medical residents (Druss 2005). Keeping up to date with knowledge has become more difficult in the internet age. Coiera has shown how the exponential growth of information creates a poverty of attention (Coiera 2000). The low cost of producing poor quality information means that high quality information is drowned out, increasing the cost of finding specific information. It was estimated in 1992 that a dedicated doctor would have to study at least 17 papers every day of the year to keep abreast (Davidoff et al 1995). Alongside this glut of information and data, the cost of medical care also increased with the introduction of newer technologies, many of them of doubtful utility. These developments resulted in enormous variation in the standard and cost of care. It is in this milieu that the term EBM was coined at McMaster Medical School in Canada in the 1980s to 'make use of explicit search criteria to find the best available evidence' (Rosenberg & Donald 1995). EBM has been described by one of its leading lights, Dr David Sackett, as the conscientious, explicit and judicious use of current best research evidence in making decisions about the care of individual patients (Sackett et al 1996). It was expected that this would result in better care of patients. Costs would be curtailed by the avoidance of less useful technologies. Thus it began as a 'bottom-up' paradigm that taught residents to ask answerable and focused questions, to search the literature in a transparent and reproducible way to find the best evidence, and to critically appraise it in an explicit and structured manner, often using mathematical analyses to give a clear idea of the strength, statistical significance and possible clinical significance of the results. This article also describes some of the risks attendant on its spectacular success in capturing the public imagination.
It will touch on how vested interests have exploited its vulnerabilities.
The basic principles underlying the 'evidence-based' practice movement are that there is a hierarchy of evidence and that modern informatics can make the evidence available to practitioners at the point of care. Clinicians should seek evidence from as high in the appropriate hierarchy of evidence as possible (Guyatt et al 2000). This was seen as a major shift away from traditional medicine, which emphasized the expertise of the medical profession. The 'freestyle' nature of 'expert' critical appraisal was sought to be reined in (Malone et al 2002). This undercut the autonomy and authority of the doctor, and the variability in care that resulted from it, breaking the lockhold the profession had over how medicine is practiced and compensated (Healy 2006). It was tremendously appealing to those who sought to impose uniform standards to assess performance and cost effectiveness. However, EBM has had its critics. It was noted that the team that coined the term EBM considered using the phrase 'scientific medicine' but rejected it because it implied that other approaches were by definition unscientific (Guyatt 2002). They ignored the fact that the term 'evidence based medicine' carries a similar moral valence and linguistic slipperiness (Sehon & Stanley 2003). Holmes and colleagues have castigated EBM because it excludes alternate forms of knowledge (Holmes et al 2006).
Newer definitions of EBM now acknowledge that research evidence alone is not adequate to guide action. They emphasize that clinicians must use their expertise to assess the patient's problem and incorporate the patient's preferences and values into the research evidence before making management recommendations (Haynes et al 2002). It appears as if we have come full circle, giving the clinician preeminence again, so much so that Druss has lamented that the overly inclusive definition threatens to deprive the term of meaning (Druss 2005). Sehon and Stanley have argued that the new definition merely says that EBM is the wise use of the best evidence available (Sehon & Stanley 2003). They write that EBM defined in this manner cannot be thought of as revolutionary or even useful. After all, who could possibly be opposed to using the best evidence wisely (Sehon & Stanley 2003)? They suggest that the debate between EBM and alternate approaches can change medical practice only if EBM ceases to be described in this 'all embracing and vacuous' manner.
The heart of EBM is the use of evidence hierarchies, including randomized controlled trials (RCTs) and systematic reviews and meta-analyses of RCTs. Alternative approaches to medical practice also take into account the patient's condition and values, so this is not what separates EBM from the other approaches. What separates it is how it gives priority to certain forms of evidence (Sonnabend 2008). This essay will look primarily at the aspects that make EBM distinctive and revolutionary.
Systematic Reviews and Meta-analysis Versus Traditional Reviews
Traditionally, review articles were written for journals by 'experts'. Sonnabend writes that experts are often elevated to this rank by the marketing departments of drug manufacturers. It is not beyond conjecture, he says, that an expert has been created expressly to justify their claims (Sonnabend 2008). In such reviews, 'experts' state their opinion about the proper evaluation and management of a condition, supporting key conclusions with selected references; these reviews have been shown to be both non-reproducible and, as a scientific exercise, of low mean scientific quality (Sackett & Rosenberg 1995). Oxman and Guyatt found that adherence to simple scientific principles in reviews was inversely proportional to the self-professed expertise of the experts (Oxman & Guyatt 1993).
EBM provided the framework for systematic reviews, and the popularity of EBM has been helped by journals seeking explicit and transparent methods in reviews, with bias-free lists of citations. The hierarchy of evidence meant that the best evidence (that with the least chance of bias) was considered. Meta-analysis combines the results of several studies. In its simplest form, the output of a meta-analysis is a pooled effect size: a weighted average of the individual study effects, with weights typically related to the sample sizes of the studies. This aggregation of different studies helps overcome the problem of reduced statistical power in studies with small sample sizes.
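The weighted pooling described above can be sketched in a few lines of code. This is a minimal illustration of fixed-effect, inverse-variance pooling (one common way the "weights related to sample size" idea is formalised, since variance shrinks as sample size grows); the study numbers are invented for illustration and do not come from any real trial.

```python
# Illustrative fixed-effect meta-analysis: each study contributes its
# effect estimate weighted by the inverse of its variance, so larger
# (lower-variance) studies dominate the pooled result.
# All numbers below are hypothetical.

def pooled_effect(studies):
    """studies: list of (effect_size, variance) tuples.
    Returns (pooled effect, variance of the pooled estimate)."""
    weights = [1.0 / var for _, var in studies]
    total = sum(weights)
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / total
    pooled_var = 1.0 / total  # pooling shrinks the variance
    return pooled, pooled_var

studies = [(0.42, 0.09), (0.30, 0.04), (0.55, 0.25)]  # hypothetical trials
effect, var = pooled_effect(studies)
print(round(effect, 3), round(var, 3))  # → 0.358 0.025
```

Note how the pooled variance (0.025) is smaller than that of any single study, which is precisely the gain in statistical power that aggregation provides.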
Minor Flaws in EBM Concepts
For a meta-analysis to be meaningful, all studies need to be included, both those that showed benefit and those that did not. It is usually hard to publish studies that show no significant results. Studies that fail to show benefit are often not submitted for publication, and when they are submitted, editors seldom publish them. This 'file drawer problem', where non-significant study results are hidden away from general view in someone's file drawer, creates a serious base rate fallacy, a biased or skewed distribution of effect sizes, and an overestimation of the significance of the published studies (Rosenthal 1979). An attempt is being made to overcome the file drawer problem by making registration of clinical trials mandatory, but the benefits of such registries for meta-analysis have not yet been tested. It has also been suggested that the practice of weighting studies in a meta-analysis by their sample size, rather than by the size of the population they represent, may be misleading (Puliyel & Sreenivas 2005; Batham et al 2009).
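The sample-size versus population-size point can be made concrete with a toy calculation. The sketch below uses invented prevalence figures (not data from the cited papers): a large study conducted in a small population pulls the conventionally weighted pooled estimate far away from what population weighting would give.

```python
# Hypothetical illustration of the weighting critique: pooling study
# prevalences weighted by study sample size can differ sharply from
# pooling weighted by the size of the population each study represents.
# All figures are invented for illustration.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# (prevalence %, study sample size, population represented in millions)
studies = [
    (10.0, 2000, 5),    # large study drawn from a small population
    (2.0,  200, 100),   # small study drawn from a very large population
]
prevalences = [p for p, _, _ in studies]
by_sample = weighted_mean(prevalences, [n for _, n, _ in studies])
by_population = weighted_mean(prevalences, [m for _, _, m in studies])
print(round(by_sample, 2), round(by_population, 2))  # → 9.27 2.38
```

The sample-size-weighted estimate (9.27%) is dominated by the bigger study, while the population-weighted estimate (2.38%) reflects where most people actually live: a fourfold difference from the same two studies.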
This is best illustrated by the blockbuster painkiller (anti-inflammatory drug) rofecoxib (brand name Vioxx), which has since been withdrawn from the market. According to an editorial in the New England Journal of Medicine, the peer-reviewed literature was flooded with papers and RCTs from the employees of Merck and their consultants. Epidemiological studies raised concerns about myocardial infarction and stroke with Vioxx, but Merck claimed that only RCTs were suitable for determining whether there was any risk. There was an excess of 16 cases of myocardial infarction or stroke per 1,000 patients on the drug; 80 million people had received the drug before it was withdrawn (Topol 2004).
The appellation 'evidence based recommendations' does not necessarily mean that the recommendations are based on firm empirical data. It only means that the level of evidence is indicated alongside each recommendation. (See Box for 'quality of evidence' and 'probability of harm over good' and how reviewers' 'judgment' relates to the 'recommendations' made. It shows how recommendations based on opinion, not substantiated by any study data, can be presented as evidence-based consensus statements/recommendations.)
Do-it-yourself EBM Evolves in Complexity
With the popularity of EBM, the complexity of evaluating evidence has also increased. It is no longer an amateur enterprise. Multiple databases are explored, the references in the papers are further hand-searched for new references, clinical trials registers and conference proceedings are scrutinized, and pharmaceutical companies and individual researchers are contacted for unpublished data and ongoing trials. It is now felt that most practitioners are unable to keep up to date by learning evidence-based strategies themselves, but are willing to seek out EBM produced by others. Busy clinicians are provided a detailed report and, through a process of dumbing down EBM, also a one-line answer called the 'clinical bottom line' (Puliyel et al 2004). Thus there evolved a huge market for EBM summaries and reviews and a scramble to provide this service. Those who provide the service come to wield tremendous influence and power, and have introduced methodological refinements that make the process ever more complicated, to minimize competition from copycat startups.
Funding and Bias in Conclusions of RCT
Theoretically, well-blinded RCTs provide incontrovertible evidence. However, empirical evidence has shown repeatedly that randomised trials are more positive if funded by for-profit organizations (Davidson 1986; Kjaergard et al 2002; Djulbegovic et al 2000; Bekelman et al 2003; Lexchin et al 2003). Als-Nielsen and colleagues have shown that association with for-profit organizations had little effect on the treatment effect, but that conclusions were more positive owing to biased interpretation of trial results (Als-Nielsen et al 2003). Lundh and colleagues have shown that publication of industry-supported trials was associated with an increase in journal impact factors and revenue (Lundh et al 2010). Richard Smith, the former editor of the BMJ, has suggested that publishing one drug company sponsored RCT could yield a million dollars in sales of reprints alone (Smith 2010). According to Marcovitch, another BMJ editor, potential conflicts arise when a journal or publisher receives a substantial proportion of its income from reprints (23% for the Massachusetts Medical Society, publishers of the New England Journal of Medicine; 41% for The Lancet; 53% for the American Medical Association, publishers of JAMA) (Marcovitch 2010). There is therefore an obvious publication bias favoring drug trials sponsored by the pharmaceutical industry.
Funding of Systematic Reviews and Meta-analyses
Yank and colleagues found that it is not just RCTs that are biased by industry funding: even meta-analyses done by persons with financial ties to drug companies are likely to come to more favorable conclusions, although not to more favorable results (Yank et al 2007). It is therefore important that meta-analyses are done by not-for-profit organizations. The Cochrane Collaboration is a rapidly growing international group of researchers who form an unselfish collaboration to provide evidence from systematic searches (Sackett & Rosenberg 1995). However, as the group becomes bigger, it becomes easier for those with vested interests to infiltrate the organisation. The Cochrane review on surfactant illustrates the point clearly. Surfactant is a substance put into the airways of premature babies to help them breathe more easily. The drug is expensive, and meta-analysis showed that its use did not improve survival to hospital discharge. However, the Cochrane review says the drug reduces 'neonatal mortality' (Soll 2000). The author, who has declared conflicts of interest (payments in the past from many surfactant manufacturers), did a further analysis and found that more children survived the first 30 days of life (the neonatal age group). Although there was no difference in mortality prior to discharge from the hospital, he was able to write in the abstract that the drug reduces neonatal mortality, and in the conclusion that it reduces mortality. Although this anomaly has been publicized in the BMJ (Tiwari et al 2004), the misleading statement has not been revised in the updated meta-analysis (Soll & Özek 2010).
Dangers of Agenda-driven Bias
Wikipedia suggests that the most severe weakness and abuse of meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social or political agenda, such as the passage or defeat of legislation. "If a meta-analysis is conducted by an individual or organization with a bias or predetermined desired outcome, it should be treated as highly suspect or having a high likelihood of being junk science. From an integrity perspective, researchers with a bias should avoid meta-analysis and use a less abuse-prone (or independent) form of research" (Wikipedia 2011). However, reviews often ignore this warning. In the Indian context, the Cochrane Database of Systematic Reviews recently published a protocol that illustrates the point poignantly (Kapoor et al 2010). The protocol states that the rationale for the systematic review is a public interest petition in the Delhi High Court questioning the introduction of newer vaccines and vaccine combinations (DPT vaccine combined with hepatitis B vaccine and Haemophilus influenzae type b vaccine) into the public health system by the government, under the influence of vaccine manufacturers and international agencies like the World Health Organization (WHO), without proper epidemiological and clinical studies (Delhi High Court 2009). The ICMR and the National Technical Advisory Group on Immunization (India) are named as respondents. Yet the new review, to be done by the South Asian Cochrane Network, is to be performed by the very persons who were party to the impugned recommendation (Subcommittee 2009).
Holmes has written that EBM groups like the Cochrane Collaboration have a profound sense of entitlement, taking as a universal right the control of the scientific agenda. In a polarised world, it is as if you must either embrace them or be condemned as recklessly non-scientific (Holmes et al 2006). The picture may appear hopeless. Marcia Angell, editor of the NEJM for 20 years, writes: "It is simply no longer possible to believe much of the clinical research that is published or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine" (Angell 2009).
All is however not so bleak. Shakespeare has pointed out: ‘Though all things foul would wear the brows of grace, yet grace must still look so’ (Shakespeare Macbeth). Although a lot of junk science purports to be EBM, we must not discredit everything that carries the name. A healthy skepticism and more widespread appreciation of the misuses of the label will make EBM better. One hopes EBM will somehow reincarnate itself to live by its original bottom-up paradigm.
Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. (2003) Association of funding and conclusions in randomised drug trials. JAMA;290:921-928.
Angell M (2009) Drug companies and doctors: A story of corruption. The New York Review of Books 56. Available: http://www.nybooks.com/articles/archives/2009/jan/15/drug-companies-doctorsa-story-of-corruption/
Batham A, Gupta MA, Rastogi P, Garg S, Sreenivas V, Puliyel JM. (2009) Calculating prevalence of hepatitis B in India:Using population weights to look for publication bias in conventional meta analysis. Indian J of Pediatrics;76 1247-57.
Bekelman JE, Li Y, Gross CP. (2003) Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA;289:454-465.
Coiera E. (2000) Information economics and the internet. J Am Med Inform Assoc;7:215-21.
Davidoff F, Haynes B, Sackett D, Smith R. (1995) Evidence based medicine. BMJ;310:1085-6.
Davidson RA.(1986) Source of funding and outcome of clinical trials. J Gen Intern Med;1:155-158.
Delhi High Court. (2009) Writ Petition (Civil) No. 13698 of 2009. Public Interest Litigation. Available at http://delhihighcourt.nic.in/index.html 2009
Djulbegovic B, Lacevic M, Cantor A, et al. (2000) The uncertainty principle and industry-sponsored research. Lancet. 356:635-638.
Druss B. (2005) Evidence based medicine: does it make a difference? Use wisely. BMJ;330:92.
Guyatt GH, Haynes RB, Jaeschke RZ, et al., (2000) Users' Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users' Guides to patient care. Evidence- Based Medicine Working Group. JAMA; 284 (10):1290-1296.
Guyatt G. (2002) Preface in Users' Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice Guyatt G, Rennie D, eds. Chicago, IL 60610, AMA Press.
Haynes RB, Devereaux PJ, Guyatt GH. (2002) Clinical expertise in the era of evidence-based medicine and patient choice. Evid Based Med;7:36-38 doi:10.1136/ebm.7.2.36
Healy B. (2006) Who says what is best. http://health.usnews.com/usnews/health/articles/060903/11healy.htm
Holmes D, Murray SJ, Perron A, Rail G. (2006) Deconstructing the evidence-based discourse in health sciences: truth, power and fascism. Int J Evid Based Health.;4:180-6.
Kapoor AN, Tharyan P, Kant L, Balraj V, Shemilt I. (2010) Combined DTP-HBV vaccine versus separately administered DTP and HBV vaccines for primary prevention of diphtheria, tetanus, pertussis, and hepatitis B (Protocol). Cochrane Database of Systematic Reviews 2010, Issue 9. Art. No.: CD008658. DOI: 10.1002/14651858.CD008658. http://onlinelibrary.wiley.com/o/cochrane/clsysrev/articles/CD008658/frame.html
Kjaergard LL, Als-Nielsen B. (2002) Association between competing interests and authors' conclusions: epidemiological study of randomised clinical trials published in BMJ. BMJ.;325:249-252.
Lexchin J, Bero LA, Djulbegovic B, Clark O. (2003) Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ.;326:1167-1170.
Lundh A, Barbateskovic M, Hrobjartsson A, Gotzsche PC. (2010) Conflict of interest at medical journals: The influence of industry-supported randomised trials on journal impact factors and revenue – Cohort study. PLoS Medicine:7:e1000354 doi:10.1371/journal.pmed.1000354
Malone DE, Skehan SJ, MacEneaney PM, Staunton M, Schranz M (2002) Evidence based radiology.net http://www.evidencebasedradiology.net/ebr_overview/ebr_overview_notes.html
Marcovitch H (2010) Editors, Publishers, Impact Factors, and Reprint Income. PLoS Med 7(10): e1000355. doi:10.1371/journal.pmed.1000355
Oxman A, Guyatt GH. (1993) The science of reviewing research. Ann NY Acad Sci;703:125-34.
Puliyel J, Noopur Baijal, Dherain Narula (2004) Evidence-Based Investigation into the Relation Between Sexual Intercourse and Pregnancy http://adc.bmj.com/content/88/12/1135/reply#archdischild_el_742?sid=0ef6f4a3-575a-422c-8699-95871d4bef9d.
Puliyel J, Sreenivas V. (2005) Meta-analysis can be statistically misleading. EBM; 10:130.
Rosenthal R (1979). "The "File Drawer Problem" and the Tolerance for Null Results". Psychological Bulletin. 86 (3): 638–641. doi:10.1037/0033-2909.86.3.638
Rosenberg W, Donald A. (1995) Evidence based medicine: an approach to clinical problem solving. BMJ ;310:1122-6.
Sackett DL, Rosenberg WMC, Grey JAM, Haynes RB, Richardson WS. (1996) Evidence based medicine: what it is and what it isn't. BMJ;312:71.
Sackett DL, Rosenberg WMC. (1995) The need for evidence-based medicine J R Soc Med;88:620-24.
Sehon SR, Stanley DE. (2003) A philosophical analysis of the evidence-based medicine debate. BMC Health Serv Res;3:14.
Shakespeare W. Macbeth, Act IV scene iii.
Smith R (2010) On editors’ conflict of interests BMJ Blog 2 Nov 2010. http://blogs.bmj.com/bmj/2010/11/02/richard-smith-on-editors-conflicts-of-interest/#more-5395
Soll RF. (2000) Prophylactic natural surfactant extract for preventing morbidity and mortality in preterm infants (Cochrane Review). Cochrane Database Syst Rev 2000(2):CD000511.
Soll R, Özek E. (2010) Prophylactic animal derived surfactant extract for preventing morbidity and mortality in preterm infants. Cochrane Database of Systematic Reviews 2010. Art. No.: CD000511. DOI: 10.1002/14651858.CD000511.
Sonnabend JA. (2008) What constitutes expert opinion. http://www.bmj.com/content/312/7023/71.full/reply#bmj_el_242222
Subcommittee on introduction of Hib vaccine in the universal immunization programme, National Technical Advisory Group on Immunization, India, Kant L. (2009) NTAGI Subcommittee Recommendation on Haemophilus influenzae Type b (Hib) Vaccine Introduction in India. Indian Pediatr;46:945-54.
Tiwari L, Puliyel JM, Upadhyay P. (2004) Truth and evidence based medicine: spin is everything. BMJ;329:1043.
Topol EJ. (2004) Failing the public health -- Rofecoxib, Merck, and the FDA. N Engl J Med; 351:1707-9.
Wikipedia (2011) Meta-analysis http://en.wikipedia.org/wiki/Meta-analysis
Yank V, Rennie D, Bero LA. (2007) Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study. BMJ doi:10.1136/bmj.39376.447211.BE.
Box titles: 1. The 'file drawer problem' and bias of meta-analysis; 2. Harm from hierarchy of evidence; 3. 'Best available evidence' confers EBM status to dodgy science.