REVIEW ARTICLE
Year : 2021  |  Volume : 33  |  Issue : 2  |  Page : 55-61

How to write systematic review and meta-analysis


1 Department of Conservative Dentistry and Endodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
2 Department of Conservative Dentistry and Endodontics, SGT Dental College, Gurgaon, Haryana, India
3 Department of Conservative Dentistry and Endodontics, Manav Rachna Dental College, Faridabad, Haryana, India
4 Division of Conservative Dentistry and Endodontics, Post Graduate Institute of Medical Sciences, Chandigarh, India

Date of Submission: 04-Apr-2021
Date of Decision: 11-Apr-2021
Date of Acceptance: 18-Apr-2021
Date of Web Publication: 11-Jun-2021

Correspondence Address:
Dr. Vivek Aggarwal
Department of Conservative Dentistry and Endodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/endo.endo_86_21

  Abstract 


There has been a paradigm shift in the treatment options available to an endodontist. Many clinical as well as laboratory studies have been published over the last few decades, and a clinician often faces a problem in deciding a treatment plan for a particular clinical problem. Systematic reviews (SRs) and meta-analysis (MA) can help to find reliable data for a specific research question. In simple words, the SR and MA are research on all existing literature on a specific research question. The purpose of this article is to provide a step-by-step procedure for conducting SRs and MA.

Keywords: Literature review, meta-analysis, systematic review


How to cite this article:
Aggarwal V, Singla M, Gupta A, Mehta N, Kumar U. How to write systematic review and meta-analysis. Endodontology 2021;33:55-61

How to cite this URL:
Aggarwal V, Singla M, Gupta A, Mehta N, Kumar U. How to write systematic review and meta-analysis. Endodontology [serial online] 2021 [cited 2021 Oct 18];33:55-61. Available from: https://www.endodontologyonweb.org/text.asp?2021/33/2/55/318137




  Introduction


“Literature is the art of discovering something extraordinary about ordinary people and saying with ordinary words something extraordinary.” – Boris Pasternak

The Cochrane collaboration has defined systematic reviews (SRs) as “a scientific process where all empirical evidence that fits prespecified eligibility criteria is collated to answer a specific research question.”[1] The SR is designed with a set of objectives to perform a step-by-step search of included studies to provide reliable data. Usually, the SR is combined with meta-analysis (MA) to perform statistical tests on the combined data of the included studies.[2] The findings of SRs and MA are considered the highest level of evidence and are used for clinical decision-making and developing practice guidelines.[3] If conflicting reports are present on a similar topic, the MA can help to reach a suitable conclusion. However, as new clinical studies are performed, the SRs and MA should be updated to include the new data. The SR is often confused with the literature review (LR). LRs are descriptive, whereas SRs deploy a systematic and refined search of the literature using an a priori derived search strategy.[2] SRs are initiated after the authors identify a subset of studies with specific inclusion/exclusion criteria. On the other hand, LRs can involve almost every aspect of the selected topic, and their search criteria are usually not standardized. Since LRs include studies with different datasets, statistical analysis of the data is not possible.[4] Thus, LRs may be informative but cannot provide decisive help in clinical decision-making.

Using mathematical calculations, SRs and MA provide an effect size, or the quantitative estimate of the variables tested.[3] This is different from traditional prospective experimental designs, where statistical significance is derived between the control and the test groups. At this point, we should be aware that statistical significance is different from clinical significance. A test may provide statistically superior data, but that increase/decrease in the values may not be clinically significant. Let us take the example of an evaluation of a new technique to improve anesthetic success in the endodontic management of symptomatic mandibular molars. The success rate with a traditional inferior alveolar nerve block using 2% lidocaine with epinephrine is almost 40%.[5] If a new technique reports a success rate of 45% in a large sample, the difference between the control and test groups may be statistically significant. However, the clinician shall not benefit from a mere increase of 5% in the success rate. In simple words, clinical significance is different from statistical significance and depends upon the magnitude of the intervention effect. The MA extracts the effect size from the data of different trials (but with the same design). The effect size may be presented as odds ratios (OR), relative risk or risk ratios (RR), or standardized mean differences (SMD). The type of effect size depends upon the data available in the trials. If success rates are evaluated, OR or RR are reported. If means, along with standard deviations, are reported, the MA reports SMD values.
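To make the three effect measures concrete, the sketch below computes OR, RR, and SMD from summary data. This is a minimal illustration with hypothetical numbers based on the 40% versus 45% example above; actual MAs would use dedicated software such as RevMan.

```python
import math

def odds_ratio(events_t, n_t, events_c, n_c):
    """OR: odds of the event in the test group divided by odds in the control group."""
    a, b = events_t, n_t - events_t   # test group: events / non-events
    c, d = events_c, n_c - events_c   # control group: events / non-events
    return (a * d) / (b * c)

def risk_ratio(events_t, n_t, events_c, n_c):
    """RR: probability of the event in the test group divided by that in the control group."""
    return (events_t / n_t) / (events_c / n_c)

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """SMD (Cohen's d): mean difference scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical trial: 45/100 successes with the new technique vs. 40/100 controls
print(round(odds_ratio(45, 100, 40, 100), 2))  # 1.23
print(round(risk_ratio(45, 100, 40, 100), 2))  # 1.12
```

An OR or RR above 1 favors the test intervention when the counted event is “success”; whether such a difference is clinically meaningful is a separate judgment, as discussed above.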

A common query asked by most young researchers is: “what is the need for an SR and MA when I can find the answer to a clinical problem via a randomized controlled trial?” A randomized controlled trial (RCT) includes a limited number of patients (depending upon the sample size calculations) and randomly allocates them to the control and the test groups, reported utilizing the CONSORT guidelines.[6] Due to various factors, these trials can include only a certain number of patients, thus limiting the power of the study. The MA combines the data of various studies evaluating a similar question. The combined data shall have many times more patients than an individual study, and the analysis of these data can provide meaningful results with sufficient power.

Any SR and MA should follow the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, http://www.prisma-statement.org).[7],[8],[9] While conducting the SR, it is usually beneficial to refer to the PRISMA checklist.[7] A simple way to go ahead with an SR starts with the framing of a research question.


  Frame a Clear Research Question, Writing a Review Protocol, and Registration of the Protocol


Formulation of the research question is the first step of any investigative research. As with other research, the topic should be “FINER” in nature; that is, it should be Feasible, Interesting, Novel, Ethical, and Relevant.[10] A well-focused research question provides a better search and clear search criteria. However, it is always advisable to perform a small literature search to confirm that the question has not already been answered by a recent SR. This should include a simple Google search, along with a search of the Cochrane and PubMed databases. Also, check the primary registration database of SRs (PROSPERO) to find out whether a similar review is being carried out elsewhere.[11],[12] If there is a previous review on the same topic, and additional studies have been performed after the publication of that SR, the investigators can consider carrying out an updated review involving the new data.

The recommended step to formulate a research question is to utilize the PICOS framework.[13],[14] PICOS stands for participants, intervention, comparator, outcomes, and studies. A well-formatted research question looks like this: “To evaluate the efficacy of (intervention/comparison) for (health condition) in (clinical settings).” The investigators should explicitly define the intervention/comparator, populations, and outcomes. This helps the investigator include only the relevant data in the review. For example, the investigator can choose a research question like “Is articaine better than lidocaine?” The question looks feasible and innovative; however, the topic is vague. A more structured and scientific way to present it is “Is articaine (I) better than lidocaine (C) in achieving pulpal anesthesia (O) in patients with symptomatic irreversible pulpitis in mandibular molars (P)?” The investigator shall now include randomized clinical trials (S) dealing with this topic in the SR. It is recommended to narrowly define the intervention; this helps in the statistical evaluation of similar data during MA. After a topic has been selected, the inclusion/exclusion criteria for selecting the studies must be defined. These may include age of the patients, the extent of the disease, the clinical approach toward the investigation, secondary outcomes, and search dates. The study designs (randomized clinical trials, cohort studies, and case–control studies) to be included in the SR should also be specified. The exclusion criteria may include unrelated research, unavailable full texts, or abstract-only papers.[15]
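During protocol writing, it can help to keep the PICOS elements explicit rather than buried in a sentence. The sketch below is an illustrative structured record only (the field names are simply the PICOS elements; the example is the articaine versus lidocaine question above):

```python
from dataclasses import dataclass

@dataclass
class PICOS:
    population: str     # P: participants / clinical setting
    intervention: str   # I
    comparator: str     # C
    outcome: str        # O
    studies: str        # S: eligible study designs

    def question(self) -> str:
        # Assemble the structured question from the explicit elements
        return (f"Is {self.intervention} better than {self.comparator} in achieving "
                f"{self.outcome} in {self.population}?")

q = PICOS(
    population="patients with symptomatic irreversible pulpitis in mandibular molars",
    intervention="articaine",
    comparator="lidocaine",
    outcome="pulpal anesthesia",
    studies="randomized clinical trials",
)
print(q.question())
```

Keeping each element as a separate field makes it harder to leave a component vague, which is exactly the failure mode of the “Is articaine better than lidocaine?” phrasing.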

The SR should follow an a priori design. A protocol of the review should be made and registered in a suitable database.[9] The most common database is PROSPERO (https://www.crd.york.ac.uk/prospero), an open-access database of SR protocols on health-related topics.[11] Prospective registration of the protocol helps to prevent selective outcome reporting, since a record of the research protocol is maintained in an independent database; it also helps to prevent duplication of research.[12] Almost all scientific publications require SRs to be “prospectively” registered on PROSPERO or other databases. In addition to the research question and inclusion/exclusion criteria, the protocol also includes the background of the question, the proposed search strategy, the proposed risk of bias tools, and the proposed methods of MA.[9]


  Literature Search


While performing a literature search for relevant articles, the sources and databases should be defined before the beginning of the search.[15] The keywords or key search terms should be identified using similar studies on the same topic; the search strategies used in previously published SRs can also be used. It is advisable to use Medical Subject Headings (MeSH) terms while performing a search.[16] Usually, a search is initiated in PubMed and then modified for each specific database to retrieve the most relevant articles.

AMSTAR has laid down certain guidelines for SRs.[17],[18] Accordingly, the search should be conducted in at least two databases; however, the investigators are encouraged to search more databases to perform a comprehensive search of the relevant material. The databases are categorized into two types: general databases and subject-specific databases.[2] In dentistry, general databases are commonly used; these include PubMed/Medline, Embase, Scopus, Google Scholar, and Web of Science. The order of the databases to be searched depends upon the review question. For example, for a research question on clinical trials, the search should include Cochrane and mRCTs, and international clinical trial registries such as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry should be searched for unpublished and ongoing studies. The search should also involve data from dissertations, conference proceedings, and gray literature.[19] An important way of finding relevant articles is to trace the references of identified relevant studies and previous SRs on the same topic. Furthermore, one or more important journals can be hand-searched along with the database search. After the identification of databases, the search can be performed using the defined search terms and different Boolean operators (AND, OR, and NOT).[2] While performing the initial search, using too many search words should be avoided: some keywords may not be present in the title/abstract, which can lead to missing important articles. It may be advisable to use broad search terms (population and outcomes), combined with OR, to yield better results.
If the investigator has access to a reference management tool (EndNote, Zotero, Mendeley), the selected articles can be exported to it.[20] Duplicate articles are to be removed before screening.[10] Make sure to keep a record of the number of selected studies and the number of studies remaining after the exclusion of duplicates. The search strategy, databases, and date of search should also be recorded and mentioned in the SR. Some journals may require the full search strategies to be published as supplementary files.
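The search-building advice above (synonyms joined with OR inside each concept, concepts joined with AND) can be sketched as a small helper that assembles a search string. The terms below are illustrative only; a real strategy would also use MeSH terms and database-specific field tags:

```python
def build_search(*concept_groups):
    """Join synonyms with OR within each concept, then AND across concepts."""
    grouped = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    return " AND ".join(grouped)

query = build_search(
    ["irreversible pulpitis", "mandibular molar"],   # population
    ["pulpal anesthesia", "anesthetic success"],     # outcome
    ["articaine", "lidocaine"],                      # intervention/comparator
)
print(query)
# (irreversible pulpitis OR mandibular molar) AND (pulpal anesthesia OR anesthetic success) AND (articaine OR lidocaine)
```

Broadening a concept is then just a matter of adding a synonym to its list, which keeps the full strategy reproducible and easy to record for the supplementary files.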


  Title, Abstract, and Full-Text Screening


As per the Cochrane guidelines, at least two reviewers should select the relevant articles from the databases.[21],[22] The abstract screening should follow the a priori designed inclusion/exclusion criteria, and all aspects of the predesigned PICOS should be followed to exclude or include an abstract. The reviewers shall independently review the titles and abstracts of the relevant articles. In case of any dispute regarding the exclusion or inclusion of an article, the opinion of a third reviewer should be taken. All excluded abstracts should be marked with suitable reasons. If an abstract does not provide the required information, it should be marked as “unclear” and included for full-text review. Full texts of all the included abstracts should be downloaded. For this purpose, the authors can check the search engines that may provide a link for free full-text access. Websites such as ResearchGate may be utilized to request the full text from the corresponding authors. A request can also be made to the editor or the principal investigator (PI) to provide access to the full text or a link to purchase the article if required. The full texts should be screened similarly to the abstracts, including the full texts of abstracts marked as “unclear.”[10] At this stage, a definitive decision has to be taken to include or exclude each article, and the investigators should keep a record of excluded articles, with reasons. A crucial step in the full-text screening is to go through the reference lists of the selected articles; this shall widen the search and may help to include more relevant articles.


  Data Extraction


As with the abstract screening, the data extraction should be performed by at least two reviewers. This step is important, as a single reviewer may miss useful information. Any disagreement between the reviewers should be resolved by a third reviewer. The type of data that needs to be extracted depends upon the a priori designed protocol. As per the Cochrane guidelines, a minimum of the following data should be recorded for each article:

  1. The study identification: name of the first author, year of the publication
  2. Methods employed: study design (randomized/observational), methodology, enrolment period, duration, and number of centers
  3. Characteristics of the participants: Inclusion/exclusion criteria, number, age/sex, and lost to follow-up
  4. Characteristics of the disease: Specific features, duration, and diagnostic methods
  5. Intervention: Specific intervention technique for each test and control group, details of drug used, and route of delivery
  6. Outcomes: Definition of outcome, measurement time-point, and unit of measurements
  7. If the outcome is continuous: Mean and standard deviation in intervention and control groups
  8. If the outcome is binary: Number of people with outcome in intervention and control groups
  9. If the outcome is diagnostic test accuracy: True positive/negative, false-positive/negative
  10. Results: Outcomes (points 7 to 9), number of participants, sample size, and summary data
  11. Miscellaneous: Funding, risk of bias, and authors' conclusions/comments.


Each of the above data points requires a thorough evaluation of the included articles. Various data extraction tools or templates can be used; some of the recommended tools, which also help in quality assessment (risk of bias), are available for download. It is not mandatory to use an exact/fixed set of variables in a tool, and the investigators can modify the template according to their needs. The use of software, such as RevMan, can guide the investigators through the process of review.
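As a sketch, the minimum data set listed above can be organized as a simple per-study extraction record. The field names here are illustrative and should be adapted to the review protocol:

```python
def extraction_record():
    """One blank data extraction record per included study,
    mirroring the minimum items listed above."""
    return {
        "study_id": {"first_author": "", "year": None},
        "methods": {"design": "", "enrolment_period": "", "duration": "", "centers": None},
        "participants": {"criteria": "", "n": None, "age_sex": "", "lost_to_follow_up": None},
        "disease": {"features": "", "duration": "", "diagnostic_methods": ""},
        "intervention": {"test": "", "control": "", "drug": "", "route": ""},
        "outcomes": {"definition": "", "time_point": "", "unit": ""},
        "results": {"summary_data": None, "sample_size": None},
        "misc": {"funding": "", "risk_of_bias": "", "author_conclusions": ""},
    }

record = extraction_record()
record["study_id"] = {"first_author": "Smith", "year": 2020}  # hypothetical study
```

Using one record per study (in a spreadsheet or a small script like this) makes it easy for two reviewers to extract independently and then compare fields for disagreements.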


  Assessment of the Data Quality Including the Risk of Bias


The quality of a review is only as good as the quality of the included studies. In the case of SRs, bias in the studies can lead to over/underestimation of the effect size.[2] Thus, all included studies should be assessed for risk of bias, which can be performed using standardized critical appraisal tools.[23] Different tools are used for different study designs. For example, if the SR is evaluating randomized clinical trials, each included study is categorized as either low, high, or unclear risk of bias using the Cochrane Collaboration's risk of bias tool.[23] This tool has the following domains:

  • Selection bias, which includes random sequence generation bias and allocation concealment bias
  • Performance bias due to knowledge of the allocated intervention by study personnel and participants
  • Detection bias in assessing the outcome due to knowledge of the allocated intervention
  • Attrition bias due to incomplete data and exclusion from analysis
  • Reporting bias due to selective outcome reporting
  • Other sources of bias.


The RevMan software also includes these risk of bias domains. The data from each study can be fed into RevMan to create a risk of bias summary graph, which helps the reader to visually analyze the risk of bias for each domain in each included study. Similar tools are available for other study designs: the NIH tool for observational studies, the ROBINS-I tool for nonrandomized clinical trials, and the CARE tool for case reports.


  Data Synthesis


The data synthesis involves a description of the included studies, estimation of heterogeneity and publication bias, subgroup analysis, and MA.

The data synthesis begins with the description of the included studies.[10] This includes the final number of studies and their characteristics; these findings can also be summarized using a PRISMA flow chart. Before the MA, this assessment of the included studies helps the investigator to confirm the suitability of the MA: if the included studies have inherent problems, such as a high risk of bias, or are underpowered, the MA can give unreliable results. An important aspect of the included studies is heterogeneity.[24] There are always some differences among the included studies, most commonly due to different population set-ups. This can lead to variations among the data of the studies, commonly known as heterogeneity.[25] There are two major types of heterogeneity: clinical and statistical.[24] Clinical heterogeneity is due to differences in the population set-up or minor deviations in the interventions or methods of assessment of the outcomes. Statistical heterogeneity is the difference in the treatment effects of the individual studies. If two studies are carried out in different clinical set-ups and report different results, both clinical and statistical heterogeneity shall be present; however, if the results are similar, the statistical heterogeneity shall be low. Heterogeneity can be measured by different methods. Visual inspection of the forest plot can give an idea about the heterogeneity among the studies [Figure 1]. The I² score (values above 50% are usually considered substantial heterogeneity) can also provide information about the heterogeneity.[24] Kindly note that these methods check the statistical data and provide information regarding the statistical heterogeneity. If the data, confidence intervals, and effect sizes are similar, an I² score of zero is obtained.
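Statistical heterogeneity can be quantified from the per-study estimates themselves. The sketch below computes the I² statistic from Cochran's Q using inverse-variance weights (a simplified illustration with made-up numbers; RevMan reports these values automatically):

```python
def i_squared(effects, variances):
    """I^2 (%) from Cochran's Q for k study estimates (e.g. log odds ratios)
    with their sampling variances, using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    if q <= 0:
        return 0.0
    # I^2 is the share of total variability beyond what chance (df) would explain
    return max(0.0, (q - df) / q) * 100.0

# Identical estimates -> no statistical heterogeneity
print(i_squared([0.4, 0.4, 0.4], [0.10, 0.20, 0.10]))  # 0.0
# Conflicting estimates -> substantial heterogeneity (above the usual 50% threshold)
print(round(i_squared([0.1, 0.9], [0.05, 0.05]), 1))   # 84.4
```

This mirrors the point made above: similar effect sizes yield an I² of zero, while conflicting results push I² well above the 50% threshold usually taken as substantial heterogeneity.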
In this context, publication bias should also be reported.[26] Studies with clinically significant results are more likely to get published than studies with no or negative findings, which can lead to a bias in the published literature. To evaluate the publication bias, funnel plot [Figure 2] diagrams are generated.[26] The symmetry of the funnel plot can provide information regarding the presence/absence of studies with negative findings.
Figure 1: Representative example of a forest plot diagram. The studies are arranged in order of year of publication. The events represent the number of “failed” cases; accordingly, the forest plot generates the graph in favor of the treatment with fewer events. This setting can be changed in the RevMan software. The solid diamond at the bottom represents the effect size, and its width represents the confidence interval. The odds ratio has been calculated using a random-effects model with the Mantel-Haenszel method. The in-built risk of bias tool has been used: green represents low risk, red represents high risk, and unmarked represents unclear risk of bias

Figure 2: Representative example of a funnel plot diagram. Each dot represents a single study. The y-axis represents the standard error of the effect; studies with higher power are placed toward the top and vice versa. The x-axis represents the study results expressed as an odds ratio. The plot resembles an inverted funnel with reasonable symmetry, thus excluding the chance of a publication bias



If the heterogeneity and publication bias are high, the investigators can opt for a narrative analysis rather than a quantitative one. The narrative analysis involves the description of the individual studies and their main findings. If the bias and heterogeneity are low, information from different studies can be combined for quantitative analysis, also known as MA. It should be noted that MA can be performed only if data from more than one study are available for analysis. The MA provides information in terms of the effect size of the treatment, which corresponds to the difference in outcomes between the intervention and control groups. This quantitative analysis aims to find the size and direction of the effect. To perform the MA, various statistical programs can be used; the most common software is RevMan, which provides the effect size with 95% confidence intervals, along with a graphical representation in the form of a forest plot.

The forest plot helps in easy understanding of the combined data from the individual studies.[27] Each study is plotted as a square box whose area is proportional to the weight of the study, and across each study estimate runs a horizontal line that represents the confidence interval.[27] The individual boxes are plotted against a vertical centerline representing the no-effect mark. This centerline is marked as 1.0 when the study outcomes are binary; in this scenario, the effect size is measured in terms of RR or OR. In case the studies evaluate continuous variables, the effect measure is calculated in terms of SMD, and the centerline is marked as 0. The left side usually represents the side in favor of the treatment and vice versa; this, however, can be changed in the settings while generating the forest plot. The final treatment effect size is plotted as a solid diamond at the bottom of the graph. The position of this diamond helps us to understand the size and direction of the effect, while its width represents the confidence interval [Figure 1]. RevMan also calculates the statistical heterogeneity along with the forest plot calculations. An important aspect in performing the MA is the use of fixed- versus random-effects models.[2] These models differ in their assumption about uniformity among the studies. The fixed-effect model assumes that the study populations in the included articles are sufficiently uniform to draw conclusions that exposure and outcomes are related. The random-effects model relaxes this assumption and rather assumes that the studies are heterogeneous. Usually, random-effects models are employed in performing the MA. Some MAs also employ meta-regression analysis; in simple words, it finds the association between the exposure or the intervention and the outcome.
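The fixed- versus random-effects distinction can be sketched in a few lines. Both pool by inverse-variance weighting, but the random-effects variant (here the DerSimonian-Laird estimator, one common choice) adds a between-study variance tau² to each study's variance, which pulls the weights toward equality. The numbers are illustrative only:

```python
def pooled_effect(effects, variances, model="fixed"):
    """Inverse-variance pooled estimate. The random-effects variant adds the
    DerSimonian-Laird between-study variance tau^2 to each study variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    if model == "fixed":
        return fixed
    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_r = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_r, effects)) / sum(w_r)

# Homogeneous studies: both models agree
print(pooled_effect([0.4, 0.4], [0.1, 0.2], "fixed"))    # 0.4
print(pooled_effect([0.4, 0.4], [0.1, 0.2], "random"))   # 0.4

# Heterogeneous studies: random effects weights the studies more equally
print(round(pooled_effect([0.1, 0.9], [0.05, 0.10], "fixed"), 3))   # 0.367
print(round(pooled_effect([0.1, 0.9], [0.05, 0.10], "random"), 3))  # 0.469
```

When the studies agree, tau² is zero and the two models coincide; when they conflict, the random-effects estimate drifts toward the unweighted mean, reflecting the assumption that each study estimates its own true effect.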


  Grading the Evidence


The GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) tool helps in the appraisal of the quality of evidence/strength of recommendation in SRs.[28],[29],[30],[31] It rates the evidence in four categories, ranging from high to very low. Evidence from RCTs is marked as high quality, while observational data are marked as low quality; the GRADE tool allows upgrading of observational studies if they meet certain criteria. It assesses the quality of evidence for each outcome. For applying GRADE to an SR, the first step is to give an a priori ranking of “high” to randomized controlled trials and “low” to observational studies.[10],[30] The next step is to “upgrade” or “downgrade” the initial ranking. There are five reasons for downgrading the ranking: risk of bias (including lack of allocation sequence/concealment, lack of blinding, and large losses to follow-up); inconsistency (significant and unexplained variability in results from different trials); indirectness of evidence; imprecision (wide confidence intervals); and publication bias. There are three reasons to upgrade: a large effect; a dose-response relationship (the result is proportional to the degree of exposure); and when residual confounding would decrease the magnitude of the effect. Accordingly, the evidence of an SR can be categorized as high, moderate, low, or very low. The high category is marked when the investigator is confident that the effect in the study reflects the actual effect. Moderate implies that the effect is close to the true effect but may be substantially different. The low category means that the true effect may differ significantly from the estimate, and very low implies that the true effect is likely to be substantially different from the estimated effect.


  Disseminate the Findings


The analyzed data should be double-checked. The writing of the SR generally follows the pattern of a research article and includes an introduction, methods, results, discussion, and conclusion. The characteristics of each study, with the level of evidence, should be provided as a table. If MA is performed, a forest plot diagram, supplemented with a funnel plot, should be included in the manuscript. The manuscript should be prepared according to the PRISMA checklist, and a duly filled PRISMA flow diagram should be provided.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
  References

1. What Is a Systematic Review? Available from: https://handbook-5-1.cochrane.org/chapter_1/1_2_2_what_is_a_systematic_review.htm. [Last accessed on 2021 Apr 01].
2. Knoll T, Omar MI, Maclennan S, Hernández V, Canfield S, Yuan Y, et al. Key steps in conducting systematic reviews for underpinning clinical practice guidelines: Methodology of the European Association of Urology. Eur Urol 2018;73:290-300.
3. Mulrow CD. Rationale for systematic reviews. BMJ 1994;309:597-9.
4. Ryś P, Władysiuk M, Skrzekowska-Baran I, Małecki MT. Review articles, systematic reviews and meta-analyses: Which can be trusted? Pol Arch Med Wewn 2009;119:148-56.
5. Hargreaves KM, Keiser K. Local anesthetic failure in endodontics: Mechanisms and management. Endod Topics 2002;1:26-39.
6. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332.
7. Moher D. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann Intern Med 2009;151:264.
8. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: Explanation and elaboration. BMJ 2009;339:b2700.
9. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ 2015;350:g7647.
10. Tawfik GM, Dila KA, Mohamed MY, Tam DN, Kien ND, Ahmed AM, et al. A step by step guide for conducting a systematic review and meta-analysis with simulation data. Trop Med Health 2019;47:46.
11. Bernardo WM. PRISMA statement and PROSPERO. Int Braz J Urol 2017;43:383-4.
12. Schiavo JH. PROSPERO: An international register of systematic review protocols. Med Ref Serv Q 2019;38:171-80.
13. Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res 2014;14:579.
14. Huang X, Lin J, Demner-Fushman D. Evaluation of PICO as a knowledge representation for clinical questions. AMIA Annu Symp Proc 2006:359-63.
15. Harris JD, Quatman CE, Manring MM, Siston RA, Flanigan DC. How to write a systematic review. Am J Sports Med 2014;42:2761-8.
16. Yang H, Lee HJ. Research trend visualization by MeSH terms from PubMed. Int J Environ Res Public Health 2018;15:1113.
17. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009;62:1013-20.
18. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008.
19. Paez A. Gray literature: An important resource in systematic reviews. J Evid Based Med 2017;10:233-40.
20. Coar JT, Sewell JP. Zotero: Harnessing the power of a personal bibliographic manager. Nurse Educ 2010;35:205-7.
21. Furlan AD, Pennick V, Bombardier C, van Tulder M; Editorial Board, Cochrane Back Review Group. 2009 updated method guidelines for systematic reviews in the Cochrane Back Review Group. Spine (Phila Pa 1976) 2009;34:1929-41.
22. van Tulder M, Furlan A, Bombardier C, Bouter L; Editorial Board of the Cochrane Collaboration Back Review Group. Updated method guidelines for systematic reviews in the Cochrane Collaboration Back Review Group. Spine (Phila Pa 1976) 2003;28:1290-9.
23. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al.; Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
24. Melsen WG, Bootsma MC, Rovers MM, Bonten MJ. The effects of clinical and statistical heterogeneity on the predictive values of results from meta-analysis. Clin Microbiol Infect 2014;20:123-9.
25. Egger M, Smith GD, Phillips AN. Meta-analysis: Principles and procedures. BMJ 1997;315:1533-7.
26. Lin L, Chu H. Quantifying publication bias in meta-analysis. Biometrics 2018;74:785-94.
27. Kiran A, Crespillo AP, Rahimi K. Graphics and statistics for cardiology: Data visualisation for meta-analysis. Heart 2017;103:19-23.
28. Brignardello-Petersen R, Bonner A, Alexander PE, Siemieniuk RA, Furukawa TA, Rochwerg B, et al.; GRADE Working Group. Advances in the GRADE approach to rate the certainty in estimates from a network meta-analysis. J Clin Epidemiol 2018;93:36-44.
29. Brozek JL, Akl EA, Alonso-Coello P, Lang D, Jaeschke R, Williams JW, et al. Grading quality of evidence and strength of recommendations in clinical practice guidelines. Part 1 of 3. An overview of the GRADE approach and grading quality of evidence about interventions. Allergy 2009;64:669-77.
30. Granholm A, Alhazzani W, Møller MH. Use of the GRADE approach in systematic reviews and guidelines. Br J Anaesth 2019;123:554-9.
31. Guyatt GH, Oxman AD, Montori V, Vist G, Kunz R, Brozek J, et al. GRADE guidelines: 5. Rating the quality of evidence – publication bias. J Clin Epidemiol 2011;64:1277-82.

