

Korean J Anesthesiol. 2018 Apr; 71(2): 103–112.

Introduction to systematic review and meta-analysis

EunJin Ahn

1Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea

Hyun Kang

2Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea

Received 2017 December 13; Revised 2018 Feb 28; Accepted 2018 Mar 14.

Abstract

Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, the inclusion of studies with any biases or improperly assessed quality of evidence in systematic reviews and meta-analyses could yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. However, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides an easy introduction for clinicians on performing and understanding meta-analyses.

Keywords: Anesthesiology, Meta-analysis, Randomized controlled trial, Systematic review

Introduction

A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [1]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [2] (Fig. 1). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [3], and the appearance of registers such as the Cochrane Library's Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [5].

[Fig. 1]

In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses include various topics, such as comparing various treatments of postoperative nausea and vomiting [14,15], comparing general anesthesia and regional anesthesia [16–18], comparing airway maintenance devices [8,19], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [20–23], comparing the precision of various monitoring instruments [7], and meta-analysis of dose-response in various drugs [12].

Thus, literature reviews and meta-analyses are being conducted in diverse medical fields, and the aim of highlighting their importance is to help better extract accurate, good quality data from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect information may be obtained. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers of systematic reviews and meta-analyses.

Study Planning

It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [1]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2. We explain each of the stages below.

[Fig. 2]

Flowchart illustrating a systematic review.

Formulating research questions

A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word "similar" is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, selection of the research topic is based on logical evidence, and it is important to select a topic that is familiar to readers without clearly confirmed evidence [24].

Protocols and registration

In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO (http://www.crd.york.ac.uk/PROSPERO/), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.

Defining inclusion and exclusion criteria

Inclusion and exclusion criteria are set for the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.

Literature search and study selection

In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When there is an inconsistency in opinions, intervention is required via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [25].

Quality of evidence

However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [26]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute1). However, we are mostly focusing on meta-analyses that use randomized studies.

If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system (http://www.gradeworkinggroup.org/) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [27]. As shown in Table 1, the study limitations are evaluated using the "risk of bias" method proposed by Cochrane2). This method classifies bias in randomized studies as "low," "high," or "unclear" on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding of participants or investigators, incomplete outcome data, selective reporting, and other biases) [28].

Table 1.

The Cochrane Collaboration's Tool for Assessing the Risk of Bias [28]

Domain | Support for judgement | Review author's judgement
Sequence generation | Describe the method used to generate the allocation sequence in sufficient detail to allow for an assessment of whether it should produce comparable groups. | Selection bias (biased allocation to interventions) due to inadequate generation of a randomized sequence.
Allocation concealment | Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrollment. | Selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment.
Blinding | Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. | Performance bias due to knowledge of the allocated interventions by participants and personnel during the study.
| Describe all measures used, if any, to blind study outcome assessors from knowledge of which intervention a participant received. | Detection bias due to knowledge of the allocated interventions by outcome assessors.
Incomplete outcome data | Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group, reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors. | Attrition bias due to amount, nature, or handling of incomplete outcome data.
Selective reporting | State how the possibility of selective outcome reporting was examined by the review authors, and what was found. | Reporting bias due to selective outcome reporting.
Other bias | State any important concerns about bias not addressed in the other domains in the tool. | Bias due to problems not covered elsewhere in the table.
If particular questions/entries were prespecified in the review's protocol, responses should be provided for each question/entry.

Data extraction

Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable are different, the size and format of the outcomes are also different, and slight changes may be required when combining the data [29]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.

Data Analysis

The aim of a meta-analysis is to derive a conclusion with increased power and accuracy compared with what could be achieved in individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [30]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.

The pooled estimate is the outcome of the meta-analysis, and is typically explained using a forest plot (Figs. 3 and 4). The black squares in the forest plot are the odds ratios (ORs) and 95% confidence intervals in each study. The area of the squares represents the weight reflected in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
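As a concrete illustration of the per-study quantities shown in a forest plot, the following Python sketch computes an OR and its 95% confidence interval from a single study's 2x2 table; the counts are invented for illustration only.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a/b = events/non-events in the treatment group,
    c/d = events/non-events in the control group."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(15, 85, 25, 75)
# Here the interval contains OR = 1, so, as in the forest plot,
# no significant difference is found between the groups.
```

In a real meta-analysis, one such square (OR with its interval) is drawn per study, and the weights derive from the same standard errors.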

[Fig. 3]

Forest plot analyzed by two different models using the same data. (A) Fixed-effect model. (B) Random-effect model. The figure depicts individual trials as filled squares with the relative sample size and the solid line as the 95% confidence interval of the difference. The diamond shape indicates the pooled estimate and uncertainty for the combined effect. The vertical line indicates that the treatment group shows no effect (OR = 1). Moreover, if the confidence interval includes 1, then the result shows no evidence of difference between the treatment and control groups.

[Fig. 4]

Forest plot representing homogeneous data.

Dichotomous variables and continuous variables

In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used (Table 2).

Table 2.

Summary of Meta-analysis Methods Available in RevMan [28]

Type of data | Effect measure | Fixed-effect methods | Random-effect methods
Dichotomous | Odds ratio (OR) | Mantel-Haenszel (M-H), Inverse variance (IV), Peto | Mantel-Haenszel (M-H), Inverse variance (IV)
| Risk ratio (RR), Risk difference (RD) | Mantel-Haenszel (M-H), Inverse variance (IV) | Mantel-Haenszel (M-H), Inverse variance (IV)
Continuous | Mean difference (MD), Standardized mean difference (SMD) | Inverse variance (IV) | Inverse variance (IV)

MD = Absolute difference between the mean values in two groups

SMD = Difference in mean outcome between groups / Standard deviation of outcome among participants

The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of "0" for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than "0" means the new treatment method is less effective than the existing method, and a value greater than "0" means the new treatment is more effective than the existing method.
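To make the MD/SMD distinction concrete, here is a minimal Python sketch. It uses the pooled-standard-deviation form of the SMD (Cohen's d); note that some software applies a small-sample correction (Hedges' g) on top of this, and the numbers below are invented.

```python
import math

def mean_difference(m1, m2):
    """MD: absolute difference between group means (same units required)."""
    return m1 - m2

def smd(m1, sd1, n1, m2, sd2, n2):
    """SMD: mean difference between groups divided by the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Pain measured on a 0-10 scale in one study; the SMD expresses the
# effect in SD units so it can be pooled with studies using other scales.
d = smd(5.0, 1.0, 20, 4.0, 1.0, 20)  # -> 1.0
```

Dividing by the standard deviation is exactly what makes results in different units comparable, at the cost of losing the original clinical units.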

When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given outcome to occur in at least one patient. Based on Table 3, in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. The NNT can be obtained as the reciprocal, 1/ARR.

Table 3.

Calculation of the Number Needed to Treat from the Dichotomous Table

| Event occurred | Event not occurred | Sum
Intervention | a | b | a + b
Control | c | d | c + d
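Using the cell labels from Table 3, the NNT calculation described above can be sketched in a few lines of Python; the counts in the example are invented for illustration.

```python
def nnt(a, b, c, d):
    """Number needed to treat from the 2x2 counts of Table 3."""
    y = a / (a + b)  # event probability in the intervention group
    x = c / (c + d)  # event probability in the control group
    arr = x - y      # absolute risk reduction (ARR)
    return 1 / arr   # NNT is the reciprocal of the ARR

# 30% events in the control group vs. 20% in the intervention group:
# ARR = 0.10, so 10 patients must be treated to prevent one event.
round(nnt(20, 80, 30, 70))  # -> 10
```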

Fixed-effect models and random-effect models

In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation3), 2) Mantel-Haenszel estimation4), and 3) Peto estimation5).
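A minimal sketch of inverse variance-weighted pooling under a fixed-effect model follows; the study effects (e.g., log ORs or mean differences) and standard errors are invented for illustration.

```python
def fixed_effect_iv(effects, ses):
    """Inverse variance-weighted pooled estimate (fixed-effect model).
    Each study is weighted by 1/SE^2, so large studies dominate."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two equally precise studies: the pooled effect is simply their mean.
pooled, se = fixed_effect_iv([0.2, 0.4], [0.1, 0.1])
```

The single source of variation assumed here is within-study sampling error, which is why the weights depend only on each study's own standard error.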

A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method6) is generally used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [31] (Table 2). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [32].
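The DerSimonian and Laird method can be sketched as follows: Cochran's Q is computed from the fixed-effect fit, converted into an estimate of the between-study variance tau^2, and tau^2 is added to each study's own variance before re-weighting. The data are invented for illustration.

```python
def dersimonian_laird(effects, ses):
    """Random-effect pooled estimate with the DerSimonian-Laird tau^2."""
    w = [1 / se ** 2 for se in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    # Adding tau^2 to every study's variance flattens the weights,
    # which is why small studies lose less weight than in a fixed model.
    w_star = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    pooled_se = (1 / sum(w_star)) ** 0.5
    return pooled, pooled_se, tau2

pooled, se, tau2 = dersimonian_laird([0.1, 0.5], [0.1, 0.1])
```

With homogeneous data (Q no larger than its degrees of freedom), tau^2 is truncated to zero and the estimate reduces to the fixed-effect one.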

Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3, while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3, the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [33]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small-scale studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [34].

Heterogeneity

A homogeneity test examines whether the degree of heterogeneity is greater than would be expected to occur naturally when the effect size calculated from several studies is higher than the sampling error. This makes it possible to test whether the effect size calculated from several studies is the same. Three types of homogeneity tests can be used: 1) forest plot, 2) Cochrane's Q test (chi-squared), and 3) Higgins I² statistics. In the forest plot, as shown in Fig. 4, greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4, is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [35].

I² = 100% × (Q − df)/Q
Q: chi-squared statistic
df: degrees of freedom of the Q statistic

I², calculated as shown above, returns a value between 0 and 100%. A value less than 25% is considered to show strong homogeneity, a value of 50% is moderate, and a value greater than 75% indicates strong heterogeneity.
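The I² formula can be evaluated directly once Q is computed from the fixed-effect (inverse-variance) fit; the study data here are invented for illustration.

```python
def i_squared(effects, ses):
    """Higgins I^2 (%) from Cochran's Q with inverse-variance weights."""
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # Truncate at 0 when Q < df (no detectable heterogeneity)
    return max(0.0, 100 * (q - df) / q) if q > 0 else 0.0

i2 = i_squared([0.1, 0.5], [0.1, 0.1])  # ~87.5%: strong heterogeneity
```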

Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves performing a regression analysis of the pooled estimate for covariates at the study level, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.
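A univariate meta-regression can be sketched as inverse variance-weighted least squares of the study effects on one study-level covariate. Note this simplified sketch omits the between-study variance term that full methods (e.g., REML-based mixed-effects meta-regression) would include, and the data are invented.

```python
def meta_regression(effects, ses, covariate):
    """Fit effect ~ intercept + slope * covariate, weighted by 1/SE^2."""
    w = [1 / se ** 2 for se in ses]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, covariate)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    slope = (sum(wi * (x - xbar) * (y - ybar)
                 for wi, x, y in zip(w, covariate, effects))
             / sum(wi * (x - xbar) ** 2 for wi, x in zip(w, covariate)))
    intercept = ybar - slope * xbar
    return intercept, slope

# Three studies whose effects grow with, say, mean anesthetic dose:
# a non-zero slope would "explain" part of the heterogeneity.
intercept, slope = meta_regression([0.2, 0.4, 0.6], [0.1, 0.1, 0.1], [1, 2, 3])
```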

Publication bias

Publication bias is the most common type of reporting bias in meta-analyses. This refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used (Fig. 5). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of a publication bias (Fig. 5A) [29,36]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected (Fig. 5B). Second, to test publication bias statistically, Begg and Mazumdar's rank correlation test8) [37] or Egger's test9) [29] can be used. If publication bias is detected, the trim-and-fill method10) can be used to correct the bias [38]. Fig. 6 displays results that show publication bias in Egger's test, which has then been corrected using the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
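The core of Egger's test (footnote 9) is a regression of each study's standard normal deviate on its precision; a non-zero intercept signals funnel-plot asymmetry. The sketch below computes only the intercept (the full test adds a t-test on it), with invented data.

```python
def egger_intercept(effects, ses):
    """Intercept from regressing effect/SE on 1/SE (Egger's regression)."""
    y = [e / s for e, s in zip(effects, ses)]  # standard normal deviates
    x = [1 / s for s in ses]                   # precisions
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar

# A perfectly symmetric set of studies (same true effect at every
# precision) gives an intercept near zero, i.e., no asymmetry.
b0 = egger_intercept([0.3, 0.3, 0.3], [0.1, 0.2, 0.3])
```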

[Fig. 5]

Funnel plot showing the effect size on the x-axis and sample size on the y-axis as a scatter plot. (A) Funnel plot without publication bias. The individual plots are broader at the bottom and narrower at the top. (B) Funnel plot with publication bias. The individual plots are located asymmetrically.

[Fig. 6]

Funnel plot adjusted using the trim-and-fill method. White circles: comparisons included. Black circles: imputed comparisons using the trim-and-fill method. White diamond: pooled observed log risk ratio. Black diamond: pooled imputed log risk ratio.

Result Presentation

When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE (Table 4). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.

Tabular array iv.

The GRADE Evidence Quality for Each Outcome

Outcome | Quality assessment (N, ROB, Inconsistency, Indirectness, Imprecision, Others) | Number of patients: Palonosetron (%), Ramosetron (%) | RR (CI) | Quality | Importance
PON | 6, Serious, Serious, Not serious, Not serious, None | 81/304 (26.6), 80/305 (26.2) | 0.92 (0.54 to 1.58) | Very low | Important
POV | 5, Serious, Serious, Not serious, Not serious, None | 55/274 (20.1), 60/275 (21.8) | 0.87 (0.48 to 1.57) | Very low | Important
PONV | 3, Not serious, Serious, Not serious, Not serious, None | 108/184 (58.7), 107/186 (57.5) | 0.92 (0.54 to 1.58) | Low | Important

When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis for a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.

A common mistake when reporting results is, given a z-test P value greater than 0.05, to say there was "no statistical significance" or "no difference." When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be explained as "a significant difference in the effects of the two treatment methods." However, the P value may appear non-significant whether or not there is a difference between the two treatment methods. In such a situation, it is better to state "there was no strong evidence for an effect," and to present the P value and confidence intervals. Another common mistake is to think that a smaller P value is indicative of a more significant effect. In meta-analyses of large-scale studies, the P value is more greatly affected by the number of studies and patients included than by the significance of the results; therefore, care should be taken when interpreting the results of a meta-analysis.

Conclusion

When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.

Footnotes

1) http://www.ohri.ca.

2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies.

3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.

4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.

5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.

6) The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-analysis software.

7) An alternative random-effect model meta-analysis method that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are fewer than five studies with very unequal sizes, extra caution is needed.

8) The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [37].

9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [29].

10) If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.

References

one. Kang H. Statistical considerations in meta-analysis. Hanyang Med Rev. 2015;35:23–32. [Google Scholar]

2. Uetani K, Nakayama T, Ikai H, Yonemoto N, Moher D. Quality of reports on randomized controlled trials conducted in Japan: evaluation of adherence to the Consort statement. Intern Med. 2009;48:307–13. [PubMed] [Google Scholar]

3. Moher D, Melt DJ, Eastwood Due south, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–900. [PubMed] [Google Scholar]

4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34. [PubMed] [Google Scholar]

5. Willis BH, Quigley M. The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review. BMC Med Res Methodol. 2011;11:163. [PMC free article] [PubMed] [Google Scholar]

6. Chebbout R, Heywood EG, Drake TM, Wild JR, Lee J, Wilson M, et al. A systematic review of the incidence of and risk factors for postoperative atrial fibrillation following general surgery. Anaesthesia. 2018;73:490–8. [PubMed] [Google Scholar]

7. Chiang MH, Wu SC, Hsu SW, Chin JC. Bispectral Index and non-Bispectral Index anesthetic protocols on postoperative recovery outcomes. Minerva Anestesiol. 2018;84:216–28. [PubMed] [Google Scholar]

8. Damodaran S, Sethi S, Malhotra SK, Samra T, Maitra S, Saini V. Comparison of oropharyngeal leak pressure of air-Q, i-gel, and laryngeal mask airway supreme in adult patients during general anesthesia: A randomized controlled trial. Saudi J Anaesth. 2017;11:390–5. [PMC free article] [PubMed] [Google Scholar]

9. Kim MS, Park JH, Choi YS, Park SH, Shin S. Efficacy of palonosetron vs. ramosetron for the prevention of postoperative nausea and vomiting: a meta-analysis of randomized controlled trials. Yonsei Med J. 2017;58:848–58. [PMC free article] [PubMed] [Google Scholar]

10. Lam T, Nagappa M, Wong J, Singh M, Wong D, Chung F. Continuous pulse oximetry and capnography monitoring for postoperative respiratory depression and adverse events: a systematic review and meta-analysis. Anesth Analg. 2017;125:2019–29. [PubMed] [Google Scholar]

11. Landoni G, Biondi-Zoccai GG, Zangrillo A, Bignami E, D'Avolio S, Marchetti C, et al. Desflurane and sevoflurane in cardiac surgery: a meta-analysis of randomized clinical trials. J Cardiothorac Vasc Anesth. 2007;21:502–11. [PubMed] [Google Scholar]

12. Lee A, Ngan Kee WD, Gin T. A dose-response meta-analysis of prophylactic intravenous ephedrine for the prevention of hypotension during spinal anesthesia for elective cesarean delivery. Anesth Analg. 2004;98:483–90. [PubMed] [Google Scholar]

13. Xia ZQ, Chen SQ, Yao X, Xie CB, Wen SH, Liu KX. Clinical benefits of dexmedetomidine versus propofol in adult intensive care unit patients: a meta-analysis of randomized clinical trials. J Surg Res. 2013;185:833–43. [PubMed] [Google Scholar]

14. Ahn E, Choi G, Kang H, Baek C, Jung Y, Woo Y, et al. Palonosetron and ramosetron compared for effectiveness in preventing postoperative nausea and vomiting: a systematic review and meta-analysis. PLoS One. 2016;11:e0168509. [PMC free article] [PubMed] [Google Scholar]

15. Ahn EJ, Kang H, Choi GJ, Baek CW, Jung YH, Woo YC. The effectiveness of midazolam for preventing postoperative nausea and vomiting: a systematic review and meta-analysis. Anesth Analg. 2016;122:664–76. [PubMed] [Google Scholar]

16. Yeung J, Patel V, Champaneria R, Dretzke J. Regional versus general anaesthesia in elderly patients undergoing surgery for hip fracture: protocol for a systematic review. Syst Rev. 2016;5:66. [PMC free article] [PubMed] [Google Scholar]

17. Zorrilla-Vaca A, Healy RJ, Mirski MA. A comparison of regional versus general anesthesia for lumbar spine surgery: a meta-analysis of randomized studies. J Neurosurg Anesthesiol. 2017;29:415–25. [PubMed] [Google Scholar]

18. Zuo D, Jin C, Shan M, Zhou L, Li Y. A comparison of general versus regional anesthesia for hip fracture surgery: a meta-analysis. Int J Clin Exp Med. 2015;8:20295–301. [PMC free article] [PubMed] [Google Scholar]

19. Ahn EJ, Choi GJ, Kang H, Baek CW, Jung YH, Woo YC, et al. Comparative efficacy of the air-q intubating laryngeal airway during general anesthesia in pediatric patients: a systematic review and meta-analysis. Biomed Res Int. 2016;2016:6406391. [PMC free article] [PubMed] [Google Scholar]

20. Kirkham KR, Grape S, Martin R, Albrecht E. Analgesic efficacy of local infiltration analgesia vs. femoral nerve block after anterior cruciate ligament reconstruction: a systematic review and meta-analysis. Anaesthesia. 2017;72:1542–53. [PubMed] [Google Scholar]

21. Tang Y, Tang X, Wei Q, Zhang H. Intrathecal morphine versus femoral nerve block for pain control after total knee arthroplasty: a meta-analysis. J Orthop Surg Res. 2017;12:125. [PMC free article] [PubMed] [Google Scholar]

22. Hussain N, Goldar G, Ragina N, Banfield L, Laffey JG, Abdallah FW. Suprascapular and interscalene nerve block for shoulder surgery: a systematic review and meta-analysis. Anesthesiology. 2017;127:998–1013. [PubMed] [Google Scholar]

23. Wang K, Zhang HX. Liposomal bupivacaine versus interscalene nerve block for pain control after total shoulder arthroplasty: A systematic review and meta-analysis. Int J Surg. 2017;46:61–70. [PubMed] [Google Scholar]

24. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, et al. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD Statement. JAMA. 2015;313:1657–65. [PubMed] [Google Scholar]

25. Kang H. How to understand and conduct evidence-based medicine. Korean J Anesthesiol. 2016;69:435–45. [PMC free article] [PubMed] [Google Scholar]

26. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6. [PMC free article] [PubMed] [Google Scholar]

27. Dijkers M. Introducing GRADE: a systematic approach to rating evidence in systematic reviews and to guideline development. Knowl Translat Update. 2013;1:1–9. [Google Scholar]

28. Higgins JP, Altman DG, Sterne JA. Chapter 8: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.

29. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ. 1998;316:140–4. [PMC free article] [PubMed] [Google Scholar]

30. Higgins JP, Altman DG, Sterne JA. Chapter 9: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.

31. Deeks JJ, Altman DG, Bradburn MJ. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. In: Systematic Reviews in Health Care. In: Egger M, Smith GD, Altman DG, editors. London: BMJ Publishing Group; 2008. pp. 285–312. [Google Scholar]

32. IntHout J, Ioannidis JP, Borm GF. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method. BMC Med Res Methodol. 2014;14:25. [PMC free article] [PubMed] [Google Scholar]

33. Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database Syst Rev. 2007;(2):CD002755. [PMC free article] [PubMed] [Google Scholar]

34. Thompson SG. Controversies in meta-analysis: the case of the trials of serum cholesterol reduction. Stat Methods Med Res. 1993;2:173–92. [PubMed] [Google Scholar]

35. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. [PMC free article] [PubMed] [Google Scholar]

36. Sutton AJ, Abrams KR, Jones DR. An illustrated guide to the methods of meta-analysis. J Eval Clin Pract. 2001;7:135–48. [PubMed] [Google Scholar]

37. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–101. [PubMed] [Google Scholar]

38. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–63. [PubMed] [Google Scholar]


Articles from Korean Journal of Anesthesiology are provided here courtesy of Korean Society of Anesthesiologists


Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5903119/