
  • Compares the outcomes of two or more studies to answer questions such as the efficacy of treatments.
  • The results of two or more studies are pooled and compared.
  • Capable of answering questions or testing hypotheses.
  • Starts from the assumption that the null hypothesis is valid.
  • Content components of a meta-analysis: study design, combinability, control of bias, statistical analysis, sensitivity analysis, and problems of applicability.
  • Evidence-gathering techniques must be described clearly: literature search, reference search, and contacting the authors of unpublished work.
  • Avoid publication bias: the tendency for articles reporting positive and/or “new” results to be published, and for negative or confirmatory results to go unpublished.
  • Synthesizes the evidence into a quantitative overall estimate (with confidence intervals) based on the results of the individual studies (a minimal sketch of such pooling follows this list).
  • According to JAMA guidelines, results of unpublished studies may be included provided they meet the same inclusion criteria as the published studies.
  • To guard against publication bias: determine the number of negative studies that would change the result of the meta-analysis from positive to negative.
  • Criteria for combining studies are the same as the criteria for multicenter trials: we are not restricted to randomized controlled trials. We can combine the results of studies that address the same prognostic factors, such as severity of illness, equal potency and administration of the intervention, equal detection of outcome events, and sufficient similarity of study subjects.
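
To make the pooled overall estimate concrete, here is a minimal sketch of fixed-effects (inverse-variance) pooling in Python. The study values are invented for illustration; a real meta-analysis would extract them from the included trials.

```python
import math

# Hypothetical study results as (effect estimate, standard error) pairs.
# The effects here are log odds ratios; all numbers are invented.
studies = [(0.42, 0.18), (0.30, 0.25), (0.55, 0.20)]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% CI on the log scale, then back-transformed to an odds ratio.
lo = math.exp(pooled - 1.96 * pooled_se)
hi = math.exp(pooled + 1.96 * pooled_se)
print(f"Pooled OR = {math.exp(pooled):.2f} (95% CI, {lo:.2f} to {hi:.2f})")
```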

Question: I want to know if we can combine the results of a prospective randomized clinical trial with a retrospective case-control study. My feedback: these would be completely different studies, as the study design and prognostic outcomes of a prospective randomized clinical trial would be very different from those of a retrospective case-control study.

I would prefer combining the results of a case-control study with those of a retrospective cohort study, as similar prognostic factors are analyzed. The results of a randomized clinical trial can be combined with those of a crossover trial, as both measure the same prognostic factors, except that the latter eliminates a separate control group (each patient serves as his or her own control).

  • To determine whether two or more studies can be combined, the following statistical analyses are used: the degree of heterogeneity, the effect size in each study, the sample size in each group, and whether the effect sizes from the different studies are homogeneous (a heterogeneity check is sketched after this list). If there is statistically significant heterogeneity, the results of the studies cannot be combined.
  • Fixed-effects and random-effects models are used to determine how different assumptions affect the results.
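
The usual heterogeneity check computes Cochran's Q statistic and the derived I² statistic, sketched below; the per-study estimates and standard errors are hypothetical.

```python
from scipy import stats

# Hypothetical per-study effect estimates and standard errors (invented).
estimates = [0.42, 0.30, 0.55]
ses = [0.18, 0.25, 0.20]

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
df = len(estimates) - 1
p_het = stats.chi2.sf(q, df)        # P value of the heterogeneity test
i2 = max(0.0, (q - df) / q) * 100   # I²: % of variation beyond chance

print(f"Q = {q:.2f} (df = {df}), P = {p_het:.2f}, I^2 = {i2:.0f}%")
# A significant Q (or a large I²) argues against combining the studies
# under a fixed-effects model, if at all.
```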

A meta-analysis needs to be updated as the results of newly published studies appear. The Cochrane Collaboration, for example, is an international effort in this regard.

Confidence interval: the range of numerical values within which one can be confident (usually 95% confident, corresponding to an α level of 0.05) that the population value the study is intended to estimate lies. The CI is an indication of the precision of an estimated population value.

Confidence intervals (CIs) used to estimate a population value are usually symmetric or nearly symmetric around the value, but CIs for relative risks and odds ratios may not be. Confidence intervals are preferable to P values because they convey information about precision as well as statistical significance. For such ratio measures, if the CI does not overlap 1, the result is significant (P < 0.05); if the CI overlaps 1, the results are consistent with the null hypothesis (equivalent to P > 0.05); if a CI boundary equals 1, then generally P = 0.05. In all cases, the point estimate should be contained within the CI (although if the CI is very narrow, the rounded-off CI limits may be identical to the point estimate).
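
The asymmetry arises because ratio measures are analyzed on the log scale: the interval is symmetric around log(OR), not around the OR itself. A brief sketch with an invented 2 × 2 table:

```python
import math

# Hypothetical 2 x 2 table: exposed (a events, b non-events) vs
# unexposed (c events, d non-events). All counts are invented.
a, b, c, d = 30, 70, 15, 85

odds_ratio = (a * d) / (b * c)
# Wald 95% CI for the log odds ratio, then exponentiate back:
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)

# The printed interval looks lopsided around the OR, even though it is
# perfectly symmetric on the log scale.
print(f"OR = {odds_ratio:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
```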

Confidence intervals are expressed with “to” or a hyphen separating the 2 values. To avoid confusion, a hyphen is not used if 1 of the values is a negative number. Units that are closed up with the numeral are repeated for each CI value; those that are not closed up are given only with the last numeral.

For example: the odds ratio was 3.1 (95% CI, 2.2–4.8); the prevalence of disease in the population was 1.2% (95% CI, 0.8%–1.6%).
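
These conventions are mechanical enough to encode in a small helper (the function and its arguments are ours, written purely for illustration):

```python
def format_ci(lo, hi, unit="", closed_up=True):
    """Format a 95% CI per the rules above: units closed up with the
    numeral (e.g., %) are repeated for each value; spaced-out units
    appear only after the last value. (Illustrative helper only.)"""
    if unit and closed_up:
        return f"95% CI, {lo}{unit} to {hi}{unit}"
    suffix = f" {unit}" if unit else ""
    return f"95% CI, {lo} to {hi}{suffix}"

print(format_ci(2.2, 4.8))                               # 95% CI, 2.2 to 4.8
print(format_ci(0.8, 1.6, unit="%"))                     # 95% CI, 0.8% to 1.6%
print(format_ci(10, 20, unit="mm Hg", closed_up=False))  # 95% CI, 10 to 20 mm Hg
```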

P value: probability of obtaining the observed data (or data that are more extreme) if the null hypothesis were exactly true.

While hypothesis testing often results in a P value, the P value by itself conveys nothing about the magnitude or precision of the observed effect. Confidence intervals (CIs) are much more informative, since they provide a plausible range of values for the unknown parameter as well as some indication of the power of the study, as reflected in the width of the CI.

For example, an odds ratio of 0.5 with a 95% CI of 0.05–4.5 indicates to the reader the [im]precision of the estimate, whereas P = 0.63 does not provide such information.
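
To sketch this point in code, the counts below are invented to roughly mimic the example (they are not the source of the quoted numbers). A Wald test on the log odds ratio yields a nonsignificant P value, while the CI makes the imprecision visible:

```python
import math
from scipy import stats

def or_ci_p(a, b, c, d):
    """Odds ratio, Wald 95% CI, and 2-tailed Wald P from a 2 x 2 table."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    p = 2 * stats.norm.sf(abs(math.log(or_) / se))
    return or_, lo, hi, p

# A tiny hypothetical study: the bare P value hides how little the data say.
print("OR = %.2f (95%% CI, %.2f to %.2f), P = %.2f" % or_ci_p(1, 9, 2, 8))
```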

Confidence intervals are preferred whenever possible. Including both the CI and P value provides more information than either alone. This is especially true if the CI is used to provide an interval estimate and the P value to provide the results of hypothesis testing.

When any P value is expressed, it should be clear to the reader which parameters and groups were compared, which statistical test was performed, what the degrees of freedom (df) were, and whether the test was 1-tailed or 2-tailed.
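
For instance, a report meeting these requirements might be generated as follows (hypothetical blood pressure data; a 2-tailed, equal-variance t test):

```python
from scipy import stats

# Hypothetical systolic blood pressure values (mm Hg) in two groups.
group_a = [128, 132, 125, 140, 135, 130, 127, 133]
group_b = [138, 142, 136, 145, 139, 141, 137, 143]

t, p = stats.ttest_ind(group_a, group_b)   # 2-tailed, equal-variance t test
df = len(group_a) + len(group_b) - 2       # df for the pooled-variance t test

print(f"Systolic BP, group A vs group B, 2-tailed t test: "
      f"t({df}) = {t:.2f}, P = {p:.3f}")
```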

When expressing P values in manuscripts and articles, the actual value of P should be given to 2 digits for P ≥ 0.01, whether or not P is significant. (If rounding a P value to 2 digits would make it appear nonsignificant, such as P = 0.049 rounded to P = 0.05, the P value may be left at 3 digits.)

If P < 0.01, P should be expressed to 3 digits. The actual P value should be given (e.g., P = 0.04) rather than a statement of inequality (e.g., P < 0.05), unless P < 0.001. Expressing P to more than 3 significant digits adds no useful information beyond P < 0.001, because precise P values for extreme results are sensitive to biases and departures from the statistical model.

In a meta-analysis, P values should not be listed simply as not significant (NS), since we deal with actual values; withholding exact P values is a form of incomplete reporting. Because the P value represents the result of a statistical test and not the strength of the association or the clinical importance of the result, P values should be described simply as statistically significant or not significant; terms such as highly significant or very highly significant should be avoided.

If P < 0.00001, P should be expressed as P < 0.001. If P > 0.999, P should be expressed as P > 0.99.
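
Taken together, the formatting rules in this section can be collected into one small helper (ours, for illustration only):

```python
def format_p(p):
    """Format a P value following the rules above (illustrative helper)."""
    if p < 0.001:
        return "P < 0.001"       # never reported more precisely than this
    if p > 0.999:
        return "P > 0.99"
    if p < 0.01:
        return f"P = {p:.3f}"    # 3 digits below 0.01
    two = round(p, 2)
    # Keep 3 digits if rounding would push a significant P up to 0.05.
    if p < 0.05 <= two:
        return f"P = {p:.3f}"
    return f"P = {two:.2f}"

for p in (0.000004, 0.0042, 0.049, 0.04, 0.63, 0.9995):
    print(format_p(p))
# -> P < 0.001, P = 0.004, P = 0.049, P = 0.04, P = 0.63, P > 0.99
```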

Significance: statistically, the testing of a hypothesis that an effect is not present. A significant result rejects the null hypothesis. Statistical significance is highly dependent on sample size and provides no information about the clinical significance of the result. Clinical significance, on the other hand, involves a judgment as to whether the risk factor or intervention studied would affect a patient’s outcome enough to make intervention worthwhile. The level of clinical significance considered important is sometimes defined prospectively (often by consensus of a group of physicians) as the minimal clinically important difference, but the cutoff is arbitrary.
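
A quick simulation illustrates the dependence on sample size: the same clinically trivial true difference (assumed here to be 1 mm Hg) is usually undetectable in a small sample but becomes statistically significant in a very large one (all parameters below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two blood pressure distributions whose true means differ by 1 mm Hg.
for n in (50, 50_000):
    a = rng.normal(130, 15, n)
    b = rng.normal(131, 15, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>6} per group: P = {p:.3f}")

# With n = 50 the difference is typically not significant; with n = 50,000
# it is, yet a 1 mm Hg shift remains clinically unimportant.
```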

Degrees of freedom (df): the number of independent comparisons that can be made among the members of a sample. In a contingency table, df is calculated as (number of rows − 1) × (number of columns − 1).

The df should be reported as a subscript after the related statistic, such as the t test, analysis of variance, and χ² test (e.g., χ₃² = 17.7, P = 0.2; in this example, the subscript 3 is the number of df).

Contingency table: a table created when categorical variables are used to calculate expected frequencies in an analysis and to present the data, especially for a χ² test (2-dimensional data) or log-linear models (data with at least 3 dimensions). A 2 × 3 contingency table has 2 rows and 3 columns. The df are calculated as (number of rows − 1) × (number of columns − 1). Thus, a 2 × 3 contingency table has 6 cells and 2 df.
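
For example, with SciPy's chi2_contingency on an invented 2 × 3 table (the df come out to (2 − 1)(3 − 1) = 2):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 table: 2 treatment groups x 3 outcome categories.
table = np.array([[20, 30, 50],
                  [35, 25, 40]])

chi2, p, df, expected = chi2_contingency(table)
print(f"chi-square({df}) = {chi2:.1f}, P = {p:.3f}")   # df = 2 here
```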

Fixed-effects model: a model used in meta-analysis that assumes that differences in treatment effect among the studies are due to chance and that all the studies estimate the same true difference. This is often not the case, but the model assumes that it is close enough to the truth that the results will not be misleading.

Random-effects model: a model used in meta-analysis that assumes that there is a universe of conditions and that the effects seen in the studies are only a sample, ideally a random sample, of the possible effects.
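
A sketch contrasting the two models on invented data, using the DerSimonian-Laird estimator for the between-study variance τ²:

```python
# Hypothetical per-study log odds ratios and standard errors (invented).
estimates = [0.42, 0.10, 0.80]
ses = [0.15, 0.18, 0.20]

w = [1 / s ** 2 for s in ses]
fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2:
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(estimates) - 1)) / c)

# Random-effects weights add tau^2 to each study's variance, so large
# studies dominate less than they do under the fixed-effects model.
w_re = [1 / (s ** 2 + tau2) for s in ses]
random_eff = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)

print(f"fixed-effects estimate  = {fixed:.3f}")
print(f"random-effects estimate = {random_eff:.3f} (tau^2 = {tau2:.3f})")
```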