DECONSTRUCTING META-ANALYSIS
Abstract: The authors suggest phasing out meta-analysis. We once tried a statistical technique that nominalized the data so that conclusions would be more cautious (Snell & Marsh, 2003). However, years later, we believe that meta-analysis is not viable, nor are our modifications. The literature review suggests that meta-analysis has become a subculture across the sciences, but that even with statistical and methodological modifications to reduce the heterogeneity of the data, meta-analysis is generally not accurate.
Introduction: Gene Glass, the founder of meta-analysis, has noted that his research and statistical procedure has been an incredible success since its origination over 30 years ago (glass.ed.asu.edu/papers/meta25.html).
For those not familiar with meta-analysis, we describe the research strategy as follows:
It is a quantified review of the literature that encapsulates numerous studies with various research protocols and differing samples, and collapses their numerical findings into a ratio analysis that implies probabilistic random sampling (Snell & Marsh, 2003).
Discussion:
Meta-analysis is every publicationist’s dream. It is a secondary analysis of many different studies, most of which have been published, on a specific topic. The researcher can utilize a research assistant to track down all or most of the studies conducted. The results are then collapsed into what appears to be one study, and an analysis of variance or a related statistic can be calculated. The studies that have been aggregated are apples and oranges in terms of research strategies and sample populations. If there is a question about the validity of some of the research, statistical adjustments are made to the analysis (Carey & Weintraub, 2007).
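Mechanically, the collapsing step is simple, which is part of its seduction. The sketch below is our own illustration in Python, with invented effect sizes and standard errors (not data from any study cited here); it shows the standard fixed-effect inverse-variance weighting that merges many studies into one estimate:

```python
# Fixed-effect meta-analysis by inverse-variance weighting.
# All numbers are invented for illustration only.

def pooled_effect(effects, std_errors):
    """Collapse per-study effect estimates into one weighted average."""
    weights = [1.0 / se ** 2 for se in std_errors]  # more precise studies count more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5         # standard error of the pooled estimate
    return pooled, pooled_se

# Three hypothetical "apples and oranges" studies:
effects = [0.40, 0.10, 0.75]      # e.g., standardized decreases in a depression score
std_errors = [0.10, 0.25, 0.30]   # larger SE = smaller or noisier study

estimate, se = pooled_effect(effects, std_errors)
print(round(estimate, 3), round(se, 3))  # → 0.393 0.089
```

The arithmetic always yields a single tidy estimate; nothing in it checks whether the studies were comparable in the first place, which is precisely our objection.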
As an example, consider the relationship between an anti-depressant and a measured decrease in depression across many different indexes and scales. The sample can include 400 nurses from South Dakota, 50 handicapped war veterans from Maine, 100 employees at a defense plant in Tennessee, and so on. What they have in common is that the studies were completed independently, with differing samples, over various periods, with non-probabilistic or even convenience samples. Or they may have been double-blind experimental/control subjects measured by medical researchers who may have different interpretations of the definition of depression. Last, some truly spurious research is included (Weintraub, 2008).
All of this is added together. The analysis is calculated and the author(s) make generalizations about the anti-depressant. This secondary analysis is inexpensive, easy to complete, and carries the connotation of validity and reliability. It connotes, even deifies, comprehensive and hard-to-understand research “science.” It is not.
It is too good to be true. The name of the analysis and the mystique attached to it give it credence well beyond what it deserves. We believe that it is a historic tool that should be laid to rest.
We are not alone. Carey (2007) makes the same point about cases in which there are small numbers of patients and the findings are merged with other studies through meta-analysis. He indicates that biostatisticians admit the inferiority of meta-analysis and that the best, most valid and reliable strategy is also the hardest and most expensive: drawing a random sample, using the most valid research protocols and statistical analyses for significance testing or for some cross-sectional relationship. That means an original sample of patients for all of whom the remaining strategies and analyses are the same.
Machtay, Kaiser, and Glatstein (1999) suggest that meta-analysis is really metaphysics. For meta-analysis to be truly valid, all the data must come from independent and randomized samples. The “file drawer” effect, in which studies are excluded because they were never published (perhaps because they found no significant difference and thus drew little publication interest), could also negatively influence the findings. One must further assume that all the aggregated studies are equally reliable, that the treatment stimulus is the same, and that the results are measured in the same way. The above authors reinforce their position in “Reality and Meta-analysis” (2000).
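The homogeneity assumptions listed above can at least be interrogated numerically. As a minimal sketch (again our own illustration with invented numbers, not drawn from any cited study), Cochran's Q and the I² statistic quantify how much the pooled studies actually disagree:

```python
# Cochran's Q and I^2: how much do the pooled studies disagree?
# All numbers are invented for illustration only.

def heterogeneity(effects, std_errors):
    """Return (Q, I2). Q is Cochran's heterogeneity statistic; I2 estimates
    the percentage of variation due to between-study differences rather
    than chance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.40, 0.10, 0.75], [0.10, 0.25, 0.30])
print(round(q, 2), round(i2, 1))  # → 2.79 28.4
```

A high I² flags exactly the “apples and oranges” problem; our contention is that the statistical adjustments made in response rarely rescue the pooled conclusion.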
Lelorier, Gregoire, Benhaddad, Lapierre, and Derderian (1997) compared 12 large (over 1,000 patients) randomized controlled trials with 19 meta-analytic studies on the same topics. The results disagreed 35% of the time. Further, in only 5 studies, or 12% of the time, was the point estimate in agreement. Significance testing was used at the .05 level. This study has been cited in 40 other articles (see the references to the Lelorier et al. article in the New England Journal of Medicine, per internet searches as of 2008).
Unfortunately, a search of other studies and opinions, which the reader can easily conduct on the internet, generally (with some exceptions) does not support our position. Our minority opinion varies slightly, but among others and ourselves there appears to be an overriding concern that meta-analysis lessens the public’s perception of valid and reliable research in medicine, education, or any discipline. Thus, scientific research has lost its believability. In other words, the public believes that any research can be conducted and published to “prove” anything.
Our stance is: if you are doing research, avoid meta-analysis. It will lessen support from the public and sully scientific research, because in popular periodicals the same studies can often be reported as valid and then, soon thereafter, as invalid. So what are the professional community and the public to accept?
At any rate, our minority voice is overwhelmed by the number of books, articles, software packages, research companies, and the like that have both financial and occupational prestige invested in meta-analysis. Like a virulent malignant cancer, it has metastasized throughout the body of science research.
To “improve” meta-analysis, the authors (Snell & Marsh, 2003) tried to minimize the flaws by constructing a “meta-cognitive analysis” in which studies on numerous subjects were analyzed with cautious nominal statistics. Alas, we found over the years that this is still too much of a stretch.
We come to the point that merging apples and oranges of meta-analysis sampling does not make a new fruit that will somehow resurrect, recover, recreate, and grow into validity. Meta-analysis just does not work in most cases.
We will be the first to recognize how small our voices are relative to the large subculture that dwells in the world of meta-analysis. However, change has to happen, and others may yet complement our position. All the adjustments to mixed research protocols and varying, sporadic samples, along with statistical adjustments, do not add up to validity. Research heterogeneity among the samples and procedures may one day cause a “tipping point” of change, and journals will no longer publish meta-analytic studies.
Conclusion: We have tried to demonstrate that meta-analysis is not a viable research strategy. In a review of the literature, one can easily find that there are more supporters of meta-analysis than critics. However, science is not a democracy, and we lend our voices to those who would phase out meta-analysis. It was exciting and easy to conduct. It connoted legitimacy and authority. However, it just does not bear fruit.
References:
Begley, S. (2008). Whitewashing toxic chemicals. Newsweek, 5/12, 39.
Bailar, J. (1998). Letter. New England Journal of Medicine, 338, 62.
Carey, J., & Weintraub, A. (2007). When medical studies collide. Businessweek. (businessweek.com/content/07_321b4045052.hrm?campaign)
Glass, G. glass.ed.asu.edu/papers/meta25.html
Helm, B. (2008). Online polls: How good are they? Businessweek, 6/16, 86.
Ioannidis, J., et al. (1998). Meta-analyses and large randomized controlled trials. New England Journal of Medicine, 338, 59-72.
Lelorier, J., Gregoire, G., Benhaddad, A., Lapierre, J., & Derderian, F. (1997). Discrepancies between meta-analyses and subsequent large randomized controlled trials. New England Journal of Medicine, 337, 536-542.
Machtay, M., Kaiser, L.R., & Glatstein, E. (1999). Is meta-analysis really meta-physics? Chest, 116, 539-542.
Machtay, M., et al. (2000). Reality and meta-analysis. Chest, 118, 835-836.
Snell, J., & Marsh, M. (2003). Meta-cognitive analysis: An alternative to literature reviews and meta-analysis for the sciences and the arts. Education, Winter, 364-367.
Suojanen, J. (1999). Different criteria create false positives. New England Journal of Medicine, 7/8, #2.
Weintraub, A. (2008). Doctors under the influence. Businessweek, 7, 43-44.
Weintraub, A. (2008). What doctors aren’t disclosing. Businessweek, 5/26, 32.