Meta-cognitive Analysis: An Alternative to Literature Reviews and Meta-analysis for the Sciences and the Arts
Introduction:
The authors introduce a new method of analysis that combines qualitative and quantitative methods to help researchers analyze data when they do not have national random samples.
Review:
Glass notes in his “Meta-analysis at 25” that he could not believe the success of his statistical method and the number of entries on the Internet that use meta-analysis.
(Glass.ed.asu.edu/gene/papers/meta25.html) His original idea was to question Eysenck’s literature review on psychotherapy. Glass had found inner peace with therapy, while Eysenck’s review declared the whole of talk therapy a fraud or a placebo. Glass reviewed the same studies and others, aggregating the numbers into those in the direction of successful outcomes and those that found no difference. To control for bias due to larger numbers in some samples as opposed to others, he homogenized the data by using a measure of central tendency over variability: the difference between the two group means was divided by the standard deviation, and, in academic jargon, he standardized the data and used a “t” or “F” test (depending on the number of studies).
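As a minimal sketch of that standardization step, assuming hypothetical outcome scores (Glass's own formulation divides the difference in group means by the control group's standard deviation):

    import statistics

    # Hypothetical outcome scores for a treated group and a control group.
    treatment = [14.0, 12.5, 15.0, 13.5, 16.0]
    control = [11.0, 12.0, 10.5, 13.0, 11.5]

    # Glass's standardized effect size: mean difference over the
    # control group's standard deviation.
    delta = (statistics.mean(treatment) - statistics.mean(control)) / statistics.stdev(control)
    print(f"effect size: {delta:.2f}")  # about 2.70 for these made-up numbers

Each study's result, once expressed this way, can be pooled with the others regardless of the original measurement scales.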
The whole procedure was an incredible success. As most researchers know, purposive samples are often drawn because the researcher cannot afford to sample the entire nation.
Corporations and political parties can do so, but individual researchers do not have that kind of money. Thus, numbers are drawn from an available sample (purposive sampling) and are not random. Non-random samples are used for both experimental and control groups, matched on demographics, and a goodness-of-fit test is used to ascertain whether there is a difference at the .05 level of confidence. Another strategy uses a large purposive sample and a cross-sectional design, analyzing the within-sample difference between two demographic or psychographic groups. Both assume “as if” there were a large randomized national population. A third strategy is to draw a random sample from a school, city, or target area and treat it “as if” it were a large randomized national sample. All the examples listed above are flawed, but very useful.
Glass takes this a step further by aggregating ALL studies and using significance testing for differences or the lack thereof. In other words, he quantifies literature reviews. A researcher with little money can contact the reference librarian, gather over time a number of studies on a particular topic, quantify them, run a significance test, and publish the findings. Twenty-five years on, Glass notes how much the strategy has been used.
Further incarnations by others have used statistical manipulations to further standardize the data, and some have stratified it by using only the best studies, those with findings transparent enough to be reworked. (Ibid.) Thus, where original studies had double-digit samples, meta-analysis could provide thousands of individuals. Further, various controls, different stimuli, and various outcome measures were leveled into a single set of numbers to be analyzed by a means test. Last, studies that had nominal or ordinal qualities were treated as interval or ratio data, and hard-number theory was assumed. Meta-analysis gave individual researchers with little or no grant money a chance to compete with large research institutions.
Glass defended his method with exuberance, but did admit that meta-analysis is not as robust as a large national random sample. He indicated, “Moreover, the typical meta-analysis virtually never meets the condition of probabilistic sampling of a population.” (Ibid.) To make this clearer: in a national presidential election, meta-analysis would take all the candidates’ primary wins and losses, aggregate and standardize them, and predict the winner. The two major political parties, on the other hand, would have a large random sample that they would keep interviewing, continuously sampling up to Election Day.
In other words, meta-analysis is now a legitimate tool in research analysis, but it is neither superior nor equivalent to a national random sample.
There are numerous criticisms of meta-analysis that deal with the lack of randomness, the leveling of research procedures, and related issues. This is where we would like to introduce a new research strategy that may be applicable to the sciences, the soft sciences, and the arts. Our position is that randomized samples take precedence over meta-analysis, and if a researcher wants to use meta-analysis, we support that alternative. However, if the academician is uneasy with meta-analysis, we suggest a less robust but more defensible method. We call it meta-cognitive analysis. It is another strategy that quantifies literature reviews.
Methodology:
Meta-cognitive analysis recognizes that in the literature review on a particular topic, (1) numerous samples of varying randomness will be used, (2) various research designs will be maintained, (3) different statistical tests will be used, and (4) outcomes will be reported differently. The results, however, are cognitively assessed, as in content analysis.
In our procedure, we first look to see if there is any particular bias or prejudice; if so, we stratify those studies out and leave them aside. Second, if a study is methodologically flawed but somehow gets published, we do not include it. Third, some studies find no difference and are published in less prestigious journals; we most surely want to report those findings.
Thus, step 1 is to use only what are, to the best of our knowledge, legitimate, defensible studies. Step 2, we look cognitively at the outcomes rather than, as in meta-analysis, at the numbers. If a study finds a difference, we place it in one cell (the upper left-hand) of a 2 x 2 table. Step 3, if no difference is discovered, the study is placed in the upper right-hand cell. In step 4, all the studies from the literature review are added and divided by two.
As an example, if there are 40 studies, the bottom left-hand cell will have 20 and the bottom right will have 20. The bottom two cells represent chance (based on simple probability, not sequential probability).
Let’s take a placebo study: an antidepressant is given to one group, and a placebo is given to a like group with similar demographics and psychographics. The upper two cells indicate that when antidepressants are used, 30 studies find that the medication works “better” than the placebo. In the upper right-hand cell, 10 studies indicate that there were no differences between the antidepressant and the placebo.
The bottom two cells contain half of the total: thus, 20 go in the bottom left-hand cell and 20 in the bottom right-hand cell, as laid out below. Do not use percentages or relative numbers. If any cell has fewer than 5, we will use Fisher’s correction, as we are going to use the chi-square test of significance.
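Filled in with the numbers from the example, the table reads:

                          Difference    No difference
    Observed studies          30              10
    Chance (half of 40)       20              20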
Chi-square is essentially a nominal-level test. Thus, the nuance and discretion afforded by the more robust, hard-number analysis used in meta-analysis are lost. On the other hand, the leveling and homogenizing of data that some researchers find suspect in meta-analysis is not a salient issue in our method.
In our example, comparing the efficacy of a particular antidepressant, we calculate using the chi-square formula found in any elementary statistics book:
X² = Σ (observed − expected)² / expected
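As a minimal sketch of the calculation, assuming SciPy is available (scipy.stats.chisquare runs exactly this goodness-of-fit test of the observed cells against the chance cells):

    from scipy.stats import chisquare

    observed = [30, 10]  # studies finding a difference vs. no difference
    expected = [20, 20]  # chance: half of the 40 studies in each cell
    stat, p = chisquare(observed, f_exp=expected)
    # By hand: (30 - 20)**2 / 20 + (10 - 20)**2 / 20 = 10.0
    print(f"chi-square = {stat:.1f}, p = {p:.4f}")  # 10.0, p is about 0.0016

With one degree of freedom, the .05 critical value is 3.84, so in this example the 30/10 split would be judged significantly different from chance.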
In this instance, the antidepressant is “better” (as measured by such scales as the Ham-D) than the placebo. How much “better,” and for how many people? We do not claim to know. That is the genius of this strategy. It is a quantifiable process with strongly qualitative aspects. It is a very humbling procedure and can complement a qualitative interpretation of a literature review. Further, we are not opposed to using strictly soft numbers and simply reporting that 30 studies found a difference in the direction of the antidepressant and 10 found no difference.
The Arts and Humanities
Let’s now move to the arts and humanities, using the same 40 cases indicated above.
Let’s assume that 30 scholars see the beginning of the Civil War (on balance) as an economic struggle between the agrarian South and the industrial North. On the other hand, 10 scholars see the Civil War as a struggle, on balance, over the issue of slavery. We then conduct the identical test: 30 go in the upper left-hand cell as an economic struggle and 10 in the upper right as a slavery issue. The bottom two cells both have 20 each. We then calculate chi-square. Historians will be the first to note that the Civil War was about something else, or about a mix of issues. We agree. That is why a qualitative analysis or literature review must come first. Further, chi-square can accommodate a 3 x 2 table for other or mixed results, as sketched below. However, unlike meta-analysis, the nuances of history are described prior to the significance testing. And it is done in a qualitative way, through the use of words rather than numbers.
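As a hedged sketch of that 3 x 2 extension, assume a hypothetical three-way split of the same 40 studies (economic, slavery, mixed/other) tested against an equal-chance expectation; the 25/10/5 split is ours, for illustration only:

    from scipy.stats import chisquare

    observed = [25, 10, 5]              # hypothetical: economic, slavery, mixed/other
    expected = [sum(observed) / 3] * 3  # equal chance across the three categories
    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p:.4f}")  # about 16.25 on 2 degrees of freedom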
For the arts, a particular piece of poetry, art, or literature is first reviewed in terms of the shadings and nuances noted by various experts or jury referees. Their findings are described in qualitative ways. All the virtues of the arts are on display. The panel judges the interplay of idiosyncrasies that makes one piece of art qualitatively different and perhaps superior. And not all panel members are equal. Chi-square can take that into account, but cannot do so without doubling the weight of a particular panel member. This weighting is very subjective, but permissible.
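One way to picture the weighting, with hypothetical verdicts and a doubled weight for the senior members (the scheme is ours, not a prescribed one):

    # Hypothetical panel: each member's verdict, with senior members counted twice.
    panel = [("favorable", 2), ("favorable", 1), ("unfavorable", 2), ("favorable", 1)]

    favorable = sum(w for verdict, w in panel if verdict == "favorable")
    unfavorable = sum(w for verdict, w in panel if verdict == "unfavorable")
    # The weighted counts, not the raw head counts, go into the upper cells.
    print(favorable, unfavorable)  # 4, 2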
Thus, a panel reviews a new poem: 30 members find it (on balance) a great work of art; the other 10 view it unfavorably. The researcher or researchers can then combine that qualitative judgment with a quantitative analysis. This procedure can also be used for popular culture.
The Physical Sciences
The hard physical sciences may need this the least, but it is still usable. When reviewing the literature on a particular topic, a physical science researcher without the benefit of a lab, and the considerable money needed to run one, may find meta-cognitive analysis useful, aggregating the literature review in terms of difference versus no difference. The researcher may thereby produce a publishable article and a new insight into physical phenomena.
Summary and Conclusion
The authors have reviewed three previous strategies for assessing the viability of a finding in the natural world. The first is the literature review, the second is meta-analysis, and the third is to draw a random national sample and test a hypothesis. We suggest a fourth strategy, which we call meta-cognitive analysis. It may be equivalent to the literature review and meta-analysis, but it is inferior to random sampling and hypothesis testing. Our strategy is to quantify literature reviews in a more humble, but more defensible, way. We collapse literature reviews into difference versus no difference, or favorable versus other-than-favorable responses. We then test this relative to chance with a chi-square test and assume “as if” we had a random national sample.
Meta-cognitive analysis may apply to the arts and humanities, the social sciences, and the hard physical sciences. The strategy levels the playing field for those without the large grants and research teams needed to gather original data and test hypotheses. We believe our strategy is less robust than original sampling, but it may be equivalent to literature reviews and meta-analysis in terms of defining a problem, and it is superior to meta-analysis in that we do not level strategies, numbers, classifications, and the like.
References Cited:
Glass, Gene V. “Meta-analysis at 25.” Glass.ed.asu.edu/gene/papers/meta25.html
Kirsch, Irving (1998). “Listening to Prozac but Hearing Placebo: A Meta-Analysis of Antidepressant Medications.” Prevention and Treatment, Vol. 1, Article 2. (2,318 patients were aggregated from 19 studies.) See also the rejoinder: Beutler, Larry (1998). “Prozac and Placebo: There is a Pony in There Somewhere.” Prevention and Treatment, Vol. 1, Article 3.
Raeburn, Paul (2002). “Not Enough Patients? Don’t Do the Study.” Business Week, October 20, pp. 143–144. An excellent commentary on the politicization of the studies and samples used by pharmaceutical companies to get the results the corporations prefer.