Marginal Revolution
Needed in Empirical Social Science: Numbers
by Tyler Cowen, June 22, 2024 at 4:29 am
By Aaron S. Edlin and Michael Love:
Knowing the magnitude and standard error of an empirical estimate is much more important than simply knowing the estimate’s sign and whether it is statistically significant. Yet, we find that even in top journals, when empirical social scientists choose their headline results – the results they put in abstracts – the vast majority ignore this teaching and report neither the magnitude nor the precision of their findings. They provide no numerical headline results for 63% ± 3% of empirical economics papers and for a whopping 92% ± 1% of empirical political science or sociology papers between 1999 and 2019. Moreover, they essentially never report precision (0.1% ± 0.1%) in headline results. Many social scientists appear wedded to a null hypothesis testing culture instead of an estimation culture. There is another way: medical researchers routinely report numerical magnitudes (98% ± 1%) and precision (83% ± 2%) in headline results. Trends suggest that economists, but not political scientists or sociologists, are warming to numerical reporting: the share of empirical economics articles with numerical headline results doubled since 1999, and economics articles with numerical headline results get more citations (+19% ± 11%).
Via somebody on Twitter?
Comments
rayward
2024-06-22 05:31:15
| 33 | 8 |
To put this in context, close only counts in horseshoes, hand grenades, and the social sciences.
mkt42
2024-06-22 14:21:00
| 2 | 1 |
It also counts in slow dancing!
Tyler Cohen
2024-06-22 16:39:18
| 0 | 0 |
Ah yes, but what about the ubiquitous use of “of sound mind…” in way too many notarized documents without a single cite or reference. A scandal, but one with value to those who thrive on ambiguity.
Engineer
2024-06-22 07:55:09
| 30 | 7 |
One might form the impression that a very large fraction of social science papers are just the modern version of reading chicken entrails.
There should be a market opportunity here, if one could reliably apply a quality control filter, but I suspect that would break a lot of entrail readers’ rice bowls, and would be strongly opposed.
Pillow Lasers
2024-06-22 12:15:41
| 8 | 2 |
Well, maybe you’re right, but this analysis doesn’t support you either way. It’s just about what specific things are reported in the abstract. I really don’t know why this is an issue, other than that abstracts are free to read without journal access; but a 150-word abstract is never really going to give anyone enough information to evaluate the merit of the paper, so who cares if they report confidence intervals, etc., especially when you have no idea how the variables were measured? The more I think about it, the more I dislike this study. It fits with Tyler’s link the other day about so much pessimism in the media. Well, here’s some unfounded pessimism about social science research, about something scholars shouldn’t really care about, because they should read the actual article to evaluate it.
wd40
2024-06-22 11:04:49
| 19 | 1 |
Journals put severe limits on the number of words in an abstract (often fewer than 150). The abstract is there to entice the reader to read the rest of the article. No one should rely on the abstract for the results, as they depend on methodology and are typically supported by more than one result. Even the carefully specified result of Edlin and Love’s research reported in their abstract is not sufficient.
Pillow Lasers
2024-06-22 12:24:25
| 8 | 0 |
Yes, this is such a pointless analysis; it is the very example of the overabundance of pessimism in the media that Tyler linked to the other day.
Physician, heal thyself!
dosta7
2024-06-22 16:34:44
| 4 | 0 |
Yup: Tyler is a major promoter of ‘deceptive’ social science research abstracts, right here in sacred academic social media.
Rolle
2024-06-22 21:20:51
| 0 | 0 |
You are absolutely right.
I don’t think it’s ultimately intended as “analysis” but as rhetoric weaved in with some simple quantitative illustration.
Crust
2024-06-22 07:58:46
| 13 | 3 |
Yes! This is a huge bugaboo of mine. Anytime I look at an abstract in the hard sciences, they give the effect size with the confidence interval in parentheses, which doesn’t take a lot of characters: something like 0.36 g/L (CI 0.24-0.48). It’s completely expected. But I basically never see that in economics (at least not the confidence interval). Per the above, it’s apparently even worse in other fields (political science, sociology).
I feel like this is on journal editors. They should insist that any result mentioned in the abstract must give the point estimate and confidence interval. No more of this just stating that it was or was not statistically significant. Not insisting on this should be read as a sign of a low quality field/journal.
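Crust’s point that the format is cheap can be checked directly. A minimal sketch (the `headline` helper is hypothetical, not any journal’s actual style rule):

```python
def headline(est, lo, hi, unit=""):
    """Format a point estimate with its confidence interval,
    in the compact style Crust describes: '0.36 g/L (CI 0.24-0.48)'."""
    return f"{est:g} {unit}(CI {lo:g}-{hi:g})".strip()

# The CI version can be shorter than the phrase it replaces:
ci_text = headline(0.36, 0.24, 0.48, "g/L ")
assert ci_text == "0.36 g/L (CI 0.24-0.48)"
assert len(ci_text) < len("statistically significant")
```

So a full point estimate plus interval fits in fewer characters than the bare phrase “statistically significant”, while carrying strictly more information.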
Pillow Lasers
2024-06-22 12:11:29
| 4 | 2 |
Why? These are small editorial decisions about how to present abstracts. Sure, I like it when journals have multi-part abstracts, but also, read the paper. Abstracts can’t capture everything, and that’s why you also have commenters here ripping on the jargon-filled abstracts. You can’t please them all.
Crust
2024-06-22 12:28:24
| 4 | 0 |
I think most people read at least 10x as many abstracts as full papers (certainly I do).
Pillow Lasers
2024-06-22 14:08:52
| 2 | 2 |
Sure, but why does this mean that confidence intervals should be reported? What if I conducted a variety of statistical analyses to address the question motivating the paper and don’t have room to indicate the confidence intervals on the independent variables of interest from all those analyses? The abstract is just supposed to introduce what the paper is about, why you should care about it, and what the general results are. Regression tables and figures provide the specific estimates, etc. This is still such a silly “critique” of social science work.
Crust
2024-06-22 16:36:12
| 3 | 0 |
Part of my point was that stating confidence intervals needn’t really take up space. It may actually take fewer characters than conveying the single bit of information of statistically significant or not.
Do you think researchers in other fields (eg physics, medical science) are making a mistake by routinely giving confidence intervals?
steve
2024-06-22 17:36:03
| 4 | 0 |
We all only have so much time to read. If a paper claims to have surprising results but I read the abstract and find that the confidence intervals suggest the results are meaningless I dont have to read the paper.
Steve
OldCurmudgeon
2024-06-22 11:55:47
| 1 | 0 |
> it’s apparently even worse in other fields (political science, sociology).
Fields where the audience won’t be good at math??
Aaronn
2024-06-22 09:48:12
| 3 | 6 |
Agree on a need to report magnitude. Disagree on confidence intervals. Do you actually know what a confidence interval means under frequentist statistics? The definition is not very intuitive, is widely misunderstood, and is hard to explain to people. I think it is good that papers do not report confidence intervals.
Crust
2024-06-22 12:19:24
| 2 | 3 |
Your view is abstracts should just give the point estimate and state whether it was significant or not, or in other words whether zero lies in the confidence interval or not?
If there is a confidence interval given I get some sense of the magnitude of the effect found. If not, all I know is the sign (and even there, the confidence interval helps by giving me a sense of how confident I should be in the sign).
Pretty much everything in statistics is alas widely misunderstood. One thing that is very widely misunderstood: when given a statistically significant point estimate (and no further information) many people think they can reason using that as a reasonably reliable estimate of the magnitude.
mkt42
2024-06-22 14:38:16
| 5 | 1 |
“If there is a confidence interval given I get some sense of the magnitude of the effect found.”
Nope, confidence intervals do not inform us about the magnitude of the effect.
They inform us about the likely accuracy of the point estimate.
The point estimate gives us an estimate of the magnitude; the confidence interval tells us how much, yes, confidence we should have about that estimate.
So Aaronn has a valid point about most people not understanding what a confidence interval really measures. Even students who’ve taken intro stats often are unable to explain them.
OTOH, even though most people misunderstand what a confidence interval really is, their intuitive interpretation of a broad vs narrow interval is not that far off the practical realities of deciding how much credence to give to a research result.
So confidence intervals have some utility even though they are misunderstood.
Crust
2024-06-22 16:26:27
| 0 | 1 |
If eg you have a point estimate of 0.1 and a confidence interval of -0.9 to 1.1, then you have basically no idea of the true magnitude. If on the other hand, the confidence interval is 0.09 to 0.101 you do. It’s important to distinguish the point estimate from the full distribution.
Aaronn
2024-06-23 09:50:15
| 0 | 1 |
A confidence interval of -0.9 to 1.1 means that the p-value is very high (or the t-statistic is tiny). So this point estimate is insignificant. Thus reporting the point estimate together with some metric of statistical significance contains the same information as reporting the confidence interval.
The main problem with confidence interval is that few people can define it rigorously and understand what its definition means. There is frequentist statistics and Bayesian statistics. Confidence interval is a concept in frequentist statistics. When asked about the definition and meaning of confidence interval, 95% of researchers give a definition of a credible interval from Bayesian statistics. Mixing frequentist and Bayesian statistics is a big no-no. So whenever we are talking about confidence interval, most likely most participants in a discussion do not really understand what we are talking about due to their lack of clear understanding of differences between frequentist and Bayesian statistics. So any such discussion has a risk of going astray due to a need to educate people on the fundamental statistical concepts.
So in practice it is easier to focus on the point estimate and p-value (or t-statistic) rather than mention the confidence interval and then risk having the whole conversation derailed for the reasons above.
Rolle
2024-06-22 11:16:49
| 7 | 1 |
Don’t know if the lack of estimates or “numbers” is a problem so much as it is a symptom of a more fundamental critique of the social sciences. I sincerely doubt Edlin and Love and approving commenters here would be convinced by social science studies even if they had numbers in the abstracts.
I’d also say there are a lot of applied studies that do include estimates, e.g., in policy papers and cost-benefit analyses. Do you really believe the cost-benefit estimates for the next 100 years of a bridge being built?
I myself am very skeptical but not sure estimates would do much for me.
Pillow Lasers
2024-06-22 12:19:45
| 2 | 0 |
This study has nothing to say about your concerns, because it’s about what’s reported in the 150-to-200-word abstract, not what actually gets reported in the paper, which is far more important and what good scholars are reading. I often skip the abstract when I review papers because it is irrelevant to the scholarly arguments.
Rolle
2024-06-22 21:14:51
| 0 | 0 |
Fair enough. Still, it seems the authors have some suggestion (hope) that more focus on estimates, as reflected in abstracts (the elevator pitch), would improve research overall.
Todd K
2024-06-22 09:49:52
| 3 | 0 |
“There is another way: medical researchers routinely report numerical magnitudes (98% ± 1%) and precision (83% ± 2%) in headline results”
It doesn’t seem to matter as medical articles are for the most part notoriously bad.
S
2024-06-22 07:39:43
| 4 | 2 |
Comments keep getting deleted on here. And on YouTube
Just an East Coast witch passing along info, doing nothing for us is sacrilegious
Joe Strummer’s Ghost
2024-06-22 07:52:43
| 0 | 0 |
You mean the guys who read a lot of books and think they hold the key? No!
XYCoir, yes recognizing limits
2024-06-22 11:39:26
| 2 | 0 |
Not just economics: countless thousands of medical research articles assume the sanctity of accepted theories and textbook acceptance. Along comes a researcher asking, “Why do some people with abnormal Y readings not get X illness, and some patients with normal Y readings get X illness?” Who gets to decide what percent correlation, or non-correlation, has meaning, or rises to causation?
Zhang WZ. Biomolecules. 2021 Feb 14;11(2):280.
mkt42
2024-06-22 14:55:39
| 2 | 0 |
I was initially confused by their repeated use of the phrase “headline results”. I can’t recall seeing a headline or title with a confidence interval included.
But what they really mean is what’s reported in the abstract: “headline results – the results they put in abstracts”.
I don’t go quite as far as Pillow Lasers, but I agree that confidence intervals in an abstract are not a do-or-die issue. They provide some information, they’re nice to see — but they are usually much less important than the other pieces of information that the abstract has to convey: what’s the research question, what technique did the researchers use to answer it, and what is their conclusion (I mean their conclusion in intuitive and practical terms, what are our takeaways, not what are the exact numbers or the exact confidence interval). Ideally the abstract will also tell us why the research question is interesting and what the data source is.
That’s a lot to cram into 150 words. And as we often observe here in MR, too many abstracts are poorly written, which compounds the issue: a challenge to write, with too many researchers lacking the writing skills to meet that challenge.
Confidence intervals, or the lack thereof, are a part of this discussion. But only a part. Better-written abstracts are a bigger need than abstracts with confidence intervals.
dearieme
2024-06-22 04:37:25
| 10 | 9 |
A million years ago (+/- one dinosaur) when I taught stats I made a point of promoting estimation over null hypothesis testing.
Mind you this was to Engineering Science students not Social Science. Clever laddies and lassies the engineers; they asked good questions.
dosta7
2024-06-22 05:53:46
| 15 | 11 |
The point here is that the dominant social-science culture routinely promotes deceptive point estimates in their work, in attempts to add phony credibility to their output.
Thus, JUNK science is the norm in published social science research papers.
Blackthorne
2024-06-22 11:00:43
| 2 | 1 |
The underlying issue highlighted here is that a large fraction of social scientists, and an even larger fraction of “science journalists”, only read the headlines/abstracts of these papers.
jdb
2024-06-22 14:23:06
| 1 | 0 |
The strangest thing about the abstract is that they don’t actually state how many articles they reviewed. But it’s on the first page. Academics could learn a lot from newspapers. Don’t waste the title of your article; say something declarative about what you found. Plus the visuals: what’s the point of doing a bar graph with error bars on top? The only part of it that matters is the top. Make it a dot + bars; it’s easier to read!
Salim
2024-06-24 15:04:06
| 0 | 0 |
Happy to have already done this right (although without noting the precision)
https://www.tandfonline.com/doi/full/10.1080/10511482.2023.2186750
Chezzy
2024-06-22 04:46:25
| 0 | 1 |
The blimey assholes from the island and even the continent might be coming over, such is the strength of this crew. From Ireland to Ukraine, the kids who used to not stand a chance are about to put their fkn foot down. This is where heroes are made and the oligarchs get to look at what they’ve spawned, that’s coming to eat them, an allegory for the AI losers…
Rich Berger
2024-06-22 10:48:12
| 5 | 7 |
I don’t think more numbers are needed, but rather a recognition of the limits of (particularly) economics. I think very few of these studies are useful, not because economists are stupid, but because there are too many factors at work to nail down “truth”. Economic wisdom is in the nature of general concepts: opportunity cost, supply and demand, marginal value, etc.
mkt42
2024-06-22 15:04:16
| 1 | 0 |
There’s actually a good amount of wisdom in this comment. And it’s why the inclusion or exclusion of confidence intervals, though somewhat useful, is a secondary issue.
The true critique of an article, at least in economics, is not going to involve the confidence interval, it’s going to be about the assumptions made, the modeling technique used, the quality of the data, etc. etc.
“I think very few of these studies are useful”
This is also a true statement. Where we might disagree is what we should do about it. Have fewer economists and fewer research articles? Or recognize Sturgeon’s Law, 90% of everything is junk.
