Ever since I got involved -- well, no, make that "got myself involved" -- in a catfight over partisan labeling and media bias a couple of years ago, I've been receiving emails with pointers to new quantitative studies that purport to show that the media really do have a liberal bias, just as conservatives have been claiming all along.
The latest of these -- and certainly the most ambitious and analytically complicated -- comes from Tim Groseclose of the UCLA Department of Political Science and Stanford Business School and Jeff Milyo of the Harris Public Policy Institute at the University of Chicago. Groseclose and Milyo's study has been approvingly cited by Bruce Bartlett in National Review, by Linda Seebach in the Rocky Mountain News, and by Harvard economist Robert J. Barro in Business Week, not to mention conservative bloggers like Instapundit, Andrew Sullivan, and Matt Drudge, among a number of others, who trumpet its "objectivity." (There's somewhat more critical discussion of the study at deadparrots.)
But sand sifted statistically is still sand. If you take the trouble to read the study carefully, it turns out to be based on unsupported, ideology-driven premises and to raise what it would be most polite to describe as severe issues of data quality, however earnestly Groseclose and Milyo crunched their numbers. As we linguists have had ample opportunity to learn, sigmas ain't no substitute for scholarship.
Groseclose and Milyo describe their method as providing an "objective
measure of the slant of the news." They proceeded in several steps.
First, they took a list of "200 of the most prominent think tanks" and
looked in the Congressional Record for
the period between 1993 and 2002 to see how often a member of
Congress cited each of them for a fact or opinion. Then they assigned a
rating to each group that corresponded to the average ADA rating of the
members who cited it. On this basis, for example, the conservative
Family Research Council was assigned an ADA rating of 6, and the
liberal Economic Policy Institute received an ADA rating of 72.
G & M divided the groups in their survey into liberal and
conservative sets, according to whether their derived ADA ratings fell
north or south of the House and Senate average ADA rating of 42.2. They
then looked to see how often groups from each set were cited by news
shows on various media sources, effectively giving the media source a
point on one or the other side for each sentence of each citation of a
group. On that basis, they calculated a derived ADA rating for the
media source.
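To make the mechanics concrete, here is a minimal sketch, in Python, of the scoring scheme as just described. The four group scores and the 42.2 cutoff are Groseclose and Milyo's; the citation counts, the function names, and the simple percentage at the end are my own illustrative inventions, not their actual estimation procedure, which involves considerably more statistical machinery.

```python
# A minimal sketch of the scoring scheme described above -- not
# Groseclose and Milyo's actual code or their full statistical model.
# The four ADA scores and the 42.2 cutoff are from the study; the
# data layout and the percentage formula are invented for clarity.

group_ada = {
    "Family Research Council": 6.0,
    "Heritage Foundation": 6.17,
    "Rand Corporation": 53.6,
    "Economic Policy Institute": 72.0,
}

CUTOFF = 42.2  # House/Senate average ADA rating, per G & M

def side(group):
    """Binary split: every group is either 'liberal' or 'conservative'."""
    return "liberal" if group_ada[group] > CUTOFF else "conservative"

def derived_rating(citations):
    """Score a media outlet from (group, sentences-quoted) pairs.

    Each sentence counts one point for the cited group's side,
    regardless of how moderate or extreme the group's score is.
    """
    lib = sum(n for g, n in citations if side(g) == "liberal")
    con = sum(n for g, n in citations if side(g) == "conservative")
    return 100 * lib / (lib + con)  # 100 = all citations on the liberal side

# A hypothetical outlet: 12 sentences citing Rand, 8 citing EPI,
# 5 citing Heritage, 3 citing the Family Research Council.
print(derived_rating([("Rand Corporation", 12),
                      ("Economic Policy Institute", 8),
                      ("Heritage Foundation", 5),
                      ("Family Research Council", 3)]))  # -> 71.4
```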
Their conclusion: the media have a "strong liberal bias."
What's wrong with this picture? Just two things: its conception and its
execution. Let's begin with the assumption that underlies Groseclose
and Milyo's assignment of ratings to the various groups they looked at:
if a group is cited by a liberal legislator, it's liberal; if it's
cited by a conservative legislator, it's conservative.
On February 24, 2004, for example, in a debate on the medical liability
bill, the liberal Senator Christopher Dodd of Connecticut cited "a
study conducted by the Rand Corporation and published in the New England Journal of Medicine
last year [which concluded] that individuals received the recommended
treatment for their condition in only 55 percent of the cases... " By
Groseclose and Milyo's logic, Dodd's citation counts not as evidence
that the study's findings are sound, but as evidence that the Rand
Corporation is liberal.
That assumption is of a piece with the neoconservative critique of
"objective research," as adopted wholeheartedly by the Bush
administration. The tendency is laid out at length in Franklin Foer's
piece on the Administration's approach to science and research that
appeared in the July 5 New Republic
(available here).
As Foer observes, both the administration and neoconservatives have
systematically subordinated science to policy. To take just some of the
examples that Foer cites, the administration has downgraded the Office
of Science and Technology Policy and the Council of Economic Advisors,
disregarded the CIA's assessment of Saddam's WMDs and the
recommendation of the FDA's scientific advisory panel on the
morning-after pill, suppressed passages of the EPA's report on global
warming, and blocked the dissemination of a report analyzing the
efficacy of congressional legislation limiting the release of sulfur
dioxide, nitrogen oxides, and mercury.
This is a convenient position for both the administration and many
conservatives. If the facts don't fit the story, ignore them -- after
all, scientists have agendas, too. That may explain a tendency for
Republicans to cite objective scientific studies less often than
liberals do (a confound ignored by G & M, who treat a
citation of a Rand-sponsored research paper from the New
England Journal of Medicine as qualitatively equivalent to a
citation of a position paper from the Heritage Foundation). It also
gives conservative scholars ideological license to adjust their methods
to produce the desired result -- everybody slants their research,
whether they admit it or not.
There are ideological implications, too, in Groseclose and Milyo's
decision to split the think tanks into two groups, liberal and
conservative. One effect of this was to polarize the data. No group --
and hence, no study -- could be counted as centrist or apolitical. In
the event, this meant that a media citation of the Rand Corporation or
the AARP would count as evidence for a liberal bias in the same way
that a citation of the Heritage Foundation or the American Enterprise
Institute would count as evidence for a conservative bias. (If you're
puzzled as to why Groseclose and Milyo count the AARP as a "think tank"
in the first place, see below.)
In fact, even though the ADA rating that G & M's method
assigned to the Rand Corporation (53.6) was much closer to the
mean for all groups than that of the Heritage Foundation (6.17), G
& M ignored that difference in computing the effect of citations of
one or the other group on media bias, compounding the polarization
effect. That is, a media citation of a moderately left-of-center
group (according to G & M's criteria) balanced a citation of a
strongly right-wing group.
(It could be argued, of course, that the same would hold for a moderate
right-wing group and an extreme left-wing group, but in fact the
"liberal" groups in the study were far more moderate than the
"conservative" groups, owing to where G & M drew the line. The average ADA score for the conservative
groups in G & M's top 20 was 16.3, whereas the average score for
the liberal groups was 65.2 -- slightly less than the ADA rating they
calculated for Joe Lieberman. In effect, they achieved their result by
classifying a number of moderate groups as liberal.)
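A toy computation shows what the binary split throws away. The two scores below are the ones just cited; the ten-and-ten citation counts, and the magnitude-aware alternative, are hypothetical, offered only for contrast:

```python
# Toy numbers for the polarization effect. Rand's 53.6 and Heritage's
# 6.17 are from the study; the citation counts and the weighted
# alternative are hypothetical.

scores = {"Rand Corporation": 53.6, "Heritage Foundation": 6.17}
CUTOFF = 42.2
citations = [("Rand Corporation", 10), ("Heritage Foundation", 10)]

# Binary method, per G & M: ten Rand citations exactly offset ten
# Heritage citations, so the outlet scores dead center.
lib = sum(n for g, n in citations if scores[g] > CUTOFF)
con = sum(n for g, n in citations if scores[g] <= CUTOFF)
print("binary:", 100 * lib / (lib + con))  # 50.0

# Weight each citation by the cited group's own score instead, and the
# same outlet lands well right of the 42.2 midpoint, because Heritage
# (6.17) is far more extreme than Rand (53.6) is moderate.
total = sum(n for _, n in citations)
print("weighted:", sum(scores[g] * n for g, n in citations) / total)  # 29.885
```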
This effect was compounded still more when G & M took the dividing
line between left-wing and right-wing think tanks to be the midpoint of
the House and Senate average ADA ratings, making the voting record of
the Congress over recent years the criterion for defining the political
center. At another point, G & M defend their decision to use the
median ADA ranking of all House members to determine the dividing line
between left- and right-wing media outlets. But the Republican majority
in the House is proportionally much larger than the Republicans' edge
in the two parties' popular vote in Congressional elections, and
the aggregate voting records of House members are hardly representative
of voters' views on the issues as revealed in polls. In a Times/CBS poll
last year, for example, respondents felt by 46 percent to 36 percent
that Democrats would do a better job than Republicans at making the tax
system fair, and just 11 percent believed the President's tax cuts
were very likely to create new jobs. By G & M's criterion,
however, the "centrist" position would be one that supported the
administration's tax proposals.
In effect, G & M have located the political center somewhere in the
middle of the Republican Party, by which standard the majority of
American voters would count as left-of-center.
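Here too a toy example shows how much work the choice of dividing line does. Rand's and Heritage's scores are from the study; the starred groups are invented stand-ins for the moderate groups that G & M's cutoff swept onto the liberal side:

```python
# Where you draw the "center" decides who counts as liberal. Rand's
# 53.6 and Heritage's 6.17 are from the study; the starred groups and
# their scores are hypothetical moderates.

groups = {
    "Rand Corporation": 53.6,
    "Moderate group A*": 44.0,
    "Moderate group B*": 48.0,
    "Heritage Foundation": 6.17,
}

# G & M's Congressional-average cutoff vs. the scale midpoint.
for cutoff in (42.2, 50.0):
    liberal = sorted(g for g, s in groups.items() if s > cutoff)
    print(f"cutoff {cutoff}: 'liberal' groups = {liberal}")

# With the 42.2 cutoff both hypothetical moderates count as liberal;
# move the line to 50 and they flip sides. That is the sense in which
# G & M located the political center inside the Republican Party.
```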
Even if the Groseclose and Milyo study had been implemented carefully,
then, it wouldn't justify the claims that its authors make on its
behalf. As it happens, though, the execution of the study was flawed in
important respects, which made its conclusions even less useful.
Start with the list of groups from which G & M drew their initial
sample. They describe this simply as a list of "the most prominent
think tanks," but that isn't quite accurate. In fact their list was
drawn from the 200 links included on the site wheretodoresearch.com
(which actually describes it merely as a list of "major think
tanks and policy groups"). The list was compiled by one Saguee Saraf, a
free-lance researcher with a master's degree in history who lists among
his achievements that he was named Man of the Year by the Cheshire
(Connecticut) Republican Town Committee.
Saraf gives no indication of how his list was compiled, or what
criteria were used -- nor, what's more to the point, do Groseclose and
Milyo say why they consider the list authoritative. In fact its
contents are a jumble of think tanks, lobbying groups, trade
associations, and advocacy groups, assembled in a catch-as-catch-can manner.
It lists the Oklahoma Council of Public Affairs but not California
Tomorrow; the National Right to Life Committee but not Planned
Parenthood; the National Federation of Independent Businesses but not
the National Association of Manufacturers; the NAACP but not the Urban
League, the American Jewish Congress, or the Mexican-American Legal
Defense and Education Fund; the Cato Institute but not the Reason
Foundation; the Sierra Club and the Audubon Society but not the League
of Conservation Voters or the Natural Resources Defense Council. On the
grounds of sample choice alone, in short, the Groseclose and Milyo
study would be disqualified as serious research on "the most prominent
think tanks."
Then, too, Groseclose and Milyo's survey of the citations of groups in
the Congressional Record
shows some results that would most kindly be described as puzzling. In
their list of the "twenty think tanks most cited by members of
Congress," for example, they list in 13th place the Alexis de
Tocqueville Institution (which they refer to as the "Alexis de
Tocqueville Institute"), which comes in ahead of Common Cause (14th),
the Family Research Council (16th), and the Economic Policy Institute
(19th), not to mention a number of much better-known groups that appear
on Saraf's list but not in G & M's top 20, like the NRA and the
Hoover Institution.
That result is pretty curious, since the Tocqueville Institution hardly
counts among the heavy hitters in the think-tank world. In fact when
you look in the Congressional Record,
you turn up just 16 mentions of the group since 1993, including a
few pieces from the Washington Times
written by people associated with it that were inserted into the Congressional Record by Republican
legislators, and a number of other mentions that would not have counted
as citations by the criteria that G & M said they used. By
contrast, the Family Research Council, ranked by G & M behind the
Tocqueville Institution in Congressional influence, received 186
mentions in the Congressional Record
over the same period. And among groups on Saraf's list but not listed
in G & M's top 20, the Manhattan Institute received 42 mentions, and
the Hoover Institution received 54.
I have no way of knowing why G & M assigned such prominence to
the Tocqueville Institution. Whatever the reason, though, it leaves you
with the sense that their other results can't be trusted, either. (At
another point, G & M explain that they disregarded the ACLU in
their final analysis because it turned up with an excessively
conservative score, owing to Republicans who cited it for its
opposition to McCain-Feingold. Other researchers might wonder whether
there might not be similar anomalies in the results obtained for other groups,
and might even suspect that this result cast some doubt on their
overall method. G & M seem untroubled by that possibility.)
It's clear, too, that a different choice of sample for the study
would have produced very different results. Among the groups that
didn't appear on Saraf's list and so were not examined by G &
M, for example, the National Association of Manufacturers
received a whopping 617 mentions over the period under consideration --
that is, nearly 40 times as many as the Tocqueville Institution -- the majority
of them, not surprisingly, from Republican legislators. And the
Conference of Catholic Bishops was mentioned 130 times, most often in
connection with the abortion issue. Had those groups and others like
them been included in the study, they would presumably have been
classified as conservative on the basis of the ADA rankings of members
who cited them. (True, those groups aren't "think tanks," but then
Groseclose and Milyo did include groups like the AARP, which is hardly
a think tank either.)
If those groups had been included, the picture of media bias would have
changed considerably, since both groups are widely cited in the media.
For example, the Conference of Catholic Bishops has been mentioned on
CNN over the past five years more than four times as frequently as the
American Enterprise Institute, and the NAM has been mentioned three
times as frequently. By excluding conservative groups that are
frequently mentioned in the media, the study appears to exaggerate the
media's liberal tilt.
I say "appears to" because there is no way to tell from G & M's
data what results they would have come up with if they had chosen a
genuinely balanced sample that was restricted to think tanks whose
prominence was objectively determined, if they had coded the data more
reliably, if they had weighted the media citations appropriately, and
if they had classified the groups according to a more plausible
categorization scheme -- one, for example, in which the AARP was not
treated as the "liberal" counterpart of the Heritage Foundation.
It seems a pity to waste so much effort on a project that is utterly
worthless as an objective study of media bias. But in the current
climate, does anybody care?
[Update 8/4/2004: a response by Groseclose and Milyo can be found
here.
-Mark Liberman]
[Update 12/22/2005: other Language Log posts on this topic:
Science, Politics and Fair Play (8/2/2004),
Marx: Red or Blue? (10/31/2004),
A journalist's perspective on (bias in) media citations (11/13/2004),
Linguistics, Politics, Mathematics (12/22/2005),
Multiplying ideologies considered harmful (12/23/2005)
]
Interesting critique. We've been having a bit of an amateur's discussion of this study on my group blog here:
http://blog.monkeymask.net/archives/2004/07/06/10.31.05/
And here:
http://blog.monkeymask.net/archives/2004/07/05/14.16.49/
One of our posters also got a response from the study's authors regarding some objections raised, which are included in the second link.
Posted by: mallarme at July 6, 2004 06:59 PM