This is the second in a series of posts aimed at improving the rhetoric (and logic) of science journalism. Last time ("Two simple numbers", 7/22/2007), I asked for something positive: stories on "the genetic basis of X" should tell us how frequent the genomic variant is among people with X and among people without X. This time, I've got a related, but negative, request.

No, let's make it a commandment: Thou Shalt Not Report Odds Ratios. In fact, I'd like to suggest that any journalist who reports an odds ratio as if it were a relative risk should be ~~fired~~ sent back to school.

Many of you probably don't know what I'm talking about -- that's why dozens of science journalists disobey this commandment every week. But the basic concepts are simple, and nothing more than simple arithmetic is required to understand them.

Here's a simple, classic example that illustrates the problem. A few years ago, some researchers from Georgetown University published in the New England Journal of Medicine a study that demonstrated systematic race and sex bias in the behavior of America's doctors. Needless to say, this finding was widely reported in the media:

Washington Post: "Physicians said they would refer blacks and women to heart specialists for cardiac catheterization tests only 60 percent as often as they would prescribe the procedure for white male patients."

L.A. Times: "[Doctors] refer blacks and women to heart specialists 60% as often as they would white male patients."

N.Y. Times: "Doctors are only 60% as likely to order cardiac catheterization for women and blacks as for men and whites."

Now let's try a little test of reading comprehension. The study found that the referral rate for white men was 90.6%. What was the referral rate for blacks and women?

If you're like most literate and numerate people, you'll calculate 60% of 90.6%, and come up with .6*.906 = .5436. So, you'll reason, the referral rate for blacks and women was about 54.4%.

But in fact, what the study found was a referral rate for blacks and women of 84.7%.

What's going on?

It's simple -- the study reported an "odds ratio". The journalists, being as ignorant as most people are about odds and odds ratios, reported these numbers as if they were ratios of rates rather than ratios of odds.

Let's go through the numbers. If 90.6% of white males were referred, then 9.4% were not referred, and so a white male's odds of being referred were 90.6/9.4, or about 9.6 to 1. Since 84.7% of blacks and women were referred, 15.3% were not referred, and so for these folks, the odds of referral were 84.7/15.3 ≅ 5.5 to 1. The ratio of odds was thus about 5.5/9.6, or about 0.6 to 1. Convert to a percentage, and you've got "60% as likely" or "60 per cent as often".

The ratio of odds (rounded to the nearest tenth) was truly 0.6 to 1. But when you report this finding by saying that "doctors refer blacks and women to heart specialists 60% as often as they would white male patients", normal readers will take "60% as often" to describe a ratio of rates -- even though in this case the ratio of rates (the "relative risk") was 84.7/90.6, or (in percentage terms) about 93.5%.
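The arithmetic above is simple enough to sketch in a few lines of Python, using the referral rates quoted from the study:

```python
def odds(rate):
    """Convert a probability (a rate between 0 and 1) to odds."""
    return rate / (1 - rate)

rate_white_male = 0.906  # referral rate for white males
rate_other = 0.847       # referral rate for blacks and women (aggregated)

# The odds ratio that the study reported (rounds to "0.6 to 1")
odds_ratio = odds(rate_other) / odds(rate_white_male)

# The ratio of rates ("relative risk") that readers assume is meant
relative_risk = rate_other / rate_white_male

print(round(odds_ratio, 2))     # 0.57
print(round(relative_risk, 3))  # 0.935
```

The gap between 0.57 and 0.935 is exactly the gap between what the journalists wrote and what their readers understood.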

(There was another set of rhetorical problems with the reporting of this particular study, one that was created by the study's authors. In fact the referral rates for white males, black males and white females were apparently pretty much all the same -- the crucial part of the pattern was that the referral rate for black females was much lower, namely around 78.8%. The 90.6% vs. 84.7% figures were created by comparing the white male data against aggregated data for white females and blacks of both sexes. But I digress.)

My discussion of this case is drawn from an article published subsequently in the same journal: Lisa Schwartz et al., "Misunderstandings about the Effects of Race and Sex on Physicians' Referrals for Cardiac Catheterization", NEJM 341:279-283, July 22, 1999. The problem was well understood by statistically well-informed people long before then, and was explicitly discussed in an earlier study in the British Medical Journal: H. T. O. Davies et al., "When can odds ratios mislead?" BMJ 316:989-991 (1998).

OK, so this is a long-standing and well understood problem, which led to some spectacularly botched (and prominently excoriated) presentations of important results back around 1999. Surely all competent science journalists understand this now, and have mended their ways?

Guess again.

Find any piece of reporting that talks about "raising the risk of X by Y%", or any of the many other ways of putting this same concept into English, and the chances are that you've found a violation of this commandment. Let me give two recent examples, among thousands lurking in the past month's news archive.

According to Steve Connor, "Childhood asthma gene identified by scientists", The Independent, 7/5/2007

A gene that significantly increases the risk of asthma in children has been discovered by scientists who described it as the strongest link yet in the search to find a genetic basis for the condition.

Inheriting the gene raises the risk of developing asthma by between 60 and 70 per cent -- enough for researchers to believe that the discovery may eventually open the way to new treatments for the condition. [emphasis added]

The study in question (I believe -- the article doesn't give any specific reference, as usual for the genre of science journalism) is Miriam F. Moffatt et al., "Genetic variants regulating ORMDL3 expression contribute to the risk of childhood asthma", Nature 448, 470-473 (26 July 2007). This is another big genome-wide association study -- roughly 300,000 single-nucleotide polymorphisms were scanned in several populations in the UK and in Germany.

In this case, general information about allele frequencies is not provided (and perhaps was not available). However, this information is given in one crucial case:

In the subset of individuals for whom expression data are available, the T nucleotide allele at rs7216389 (the marker most strongly associated with disease in the combined GWA analysis) has a frequency of 62% amongst asthmatics compared to 52% in non-asthmatics (P = 0.005 in this sample).

Now, how can 62% vs. 52% be interpreted as "[raising] the risk of developing asthma by between 60 and 70 per cent"? I mean, 62 is about 19% greater than 52, not 60-70% greater. If you're guessing "it's that old devil the odds ratio", I'm sure you're right.

If you're good at mental arithmetic, you may be worried that even the odds ratio doesn't quite make it to 1.6 or 1.7 in this case: (.62/.38)/(.52/.48) ≅ 1.51.
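That mental arithmetic, spelled out (the 62% and 52% allele frequencies are the ones quoted from the Nature paper above):

```python
def odds(p):
    """Convert a frequency (between 0 and 1) to odds."""
    return p / (1 - p)

freq_asthmatic = 0.62      # T allele frequency among asthmatics
freq_non_asthmatic = 0.52  # T allele frequency among non-asthmatics

odds_ratio = odds(freq_asthmatic) / odds(freq_non_asthmatic)
ratio_of_rates = freq_asthmatic / freq_non_asthmatic

print(round(odds_ratio, 2))      # 1.51
print(round(ratio_of_rates, 2))  # 1.19
```

So even the odds ratio is only about 1.51, and the ratio of rates is about 1.19 -- neither of which is "between 60 and 70 per cent" greater.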

We can't tell what Steve Connor was really talking about when he wrote that "inheriting the gene raises the risk of developing asthma by 60 to 70 per cent", but one possibility is that he (or his informant) cherry-picked some even better odds ratios, not from the "combined GWA analysis" (where GWA stands for "genome-wide association"), but from one of the data subsets, perhaps this one:

Restricting analyses to cohort members of Caucasian ethnicity, we found that the 398 cases recalling 'asthma ever' at age 42, showed a significant association (odds ratio, 1.21, 95% confidence interval, 1.04-1.40, P = 0.012). Ninety-three individuals were reported to have 'asthma attacks' in the first seven years of life (that is during 1958 to 1965), and these were strongly associated to rs3894194 (odds ratio = 1.68, 95% confidence interval, 1.25-2.26, P = 0.0005). [emphasis added]

With a bit of arithmetic, you can work out for yourself what that 1.68 odds ratio would correspond to in terms of allele frequencies.
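If you'd rather let the computer do the arithmetic: given a baseline allele frequency and an odds ratio, you can invert the odds formula to get the implied frequency in the other group. For illustration I've used the 52% non-asthmatic frequency quoted earlier as the baseline -- an assumption, since the paper reports that figure for rs7216389, not rs3894194:

```python
def rate_from_odds_ratio(baseline_rate, odds_ratio):
    """Given a baseline rate p0 and an odds ratio, return the implied
    rate in the comparison group: scale the odds, then convert back."""
    new_odds = (baseline_rate / (1 - baseline_rate)) * odds_ratio
    return new_odds / (1 + new_odds)

# Illustrative baseline of 52% (borrowed from the rs7216389 figures)
print(round(rate_from_odds_ratio(0.52, 1.68), 3))  # 0.645
```

An odds ratio of 1.68 against a 52% baseline would mean an allele frequency of about 64.5% among cases -- a ratio of rates of about 1.24, nowhere near 1.68.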

Another possibility is that the odds ratio was inferred from the coefficient in a logistic regression, where many factors (in this case the presence or absence of many SNPs) are weighed together in a single statistical model. See below for a bit more discussion; but it remains true that describing an odds ratio inferred by such a model as if it were a risk ratio (i.e. a ratio of rates) is highly misleading.

Now, Steve Connor is not a sports columnist trying his hand at a science piece. (That's a plausible excuse for Denis Campbell's disastrously botched autism/MMR story in the Observer, memorably vivisected by Ben Goldacre in many Bad Science posts and a BMJ article.) Connor is listed as the "Science Editor" of the Independent, and he ought to know better.

And Connor is not the only "Science Editor" for a major publication who has violated this same commandment recently. Mark Henderson, identified as "Science Editor" of the London Times, was the author of "Genetic breakthrough offers MS sufferers hope of new treatment", 7/29/2007:

The first genetic advance in multiple sclerosis research in three decades has opened new approaches to treating the neurological disorder, scientists said yesterday.

Research has identified two genetic variants that each raises a person's risk of developing MS by about 30 per cent, shedding new light on the origins of the autoimmune disease that could ultimately lead to better therapies. [emphasis added]

The scientific publication in question (well, there are several of them, but this one will do) is "Risk Alleles for Multiple Sclerosis Identified by a Genomewide Study", NEJM, July 29 2007. And the "risk-raising" in question is, needless to say, calculated in terms of odds ratios:

A number of allelic variants had a significant association with multiple sclerosis. Of these, two SNPs in intron 1 of the IL2RA gene encoding the alpha chain of the interleukin-2 receptor (also called CD25, located at chromosome 10p15) are notable: rs12722489 (P=2.96x10-8; odds ratio, 1.25; 95% confidence interval [CI], 1.16 to 1.36) and rs2104286 (P=2.16x10-7; odds ratio, 1.19; 95% CI, 1.11 to 1.26) (Figure 4).

How do odds ratios of 1.25 and 1.19 translate to "[raising] a person's risk of developing MS by about 30 per cent"? Well, again, I believe there is some cherry-picking of numbers from data subsets going on here (the odds ratios reported are from multiple logistic regression in all cases). But odds-ratio oddities aside, what are the allele frequencies involved? This time the scientific article doesn't tell us, but we can work it out.

In this study, there was a "Screening Phase", consisting of 931 Family Trios vs. 2431 Control Subjects, and a "Replication Phase" involving 2322 Case Subjects, 609 Family Trios, and 2987 Control Subjects. The article's Table 2 gives us the following:

SNP rs12722489 has RAF ("risk allele frequency") of .85, with odds ratio of 1.35 in the screening phase, 1.19 in the replication phase, and 1.25 in the combined data. SNP rs2104286 has RAF of .75, with odds ratio 1.26 in the "screening phase", 1.16 in the "replication phase", and 1.19 in the combined data.

An odds ratio of 1.25 for the first genomic variant, given a background rate of 85%, would imply a rate in the MS patients of 87.6%: (.876/.124)/(.85/.15) ≅ 1.25. To get an overall odds ratio of 1.19 for the second variant, given a background rate of 75%, we'd need a rate in the MS patients of 78.1% (you do the math...).
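Or let Python do the math, by inverting the odds formula (the 85% and 75% risk allele frequencies and the 1.25 and 1.19 odds ratios are from the article's Table 2 as summarized above):

```python
def rate_from_odds_ratio(baseline_rate, odds_ratio):
    """Given a baseline rate p0 and an odds ratio, return the implied
    rate in the comparison group: scale the odds, then convert back."""
    new_odds = (baseline_rate / (1 - baseline_rate)) * odds_ratio
    return new_odds / (1 + new_odds)

print(round(rate_from_odds_ratio(0.85, 1.25), 3))  # 0.876
print(round(rate_from_odds_ratio(0.75, 1.19), 3))  # 0.781
```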

Here's what the NEJM authors say about their findings:

These variants are not rare mutations of the type that occur in diseases caused by a defect in a single gene, such as muscular dystrophy or sickle cell anemia. Rather, they are polymorphic variants that also occur in normal populations. However, each is more common in patients with multiple sclerosis than in control subjects, and each has a small effect on the risk of the disease.

More specifically, one of these variants occurs in about 85% of people without MS, and about 87.6% of people with MS; the other occurs in 75% of people without MS, and 78.1% of people with MS. (At least, these are the rates reconstructed by the logistic regression model.)

On this basis, we can calculate that the first variant increases someone's risk of MS by a factor of 87.6/85 ≅ 1.03, or in ordinary language by 3%; for the second variant, it's 78.1/75 ≅ 1.04, i.e. by 4%.

The Science Editor of the London Times describes this situation by telling us that "Research has identified two genetic variants that each raises a person's risk of developing MS by about 30 per cent".

With all respect, I submit that this is someone in acute need of further education in basic statistical reasoning and journalistic responsibility.

[There's another story to be told about why researchers (as opposed to journalists) like to report odds ratios. The basic answer is that they use logistic regression to model rates, so as to assign responsibility fairly among many factors at once, for example among many SNPs distributed in a partially-correlated way in the sample under study. This is an appropriate thing to do, in general -- but odds ratios, whether calculated directly from raw frequencies, or inferred from logistic regression coefficients, are NOT risk ratios, and should never be presented as such.

When the underlying rates are very low, the odds ratio approaches the relative risk asymptotically; but for rates in the range we've been talking about, the odds ratio is a massive overestimate of the relative risk, and to present it as if it were a ratio of risks is massively misleading.

If you believe the logistic regression, and you want to tell the public about relative risk as inferred by the regression model rather than as estimated directly from the raw frequencies, it's straightforward to calculate the (inferred) relative risk from the (inferred) odds ratio and information about overall rates (e.g. the "risk allele frequency"), as I've done above.
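A minimal sketch of that conversion -- this is the standard formula (due to Zhang and Yu) for recovering a relative risk from an odds ratio and a baseline rate, checked here against the MS figures worked out above:

```python
def relative_risk_from_odds_ratio(odds_ratio, baseline_rate):
    """Convert an odds ratio to a relative risk, given the baseline
    rate p0 in the reference group: RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - baseline_rate + baseline_rate * odds_ratio)

print(round(relative_risk_from_odds_ratio(1.25, 0.85), 2))  # 1.03
print(round(relative_risk_from_odds_ratio(1.19, 0.75), 2))  # 1.04
```

The same odds ratios that a Science Editor turned into "about 30 per cent" come out, as relative risks, to 3% and 4%.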

But my third cup of coffee is getting cold, so further discussion of logistic regression will have to wait for another morning.]

Posted by Mark Liberman at July 30, 2007 07:38 AM