February 05, 2006

112 words for misunderstanding meaning?

Robin Marantz Henig's article in today's NYT magazine, "Looking for the lie", doesn't mention Eskimos. However, it presents an interesting -- if snowless -- example of the lexicographic fallacy. Henig opens a discussion of Daniel Langleben's research on the neuroscience of lying by telling us that "The English language has 112 words for deception, according to one count, each with a different shade of meaning".

The point is not that Albion's heirs are especially perfidious, just that lying can be individually and socially complicated, and that Langleben is interested in the possibility that different kinds of lies might have different kinds of neural correlates. Here are the first two paragraphs of the section:

The English language has 112 words for deception, according to one count, each with a different shade of meaning: collusion, fakery, malingering, self-deception, confabulation, prevarication, exaggeration, denial. Lies can be verbal or nonverbal, kindhearted or self-serving, devious or baldfaced; they can be lies of omission or lies of commission; they can be lies that undermine national security or lies that make a child feel better. And each type might involve a unique neural pathway.

To develop a theory of deception requires parsing the subject into its most basic components so it can be studied one element at a time. That's what Daniel Langleben has been doing at the University of Pennsylvania. Langleben, a psychiatrist, started an experiment on deception in 2000 with a simple design: a spontaneous yes-no lie using a deck of playing cards.

Thus the question of vocabulary size is, as usual, irrelevant. The logic of this passage would be unchanged if we English speakers were forced to distinguish different kinds of lies entirely by describing them phrasally -- as Henig in fact does by listing "lies of omission" and "lies that make a child feel better" -- rather than with single lexical items like "malingering" and "exaggeration". Henig's exposition falls into the common fallacy of implying that concepts can only be distinguished by being in a one-to-one relationship with dictionary entries.

This is not just semantic fussiness. It's implausible that there are 112 (or 37 or 243 or 14) distinct and atomic types of deception. Deception is presumably a structured concept with several aspects: the audience, purpose, content, scale, source and justification of the deception, among others. Each aspect can have many values, themselves sometimes complex. Perhaps there are several overlapping conceptual systems involved at once, or at different times, or to different degrees for different people. Deception presumably engages more general neural systems of emotion, memory, communication and so on. It's surely a mistake to think that understanding deception is a matter of listing its 112 atomic types and determining where in the brain each one is localized. This is the sort of thinking that generalizes notions like "the gene for X" and "the brain region for Y" far beyond the (significant but limited) domains where they make sense.
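The combinatorial point can be made concrete with a toy sketch. The aspects and values below are purely illustrative stand-ins (not a serious taxonomy of deception), but even this small, hypothetical set of choices yields more combinations than the 112 words of the article's count -- which is exactly why enumerating "atomic types" is the wrong model:

```python
from itertools import product

# Illustrative aspects of deception, loosely following the paragraph above
# (audience, purpose, content, scale). The specific values are invented
# for the sake of the arithmetic, not drawn from any real taxonomy.
aspects = {
    "audience": ["self", "child", "stranger", "public"],
    "purpose": ["kindness", "self-interest", "concealment", "harm"],
    "content": ["omission", "commission", "exaggeration", "denial"],
    "scale": ["white lie", "everyday", "consequential"],
}

# Every combination of one value per aspect is a distinct "kind" of lie.
combinations = list(product(*aspects.values()))
print(len(combinations))  # 4 * 4 * 4 * 3 = 192, already more than 112
```

And this is before adding further aspects (source, justification), richer value sets, or the overlapping and graded cases the paragraph mentions -- the space grows multiplicatively, not by appending entries to a list.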

A bit later in the article, Henig quotes Steve Kosslyn making a similar point:

Deception "is a huge, multidimensional space," he said, "in which every combination of things matters."

However, the article immediately goes back to talking in atomistic terms:

Each type of lie might lead to activation of particular parts of the brain, since each type involves its own set of neural processes.

After discussing Langleben's research program, the article gives a misleading impression of the experimental apparatus involved:

His research involved taking brain images with a functional-M.R.I. scanner, a contraption not much bigger than a kayak but weighing 10 tons. Unlike a traditional M.R.I., which provides a picture of the brain's anatomy, the functional M.R.I. shows the brain in action. It takes a reading, every two to three seconds, of how much oxygen is being used throughout the brain, and that information is superimposed on an anatomical brain map to determine which regions are most active while performing a particular task.

There's very little about being in a functional-M.R.I. scanner that is natural: you are flat on your back, absolutely still, with your head immobilized by pillows and straps. The scanner makes a dreadful din, which headphones barely muffle. If you're part of an experiment, you might be given a device with buttons to press for "yes" or "no" and another device with a single panic button.

I guess that this passage is strictly true as written, but it is likely to leave most readers with the idea that "a functional-M.R.I. scanner" is a special kind of device, different from the device that produces "a traditional M.R.I." But in fact it's exactly the same piece of apparatus, just used in a different way. Here readers are invited to extend the fallacy one step further: the one-to-one relationship assumed for concepts and terms seems to apply to devices as well. An "fMRI scanner" performs a different function from an "MRI scanner", so it must be a different device. But it isn't. (Some technical tutorials on how MRI and fMRI work can be found here and here.)

Thus linguists and logicians shouldn't feel picked on. It's true that science writers (or their editors, it's always hard to tell) are often careless with linguistic concepts, but unfortunately they're often no more careful in dealing with issues in physics, chemistry, biology and psychology.

Henig's article deals with a timely and important topic, and surveys a range of interesting research (including Paul Ekman's work on facial expressions). It's written in a clear and engaging style, and has interesting things to say about the role of lies in everyday life, and about the unintended consequences that might follow from a genuinely effective technology of lie detection. And it emphasizes the important point that there isn't a single, simple phenomenon of "deception" that might have a single, easily identifiable physiological correlate. But it's too bad that the article isn't more careful to avoid pushing the same idea back a stage, to the view that there might be a fixed number of well-defined types of deception, each with its own physiological signature.

A good review of some of the issues involved in hi-tech lie detection can be found in Paul Root Wolpe, Kenneth R. Foster and Daniel D. Langleben, "Emerging Neurotechnologies for Lie-Detection: Promises and Perils", American Journal of Bioethics, Volume 5, Number 2 / March-April 2005.

Posted by Mark Liberman at February 5, 2006 12:30 PM