Fernando Pereira concurs.
Minsky and Chomsky famously disagreed about the prospects for AI, but I don't think the way things have turned out means that Chomsky wins the argument.
At one level, perhaps Chomsky did win. Minsky argued that to study human language, you should combine models of language structure with models of meaning and common-sense reasoning, and that all this is accessible through the methods of classical AI. Chomsky argued that meaning is among the "mysteries" that "lie beyond the reach of the form of human inquiry that we call 'science'", while language structure is a "problem" where science can make progress.
In the 1960s, Minsky convinced a lot of people to follow his program. And 30 or 40 years later, the project is dead in the water, as he admits. So perhaps Chomsky was right about the "mysteries" business. (Though it's not obvious to outsiders that Chomsky's own theories are notably more successful than they were 30 years ago.)
But Pereira makes a different objection:
Coding up a tangle of "common sense knowledge" is useless if the terms of that knowledge are not endowed with meaning by their causal connection to perception and action. The grand challenge is how meaning emerges from a combination of genetically wired circuitry and learning.
On that view, Minsky and Chomsky were both wrong, and in the same way. They share the belief that activities of the mind can (and should) be understood in terms of the manipulation of formulae that are not essentially grounded in perception and action. Pereira, along with many others, suggests that this belief (which he himself once held or at least acted on) is fatally mistaken.
This debate -- whose roots go back through Descartes and Locke -- isn't over yet. The neo-Lockeans have some neat initial results, just as Minsky and Chomsky did circa 1970. But most of the problems (or mysteries?) remain unsolved.
Posted by Mark Liberman at July 31, 2003 07:25 PM