Laputan Logic (a smart person who always has intelligent and interesting things to say) has recently posted about words and numbers. There's a fascinating section about Chinese number punning (e.g. 114 = "most surely die"), and he includes a handy in-line calculator for translating particular number strings. My telephone prefix 417 translates as "definitely guaranteed certainly", which is good to know, especially since my area code 215 is "easy guaranteed never".

LL begins by quoting a passage from Umberto Eco's "Search for the Perfect
Language", about Leibniz' *lingua generalis*, which provides a safely
17th-century example of my suggestion
that "it takes a really smart person to have a really spectacularly stupid
idea". From the limited quotations from Eco about Leibniz, you might not
grasp the deep and beautiful nuttiness of his proposal. The key ideas were
a *characteristica universalis* that assigns a different prime number
to each primitive concept (we're guaranteed never to run out of primes), and
a *calculus ratiocinator* that creates complex concepts by multiplication (since the prime factorization theorem guarantees a unique decomposition into primitives) and evaluates predication by division (are
the factors of the predicate among the factors of the subject?).
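To make the nuttiness concrete, here's a toy sketch (in Python) of the scheme as Eco describes it: primitive concepts get distinct primes, complex concepts are products of their primitives, and predication is evaluated by divisibility. The particular concepts and prime assignments are invented for illustration, not anything from Leibniz's own tables.

```python
# Hypothetical primitive concepts, each assigned a distinct prime.
PRIMITIVES = {"rational": 2, "animal": 3, "mortal": 5}

def concept(*names):
    """Build a complex concept as the product of its primitives' primes;
    unique factorization guarantees we can recover the primitives."""
    n = 1
    for name in names:
        n *= PRIMITIVES[name]
    return n

def predicates(subject, predicate):
    """'Subject is predicate' holds iff every factor of the predicate
    appears among the factors of the subject, i.e. predicate divides subject."""
    return subject % predicate == 0

human = concept("rational", "animal")      # 2 * 3 = 6
print(predicates(human, PRIMITIVES["animal"]))   # True: 3 divides 6
print(predicates(human, PRIMITIVES["mortal"]))   # False: 5 doesn't divide 6
```

Note that the whole "calculus" reduces to one modulus operation per predication, which is presumably what made it look like an end to violence.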

Leibniz felt that this would allow legal, religious and political disagreements to be solved by calculation rather than by violence.

I've always wondered whether Leibniz had a story to tell about how to use multiplication
of primes to construct a logical formula other than a single predication. Suppose
that A and B are propositions -- whether atomic or complex doesn't matter --
and we've assigned some prime, say 23, to the concept "implies" -- what about "A
implies B" vs. "B implies A"? And what about more elaborate formulae where order matters? I can imagine various procedures
for encoding string order or formula structure as products of primes, but did
Leibniz have a story to tell about this? I've never learned enough about the
details of his *calculus ratiocinator* to determine the answer.
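One such procedure -- Gödel's, from nearly three centuries later, and as far as I know not anything Leibniz described -- encodes the i-th symbol of a formula as the i-th prime raised to that symbol's code, so unique factorization preserves sequence. A sketch, with invented symbol codes:

```python
# Position primes for short formulae, and hypothetical codes for symbols.
PRIMES = [2, 3, 5, 7, 11]
CODES = {"A": 2, "implies": 3, "B": 5}

def godel_number(symbols):
    """Encode a symbol sequence as the product of p_i ** code(s_i),
    where p_i is the i-th prime. Factoring the result recovers
    both the symbols and their order."""
    n = 1
    for pos, sym in enumerate(symbols):
        n *= PRIMES[pos] ** CODES[sym]
    return n

a_implies_b = godel_number(["A", "implies", "B"])  # 2**2 * 3**3 * 5**5
b_implies_a = godel_number(["B", "implies", "A"])  # 2**5 * 3**3 * 5**2
print(a_implies_b != b_implies_a)  # True: order now matters
```

Of course, this only sharpens the next problem: the numbers grow astronomically with formula length, which brings us to the factoring issue below.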

Then there's the problem of the algorithmic complexity of factoring products of really large primes -- and there are surely enough primitive concepts and modes of combination that we'll need some big primes to encode them all. And there's the problem of relating logical formulae to the facts of the world. And then there's the question of whether human conflicts are really very often based on different (mis)understandings of propositions, as opposed to different interests and goals.

Putting it all together, I think we have a winner. Leibniz was clearly a *really
smart person*, and the proposal to solve political and religious disagreements
by translating natural language discourses into products of prime numbers was
a *really stupid idea*. It's a good premise for historical fantasy, though
-- the idea of a sort of Leibnizian underground, operating through history into
modern times, is one of the fun
background assumptions of Neal Stephenson's currently-unfolding historical
trilogy.

To advance the discussion, in a gingerly way, into more recent times, I'll mention that I first asked myself these questions about Leibniz shortly after the publication of Katz & Fodor (1963), which advanced a theory of natural-language semantic interpretation based on the decomposition of word meanings into sets of primitive semantic features, and the collection of these features into ever-larger sets by a recursive procedure operating on syntactic "deep structures".

Posted by Mark Liberman at April 8, 2004 06:23 PM