April 02, 2005

Noisily channeling Claude Shannon

The following announcement landed in my email inbox at 11:48 last night. The author (or anyhow the sender) was Jason Eisner. It's really very funny, at least for those who are familiar with recent work in computational linguistics and machine learning. We usually try to avoid unexplained technical references, but in this case I'll make an exception.

          First and Last Call for Papers (April 1, 2005)

Frankly, NLP is just too hard, and unsupervised learning is getting
itself into all kinds of trouble now that it's in its teens.  Here in
the heart of the Silicon Swamp, we're alarmed to find ourselves
uttering random n-grams just for emphasis.  It's time to treat the world
to 99.9% accuracy.  It's time to redefine the task.  It's time for the

           1st Workshop on Unnatural Language Processing
                  Johns Hopkins University CLSP

TALK ABSTRACTS of up to 1 page due by APRIL 30, 2005 to xxx@xx.xxx.xxx.  
We will attempt to collect these in an online proceedings.  As this is
an electronic workshop, there is no time limit on the talks themselves, 
although there is also no guarantee that anyone will be within earshot.

Self-invited talks (highest bidder)
Question Evasion: Lessons from the Loebner Prize Competition
Understanding Abney's Exposition of Blum & Mitchell's Reinterpretation 
    of the Yarowsky Algorithm

Shared task

   Zero-Sum Corpora: Destructive Mining of the Web

       Twenty teams.  One Web.  Three days.
      Are you computational linguist enough? 

Government panel
Is Document Classification Easier on Classified Documents?
Information Extraction: A Government and Binding Approach 
Anti-Discriminative Training
A Sin Tax for Some Antics  
English Unzipfed: No Unigram Left Behind

Suggested paper topics
(We hasten to assure you that our purported theme on punitive
linguistics is merely a stratagem to extract abstracts from you.
You know that workshop organizers would never actually twist your arm in
a way that might keep you from typing something.  Thus, we concede that
we would grudgingly salivate over any overly original work at all: 
i.e., topics that have never been addressed before, and for good reason.)

* Scaling Down: From Universal Grammar to Galactic Grammar
* Corpse Linguistics (transducer decomposition, final states, 
                      the ultimate epsilon transition ...)
* Doonerism Spetection
* To Ken is at ion correct ion
* Self-Reference and its Implications for This Workshop
* Sentence Fragment Assembly and
* Cataphora Resolution (see below)
* Dynamic Time Warping (again)
* When Summarization Meets Wintarization
* Degenerative Grammar
* The Phrenology-Phrenetics Interface  
* Neuro-Linguistic Programming (a.k.a. Machine-Assisted Charisma)
* The New Irrationalist-Experientialist Debate

Machine miseducation track:
  + Overbearingly Supervised Techniques for Very Small Corpora
  + Mixtures of Pundits, Worldly Bayes Classifiers, & other Sadistical Models
  + The Information Turtleneck Algorithm
  + Aping Syntax: Monkey c-command, Monkey do command
  + Support Vector Hotlines and Other Monologue Systems
  + Bootstrapping Without the Boot

Program Committee (unconfirmed, indeed unwitting)
To preserve plausible deniability, we are adopting a triple-blind
procedure in which the reviewers will not be known even to themselves.
The most we can disclose here is that we will be noisily channeling
Claude Shannon and other emanations grises.

We are grateful for moral support from the Notional Science
Foundation, the Defense Advanced Delirium Agency (DADA), and 
the Linguistic Stipulation Consortium.

It would be an appropriate tribute to Jason to identify him as the author of these jokes, but the permanent home of this call for papers is at the Journal of Machine Learning Gossip.


Posted by Mark Liberman at April 2, 2005 08:07 AM