Thomas Crimi writes:
I was forwarding your article on the Pirahã resistance to using exact numbers to a friend of mine; when I summarized it, I noticed that this phenomenon was described in software-development circles a few years ago by Paul Graham, who called it the Blub Paradox.
'Blub' is the name given to a hypothetical middle-of-the-pack programming language (standing in for Java so as to be politically neutral). When some people rave about the great features of other languages (say, Graham's baby, Lisp), the Blub programmer shows complete indifference, not seeing how those features would make a practical difference day-to-day. Meanwhile, for any language worse than Blub, the same programmer can rail about how it would be impossible to do useful work without feature X that Blub has.
From a cursory google, the term seems to appear first in his 2001 essay "Beating the Averages".
The lesson of Blub is that even though we may feel superior to 'those' programmers, we all have our own Blub, and we need to try to see what else out there is worth learning.
Yes, I think that the Blub Paradox has considerable explanatory force, and I'm glad to learn about it.
But the political economy of such situations is uncertain, it seems to me. If your way of life (or your programming language, or your way of thinking about group properties) is working for you and your peeps, it's not obvious that it's generally a good idea to invest a lot of time and effort in trying to learn something different, just because some outsider tells you that you should. It might pay off, and then again it might not; and there are various costs, not least the potential personal or social disruption.
Standing pat might sometimes be the best choice, alas, even when the outsiders are right. Thus people who think that they understand statements like "women have more sensitive hearing than men", without translating them into claims about sampled distributions, are fools at best. But the damage that people do to themselves by continuing to use crude approximate semantics, conceived in terms of the properties of prototypes, might not in individual cases outweigh the costs of learning new thinking skills.
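To make that concrete, here is a minimal sketch, in Python and with invented numbers, of what "translating into claims about sampled distributions" amounts to: the group statement becomes a claim about two heavily overlapping distributions, not a claim about every woman and every man. The 2 dB mean difference and 6 dB standard deviations below are made up purely for illustration, not real audiometric data.

```python
# A made-up illustration (not real data): restating a group difference
# as a claim about two overlapping sampled distributions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hearing thresholds in dB (lower = more sensitive).
# Means and standard deviations are invented for illustration only.
women = rng.normal(loc=8.0, scale=6.0, size=100_000)
men = rng.normal(loc=10.0, scale=6.0, size=100_000)

print(f"mean(women) = {women.mean():.1f} dB, mean(men) = {men.mean():.1f} dB")

# "Women have more sensitive hearing" as a distributional claim:
# the probability that a randomly chosen woman has a lower threshold
# than a randomly chosen man. Pairing independent draws estimates this.
p = (women < men).mean()
print(f"P(random woman more sensitive than random man) = {p:.2f}")
```

With these made-up parameters, that probability comes out around 0.6: well above chance, but nothing like the categorical claim that the prototype-based reading suggests.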
That would be true, for example, if most of the damage is due to bad social decisions (whether explicit public policy mistakes, or implicit market outcomes), which any one individual has little control over. As a result, if you invest in better understanding, you may just wind up writing whiny weblog entries about what's wrong with bestselling books and dominant software-engineering practices.
Posted by Mark Liberman at October 9, 2007 12:55 PM