Actually I don't think that's heretical any more, at least not among the programmers I know well.
The metric I, and others I know, have used to judge unit testing is: does it find bugs? The answer has been no. For the majority of the code I write, it certainly doesn't find enough bugs to make it worth the investment of time.
Here are some things that do find bugs for me. (For context: I'm the VP of Software Development at Justin.TV, so I have lots of code in production, in daily use by millions of people. Importantly, the code doesn't have a formal specification and never could have one; otherwise we would never make any progress!)
- User reports
- Automated monitoring systems
- Testing with simulated high load, and/or randomized inputs
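To make the randomized-inputs point concrete, here's a minimal sketch of the kind of thing I mean: throw thousands of random strings at a function and check only broad invariants, rather than hand-written input/output pairs. The `slugify` helper here is hypothetical, just something small enough to fuzz:

```python
import random
import string

def slugify(title):
    # Hypothetical helper: lowercase, keep alphanumerics, join words with dashes.
    words = title.lower().split()
    return "-".join("".join(c for c in w if c.isalnum()) for w in words)

def fuzz(rounds=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        title = "".join(rng.choice(string.printable)
                        for _ in range(rng.randint(0, 30)))
        slug = slugify(title)
        # Invariants that should hold for *any* input, not specific examples:
        assert slug == slug.lower(), (title, slug)
        assert " " not in slug, (title, slug)

fuzz()
```

The payoff is that you don't have to imagine the weird inputs; the generator imagines them for you, and a failing assertion hands you a concrete reproducing case.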
Here's an analogy that explains why I think unit testing hasn't been widely successful outside of a few niches like parsers and very well-specified libraries. Like all analogies it isn't perfect, but it may be useful anyway. Suppose instead of computer programs, we are interested in the health of people. Our practitioners are now medical doctors, rather than programmers.
- User reports correspond to a patient going to the doctor and telling him something doesn't seem right
- An automated monitoring system would be something like a portable ECG machine that's hooked up to your doctor's pager (how long before we're all wearing one?)
- Logging is like your medical history. Except the doctor can specify an arbitrary level of detail, and change the stuff that gets logged whenever he wants... that seems very useful!
- I don't think any doctor tests people's limits, or feeds them random stuff to see what will happen. People crashing is much more expensive than programs crashing
- Unit testing is something like your doctor checking that you have exactly one head (must not violate the singleton property!)
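In code, the head-counting test looks something like this (the `Patient` class is hypothetical, just there to carry the joke):

```python
import unittest

class Patient:
    # Hypothetical class standing in for the person being examined.
    def __init__(self):
        self.heads = 1

class TestPatient(unittest.TestCase):
    def test_has_exactly_one_head(self):
        # Passes every time, on every patient who walks in the door.
        # It verifies the property, but it was never the property at risk.
        self.assertEqual(Patient().heads, 1)
```

The test is green, the coverage report looks great, and none of it tells you anything about the failures that actually show up in production.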