By now (2010) every experienced software developer worth her salt has heard about unit testing. If you're not familiar with the concept, here's how it goes in a nutshell, using some fake programming language:
class Thingie
{
    public function do_something(param)
    {
        if (param == 0)
        {
            return false;
        }
        c = new network_connection();
        data = c.fetch();
        return !empty(data);
    }
}

class ThingieTest extends TestFrameworkTestCase
{
    public function test_do_something()
    {
        T = new Thingie();
        this.assert_equals(T.do_something(1), true);
        this.assert_equals(T.do_something(0), false);
    }
}
TestFramework ThingieTest
..
1 test, 2 assertions, 0 failures.
Many languages have unit-testing frameworks available. Unit testing is an elegant, systematic, automatable practice that makes obsolete the ad-hoc throwaway programs developers often write to test various parts of their code. The field is young, yet mature enough that there's a small, well-known set of standard practices, so it's easy to start writing unit tests in language A if you're familiar with unit tests in language B. By and large, it's a great invention with considerable benefits.
Some have embraced the practice and follow it almost religiously, writing their unit tests first and actual software second. Others apply unit testing more like salt, pepper and nutmeg, as needed but not in excess. Still others write tests reluctantly and need to be reminded frequently, like patients chided for their infrequent flossing by their dental hygienist. And others just write code and custom test scripts (I actually know superlative software developers who've never written a single unit test and probably never will). You may find that a developer's attitude towards unit tests has some relation to the year they started getting paid programming gigs.
Unit Testing in Practice
The majority of developers I did, do, may, will, and won't ever work with, interview and hire probably fall in the unit-tests-as-spice camp: they're reasonably convinced unit testing is an excellent insurance policy, they may even evangelize the practice with unenlightened colleagues, but they readily admit their unit test coverage is spotty or out of date, with a curious mix of rational self-confidence ("I try to test code that's important, not getters and setters; 100% coverage is silly") and underflossed guilt ("I probably don't write as many tests as I should"). I'm certainly one of them.
Many (most?) software shops put a lot of pressure on developers to just crank out code that does something. As a result, software that doesn't do anything visible gets short shrift, if it gets written at all, and unit tests are often the first to get the axe. Oversimplifying, the tacit (and circular) rationale looks a lot like this:
- a good developer's best code is bug-free
- bug-free code will pass all its unit tests, by definition
- therefore, writing tests is a waste of time, since they won't uncover any bugs
A common strategy to encourage beneficial behavior is to wrap the hard-to-do stuff inside something people already do: hide a cat pill inside a yummy treat, or stick a tongue-scraper on the non-bristly side of a toothbrush head. Brush your teeth and clean the rest of your mouth at no extra charge!
Similarly, the benefit of unit tests can be self-sustaining if you can make your tests:
- run as often as necessary
- run at no cost to the developer
- draw attention to their usefulness very forcefully when something breaks
- remain invisible otherwise
Here are suggestions to increase your team's enthusiasm for, and the effectiveness of, consistent unit testing. All of these tips are some variation on the parasitic-benefit approach outlined above.
- trigger your unit tests automatically using a pre-commit hook in your source-control system. Every time someone tries to check in her code, all your tests run automatically, and any failure blocks the commit. You have to fix the code (or the test) before you're allowed to check in. (A minimal hook sketch follows this list.)
- trigger all your tests automatically, including your integration tests, at the top of your release or build script. The release or build is blocked if any test fails. (The same sketch below doubles as a build gate.)
- use a bootstrapping framework or wizard to generate boilerplate code, including matching unit-test code, for every class you write. The less code you have to write, the more time you have to work on the good stuff. Ruby on Rails and Django are such bootstrap systems, and they're easily extended to emit boilerplate unit-test code. (A toy scaffolder appears after this list.)
- have your unit tests create persistent, semi-random test data in your dev database. Every time you run your tests, you get more data in your development database, which makes your application more realistic during testing and can uncover issues you only run into under actual use conditions (e.g., you might realize what a mess your UI is when it tries to display 250 items on a single form).
- have your unit tests obliterate the test data in your dev database every time they run. This weans you from over-reliance on test data, which can be handy if you're shipping a brand-new product with zero user data on launch day: you consistently get to interact with the same product your early users will see on day one. Yes, this is exactly the opposite of the previous point; use your judgment. (The last sketch below handles both this tip and the previous one with a single flag.)
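To make the first two tips concrete, here's a minimal sketch in Python, assuming a unittest-style suite under a tests/ directory (adjust the discovery path to your project). Saved as .git/hooks/pre-commit and made executable, it blocks any commit while a test fails; invoked at the top of a build or release script, the same non-zero exit blocks the build.

#!/usr/bin/env python
# Minimal test gate. Git aborts the commit when a pre-commit hook exits
# non-zero, and a build script can bail out on the same exit code.
import subprocess
import sys

# Assumption: tests are unittest-discoverable under ./tests
result = subprocess.call([sys.executable, "-m", "unittest", "discover", "-s", "tests"])
if result != 0:
    print("Tests failed; commit/build blocked. Fix the code (or the test) and retry.")
sys.exit(result)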
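For the scaffolding tip, here's a toy generator, again in Python; every name in it (scaffold.py, the templates) is made up for illustration, not taken from Rails or Django. Run with a class name, it writes a skeleton module and a matching test stub whose placeholder test fails until you replace it, so a new class can't quietly go untested.

#!/usr/bin/env python
# Toy scaffolder (hypothetical): "python scaffold.py Thingie" writes
# thingie.py and test_thingie.py side by side.
import sys

CLASS_TEMPLATE = '''class {name}:
    pass
'''

TEST_TEMPLATE = '''import unittest
from {module} import {name}

class {name}Test(unittest.TestCase):
    def test_placeholder(self):
        self.fail("write a real test for {name}")

if __name__ == "__main__":
    unittest.main()
'''

def scaffold(name):
    module = name.lower()
    with open(module + ".py", "w") as f:
        f.write(CLASS_TEMPLATE.format(name=name))
    with open("test_" + module + ".py", "w") as f:
        f.write(TEST_TEMPLATE.format(module=module, name=name))

if __name__ == "__main__":
    scaffold(sys.argv[1])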
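And one sketch covers the last two tips, using Python's built-in sqlite3 and a hypothetical items table. Call it from your test suite's setup: with WIPE_FIRST off, semi-random rows pile up across runs and your dev app gets more realistic; with it on, every run starts from the day-one empty state.

#!/usr/bin/env python
# Dev-database fixture sketch; the schema here is hypothetical.
import random
import sqlite3

WIPE_FIRST = False  # True: start clean every run; False: let test data accumulate

def prepare_dev_data(db_path="dev.db", rows=25):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS items "
                 "(id INTEGER PRIMARY KEY, label TEXT, qty INTEGER)")
    if WIPE_FIRST:
        conn.execute("DELETE FROM items")  # obliterate leftovers from earlier runs
    for _ in range(rows):
        conn.execute("INSERT INTO items (label, qty) VALUES (?, ?)",
                     ("item-%04d" % random.randint(0, 9999), random.randint(1, 500)))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    prepare_dev_data()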
There are many other approaches, including forgoing unit testing altogether, and sometimes that might be just what the doctor ordered. Don't be afraid to try new things, nag at your colleagues (not too much), drop what fails and pursue whatever works for your product and organization. And by all means share your experience: open-source processes are just as vital as open-source software.