Run The Tests You Write

A blog post based on my Testing Anti-Patterns Talk

Tests are supposed to help you

I’ve cloned a number of repos only to discover that, while there are tests, they haven’t been run in a long time. I usually feel a bit sad about this. Someone cared enough to write the tests, but now they’re languishing in an un-runnable or failing state. It seems like such a waste to write tests and not run them.

Test Before Pushing

The number one recommendation I’ve received to improve my code quality is TATFT (Test All The Fucking Time). I like to run autotest in an emacs buffer while I work so that I can always see the state of the project. Each time I save, the tests run and I know if I broke something. Tight feedback loops make debugging much simpler and help you correct your errors quickly. It is a lot like checking your spelling as you type in a word processor.

If you can’t or won’t run a tool like autotest, the next best thing is to run your tests before you push your code. One project I worked on had a push script. It dropped the test database, migrated, and then ran all the tests. If all those steps passed, it pushed the code up to GitHub. If any step failed, it showed you the error so you could fix it before you pushed. It probably took our lead 30 minutes to code up, and it helped two newbies to the project keep the main branch in good shape at all times. Again, it’s like spell check: if you wouldn’t turn in an English paper without running spell check, don’t turn in your code without running the tests.

Use Continuous Integration

Continuous Integration is TATFT at a team level. It makes everyone aware of the status of the system. It is especially helpful when you have to work with people who won’t run tests locally.

I’ve used a number of CI systems and haven’t found one I think is perfect yet, but there are several that are adequate. Pick one that meets your needs (easy to set up, runs internally, runs externally, etc.) and make sure it runs your tests on every commit.

If you have a very large (or very slow) test suite, consider building different CI runs for different purposes: a fast run that checks for big problems goes first. After that passes, there could be another script that tests features in more detail, a script that tests the UI with Selenium, or a script that tests deployment or setup from scratch. Breaking it up this way gets the important feedback out while the developer still remembers what they did and can hopefully fix it easily.
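A staged run like that can be sketched as a small driver script. This is a hedged example rather than any particular CI system’s API, and the stage names and rake tasks are assumptions:

```ruby
# Run CI stages in order, fastest first. Stop at the first failure and
# return the name of the failing stage (or nil when everything passed),
# so the slow suites never run against an already-broken commit.
def run_stages(stages)
  stages.each do |name, command|
    puts "stage: #{name}"
    next if system(command)
    warn "stage '#{name}' failed -- skipping the slower stages"
    return name
  end
  nil
end

# Hypothetical stage list, ordered by how quickly each gives feedback:
# run_stages([
#   ["fast checks",         "rake test:units"],
#   ["detailed features",   "rake test:integration"],
#   ["selenium UI",         "rake test:ui"],
#   ["deploy from scratch", "rake test:deploy"],
# ])
```

The ordering is the whole point: the committer hears about a broken unit test in seconds, and the browser-driven and deployment suites only spend their minutes on commits that already passed the basics.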

Listen To Your Tests

Finally, running your tests often doesn’t do a bit of good if you don’t care when they fail. A failing test is saying “Something is wrong.” As a developer, it is your responsibility to figure out what is wrong and fix it.

A lot of teams seem to have a culture of ignoring failing tests or a red CI run. Often this culture forms because the tests themselves aren’t trustworthy. The team has “that test that sometimes fails when nothing is wrong,” and soon people are ignoring real failures because they can’t remember which test is the inconsistent one. The best way to fix this is to make your tests trustworthy. If a test is bad, delete it. If the CI system has a bug, fix it (or find a new CI). If a feature is broken, fix it.

I’ve also seen folks ignore a red CI by saying “that error is in a different part of the code, I couldn’t possibly have broken it” or “Eddie owns that feature, he’ll fix it tomorrow.” Group code ownership and knowledge sharing can help alleviate the “not my problem” mentality. Teams succeed or fail together, so you need to own your code base together.

Automatically Generated Tests

One final word on automatically generated tests. I’ve seen a number of Rails projects that look well tested on the surface, but when you dig deeper you find a lot of automatically generated tests that don’t reflect the current functionality of the application. I’ve also seen a project with more than 20 test files, each containing a single automatically generated “test the truth” test. If you use a library or tool that generates tests for you, either maintain them or delete them. It is confusing to people joining your project to find a pile of meaningless or failing tests.
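For readers who haven’t seen one, such a stub looks roughly like this (a minimal Minitest version; older Rails generators scaffolded something very similar):

```ruby
require "minitest/autorun"

# The kind of placeholder a generator leaves behind: it always passes
# and verifies nothing about the application.
class UserTest < Minitest::Test
  def test_the_truth
    assert true
  end
end
```

Twenty files of these make a suite look substantial while telling you nothing, which is exactly why they should either grow into real tests or be deleted.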