The results of our first annual TDD and Automated Testing Survey are in, and some of them are surprising.
First, some interesting data:
- 49% said they ran a local build 10 times a day or more.
- 50% said their local build failed 2 times a day or more.
- 23% said their build takes 5 minutes or more to run.
- 54% said they break the build on their CI server at least once per week.
- 41% said they work on projects with test coverage greater than 75%.
One of the things I was looking for in this survey was data that reflects the cost of a failing build. We don't want our builds to always pass; you don't get any feedback from an indicator that always reports the same thing! But we also don't want them to fail too often, because each failure has a cost. Whether it's fixing the problem and re-running the test, or blocking the team while you fix the build on the CI server, there's a balance between the feedback we get and the cost we pay for failure.
One way to think about this is that the cost you pay for getting the feedback is a hedge against finding out something is broken later. For example, I'll gladly spend a minute to be 99% sure that my 30-minute deployment process is going to work. I had suspected that the typical failure rate for a local build was around 10%, but the data seems to point to a failure rate closer to 20% (maybe more!).
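To make that trade-off concrete, here's a back-of-envelope calculation. The numbers below are illustrative assumptions (only the ~20% failure rate comes from the survey data); the point is that a cheap local build pays for itself whenever the expected time lost to a downstream failure exceeds the build's cost:

```python
# Back-of-envelope: the local build as a hedge against a failed deployment.
# All numbers are illustrative assumptions, except the ~20% failure rate.
build_cost_min = 1      # time to run the local build
deploy_cost_min = 30    # time lost if the deployment fails downstream
failure_rate = 0.20     # roughly what the survey data suggests

# Expected time lost per deploy if you skip the local build entirely
expected_loss = failure_rate * deploy_cost_min   # 6.0 minutes

# Break-even failure rate: below this, the build costs more than it saves
break_even = build_cost_min / deploy_cost_min    # ~3.3%

print(f"Expected loss without the build: {expected_loss:.1f} min")
print(f"Build pays for itself above a {break_even:.1%} failure rate")
```

At a 20% failure rate the one-minute build saves an expected five minutes per deploy, so the hedge is clearly worth it.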
If this is true, then it's very likely that people are paying way too much when running local builds. That is, they need to adopt practices like Continuous Testing that help them fail faster. I'd be willing to bet that there's also an inverse relationship between build length and failure rate: I think faster builds lead to smaller steps, which naturally decrease failure rates. I'm looking forward to following up with more surveys to test this theory.
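In practice, Continuous Testing means re-running a fast subset of your tests automatically on every save, rather than waiting for a full manual build. Here's a minimal sketch using Python and the watchdog package; the `src` path and the `pytest tests/unit` command are hypothetical placeholders for whatever your project actually uses:

```python
# Minimal continuous-testing sketch: re-run fast tests on every save.
# Assumes the watchdog package; paths and test command are hypothetical.
import subprocess
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class RunTestsOnChange(FileSystemEventHandler):
    """Re-run the fast unit tests whenever a source file changes."""
    def on_modified(self, event):
        if event.src_path.endswith(".py"):
            # Run only the fast tests so feedback arrives in seconds
            subprocess.run(["pytest", "tests/unit", "-q"])

observer = Observer()
observer.schedule(RunTestsOnChange(), path="src", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)   # keep watching until interrupted
except KeyboardInterrupt:
    observer.stop()
observer.join()
```

The key design choice is scoping the watched command to a suite that runs in seconds; the whole point is to shrink the feedback loop, not to rerun the full build on every keystroke.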
You can get the raw data from the survey here.