The Conifer Systems Blog

On Daily Builds


For many years it’s been a standard “best practice” of software development to have a “daily build”, or, if you prefer, a “nightly build”.  But why daily?  Why not twice daily, or hourly?  If the build is broken, wouldn’t we want to know about it sooner rather than later?  The faster it gets fixed, the lower the productivity cost to your team.

Granted, there are some software projects that take more than an hour to build, so a single computer couldn’t do an hourly build.  But those are the same huge projects where you can easily afford to buy more than one computer to do your builds.

And of course if an hour goes by and no new changes have been committed, there’s not much sense in doing another hourly build.  Which immediately leads to the insight: why not simply build on every single change that’s committed to the repository?
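
What might that look like in practice?  Here’s a minimal sketch of a build-on-every-change loop, assuming a Subversion repository and a make-style build.  The repository URL, build command, working-directory names, and polling interval are all placeholders; a real build farm would of course be more sophisticated than this.

#!/usr/bin/env python
"""Minimal sketch of a build-on-every-change loop.

Assumes a Subversion repository and a make-based build; the URL, the
build command, and the polling interval are placeholders to adapt.
"""

import subprocess
import time

REPO_URL = "http://svn.example.com/project/trunk"  # hypothetical URL
BUILD_CMD = ["make", "-j4", "all"]                 # placeholder build command
POLL_SECONDS = 30

def latest_revision():
    """Return the repository's youngest revision number."""
    info = subprocess.check_output(["svn", "info", REPO_URL], text=True)
    for line in info.splitlines():
        if line.startswith("Revision:"):
            return int(line.split(":", 1)[1])
    raise RuntimeError("could not parse 'svn info' output")

def build(revision):
    """Check out the given revision and run the build, reporting pass/fail."""
    workdir = "build-r%d" % revision
    subprocess.check_call(["svn", "checkout", "-q", "-r", str(revision),
                           REPO_URL, workdir])
    status = subprocess.call(BUILD_CMD, cwd=workdir)
    print("r%d: %s" % (revision, "OK" if status == 0 else "BROKEN"))

def main():
    last_built = latest_revision()
    build(last_built)
    while True:
        time.sleep(POLL_SECONDS)
        head = latest_revision()
        # Build every intermediate revision, not just the newest one, so
        # that each individual change gets its own pass/fail result.
        for rev in range(last_built + 1, head + 1):
            build(rev)
        last_built = head

if __name__ == "__main__":
    main()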

Sure, it may cost you a bit to buy a few computers to do those builds.  But let’s not forget how cheap computing power has gotten: if we’re talking about machines that can run headless and don’t need a monitor or video card, we can easily put together an extremely powerful workhorse for a build farm (Core 2 Quad, 4GB RAM, 750GB hard drive, etc.) for under $500.

One problem you may run into is that many changes to your repository don’t affect your builds, or only affect a subset of your builds.  No sense in rebuilding when someone changes a documentation file that doesn’t ship externally.  If you build every component of every project on every change, the system won’t scale well as the volume of changes and the number of components increase over time.  You could write up a list of “rules” to tell the system when to build and when not to build, but this could be a lot of work, your rules could have bugs in them, and you’d constantly have to keep them up to date.  Well, there’s a solution to this, too: a system like Cascade can, by hooking the file system, automatically determine which changes affect which builds.
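
To make the “rules” idea concrete, here’s what such a hand-written mapping might look like.  The paths and component names below are invented for illustration; the point is that a table like this has to be kept in sync with the real dependency structure by hand, which is exactly the maintenance burden a system that watches the file system can take off your plate.

# Hand-maintained "rules" of the kind described above: a map from path
# prefixes to the builds they affect.  Paths and component names are made
# up for illustration.

RULES = {
    "src/compiler/": ["compiler", "full-product"],
    "src/runtime/":  ["runtime", "full-product"],
    "tests/":        ["test-suite"],
    "docs/":         [],   # documentation changes trigger no builds
}

ALL_BUILDS = ["compiler", "runtime", "test-suite", "full-product"]

def builds_affected(changed_paths):
    """Return the set of builds that the changed files might affect.

    Any path that matches no rule falls back to rebuilding everything,
    which is the safe (but expensive) default.
    """
    affected = set()
    for path in changed_paths:
        for prefix, builds in RULES.items():
            if path.startswith(prefix):
                affected.update(builds)
                break
        else:
            affected.update(ALL_BUILDS)
    return affected

# A change touching only documentation triggers nothing:
print(builds_affected(["docs/manual.txt"]))    # -> set()
print(builds_affected(["src/runtime/gc.c"]))   # -> {'runtime', 'full-product'}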

Once you have a system like this set up, what do you gain?

  • Quick notification of build breaks.  There’s still some latency (if the build takes 5 minutes, you won’t know about a break until roughly 5 minutes after it’s committed), but if this is a problem, you can mitigate it by breaking up a large build into several smaller builds.
  • The system will tell you exactly which change broke the build, without any guessing on your part.  If a particular engineer is being careless and breaking the build frequently, you can remind them to test more carefully in the future.  Or, you can set up some kind of minor penalty for breaking the build (whoever breaks the build buys the team donuts, or something like that).
  • No need to explicitly “cut a build” for QA or release at a particular change.  Since the system builds every change, chances are it’s already been built!
  • Ability to binary-search for regressions down to a single change.  If the system builds every single change and saves off the results somewhere, you can determine which change introduced a bug by doing a binary search over the resulting binaries.  (For example, if a bug was introduced yesterday and there were 30 changes that day, you could find the responsible change by running just 5 tests.)  Since this doesn’t require any debugging or understanding of the code itself, it’s a good way to offload work from your developers to your QA team and speed up bug triage: when a bug is filed, QA can tell you whether it’s a regression, and if so, which change caused it.  (A sketch of this bisection follows the list.)
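
Here’s roughly how that bisection might look.  It assumes a build has been saved for every revision and that run_test(revision) runs the failing test against that revision’s saved binaries, returning True when it passes; both the revision list and the test harness are placeholders for whatever your build archive and QA tooling actually provide.

def find_breaking_revision(revisions, run_test):
    """Given an ordered list of revisions where the first is known good and
    the last is known bad, return the revision that introduced the bug."""
    lo, hi = 0, len(revisions) - 1   # revisions[lo] passes, revisions[hi] fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if run_test(revisions[mid]):
            lo = mid                 # still good: the bug came in later
        else:
            hi = mid                 # already broken: the bug is here or earlier
    return revisions[hi]

# With about 30 candidate revisions, the loop runs the test roughly
# ceil(log2(30)) = 5 times, matching the figure in the bullet above.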

It’s time to update this best practice.  A daily build is good, but a build on every committed change is better.


Written by Matt

September 18th, 2008 at 3:31 pm
