Build Forge Briefing

Last week Anne and I talked with folks from (now) IBM’s Build Forge. The topic was ostensibly continuous integration, but naturally expanded into Build Forge as a whole and the context of builds.

Builds!

Build Forge provides what you might call a “full plate” build management system. As fellow Austinite Doug Fierro (Director of Build Forge Product Management) said, it’s really a framework for doing builds. In use, this means Build Forge will perform not only “traditional” builds, but also continuous integration and release builds, and it provides enterprisey features like audits and role-based access to individual parts of the build and to the build as a whole.
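
To make the role-based access idea a bit more concrete, here’s a purely illustrative Python sketch of my own (the step names, roles, and functions are made up, not Build Forge’s actual configuration or API) of gating individual build steps by role:

    # Illustrative only: some build steps are open to everyone on the
    # team, others only to people holding certain roles.
    ROLE_GRANTS = {
        "compile":         {"developer", "release-engineer"},
        "release-signing": {"release-engineer"},
    }

    def run_step(step_name, user_roles, action):
        """Run a build step only if the user holds a role granted for it."""
        allowed = ROLE_GRANTS.get(step_name, set())
        if not allowed & set(user_roles):
            raise PermissionError(f"{step_name}: not allowed for roles {user_roles}")
        return action()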

Enterprisey

As I look at more and more build systems, that last part is what often separates the open source build systems from the closed source, commercial ones. This is understandable, as the open source build systems often come from and are driven by purely developer desires. Developers don’t care too much about auditing and compliance beyond the “you broke the build” emails that CruiseControl and others send out. On the other hand, if you have compliance needs (around security, process, and even accounting), you’ll need a build stack that keeps track of and reports on who did what.
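
As a rough sketch of what “keeping track of who did what” means in practice (my own toy structure, not any product’s actual audit format), every step of the build gets an append-only record of who ran it and how it came out:

    # Toy audit trail: one JSON line per build step, so compliance folks
    # can later answer "who built what, when, and did it pass?"
    import json
    import time

    def audit(log_path, user, step, result):
        record = {"when": time.time(), "who": user, "step": step, "result": result}
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")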

Also, in the enterprisey vein, we talked about some customers’ use of Build Forge with outsourcers. The situation here is one in which the team is a mix of in-house and outsourced coders. In that case you may want to limit the access the outsourced developers have to certain parts of the build. Or, at the very least, keep track of who accessed what. Clearly, that concern bleeds into your version control system (like CVS, Subversion, or ClearCase), and I didn’t drill down enough on that point to get all the details.

Build Farms

Another differentiator for Build Forge was its agent-based method of creating build farms. While there are other, open source systems that do this, from what I heard at ApacheCon this year, most of them are not too easy to use. That said, I don’t know how “easy” Build Forge’s agents and the resulting build farms are to set up, but theoretically you can get rich support in figuring them out.
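
For flavor, here’s a toy Python sketch of the agent idea (the details are mine, not Build Forge’s): agents advertise what platform they run on, and a coordinator hands each job to a free agent that matches:

    # Hypothetical coordinator-side view of a build farm: each agent is a
    # machine that has registered itself with its platform and busy state.
    agents = [
        {"host": "build01", "platform": "linux",   "busy": False},
        {"host": "build02", "platform": "windows", "busy": False},
    ]

    def pick_agent(platform):
        """Find a free agent on the right platform, or None to queue the job."""
        for agent in agents:
            if agent["platform"] == platform and not agent["busy"]:
                agent["busy"] = True
                return agent
        return None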

Why would a build farm matter? For a few reasons:

  • Performance – doing “threaded builds” to speed up any given build, load balancing across the nodes in the build cluster, and providing high availability (of the fault tolerance variety, not the other one).
  • Native Builds – as needed, different parts of your build can be built on different platforms (see the sketch just after this list). For example, in systems management work, you’ll end up needing some Windows code eventually, and those DLLs don’t build on *nix boxes, brother.
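
Here’s the sketch mentioned above: a hypothetical, much simplified Python picture of what the farm buys you, with independent pieces of the build running concurrently, each on the platform it needs. The helper names are mine, not any product’s API.

    # Independent build targets dispatched in parallel; in a real farm,
    # build_on() would ship the work to an agent on the named platform.
    from concurrent.futures import ThreadPoolExecutor

    steps = [
        ("server.jar", "linux"),
        ("agent.dll",  "windows"),  # those DLLs really do want a Windows box
        ("cli.bin",    "solaris"),
    ]

    def build_on(target, platform):
        return f"{target} built on a {platform} node"

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda step: build_on(*step), steps))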

The Shame Avoidance System

Coté and the Build

One of the more intriguing (and differentiating for Build Forge?) results of using a build farm was the ability for individual developers to do test builds on the build farm before committing their code. While it’s fine to do a test build on your own box(es), the same ones you wrote the code on, every developer and team has experienced recurring cases of Works On My Box Syndrome:

That is, you do a build, run tests, and everything works. So you check in the code. Then the build breaks. “What?! Impossible!” you say, “…works on my box!”

So, by using “pre-commit builds” developers can take their new code and test it out on the official build farm, avoiding the shame of an unsightly case of Works On My Box.
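
The flow looks roughly like this (a Python sketch with made-up helper names; the point is the order of operations, not any particular tool’s commands or API):

    # Pre-commit build: try your uncommitted changes on the official farm
    # first, and only commit once the farm says they build cleanly.
    import subprocess

    def precommit_build(submit_to_farm):
        diff = subprocess.run(["svn", "diff"], capture_output=True, text=True).stdout
        result = submit_to_farm(diff)  # farm applies the patch and builds it
        if not result["passed"]:
            raise RuntimeError("Farm build failed -- don't commit yet")
        subprocess.run(["svn", "commit", "-m", "verified on the build farm"])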

RedMonk Advice

As always, I couldn’t help but make a few suggestions:

  • Given that Build Forge is a framework, it’s perfect for selling configuration and best practices around. That is, while some shops could truly benefit from a highly customized build, the reality is that the Maven theory of builds applies in most cases. The Maven theory of builds is much like the Rails theory of conventions and constraints: the framework chooses how to do a lot of things for you, and by doing them that way you’ll save time by avoiding all that low level stuff (there’s a toy sketch of the idea in the first code example below this list). This is not to say that Build Forge would enforce a limited number of ways to do a build, but rather that they’d provide several out of the box build environments and processes. This, of course, is something that IGS and partners could sell. More broadly, it’d be a great way to extract an Express line out of Build Forge.
  • Look for chances to integrate with and use Mylar. As you know, dear readers, I’m exuberant about Mylar. In the case of Build Forge, it’s not only the nascent ALM parts that look appealing. In the present, the context sharing seems applicable to fixing failed builds. For example, if a build failed, Build Forge could immediately create a Mylar context, create an associated task to fix the broken build, assign it to the person who broke it, and attach the context (the second sketch below this list walks through that flow). In this case, the developer(s) fixing the build could use Mylar to immediately set their code context to everything (and only everything) related to fixing the build.
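
On the first point, here’s a toy Python illustration of convention over configuration (the layout echoes Maven’s familiar defaults; the code itself is mine, not any product’s): the framework decides where things live unless a shop explicitly says otherwise.

    # Convention over configuration in miniature: sensible defaults, with
    # overrides only for the rare shop that truly needs them.
    DEFAULTS = {
        "sources":   "src/main/java",
        "tests":     "src/test/java",
        "artifacts": "target",
    }

    def build_config(overrides=None):
        config = dict(DEFAULTS)
        config.update(overrides or {})  # most teams never touch this
        return config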
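
And on the Mylar point, here’s a hypothetical sketch of that failed-build flow. The objects and method names are all mine (neither Build Forge nor Mylar exposes exactly this), but it shows the handoff I have in mind:

    # When a build breaks: open a task for the person who broke it and
    # attach a code context scoped to the changes that broke the build.
    def on_build_failure(build, task_tracker, context_store):
        culprit = build["last_committer"]
        task = task_tracker.create(
            title="Fix broken build " + build["id"],
            assignee=culprit,
        )
        context = context_store.create(files=build["changed_files"])
        task_tracker.attach(task, context)  # fixer sees only the relevant code
        return task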

Notes

Here’s the mindmap of my notes:

20070104-Buildforge

Disclaimer: IBM and Eclipse are clients.

Categories: Companies, Enterprise Software, Programming.

6 Responses

  1. I’m glad to see some discussion and real analysis of build systems these days. In my many years at Catalyst selling Openmake, there has been no discussion at all around builds.

    I agree with your comment that developers do not care much about compliance. But there are some overlapping needs between developers and compliance – mainly what went into the build. Developers and management alike want to know if the source code they are managing matches the executables running in production. Also, what versions of 3rd party libraries, such as SOA components, were used in the build. This is in essence compliance. And you can’t get that kind of dependency listing from simply doing a check-out prior to the build or interrogating the build directory for files and calling it a bill of materials. This requires really managing the compile process itself and watching what the compiler/linker is using and where it came from. And as you know, this is normally defined by the script itself – not what was checked out of the SCM tool or located in the build directory.

    So then we get to the 800 lb gorilla in the room – the issue with ad hoc scripting. It is the scripts themselves that are doing the build, not the ALM scheduling tools that called the script. Which then takes me to products like Maven. Maven provides a higher level of reusability, minimizing build breaks better than a simple Ant/XML script. After 12 years of providing reuse within the build process, and minimizing ad hoc scripting, we Openmake people are excited to see developers beginning to understand that reuse within the build process is the first step towards improving the build system. True dependency management, reuse, and compile management ultimately lead to more efficient builds (incremental builds, for example), fewer broken builds, and a truly agile development process that meets the compliance levels required by management.

  2. Thanks for that detailed comment, Tracy. There are certainly some meaty ideas to chew on there 😉

Continuing the Discussion

  1. […] People Over Process » Blog Archive » Build Forge Briefing (tags: alm scm ci ibm rational) […]

  2. […] PeopleOverProcess.com: Build Forge Briefing Looks like an interesting build related product (tags: buildsoftware development) […]

  3. […] morning I talked with the OpenMake folks, which was nice after a long game of meeting scheduling. Tracy Ragan was kind enough to comment at length on a recent BuildForge briefing note, so it was a nice blog-to-briefing […]