Tag: Agile

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It’s hard to justify the extra time and resources needed to build automated tests that seem to mimic what developers are already doing. The case is especially difficult to make early in development, when CI failures are common and the pipeline needs constant work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Because a robust continuous integration pipeline is vital during development: it prevents broken code from being deployed, and it surfaces issues early so that bugs are removed before they reach production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automatically provisioning multiple machines for integration tests. We evaluated a variety of tools, including Vagrant, SaltStack, Chef, and Puppet, and found that the automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized we would spend more time maintaining that portion of the pipeline than reaping its benefits, so we stood up the VMs manually and replaced provisioning with a stage that cleans the machines between runs.
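
To make this concrete, here is a minimal sketch of what such a cleanup stage might look like, written as a GitLab CI job (we also used GitLab runners, as described below). The job layout and the clean-vms.sh script are hypothetical stand-ins, not our actual configuration:

    # Hypothetical sketch: reset the manually provisioned VMs between
    # runs instead of re-provisioning them from scratch.
    stages:
      - clean

    clean_vms:
      stage: clean
      script:
        # clean-vms.sh stands in for whatever reset steps the VMs need,
        # such as removing old artifacts, stale test data, and leftover
        # processes from the previous run.
        - ./scripts/clean-vms.sh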

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment, we began using GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, so we brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab Pages to deploy our documentation.
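
Pieced together, the pipeline ended up with a shape roughly like the GitLab CI sketch below. The job and script names are illustrative assumptions rather than our actual configuration; the pages job, though, is the conventional name GitLab uses to publish a public/ directory to GitLab Pages:

    # Illustrative stage layout only; job and script names are assumed.
    stages:
      - clean              # cleanup job shown in the earlier sketch
      - build
      - unit-test
      - integration-test
      - deploy

    build:
      stage: build
      script:
        - ./scripts/build.sh             # compile the solution

    unit_tests:
      stage: unit-test
      script:
        - ./scripts/run-unit-tests.sh

    integration_tests:
      stage: integration-test
      script:
        - ./scripts/run-e2e-tests.sh     # end-to-end tests on the cleaned VMs

    pages:
      stage: deploy
      script:
        - ./scripts/build-docs.sh        # generate the documentation site
      artifacts:
        paths:
          - public                       # GitLab Pages serves this directory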

A shot of our Jenkins pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives, which immediately undermine the benefits of continuous integration because the team stops respecting the errors the system throws. At one point, our disregard for failures on the CI pipeline prevented us from noticing a significant compatibility error in our code. Still, each failure was an opportunity for us to understand how our code ran on multiple platforms and to explore the delta between development and production environments, and it ultimately made our solution more robust. From a productivity standpoint the failures were costly, but the value of hardening our solution greatly outweighed the time they consumed.

A capture of one of our GitLab pipelines.

You would be mistaken if you thought we’ve stopped working on our pipeline. We plan to continue growing our CI, expanding our integration tests to include performance benchmarks and to cover the multiple projects that originated in the Orbital Bus development. These additional steps and tests will be developed alongside our new features so that they integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound

Automating Builds with TFS Build Definitions

My latest project utilized automated builds to speed up the deployment life cycle. We run an agile process and wanted to leverage continuous integration to help the product owners see new features as soon as possible. What follows is a quick walkthrough of the steps to get a basic build definition set up using TFS and Visual Studio.

Before we get started we need to ensure we have installed some prerequisites. Make sure you have the following:

  1. TFS installed with a Team Project and your project source code checked in.
  2. Build Controller installed with a Build Agent. More information on this can be found here.
  3. Ensure all software required for build is installed on the same machine as the Build Agent, including Visual Studio.
  4. Visual Studio must also be installed on your local development environment.
  5. A network share location to place the code drops (i.e., the built code).
  6. A domain is not required but is highly recommended for secure access to network locations.

The example that follows uses TFS 2012 and Visual Studio 2012; however, the process will most likely apply to later versions as well (I have seen no differences using either the TFS 2013 preview or Visual Studio 2013).

To get started, I first need to create a new build definition, which will define how a build is carried out. To create one, I open Visual Studio, navigate to Team Explorer, and then to the Builds section.

In the figure below, I don’t have any build definitions for this project. However, I can change that by selecting “New Build Definition” from the Builds menu.

Figure 1 – Team Explorer – Builds Tab

A new menu will appear; this is where we configure the build definition. First, in the General tab, give your build definition a name and an optional description. I’m going to call mine Blog Build Definition CI, with the description “Sample build definition for my blog post”. A “Queue Processing” option must be selected as well; since we are setting up continuous integration, select “Enabled”.

Figure 2 – New Build Definition – General Tab

Next we need to select the trigger for the build definition, which can be found in the Trigger tab. Here we want to select the best option for a continuous integration setup. Depending on your team or situation, you may opt for “Rolling builds” or “Gated Check-in”, but “Continuous Integration” is likely adequate for most environments.

Figure 3 – New Build Definition – Trigger Tab

Then we move on to Source Settings. Source Settings can be a little tricky, but it is straightforward once you understand what is being asked. Source Control Folder selects the source the build will act on; you can see that I selected my “BlogBuildDefinitionProject” folder. Build Agent Folder can be a bit confusing. During the build, the controller grabs the source code files from TFS and sends them to the build agent; the Build Agent Folder is the location on the build agent’s server where those files are placed. “$(SourceDir)” is a variable TFS uses as the starting point for source drops. The article List of Variables Like $(SourceDir) gives a good explanation of the variable:

$(SourceDir) – Expands to $(BuildDir)\Sources by default

The directory “Sources” is not hard-coded and may be changed by modifying the TfsBuildService.exe.config file on the build agent. If you open that file, there will be an application setting called “SourcesSubDirectory”. If you need a shorter path, you may change this key to something like “s” instead of “Sources”. If you made this change, the $(SourceDir) variable would expand to $(BuildDir)\s.
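
For reference, here is a minimal sketch of what that edit might look like inside the appSettings section of TfsBuildService.exe.config on the build agent, based on the description above:

    <!-- TfsBuildService.exe.config (on the build agent) -->
    <configuration>
      <appSettings>
        <!-- Default value is "Sources"; shortening it to "s" makes
             $(SourceDir) expand to $(BuildDir)\s -->
        <add key="SourcesSubDirectory" value="s" />
      </appSettings>
    </configuration>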

For the purposes of this example I only have one solution to build, so keeping the default is fine. However, if you want to build multiple solutions, each location will need its own source directory. Keep all the source locations prefixed with “$(SourceDir)” and append a subfolder such as “/project1” or, more likely, the name of each solution.
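
As a purely hypothetical illustration, a multi-solution mapping in the Source Settings grid might look like this (the team project and solution names are made up):

    Source Control Folder            Build Agent Folder
    $/MyTeamProject/Solution1        $(SourceDir)/Solution1
    $/MyTeamProject/Solution2        $(SourceDir)/Solution2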

Figure 4 – New Build Definition – Source Settings

Build Defaults is next, and it is a fairly simple screen. We need to select the Build Controller we are going to use. I defined my controller very quickly (see #2 in the list of prerequisites at the top); you can see it is named “TeamFoundation” and has no description. Most likely you will only have one controller. Under “Staging Location” you will only have two options, unless you are creating a build definition for Team Foundation Service, where a third option is presented. That third option is out of scope for this article, so we will focus on the two offered by full-featured Team Foundation Server installations. The first option is for build processes that, for whatever reason, don’t need to copy files to a drop location. The second is the standard option and requires the network address of the share you created earlier (for example, \\buildserver\Drops; see #5 in the list of prerequisites at the top). Remember that the Build Agent configured for use with TFS will need full access to this folder.

Figure 5 – New Build Definition – Build Defaults

The Process section is the meat of your build definition. This is where you dictate the build steps by choosing a build process template. There is a lot that could be explained here, especially about creating build templates; custom build processes are something I want to cover in a future post, so I will skip over them for now. The default template created with every new Team Project is more than suitable for a basic continuous integration deployment. That default build process template is pre-selected for you, though you can select “New…” to start creating your own. The most important part is to select the Items to Build.

For this example (see the screenshot below) I selected the solution of my “BlogBuildDefinitionProject”. The defaults for Automated Tests are normally adequate to get your tests running as part of the server build; however, you can also define the string used to find your test projects. It’s important to realize that with the default (typically a file specification like **\*test*.dll), any .dll file with the word “test” in its name will be searched for test classes and methods, and any tests found will be run. If your test project names don’t already contain the word “test”, you will need either to change this string to something common to your test projects’ names or to rename your test projects.

Figure 6 – New Build Definition – Process

Retention Policy isn’t very important: it simply defines how long the output of certain types of builds should be kept. Depending on how a build is triggered, you can keep its results for a specified amount of time. Normally the defaults are sufficient for any project you have on the go.

Figure 7 – New Build Definition – Retention Policy

Make sure you save; upon saving successfully, you will see in your Team Explorer Builds tab that you now have a build definition present.

Figure 8 – Team Explorer – Builds with Definition

Next time you check in your code you will see a build under “My Builds”. Give it a few minutes and you will get a log of the events that transpired.

Figure 9 – Team Explorer – Builds New Build

Building up the perfect deployment solution can take time; however, once it’s done, you will be able to create new builds in seconds. It really is great and has saved many hours getting products to launch.

If you have any questions, or want to chat more about this, contact us today – we’d love to hear from you!

  • Dan McCrady

Agile Governance in SOA

I know, at first glance it looks like I’ve crammed three loosely defined and oft-debated buzzwords into the title, but it got your attention, didn’t it? So if you will bear with me for a few minutes, I will explain how Agile can be applied to a much more abstract activity: building an SOA Governance Framework.

SOA Governance is a fluid concept, with many differing, complementary, and overlapping points of view. Is it a framework focused on lifecycle management of service assets? How does it integrate with existing governance frameworks around other IT components such as infrastructure, databases, and packaged applications? Or should it focus on the monitoring and management of business capabilities? The discussion could go on forever.

Because of this fluidity, I’ve seen two major problems come up again and again with clients when it comes to creating an SOA Governance Framework:

  1. The client has invested in a shiny new SOA Governance Framework complete with templates, registries and repositories, architectural review boards, the works. It takes a year to roll out. And the moment it’s done, people start finding scenarios that don’t fit within the framework, apply it inconsistently, or refuse to use it altogether.
  2. Various architects within the client organization can’t agree on the scope, focus, or business case around such an ambiguous concept as SOA Governance, so it keeps on getting pushed off.

The first problem occurs because organizations treat governance like a traditional application build project: define it all up front and build it once. That approach can only lead to an SOA Governance Framework, or any governance framework for that matter, that is out of date before it is even delivered. Governance needs to evolve constantly to adapt to changes in an organization. Is the organization moving from being IT focused to being business focused? Is there an increased uptake in COTS implementations over custom development? Is the organization moving to an offshore support model? All of these changes will affect how services should be governed. In short, the way the SOA Governance Framework is developed and managed over time needs to be flexible and adaptable to change. Sounds like Agile, doesn’t it?

The same mentality leads to the second problem: the idea that the SOA Governance Framework has to be perfect and all-encompassing the first time around. The fact of the matter is that, instead of being stuck in analysis paralysis, we are always better off having something rather than nothing. Even an imperfectly managed SOA landscape is better than a completely unmanaged one. One of the key tenets of SOA is to enable continuous improvement of business capabilities and efficiency by building them out in loosely coupled modules. So why not implement an SOA Governance Framework the same way, starting small and adding components as we learn more about the landscape and gain experience working with it? Isn’t it better to solve a real problem we have encountered than to build a solution in anticipation of one? Once again, sounds a bit like Agile, doesn’t it?

That was a long preamble. So here’s our approach:

  1. Get the organization to agree on a core set of SOA Governance deliverables, creating a baseline of the framework. In Agile terms, this list is our backlog and includes:
    – a Service Definition template
    – a Service Contract template
    – a Taxonomy
    – a Service Registry and Repository
    – and some kind of org chart, at least from a support perspective
  2. For each deliverable defined in the backlog, analyze whether something similar already exists in the organization or needs to be created. Group similar or related deliverables together (e.g., the Service Definition template with the Service Contract template). Implement each group and deliver it to the organization. These are our sprints.
  3. While operating our SOA landscape, constantly be on the lookout for areas to improve. Did we have any services that didn’t fit the definition template? Did we have a project that identified inefficiencies within the architectural review process? Is the service inventory becoming too cumbersome to manage? Are people adhering to the change management process and if not, why? Was there a re-org that requires us to rethink the key roles and responsibilities? Frame these into deliverables and put them into the backlog.

The key to success here is continuously evaluating the applicability and appropriateness of the framework and keeping it aligned with the rest of the organization. If the users of the framework start to feel like they have to shoehorn something into a structure that no longer works for them, they will cease to use it. So don’t fall into that trap; let’s apply some agility to our SOA Governance initiatives.

  • Shan Gu