Tag: Testing

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It's hard to justify the extra time and resources needed to build automated tests that mimic what developers are already doing by hand. This advocacy can be especially difficult early in development, when CI failures are common and the pipeline needs a lot of work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Yet a robust continuous integration pipeline is vital during development: it protects against deploying broken code and surfaces issues so that bugs are removed before production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automated provisioning of multiple machines for integration tests. We evaluated a variety of tools, including Vagrant, SaltStack, Chef, and Puppet. What we found was that this automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized we would spend more time maintaining that portion of the pipeline than reaping its benefits, so we stood the VMs up manually and replaced provisioning with a stage that cleans the machines between runs.
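In practice, a cleanup stage like the one we swapped in can amount to little more than a script that resets each machine's working state between runs. A minimal sketch, assuming a simple directory layout (the directory names here are placeholders, not the actual Orbital Bus setup):

```shell
#!/bin/sh
# Minimal sketch of a cleanup stage that replaces automated provisioning.
# Directory names are placeholders, not the actual Orbital Bus layout.
set -eu

WORK_DIRS="build artifacts logs"

# Remove leftover output so each CI run starts from a known-clean state.
for dir in $WORK_DIRS; do
    rm -rf "$dir"
    mkdir -p "$dir"
done

echo "workspace reset"
```

A script like this runs in seconds and needs essentially no maintenance, which is exactly the trade-off that full provisioning automation failed to offer us.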

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment we began using the GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, and brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab pages to deploy our documentation.
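The stages we accumulated map naturally onto a GitLab CI configuration. A hedged sketch of what such a .gitlab-ci.yml might look like (job names, commands, and paths are illustrative, not our actual configuration):

```yaml
stages:
  - build
  - unit-test
  - integration-test
  - pages

build:
  stage: build
  script:
    - dotnet build            # placeholder build command

unit-tests:
  stage: unit-test
  script:
    - dotnet test Tests/      # placeholder test project path

integration-tests:
  stage: integration-test
  script:
    - ./run-integration.sh    # placeholder end-to-end test driver

# GitLab Pages requires a job named "pages" that publishes a "public" artifact.
pages:
  stage: pages
  script:
    - ./build-docs.sh         # placeholder docs generator
  artifacts:
    paths:
      - public
  only:
    - master
```

Each job here corresponds to one of the steps we added over time, which is the point: the file can grow one stage at a time as each addition proves its worth.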

A shot of our Jenkins Continuous Integration pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives. False negatives quickly erode the benefits of continuous integration, because the team stops respecting the errors the system throws. At one point, our disregard for failures on the CI pipeline kept us from noticing a significant compatibility error in our code. Once we committed to treating every failure as real, each one became an opportunity to understand how our code ran on multiple platforms and to explore the delta between our development and production environments, ultimately making our solution more robust. From the perspective of productivity it was costly, but the value of hardening our solution greatly outweighed the time spent.

A capture of one of our Continuous Integration GitLab pipelines.

You would be mistaken if you thought we've stopped working on our pipeline. We plan to continue growing our CI, expanding our integration tests to include performance benchmarks and to cover the multiple projects that originated in Orbital Bus development. These additional steps and tests will be developed alongside our new features so that they integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound

Testing an ESB with Selenium!

Manual integration testing is the first “test plan” in application development. This strategy quickly becomes a pain, so introducing automation early is important in any project. When working on an enterprise service bus, automated testing is a must-have. Early in the development of Orbital Bus we began implementing automated unit tests and even automated integration tests in our continuous integration pipeline. Scripting such tests is a common approach for similar ESB projects. Why are we writing an article about Selenium, then? Eventually we decided that console applications weren't enough. It's very common to have web applications that call multiple microservices via an ESB, so we wanted to build one and test it out. How would we automate those tests? We chose Selenium.


Selenium is an open-source testing solution for web apps. It helps you automate browser tasks to make sure your web application is working as you expect. Selenium comprises multiple tools:

  • Selenium 2 (WebDriver): Supplies a well-designed object-oriented API that provides improved support for modern, advanced web-app testing problems.
  • Selenium 1 (Remote Control): Provides some features that may not be available in Selenium 2 for a while, including support for almost every browser and bindings for several additional languages (e.g., Java, JavaScript, Ruby, PHP, Python, Perl, and C#).
  • Selenium IDE: Helps with rapid prototyping of tests with Selenium for experienced developers or beginner programmers who are looking to learn test automation.
  • Selenium-Grid: Allows for tests to run on different machines and in different browsers in parallel.

For Orbital Bus, we used Selenium 2, since it supports nine different drivers (including mobile OSes). That ensures that going forward we can adapt our tests however we need.

We know Orbital Bus will be an essential component in a web application, so we needed to test its ability to handle a lot of requests from a browser-based application. By stressing Orbital Bus with a lot of requests, we could verify that our message broker receives and delivers the messages properly and that the Dispatcher and Receiver handle and translate the messages as expected.

In addition to Selenium 2, we needed a web driver to conduct the testing. We chose the PhantomJS driver. This driver is “headless”, meaning it runs without a visible browser UI while still letting us manipulate all the elements on the page. Keeping in mind that we wanted to stress test the system, we weren't concerned with complicated UI scenarios. We used Selenium just to fire requests and make sure a UI element changed to show receipt of the response. Other test harnesses for browsers often open extra pages to capture the screen; that kind of interaction was beyond our scope. We wanted to focus simply on messages getting through. The following code section is an example of our tests.

public void RequestsForYahooWeather()
{
    // Load the list of cities to query from the bundled test data.
    var countryJsonPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Configs", "countries.json");
    var jsonString = File.ReadAllText(countryJsonPath);
    List<Country> countries = JsonConvert.DeserializeObject<List<Country>>(jsonString);

    foreach (Country country in countries)
    {
        using (var driver = new PhantomJSDriver())
        {
            driver.Navigate().GoToUrl(testPageUrl); // testPageUrl: address of the web app under test, defined elsewhere
            WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement wElement = driver.FindElement(By.Id("CityName"));
            IWebElement wElementButton = driver.FindElement(By.Id("getWeatherInfo"));
            wElement.SendKeys(country.Name); // Name: the city name field on our Country POCO
            wElementButton.Click();
            // Wait until the temperature field is populated with a real value.
            Assert.NotStrictEqual("-", wait.Until(drv => drv.FindElement(By.Id("Temperature")).GetAttribute("value")));
        }
    }
}


In our experience, Selenium was easy to implement and quick at performing the necessary tests. Our tests passed successfully, demonstrating that Orbital Bus can work with web applications and performs no worse there than it does with console apps. Selenium not only let us confirm that Orbital Bus was working as intended with a web application, but will also afford us the flexibility and performance needed to grow these tests as we expand our use cases.


  • Maria Reyes Freaner