What’s Your Organization’s Rocket Fuel?

The conversations I regularly have with clients, other executives, and my mentors are usually around “what’s your org’s vision?” or “what do you want your org to do”. Foci has gone through a tremendous period of growth and change over the last 12 months and the answers to those questions seem to be ever changing. This has led me and the rest of the management team to have some very interesting discussions around how we define Foci and our purpose.

The “what” and the “how” don’t matter

Photo by John Baker on Unsplash

Regardless of how well thought out your vision or strategy is, the reality is that $%*@ happens. Your clients can change their mind, you may lose some key contracts, the market will evolve and change, competitors will emerge, or you may not be able to get that unicorn architect/developer to run that world-changing product you want to build. And every time you have to make a pivot to adjust to those changes, it can be a very painful experience, both for you and your team.

The identity of an organization is very important, especially if you have a strong team culture like us. Team members imprint themselves onto that identity and subconsciously use it as a reference point for their everyday work. We started life as an Oracle Fusion Middleware company, then became an Architecture and Integration company, and now we’re doing more Cloud-Native custom dev with a broader range of system integration and program management services. Each of those shifts in focus created quite a bit of disruption in the team. People asked “Wait what? I thought we were doing the other thing? What does that mean to our existing projects? Will we stop doing the other thing altogether?” These were all fair questions, but after working through it all, we noticed that none of it actually impacted our team culture or our core behaviours.

What we took away from this were 2 things:

  1. What you do as a firm, or how you do it, has very little alignment with your culture.
  2. Our people are very emotionally connected to Foci’s identity and feel any change in that identity keenly.

It’s all about the “why”

This naturally led us to look at why our folks joined Foci and what made them excited about coming into work each day. Turns out no one was really driven by the prospect of writing thousands of lines of C# code, installing and configuring Oracle SOA Suite, or creating a stakeholder engagement plan. Sure, those things interested people, but they weren’t really core motivators.

We ended up landing on 2 aspects of motivation that were the most important:

  1. What brings you the most satisfaction (e.g., solving a problem, having an impact, getting recognition, seeing something you’ve built be used)
  2. What is your metric of value (e.g., complexity of the problem, number of people impacted, transaction volume, financial savings)

Problems are our rocket fuel

We always joked about having a generic tagline like “we solve problems” (it’s on our website) because we were constantly evolving the business. Appropriately that turned out to be the answer. What we realized is that our entire team and our hiring processes all coalesced around the core desire to solve complex and interesting problems. We weren’t motivated by how many people were using an app that we had developed or whether the systems we helped our clients build were processing 100 or 1,000,000 transactions a day.

The thing that gave us all a real sense of accomplishment and gave us that little shot of dopamine we humans naturally crave was when we were able to solve a problem for our client. The greater the complexity, the greater the satisfaction. As long as we had a healthy supply of complex and interesting problems to feed our team, we could go anywhere.

The destination and the things you do to get there will always change over time. The things that motivate and drive you to move forward are more constant and core to your being. Defining your organization based on the goal you want to achieve or the tasks that you do makes every pivot feel like an identity crisis. Putting in the time to identify the rocket fuel that constantly propels your team forward creates a solid corporate identity to anchor against regardless of the path your organization decides to take. Interesting problems are our rocket fuel. As long as we as a management team ensure that our team has a steady flow of interesting problems to solve, we can have every confidence in Foci’s ability to achieve any goal that we set for ourselves. Until we change our mind, of course.

  • Shan Gu

The Digital Transformation Road Trip

Too often, we see articles shared preaching the importance of organizations adopting a digital strategy without explaining what that really means. To remove some of that confusion, I like comparing an organization’s digital transformation with something everyone knows – a road trip!

Let’s start with some truth – a digital strategy can be enormously beneficial to a department or organization and its ability to deliver value to customers. BUT, like a good road trip, becoming a digital organization isn’t an overnight journey – and it doesn’t always follow a set path. It requires planning, understanding, commitment, and the ability to embrace the detours along the way.

Oh the places you’ll go!

Before setting out on a trip, you need to have a destination in mind.  Similarly, executives need to agree on what ‘digital’ means for their organization. What problems are you really trying to solve?  Once you have these identified, your organization can begin to evaluate the possible ways of getting there.  

Loading up the car

Digital transformation is as much a business transformation as an IT one. Digital processes are about re-examining your business from top to bottom in order to have the right information, available at the right time, to make the right decisions. Cooperation, communication, and most importantly – organization-wide understanding is key to making sure this happens.

It’s important to start with the problem without focusing on what the solution might end up being. Challenge pre-existing assumptions and ideas about who should be doing what, when, and how. Break down your tools and processes so that you can rebuild them in a more efficient, modern way.

Digital organizations have governance and management frameworks that are very different from those of paper-based organizations. Keeping everyone involved ensures that you have multiple sets of eyes on the road. Making sure they know why this journey is happening means they’re looking out for the right kind of obstacles and opportunities.

Take advantage of the bumps along the way

A digital organization embraces speed, communication, learning, and also, failure. It’s less important to have a map setting the route in stone from  start to finish than it is to be aware of what’s going on around you. Being aware of your surroundings lets you be prepared to change direction when a better path becomes available (or to avoid that head-on collision up ahead)! A digital organization uses this awareness to stay relevant and ahead of the curve. Approaches and methods like incremental development, democratized governance, Test Driven Development, and Agile are all designed to support teams in this way.

This can be a big change in thinking, especially for larger, more traditional organizations. Understanding which tools are available, and when to leverage them, can significantly improve your chances of finding success in your transformation.

Embrace being off the beaten path

So – before embarking on your digital journey, make sure you understand where it is you want your organization to go, focus on the journey, and be prepared to embrace being off the beaten path. You might not take the path you first imagined, but digital transformation is about the journey, not the destination.

  • Kevin Steele

Why .NET loves Linux

This is an update to the post I made a while ago titled “Why .Net Doesn’t need to be Expensive“.  A lot has changed since then: Visual Studio Code hadn’t been released, .NET Core was still called vNext, and .NET hadn’t yet gone through its open-source transformation.  These additions to the .NET ecosystem have changed the way .NET developers work day-to-day, and the ability to deploy .NET on Linux is quickly becoming a mandatory requirement for IT shops.

Microsoft has been on the Linux love train for quite some time now, and we are slowly starting to see the fruits of this transformation.  Just recently the Windows Subsystem for Linux was added to Windows 10 without the need to turn on developer mode, so developers now have a native Linux bash shell that can be enabled through the Windows Store.  The new .NET Core project templates in Visual Studio include Docker support with just the click of a checkbox.  Azure allows you to host .NET Core projects on Linux, and has moved to allow container orchestration using technologies like Azure Container Service and, soon to come, Azure AKS (its managed Kubernetes offering).  This change is also reaching the open source community: most large projects have either ported their code to .NET Standard or are in the process of converting.


Why so much Love?

Plain and simple: moving custom code to the cloud means moving to Linux.  All of the cloud technologies coming out have Linux as a common player.  AWS, GCP, OCP, Azure, and even smaller players like DigitalOcean all provide Linux as the OS.  If an IT organization can’t migrate its .NET custom code to Linux, it is dramatically limiting its choices for getting to the cloud.  If you aren’t going with Linux, you only have two real choices:

1)  Find a Windows Server VM in the cloud and deploy to IIS.  

Technically yes, you are moving to the cloud, but are you really gaining any benefits?  Your operations team still needs to monitor, maintain, and patch this VM just as if it were in your private data centre.  You are also quickly locking yourself into the provider, since exporting the VM to move to another provider will be difficult and will require downtime during the transition.

2)  Use Azure PaaS Offerings like Web App Services.  

Azure is still your friend here.  They will take your web application code, slightly modified to be cloud-ready, and host it for you.  The Web App Services offering is really good stuff: it comes with free auto-scaling, monitoring, and guaranteed availability, and they even take care of patching and maintaining the infrastructure.  The downside is that until you have migrated that application to Linux you are tied to Azure.  No other cloud provider is looking at a way to host non-Core .NET web sites, so if Azure changes the pricing model, you will need to change with it.


What does Linux get you?

Linux buys you true portability of your applications. The most common way to get there is to write your applications as 12-factor applications and use Docker to wrap each application and prepare it for deployment.  If you follow this approach, then pretty much any platform is open for you to deploy your applications.  Microsoft is currently working to create Windows Server Docker containers like microsoft/nanoserver, but the licensing and deployment constraints are still unclear; it appears that you can deploy these images only on a licensed Windows Server 2016 system.  This restriction ties your application tightly to Windows systems and reduces your deployment options significantly.
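To make this concrete, packaging a .NET Core application for Linux can be as small as a multi-stage Dockerfile like the sketch below. This is an illustration only: the image tags and the MyApp project name are placeholders, not part of any specific project.

```dockerfile
# Build stage: restore and publish using the .NET Core SDK image (tag is illustrative)
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /app

# Runtime stage: ship only the published output on the smaller Linux runtime image
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Because the final image is a plain Linux container, the same artifact can be deployed to any provider that runs Docker.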


More investment for .NET Developers

A little while ago I was talking to a group about how the move to Linux will be a big shift for .NET developers. Normally Microsoft would have a major release and developers could focus for a year or so to wrap their heads around it.  When the TPL was released, Async Await was the big player, and bloggers wrote endless articles on how to leverage this feature to introduce multi-threading into applications.  That update was all that .NET developers needed to focus on.  The next few years are changing a lot more than Async Await: a new operating system in Linux, arguably a new framework in .NET Core, Docker containers, and container orchestrators like Kubernetes, all while building strong DevOps capabilities.  The future is bright for .NET, but the time required to learn all the advantages is long.  I plan to keep our developers moving in this direction, since it is the brightest path forward for custom software development in general, including the .NET ecosystem.


  • Dan McCrady

Using JavaScript and JSON as a Common Language in Orbital Bus

Large enterprises usually have many programming languages across their departments. These departments, often located in different cities, will build teams out of what they see as the best-available local resources. It’s fairly common to find large-scale enterprise or government groups that have applications written in .NET and Java, never mind the plethora of other languages and flavours thereof. This technological mishmash is a major challenge to any sort of enterprise service bus; one that Orbital Bus is trying to overcome.

In creating Orbital Bus, we decided at the start that developers shouldn’t have to learn any new languages to implement our solution. The learning curve had to be minimal to ensure widespread adoption. We were able to deliver some of that goal by creating our Code Generation utility. This tool would allow us to take a single input and compile it into code usable by our ESB. However, the tool still needs input, so what were we to do?

Enter Javascript. We decided that by making the code generation input Javascript we would make it accessible to as many developers as possible with no extra work. No matter what language you develop in, you’ve probably had to work on some Javascript, whether to create visual effects or to load data with an Ajax call. We could implement Javascript with a high degree of confidence that users would be able to work with it without any sort of intimidating ramp. Javascript also provides a feature-rich environment that we don’t have to worry about maintaining. If developers want functionality that already exists in a library it’s minimal work for them to implement it. Along with Javascript, we were also able to rely on the JSON schema standard for modelling objects. We don’t have to worry about maintaining an API for describing models in our system. We simply have to point towards the standard we support and let the JSON schema community do the heavy lifting.
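For reference, the kind of input this relies on is a plain JSON Schema document. The schema below is a minimal illustrative example (the Customer model is hypothetical, not from Orbital Bus):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Customer",
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "email": { "type": "string" }
  },
  "required": ["name"]
}
```

From a schema like this, a code generation step can emit a matching C# class (here, a Customer with Name and Email properties) without any hand-written model code.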

What exactly are we doing with all this Javascript? I mentioned the use of schemas to define models. We use models to define the types which are expected by the receiver. We take in standard JSON schemas to create C# classes which are then deployed as part of a contract library with the receiver. This library is used by both the receiver and the dispatcher. (Check out our article about using MEF with our contract libraries.) The models defined in this schema are also the ones expected by our translation engine. The receiver node of Orbital Bus takes Javascript translation files which it executes in both directions, so developers can implement any translation they want as information passes through the receiver node. These translations are simple .js files with method calls, and we even support logging and business errors through integrated methods. Check out our documentation for more information on implementation. We also used JSON files for our configurations rather than XML to make sure that our points of contact with Orbital Bus are as unified as possible. As we grow Orbital Bus’ functionality we expect to grow its use of Javascript.
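As a rough sketch of what such a file can look like, here is a pair of translation functions. The translateRequest/translateResponse names and the field mappings are hypothetical, not Orbital Bus’ actual integrated method names; see the documentation for those.

```javascript
// Hypothetical translation script for a receiver node.
// translateRequest runs on the way in to the consumer,
// translateResponse on the way back out.

function translateRequest(message) {
    // Map the caller's field names onto the consumer's contract.
    return {
        customerName: message.name,
        customerEmail: (message.email || "").toLowerCase()
    };
}

function translateResponse(message) {
    // Map the consumer's response back to the caller's expected shape.
    return {
        id: message.customerId,
        status: message.result
    };
}
```

Because the file is ordinary Javascript, developers can pull in helper libraries or add validation logic here without touching the bus itself.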

The default Javascript translation template.

It was tough trying to think of the best way to support a polyglot development environment. Thankfully Javascript gave us a single point of entry we could use across many development environments. There’s still work we want to do with our Javascript implementation: we want to integrate libraries by default in our translations, allowing developers to use library calls without having to include them manually, and we want to add Javascript to our collection of connectors for the Orbital Bus. Thankfully, with a common input set out, Orbital Bus will be free to grow its implementations while continuing to support developers from a wide variety of backgrounds.

  • Joseph Pound

Dynamic Plugin Loading Using MEF

The Managed Extensibility Framework (MEF) is a library that enables software to discover and load libraries at runtime without hard-coded references. Microsoft included MEF in .NET Framework 4.0, and since then it has been commonly used for dependency resolution and inversion-of-control patterns.

Orbital Bus makes communication possible between different parties by sharing contracts and schemas. A receiver has a contract library that has all the information needed for a dispatcher to make proper synchronous and asynchronous calls all the way to an end consumer. The dispatcher downloads a receiver’s contract library and then uses it to construct calls with the right data schemas. It became very clear to us during development that a crucial requirement was for the dispatcher to be able to handle any downloaded contract library DLL and process it without code changes. This is where MEF comes into play: it lets us inject libraries, in this case the receiver’s contract libraries, at the start-up stage.

Once we chose MEF as our integration tool, we were able to start the Code Generation Project. This project is a convenient CLI tool that efficiently generates the contract libraries and plugins which are loaded by the receiver. These libraries are made available for download to any dispatcher on the mesh network. One challenge we encountered when the dispatcher downloads multiple contract libraries was how to distinguish between them. What if two contracts have similar operation names? How can the dispatcher tell what is the right operation to select from its composition container? We solved this by making sure that each generated contract library has a unique ServiceId that is exported as metadata within the library. This setting enables the dispatcher to filter out various operations based on their ServiceId:

    namespace ConsumerContractLibrary
    {
        [Export(typeof(IOperationDescription))] // MEF export so the part is discoverable
        [ExportMetadata("ServiceId", "ConsumerLibrary")]
        public class AddCustomerOperation : IOperationDescription { }
    }

When the receiver starts up, it pulls the plugins from its Plugins folder and loads the plugin DLLs and adapters into MEF’s CompositionContainer, a component used to manage the composition of parts. Those dependencies are injected into the receiver as it loads. In addition to handling messages destined for the consumer, the receiver also serves as a file server that waits for the dispatcher to download the contract library when needed.

    public PluginLoader(IConfigurationService config)
    {
        this.config = config;
        var container = this.BuildContainer(); // load the plugin DLLs and create the composition container
        var details = this.RegisterPlugins(container);
        this.BootStrapSubscriberDetails(details); // create the needed dependencies and bootstrap the given details
    }

After a dispatcher downloads the available contract library specifications into a composition container, it filters out and returns all the exported values in the container corresponding to the given ServiceId.

    public static IEnumerable<T> GetExportedValues<T>(this CompositionContainer container,
        Func<IDictionary<string, object>, bool> predicate)
    {
        var exportedValues = new List<T>();

        foreach (var part in container.Catalog.Parts)
        {
            foreach (var exportDef in part.ExportDefinitions)
            {
                if (exportDef.ContractName == typeof(T).FullName && predicate(exportDef.Metadata))
                {
                    // Instantiate the part and pull out the matching exported value.
                    exportedValues.Add((T)part.CreatePart().GetExportedValue(exportDef));
                }
            }
        }

        return exportedValues;
    }

where the predicate clause is effectively the filter we need for the ServiceId:

    metadata => metadata.ContainsKeyWithValue(METADATAKEY, serviceId)

After the filtering process, the dispatcher has all the contract library operations that are supported by the receiver.

MEF proved invaluable in solving the problem of runtime library integration and in enabling the plugin architecture. This implementation gives developers the flexibility to customize or update their contract libraries, service configurations, and translations without affecting other services on the bus. As our work continues, we plan to look closer at the issue of versioning in the dispatcher to keep its cache in sync with the receiver’s contract libraries, making Orbital Bus an even more agile messaging solution.

  • Dan McCrady

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It’s hard to justify the extra time and resources to build automated tests that mimic what developers are already doing by hand. This advocacy can be especially difficult early in development, when CI failures are common and the pipeline needs a lot of work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Because a robust continuous integration pipeline is vital during development: it protects against the deployment of broken code and surfaces issues so that bugs are removed before production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automated provisioning of multiple machines for integration tests. We looked at a variety of tools, including Vagrant, SaltStack, Chef, and Puppet. What we found is that this automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized that we would spend more time maintaining that portion of the pipeline than reaping the benefits, so we stood up the VMs manually and replaced provisioning with a stage to clean the machines between runs.

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment we began using the GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, and brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab pages to deploy our documentation.
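As an illustration of the shape this took, a staged pipeline like ours can be expressed in a .gitlab-ci.yml along these lines. The stage names and script paths below are placeholders, not our actual configuration:

```yaml
stages:
  - clean
  - build
  - test
  - integration
  - pages

clean-vms:
  stage: clean
  script: ./ci/clean-test-vms.sh    # reset the manually provisioned VMs between runs

build:
  stage: build
  script: ./ci/build.sh

unit-tests:
  stage: test
  script: ./ci/run-unit-tests.sh

integration-tests:
  stage: integration
  script: ./ci/run-end-to-end.sh    # drives dispatcher/receiver across the test VMs

pages:
  stage: pages
  script: ./ci/build-docs.sh
  artifacts:
    paths:
      - public
```

Each job maps to one of the steps described above, so a failure pinpoints exactly which stage of the pipeline broke.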

A shot of our Jenkins Continuous Integration pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives. False negatives immediately negate the benefits of continuous integration because the team stops respecting the errors being thrown by the system. At one point, our disregard for the failures on the CI pipeline prevented us from noticing a significant compatibility error in our code. Each failure was an opportunity for us to understand how our code was running on multiple platforms, to explore the delta between development and production environments, and ultimately made our solution more robust. From the perspective of productivity it was costly, but the value of hardening our solution greatly outweighed the time spent.

A capture of one of our Continuous Integration GitLab pipelines.

You would be mistaken if you thought we’ve stopped working on our pipeline. We have plans to continue to grow our CI, expanding our integration tests to include performance benchmarks and to work with the multiple projects which have originated in the Orbital Bus development. These additional steps and tests will be developed alongside our new features, so as to integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound

Securing the Orbital Bus

After getting familiar with the Orbital Bus Architecture, and how it solves the traditional Enterprise Service Bus (ESB) shortcomings, it was time for our development team to create a secure solution around the components and communication channels of our distributed solution.

The Challenge

The Orbital Bus makes exchanging information possible between various parties. To this effect, Orbital Bus has three components involving communication:

  • RabbitMQ is the message broker across its various components.
  • Orbital Bus service registration and discovery is built on top of Consul.
  • The receiver calls out to the consumer with a REST Adapter that is built using the RestSharp Client.

These communication components support TLS encryption and HTTP authentication. We also want to support additional authentication and message-protection mechanisms in the future. In order to implement these solutions, Orbital Bus needs to provide a way to save credentials, X.509 certificates, and other forms of tokens. To summarize, the challenges we encountered in developing Orbital Bus were:

  1. Provide a secure vault to store various types of credentials, certificates, and tokens.
  2. Make the security features optional so they can be implemented only when needed.

The Solution

While working on Orbital Bus it became obvious that a secure vault was needed to save sensitive information such as credentials, tokens, and certificates. Inspired by the Java KeyStore, Foci Solutions designed and developed a platform-agnostic C# Keystore solution that works on Windows or Linux. Foci’s Keystore is available as a NuGet package, and it also comes with a Keystore Manager CLI Tool to perform CRUD operations on the Keystore directly. Please visit the Keystore’s How To Guide for more details on how to use the Keystore and its manager.

The Keystore addresses the first security challenge. Your system requires a secure RabbitMQ client? Not a problem: you can have the credentials saved in the Keystore and use them whenever needed. Your Orbital Bus implementation requires a certificate for service discovery through Consul? The Keystore can encrypt and save the certificate to be used whenever needed. If you look closely at the Orbital Bus API Documentation, you will notice that there is a KeystoreService and a KeystoreRepository that make the integration with Foci’s Keystore seamless. The Keystore’s CRUD repository makes it available to any of the Orbital Bus components via the KeystoreService.

Now that the first security challenge has been addressed through the Keystore integration, let’s move on to the second challenge: how to make security available but optional? The first thought that comes to mind is to modify the Orbital Bus code, but on further consideration it becomes very clear that modifying code for each implementation’s security requirements is an expensive approach. Instead, we decided to integrate the security options into our configuration service to allow changes on the fly. This way, security options throughout the Orbital Bus solution can be toggled with minimal effort. You want to secure your dispatcher’s communication to the RabbitMQ client? Then all you need is to turn on a security flag, provide the RabbitMQ credentials, and let Orbital Bus’ configuration service take care of the rest.

How to Use the Keystore

Foci’s Keystore can accommodate various entry types like certificates, key pairs, username/password pairs, and tokens. Each entry in the Keystore has a unique Alias to keep entries organized. The Keystore can be configured to encrypt/decrypt its content using either the Current User or Local Machine data-protection scope. The Keystore is fully integrated for use by any component of Orbital Bus, like the dispatcher or receiver. You only need to initialize the Keystore with the Keystore Manager Tool and add any credentials or certificates your solution requires. For example, your implementation requires secure communication between the dispatcher and RabbitMQ using a username and password? All you need to do is create the Keystore using the Keystore Manager Tool and add a new entry for the required credentials with a unique Alias. What’s next? How do you retrieve the stored entries? What’s this Alias for? How do you use it? All this is explained in the next section.

How to Configure Security

The Orbital Bus approach favours configuration over customization for obvious reasons. In this section we will walk through how you can configure RabbitMQ, Consul, and the REST Adapter to be secure. Orbital Bus has a KeystoreService that sits in the root of the solution and is injected into the ConfigurationService class. The ConfigurationService is a powerful and flexible tool: it can be injected into any component, and it imports any set of configurations stored in a specified JSON file, mapped into their own configuration model. For example, the ConfigurationService is injected into the DispatcherRegistration in order to configure the dispatcher with settings including the RabbitMQ options for addresses, credentials, and certificates.

RabbitMQ Configuration

Both the dispatcher and receiver establish RabbitMQ buses that can be configured as secure. The following is a JSON configuration file for a dispatcher that has the RabbitMQ security enabled:

    {
      "BusHostIp": "localhost",
      "BusHostPort": 5672,
      "ContractLibraryPrefix": "",
      "ConsulConfiguration": {
        "HostIp": "localhost",
        "HostPort": 8500
      },
      "BaseConfiguration": {
        "RabbitMQConfiguration": {
          "BusHostIp": "localhost",
          "BusHostPort": 5672,
          "SslEnabled": "true",
          "Alias": "rabbitCredentials"
        }
      }
    }
You might notice that the property SslEnabled is set to true, and there is an Alias property with the value “rabbitCredentials”. This simple configuration allows Orbital Bus to enable secure communications with the RabbitMQ server. The Alias here is the unique name we assigned to the credentials entry saved in the Keystore using the Keystore Manager Tool. Securing RabbitMQ in Orbital Bus is as simple as that: save your credentials in the Keystore, and make sure you edit your configuration to point to the stored credentials’ Alias.

Consul Configuration

For Orbital Bus we implemented secure Consul connections with certificate authentication. Any REST client or request created to communicate with Consul should have an appended certificate for authentication, and in return Consul returns its own certificate to authenticate itself to the client. The following is a JSON configuration file for a dispatcher that has the Consul security enabled:

    {
      "BusHostIp": "localhost",
      "BusHostPort": 5672,
      "ContractLibraryPrefix": "",
      "ConsulConfiguration": {
        "HostIp": "localhost",
        "HostPort": 8500
      },
      "BaseConfiguration": {
        "ConsulConfiguration": {
          "HostIp": "localhost",
          "HostPort": 8500,
          "SslEnabledConsul": "true",
          "Alias": "consulcert"
        }
      }
    }

Here a similar approach to the RabbitMQ implementation is used. An entry with the Alias “consulcert” is referenced to retrieve the stored certificate, which is injected into the ConsulService when it’s initialized. The service then appends that certificate to requests.


REST Adapter Configuration

The REST Adapter follows a similar approach to enable and configure secure HTTP communications. The RestAdapterConfiguration class has a SecureConsumer flag to indicate whether security is enabled, and a ConsumerAlias property that holds the unique Alias of the credentials in the Keystore.
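The adapter’s configuration file isn’t shown in this post, but by analogy with the Dispatcher configuration above it might look something like the following (the “restCredentials” alias is an invented example):

```json
{
  "RestAdapterConfiguration": {
    "SecureConsumer": "true",
    "ConsumerAlias": "restCredentials"
  }
}
```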

Security is always a pressing concern, and the best solution is often not readily apparent. In building the Keystore, we sought to make a tool that could be used easily and repeatedly while remaining an integral part of Orbital Bus. We recommend checking out the How-To Guide and trying it out yourself.

  • Rabie Almatarneh

Testing an ESB with Selenium!

Manual integration testing is the first “test plan” in application development. This strategy quickly becomes a pain, so it’s important to introduce automation early in any project. When working on an enterprise service bus, automated testing is a must-have. Early in the development of Orbital Bus we began implementing automated unit tests and even automated integration tests in our continuous integration pipeline. Scripting such tests is a common approach for similar ESB projects. Why, then, are we writing an article about Selenium? Eventually we decided that console applications weren’t enough. It’s very common for web applications to call multiple microservices via an ESB, so we wanted to build one and test it out. How would we automate those tests? We chose Selenium.


Selenium is an open-source testing solution for web apps. Selenium helps you automate tasks to make sure your web application is working as you expect. Selenium has multiple tools:

  • Selenium 2 (WebDriver): Supplies a well-designed object-oriented API that provides improved support for modern, advanced web-app testing problems.
  • Selenium 1 (Remote Control): Provides some features that may not be available in Selenium 2 for a while, including support for almost every browser and support for several additional languages (e.g., Java, JavaScript, Ruby, PHP, Python, Perl, and C#).
  • Selenium IDE: Helps with rapid prototyping of tests with Selenium for experienced developers or beginner programmers who are looking to learn test automation.
  • Selenium-Grid: Allows for tests to run on different machines and in different browsers in parallel.

For Orbital Bus, we used Selenium 2 since it supports nine different drivers (including mobile OSs). That ensures we can adapt our tests going forward however we need.

We know Orbital Bus will be an essential component in web applications, so we needed to test its ability to handle a large number of requests from a browser-based application. By stressing Orbital Bus with many requests, we could verify that our message broker receives and delivers the messages properly and that the Dispatcher and Receiver handle and translate the messages as expected.

In addition to Selenium 2, we needed a web driver to conduct the testing. We chose the PhantomJS driver. This driver is “headless”, meaning it runs without a visible browser window while still letting us manipulate all the elements on the page. Keeping in mind that we wanted to stress test the system, we weren’t concerned with complicated UI scenarios. We used Selenium just to fire requests and make sure a UI element changed to show receipt of the response. Other test harnesses for browsers often open extra pages to capture the screen; that kind of interaction was beyond our scope. We wanted to focus on messages getting through. The following code is an example of our tests.

    public void RequestsForYahooWeather()
    {
        // Load the list of countries to request weather for.
        var countryJsonPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Configs", "countries.json");
        var jsonString = File.ReadAllText(countryJsonPath);
        List<Country> countries = JsonConvert.DeserializeObject<List<Country>>(jsonString);

        foreach (Country country in countries)
        {
            var driver = new PhantomJSDriver();
            WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement wElementCity = driver.FindElement(By.Id("CityName"));
            IWebElement wElementTemperature = driver.FindElement(By.Id("Temperature"));
            IWebElement wElementButton = driver.FindElement(By.Id("getWeatherInfo"));
            // (Navigation to the test page and input of the city name are
            // omitted in this excerpt.) The Temperature field starts as "-"
            // and is updated when the response arrives.
            Assert.NotStrictEqual("-", wait.Until(drv => drv.FindElement(By.Id("Temperature")).GetAttribute("value")));
            driver.Quit();
        }
    }



In our experience, Selenium was easy to implement and quick at performing the necessary tests. Our tests passed, demonstrating that Orbital Bus works with web applications with no negative impact compared to console apps. Selenium not only let us confirm Orbital Bus was working as intended with a web application, but it will also afford us the flexibility and performance needed to grow these tests as we expand our use cases.


  • Maria Reyes Freaner

Getting Started with Orbital Bus

You’ve heard people talk about enterprise service buses and you think it’s time you learned how to use one. You read an awesome blog post about this new thing called Orbital Bus and you think it would be a good project to play around with. Where should you start? Let’s start here.

Understanding the Architecture

I’m sure you’ve checked out our project README, but just in case you need a refresher here’s a quick overview of how Orbital Bus works.
Everything starts with the Producer and the Consumer. The Producer produces calls into the system. These calls can be synchronous requests or asynchronous fire-and-forget messages. What’s important is that the Producer is what initiates the action. The Consumer consumes messages off the queue. Both the Producer and Consumer are external to Orbital Bus: they might be third-party services, COTS products, or custom applications built by your developers.

The Orbital Connector is a library the Producer uses to get messages into the system. We have a whole project dedicated to connectors. The Connector uses RabbitMQ to pass messages to the Dispatcher. The Dispatcher listens for incoming messages, finds services via Consul, and sends messages to the Receiver via its queue. Receivers do the heavy lifting: they load custom libraries, transform messages, and use adapters to send messages to the Consumer.
Here’s a diagram to give you an idea of the general flow of information:

An overview of the Orbital Bus flow.

Getting Ready

For this little test, let’s put everything on your local machine. You’ll need to prepare by installing two third-party components: Consul and RabbitMQ. We use these for service discovery and message communication respectively. If you want some help you can check out our more detailed instructions. Since Orbital Bus is ready to communicate with any RESTful web service, we’re going to use JSONPlaceholder. Feel free to check it out and get a feel for the kind of messages you want to send.

Build a Producer

The Producer is the instigator of the pipeline. It calls out using the Orbital Connector and RabbitMQ to get the Dispatcher communicating with other nodes. Since our current Orbital Connector is written in .NET, you’ll want a .NET application that references it. We have a NuGet package to make it simple. The connector offers four ways to send: a synchronous call, an asynchronous fire-and-forget call, an awaitable synchronous call, and a one-to-many route. We recommend starting with a synchronous call. All the Producer needs is the service ID of the destination service (which you add to Consul below) and a JSON-serialized payload.
For more detailed instructions on making a Producer, check out our How-To Guide. It’s got a thorough process with code samples and everything!
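The Orbital Connector’s exact API isn’t reproduced in this post, so the sketch below uses a hypothetical IOrbitalConnector interface and SendSync method purely to illustrate the shape of a synchronous call: a Consul service ID plus a JSON-serialized payload.

```csharp
using System;
using System.Text.Json;

// Hypothetical stand-in for the real Orbital Connector API; the actual
// NuGet package exposes its own types and method names.
public interface IOrbitalConnector
{
    // Synchronous send: blocks until the response comes back.
    string SendSync(string serviceId, string jsonPayload);
}

public static class ProducerSketch
{
    public static string CallWeatherService(IOrbitalConnector connector)
    {
        // All the Producer supplies: the destination's Consul service ID
        // and a JSON-serialized payload.
        var payload = JsonSerializer.Serialize(new { City = "Ottawa" });
        return connector.SendSync("weather-service", payload);
    }
}
```

The “weather-service” ID and payload shape are invented for the example; the real values come from whatever service you register with Consul in the steps below.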

Use Code Generation

Next we’ll set up the Consumer side. As we said above, we’re not going to bother building a web service (though you can if you really want to).
To get started you’ll need to download the Code Generation project. We made this tool to generate the libraries the Orbital Bus Receiver needs to connect with a service. All the files you work on for Code Generation are JavaScript, so your C#, Java, Python, and Ruby developers should all be able to use it. Of course we have a handy guide to making a library. When you’re done building your library, keep track of the `bin` folder in the project directory; we’re going to need all its contents.

Configure your Nodes

I know what you’re thinking: “Where’s the actual Orbital Bus?” That’s the beauty of our distributed system. The bus has no central hub to stand up. Each service has a node or nodes that live alongside it to facilitate communication.
To get our local instance up we’ll need both a Dispatcher and a Receiver node. You can download them on our release page. With the release package unzipped in a location of your choosing, you’ll want to copy over your code generation output. Remember that bin folder we told you to keep track of? Copy all its contents into the Plugins folder for the Receiver. The Receiver will pull in those libraries at runtime and then it’s ready to communicate to your web service.
You’ll also want to set the values of the configuration files to the appropriate values for your local deployment. We have a handy article about all the configuration properties. Be sure to open up any ports you’re planning on using for your Dispatcher and Receiver!


Now it’s time to set everything in motion! If your Consul and/or RabbitMQ aren’t already running, start them up. Start up your Receiver and register it with Consul. (We also have a Consul Manager tool in the release package; check out this article to see how you can use it to register your service.) Start up your Dispatcher and your Producer and start sending messages!
If you’ve run into any snags or want a more thorough description, check out our How-To Guide. It describes each step in detail so you can see how every part of the process should be configured.
What’s next? Try implementing a second call. Check out our other documentation, like our Handshake Diagram to better understand the paths of the messages. Maybe add another Receiver with another web service to give you an idea of multiple nodes on the network. Hopefully this test will be a small step along your long future with ESBs. Enjoy!

  • Joseph Pound

IT Organizations Need to Practice More, Dunk Less

Whenever I walk into a new client, the first things I hear from the Technology Executives are typically: “We need to modernize”, “We need to transform”, “We need to adopt <insert trendy tech buzzword>”. What I never hear is: “We need to bring our development and testing methodologies up to date”, “We need more collaboration across our teams”, “We need to inventory our skills and see what’s missing”.

If we think of the IT organization as a basketball team, that would be the equivalent of the coach saying: “We need more 3-pointers” and “We need those fancy new shoes to be able to dunk”. Meanwhile, even the most inexperienced youth coach knows that the keys to winning include: “We need to practice dribbling and footwork”, “We need to communicate better on the court”, and “We need to improve our free throws/jump shots/rebounds”.

While it is both valid and necessary for IT organizations to push towards the big picture objectives highlighted by glossy Gartner and Forrester whitepapers, these have to be supported by continuous and deliberate investment in foundational concepts.

Let me step in as coach for a moment and propose a strategy for focusing on the foundation…

1)    Invest in the basics: Invest in good basic IT delivery concepts, kind of like dribbling, footwork, and basic fitness in basketball:

  • Make Business Analysis about teasing out the requirements from the Business’ objectives, rather than simply asking the Business to write down their requirements
  • Encourage good testing rigor and embed it throughout the entire solution delivery lifecycle, and not just at the end just before go-live
  • Promote good documentation habits and create templates for common documents (e.g., logical solution architecture, functional designs, interface specifications, data models)
  • Spend adequate time and budget to implement solutions which improve developer productivity (e.g., continuous integration, 3rd party frameworks)
  • Allocate budget for developers to learn different languages so they can be exposed to different software concepts and improve their coding skills
  • Spend generously on training for system analysis, modeling, design methodologies (e.g., domain driven design, SOA, microservices architecture, semantic modeling, BPMN), and not only on those being standardized by the organization, but to improve people’s ability to make smart decisions

2)    Communication is key: Create an environment that promotes collaboration and teamwork:

  • Create communities of practice across your organization (or connect to external groups) to build on collective knowledge and experience
  • Implement real-time collaboration tools (no, Sharepoint and instant messenger don’t count)
  • Make governance less about formal approvals and more about ensuring the right expertise is pulled in at the right stage of a given project
  • Adopt iterative delivery methods to promote frequent touch points between IT and the Business, obtaining feedback and ensuring alignment

3)    Focus on the right skills: Build the skills that support your strategic objectives. After all, dunking is only made possible by training to jump higher:

  • Strengthen Information and Data Management capabilities as a foundation for Big Data Analytics
  • Educate the team on hashing algorithms, binary trees, digital contracts, and distributed storage to bring Blockchain to the table naturally
  • Leveraging Cloud means good distributed system design, loosely coupled interfaces, container-ready applications, and security frameworks that can deal with 3rd party infrastructure
  • Adopting COTS requires strong technical Business Analysis, ability to negotiate requirements with the Business, and strong platform administration skills

We all want to work with the cool new tech and follow the latest trends; working with the latest and greatest is what draws people to technology work. But a team is stronger when its foundation is solid and its players are well connected, so take the time to build your own skills and your team’s foundations so we can all up our game.

  • Shan Gu