Category: Foci Technology

Technology updates from the Foci Solutions team.

GitHub Actions – Deploying an Angular App

Recently I built a demo application that showcases some of the features provided by Angular. I will deploy it to GitHub Pages using GitHub Actions, a newly released CI/CD platform that open source repositories can use for free.

Since I already have a completed Angular project pushed to GitHub, all I need to do is create a GitHub workflow to build, test, and deploy the Angular application to GitHub Pages. Before I start, I need to create a folder named .github/workflows at the root of my repository.

To learn more about GitHub workflows, please read the workflow syntax for GitHub Actions article.

Create a GitHub Actions Workflow File

In .github/workflows, I added a YAML file for the workflow. Inside the workflow file, you can optionally name your workflow by adding:

name: workflow name

If you omit name inside the workflow file, GitHub will set the workflow name to the workflow file path relative to the root of the repository.

GitHub is flexible about how you name your workflow file, but it has to be a YAML file and it has to be in the .github/workflows folder.

Setup Workflow Trigger

Every workflow requires a trigger. I configured the workflow to trigger on pushes to the master branch:

on:
  push:
    branches:
      - 'master'

If you want to use a different trigger for your workflow, please take a look at the events that trigger workflows article and the on section of workflow syntax for GitHub Actions.
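
For example, a minimal sketch (an illustration only, not part of the workflow built in this post) that would also run the workflow on pull requests targeting master looks like this:

on:
  push:
    branches:
      - 'master'
  pull_request:
    branches:
      - 'master'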

Create the Angular Build And Test Job

In GitHub Actions, jobs are defined as a series of steps that are executed on a runner. Each job runs in its own workspace, meaning that files and other job side effects are not kept between jobs. To reduce build time and complexity, I will keep as much work inside one job as possible.

Thus, the job below is created to build and test the Angular application:

jobs:
  build:
    name: Build and Test
    runs-on: ubuntu-latest
    steps: ...

This job uses the latest Ubuntu GitHub-hosted runner. If you want to use a different GitHub-hosted runner, please read the virtual environments for GitHub-hosted runners article.

Checking out source code

Since jobs do not pull down the source code by default, you need to explicitly tell the job to do so. Therefore, I add the following to the steps of the build and test job:

- name: Checkout
  uses: actions/checkout@v1

Setup Node.js

To set up the Node.js version used by the job, add the following under the steps of the job:

- name: Use Node 12.x
  uses: actions/setup-node@v1
  with:
    node-version: '12.x'

The build and test job is configured to use Node.js 12.x. If you wish to use a different version, please take a look at the using Node.js with GitHub Actions article.
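
As an illustration only (a minimal sketch, not wired into the workflow in this post), a job could pin a different version or test against several versions with a build matrix:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: ['10.x', '12.x']
    steps:
      - name: Use Node ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}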

Run build and test

To build and test the Angular application, I added some supporting scripts to the application’s package.json file:

"build:ci": "ng build --prod --sourceMap=false --base-href /YOUR_REPOSITORY_NAME_HERE/"
"test:ci": "ng test --watch=false --code-coverage --source-map true"

As you can see, the test:ci script will also generate code coverage results for us, which will be used later down the line.

Note: To avoid MIME type errors caused by an invalid path, you need to set base-href to your repository name.

Then, I add the following to the job to build and test our application:

- name: Install dependencies
  run: npm ci
- name: Build
  run: npm run build:ci
- name: Test
  run: npm run test:ci

Upload artifacts

To expose the results of the current job to the next job, I need to configure the build and test job to upload the build artifacts. I also configured the job to upload the code coverage results so they can be reviewed.

- name: Archive build
  if: success()
  uses: actions/upload-artifact@v1
  with:
    name: deploy_dist
    path: dist
- name: Archive code coverage result
  if: success()
  uses: actions/upload-artifact@v1
  with:
    name: deploy_coverage
    path: coverage

if: success() ensures that the upload steps only run if all the previous steps passed. For more information, read the context and expression syntax for GitHub Actions article.

Create Deploy Job

With the build and test job complete, I can create the job that deploys the Angular application to GitHub Pages. I add the following YAML below the build and test job:

deploy:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout
      uses: actions/checkout@v1
    ...

needs: build tells GitHub to only execute the deploy job when the build and test job has completed successfully.

Download build artifact

I add the following under steps in the deploy job to download the build artifact from the build and test job:

- name: Download build
  uses: actions/download-artifact@v1
  with:
    name: deploy_dist

To learn more, take a look at the persisting workflow data using artifacts article.

Deploy to GitHub Pages

I use the GitHub Pages Deploy Action to deploy the Angular build to the gh-pages branch of the project repository:

- name: Deploy to GitHub Pages
  uses: JamesIves/github-pages-deploy-action@releases/v3
  with:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    BRANCH: gh-pages
    FOLDER: deploy_dist/YOUR_PROJECT_NAME_HERE

GITHUB_TOKEN is used to avoid providing a personal access token. To learn more about GITHUB_TOKEN, read the authenticating with the GITHUB_TOKEN article.

Conclusion

Once you check in your workflow file (which should look similar to the YAML below) to your master branch, you should see a workflow start on the GitHub Actions page. When the workflow is complete, you will see the build output and test coverage results in the artifacts section, and a branch called gh-pages will be created.

name: workflow name

on:
  push:
    branches:
      - 'master'

jobs:
  build:
    name: Build and Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Use Node 12.x
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build:ci
      - name: Test
        run: npm run test:ci
      - name: Archive build
        if: success()
        uses: actions/upload-artifact@v1
        with:
          name: deploy_dist
          path: dist
      - name: Archive code coverage result
        if: success()
        uses: actions/upload-artifact@v1
        with:
          name: deploy_coverage
          path: coverage
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Download build
        uses: actions/download-artifact@v1
        with:
          name: deploy_dist
      - name: Deploy to GitHub Pages
        uses: JamesIves/github-pages-deploy-action@releases/v3
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH: gh-pages
          FOLDER: deploy_dist/angular-demo

Ensure that your repository has GitHub Pages enabled and that the deployment source is set to the gh-pages branch. If this is set up properly, your Angular application should be deployed!

  • Hannah Sun

Multi Stage Pipelines & Azure DevOps

Many years ago, I wrote a blog post about TFS and DevOps. A lot has changed since then, with multiple versions of the build pipeline being released, but it continues to be one of the most trafficked articles on our site. Microsoft has worked hard to create a better experience for build automation and continuous integration, so I worked hard on updating this walkthrough. Recently, Microsoft released multi-stage pipelines, which work and feel much like GitLab CI.

In this post I’ll walk through a basic YAML file and show how you can get a C# project up, built, and tested quickly.

Setup

We need a project checked into DevOps before we begin. I have a repository that I made for this blog up on DevOps: a basic .NET Core console application and a unit test project that goes along with it. At the time of writing this blog post, you will also need to turn on the multi-stage pipelines preview feature in order to get the best view of these pipelines. You can do that by clicking on the user settings button.

User Settings

Then click on preview features

Preview Features

Then ensure that multi-stage pipelines are enabled

Multi stage pipelines enabled

First Steps

First we need to add a YAML file to our project. I tend to put this file directly at the root and name it azure-pipelines.yaml. Then we need to define our stages. A stage is a collection of jobs; stages can run concurrently or depend on another stage completing successfully. For this quick project we will have two different stages:

  • Build
  • Test

To define these stages in our pipeline, we need to write some YAML like this:

stages:
  - stage: build
    displayName: Build
  - stage: test
    displayName: Test
    dependsOn:
    - build

This gives us the building blocks to add our jobs. If you check this file into DevOps and navigate to Pipelines, you can see that we have a pipeline defined without any runs associated with it.

multi stage pipeline showing in dashboard

Adding a Job

A job runs on a build agent. By default, DevOps provides hosted build agents. These agents are pre-configured VMs with a lot of different development tools pre-installed. I'll be using the hosted agents for this post.

Let's add some YAML for a job that will build our dotnet solution. We can do this in one of two ways: we can use a DevOps “task” or we can write a script. Tasks can provide a lot of features that you would normally need to script yourself. They can be very helpful; however, they also hide a lot of what is being run. I tend to use tasks, as they get updated regularly to add features and fix bugs. Microsoft hasn't made tasks to solve every problem, however, so you will eventually need to write some scripts.

Example as Task

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
- stage: test
  displayName: Test
  dependsOn:
  - build

Example as script

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - script: |
        dotnet build --configuration $(buildConfiguration)
- stage: test
  displayName: Test
  dependsOn:
  - build

In both examples I have added a variable to set the build configuration for the pipeline. Variables are very helpful, and DevOps also provides a lot of predefined variables for you. You can read about them here.
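
For instance, a script step can reference predefined variables such as Build.BuildId and Build.SourceBranchName the same way it references our buildConfiguration variable. A minimal sketch, separate from the pipeline above:

steps:
- script: |
    echo "Build ID: $(Build.BuildId)"
    echo "Source branch: $(Build.SourceBranchName)"
  displayName: print some predefined variables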

Artifacts

Now that we have our job running and our solution is being built, we will probably want to retain the output. We need to publish these files as an artifact if we want to use them in a different job, or if we want to download them later to test the build manually.

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
    - publish: $(System.DefaultWorkingDirectory)/src/demo-project/bin/$(buildConfiguration)/netcoreapp3.0/
      artifact: source
- stage: test
  displayName: Test
  dependsOn:
  - build

Once the build is complete, you should see the artifacts on the build page. You can download them or use them in different jobs now.

multi stage pipeline artifacts published
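
If a later job needed those files, a download step could pull the published artifact back into that job's workspace. This is a minimal sketch only; the test stage below doesn't need it, since dotnet test rebuilds the solution:

steps:
- download: current
  artifact: source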

Testing

Now that we have our code built, we can go ahead and run the tests for our application. DevOps also has the ability to show us test results through its dashboards. It's easiest to use the task for this, as the task can upload the test results for us.

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
    - publish: $(System.DefaultWorkingDirectory)/src/demo-project/bin/$(buildConfiguration)/netcoreapp3.0/
      artifact: source
- stage: test
  displayName: Test
  dependsOn:
  - build
  jobs:
  - job: test_dotnet_solution
    displayName: test dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test        
        arguments: '--configuration $(buildConfiguration)'

multi stage pipeline tests successful

With this, you now have a basic build and test pipeline that will run with every check-in to your repository. There is a lot more that can be done, such as managing environments and performing releases. I hope that this is a good starting block to get you moving with DevOps.

  • Dan McCrady

Why .NET loves Linux

This is an update to the post I made a while ago titled “Why .Net Doesn't need to be Expensive”.  A lot has changed since I made that post.  For example: Visual Studio Code hadn't been released, .NET Core was called vNext, and .NET hadn't gone through its open-source transformation.  These additions to the .NET ecosystem have changed the way .NET developers work day-to-day, and being able to deploy .NET on Linux is quickly becoming a mandatory requirement for IT shops.

Microsoft has been on the Linux love train for quite some time now, and we are slowly starting to see the fruits of this transformation.  Just recently the Windows Subsystem for Linux was added to Windows 10 without the need to turn on developer mode.  Developers now have a native Linux bash that can be enabled through the Windows Store.  The new .NET Core project templates in Visual Studio include Docker support with just the click of a checkbox.  Azure allows you to host .NET Core projects on Linux, and has moved to allow container orchestration using technologies like Azure Container Service and, soon to come, Azure AKS (its managed Kubernetes).  This change is also reaching the open source community.  Most large projects have either ported their code to .NET Standard or are in the process of converting it.


Why so much Love?

Plain and simple: moving custom code to the cloud means moving to Linux.  All the cloud technologies that are coming out have Linux as a common player.  AWS, GCP, OCP, Azure, and even smaller players like Digital Ocean all provide Linux as the OS.  If an IT organization can't migrate its custom .NET code to Linux, it is dramatically limiting the choices it has for getting to the cloud.  If you aren't going with Linux, you only have two real choices:

1)  Find a Windows Server VM in the cloud and deploy to IIS.  

Technically yes, you are moving to the cloud, but are you really gaining any benefits?  Your operations team still needs to monitor, maintain, and patch this VM just as if it were in your private data centre.  You are also quickly locking yourself into the provider, since exporting the VM to move to another provider will be difficult and will require downtime as you make that transition.

2)  Use Azure PaaS Offerings like Web App Services.  

Azure is still your friend here.  They will take your web application code, slightly modified to be cloud ready, and host it for you.  The Web App Services offering is really good stuff.  It comes with free auto-scaling, monitoring, and guaranteed availability.  They even take care of patching and maintaining the infrastructure.  The downside here is that until you have migrated that application to Linux, you are tied to Azure.  No other cloud provider is looking at a way to host non-Core .NET web sites.  So if Azure changes the pricing model, you will need to change with it.


What does Linux get you?

Linux buys you true portability for your applications.  The most common way to get there is to write your applications as 12-factor applications and use Docker to wrap them and prepare them for deployment.  If you follow this approach, then pretty much any platform is open for you to deploy your applications.  Microsoft is currently working to create Windows Server Docker containers like microsoft/nanoserver, but the licensing and deployment constraints are still unclear.  It appears that you can only deploy these images on a licensed Windows Server 2016 system.  This restriction keeps your application tightly coupled to Windows systems and reduces your deployment options significantly.


More investment for .NET Developers

A little while ago I was talking to a group about how the shift to Linux will be a big shift for .NET developers.  Normally Microsoft would have a major release and developers could focus for a year or so on wrapping their heads around it.  When the TPL was released, async/await was the big player.  Bloggers would write endless articles on how to leverage this feature to introduce multi-threading into applications.  This update was all that .NET developers needed to focus on.  The next few years are changing a lot more than async/await: a new operating system in Linux, arguably a new framework with .NET Core, Docker containers, container orchestrators like Kubernetes, all while building strong DevOps capabilities.  The future is bright for .NET, but the time required to learn all of the advantages is long.  I plan to keep our developers moving in this direction, since it is the brightest path forward for custom software development in general, including the .NET ecosystem.


  • Dan McCrady

Using JavaScript and JSON as a Common Language in Orbital Bus

Large enterprises usually have many programming languages across their departments. These departments, often located in different cities, will build teams out of what they see as the best-available local resources. It’s fairly common to find large-scale enterprise or government groups that have applications written in .NET and Java, never mind the plethora of other languages and flavours thereof. This technological mishmash is a major challenge to any sort of enterprise service bus; one that Orbital Bus is trying to overcome.

In creating Orbital Bus, we decided at the start that developers shouldn't have to learn any new languages to implement our solution. The learning curve had to be minimal to ensure widespread adoption. We delivered part of that goal by creating our Code Generation utility. This tool allows us to take a single input and compile it into code usable by our ESB. However, the tool still needs input, so what were we to do?

Enter Javascript. We decided that by making the code generation input Javascript we would make it accessible to as many developers as possible with no extra work. No matter what language you develop in, you’ve probably had to work on some Javascript, whether to create visual effects or to load data with an Ajax call. We could implement Javascript with a high degree of confidence that users would be able to work with it without any sort of intimidating ramp. Javascript also provides a feature-rich environment that we don’t have to worry about maintaining. If developers want functionality that already exists in a library it’s minimal work for them to implement it. Along with Javascript, we were also able to rely on the JSON schema standard for modelling objects. We don’t have to worry about maintaining an API for describing models in our system. We simply have to point towards the standard we support and let the JSON schema community do the heavy lifting.

What exactly are we doing with all this Javascript? I mentioned the use of schemas to define models. We use models to define the types which are expected by the receiver. We take in standard JSON schemas to create C# classes, which are then deployed as part of a contract library with the receiver. This library is used by both the receiver and the dispatcher. (Check out our article about using MEF with our contract libraries.) The models defined in this schema are also the ones expected by our translation engine. The receiver node of Orbital Bus takes Javascript translation files which it executes in both directions. With this feature developers can implement any translation they want as the information passes through the receiver node. These translations are simple .js files with method calls. We even support logging and business errors through integrated methods. Check out our documentation for more information on implementation. We also used JSON files for our configurations rather than XML to make sure that our points of contact with Orbital Bus are as unified as possible. As we grow Orbital Bus' functionality we expect to grow its use of Javascript.

The default Javascript translation template.

It was tough trying to think of the best way to support a polyglot development environment. Thankfully Javascript gave us a single point of entry we could use across many development environments. There's still work we want to do with our Javascript implementation. We want to integrate libraries by default in our translations, allowing developers to use library calls without having to include them manually. We also want to add Javascript to our collection of connectors for the Orbital Bus. With a common input format set out, Orbital Bus will be free to grow its implementations while continuing to support developers from a wide variety of backgrounds.

  • Joseph Pound

Dynamic Plugin Loading Using MEF

The Managed Extensibility Framework (MEF) is a library that enables software to discover and load libraries at runtime without hard-coded references. Microsoft included MEF in .NET Framework 4.0, and since then it has been commonly used for dependency resolution and inversion of control patterns.

Orbital Bus makes communication possible between different parties by sharing contracts and schemas. A receiver has a contract library that contains all the information needed for a dispatcher to make proper synchronous and asynchronous calls all the way to an end consumer. The dispatcher downloads a receiver's contract library and then uses it to construct calls with the right data schemas. It became very clear to us during development that a crucial requirement was for the dispatcher to be able to handle any downloaded contract library DLL and process it without code changes. This is where MEF comes into play. It lets us inject libraries, in this case the receiver's contract libraries, at the start-up stage.

Once we chose MEF as our integration tool, we were able to start the Code Generation project. This project is a convenient CLI tool that efficiently generates the contract libraries and plugins which are loaded by the receiver. These libraries are made available for download to any dispatcher on the mesh network. One challenge we encountered when a dispatcher downloads multiple contract libraries was how to distinguish between them. What if two contracts have similar operation names? How can the dispatcher tell which operation to select from its composition container? We solved this challenge by making sure that each generated contract library has a unique ServiceId that is exported as metadata within the contract library. This setting enables the dispatcher to filter operations based on their ServiceId:

    namespace ConsumerContractLibrary
    {
        [ExportMetadata("ServiceId", "ConsumerLibrary")]
        public class AddCustomerOperation : IOperationDescription {}
    }

When the receiver starts up, it pulls the plugins from its Plugins folder and loads the plugin.dll and adapters into MEF's CompositionContainer, a component used to manage the composition of parts. Those dependencies are injected into the receiver as it loads. In addition to handling messages destined for the consumer, the receiver also serves as a file server that waits for the dispatcher to download the contract library when needed.

    public PluginLoader(IConfigurationService config)
    {
        this.config = config;
        var container = this.BuildContainer(); // load the plugin DLLs and create composition container
        this.RegisterAdapters(container);
        var details = this.RegisterPlugins(container);
        this.BootStrapSubscriberDetails(details); //Creates needed dependencies and bootstraps the given details.
    }

After a dispatcher downloads the available contract library specifications into a composition container, it will filter and return all the exported values in the container corresponding to the given ServiceId.

    public static IEnumerable<T> GetExportedValues<T>(this CompositionContainer container,
            Func<IDictionary<string, object>, bool> predicate)
    {
        var exportedValues = new List<T>();

        foreach (var part in container.Catalog.Parts)
        {
            foreach (var ExportDef in part.ExportDefinitions)
            {
                if (ExportDef.ContractName == typeof(T).FullName)
                {
                    if (predicate(ExportDef.Metadata))
                        exportedValues.Add((T)part.CreatePart().GetExportedValue(ExportDef));
                }
            }
        }

        return exportedValues;
    }

Where the predicate clause is effectively the filter we need for ServiceId:

    metadata => metadata.ContainsKeyWithValue(METADATAKEY, serviceId)

After this filtering process, the dispatcher has all the contract library operations that are supported by the receiver.

MEF proved invaluable in solving the problem of runtime library integration and enabling the plugin architecture. This implementation gives Orbital Bus the flexibility for developers to customize or update their contract libraries, service configurations, and translations without affecting other services on the bus. As our work continues, we plan on looking closer at the issue of versioning in the dispatcher to keep its cache in sync with the receiver's contract libraries, making Orbital Bus an even more agile messaging solution.

  • Dan McCrady

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It's hard to justify the extra time and resources needed to build automated tests that should mimic what developers are already doing. This advocacy can be especially difficult early in development, when CI failures are common and the pipeline needs a lot of work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Because a robust continuous integration pipeline is vital during development: it protects against deploying broken code and surfaces issues so bugs are removed before they reach production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automated provisioning of multiple machines for integration tests. We looked at a variety of tools, including Vagrant, SaltStack, Chef, and Puppet. What we found is that this automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized that we would spend more time maintaining that portion of the pipeline than reaping the benefits, so we stood up the VMs manually and replaced provisioning with a stage to clean the machines between runs.

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment we began using the GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, and brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab pages to deploy our documentation.

A shot of our Jenkins Continuous Integration pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives. False negatives immediately negate the benefits of continuous integration because the team stops respecting the errors being thrown by the system. At one point, our disregard for the failures on the CI pipeline prevented us from noticing a significant compatibility error in our code. Each failure was an opportunity for us to understand how our code was running on multiple platforms, to explore the delta between development and production environments, and ultimately to make our solution more robust. From the perspective of productivity it was costly, but the value of hardening our solution greatly outweighed the time spent.

A capture of one of our Continuous Integration GitLab pipelines.

You would be mistaken if you thought we’ve stopped working on our pipeline. We have plans to continue to grow our CI, expanding our integration tests to include performance benchmarks and to work with the multiple projects which have originated in the Orbital Bus development. These additional steps and tests will be developed alongside our new features, so as to integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound

Getting Started with Orbital Bus

You've heard people talk about enterprise service buses and you think it's time you learned how to use one. You read an awesome blog post about this new thing called Orbital Bus and you think it would be a good project to play around with. Where should you start? Let's start here.

Understanding the Architecture

I’m sure you’ve checked out our project README, but just in case you need a refresher here’s a quick overview of how Orbital Bus works.
Everything starts with the Producer and the Consumer. The Producer produces calls into the system. These calls can be synchronous requests or asynchronous fire-and-forget messages. What's important is that the Producer is what initiates the action. The Consumer consumes messages off the queue. Both the Producer and Consumer are external to Orbital Bus. They might be third-party services, COTS products, or custom code applications made by your developer(s). The Orbital Connector is a library the Producer uses to get messages into the system. We have a whole project dedicated to connectors. The Connector uses RabbitMQ to pass messages to the Dispatcher. The Dispatcher listens for incoming messages, finds services via Consul, and sends messages to the Receiver via its queue. Receivers do the heavy lifting. They load custom libraries, transform messages, and use adapters to send messages to the Consumer.
Here’s a diagram to give you an idea of the general flow of information:

An overview of the Orbital Bus flow.

Getting ready

For this little test, let’s put everything on your local machine. You’ll need to prepare by installing two third-party components: Consul and RabbitMQ. We use these for service discovery and message communication respectively. If you want some help you can check out our more detailed instructions. Since Orbital Bus is ready to communicate with any RESTful web service, we’re going to use JSONPlaceholder. Feel free to check it out and get a feel for the kind of messages you want to send.

Build a Producer

The Producer is the instigator of the pipeline. It calls out using the Orbital Connector and RabbitMQ to get the Dispatcher communicating with other nodes. Since our current Orbital Connector is written in .NET, you'll want a .NET application that references it. We have a NuGet package to make it simple. The connector supports four ways of sending: synchronous, asynchronous, awaitable synchronous, and a one-to-many route. We recommend starting with a synchronous call. All the producer needs is the service ID of the destination service (which you add to Consul below) and a JSON-serialized payload.
For more detailed instructions on making a Producer, check out our How-To Guide. It’s got a thorough process with code samples and everything!

Use Code Generation

Next we'll set up the Consumer side. As we said above, we're not going to bother building a web service (though you can if you really want to).
To get started you’re going to have to download the Code Generation project. We made this tool to help generate the necessary libraries for the Orbital Bus Receiver to connect with a service. All the files you work on for Code Generation are Javascript, so your C#, Java, Python, and Ruby developers should all be able to use it. Of course we have a handy guide to making a library. When you’re done building your library keep track of the `bin` folder in the project directory. We’re going to need all its contents.

Configure your Nodes

I know what you’re thinking: “Where’s the actual Orbital Bus?” That’s the beauty of our distributed system. The bus has no central hub to stand up. Each service has a node or nodes that live alongside it to facilitate communication.
To get our local instance up we’ll need both a Dispatcher and a Receiver node. You can download them on our release page. With the release package unzipped in a location of your choosing, you’ll want to copy over your code generation output. Remember that bin folder we told you to keep track of? Copy all its contents into the Plugins folder for the Receiver. The Receiver will pull in those libraries at runtime and then it’s ready to communicate to your web service.
You’ll also want to set the values of the configuration files to the appropriate values for your local deployment. We have a handy article about all the configuration properties. Be sure to open up any ports you’re planning on using for your Dispatcher and Receiver!

Run!

Now it's time to set everything in motion! If Consul and/or RabbitMQ aren't already running, start them up. Start up your Receiver and register it with Consul. (We also have a Consul Manager tool in the release package. Check out this article to see how you can use it to register your service.) Start up your Dispatcher and your Producer and start sending messages!
If you’ve run into any snags or want a more thorough description, check out our How-To Guide. It describes each step in detail so you can see how every part of the process should be configured.
What’s next? Try implementing a second call. Check out our other documentation, like our Handshake Diagram to better understand the paths of the messages. Maybe add another Receiver with another web service to give you an idea of multiple nodes on the network. Hopefully this test will be a small step along your long future with ESBs. Enjoy!

  • Joseph Pound

Introducing the Orbital Bus

Today, we are proud to announce the public beta launch of the Orbital Bus open source project. The Orbital Bus is a distributed Enterprise Service Bus (ESB) intended to make it easier for developers to implement smart, reusable, loosely coupled services. We believe that a peer-to-peer mesh of lightweight integration nodes provides a much more robust and flexible ESB architecture than the traditional hub/spoke approach. Please check out our public repository and documentation.

I have been working with Service-Oriented Architecture (SOA) and Enterprise Service Buses (ESBs) for the majority of my career. I can't even count how many debates I've been in about the value of implementing an ESB and how a hub/spoke architecture is more sustainable than point-to-point integrations. In fact, for a number of years, I took the architectural benefits of an ESB for granted.

By 2013, I was starting to see a pattern of large enterprise clients struggling to adopt SOA not for architectural or technology reasons, but for organizational and cultural reasons. I came to the realization that while ESBs were being sold as bringing about a certain type of architectural rigor, they could only do that if the organization was very hierarchical and centralized. The ESB is really not well suited to an IT organization made up of more distributed teams and governance structures.

We thought of a better way to solve the real-time integration problem. With the help of some funding from NRC's IRAP program, we started development of the Orbital Bus in 2015. The goal was to solve some of the major shortcomings that we see in traditional ESBs:

Single Point of Failure – An ESB creates a single point of failure, as all common integrations must pass through it. Enterprises spend a significant amount of money to improve the availability of their ESB infrastructures (e.g., hardware redundancy, DR sites, clustering). What if the responsibility of translation and routing was pushed out to the edges, where the service providers are hosted? There would be no single point of failure in the middle. The availability of each service would purely be dictated by the service provider and the lightweight integration node sitting in front of it, which means one node going down wouldn't impact the rest of the ecosystem.

Implementation Timelines and Cost – ESBs take a long time and a lot of money to stand up. A lot of effort is needed to design the architecture so it's future proof and robust enough for the entire enterprise. And then there's the cost of the infrastructure, software licenses, and redundancy. Never mind the cost of bringing in external expertise on whichever platform is being implemented. What if the platform actually promoted more organic growth? Each node is lightweight and could be stood up with no additional network zone or infrastructure. Developers would be able to build decoupled interfaces between a handful of systems in a matter of days rather than months. And instead of needing to fiddle with complex platform configurations and go through 200+ page installation guides, the ESB could be stood up with a handful of scripts, creating native service bindings in commodity languages such as C#.

Developer Empowerment – ESBs move the responsibility of creating decoupled interfaces from the developer of the service to a central ESB team. It's no surprise that nearly every SOA program I've ever worked on faced significant resistance from the development teams. And let's face it, most IT organizations are poorly equipped to handle major culture changes, and that resistance often results in the killing of a SOA program. What if the architecture actually empowered developers to build better and more abstracted interfaces rather than trying to wrestle control away from them? The platform would promote a contract-first implementation approach and generate all the boring binding and serialization code so developers can focus on the more fun stuff. By having the service interface code artifacts tied more closely to the service provider code, it opens up opportunities to better manage versions and dependencies through source control and CI.

We’ve had a lot of fun designing and developing this product over the last two and a half years. We are excited to offer the Orbital Bus to the community to collaborate and gather as much feedback as we can. Working with the open source community, we hope to create a more efficient and developer-centric way of integrating across distributed systems. We hope you will join us on this journey!

  • Shan Gu

Embracing the Technology Meltingpot

One of the most common objections I hear among my large enterprise and government clients when discussing adopting new technologies is “We’re a Microsoft shop so why would we look at a Java-based tool?” or open source, or Google, or Salesforce, and the list goes on.  This objection is grounded in the opinion that increasing the technology mix increases complexity, and thus increases the operational risk and cost.

However, the biggest challenge for IT executives has shifted from tightening their operational budgets to managing the constant risk of technologies becoming unsupported or having a vendor development path that no longer aligns with the enterprise's needs.  The technology market is evolving and changing faster than ever.  Programming languages grow and wane in popularity and support over a cycle of just a couple of years; new frameworks breathe new life into technologies that were previously left to die; acquisitions can make entire enterprise platforms obsolete overnight; and new innovations are happening constantly throughout the IT stack, from networking, to virtualization, to application platforms, to business applications.

In such a rapidly changing and unpredictable environment, the best approach to managing risk (as any good investment adviser will tell you) is to diversify.

In fact, any IT organization that doesn't have an openness to innovate and investigate new technologies will ultimately die a slow death through obsolescence and operational bloat.

Instead of being afraid of the operational complexity and cost introduced by shaking up the technology stack, IT executives should be embracing them as opportunities for their teams to develop new skills and to gain a wider perspective on how to implement solutions.  Instead of remaining within the comfortable confines and protections of a given development framework, developers should be pushed to understand how different technologies interoperate and the importance of having disciplined SDLC methodologies to deal with complex deployments.

The key to success in all of this is integration.  Developing mature integration practices like modular design, loose coupling, and standards-based interoperability ensures that new technologies can be plugged into and unplugged from the enterprise without cascading impacts on existing systems.  Disciplined SDLC methodology especially around configuration management and change control allow different technology teams to work in parallel, resulting in more efficient project delivery.

IT organizations must adopt a culture of openness and curiosity toward the inevitable changes to their technology ecosystem.  They must invest in mature and disciplined integration practices to effectively manage those changes.

  • Shan Gu

Prescriptive Governance Leads to Shadow IT

Let's face it, to most people in IT, "Governance" is a dirty word.  That perception is not born out of an idea that IT governance is bad, but out of the reality that IT governance is badly implemented in most organizations.  When an organization confuses good IT governance with overly detailed and prescriptive IT governance, it starts to constrain rather than enable its people.  And when people feel constrained and not empowered to make decisions, they work around or against the process, which then results in the proliferation of shadow IT.

The reason for this phenomenon is that many organizations approach IT governance with a few very flawed assumptions around how software and technology projects work:

  1. Changes are bad;
  2. Consistent results are driven from consistent processes;
  3. Standardization reduces the number of decisions, which makes the process more efficient and consistent; and
  4. The measure of a good project is being on-budget and on-time.

These assumptions are fatal to any IT organization because they fail to recognize the realities of technology and of the people implementing it:

  1. Technology is about change.  The whole point of implementing technology is to support and enable more rapidly changing business needs.  Add to that the speed of technology change, and the default position of any good IT governance process should be to assume constant change and deal with it head on instead of trying to contain and avoid it.
  2. Speaking of change, there is no way for a one-size-fits-all process to anticipate all the different ways a technology project can unfold.  In fact, the more prescriptive a process is, the less likely it will fit the next project.  There are simply too many variables and moving targets in modern enterprise IT projects.
  3. You hired smart people to implement technology and guess what?  Smart people like to make decisions and feel ownership of their work.  By over-standardizing, talented tech resources are turned into the IT equivalent of assembly line workers.  At best they become disengaged and stale in their skills.  But more likely, they take matters into their own hands and create opportunities to make decisions or fight the governance process to retain some ownership of the work they're being asked to do.
  4. IT initiatives exist to make your business better and your users happier.  While budget, scope, and schedule are important, they're measures of the management process rather than of whether a project was truly successful.

So how do we fix this?  In a word, simplify!  And here are some things to think about when slimming down your IT governance process:

  1. Reduce and align the number of gates to the number of “point-of-no-return” decisions on a project (e.g., business case, functional design, technical design, go-live).
  2. For each gate, focus on what kinds of decisions need to be made, guidance on who should be involved, and some basic examples of information that should be provided as input.  Let the smart people do what they're being paid to do, which is talk it out and own the decision.
  3. Standardize only the mundane and highly repeatable decisions.  Standards are about helping speed up the process and focusing effort on debating only the things that deserve to be debated.  They're not about compliance metrics and enforcement.  If you have to put in an exception or exemption process, you've gone too far.
  4. Ensure communications cover what the project will deliver in terms of functionality and value.  Most stakeholders care a lot more about whether a particular feature set is being implemented for their users than about whether a particular deliverable is green or yellow.

In the end, this is about creating a process that helps to focus people on the real objectives of the business and fostering communications.  It's about assuming that people are intelligent and reasonable and capable of making good decisions when given the opportunity.  And if that turns out not to be the case, it's an HR problem and not something that should be fixed with yet more governance processes.
  • Shan Gu