Category: Foci Technology

Technology updates from the Foci Solutions team.

Dynamic Plugin Loading Using MEF

The Managed Extensibility Framework (MEF) is a library that enables software to discover and load libraries at runtime without hard-coded references. Microsoft included MEF in .NET Framework 4.0, and since then it has been commonly used for dependency resolution and inversion-of-control patterns.
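
At its core, MEF matches classes decorated with [Export] to properties decorated with [Import]. The following is a minimal, self-contained sketch of that composition; the IGreeter and HelloGreeter names are purely illustrative, not Orbital Bus types:

    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    public interface IGreeter
    {
        string Greet();
    }

    [Export(typeof(IGreeter))]
    public class HelloGreeter : IGreeter
    {
        public string Greet() { return "Hello from a discovered part!"; }
    }

    public class Host
    {
        [Import]
        public IGreeter Greeter { get; set; }

        public void Compose()
        {
            // Discover exports in the current assembly and satisfy this object's imports.
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
        }
    }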

Orbital Bus makes communication possible between different parties by sharing contracts and schemas. A receiver has a contract library containing all the information a dispatcher needs to make proper synchronous and asynchronous calls all the way to an end consumer. The dispatcher downloads a receiver’s contract library and then uses it to construct calls with the right data schemas. It became very clear to us during development that a crucial requirement was for the dispatcher to be able to handle any downloaded contract library DLL and process it without code changes. This is where MEF comes into play. It lets us inject libraries, in this case the receiver’s contract libraries, at the start-up stage.

Once we chose MEF as our integration tool, we were able to start the Code Generation Project. This project is a convenient CLI tool that efficiently generates the contract libraries and plugins which are loaded by the receiver. These libraries are made available for download to any dispatcher on the mesh network. One challenge we encountered was distinguishing between the multiple contract libraries a dispatcher downloads. What if two contracts have similar operation names? How can the dispatcher tell which operation to select from its composition container? We solved this by ensuring that each generated contract library has a unique ServiceId exported as metadata within the library. This enables the dispatcher to filter operations by their ServiceId:

    using System.ComponentModel.Composition;

    namespace ConsumerContractLibrary
    {
        // The Export attribute makes the part discoverable; the metadata identifies its library.
        [Export(typeof(IOperationDescription))]
        [ExportMetadata("ServiceId", "ConsumerLibrary")]
        public class AddCustomerOperation : IOperationDescription {}
    }

When the receiver starts up, it pulls the plugins from its Plugins folder and loads the plugin DLLs and adapters into MEF’s CompositionContainer, a component used to manage the composition of parts. Those dependencies are injected into the receiver as it loads. In addition to handling messages destined for the consumer, the receiver also serves as a file server from which the dispatcher can download the contract library when needed.

    public PluginLoader(IConfigurationService config)
    {
        this.config = config;
        var container = this.BuildContainer(); // load the plugin DLLs and create composition container
        this.RegisterAdapters(container);
        var details = this.RegisterPlugins(container);
        this.BootStrapSubscriberDetails(details); //Creates needed dependencies and bootstraps the given details.
    }
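
The BuildContainer call above is where MEF performs its discovery. A minimal sketch of what such a method can look like, using MEF’s DirectoryCatalog to scan the Plugins folder described above (the details are illustrative, not the exact Orbital Bus implementation):

    private CompositionContainer BuildContainer()
    {
        // Scan every DLL in the Plugins folder for exported parts.
        var catalog = new DirectoryCatalog("Plugins", "*.dll");
        return new CompositionContainer(catalog);
    }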

After a dispatcher downloads the available contract library specifications into a composition container, it filters and returns all the exported values in the container corresponding to the given ServiceId.

    public static IEnumerable<T> GetExportedValues<T>(this CompositionContainer container,
            Func<IDictionary<string, object>, bool> predicate)
    {
        var exportedValues = new List<T>();

        foreach (var part in container.Catalog.Parts)
        {
            foreach (var exportDef in part.ExportDefinitions)
            {
                // Only consider exports of the requested contract type whose metadata matches.
                if (exportDef.ContractName == typeof(T).FullName && predicate(exportDef.Metadata))
                {
                    exportedValues.Add((T)part.CreatePart().GetExportedValue(exportDef));
                }
            }
        }

        return exportedValues;
    }

where the predicate is effectively the filter we need for ServiceId:

    metadata => metadata.ContainsKeyWithValue(METADATAKEY, serviceId)

After filtering, the dispatcher has all the contract library operations that are supported by the receiver.
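
Putting the pieces together, the dispatcher-side lookup reduces to something like the following sketch. The inline metadata check stands in for the ContainsKeyWithValue helper used above, and METADATAKEY is assumed to be "ServiceId", matching the export shown earlier:

    // Retrieve only the operations exported by the contract library with the given ServiceId.
    IEnumerable<IOperationDescription> operations = container.GetExportedValues<IOperationDescription>(
        metadata => metadata.ContainsKey("ServiceId") && (string)metadata["ServiceId"] == serviceId);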

MEF proved invaluable in solving the problem of runtime library integration and enabling the plugin architecture. This implementation gives Orbital Bus the flexibility for developers to customize or update their contract libraries, service configurations, and translations without affecting other services on the bus. As our work continues, we plan to look closer at the issue of versioning in the dispatcher to keep its cache in sync with the receiver’s contract libraries, making Orbital Bus an even more agile messaging solution.

  • Yi Luo

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It’s hard to justify the extra time and resources needed to build automated tests that mimic what developers already do by hand. This advocacy can be especially difficult early in development, when CI failures are common and the pipeline needs a lot of work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Because a robust continuous integration pipeline is vital during development: it protects against the deployment of broken code and surfaces issues so bugs are removed before production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automatically provisioning multiple machines for integration tests. We evaluated a variety of tools, including Vagrant, SaltStack, Chef, and Puppet. What we found is that this automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized we would spend more time maintaining that portion of the pipeline than reaping the benefits, so we stood up the VMs manually and replaced provisioning with a stage that cleans the machines between runs.

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment we began using the GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, and brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab pages to deploy our documentation.

A shot of our Jenkins Continuous Integration pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives. False negatives immediately negate the benefits of continuous integration because the team stops respecting the errors being thrown by the system. At one point, our disregard for the failures on the CI pipeline prevented us from noticing a significant compatibility error in our code. Each failure was an opportunity for us to understand how our code was running on multiple platforms, to explore the delta between development and production environments, and ultimately to make our solution more robust. From the perspective of productivity it was costly, but the value of hardening our solution greatly outweighed the time spent.

A capture of one of our Continuous Integration GitLab pipelines.

You would be mistaken if you thought we’ve stopped working on our pipeline. We plan to continue growing our CI, expanding our integration tests to include performance benchmarks and to cover the multiple projects that have originated in the Orbital Bus development. These additional steps and tests will be developed alongside our new features so that they integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound

Securing the Orbital Bus

After getting familiar with the Orbital Bus architecture and how it solves the traditional Enterprise Service Bus (ESB) shortcomings, it was time for our development team to secure the components and communication channels of our distributed solution.

The Challenge

The Orbital Bus makes exchanging information possible between various parties. To that end, Orbital Bus has three components involving communication:

  • RabbitMQ is the message broker across its various components.
  • Orbital Bus service registration and discovery is built on top of Consul.
  • The receiver calls out to the consumer with a REST Adapter that is built using the RestSharp Client.

These communication components support TLS encryption and HTTP authentication. We also want to support additional authentication and message-protection mechanisms in the future. In order to implement these solutions, Orbital Bus needs to provide a way to save credentials, X.509 certificates, and other forms of tokens. To summarize, the security challenges we encountered in developing Orbital Bus were to:

  1. Provide a secure vault to store various types of credentials, certificates, and tokens.
  2. Make the security features optional so they can be enabled only when needed.

The Solution

While working on the Orbital Bus it became obvious that a secure vault was needed to save sensitive information such as credentials, tokens, and certificates. Inspired by the Java Keystore, Foci Solutions designed and developed a platform-agnostic C# Keystore solution that works on Windows or Linux. Foci’s Keystore is available as a NuGet package, and it also comes with a Keystore Manager CLI tool to perform CRUD operations on the Keystore directly. Please visit the Keystore’s How to Guide for more details on how to use the Keystore and its manager.

The Keystore addresses the first security challenge. Your system requires a secure RabbitMQ client? Not a problem. You can have the credentials saved in the Keystore and use them whenever needed. Your Orbital Bus implementation requires using a certificate for service discovery through Consul? The Keystore can encrypt and save the certificate to be used whenever needed. If you look closely at the Orbital Bus API Documentation, you will notice that there is a KeystoreService and a KeystoreRepository that make the integration with Foci’s Keystore seamless. The KeystoreRepository provides CRUD operations, and the KeystoreService makes them available to any of the Orbital Bus components.
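
As a rough illustration of the pattern only, retrieving stored credentials by their Alias through the KeystoreService might look something like the sketch below. The method and type signatures here are placeholders, not the Keystore’s actual API; see the How to Guide for the real calls.

// Hypothetical sketch: these names and signatures are illustrative placeholders.
var keystore = new KeystoreService(DataProtectionScope.CurrentUser);
var rabbitCredentials = keystore.GetEntry("rabbitCredentials"); // look up an entry by its Alias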

Now that the first security challenge has been addressed through the Keystore integration, let’s move on to the second challenge: how to make security available but optional? The first thought that comes to mind is to modify the Orbital Bus code. After further consideration, it becomes very clear that modifying code to match each deployment’s security requirements is an expensive approach. We decided instead to integrate the security options into our configuration service to allow changes on the fly. This way security options throughout the Orbital Bus solution can be toggled with minimal effort. You want to secure your dispatcher’s communication to the RabbitMQ client? Then all you need is to turn on a security flag and provide the RabbitMQ credentials. Just let Orbital Bus’ configuration service take care of the rest.

How to Use the Keystore

Foci’s Keystore can accommodate various entry types such as certificates, key pairs, username/password pairs, and tokens. Each entry in the Keystore has a unique Alias to keep entries organized. The Keystore can be configured to encrypt and decrypt its content using either the Current User or Local Machine data protection scope. The Keystore is fully integrated for use by any component of the Orbital Bus, like the dispatcher or receiver. You only need to initialize the Keystore with the Keystore Manager Tool and add any credentials or certificates your solution requires. For example: your implementation requires secure communication between the dispatcher and RabbitMQ using a username and password? All you need to do is create the Keystore using the Keystore Manager Tool and add a new entry for the required credentials with a unique Alias. What’s next? How do you retrieve the stored entries? What is the Alias for, and how do you use it? All this is explained in the next section.

How to Configure Security

The Orbital Bus approach favours configuration over customization. In this section we will walk through how you can configure RabbitMQ, Consul, and the REST Adapter to be secure. The Orbital Bus has a KeystoreService that sits at the root of the solution. The KeystoreService is injected into the ConfigurationService class. This ConfigurationService is a powerful and flexible tool: it can be injected into any component, and it imports any set of configurations stored in a specified JSON file, mapped into their own configuration model. For example, the ConfigurationService is injected into the DispatcherRegistration in order to configure the dispatcher with settings including the RabbitMQ options for addresses, credentials, and certificates.

RabbitMQ Configuration

Both the dispatcher and receiver establish RabbitMQ buses that can be configured as secure. The following is a JSON configuration file for a dispatcher that has the RabbitMQ security enabled:

{
  "DispatcherConfiguration":
  {
    "BusHostIp": "localhost",
    "BusHostPort": 5672,
    "ContractLibraryPrefix": "",
    "ConsulConfiguration": {
      "HostIp": "localhost",
      "HostPort": 8500
    }
  },
  "BaseConfiguration": {
    "RabbitMQConfiguration": {
      "BusHostIp": "localhost",
      "BusHostPort": 5672,
      "SslEnabled": "true",
      "Alias": "rabbitCredentials"
    }
  }
}

You might notice that the property SslEnabled is set to true, and there is an Alias property with the value “rabbitCredentials”. This simple configuration allows Orbital Bus to enable secure communications with the RabbitMQ server. The Alias here is the unique name we assigned to the credentials entry saved in the Keystore using the Keystore Manager Tool. Securing RabbitMQ in Orbital Bus is as simple as that: save your credentials in the Keystore, and make sure you edit your configuration to point to the stored credentials’ Alias.

Consul Configuration

For Orbital Bus we implemented secure Consul connections with certificate authentication. Any REST client or request created to communicate with Consul should have a certificate appended for authentication. In return, Consul presents its own certificate to authenticate itself to the client. The following is a JSON configuration file for a dispatcher that has the Consul security enabled:

{
  "DispatcherConfiguration":   
  {
    "BusHostIp": "localhost",
    "BusHostPort": 5672,
    "ContractLibraryPrefix": "",
    "ConsulConfiguration": {
      "HostIp": "localhost",
      "HostPort": 8500
    }
  },
  "BaseConfiguration": {
    "ConsulConfiguration": {
      "HostIp": "localhost",
      "HostPort": 8500,
      "SslEnabledConsul": "true",
      "Alias": "consulcert"
    }
  }
}

Here a similar approach to the RabbitMQ implementation is used. An entry with the Alias “consulcert” is referenced to retrieve the stored certificate, which is injected into the ConsulService when it’s initialized. The service then appends that certificate to requests.
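
To sketch the idea of appending the certificate, a RestSharp client can carry a client certificate as shown below. This is illustrative only: GetCertificateFromKeystore is a hypothetical helper standing in for the Keystore lookup, and the Consul URL and path are examples.

using RestSharp;
using System.Security.Cryptography.X509Certificates;

// Hypothetical helper standing in for the Keystore lookup by Alias.
X509Certificate2 consulCert = GetCertificateFromKeystore("consulcert");

// Attach the certificate so Consul can authenticate the client.
var client = new RestClient("https://localhost:8500");
client.ClientCertificates = new X509CertificateCollection { consulCert };
var response = client.Execute(new RestRequest("v1/catalog/services", Method.GET));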

REST API

The REST Adapter follows a similar approach to enable and configure secure HTTP communications. The RestAdapterConfiguration class has a SecureConsumer flag to indicate whether security is enabled and a ConsumerAlias that contains the unique Alias name for the credentials in the Keystore.
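
For illustration, a configuration fragment using those two properties might look like the following. The surrounding structure is an assumption modelled on the dispatcher examples above, and the Alias value is made up:

{
  "RestAdapterConfiguration": {
    "SecureConsumer": "true",
    "ConsumerAlias": "consumerCredentials"
  }
}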

Security is always a pressing concern, and the best solution is often not easily apparent. In building the Keystore, we sought to make a tool that could be used easily and repeatedly, while at the same time making it an integral part of the Orbital Bus. We recommend checking out the How To Guide and trying it out yourself.

  • Rabie Almatarneh

Testing an ESB with Selenium!

Manual integration testing is the first “test plan” in application development. This strategy quickly becomes a pain, so introducing automation early is important in any project. When working on an enterprise service bus, automated testing is a must-have. Early in the development of Orbital Bus we began implementing automated unit tests and even automated integration tests in our continuous integration pipeline. Scripting such tests is a common approach for similar ESB projects. Why are we writing an article about Selenium then? Eventually we decided that console applications weren’t enough. It’s very common to have web applications that call out to multiple micro-services via an ESB, so we wanted to build one and test it out. How would we automate those tests? We chose Selenium.

 

Selenium is an open-source testing solution for web apps. Selenium helps you automate tasks to make sure your web application is working as you expect. Selenium has multiple tools:

  • Selenium 2 (WebDriver): Supplies a well-designed object-oriented API that provides improved support for modern, advanced web-app testing problems.
  • Selenium 1 (Remote Control): Provides some features that may not be available in Selenium 2 for a while, including support for almost every browser and support of several additional languages (i.e. Java, Javascript, Ruby, PHP, Python, Perl, and C#).
  • Selenium IDE: Helps with rapid prototyping of tests with Selenium for experienced developers or beginner programmers who are looking to learn test automation.
  • Selenium-Grid: Allows for tests to run on different machines and in different browsers in parallel.

For Orbital Bus, we used Selenium 2 since it supports nine different drivers (including mobile OSes). That ensures that, going forward, we can adapt our tests however we need.

We know Orbital Bus will be an essential component in web applications, so we needed to test its ability to handle a large number of requests from a browser-based application. By stressing Orbital Bus with many requests, we could verify that our message broker receives and delivers the messages properly and that the Dispatcher and Receiver handle and translate the messages as expected.

In addition to Selenium 2, we needed a web driver to conduct the testing. We chose the PhantomJS driver. This driver is “headless”, meaning it runs without a visible browser window while still letting us manipulate all the elements on the page. Keeping in mind we wanted to stress test the system, we weren’t concerned with complicated UI scenarios. We used Selenium just to fire requests and make sure that a UI element changed, showing receipt of the response. Other test harnesses for browsers often open extra pages to capture the screen. That kind of interaction was beyond our scope: we wanted to focus solely on messages getting through. The following code section is an example of our tests.


public void RequestsForYahooWeather()
{
    // Load the list of countries whose capitals we will query.
    var countryJsonPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Configs", "countries.json");
    var jsonString = File.ReadAllText(countryJsonPath);
    List<Country> countries = JsonConvert.DeserializeObject<List<Country>>(jsonString);

    foreach (Country country in countries)
    {
        // A fresh headless browser session per request keeps the runs independent.
        var driver = new PhantomJSDriver();
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        driver.Navigate().GoToUrl("https://localhost:55794/Home/Weather");

        IWebElement cityInput = driver.FindElement(By.Id("CityName"));
        cityInput.SendKeys(country.capital);
        driver.FindElement(By.Id("getWeatherInfo")).Click();

        // The Temperature field shows "-" until the response arrives and populates it.
        Assert.NotStrictEqual("-", wait.Until(drv => drv.FindElement(By.Id("Temperature")).GetAttribute("value")));

        driver.Quit();
    }
}
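
The Country type above is a simple POCO matching the entries in countries.json. Its exact shape isn’t shown in the original test, so the sketch below is an assumption containing only the property the test uses plus a name:

public class Country
{
    // Lower-case property names mirror the assumed JSON keys in countries.json.
    public string name { get; set; }
    public string capital { get; set; }
}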

In our experience, Selenium was easy to implement and quick to perform the necessary tests. Our tests passed, demonstrating that Orbital Bus works with web applications with no negative impact compared to console apps. Selenium not only let us confirm Orbital Bus was working as intended with a web application, but it will also afford us the flexibility and performance needed to grow these tests in the future as we expand our use cases.

 

  • Maria Reyes Freaner

Getting Started with Orbital Bus

You’ve heard people talk about enterprise service buses and you think it’s time you learned how to use one. You read an awesome blog post about this new thing called Orbital Bus and you think it would be a good project to play around with. Where should you start? Let’s start here.

Understanding the Architecture

I’m sure you’ve checked out our project README, but just in case you need a refresher here’s a quick overview of how Orbital Bus works.
Everything starts with the Producer and the Consumer. The Producer produces calls into the system. These calls can be synchronous requests or asynchronous fire-and-forget messages. What’s important is that the Producer is what initiates the action. The Consumer consumes messages off the queue. Both the Producer and Consumer are external to Orbital Bus. They might be third-party services, COTS products, or custom code applications made by your developer(s).

The Orbital Connector is a library the Producer uses to get messages into the system. We have a whole project dedicated to connectors. The Connector uses RabbitMQ to pass messages to the Dispatcher. The Dispatcher listens for incoming messages, finds services via Consul, and sends messages to the Receiver via its queue. Receivers do the heavy lifting. They load custom libraries, transform messages, and use adapters to send messages to the Consumer.
Here’s a diagram to give you an idea of the general flow of information:
An overview of the Orbital Bus flow.

Getting ready

For this little test, let’s put everything on your local machine. You’ll need to prepare by installing two third-party components: Consul and RabbitMQ. We use these for service discovery and message communication respectively. If you want some help you can check out our more detailed instructions. Since Orbital Bus is ready to communicate with any RESTful web service, we’re going to use JSONPlaceholder. Feel free to check it out and get a feel for the kind of messages you want to send.

Build a Producer

The Producer is the instigator of the pipeline. It calls out using the Orbital Connector and RabbitMQ to get the Dispatcher communicating with other nodes. Since our current Orbital Connector is written in .NET, you’ll want a .NET application that references it. We have a NuGet package to make it simple. The connector offers four ways of sending: synchronous, asynchronous, awaitable synchronous, and a one-to-many route. We recommend starting with a synchronous call, as sketched below. All the producer needs is the service ID for the destination service (which you add to Consul below) and a JSON-serialized payload.
For more detailed instructions on making a Producer, check out our How-To Guide. It’s got a thorough process with code samples and everything!
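
As a rough sketch of that first synchronous call (the OrbitalConnector type and SendSync method here are illustrative placeholders, not the connector’s actual API; the How-To Guide has the real names and signatures):

    // Illustrative only: type and method names are placeholders for the real connector API.
    var connector = new OrbitalConnector();
    string payload = "{ \"customerName\": \"Ada\" }"; // a JSON-serialized payload
    string response = connector.SendSync("my-service-id", payload);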

Use Code Generation

Next we’ll set up the Consumer side. As we said above, we’re not going to bother building a web service (though you can if you really want to).
To get started you’re going to have to download the Code Generation project. We made this tool to help generate the necessary libraries for the Orbital Bus Receiver to connect with a service. All the files you work on for Code Generation are JavaScript, so your C#, Java, Python, and Ruby developers should all be able to use it. Of course we have a handy guide to making a library. When you’re done building your library, keep track of the `bin` folder in the project directory. We’re going to need all its contents.

Configure your Nodes

I know what you’re thinking: “Where’s the actual Orbital Bus?” That’s the beauty of our distributed system. The bus has no central hub to stand up. Each service has a node or nodes that live alongside it to facilitate communication.
To get our local instance up we’ll need both a Dispatcher and a Receiver node. You can download them on our release page. With the release package unzipped in a location of your choosing, you’ll want to copy over your code generation output. Remember that bin folder we told you to keep track of? Copy all its contents into the Plugins folder for the Receiver. The Receiver will pull in those libraries at runtime and then it’s ready to communicate to your web service.
You’ll also want to set the values of the configuration files to the appropriate values for your local deployment. We have a handy article about all the configuration properties. Be sure to open up any ports you’re planning on using for your Dispatcher and Receiver!

Run!

Now it’s time to set everything in motion! If your Consul and/or RabbitMQ aren’t already running, start them up. Start up your Receiver and register it with Consul. (We also have a Consul Manager tool in the release package. Check out this article to see how you can use it to register your service.) Then start up your Dispatcher and your Producer and start sending messages!
If you’ve run into any snags or want a more thorough description, check out our How-To Guide. It describes each step in detail so you can see how every part of the process should be configured.
What’s next? Try implementing a second call. Check out our other documentation, like our Handshake Diagram to better understand the paths of the messages. Maybe add another Receiver with another web service to give you an idea of multiple nodes on the network. Hopefully this test will be a small step along your long future with ESBs. Enjoy!

  • Joseph Pound

Introducing the Orbital Bus

Today, we are proud to announce the public beta launch of the Orbital Bus open source project. The Orbital Bus is a distributed Enterprise Service Bus (ESB) intended to make it easier for developers to implement smart, reusable, loosely coupled services. We believe that a peer-to-peer mesh of lightweight integration nodes provides a much more robust and flexible ESB architecture than the traditional hub/spoke approach. Please check out our public repository and documentation.

I have been working in Service-Oriented Architecture (SOA) and with Enterprise Service Buses (ESBs) for the majority of my career. I can’t even count how many debates I’ve been in on the value of implementing an ESB and how a hub/spoke architecture is more sustainable than point-to-point integrations. In fact, for a number of years, I took the architectural benefits of an ESB for granted.

By 2013, I was starting to see a pattern of large enterprise clients struggling to adopt SOA not for architectural or technology reasons, but for organizational and cultural reasons. I came to the realization that while the ESB was being sold as bringing about a certain type of architectural rigor, it could only do that if the organization was very hierarchical and centralized. The ESB is really not well suited for an IT organization made up of more distributed teams and governance structures.

We thought of a better way to solve the real-time integration problem. With the help of some funding from NRC’s IRAP program, we started development of the Orbital Bus in 2015. The goal was to solve some of the major shortcomings that we see in traditional ESBs:

Single Point of Failure – An ESB creates a single point of failure, as all common integrations must pass through it. Enterprises spend a significant amount of money to improve the availability of their ESB infrastructures (e.g., hardware redundancy, DR sites, clustering). What if the responsibility for translation and routing were pushed out to the edges, where the service providers are hosted? There would be no single point of failure in the middle. The availability of each service would be dictated purely by the service provider and the lightweight integration node sitting in front of it, which means one node going down wouldn’t impact the rest of the ecosystem.

Implementation Timelines and Cost – ESBs take a long time and a lot of money to stand up. A lot of effort is needed to design the architecture so it’s future-proof and robust enough for the entire enterprise. And then there’s the cost of the infrastructure, software licenses, and redundancy, never mind the cost of bringing in external expertise on whichever platform is being implemented. What if the platform actually promoted more organic growth? Each node is lightweight and can be stood up with no additional network zone or infrastructure. Developers are able to build decoupled interfaces between a handful of systems in a matter of days rather than months. And instead of fiddling with complex platform configurations and going through 200+ page installation guides, the ESB can be stood up with a handful of scripts, with native service bindings created in commodity languages such as C#.

Developer Empowerment – ESBs move the responsibility for creating decoupled interfaces from the developer of the service to a central ESB team. It’s no surprise that nearly every SOA program I’ve ever worked on faced significant resistance from the development teams. And let’s face it, most IT organizations are poorly equipped to handle major culture changes, and that resistance often results in the killing of a SOA program. What if the architecture actually empowered developers to build better and more abstracted interfaces rather than trying to wrestle control away from them? The platform would promote a contract-first implementation approach and generate all the boring binding and serialization code so developers can focus on the more fun stuff. Having the service interface code artifacts tied more closely to the service provider code opens up opportunities to better manage versions and dependencies through source control and CI.

We’ve had a lot of fun designing and developing this product over the last two and a half years. We are excited to offer the Orbital Bus to the community to collaborate and gather as much feedback as we can. Working with the open source community, we hope to create a more efficient and developer-centric way of integrating across distributed systems. We hope you will join us on this journey!

  • Shan Gu

Embracing the Technology Meltingpot

One of the most common objections I hear among my large enterprise and government clients when discussing adopting new technologies is “We’re a Microsoft shop so why would we look at a Java-based tool?” or open source, or Google, or Salesforce, and the list goes on.  This objection is grounded in the opinion that increasing the technology mix increases complexity, and thus increases the operational risk and cost.

However, the biggest challenge for IT executives has shifted from tightening their operational budgets to managing the constant risk of technologies becoming unsupported or having a vendor development path that no longer aligns with the enterprise’s needs.  The technology market is evolving and changing faster than ever.  Programming languages grow and wane in popularity and support over a cycle of just a couple of years; new frameworks breathe new life into technologies that were previously left to die; acquisitions can make entire enterprise platforms obsolete overnight; and new innovations are happening constantly throughout the IT stack, from networking, to virtualization, to application platforms, to business applications.

In such a rapidly changing and unpredictable environment, the best approach to managing risk (as any good investment adviser will tell you) is to diversify.

In fact, any IT organization that doesn’t have an openness to innovate and investigate new technologies will ultimately die a slow death through obsolescence and operational bloat.

Instead of being afraid of the operational complexity and cost introduced by shaking up the technology stack, IT executives should be embracing them as opportunities for their teams to develop new skills and to gain a wider perspective on how to implement solutions.  Instead of remaining within the comfortable confines and protections of a given development framework, developers should be pushed to understand how different technologies interoperate and the importance of having disciplined SDLC methodologies to deal with complex deployments.

The key to success in all of this is integration.  Developing mature integration practices like modular design, loose coupling, and standards-based interoperability ensures that new technologies can be plugged into and unplugged from the enterprise without cascading impacts on existing systems.  Disciplined SDLC methodology especially around configuration management and change control allow different technology teams to work in parallel, resulting in more efficient project delivery.

IT organizations must adopt a culture of openness and curiosity toward the inevitable changes to their technology ecosystem.  They must invest in mature and disciplined integration practices to effectively manage those changes.

  • Shan Gu

Prescriptive Governance Leads to Shadow IT

Let’s face it, to most people in IT, “Governance” is a dirty word.  That perception is not born out of an idea that IT governance is bad, but out of the reality that IT governance is badly implemented in most organizations.  When an organization confuses good IT governance with overly detailed and prescriptive IT governance, it starts to constrain rather than enable its people.  And when people feel constrained and not empowered to make decisions, they work around or against the process, which then results in the proliferation of shadow IT.

The reason for this phenomenon is that many organizations approach IT governance with a few very flawed assumptions around how software and technology projects work:

  1. Changes are bad;
  2. Consistent results are driven from consistent processes;
  3. Standardization reduces the number of decisions, which makes the process more efficient and consistent; and
  4. The measure of a good project is being on-budget and on-time.

These assumptions are fatal to any IT organization because they fail to recognize the realities of technology and the people implementing it:

  1. Technology is about change.  The whole point of implementing technology is to support and enable more rapidly changing business needs.  Add to that the speed of technology change, and the default position of any good IT governance process should be to assume constant change and deal with it head on, instead of trying to contain and avoid it.
  2. Speaking of change, there is no way for a one-size-fits-all process to anticipate all the different ways a technology project can unfold.  In fact, the more prescriptive a process is, the less likely it will fit the next project.  There are simply too many variables and moving targets in modern enterprise IT projects.
  3. You hired smart people to implement technology and guess what?  Smart people like to make decisions and feel ownership of their work.  By over-standardizing, talented tech resources are turned into the IT equivalent of assembly line workers.  At best they become disengaged and stale in their skills.  But more likely, they take matters into their own hands and create opportunities to make decisions or fight the governance process to retain some ownership of the work they’re being asked to do.
  4. IT initiatives exist to make your business better and the users happier.  While budget, scope, and schedule are important, they’re management measures on the process rather than whether a project was truly successful.

So how do we fix this?  In a word: simplify!  Here are some things to think about when slimming down your IT governance process:

  1. Reduce and align the number of gates to the number of “point-of-no-return” decisions on a project (e.g., business case, functional design, technical design, go-live).
  2. For each gate, focus on what kinds of decisions need to be made, guidance on people who should be involved, and some basic examples of information that should be provided as input.  Let the smart people do what they’re being paid to do, which is talk it out and own the decision.
  3. Standardize only the mundane and highly repeatable decisions.  Standards are about helping speed up the process and focusing the effort of only debating things that deserve to be debated.  It’s not about compliance metrics and enforcement.  If you have to put in an exception or exemption process, you’ve gone too far.
  4. Ensure communications on what the project will deliver in terms of functionality and value.  Most stakeholders care a lot more about whether a particular feature set is being implemented for their users rather than whether a particular deliverable is green or yellow.

In the end, this is about creating a process that helps focus people on the real objectives of the business and fosters communications.  It’s about assuming that people are intelligent, reasonable, and capable of making good decisions when given the opportunity.  And if that turns out not to be the case, it’s an HR problem and not something that should be fixed with yet more governance processes.

  • Shan Gu

Why .NET Doesn’t Have to be Expensive

.NET is a proven and mature framework with great benefits; however, it is often overlooked when companies are deciding on a language and framework. Many developers remember the Microsoft of old, where you were immediately stuck with proprietary frameworks and Microsoft-only products with high initial costs and outrageous scaling overhead. Fortunately for the industry, Microsoft is taking a sharp turn away from proprietary restrictions and moving towards open source.

Let’s examine CheapMumble, a project which has been successfully deployed using .NET with no licensing costs. I’ll take a look at the frameworks, software, and hosting that have been used to make this project successful. I’ll also explore other options and the future of .NET in the open source world.

The CheapMumble Project

To understand what CheapMumble does, you first need to understand what Mumble is. From the Mumble wiki: “Mumble is an open source, low-latency, high quality voice chat software primarily intended for use while gaming.” CheapMumble is simply a cheap hosting solution for Mumble servers.

Take a look at the software stack used to create CheapMumble.

Front End

Razor (Open Source)

The beloved view-rendering engine of ASP.NET MVC has been open sourced and freely available for some time.

Application Tier

.NET Framework 4.5

The same framework you read about or are familiar with, including async/await, LINQ, and all the other features.

Mono (Open Source)

The team was able to use the framework by choosing the ever-growing Mono project. At the time this article was written, Mono had just released version 3.6.0. If you want to know about compatibility, take a look here.

Nancyfx (Open Source)

Nancy is the web framework chosen to drive CheapMumble. You may never have heard of it, but it’s a full-featured web framework ready to be used in any of your next web projects. The great thing about Nancy is the firm dedication to supporting Mono. Take a look at their blog to learn more and see what they are up to.

Entity Framework (Open Source)

Don’t compromise on your data access. Use the best ORM out there (and yes, I’ll fight you on that). Entity Framework has been open source for a long time and has great support under Mono. LINQ your hearts out on your next project.

Backend

MySQL (Open Source)

Entity Framework allows you to connect to any relational database you please, including MySQL via the .NET Connector. Setup is easy, and you will forget about your database while using code-first features and strong object-relational models.
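
As a small illustrative code-first sketch (the entity and context names here are made up, and “MumbleDb” is assumed to name a MySQL connection string in App.config):

    using System.Data.Entity;

    public class MumbleServer
    {
        public int Id { get; set; }
        public string HostName { get; set; }
        public int Port { get; set; }
    }

    public class MumbleContext : DbContext
    {
        // "MumbleDb" refers to a connection string configured for the MySQL provider.
        public MumbleContext() : base("MumbleDb") { }

        public DbSet<MumbleServer> Servers { get; set; }
    }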

Software during development can be a substantial cost if you’re not careful, especially when you consider the cost of Visual Studio Ultimate MSDN subscriptions. Visual Studio is the best development IDE out there, but do you really need all its features? Let’s take a look at some cheaper alternatives.

Visual Studio Express

Free Visual Studio! What could go wrong? I’d love to tell you this is the solution to all your problems. It isn’t. They have, however, added a lot to the Express editions over the years: support for multi-project solutions, unit testing, NuGet, and code analysis. Trying to find the limitations online was not easy, and I didn’t find a reliable source. I would recommend giving it a shot and seeing what happens. It could very well be all your team needs.

Xamarin / MonoDevelop

MonoDevelop has evolved into Xamarin, whether on Windows or Mac. Don’t get scared by the price tag: the only cost for Xamarin is when you want to compile source code to work with Android or iOS in a closed-source application. This means that all web applications can be developed free of charge on Xamarin.

Sublime Text

Wait, really? Though Sublime isn’t a full, feature-rich IDE, it is still a very strong candidate for a lot of developers. Recently on the Nancyfx blog they went through a tutorial on setting up Sublime to work with ASP.NET development.

With these technologies, the CheapMumble team was able to develop and deploy their software on whatever platform they saw fit. The best part was that no licensing cost was required.

The future of open source on the .NET framework is bright. Everything in this post works today, and tomorrow there will be even more. Recently, Microsoft unveiled ASP.NET vNext with a large amount of the software being open source. A great rundown of features was given by Scott Hanselman in his post Introducing ASP.NET vNext. The most exciting part is at the end:

ASP.NET vNext (and Roslyn) runs on Mono, on both Mac and Linux today. While Mono isn’t a project from Microsoft, we’ll collaborate with the Mono team, plus Mono will be added to our test matrix. It’s our aspiration that it “just work”.

The future for ASP.NET development is clear: Open Source and CHEAP!

  • Dan McCrady

Fakes and Shims Generic Methods

Working with Microsoft’s Fakes and Shims is really a treat. I love how easy they are to set up and how I can be sure that I can test anything I need.

Problem:

Lately I have been trying to test generic methods of an interface. It doesn’t seem like it should be a challenge, since nothing else in the framework is, but I quickly found out it wasn’t as simple as I thought.

Take this interface as an example:

public interface ISampleInterface
{
    T GetMySample<T>();
}

When you add the Fakes assembly, a stub class will be generated that allows you to apply functions to each of the interface members. However, you will notice the expected member is missing. Instead, you will get something that looks like the following:

(Screenshot: the generated stub class, with no member for the generic method.)

The issue is the generic type in the interface’s member definition. The stubs don’t know which method type to create for you. They could try to generate a method for every implementation of T, but that seems to be a bit much.

Solution:

Instead, they give you a way of defining the type after the creation of the stub. You will see a new method on the stub called GetMySampleOf1. This is the method you will use to give a definition to the generic method. Implementation of the stub will look like the following:

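A rough reconstruction of that setup (assuming the generated stub class is named StubISampleInterface, following Fakes’ stub naming convention; the exact registration signature may differ):

var stub = new StubISampleInterface();

// Give the generic method a concrete behaviour for each closed type we need.
stub.GetMySampleOf1<string>(() => "sample");
stub.GetMySampleOf1<int>(() => 42);
stub.GetMySampleOf1<bool>(() => true);
stub.GetMySampleOf1<DateTime>(() => DateTime.Now);

ISampleInterface sample = stub;
string text = sample.GetMySample<string>(); // returns "sample"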

You can see that I have implemented the interface for four different data types: string, integer, Boolean, and DateTime. This makes it really easy to implement interface members with generic type arguments.

  • Dan McCrady