Embracing the Technology Melting Pot

One of the most common objections I hear from my large enterprise and government clients when discussing the adoption of new technologies is “We’re a Microsoft shop, so why would we look at a Java-based tool?” Swap in open source, Google, or Salesforce, and the list goes on.  This objection is grounded in the opinion that increasing the technology mix increases complexity, and thus operational risk and cost.

However, the biggest challenge for IT executives has shifted from tightening their operational budgets to managing the constant risk of technologies becoming unsupported or of a vendor’s development path no longer aligning with the enterprise’s needs.  The technology market is evolving and changing faster than ever.  Programming languages wax and wane in popularity and support over a cycle of just a couple of years; new frameworks breathe new life into technologies that were previously left for dead; acquisitions can make entire enterprise platforms obsolete overnight; and new innovations are happening constantly throughout the IT stack, from networking, to virtualization, to application platforms, to business applications.

In such a rapidly changing and unpredictable environment, the best approach to managing risk (as any good investment adviser will tell you) is to diversify.

In fact, any IT organization that isn’t open to innovating and investigating new technologies will ultimately die a slow death through obsolescence and operational bloat.

Instead of fearing the operational complexity and cost introduced by shaking up the technology stack, IT executives should embrace them as opportunities for their teams to develop new skills and gain a wider perspective on how to implement solutions.  Instead of remaining within the comfortable confines and protections of a given development framework, developers should be pushed to understand how different technologies interoperate and why disciplined SDLC methodologies matter when dealing with complex deployments.

The key to success in all of this is integration.  Mature integration practices like modular design, loose coupling, and standards-based interoperability ensure that new technologies can be plugged into and unplugged from the enterprise without cascading impacts on existing systems.  A disciplined SDLC methodology, especially around configuration management and change control, allows different technology teams to work in parallel, resulting in more efficient project delivery.

IT organizations must adopt a culture of openness and curiosity toward the inevitable changes to their technology ecosystem.  They must invest in mature and disciplined integration practices to effectively manage those changes.

  • Shan Gu

Prescriptive Governance Leads to Shadow IT

Let’s face it: to most people in IT, “governance” is a dirty word.  That perception is not born out of an idea that IT governance is bad, but out of the reality that IT governance is badly implemented in most organizations.  When an organization confuses good IT governance with overly detailed and prescriptive IT governance, it starts to constrain rather than enable its people.  And when people feel constrained and not empowered to make decisions, they work around or against the process, which results in a proliferation of shadow IT.

The reason for this phenomenon is that many organizations approach IT governance with a few very flawed assumptions around how software and technology projects work:

  1. Changes are bad;
  2. Consistent results are driven by consistent processes;
  3. Standardization reduces the number of decisions, which makes the process more efficient and consistent; and
  4. The measure of a good project is being on-budget and on-time.

These assumptions are fatal to any IT organization because they fail to recognize the realities of technology and of the people implementing it:

  1. Technology is about change.  The whole point of implementing technology is to support and enable rapidly changing business needs.  Add to that the speed of technology change itself, and the default position of any good IT governance process should be to assume constant change and deal with it head on instead of trying to contain and avoid it.
  2. Speaking of change, there is no way for a one-size-fits-all process to anticipate all the different ways a technology project can unfold.  In fact, the more prescriptive a process is, the less likely it will fit the next project.  There are simply too many variables and moving targets in modern enterprise IT projects.
  3. You hired smart people to implement technology, and guess what?  Smart people like to make decisions and feel ownership of their work.  Over-standardizing turns talented tech resources into the IT equivalent of assembly-line workers.  At best, they become disengaged and stale in their skills.  More likely, they take matters into their own hands and create opportunities to make decisions, or fight the governance process to retain some ownership of the work they’re being asked to do.
  4. IT initiatives exist to make your business better and your users happier.  While budget, scope, and schedule are important, they’re management measures of the process rather than of whether a project was truly successful.

So how do we fix this?  In a word: simplify!  Here are some things to think about when slimming down your IT governance process:

  1. Reduce and align the number of gates to the number of “point-of-no-return” decisions on a project (e.g., business case, functional design, technical design, go-live).
  2. For each gate, focus on what kinds of decisions need to be made, guidance on who should be involved, and some basic examples of the information that should be provided as input.  Let the smart people do what they’re being paid to do, which is talk it out and own the decision.
  3. Standardize only the mundane and highly repeatable decisions.  Standards are about speeding up the process and focusing effort on debating only the things that deserve to be debated.  They’re not about compliance metrics and enforcement.  If you have to put in an exception or exemption process, you’ve gone too far.
  4. Ensure communication about what the project will deliver in terms of functionality and value.  Most stakeholders care far more about whether a particular feature set is being implemented for their users than about whether a particular deliverable is green or yellow.

In the end, this is about creating a process that focuses people on the real objectives of the business and fosters communication.  It’s about assuming that people are intelligent, reasonable, and capable of making good decisions when given the opportunity.  And if that turns out not to be the case, it’s an HR problem, not something that should be fixed with yet more governance processes.
  • Shan Gu

Announcing Foci Solutions

A little over three years ago, I had the great fortune to reconnect with an old friend from my university days. I lured him away from his well-paying and stable position at one of the Big 5 consulting firms to help me incubate an Integration practice focused on helping large enterprise clients connect their various COTS investments.

Since convincing him to recklessly quit his job and join BoldRadius, Shan and I have been through a lot of ups and downs. His relentless focus on operational and delivery excellence and his sharp strategic mind are a strong complement to my ability to create a strong culture and set up structures for success. Throughout the time we’ve worked together, I’ve learned to be objective, fair, and direct. And my entrepreneurial approach to getting initiatives off the ground has rubbed off on him. Lean, Agile, and Kanban replaced the large and heavy institutions of Waterfall and PMI.

We looked at the market around us and saw that we were building something special, something that landed neatly between the armies of independent consultants and the giant multi-billion dollar consultancies. We have been able to fully leverage our entrepreneurial approach in combination with our experience in large-scale IT implementation to cut through the noise on enterprise IT transformation programs and to focus on the core actions needed to drive them forward. We have become that small tactical team that could help our clients get out of the infinite spin of analysis and just do something; to move boldly forward instead of being paralyzed with fear when staring down a massively complex problem.

The Integration practice we built within BoldRadius has attracted some amazing talent and has established a strong reputation. Through trial and many errors, we’ve learned what we need to do to secure and maintain enduring relationships built on results for our clients. Finally, we’ve established solid financial footing and the ability to invest in furthering our success.

The Integration business has matured – it needs focus, direction, independence and talent. It’s time for it to spread its wings and take its own path under a new structure, new brand and a new name – Foci Solutions.

Speaking for Shan, myself and the team, we’re excited about what the next chapter holds for Foci. We’re looking forward to solidifying our success and expanding into new areas. We’re relishing the possibilities of new client interactions around better, more mature ways to manage IT and we’ve got big plans to build capabilities that don’t currently exist for IT teams.

Keep an eye on this company – it’s going places.

  • Mike Kelland

Why .NET Doesn’t Have to be Expensive

.NET is a proven and mature framework with great benefits; however, it is often overlooked when companies are deciding on a language and framework. Many developers remember the Microsoft of old, where you were immediately locked into proprietary frameworks and Microsoft-only products with high initial costs and outrageous scaling overhead. Fortunately for the industry, Microsoft is taking a sharp turn away from proprietary restrictions and moving towards open source.

Let’s examine CheapMumble, a project that has been successfully deployed using .NET with no licensing costs. I’ll take a look at the frameworks, software, and hosting that have been used to make the project successful. I’ll also explore other options and the future of .NET in the open source world.

The CheapMumble Project

To understand what CheapMumble does, you first need to understand what Mumble is. From the Mumble wiki: “Mumble is an open source, low-latency, high quality voice chat software primarily intended for use while gaming.” CheapMumble is simply a cheap hosting solution for Mumble servers.

Take a look at the software stack used to create CheapMumble.

Front End

Razor (Open Source)

The beloved view-rendering engine of ASP.NET MVC has been open sourced and freely available for some time.

Application Tier

.NET Framework 4.5

The same framework you read about or are already familiar with, including async/await, LINQ, and all the other features.
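For instance, a snippet like this (purely illustrative) should compile and run the same under Mono as on the full .NET Framework:

using System;
using System.Linq;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        RunAsync().Wait();
    }

    // async/await and LINQ work identically under Mono
    static async Task RunAsync()
    {
        var evens = Enumerable.Range(1, 10).Where(n => n % 2 == 0);
        await Task.Delay(100); // stand-in for real asynchronous I/O
        Console.WriteLine(string.Join(", ", evens));
    }
}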

Mono (Open Source)

The team was able to use the framework by choosing the ever-growing Mono project. At the time this article was written, Mono had just released version 3.6.0. If you want to know about compatibility, take a look here.

Nancyfx (Open Source)

Nancy is the web framework chosen to drive CheapMumble. You may never have heard of it; however, it’s a full-featured web framework ready to be used in any of your next web projects. The great thing about Nancy is its firm dedication to supporting Mono. Take a look at their blog to learn more and see what they are up to.
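To give a feel for the framework, here’s a minimal sketch of a module using Nancy’s 1.x routing API; the module and route names are illustrative, not taken from CheapMumble:

using Nancy;

// Nancy discovers NancyModule subclasses automatically and wires up
// the routes declared in the constructor
public class GreetModule : NancyModule
{
    public GreetModule()
    {
        // route segments bind to the dynamic 'parameters' object
        Get["/greet/{name}"] = parameters =>
            "Hello, " + parameters.name + "!";
    }
}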

Entity Framework (Open Source)

Don’t compromise on your data access. Use the best ORM out there (and yes, I’ll fight you on that). Entity Framework has been open source for a long time and has great support under Mono. LINQ your hearts out on your next project.

Backend

MySQL (Open Source)

Entity Framework allows you to connect to any relational database you please, including MySQL via the .NET Connector. Setup is easy, and you can all but forget about the database while using code-first features and strongly typed object models.
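As a rough sketch of that code-first style (the entity, context, and connection-string name below are illustrative assumptions, not CheapMumble’s actual schema):

using System.Data.Entity;

// EF maps this class to a table by convention; no mapping files needed
public class MumbleServer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Port { get; set; }
}

public class MumbleContext : DbContext
{
    // "MumbleDb" is a hypothetical connection-string name; pointing it at MySQL
    // means referencing the MySql.Data.MySqlClient provider from the .NET Connector
    public MumbleContext() : base("name=MumbleDb") { }

    public DbSet<MumbleServer> Servers { get; set; }
}

With that in place, queries are plain LINQ, e.g. new MumbleContext().Servers.Where(s => s.Port == 64738).ToList().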

Software during development can be a substantial cost if you’re not careful, especially when you consider the cost of Visual Studio Ultimate MSDN subscriptions. Visual Studio is the best development IDE out there, but do you really need all of its features? Let’s take a look at some cheaper alternatives.

Visual Studio Express

Free Visual Studio! What could go wrong? I’d love to tell you this is the solution to all your problems. It isn’t. They have, however, added a lot to the Express editions over the years: multi-project solutions, unit testing, NuGet, and code analysis. Trying to find the limitations online was not easy, and I didn’t find a reliable source, so I would recommend giving it a shot and seeing what happens. It could very well be all your team needs.

Xamarin / MonoDevelop

MonoDevelop has evolved into Xamarin, on both Windows and Mac. Don’t be scared off by the price tag: you only pay for Xamarin when you want to compile source code for Android or iOS in a closed-source application. This means all web applications can be developed on Xamarin free of charge.

Sublime Text

Wait, really? Though Sublime isn’t a full, feature-rich IDE, it is still a very strong candidate for a lot of developers. Recently, the Nancyfx blog ran a tutorial on setting up Sublime to work with ASP.NET development.

With these technologies, the CheapMumble team was able to develop and deploy their software on whatever platform they saw fit. The best part was that no licensing cost was required.

The future of open source on the .NET Framework is bright. Everything in this post works today, and tomorrow there will be even more. Recently, Microsoft unveiled ASP.NET vNext, with a large amount of the software being open source. A great rundown of the features was given by Scott Hanselman in his post Introducing ASP.NET vNext. The most exciting part is at the end:

ASP.NET vNext (and Roslyn) runs on Mono, on both Mac and Linux today. While Mono isn’t a project from Microsoft, we’ll collaborate with the Mono team, plus Mono will be added to our test matrix. It’s our aspiration that it “just work”.

The future for ASP.NET development is clear: Open Source and CHEAP!

  • Dan McCrady

Fakes and Shims Generic Methods

Working with Microsoft’s Fakes and Shims is really a treat. I love how easy they are to set up and how I can be sure that I can test anything I need.

Problem:

Lately I have been trying to test generic methods on an interface. That didn’t seem like much of a challenge (nothing else in the framework is), but I quickly found out it wasn’t as simple as I thought.

Take this interface as an example:

public interface ISampleInterface
{
    T GetMySample<T>();
}

When you add the Fakes and Shims assembly, a class is generated that allows you to apply functions to each of the interface members. However, you will notice the expected member is missing. Instead, you will get something that looks like the following:

[Screenshot: the generated stub class, with no member created for GetMySample<T>]

The issue is the generic type in the interface member’s definition. The stubs don’t know which method type to create for you. The framework could try to generate a method for every implementation of T, but that seems a bit much.

Solution:

Instead, you are given a way of defining the type after the creation of the stub. You will see a new method on the stub called GetMySampleOf1. This is the method you will use to give a definition to the generic method. Implementation of the stub will look like the following:

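Here’s a sketch of that wiring; StubISampleInterface is Fakes’ default generated name for a stub of ISampleInterface, and the sample values are my own illustrations:

// arrange: give the generic method a definition per closed type
var stub = new StubISampleInterface();
stub.GetMySampleOf1<string>(() => "sample text");
stub.GetMySampleOf1<int>(() => 42);
stub.GetMySampleOf1<bool>(() => true);
stub.GetMySampleOf1<DateTime>(() => new DateTime(2014, 1, 1));

// consumers just see a normal ISampleInterface
ISampleInterface sample = stub;
string s = sample.GetMySample<string>();   // "sample text"
int i = sample.GetMySample<int>();         // 42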

You can see that I have implemented the interface for four different data types: string, integer, Boolean, and DateTime. This makes it really easy to implement interface members with generic type arguments.

  • Dan McCrady

Automating Builds with TFS Build Definitions

My latest project utilized automated builds to speed up the deployment life cycle. We run an agile process and wanted to leverage continuous integration to help the product owners see new features as soon as possible. What follows is a quick walkthrough of the steps to get a basic build definition set up using TFS and Visual Studio.

Before we get started, we need to ensure some prerequisites are in place. Make sure you have the following:

  1. TFS installed with a Team Project and your project source code checked in.
  2. Build Controller installed with a Build Agent. More information on this can be found here.
  3. Ensure all software required for the build is installed on the same machine as the Build Agent, including Visual Studio.
  4. Visual Studio must also be installed on your local development environment.
  5. A network share location to place the code drops (i.e. built code).
  6. A domain is not required but is highly recommended for secure access to network locations.

The example that follows uses TFS 2012 and Visual Studio 2012; however, this process will most likely apply to future versions as well (I have seen no differences using either TFS 2013 Preview or Visual Studio 2013).

To get started I first need to create a new build definition. This will be the basis for how a build is completed. In order to create a new build definition, I open Visual Studio, navigate to Team Explorer, and then to the Builds section.

In the figure below, I don’t have any build definitions for this project. However, I can change that by selecting “New Build Definition” from the Builds menu.


Figure 1 – Team Explorer – Builds Tab

A new menu will appear; this is where we configure our build definition. First, in the General tab, give your build definition a name and an optional description. I’m going to call mine “Blog Build Definition CI” and give it the description “Sample build definition for my blog post”. A “Queue Processing” option must be selected as well; we are going for a continuous integration setup, so select “Enabled”.


Figure 2 – New Build Definition – General Tab

Next we need to select the trigger for the build definition, found in the Trigger tab. Here we want the best option for a continuous integration setup. Depending on your team or situation, you may opt for “Rolling builds” or “Gated Check-in”, but “Continuous Integration” is likely adequate for most environments.


Figure 3 – New Build Definition – Trigger Tab

Then we move on to Source Settings. Source Settings can be a little tricky, but it’s straightforward once you understand what is being asked. Source Control Folder selects the source the build will act on; you can see that I selected my “BlogBuildDefinitionProject” folder. Build Agent Folder can be a bit confusing. During the build, the controller grabs the source code files from TFS and sends them to the build agent; the Build Agent Folder is the location on the Build Agent’s server where those files land. “$(SourceDir)” is a variable used by TFS as a starting point for source drops. The article List of Variables Like $(SourceDir) gives a good explanation of the variable:

$(SourceDir) – Expands to $(BuildDir)\Sources by default

The directory “Sources” is not hard-coded and may be changed by modifying the TfsBuildService.exe.config file on the build agent. If you open that file there will be an application setting called “SourcesSubDirectory”. If you need a shorter path you may change this key to something like “s” instead of “Sources”. If you made this change then the $(SourceDir) variable would expand to $(BuildDir)\s
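To illustrate the quoted advice, the change would look something like this in TfsBuildService.exe.config on the build agent (a sketch showing only the relevant setting):

<configuration>
  <appSettings>
    <!-- shortens $(SourceDir) from $(BuildDir)\Sources to $(BuildDir)\s -->
    <add key="SourcesSubDirectory" value="s" />
  </appSettings>
</configuration>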

For the purposes of this example I only have one solution to build, so keeping the default is fine. However, if you want to build multiple solutions, each location will need its own source directory. You should keep all the source locations prefixed with “$(SourceDir)” and append them with “/project1” or, more likely, the name of your solution.


Figure 4 – New Build Definition – Source Settings

Build Defaults is next, and it is a fairly simple screen. We need to select the Build Controller we are going to use. I defined my controller very quickly (see #2 in the list of prerequisites at the top); you can see it is named “TeamFoundation” and has no description. Most likely you will only have one controller. Under “Staging Location” you will only have two options, unless you are creating a build definition for Team Foundation Service, where a third option is presented. That third option is out of scope for this article, so we will focus on the two that are given for full-featured Team Foundation Server installations. The first option is used when your build process, for whatever reason, doesn’t need to copy files to a drop location. The second is the standard option and requires you to enter the network address of the network share you created earlier (see #5 in the list of prerequisites at the top). Remember that the Build Agent configured for use with TFS will need full access to this folder.


Figure 5 – New Build Definition – Build Defaults

The Process section is the meat of your build definition. This is where you dictate the steps and set up a “Build Template”. There is a lot that could be explained here, especially around creating build templates; custom build processes are something I want to cover in a future post, so they will be skipped over for now. The default template created whenever a new Team Project is created is more than suitable for a basic continuous integration deployment. The default build process template is pre-selected for you; however, you can select “New…” to start creating your own. The most important part is to select the Items to Build.

For this example (see the screenshot below) I selected the solution of my “BlogBuildDefinitionProject”. The defaults for Automated Tests are normally adequate to get tests run before your code is built on the server; however, you can also define the string that is used to find your test projects. It’s important to realize that with the default, any .dll file with the word “test” in its file name will be searched for test classes and methods, and those tests will then be run. Furthermore, if your test project names don’t already contain the word “test”, you will need to either alter this string to something common to all your test projects’ names, or rename your test projects.


Figure 6 – New Build Definition – Process

Retention Policy really isn’t very important; it’s just a definition of how long certain types of builds should be kept. Depending on how the build is triggered, you can keep those files for a specified amount of time. Normally the defaults are sufficient for any project you have on the go.


Figure 7 – New Build Definition – Retention Policy

Make sure you save; upon saving successfully, you will see in your Team Explorer Builds tab that you now have a build definition present.


Figure 8 – Team Explorer – Builds with Definition

The next time you check in your code, you will see a build under “My Builds”. Give it a few minutes and you will get a log of the events that transpired.


Figure 9 – Team Explorer – Builds New Build

Building up the perfect deployment solution can take time; however, once it’s done, you will be able to create new builds in seconds. It really is great and has saved many hours getting products to launch.

If you have any questions, or want to chat more about this, contact us today – we’d love to hear from you!

  • Dan McCrady

The Quiet Evolution of SOA

We’ve all found ourselves looking at an organization’s web services and commenting that “it’s not really SOA”. Maybe the program still maintains point-to-point interfaces, or maybe the organization hasn’t put in place any form of governance, but for whatever reason, we declare that it simply isn’t comprehensive enough to be considered SOA. That begs the question: who is actually doing TRUE enterprise-wide SOA? Well… very few organizations. Anne Thomas Manes famously declared that “SOA is dead” back in 2009. So why is it that we still find ourselves evangelizing and building towards this vision?

The answer is that our understanding of what makes an SOA program successful has quietly evolved over the last few years. Enterprise-wide re-platforming and re-architecture initiatives gave way to tactical adoption of SOA. The success of SaaS and BPM adoption means that organizations are implementing the principles of service orientation without explicitly calling it an SOA program. And instead of trying to figure out how to effectively measure SOA ROI at the enterprise level, much greater success has been found measuring the value created within a given portfolio and/or capability.

So while we Architects have not given up on the hope of achieving SOA utopia, we have become more realistic in our approach:

  1. Identify a very specific problem to solve with an SOA approach, be it reducing the time-to-market of a frequently changing business process or shrinking the application footprint of a given line of business.
  2. Demonstrate the value of SOA by successfully solving that problem.
  3. Rinse and repeat.

At the end of the day, any plan for enterprise-level SOA can only be built on a critical mass of successful, self-sustaining SOA capabilities/portfolios.

  • Shan Gu

Fractal Governance

SOA landscapes today look very much like fractals. An organization may have several internal capabilities presented as reusable services that connect to each other. It may even connect to 3rd-party and/or cloud-based services. But if you drill down into each of these services, you’ll likely find a composite application made up of several finer-grained services interconnected together. And as a math geek, I am naturally curious about all things related to fractals.

In fact, the fractal pattern appears in almost anything that’s responsible for connecting things together: highway and road systems, power grids, the internet… the list goes on. In all of these systems, there exists a hierarchical system of management and governance to regulate functionality. Each country, for example, has national standards and regulatory bodies that define how power is to be exchanged, managed, and consumed. At the regional level, there are additional standards and regulatory bodies that deal with region-specific decisions such as how much power to generate, pricing, and what equipment is to be installed where. Similar structures hold for transportation and telecommunications. So why is it that most organizations see SOA governance as an all-encompassing, enterprise-wide responsibility?

The interaction requirements and lifecycle characteristics of enterprise-level composite services or business processes are very different from those of a utility service. Painting the entire enterprise service landscape with a uniform set of standards and processes will result in either a high number of exceptions or a lowest-common-denominator scenario. To be effective, an organization’s SOA governance model must match its SOA deployment model. The governance model must exist not just at the top, but at a granularity that matches how the services are being deployed and managed. Service Portfolio Managers, then, are not just another role within the governance model, but micro versions of governance domains themselves. They must be allowed to define their own standards and processes appropriate to the specific services they’re responsible for. The SOA governance model for the enterprise must consider which standards and processes are appropriate for all services, which are appropriate only for the ones being consumed across the enterprise, and which should be left up to the Portfolios to govern themselves.

  • Shan Gu

Introduction to SOA Development

As a hacker, I get a huge rush out of solving problems. It’s like a game for me, with the goal always being to find bigger and better dragons to slay…

Unfortunately, we all know that’s not always the case in our line of work! Usually, it’s some interesting problem hidden among piles of scut work. You know… that repetitive stuff that pays the bills but makes you wonder if all the good problems have already been solved. That goes double if your client happens to be a big enterprise.

If you work for a company or have a client that has (or needs) a large IT infrastructure, you’ve probably heard of Service-Oriented Architecture (SOA) before. You might have attended a meeting where a consultant recommended such a system, or even read up on it yourself. You might have noticed that it involves a bunch of mundane tasks like writing XML transforms and WSDLs. I’ve been lucky enough to have been given a brief introduction to Oracle SOA Suite and the accompanying JDeveloper tool, which takes care of the tedious stuff and lets you focus on what matters: writing awesome code to solve interesting problems and look like a hero in your clients’ eyes. Of course, Oracle/JDeveloper is only one of many packages out there for quickly deploying SOA apps. Other examples include TIBCO, IBM WebSphere, and OpenESB.

This post isn’t some elaborate HowTo or tutorial, but a brief intro to SOA from a programmer’s perspective, demonstrating how, with the right tools, you can have rich, reusable services up and running in a very short amount of time! The potential for this stuff is huge, and like it or not, it’s here to stay. Might as well go whole hog!

Zero to Hero in 30 minutes or less.

The following is just a quick look at what can be done using JDeveloper. Like I said before, this is just one way of skinning the SOA cat. There are tons of other tools out there; some of them are even open source. I’ll be posting some more elaborate tutorials, demos, and HowTos later on.

The Dreaded XML Transform

XML transforms are possibly some of the most boring work out there. Most of the time, you’re mapping an input from a web service to an object you’re then going to pass on to a database layer or some other service. Using JDeveloper, you can skip writing XSDs and XSLTs, and just draw a few quick diagrams.

[Figure: an XML transform mapping diagram in JDeveloper]

Business Process Execution Language

BPEL is how more complex SOA processes are implemented, usually by means of long, boring XML definitions. Again, JDeveloper abstracts these into simple-to-read diagrams resembling flow charts:

[Figure: a BPEL process diagram in JDeveloper]

Putting it all together

The transforms and BPEL modules you create using JDeveloper are then used by a composite: a high-level definition of what comes in from the outside and where it gets routed. Every module is configurable and can route data in and/or out based on defined conditions (more on this in future posts).

[Figure: a composite diagram in JDeveloper]

In Closing

This is just a (very) brief intro to what the right tools can do to make your mundane development tasks easier and faster to complete. Your mileage may vary depending on which package/platform you use, but they can all shave hours or even days off your service development cycle. And after all, that’s what matters when you want to get back to hacking on that brilliant solution to the interesting problem you haven’t had time to focus on!

  • André Racicot

Is Service Oriented Architecture (SOA) Still a Thing?

The Oldest Buzzword Around

Service Oriented Architecture (SOA) isn’t a new concept by any means.

It’s practically a decade old and, in IT years, that’s beyond the useful lifespan of just about any buzzword. And that’s the problem: as a buzzword, SOA never attained the same level of popularity as Cloud or Big Data. The concept of SOA was nebulous, and how an organization could achieve SOA was even more unclear.

Vendors were pitching everything from a simple asynchronous messaging infrastructure to a full-blown process automation and orchestration suite as the “cornerstone of enterprise SOA”. Further confusion was caused by product vendors trying to differentiate their offerings by pushing the importance of interoperability standards (i.e., WS-*), claiming that competing products weren’t truly “SOA” for one reason or another.

This confusion created just as much negative stigma around the term SOA as positive sentiment. While product marketing folks were focused on the debate over just what is and isn’t SOA, Architects were quietly picking and choosing the SOA concepts they liked and evolving their enterprises’ IT landscapes.

SOA – Still Alive & Kickin’

Fast forward to today.

RESTful services have fully taken over as the web service integration style of choice on the Internet, relegating SOAP to internal enterprise interactions and transactions considered “low throughput”. JSON has similarly gained traction over XML, thanks to the movement towards mobile computing and a renewed focus on making interfaces as lean, byte-wise, as possible. No one thinks twice about decoupling the UI from the business logic and integrating through a set of web service calls. And asynchronous messaging is practically the status quo method of propagating large amounts of data across the enterprise.

So yes, the key SOA concepts of:

  1. Developing applications that promote reuse
  2. Decoupling functional application components to improve flexibility and agility
  3. Standardizing the way interfaces are described and interacted with to promote predictable and consistent integrations

are more prominent than ever. Exposing Big Data stores as RESTful services is one of the most popular ways of integrating with these technologies. And the SOA concepts of abstraction, service contracts, and reuse are at the foundation of SaaS solutions.

It Just Makes Sense

SOA is at a level of maturity where it no longer benefits from having its own buzzword.

After all, you don’t see organizations advertising that they’re a client-server shop or that they’re prolific adopters of web architecture to differentiate themselves in 2013. We’re at a point where the sound architecture principles put forward by the proponents of SOA nearly a decade ago have become just good architecture practice.

The conversations today with IT executives should no longer be “Should you adopt SOA?” but “What should you do to better address reuse, flexibility, and consistency within your enterprise?”

  • Shan Gu