Nullable reference types in C# 8.0

Nullable reference types are a new feature in C# 8.0. They allow you to spot places where you’re unintentionally dereferencing a null value (or not checking it). You may have seen these kinds of checks performed before C# 8.0 by ReSharper’s Value and Nullability Analysis checks.

These are potential sources of bugs that can cause application crashes and NullReferenceExceptions. The C# 8.0 compiler supports nullable reference types: a reference type that ends with a “?” (such as string?) may be null, while any type without the “?” is a non-nullable reference type that the compiler assumes is never null. The compiler uses flow analysis to warn you when you dereference a nullable value without first checking it for null. In this article, I will explain how you can use nullable reference types to help make your code less prone to NullReferenceExceptions, and easier for consumers of your APIs to use correctly.
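
For example, here is a minimal sketch (the variable names are mine) of how the compiler treats the two forms:

#nullable enable

string name = "Alice";   // non-nullable: the compiler assumes this is never null
string? nickname = null; // nullable: may be null

int a = name.Length;     // fine: name can't be null
int b = nickname.Length; // warning CS8602: dereference of a possibly null reference

if (nickname != null)
{
    int c = nickname.Length; // no warning: flow analysis knows nickname isn't null here
}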

Null attributes

There are also a few attributes you can use to describe the nullability of arguments and return values in null-related code. These attributes extend the nullable annotations and allow the compiler to make more judgements:

  • AllowNull, the argument could be null, even if the type doesn’t allow it. For example, we may assign null to a string property inside its setter even though the property’s type is a non-nullable string, because the getter substitutes a default value whenever null is assigned (see the sketch after this list). This is a precondition.
  • DisallowNull, the argument must not be null, even if the type allows it. This is a precondition.
  • MaybeNull, the output might be null. So, the callers have to check if the output is null. This is a postcondition.
  • NotNull, which means that the input isn’t null when the call returns, even if the type allows it to be null. This is a postcondition.
  • NotNullWhen, which is a postcondition that asserts the argument isn’t null depending on the boolean return value of the method. For example, say my method is bool MethodA([NotNullWhen(true)] out string? outVal). If it returns true, then outVal isn’t null. If it returns false, then outVal could be null.
  • MaybeNullWhen, which “signifies that a parameter could be null even if the type disallows it, conditional on the bool returned value of the method.” This means that if I annotate an out parameter with [MaybeNullWhen(false)], then the output could be null when the method returns false. This is a postcondition.
  • NotNullIfNotNull, which “signifies that any output value is non-null conditional on the nullability of a given parameter whose name is specified”. This means that if the named parameter is non-null, the output is non-null as well, and vice versa. This is a postcondition.
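
As a rough illustration of AllowNull and NotNullWhen, here is a minimal sketch (the Widget and TryGetValue names are hypothetical, not from a real API):

#nullable enable
using System.Diagnostics.CodeAnalysis;

public class Widget
{
    private string _label = "default";

    [AllowNull] // callers may assign null, even though the property type is non-nullable
    public string Label
    {
        get => _label;                      // the getter never returns null
        set => _label = value ?? "default"; // null assignments fall back to a default
    }
}

public static class Parser
{
    // When this returns true, callers can use outVal without a null check.
    public static bool TryGetValue(string key, [NotNullWhen(true)] out string? outVal)
    {
        outVal = key == "known" ? "value" : null;
        return outVal != null;
    }
}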

There are a few other attributes beyond the ones above; Microsoft’s docs on nullable attributes cover the full list.

Why are these checks important?

These checks help ensure the safety of the code you write, and also allow other consumers of your library to know when to use null checks (and where to omit them). While it is possible to null check every call that is null-ambiguous, doing so is error-prone because:

  • Too many null checks clutter the code and waste time spent creating error handlers to safely stop program execution.
  • You might forget to write a null check, and because there are null checks everywhere else, it is unclear which method is missing one.

Here is an excellent example from Microsoft’s docs:

string? userInput = GetUserInput();
if (!string.IsNullOrEmpty(userInput))
{
  int messageLength = userInput.Length; // no null check needed.
}
// null check needed on userInput here.

In this case, the argument of string.IsNullOrEmpty is annotated with [NotNullWhen(false)], which means that if the method returns false, the argument is not null and no further null checks are needed. The annotation can be read as “it’s not null when the return value is false”. These higher-level logical statements help the compiler make inferences about the code. While this sounds like something the compiler could work out on its own without annotations, it’s actually a very complex research topic.

Microsoft Pex, a “white-box test generation” tool for .NET, symbolically analyses every possible path through your program to discover edge cases and missing conditionals that can cause NullReferenceExceptions (and more). While it is extremely interesting, it’s a bit outside the scope of this post.

How do I use them?

If you are upgrading a legacy project, Microsoft recommends that you don’t turn nullable reference types on for everything at once, because the resulting flood of warnings can be overwhelming. This is especially true if your team treats warnings as errors (a compiler option), as development would have to cease for several days to fix the null warnings. Incrementally enabling null checks instead prevents an explosion of warnings that could end up ignored rather than addressed promptly.
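
For example, you can opt a whole project in through its .csproj file and use per-file directives to migrate gradually (a minimal sketch):

<!-- .csproj: enable the nullable context project-wide -->
<PropertyGroup>
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>

Within an individual source file, a #nullable enable directive turns the checks on and #nullable disable turns them back off, which makes it easy to annotate one file at a time.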

There are a few ways you can prioritize where to add null annotations first. One is to start with very small, straightforward methods. If a method is easy to reason about, adding null annotations is easier, and if a small method is used throughout the code, annotating it can help infer which null checks are and are not needed in larger methods. While there are infinitely many ways to prioritize null checks, this approach is helpful if you are not yet familiar with null checking.

Conclusion

Nullable reference types can make your code more maintainable and its bugs easier to spot, as nulls can cause unexpected problems and application crashes. While nullable reference types aren’t a panacea (it is still possible to ignore the warnings), they provide the compiler with extra information that it can use to deduce and find logic errors. Gradually enabling nullable reference types helps find potential errors without overwhelming developers with warnings; if there are too many warnings, they tend to be disregarded, causing more problems down the line.

  • Alex Yorke

Using Kind for Local Kubernetes

I have been playing around with Kubernetes lately and was looking for an easy way to get a cluster going locally. I came across Kind when looking for this solution and found it really easy to use.

It’s super easy to get a cluster going, especially if you already have kubectl installed. Installing kind is fast too: you just download the executable and include it in your path.

In PowerShell, for example:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.10.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

Getting a cluster going after installation is just as easy:

[Screenshot: kind cluster created]

Commands:

docker ps                 # before: no kind containers running
kind create cluster       # creates the control-plane node as a Docker container
docker ps                 # after: the kind node container is running
watch kubectl get nodes   # wait for the node to report Ready

After a few seconds you will have a control plane up and ready to accept commands. I think it will be really fun to explore what can be done by having a Kubernetes cluster available in such a short amount of time.
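
If you want to experiment further, kind also supports multiple named clusters, which makes it cheap to spin up and tear down throwaway environments (the cluster name below is just an example):

kind create cluster --name demo          # create a second cluster named "demo"
kubectl cluster-info --context kind-demo # kind prefixes kubectl contexts with "kind-"
kind delete cluster --name demo          # delete the cluster when you're done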

  • Dan McCrady

Want to be a Digital Enterprise? Build More Software!

“COTS-first”, “Why build it when we can buy it?”, “Custom development is expensive”, and “We’re not a software company” were all slogans hammered into my head during the first half of my career working for a large global consulting firm. They were, and still are, the prevailing wisdom in the IT industry, and when looked at purely from a cost/benefit perspective, they’re easy to justify.

Those sentiments are also directly responsible for the widening gap between organizations’ IT capabilities and their digital aspirations. The philosophy of outsourcing software IP is exactly what led the banking and government sectors to be woefully slow in implementing digital services. It’s why the major banks, despite their deep pockets, are having such problems catching up with small fintech startups.

So how did we get here?

The Great Divide

For as long as I can remember, IT and software industries were considered separate and only loosely overlapping. Software companies did all the complicated engineering and computer sciency stuff that made tools, and IT practitioners installed and tailored the tools into enterprises like Ikea furniture.

Early CIOs reported to the CFO in most large enterprises, and the focus was on how to get systems deployed with the least amount of money possible. “On time and on budget”, “OpEx and CapEx efficiency”, “ROI”, and “cost recovery” were what occupied IT executives’ minds.

The Digital Reality

Companies like Amazon and Netflix have shined a light on the new digital reality: software is the business. They didn’t adopt the traditional thinking that software was some capital asset to be treated like a cost centre on a balance sheet, but rather a revenue generator and competitive differentiator. The focus shifted instead to “agility”, “speed to market”, “resiliency and reliability”, “scalability”, “security and integrity”, which are more closely aligned to how organizations think about their core business offerings.

The Convergence

The move to digital has pushed enterprise IT shops closer and closer to the practices, philosophies, and skills of the software industry. Concepts like Agile, Extreme Programming, and Domain-Driven Design, which were widely accepted within the software industry by the mid-2000s, are finally being seen as table stakes for the digital enterprise in 2020. Sometimes we’ve even given them new names, like DevOps and Microservices, to make it feel like the IT industry invented these concepts.

The increasing maturity and variety of software frameworks are starting to blur the line between custom development and COTS, as developers can now do a lot more with a lot less code. Cloud takes this a step further: everything from a logging service to a CRM can be provisioned and interfaced with via an API through a little bit of code. The short of it is that enterprises can’t get away from building code anymore, but they also don’t have to build as much of it to deliver the same features as 20 years ago.

The Gap

The problem that exists today is, to state it bluntly, that enterprises don’t know how to build software. Decades of prioritizing buying over building have created IT departments heavily geared towards project management, requirements gathering, system administration, and configuration of various COTS tools using proprietary vendor technologies. There may be a few intrepid developers responsible for gluing this mess together and keeping it all running, plus some plucky web dev teams that push out wave after wave of web forms. But the gap to actual modern software development is huge. And this gaping chasm is what most enterprises are being forced to cross in their shift towards a digital economy.

Crossing the Chasm

I think this is the first time since my 2nd year Marketing class that I’ve actually used this phrase. Enterprises must invest in building software, especially software related to the delivery of digital services. Not because it’s cheaper or less risky than buying it (in most cases it’s not), but because that’s the only way to actually build up the type and scale of software development capacity needed to transition to digital.

We’re not just talking about coders, but all the surrounding disciplines that enable successful software delivery (e.g., product owners, UX designers, project managers, executives, testers, platform ops, security). Even accounting models have to change to stop treating software as a depreciating asset and instead as a line of business. Organizations have to fully embrace the reality that going digital means running a software company.

The new reality is that software is a part of any digital organization’s core business. And experience has taught us that any organization that outsources its core business will never be competitive.

  • Shan Gu

Stop Talking About Cloud

Yes, this is an odd sentiment to have as a Cloud-native software company but hear me out. We spend a lot of time talking to organizations about adopting Cloud concepts and approaches. The large majority of the discussions land in one of two categories:

  1. Help me move stuff to the Cloud to save money and time. This line of discussion quickly focuses on technology and tools:
    • Which Cloud should I pick?
    • Should I use Kubernetes?
    • SaaS or PaaS or IaaS?
    • What are the Cloud equivalents to my current stack?
  2. I’m skeptical of Cloud, so help me understand:
    • Is Cloud secure?
    • Will it actually save me money?
    • What about vendor lock-in?
    • Will my stuff run in the Cloud?
    • How much work will it be to move?

When we dig deeper with our clients to try to answer these questions, we always end up exploring more fundamental IT organizational challenges instead. Why? Because talking about Cloud is talking about the destination rather than whether we have the capabilities to make the journey. 

Imagine a hockey team that focused on moving into a new arena to improve its record and attendance rather than investing in its coaching staff and players.

That’s exactly what we’ve been doing in the IT industry: being fixated on where our apps run rather than how we build and operate them.

“Modern Software” not just “Cloud”

Cloud isn’t some revolutionary invention that just appeared one day. It is effectively an ongoing refinement of hosting technologies and business models, enabled largely by two things: scale and automation. These are the same things that drove the Industrial Revolution. Therefore, we should view the rise of Cloud as indicative of the modern industrialization of the software industry.

So instead of talking about how we get to Cloud, we should really be talking about how we build modern software and what that really means.

What Does Modern Software Development Mean?

Traditional or legacy software principles were developed at a time when compute power was limited and optimization of CPU performance, memory, and storage was top of mind. In modern software development, we recognize that compute is cheap, so we should optimize for business outcomes instead. How quickly can we respond to changing user needs? How well can we scale if our software is wildly successful? How do we remain resilient to failures? How do we build and maintain user confidence? How do we control development costs amid constant change?

Just like how the Industrial Revolution changed manufacturing, modern software means industrializing our process of building software along the same lines:

  • Focus on software frameworks rather than programming languages to minimize “building from scratch”
  • Automate the mundane and repetitive (e.g., CI/CD, test execution)
  • Design for modularity
  • Build for scale
  • Exhaustively test for quality
  • Constantly iterate for improvements and allocate budget for it
  • Instrument the process and measure velocity; then improve it
  • Design assuming failure will happen
  • Assume and embrace constant change

Development and Operations are Intertwined

Many of the operational issues associated with traditional software development (e.g., chronic underfunding, tech debt accumulation, rust out, performance degradation) can be attributed to having too clear a delineation between development and operations. User needs, organizational priorities, and technologies are constantly changing. Therefore software development is never done. 

Operating a software solution and developing new features or addressing technical debt must be an ongoing and integrated process rather than distinct activities. Concepts like DevOps aim to address this, but the change in approach involves the entire organization down to how software investments are funded.

Build People Not Widgets

Cloud migrations or app modernization initiatives are too often structured as outsourcing engagements where organizations feel the only viable path to success is to hire some experts to do it for them. This is frankly a shortsighted approach, and I have yet to see it really work out, especially a year or more after the project ends. Client teams are often left woefully unprepared to inherit and support hundreds of applications that they’re no longer familiar with.

These big programs should be seen as an opportunity to upskill and reskill the organization’s technology teams instead. Expert teams can be brought in to work with the organization’s internal teams in a player-coach capacity to adopt modern software development methods and tools. The organization’s IT governance and management processes should also be adapted for modern software outcomes such as agility and velocity.

Let’s stop talking about Cloud and talk about investing in our people’s ability to build modern software instead.

  • Shan Gu

GitHub Actions – Deploying an Angular App

Recently I built an Angular demo application that showcases some of the features provided by Angular. I will deploy this application to GitHub Pages using GitHub Actions, a newly released CI/CD platform that open source repositories can use for free.

Since I already have a completed Angular project pushed to GitHub, all I need to do is create a GitHub workflow to build, test, and deploy the Angular application to GitHub Pages. Before I start, I need to create a folder named .github/workflows at the root of my repository.

To learn more about GitHub workflows, please read the workflow syntax for GitHub Actions article.

Create a GitHub Actions Workflow File

In .github/workflows, I added a YAML file for the workflow. Inside the workflow file, you can choose to add the name of your workflow by adding:

name: workflow name

If you omit name inside the workflow file, GitHub will set the workflow name to the workflow file path relative to the root of the repository.

GitHub is flexible about how you name your workflow file, but the file has to be a YAML file and it has to be in the .github/workflows folder.

Setup Workflow Trigger

A workflow trigger is required for a workflow. I configured the workflow to trigger on pushes to the master branch:

on:
  push:
    branches:
      - 'master'

If you want to use a different trigger for your workflow, please take a look at the events that trigger workflows article and the on section of the workflow syntax for GitHub Actions article.

Create the Angular Build And Test Job

In GitHub Actions, jobs are defined by a series of steps that are executed on a runner. Each job runs in a separate workspace, meaning that files and job side effects are not kept between jobs. In order to reduce build time and build complexity, I will keep as much work inside one job as possible.

Thus, the job below is created to build and test the Angular application:

jobs:
  build:
    name: Build and Test
    runs-on: ubuntu-latest
    steps: ...

The latest version of the Ubuntu GitHub-hosted runner is used for this job. If you want to use a different GitHub-hosted runner, please read the virtual environments for GitHub-hosted runners article.

Checking out source code

Since jobs do not pull down the source code by default, you need to explicitly tell the job to do so. Therefore, I add the following to the steps of the build and test job:

- name: Checkout
  uses: actions/checkout@v1

Setup Node.js

To set up the Node.js version used by the job, add the following under the steps of the job:

- name: Use Node 12.x
  uses: actions/setup-node@v1
  with:
    node-version: '12.x'

The build and test job is configured to use Node.js version 12.x. If you wish to use a different version, please take a look at the using Node.js with GitHub Actions article.

Run build and test

To build and test the Angular application, I added some supporting scripts to the application’s package.json file:

"build:ci": "ng build --prod --sourceMap=false --base-href /YOUR_REPOSITORY_NAME_HERE/"
"test:ci": "ng test --watch=false --code-coverage --source-map true"

As you can see, the test:ci script will also generate code coverage results for us, which will be used later down the line.

Note: To avoid a MIME type error due to an invalid path, you need to set base-href to your repository name.

Then, I add the following to the job to build and test our application:

- name: Install dependencies
  run: npm ci
- name: Build
  run: npm run build:ci
- name: Test
  run: npm run test:ci

Upload artifacts

To expose the results of the current job to the next job, I need to configure the build and test job to upload the build artifacts. I also configure the job to upload the code coverage results so they can be reviewed.

- name: Archive build
  if: success()
  uses: actions/upload-artifact@v1
  with:
    name: deploy_dist
    path: dist
- name: Archive code coverage result
  if: success()
  uses: actions/upload-artifact@v1
  with:
    name: deploy_coverage
    path: coverage

if: success() is used to make sure the upload steps only run if all the previous steps passed. For more information, read the context and expression syntax for GitHub Actions article.

Create Deploy Job

With the build and test job completed, I can create the job that will deploy the Angular application to GitHub Pages. I add the following YAML below the build and test job:

deploy:
  runs-on: ubuntu-latest
  needs: build
  steps:
      - name: Checkout
        uses: actions/checkout@v1
      ...

needs: build tells GitHub to only execute the deploy job when the build and test job has completed successfully.

Download build artifact

I add the following under steps in the deploy job to download the build artifact from the build and test job:

- name: Download build
  uses: actions/download-artifact@v1
  with:
    name: deploy_dist

To learn more, take a look at the persisting workflow data using artifacts article.

Deploy to GitHub Pages

I use the GitHub Pages Deploy Action to deploy our Angular build to the gh-pages branch of the project repository:

- name: Deploy to GitHub Pages
  uses: JamesIves/github-pages-deploy-action@releases/v3
  with:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    BRANCH: gh-pages
    FOLDER: deploy_dist/YOUR_PROJECT_NAME_HERE

GITHUB_TOKEN is used to avoid providing a personal access token. To learn more about GITHUB_TOKEN, read the authenticating with the GITHUB_TOKEN article.

Conclusion

Once you check your workflow file (which should look similar to the YAML below) into your master branch, you should see a GitHub workflow run start on the GitHub Actions page. When the workflow is complete, you will see the build output and test coverage results in the artifacts section, and a branch called gh-pages will be created.

name: workflow name

on:
  push:
    branches:
      - 'master'

jobs:
  build:
    name: Build and Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Use Node 12.x
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build:ci
      - name: Test
        run: npm run test:ci
      - name: Archive build
        if: success()
        uses: actions/upload-artifact@v1
        with:
          name: deploy_dist
          path: dist
      - name: Archive code coverage result
        if: success()
        uses: actions/upload-artifact@v1
        with:
          name: deploy_coverage
          path: coverage
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Download build
        uses: actions/download-artifact@v1
        with:
          name: deploy_dist
      - name: Deploy to GitHub Pages
        uses: JamesIves/github-pages-deploy-action@releases/v3
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH: gh-pages
          FOLDER: deploy_dist/angular-demo

Ensure that your repository has GitHub Pages enabled and that the deployment is based off the gh-pages branch. If this is set up properly, your Angular application should be deployed!

  • Hannah Sun

A Realist’s Guide to Culture Change

The phrase “we need to change our culture to be successful” has become a punchline for any executive pitching ambitious visions and transformation initiatives, IT-related or otherwise. What is unfortunately less common is any mention of how such a change in culture will happen and how to know when this ideal future culture has been achieved.

Foci is by no means a change management firm, nor do I profess any kind of expertise in human behaviour or organizational theory. We are, however, experts in helping organizations adopt new technologies and methods, where culture is an unavoidable challenge. Based on our experience in the trenches, I would like to offer a realist’s perspective on what taking on “culture change” means and what one can expect when committing to this lofty goal.

Culture = People

We can’t talk about culture without defining it first. Culture is an abstraction of the default behaviours and tendencies of a group of people. Those behaviours and tendencies are either learned and developed in reaction to how an organization is built and managed, or inherent in the people being hired.

When an organization sets out to change its culture, it must accept the reality that it will likely result in a turnover in people. The culture that you desire won’t resonate with everyone in the organization. And a strong culture is built by people who naturally buy into it rather than by trying to hard sell it to someone. Therefore, it’s best to ensure that you are prepared to deal with an increase in turnover and hiring as a part of this commitment rather than assume that a new culture can be achieved without big changes to the workforce.

Culture Requires Nurturing

Executives can’t dictate the culture of an organization any more than parents can dictate the personalities of their children. An organization’s culture develops based on how people react to and are motivated by that organization’s structure, management style, processes, facilities, compensation model, other employees, and countless other factors. Any attempt to define a new culture without looking thoroughly at all the aspects of the organization that enabled the current culture is flawed.

Instead of asking how individuals can adopt the desired behaviours, the organization should ask what aspects of its current structure, policies, compensation, governance, rituals, and general work environment are contributing to the undesirable behaviours and then work to address those. For example, if an organization desires a culture of innovation, budget and approval processes will have to be updated to allow for more experimentation, frequent changes in project parameters, and faster decision making. This is a very organic and fluid process, so set realistic expectations and adapt the plan to how the people are reacting to the changes.

Stress = Negative Behaviours

High stress situations tend to push people to exhibit more basic survival instincts such as territorialism and combativeness. It is extremely difficult for people to adopt more desirable behaviours such as collaboration and transparency or take extra time to think about innovative solutions when timelines are aggressive and budget is tight.

People take time to learn new ways of working and making decisions. This means that efficiency and output will drop before recovering and even improving over the longer term. Project budgets and timelines must account for this and give people enough time to learn the new behaviours and repeat them enough times to become ingrained. It’s the classic “slow down to speed up” adage.

Change Starts at the Top

I am constantly surprised by the number of organizations treating culture change as an exercise whereby the executives look at how they can fix their workforce without also looking at their own behaviour. The culture of an organization is representative of how executives have made decisions over time.

If an organization wants to encourage a culture of taking responsibility, then executives must reflect this by taking actions such as increasing delegation of decision making and making their compensation more outcome-based. If more collaboration is desired, then open door policies must be adopted. Executives can’t just be the champions of change, but also become the examples of the desired culture themselves. The “do as I say and not as I do” philosophy doesn’t work here.

Achieving Success

We are extremely proud of the culture we’ve achieved at Foci. We’ve been deliberate in designing our organization and been very lucky in the type of people we’ve attracted and hired. Here are some of the things that we’ve done and learned about building a strong and innovative culture:

  1. Hire executives with diverse opinions and approaches, but very similar values. Your leadership should have different approaches to solving problems, but should see eye-to-eye on the organization’s core beliefs and philosophies;
  2. Hire for culture fit over pure technical acumen. It’s much easier to teach technical skills than modify behaviours;
  3. Constantly adjust and refine organizational processes and policies. Organizations and the people within them evolve over time. The processes and policies have to be tweaked to account for that;
  4. Create a relationship of mutual trust between our people and the company. Giving people the room to make decisions and exercise judgement encourages a sense of responsibility and ownership. Treat your staff like responsible adults who can make good decisions;
  5. Compensate people based on what you value in your employees. If you want a team that’s constantly upping their game, then compensate for personal growth and skills development;
  6. Invest in people. It’s not just training and some formalized mentorship program. Give people the time, resources, and the infrastructure needed to connect, collaborate, and share knowledge.

Culture change is hard, but by no means impossible. It takes a lot of commitment, attention, investment, time, and patience. By recognizing that the change is really building an organization that nurtures the desirable culture, “we need culture change” will become an achievable call to action rather than just an executive punchline.

  • Shan Gu

Multi Stage Pipelines & Azure DevOps

Many years ago, I wrote a blog post about TFS and DevOps. A lot has changed since then, with multiple versions of the build pipeline being released, but it continues to be one of the most trafficked articles on our site. Microsoft has worked hard to create a better experience for build automation and continuous integration, so I worked hard on updating this walkthrough. Recently, Microsoft released multi-stage pipelines, which work and feel much like GitLab CI.

In this post I’ll walk through a basic YAML file and show how you can get a C# project up, built, and tested quickly.

Setup

We need to have a project that is checked into DevOps before we begin. I have a repository that I made for this blog up on DevOps, which contains a basic .NET Core console application and a unit test project that goes along with it. At the time of writing this blog post, you will also need to turn on the multi-stage pipelines preview feature in order to get the best view for these pipelines. You can do that by clicking on the user settings button.

[Screenshot: User Settings menu]

Then click on preview features.

[Screenshot: Preview Features pane]

Then ensure that multi-stage pipelines are enabled.

[Screenshot: Multi-stage pipelines toggle enabled]

First Steps

First we need to add a YAML file to our project. I tend to put this file directly at the root and name it azure-pipelines.yaml. Then we need to define our stages. A stage is a collection of jobs; it can run concurrently with other stages or depend on another stage completing successfully. For this quick project we will have two different stages:

  • Build
  • Test

In order to define these stages in our pipeline, we need to write some YAML like this:

stages:
  - stage: build
    displayName: Build
  - stage: test
    displayName: Test
    dependsOn:
    - build

This will give us the building blocks to add our jobs. If you check this file into DevOps and navigate to Pipelines, you can see that we have a pipeline defined without any runs associated with it.

[Screenshot: multi-stage pipeline showing in dashboard]

Adding a Job

A job runs on a build agent. By default, DevOps provides hosted build agents: pre-configured VMs that have a lot of different development tools pre-installed. I’ll be using the hosted agents for this post.

Let’s add in some YAML for a job that will build our dotnet solution. We can do this in one of two ways: we can use a DevOps “task” or we can write a script. Tasks can provide a lot of features that you would normally need to script yourself. These can be very helpful; however, they also hide a lot of what is being run. I tend to use tasks, as they get updated regularly to add additional features and fix bugs. Microsoft hasn’t made tasks to solve every problem, however, so you will need to write some scripts eventually.

Example as Task

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
- stage: test
  displayName: Test
  dependsOn:
  - build

Example as script

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - script: |
        dotnet build --configuration $(buildConfiguration)
- stage: test
  displayName: Test
  dependsOn:
  - build

In both examples I have added a variable to set the build configuration for the pipeline. Variables are very helpful, and DevOps also provides a lot of pre-defined variables for you. You can read about them here.
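
For instance, here is a minimal sketch of a step that uses the pre-defined Build.SourcesDirectory variable alongside the custom buildConfiguration variable from the examples above:

- script: |
    echo "Building sources in $(Build.SourcesDirectory)"
    dotnet build --configuration $(buildConfiguration)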

Artifacts

Now that we have our job running and our solution building, we will probably want to retain the output files. We need to publish these files as artifacts if we want to use them in a different job, or to download them later for manually testing the build.

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
    - publish: $(System.DefaultWorkingDirectory)/src/demo-project/bin/$(buildConfiguration)/netcoreapp3.0/
      artifact: source
- stage: test
  displayName: Test
  dependsOn:
  - build

Once the build is completed you should see the artifacts on the build page. You can download them and use them in different jobs now.

[Screenshot: multi-stage pipeline artifacts published]

Testing

Now that we have our code built, we can go ahead and run the tests for our application. DevOps also has the ability to show us test results through its dashboards. It’s easiest to use the task for this, as the task can upload the test results for us.

variables:
  buildConfiguration: "Release"
  
stages:
- stage: build
  displayName: Build
  pool:
    vmImage: "Ubuntu 16.04"    
  jobs:
  - job: build_dotnet_solution
    displayName: build dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        arguments: '--configuration $(buildConfiguration)'
    - publish: $(System.DefaultWorkingDirectory)/src/demo-project/bin/$(buildConfiguration)/netcoreapp3.0/
      artifact: source
- stage: test
  displayName: Test
  dependsOn:
  - build
  jobs:
  - job: test_dotnet_solution
    displayName: test dotnet solution
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test        
        arguments: '--configuration $(buildConfiguration)'

[Screenshot: multi-stage pipeline tests successful]

With this, you now have a basic build and test pipeline that will run with every check-in to your repository. There is a lot more that can be done, such as managing environments and performing releases. I hope that this is a good starting block to get you moving with DevOps.

  • Dan McCrady

What’s Your Organization’s Rocket Fuel?

The conversations I regularly have with clients, other executives, and my mentors usually revolve around “what’s your org’s vision?” or “what do you want your org to do?”. Foci has gone through a tremendous period of growth and change over the last 12 months, and the answers to those questions seem to be ever-changing. This has led me and the rest of the management team to some very interesting discussions about how we define Foci and our purpose.

The “what” and the “how” don’t matter

Photo by John Baker on Unsplash

Regardless of how well thought out your vision or strategy is, the reality is that $%*@ happens. Your clients can change their mind, you may lose some key contracts, the market will evolve and change, competitors will emerge, or you may not be able to get that unicorn architect/developer to run that world-changing product you want to build. And every time you have to make a pivot to adjust to those changes, it can be a very painful experience, both for you and your team.

The identity of an organization is very important, especially if you have a strong team culture like us. Team members imprint themselves onto that identity and subconsciously use it as a reference point for their everyday work. We started life as an Oracle Fusion Middleware company, then became an Architecture and Integration company, and now we’re doing more Cloud-Native custom dev with a broader range of system integration and program management services. Each of those shifts in focus created quite a bit of disruption in the team. People asked “Wait what? I thought we were doing the other thing? What does that mean to our existing projects? Will we stop doing the other thing altogether?” These were all fair questions, but after working through it all, we noticed that none of it actually impacted our team culture or our core behaviours.

What we took away from this were 2 things:

  1. What you do as a firm, and how you do it, has very little bearing on your culture.
  2. Our people are very emotionally connected to Foci’s identity and feel any change in that identity keenly.

It’s all about the “why”

This naturally led us to look at why our folks joined Foci and what made them excited about coming into work each day. Turns out no one was really driven by the prospect of writing thousands of lines of C# code, installing and configuring Oracle SOA Suite, or creating a stakeholder engagement plan. Sure, those things interested people, but they weren’t really core motivators.

We ended up landing on 2 aspects of motivation that were the most important:

  1. What brings you the most satisfaction (e.g., solving a problem, having an impact, getting recognition, seeing something you’ve built be used)
  2. What is your metric of value (e.g., complexity of the problem, number of people impacted, transaction volume, financial savings)

Problems are our rocket fuel

We always joked about having a generic tagline like “we solve problems” (it’s on our website) because we were constantly evolving the business. Appropriately that turned out to be the answer. What we realized is that our entire team and our hiring processes all coalesced around the core desire to solve complex and interesting problems. We weren’t motivated by how many people were using an app that we had developed or whether the systems we helped our clients build were processing 100 or 1,000,000 transactions a day.

The thing that gave us all a real sense of accomplishment, and that little shot of dopamine we humans naturally crave, was when we were able to solve a problem for our client. The bigger the complexity, the greater the satisfaction. As long as we had a healthy supply of complex and interesting problems to feed our team, we could go anywhere.

The destination and the things you do to get there will always change over time. The things that motivate and drive you to move forward are more constant and core to your being. Defining your organization based on the goal you want to achieve or the tasks that you do makes every pivot feel like an identity crisis. Putting in the time to identify the rocket fuel that constantly propels your team forward creates a solid corporate identity to anchor against regardless of the path your organization decides to take. Interesting problems are our rocket fuel. As long as we as a management team ensure that our team has a steady flow of interesting problems to solve, we can have every confidence in Foci’s ability to achieve any goal that we set for ourselves. Until we change our mind, of course.

  • Shan Gu

The Digital Transformation Road Trip

Too often, we see articles preaching the importance of organizations adopting a digital strategy without explaining what that really means. To remove some of that confusion, I like comparing an organization’s digital transformation with something everyone knows – a road trip!

Let’s start with some truth – a digital strategy can be enormously beneficial to a department or organization and its ability to deliver value to customers. BUT, like a good road trip, becoming a digital organization isn’t an overnight journey – and it doesn’t always follow a set path. It requires planning, understanding, commitment, and the ability to embrace the detours along the way.

Oh the places you’ll go!

Before setting out on a trip, you need to have a destination in mind.  Similarly, executives need to agree on what ‘digital’ means for their organization. What problems are you really trying to solve?  Once you have these identified, your organization can begin to evaluate the possible ways of getting there.  

Loading up the car

Digital transformation is as much a business transformation as an IT one. Digital processes are about re-examining your business from top to bottom in order to have the right information, available at the right time, to make the right decisions. Cooperation, communication, and most importantly – organization-wide understanding is key to making sure this happens.

It’s important to start with the problem without focusing on what the solution might end up being. Challenge pre-existing assumptions and ideas about who should be doing what, when, and how. Break down your tools and processes so that you can rebuild them in a more efficient, modern way.

Digital organizations have governance and management frameworks that are very different than paper-based organizations. Keeping everyone involved ensures that you have multiple sets of eyes on the road. Making sure they know why this journey is happening means they’re looking out for the right kind of obstacles and opportunities.  

Take advantage of the bumps along the way

A digital organization embraces speed, communication, learning, and even failure. It’s less important to have a map setting the route in stone from start to finish than it is to be aware of what’s going on around you. Being aware of your surroundings lets you change direction when a better path becomes available (or avoid that head-on collision up ahead)! A digital organization uses this awareness to stay relevant and ahead of the curve. Approaches and methods like incremental development, democratized governance, Test-Driven Development, and Agile are all designed to support teams in this way.

This can be a big change in thinking, especially for larger, more traditional organizations. Understanding which tools are available, and when to leverage them, can significantly improve your chances of finding success in your transformation.

Embrace being off the beaten path

So – before embarking on your digital journey, make sure you understand where it is you want your organization to go, focus on the journey, and be prepared to embrace being off the beaten path. You might not take the path you first imagined, but digital transformation is about the journey, not the destination.

  • Kevin Steele

Why .NET loves Linux

This is an update to the post I made a while ago titled “Why .Net Doesn’t need to be Expensive”. A lot has changed since I made that post: Visual Studio Code hadn’t been released, .NET Core was called vNext, and .NET hadn’t gone through its open-source transformation. These additions to the .NET ecosystem have changed the way .NET developers work day-to-day, and the path to deploying .NET on Linux is quickly becoming a mandatory requirement for IT shops.

Microsoft has been on the Linux love train for quite some time now, and we are slowly starting to see the fruits of this transformation. Just recently, the Windows Subsystem for Linux was added to Windows 10 without the need to turn on developer mode, so developers now have a native Linux bash that can be enabled through the Windows Store. The new .NET Core project templates in Visual Studio include Docker support with just the click of a checkbox. Azure allows you to host .NET Core projects on Linux, and has moved to allow container orchestration using technologies like Azure Container Service and, soon to come, Azure AKS (its managed Kubernetes). This change is also reaching the open source community: most large projects have either ported to .NET Standard or are in the process of converting.


Why so much Love?

Plain and simple: moving custom code to the cloud means moving to Linux. All the cloud technologies coming out have Linux as a common player. AWS, GCP, OCP, Azure, and even smaller players like DigitalOcean all provide Linux as the OS. If an IT organization can’t migrate its .NET custom code to Linux, it is dramatically limiting its choices for getting to the cloud. If you aren’t going with Linux, you only have two real choices:

1) Find a Windows Server VM in the cloud and deploy to IIS.

Technically, yes, you are moving to the cloud, but are you really gaining any benefits? Your operations team still needs to monitor, maintain, and patch this VM just as if it were in your private data centre. You are also quickly locking yourself in to the provider, since exporting the VM to move to another provider will be difficult and require downtime during the transition.

2) Use Azure PaaS offerings like Web App Services.

Azure is still your friend here. They will take your web application code, slightly modified to be cloud-ready, and host it for you. The Web App Services offering is really good stuff: it comes with free auto-scaling, monitoring, and guaranteed availability, and they even take care of patching and maintaining the infrastructure. The downside is that until you have migrated that application to Linux, you are tied to Azure; no other cloud provider is looking at a way to host non-Core .NET websites. So if Azure changes the pricing model, you will need to change with it.


What does Linux get you?

Linux buys you true portability for your applications. The most common way to get there is to write your applications as 12-factor applications and use Docker to wrap each one for deployment. If you follow this approach, pretty much any platform is open to you. Microsoft is currently working to create Windows Server Docker containers like microsoft/nanoserver, but the licensing and deployment constraints are still unclear; it appears that you can only deploy these images on a licensed Windows Server 2016 system. This restriction couples your application tightly to Windows and reduces your deployment options significantly.
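
For example, here is a minimal sketch of a Dockerfile that wraps a .NET Core app in a Linux container (the image tags and the MyApp.dll name are illustrative; adjust them to your framework version and project):

# build stage: compile and publish the app
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o out

# runtime stage: copy only the published output into a smaller image
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]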


More investment for .NET Developers

A little while ago I was talking to a group about how the shift to Linux will be a big one for .NET developers. Normally, Microsoft would have a major release and developers could spend a year or so wrapping their heads around it. When the TPL was released, async/await was the big player, and bloggers wrote endless articles on how to leverage it to introduce multi-threading into applications. That update was all .NET developers needed to focus on. The next few years are changing a lot more than async/await: a new operating system in Linux, arguably a new framework in .NET Core, Docker containers, and container orchestrators like Kubernetes, all while building strong DevOps capabilities. The future is bright for .NET, but the time required to learn all the advantages is long. I plan to keep our developers moving in this direction, since it is the brightest path forward for custom software development in general, including the .NET ecosystem.


  • Dan McCrady