7 Complex deployments using Azure DevOps

This chapter covers

- Crafting Bicep files for a multiregion infrastructure
- Storing templates in source control with Azure Repos
- Building automated pipelines with stages, jobs, and tasks
- Deploying to test and production environments using service connections, approvals, and checks

In previous chapters, you’ve learned a lot about the Azure Resource Manager, ARM templates, and Bicep. You’ve used different tools, like PowerShell and the Azure CLI, to interact with ARM, and you have used those tools to deploy templates to Azure manually. When you deploy manually, however, there are no checks or policies in place to control the quality of the Bicep templates. Mistakes are easily made, and a mistake could allow developers to break the production environment.

Also, working manually with the CLI means that someone needs to log in to Azure to deploy a template. That means at least one person has enough permissions on the production environment to potentially break it, and it would be best if that were not the case.

So far, you’ve probably stored all the templates you created while reading this book on your local system, and you’ve deployed them from there. If developers in a real company working on the same infrastructure were to do that, copying files back and forth, they would risk overwriting newer versions of files with older versions. It is safer to use a version control system.

Working with Azure DevOps allows you to solve these problems and eliminate the risks by automating the processes involved, while also taking control of the quality of code deployed to production.

Azure DevOps and Azure

This chapter is a step-by-step guide to creating a multistage, multiregion pipeline that deploys infrastructure to the Azure cloud. If you want to follow along, you will need a Microsoft Azure DevOps account and a Microsoft Azure account with at least one subscription. You can create a free Azure DevOps account that is limited to a maximum of five users.

7.1 Meet Toma Toe Pizzas

In this chapter (and the next), let’s suppose you work at Toma Toe Pizzas. This company delivers pizzas in several countries. The problem you’re facing is that the website and ordering system run on a single web app, deployed to a single App Service in Azure. Toma Toe is facing performance issues, mainly because that service is hosted in one Azure region, which causes a lot of latency for visitors elsewhere on the planet. Also, the infrastructure is currently deployed manually. It’s your job to solve both of these problems.

Your plan is to deploy the same app in several Azure regions and to add an Azure Traffic Manager on top to route traffic to the region closest to each user, based on their geographical location. Although it’s not ideal, the manual deployment process has worked so far. With the added requirement of deploying into multiple regions, however, this method becomes too complex and risky. The stakeholders prefer an automated process to ensure the infrastructure deployment has a consistent outcome.

In this chapter you’ll learn how to use Azure DevOps to automate the deployment of infrastructure using an Azure DevOps pipeline. This pipeline will deploy the infrastructure in three regions and configure an instance of Azure Traffic Manager as shown in figure 7.1.

Figure 7.1 The infrastructure deployed by the Azure DevOps pipeline

Figure 7.1 shows three resource groups in three different Azure regions containing an App Service plan and an App Service. The fourth resource group contains an instance of Azure Traffic Manager redirecting your website traffic to the App Service instance closest to the visitor.

In this chapter you will craft the Bicep files required to provision the infrastructure shown in figure 7.1. You will then, from section 7.4 onwards, learn how to create an Azure DevOps pipeline that will deploy the desired infrastructure in an automated fashion.

7.2 Crafting the Bicep files

Let’s focus on the App Service plan and the App Service (figure 7.2). These services are, according to the infrastructure plans, deployed in three different resource groups as you saw in figure 7.1.

Figure 7.2 The App Service plan and the App Service

To provision this part of the desired infrastructure, you need an App Service plan and an App Service deployed to Azure. Let’s get hands-on! Navigate to a folder where you would like to create the files, and create a folder called Deployment. Inside that folder, create a new folder called Web. In this folder, you will create two Bicep files: one for the App Service plan and one for the web app. The nested folder is named Web because both resources live in the Microsoft.Web namespace. You could name the folder differently, but Web best describes what’s inside.

7.2.1 Describing the App Service plan

In the Web folder, create a file called serverfarms.bicep. This Bicep file will describe the App Service plan. The file is named serverfarms.bicep because serverfarms is the resource type’s name in the Azure Resource Manager. It is the old name for an App Service plan, but it is still used in the Azure Resource Manager today.

Because you want to deploy this App Service plan in multiple regions and in multiple stages (development, test, acceptance, production), you’ll need to add parameters to allow the template to be used for these different environments and regions. In serverfarms.bicep you start by adding a few parameters as shown in the following listing (Deployment/Web/serverfarms.bicep).

Listing 7.1 Parameters in Bicep

param systemName string
 
@allowed([
    'dev'
    'test'
    'acc'
    'prod'
])
param environmentName string
 
@allowed([
    'we' // West Europe
    'us' // East US (1)
    'asi' // East Asia
])
param locationAbbreviation string

The systemName parameter contains the name of your software system. In this case, tomatoe would be a good choice. The environmentName and locationAbbreviation parameters are there to distinguish between the different Azure regions and environments. Note that the @allowed() decorator allows you to define allowed values. Passing a value other than the listed values will result in an error.
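
If you want to see this validation in action, you can deploy the file on its own to an existing resource group. The following is a minimal sketch, assuming a resource group named tomatoe-test already exists; it passes a value that is not in the allowed list:

# Deploy serverfarms.bicep directly to a resource group (names are assumptions).
# 'staging' is not in the allowed list, so ARM rejects the deployment during
# validation, before any resource is created.
az deployment group create \
  --resource-group tomatoe-test \
  --template-file ./Deployment/Web/serverfarms.bicep \
  --parameters systemName=tomatoe environmentName=staging locationAbbreviation=we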

Following the parameters, you describe the resource or resources you want to deploy. In this case, that is the App Service plan, as shown in the following listing (Deployment/Web/serverfarms.bicep, continued).

Listing 7.2 Adding a resource to the Bicep file

var serverFarmName = '${systemName}-${environmentName}-${locationAbbreviation}-plan'
 
resource serverFarm 'Microsoft.Web/serverfarms@2021-01-01' = {
    name: serverFarmName
    location: resourceGroup().location
    kind: 'app'
    sku: {
        name: 'B1'
        capacity: 1
    }
}
 
output serverFarmId string = serverFarm.id

In the first line, the parameter values are combined into one string, resulting in a unique name, such as tomatoe-prod-we-plan. Then the App Service plan resource is described using that name. The location of the App Service plan is set to the location of the resource group it is deployed in.

In a bit, you will create the Bicep file to deploy the web app. To create that resource, you will need the ID of the App Service plan so you can link the App Service to it. You therefore declare an output on the last line of this example that returns the id value.
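
If you ever need to inspect that output after a deployment, you can query it with the Azure CLI. A small sketch, assuming the module ran under the deployment name appServicePlan in a resource group named tomatoe-test-we (both names are assumptions):

# Read the serverFarmId output from a completed deployment
az deployment group show \
  --resource-group tomatoe-test-we \
  --name appServicePlan \
  --query properties.outputs.serverFarmId.value \
  --output tsv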

7.2.2 Describing the App Service

Next up is the Bicep file for the web app. The file will look quite similar to the App Service plan template, but it will declare an App Service instead as shown in the following listing (Deployment/Web/sites.bicep).

Listing 7.3 Deploying the web app

param serverFarmId string
param systemName string
 
@allowed([
    'dev'
    'test'
    'acc'
    'prod'
])
param environmentName string
 
@allowed([
    'we' // West Europe
    'us' // East US (1)
    'asi' // East Asia
])
param locationAbbreviation string
 
var webAppName = '${systemName}-${environmentName}-${locationAbbreviation}-app'
 
resource webApp 'Microsoft.Web/sites@2021-01-01' = {
    name: webAppName
    location: resourceGroup().location
    kind: 'app'
    properties: {
        serverFarmId: serverFarmId
    }
}

There are two differences between this template and the previous one. First, this template has one extra parameter: the ID of the App Service plan. Second, it deploys a different resource, the App Service. Because this template is fairly small and no information about the web app is needed elsewhere, it does not declare any outputs.

7.2.3 Finalizing the template

Next you need to create the main Bicep template and call the templates you created in the previous two sections. In the Deployment folder, create a new file called main.bicep. This will be your main template where everything comes together, as shown in the following listing (Deployment/main.bicep).

Listing 7.4 The main template

targetScope = 'subscription'                                                
 
param systemName string = 'tomatoe'
param location string
param environmentName string
@allowed([
    'we' // West Europe
    'us' // East US (1)
    'asi' // East Asia
])
param locationAbbreviation string
 
resource resourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {  
    name: '${systemName}-${environmentName}-${locationAbbreviation}'
    location: location
}
 
module appServicePlanModule 'Web/serverfarms.bicep' = {                     
    name: 'appServicePlan'
    scope: resourceGroup                                                    
    params: {
        systemName: systemName
        environmentName: environmentName
        locationAbbreviation: locationAbbreviation
    }
}
 
module webApplicationModule 'Web/sites.bicep' = {                           
    name: 'webApplication'
    scope: resourceGroup                                                    
    params: {
        systemName: systemName
        environmentName: environmentName
        locationAbbreviation: locationAbbreviation
        serverFarmId: appServicePlanModule.outputs.serverFarmId
    }
}

Define the scope of the deployment, a subscription deployment.

Describe a resource group.

Describe a module that contains the App Service plan resource.

Set the scope of the Bicep module.

Describe a module that contains the App Service resource.

At the top of listing 7.4, the target scope for this template is set to subscription. By doing so, you can create a resource group inside your Bicep template and then describe both the App Service plan and the App Service Bicep files as modules. Note that for both modules described in the template, the scope for each module is changed to the resource group created earlier in the template.

This main.bicep template has a few parameters without default values. To pass these values while deploying, you can create parameter files for each environment, like the two shown for production and test in listings 7.5 (Deployment/prod.parameters.json) and 7.6 (Deployment/test.parameters.json).

Listing 7.5 The production parameters file

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "systemName": {
            "value": "tomatoe"
        },
        "locationAbbreviation": {
            "value": "we"
        },
        "environmentName": {
            "value": "prod"
        }
    }
}

Listing 7.6 The test parameters file

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "systemName": {
            "value": "tomatoe"
        },
        "locationAbbreviation": {
            "value": "we"
        },
        "environmentName": {
            "value": "test"
        }
    }
}

These two parameter files supply values for the systemName, locationAbbreviation, and environmentName parameters of main.bicep. The location parameter is not included; you will pass it separately at deployment time.
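
Before wiring these files into a pipeline, you can verify them locally with a subscription-level deployment. A sketch, assuming you are logged in with the Azure CLI; note that location is passed on the command line because it is not in the parameters file:

# Deploy main.bicep at subscription scope using the test parameters file
az deployment sub create \
  --location westeurope \
  --template-file ./Deployment/main.bicep \
  --parameters @./Deployment/test.parameters.json \
  --parameters location=westeurope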

The Bicep files are not complete yet, since the Traffic Manager is missing, but let’s focus on deploying the current Bicep files in an automated fashion. In order to deploy the Bicep files using Azure DevOps, the files must be pushed to your remote source control repository.

7.3 Storing templates in source control

Azure Repos (repositories) is a Microsoft SaaS offering for storing source code under version control. There are two types of version control systems available in Azure DevOps: Git and TFVC (Team Foundation Version Control). Git is the most commonly used version control system today, so for the purposes of this chapter, we’ll assume you’re working with a Git repository.

Note If you don’t know what TFVC is, don’t bother with it. TFVC is no longer recommended, so all new projects should embrace Git.

There are several strategies for allowing multiple people to work with the same files simultaneously when working with Git repositories, and all of them involve branching. A branch is used to isolate work in progress from the completed work. When you are about to make a change to your infrastructure, you start by creating a branch. You then check out that branch on your local computer, where you commit one or more changes. Once you are done, you push the changes back to that branch. Once quality is secured, you can merge the changes back into the main branch.
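
In Git terms, that workflow looks roughly like the following sketch (the branch name and commit message are examples):

# Create an isolated branch for your infrastructure change
git checkout -b feature/add-traffic-manager

# ...edit your Bicep files, then record the change
git add Deployment/
git commit -m "Add Traffic Manager template"

# Push the branch so it can be reviewed before merging to main
git push --set-upstream origin feature/add-traffic-manager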

Figure 7.3 shows one strategy for using source control, namely trunk-based development. A new branch is created, whose contents are initially the same as the branch it originates from. Changes are made over time and committed to that branch. Finally, the changes are pushed and merged back to the main branch.

Figure 7.3 Example of a strategy for working with source control

By configuring branch policies on one or more of your Git branches, you can require changes to be reviewed by others, through pull requests, before those changes can be merged into the main branch. For example, in figure 7.3 the main branch could require at least one person, other than the person who made the change, to approve the changes between the push and the merge. This way you can enforce peer reviews for every change prior to the merge into the main branch. How to work with Git repositories in Azure DevOps is well documented on Microsoft’s “Azure Repos Git Documentation” page (http://mng.bz/QvyR).

In order to try the example in this chapter, it is a good idea to create a new project in Azure DevOps. If you are not familiar with Azure DevOps and don’t know how to create a project, you can find documentation in Microsoft’s “Create a project in Azure DevOps” article (http://mng.bz/XZP1). For a version control repository, choose Git (the default). Click on the Create button and wait for the process to complete. Then navigate to your project and click Repos. This will bring you to the default repository of your project. You can clone this repository on your workstation and then start working with it.

Details about version control systems

Although the basics of version control and code repositories in Azure DevOps are covered in the preceding section, the details are beyond the scope of this book. For detailed information about version control systems and implementing Azure DevOps solutions in general, you can read Implementing Azure DevOps Solutions by Henry Been and Maik van der Gaag (Packt Publishing, 2020).

Once you have cloned your repository, you can move the Bicep files you created earlier in this chapter into your Git repository. Copy the Deployment folder, including its contents, into the root of the folder that you cloned the Git repository to, and then commit and push it, as sketched next.
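
The following is a minimal sketch (the organization, project, and repository names are placeholders):

# Clone the project's default repository
git clone https://dev.azure.com/<organization>/<project>/_git/<repository>
cd <repository>

# Copy the Deployment folder in, then commit and push it
git add Deployment/
git commit -m "Add Bicep templates for Toma Toe infrastructure"
git push

Now that you have your Azure DevOps sources in order, it is time to create your pipeline.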

7.4 Automated build and release pipelines

Modern Azure DevOps pipelines can be written in code using a language called YAML (YAML Ain’t Markup Language). Azure DevOps also allows you to create build and release pipelines using building blocks that you can drag and drop into place, but this leaves your pipeline definition on the Azure DevOps server and not on your local machine as code. The advantage of writing pipelines as code is that you can leverage your version control system to control its quality. Also, by using code, your pipeline can live right next to the version of your software system it deploys, so they can evolve in the same code base.

Developers sometimes complain about YAML syntax because it is whitespace sensitive: adding or removing a single space can break the script. Also, writing a pipeline from scratch can be a challenge. To make dealing with these downsides a little easier, you can download and install the Azure Pipelines extension for VS Code (http://mng.bz/yvyo). You can also edit pipelines online in Azure DevOps. Like the VS Code extension, the online editor gives you feedback, such as showing errors, and provides IntelliSense to help you write pipelines more quickly.

Azure Pipelines is a tool that allows you to build a CI/CD pipeline. CI/CD can either stand for continuous integration/continuous deployment or continuous integration/continuous delivery. The biggest difference between the two is that continuous delivery requires a manual step to deploy to production, whereas continuous deployment does not. For Azure Pipelines, it doesn’t matter—it can do both. CI/CD pipelines allow you to compile, test, and deploy your system. Having a CI/CD pipeline is essential for teams that want to work using the DevOps principles, because releasing and deploying your system in an automated fashion is one of the components in the DevOps lifecycle.

Pipelines are started by a trigger and typically run until the last action of the pipeline is completed, an error has occurred, or the pipeline has expired. To organize tasks to be executed in your pipeline, there are three main concepts: stages, jobs, and tasks. Figure 7.4 shows how these concepts relate to each other by illustrating a pipeline with two stages, each containing two jobs, with those jobs having a variety of tasks. The concepts of stages, jobs, and tasks are explained in the following sections.

Figure 7.4 Main pipeline concepts

You can start a pipeline manually, but in most cases it’s more convenient to hook the pipeline up to your source control repository. Doing this allows you to configure the pipeline to run when changes are made in your code repository. This mechanism is called a trigger. The next section explains triggers and points out some useful ways to configure them.

7.4.1 Using triggers

Azure pipelines can be triggered by several types of events. The most common triggers for pipelines listen to events on your version control system, such as a pull request being created, or one or more commits being pushed to the source control repository. There are also pipeline triggers allowing you to trigger a pipeline when another pipeline has completed. Scheduled triggers are common. These allow you to schedule your pipeline over time, such as every hour, day, or weekday.
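
For example, a scheduled trigger that runs the pipeline every weekday night could look like the following sketch (the cron expression and display name are examples):

# Run the pipeline at 03:00 UTC, Monday through Friday
schedules:
  - cron: "0 3 * * 1-5"
    displayName: Nightly weekday run
    branches:
      include:
        - main
    always: true # run even when there are no new changes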

Let’s create a pipeline that first transpiles the Bicep files created in the first part of this chapter into an ARM template. It will then create a deployment, provisioning all the resources described in the Bicep files.

To make future changes and maintenance easier, you can organize your pipeline files by creating a new folder in the root of your Git repository called “pipeline”. In that folder, create a new file called single-job-resource-deployment-ppl.yml. This will be your default pipeline. The following snippet defines a trigger that runs the pipeline as soon as one or more commits are pushed to the remote repository:

trigger:
  - main

In this example, you see main being used as the name of the branch to watch for triggers. main is the default branch name for any new repository in Azure DevOps.

If your default branch is named differently, or if you want the pipeline to be triggered when commits are pushed to a different branch, simply change the name of the branch (main) to the desired branch name. You can also add more than one branch name or use wildcards to watch more than one branch for changes.
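
For instance, the following sketch triggers the pipeline on pushes to main and to any branch under release/:

trigger:
  branches:
    include:
      - main
      - release/*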

Now that your pipeline has a trigger configured, you can start adding tasks to the pipeline. These tasks are listed in the steps property of a job. In the following section, you will learn what tasks are and how to write them.

7.4.2 Creating tasks

Tasks are the smallest entity of an Azure pipeline. They typically do one thing. Examples of tasks that could live in your pipeline are “Restore packages,” “Compile sources,” “Run a PowerShell script,” “Run an Azure CLI command,” or “Deploy infrastructure.”

Your pipeline is going to contain two tasks: one will transpile the Bicep file into an ARM template JSON file, and one will deploy that JSON file to Azure, creating all the resources described. Let’s start with the task to transpile the Bicep template into ARM.

Listing 7.7 Task transpiling Bicep template to ARM template

- task: Bash@3
  displayName: "Transpile Bicep"
  inputs:
      targetType: 'inline'
      script: 'az bicep build --file ./deployment/main.bicep'

The preceding listing is an example of a task. You first specify which task you would like to run, in this case the Bash task. Tasks are versioned, and here you use version 3 of the task. The displayName property gives the task a meaningful name that is shown in the Azure DevOps UI and in the logs. This task executes an Azure CLI command, specified in the script property, that transpiles the Bicep template into an ARM template JSON file.

Some tasks have a short notation that you can use. The Bash task used in this example can also be written as follows:

- bash: az bicep build --file ./deployment/main.bicep
  displayName: "Transpile Bicep"

Using this shorter notation can make your pipeline a bit smaller and more readable.

The following task will deploy the ARM template that was created in the previous step.

Listing 7.8 Task deploying an ARM template

- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy Main Template
  inputs:
      azureResourceManagerConnection: "TestEnvironment"
      deploymentScope: "Subscription"
      location: "westeurope"
      templateLocation: "Linked artifact"
      csmFile: "./deployment/main.json"
      csmParametersFile: "./deployment/test.parameters.json"
      deploymentMode: "Incremental" 

This AzureResourceManagerTemplateDeployment task will deploy the ARM template for you. As you can see, this task has a few more properties to fill. Remember that the extension in VS Code will help you with that.

Now that you have created your first task, let’s create a first version of your pipeline by organizing your transpile and deploy tasks in a job.

7.4.3 Grouping tasks in a job

Jobs help organize your pipeline into logical building blocks. By default, jobs will run in parallel, but you can also configure jobs to depend on one another. You will see an example of that shortly.

Jobs have a property called steps, and this is where you declare tasks. The tasks run in the order in which they are declared. The following listing is a pipeline that contains the trigger you created earlier, along with the transpile and deployment tasks (pipeline/single-job-resource-deployment-ppl.yml).

Listing 7.9 Pipeline deploying your Bicep template to Azure

trigger:
  - main
 
jobs:
  - job: publishbicep
    displayName: Publish bicep files as pipeline artifacts
    steps:
      - bash: az bicep build --file ./deployment/main.bicep
        displayName: "Transpile Bicep"
 
      - task: AzureResourceManagerTemplateDeployment@3
        displayName: Deploy Main Template
        inputs:
            azureResourceManagerConnection: "TestEnvironment"    
            deploymentScope: "Subscription"
            location: "westeurope"
            templateLocation: "Linked artifact"
            csmFile: "./deployment/main.json"
            csmParametersFile: "./deployment/test.parameters.json"
            deploymentMode: "Incremental" 

The service connection used

This snippet contains a pipeline, triggered when changes are pushed to the main branch of your version control repository. The pipeline has a jobs property containing a single job. This job has a steps property containing two tasks, the Transpile Bicep and Deploy Main Template tasks. It is a good idea to commit and push your changes now.

The preceding example only has one job, but you can add more. The following example shows two jobs and includes the dependsOn property to make the jobs not run in parallel:

jobs:
- job: JobA
  steps:
  - script: echo hello from JobA
- job: JobB
  dependsOn: JobA
  steps:
  - script: echo hello from JobB

Here we have two jobs named JobA and JobB. The dependsOn property on JobB will make sure that it will only start running when JobA has successfully finished.

For your pipeline to work, you need a service connection. This service connection will allow you to communicate with Azure from within Azure DevOps. The following section explains more about service connections.

7.4.4 Creating service connections

A service connection is a bridge from Azure DevOps to another system. There are many types of service connections, each designed to connect Azure DevOps with a particular kind of system. In listing 7.9 you saw a service connection being used to deploy the infrastructure.

For the purposes of the pipeline you created in the previous section, you need to create an Azure Resource Manager service connection. This type of service connection allows you to communicate with the Azure Resource Manager. The service connection can be scoped to a subscription in Azure, as you will see in a bit.

According to Microsoft’s recommendations, test and production environments are ideally separated on different Azure subscriptions. This means that, for the Toma Toe example, you need to create two service connections: one targeting a test subscription and one targeting a production subscription. If you don’t have two different subscriptions, you can set your production service connection to target the same subscription as the test service connection and still follow along with all the upcoming examples.

In Azure DevOps, navigate to your project settings and click Service Connections. Next, click the New Service Connection button to create a new service connection.

Figure 7.5 shows the screen that opens when you create a new service connection. For the first service connection, select the test Azure subscription and leave the Resource Group drop-down empty. Name the service connection TestEnvironment. Make sure that Grant Access Permission to All Pipelines is checked. If you don’t do that, you will need to allow every new pipeline to use this service connection.

Figure 7.5 Creating a service connection in Azure DevOps

Now repeat these steps but target the production subscription and name the service connection ProductionEnvironment.

Permissions for creating service connections

You will need sufficient permissions to create a service connection. That’s because this process will try to create a service principal in Azure, and that service principal will be given permissions on your resources in Azure. If you do not have enough permissions to create these Azure Resource Manager service connections, you can ask your IT department to create two service principals and use their details to create the service connections. That way, you don’t need elevated permissions in Azure yourself.
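
For reference, creating such a service principal with the Azure CLI could look like the following sketch (the name, role, and scope are assumptions; your organization may grant narrower permissions):

# Create a service principal with Contributor rights on one subscription.
# The output contains the appId, password, and tenant values needed to
# set up the service connection manually in Azure DevOps.
az ad sp create-for-rbac \
  --name tomatoe-deployments \
  --role Contributor \
  --scopes /subscriptions/<your-subscription-id>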

Service connections can be decorated with barriers and toll booths if you like, in the form of approvals and checks. These approvals and checks allow you to configure requirements that must be met before the bridge is crossed. These requirements need not involve human interaction; they could involve a call to some IT system, for example. It is common to protect deployments to a production environment so that every deployment is checked and approved according to your standards.

Approvals and checks are validated just prior to when a stage starts. Azure DevOps will investigate which service connections are used and whether any approvals and checks are configured for these service connections. All approvals and checks must validate successfully for the stage to start.

To add an approval to your existing service connection, open the production service connection details. There you can find the Approvals and Checks menu, which is hidden under a button with three dots next to the Edit button, as shown in figure 7.6.

Figure 7.6 Adding an approval on a service connection

Select the Approvals and Checks option in the menu. In the view that opens, you can see that there are various options to choose from (figure 7.7).

Figure 7.7 The Approvals and Checks UI shows different options you can use.

The Approvals and Checks view shows multiple options that can be used to protect and verify a deployment. There is, for example, the Business Hours option, which lets you set the hours in which this connection can be used. Another option is to call a REST API and act on the result, or to query Azure Monitor for alerts. Click the Approvals option, and you’ll be shown the dialog box in figure 7.8.

Figure 7.8 Adding an approval to a service connection

Figure 7.8 shows the configuration of an approval. In this case, both Eduard Keilholz and Erwin Staal must approve before the pipeline can proceed. If the pipeline is not approved within 30 days, the pipeline will be cancelled.

Once you have created the service connections, your pipeline can communicate with Azure. This was the last step required to make your pipeline work. Let’s go to the Azure DevOps portal to configure it, so it can run your pipeline.

7.4.5 Configuring Azure DevOps to run your pipeline

Before Azure DevOps can run your pipeline, you must first make it aware of the pipeline. Make sure you have committed and pushed the pipeline you created in the previous section, so you can use it in Azure DevOps. Navigate to the Azure DevOps portal, and choose Pipelines in the menu. Click the New Pipeline button to create a new pipeline.

You will now get to pick the repository that contains your pipeline definition file, as shown in figure 7.9. Select Azure Repos Git. As you can see, Azure DevOps can also work with external repositories located at GitHub or Bitbucket, for example.

Figure 7.9 Choose your version control system while creating a new pipeline.

In the next step, select the repository you are using for this project (figure 7.10).

Figure 7.10 Select the repository where the pipeline file resides.

Next you need to configure your pipeline. Since your pipeline definition file is already available in source control, choose Existing Azure Pipelines YAML File. Then select the pipeline file you created earlier (pipeline/single-job-resource-deployment-ppl.yml) and click Continue.

Figure 7.11 shows the window presented when you add a new pipeline from an existing YAML file, and where you select the desired YAML file. Once you have confirmed the creation of the pipeline by clicking the Continue button, the window closes and your pipeline details will be shown. At this point, you can click the Run button to invoke the pipeline for the very first time. Also, if you make changes to a file in your repository, and you commit and push the changes, this pipeline will be triggered.

Figure 7.11 Creating a new pipeline in Azure DevOps

You have now created your first pipeline, which transpiles Bicep files to ARM template JSON files and deploys them to Azure. However, this does not fully cover the requirements of the Toma Toe deployment. You haven’t yet deployed to multiple regions, and you may want to allow manual intervention in your pipeline, to approve deployment to a production environment, for example. The remainder of this chapter expands the pipeline so it meets the requirements of the Toma Toe website.

7.5 Adding logical phases to your pipeline

The pipeline you created earlier in this chapter works fine, but it is not the final product that will solve the Toma Toe case. The pipeline compiles and deploys the IaC template, but it does not deploy to multiple regions, nor does it create the Traffic Manager. In this section, you’ll expand the pipeline so it contains the logical phases required for the Toma Toe case.

7.5.1 Identifying the logical phases

It’s important to think about the logical phases your pipeline should contain. In an Azure DevOps pipeline, each such phase is called a stage. Pipelines can contain one or more stages, and you can configure these stages to depend on one another. Stages are major divisions in your pipeline, used to organize your jobs. Good examples of stages are “Compile this app,” “Deploy to test,” or “Deploy to production.”

A stage can depend on a preceding stage, allowing you to take control over when a stage will run. By default, stages run sequentially. The following example will run the pipeline stages sequentially:

stages:
- stage: CompileApp
  jobs:
  - job:
    ...
 
- stage: DeployWestEu
  jobs:
  - job:
    ...

The previous example shows two stages: CompileApp and DeployWestEu. Because no dependency was defined, the pipeline will run these stages using the default, which is sequential.

The following snippet is the same pipeline except for the dependsOn: [] property in the second stage:

stages:
- stage: CompileApp
  jobs:
  - job:
    ...
 
- stage: DeployWestEu
  dependsOn: []
  jobs:
  - job:
    ...

By adding dependsOn: [], you remove the implicit dependency on the first stage, causing your stages to run in parallel. Note that doing this for this example would actually break it: you cannot deploy an application before it has been compiled.

You can also make stages explicitly depend on one or more other stages. The following example is a fan-out and fan-in example. It will first compile the application, then deploy it into two Azure regions, and finally verify the deployment.

stages:
- stage: CompileApp
 
- stage: DeployWestEu
  dependsOn: CompileApp
 
- stage: DeployJapanEast
  dependsOn: CompileApp
 
- stage: VerifyProductionDeployment
  dependsOn:
  - DeployWestEu
  - DeployJapanEast

The preceding example shows a pipeline with four stages. The first stage (CompileApp) will run when the pipeline is triggered. The second and third stages (deploying to West Europe and Japan East) will run in parallel, but only when the CompileApp stage has completed successfully. Finally, the fourth stage (which verifies the production deployment) will run only when both the deployment to West Europe and the deployment to Japan East have completed successfully.

For the Toma Toe case, you can identify three stages. The first stage will contain a job that transpiles the Bicep files, the second stage will deploy the infrastructure to a test environment, and the third stage will deploy the infrastructure to a production environment.

It is important to know that stages and jobs run in different processes. They therefore often run on different underlying virtual machines. The result is that when you create a file in one stage, for example, it is not automatically available in another stage. To solve this problem, you can make use of a concept called a pipeline artifact.

7.5.2 Accessing artifacts from different jobs

Stages and jobs run within their own processes in the pipeline. When they are finished, any output that was created during their run will be deleted. Each run of a stage or job therefore starts with a clean slate. Whenever you want to make a file available outside of the current stage or job in the pipeline, you need to publish it as a pipeline artifact. A pipeline artifact has a name, and you can use that name in later stages or jobs to download the artifact.
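
Azure Pipelines also has a short notation for publishing and downloading artifacts, shown in the following sketch (job and artifact names are examples); the full task syntax you will use in this chapter appears in listings 7.10 and 7.11:

jobs:
  - job: produce
    steps:
      # Publish a folder as a named pipeline artifact (short notation)
      - publish: $(Build.ArtifactStagingDirectory)/arm-templates
        artifact: arm-templates

  - job: consume
    dependsOn: produce
    steps:
      # Download that artifact from the current pipeline run
      - download: current
        artifact: arm-templates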

In the Toma Toe case, you will require a pipeline artifact. The first stage is going to transpile Bicep files, creating an ARM template JSON file, and the remainder of the pipeline is going to deploy this JSON file in jobs organized in different stages.

Besides needing the file in other stages and jobs, there is another important benefit of using the pipeline artifact in this case. You could have called the bicep build command in every job and generated the JSON files again and again. However, you should not do that, because you want the pipeline to be reliable and have a consistent outcome. What could happen here is that in between the different stages or jobs, the version of Bicep could be updated, and the result of the Bicep build command could therefore lead to a slightly different ARM template. By reusing the ARM template that is output from your build stage in all the stages still to be executed, you ensure a consistent outcome.
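
If you want to rule out version drift within a single run even further, you can also pin the Bicep version explicitly at the start of a job. A sketch (the version number is an example):

# Install a specific Bicep version so every job transpiles identically
az bicep install --version v0.4.1008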

Now that you know how to organize stages and make files from one stage or job available in another, let’s work on the first stage. This stage will transpile the Bicep file and publish the resulting ARM template as pipeline artifact.

7.5.3 Transpiling Bicep in a pipeline stage

You can now separate the jobs of your pipeline into separate files, called pipeline templates. There are two advantages to doing so. First, a separate file is reusable across your pipelines. Second, the individual pipeline files stay smaller than a pipeline declared in a single file, making them more readable and maintainable.

Listing 7.10 shows a first pipeline template example (pipeline/templates/transpile-job.yml). This template runs the Bicep build, moves the result to a particular folder, and then creates a pipeline artifact. It is good practice to place these templates in a separate folder within your pipelines folder called “templates”.

Listing 7.10 Transpile a Bicep file and publish it as a pipeline artifact

parameters:
  - name: artifactName
    type: string
    default: "arm-templates" 
 
steps:
  - bash: az bicep build --file ./deployment/main.bicep
    displayName: "Transpile Main Bicep"
 
  - task: CopyFiles@2
    displayName: "Copy JSON files to: $(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
    inputs:
      SourceFolder: "deployment"
      Contents: "**/*.json"
      TargetFolder: "$(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
 
  - task: PublishPipelineArtifact@1
    displayName: "Publish Pipeline Artifact"
    inputs:
      targetPath: "$(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
      artifact: "${{parameters.artifactName}}"

The preceding listing shows a template YAML file containing one parameter. Parameters can be used to pass values into the template when you use it. In this case, the parameter holds the name of the artifact the template will publish; it is of type string and has a default value of arm-templates. If your parameter has a default value, you can also use the following short notation:

parameters:
- artifactName: "arm-templates"

The remainder of the template contains the steps property of a job with three tasks. The first task transpiles the main.bicep file into main.json. The second task copies that JSON file to an artifact staging directory used to collect files that you want to publish as pipeline artifacts. The final task publishes the artifact staging directory as a pipeline artifact.

7.5.4 Deploying a template from a pipeline artifact

You’ve created a job that transpiles the Bicep file into JSON, so now let’s work on a job that deploys this JSON file (pipeline/templates/deploy-arm.yml).

Listing 7.11 Deploying infrastructure

parameters:
  - name: serviceConnectionName
    type: string
  - name: subscriptionId
    type: string
  - name: environmentName
    type: string
  - name: artifactName
    type: string
    default: "arm-templates"
  - name: location
    type: string
  - name: locationAbbreviation
    type: string
steps:
  - task: DownloadPipelineArtifact@0
    displayName: "Download Artifact: ${{ parameters.artifactName }}"
    inputs:
      artifactName: "${{ parameters.artifactName }}"
      targetPath: $(System.ArtifactsDirectory)/${{ parameters.artifactName }}
 
  - task: AzureResourceManagerTemplateDeployment@3
    displayName: Deploy Main Template
    inputs:
      azureResourceManagerConnection: "${{ parameters.serviceConnectionName }}"
      deploymentScope: "Subscription"
      subscriptionId: "${{ parameters.subscriptionId }}"
      location: ${{ parameters.location }}
      templateLocation: "Linked artifact"
      csmFile: "$(System.ArtifactsDirectory)/${{parameters.artifactName}}/main.json"
      overrideParameters: >-
        -environmentName ${{ parameters.environmentName }}
        -location ${{ parameters.location }}
        -locationAbbreviation ${{ parameters.locationAbbreviation }}
      deploymentMode: "Incremental"

The name of the service connection that you created in section 7.4.4.

The ID of the subscription that holds the test resources.

The preceding listing contains two tasks. The first task downloads the pipeline artifact, making the JSON files available in the current job. The second task creates a new deployment in Azure. Because this template doesn’t use a parameters file, the environment name, location, and location abbreviation are passed as override parameters. The deployment will, as described in the Bicep templates, create a resource group and deploy the App Service plan and the web app in that resource group.

The two templates you just created don’t do much on their own. You need to create one more pipeline file that will use these two templates (pipeline/multi-stage-resource-deployment-ppl.yml). Add a new pipeline file in the same folder where you placed the previous pipeline, and call this one multi-stage-resource-deployment-ppl.yml. The two templates will each be used in their own stage, so this pipeline contains two stages: one that creates the ARM template using transpile-job.yml, and one that deploys that template to your test environment using deploy-arm.yml.

Listing 7.12 A multistage pipeline

trigger:
  - main
 
stages:                                                         
  - stage: build
    displayName: Publish Bicep Files
    jobs:
      - job: publishbicep
        displayName: Publish bicep files as pipeline artifacts
        steps:
          - template: ./templates/transpile-job.yml             
 
  - stage: deployinfratest
    dependsOn: build
    displayName: Deploy to test
    jobs:
      - job: deploy_us_test
        displayName: Deploy infra to US region test
        steps:
          - template: ./templates/deploy-arm.yml
            parameters:                                        
              serviceConnectionName: "TestEnvironment"
              subscriptionId: "<your-subscription-id>"
              environmentName: "test"
              location: "eastus"
              locationAbbreviation: "us"

This pipeline contains multiple stages.

Referencing another template using the template keyword

When a template requires parameters, they are passed along.

The previous pipeline you created, single-job-resource-deployment-ppl.yml, only used a single job and no stages. This pipeline does use stages, which is why you see the stages keyword. The pipeline contains two stages, each having a single job. Instead of defining the tasks of those jobs in this file, you use the template keyword in the steps section to reference another pipeline file. The first template, transpile-job.yml, only has a single parameter, which has a default value. The second template has mandatory parameters, and values for each are passed along with the template reference. The result of running this pipeline should be identical to that of single-job-resource-deployment-ppl.yml. However, this version is more readable, and you can reuse the templates, as you will see in a bit.

One aspect of the Toma Toe infrastructure has not been discussed yet: the Azure Traffic Manager. Let’s dig into that.

7.6 Adding the Traffic Manager

It’s now time to create the Azure Traffic Manager Bicep template, and a YAML template to deploy it. Listing 7.13 contains the Bicep module for creating the Traffic Manager (deployment/Network/trafficmanagerprofiles.bicep). Remember that Traffic Manager is an Azure service that can be used to route user traffic in different ways. In the Toma Toe case, you will use it to route each user to the nearest Azure App Service to optimize performance.

Listing 7.13 Creating the Azure Traffic Manager

param systemName string = 'tomatoe'
 
@allowed([
    'dev'
    'test'
    'acc'
    'prod'
])
param environmentName string
 
resource trafficManager 'Microsoft.Network/trafficmanagerprofiles@2018-08-01' = {
    name: '${systemName}-${environmentName}'
    location: 'global'                                            
    properties: {
        trafficRoutingMethod: 'Geographic'                        
        dnsConfig: {
            relativeName: '${systemName}-${environmentName}'
            ttl: 60
        }
        monitorConfig: {
            profileMonitorStatus: 'Online'
            protocol: 'HTTPS'
            path: '/'
            port: 443
            intervalInSeconds: 30
            toleratedNumberOfFailures: 3
            timeoutInSeconds: 10
        }
        endpoints: [
            {
                name: 'eur'
                type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
                properties: {
                    target: '${systemName}-${environmentName}-we-app.azurewebsites.net'
                    weight: 1
                    priority: 1                                  
                    endpointLocation: 'West Europe'
                    geoMapping: [
                        'GEO-EU'                                 
                    ]
                }
            }
            {
                name: 'asi'
                type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
                properties: {
                    target: '${systemName}-${environmentName}-asi-app.azurewebsites.net'
                    weight: 1
                    priority: 2
                    endpointLocation: 'East Asia'
                    geoMapping: [
                        'GEO-AS'
                        'GEO-AP'
                        'GEO-ME'
                    ]
                }
            }
            {
                name: 'global'
                type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
                properties: {
                    target: '${systemName}-${environmentName}-us-app.azurewebsites.net'
                    weight: 1
                    priority: 3
                    endpointLocation: 'East US'
                    geoMapping: [
                       'WORLD'
                    ]
                }
            }
        ]
    }
}

A Traffic Manager is deployed globally.

“Geographic” is used as the routing method.

The priority is used to indicate the order of evaluation.

The geoMapping array indicates where a user’s request must originate from for that user to be directed to this endpoint.

You can see that one resource is created: an Azure Traffic Manager. The location is set to global, which is the only valid value for a Traffic Manager. In the endpoints array, three endpoints are added. The priority property defines the order in which Azure evaluates the endpoints. The three endpoints configured here first check whether a visitor comes from Europe; if so, the visitor is directed to the West Europe datacenter. Then a similar check is performed for the Asia Pacific area. If neither of these locations matches the origin of the visitor (they come from somewhere else in the world), the visitor is directed to the East US datacenter.
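
Once the profile is deployed, you can check which endpoint it hands out from your location. A quick sketch, assuming the test environment (the DNS name follows from the relativeName property in listing 7.13):

# Resolve the Traffic Manager profile's DNS name; the answer should point
# to the App Service closest to where you run this command
nslookup tomatoe-test.trafficmanager.net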

The next Bicep template will create the resource group for the Traffic Manager and use the module to deploy the Traffic Manager inside that resource group (deployment/trafficmgr.bicep). By adding this additional template, we keep things small and therefore more readable and reusable.

Listing 7.14 Adding a deployment file for the Traffic Manager

targetScope = 'subscription'
 
param systemName string = 'tomatoe'
 
@allowed([
    'dev'
    'test'
    'acc'
    'prod'
])
param environmentName string
 
resource resourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
    name: '${systemName}-${environmentName}'
    location: deployment().location
}
 
module trafficManagerModule 'Network/trafficmanagerprofiles.bicep' = {
    name: 'trafficManagerModule'
    scope: resourceGroup
    params: {
        systemName: systemName
        environmentName: environmentName
    }
}

This template first creates the resource group in a certain Azure region—that is a mandatory property for a resource group, even if the resource inside has its location set to global like the Traffic Manager does. Once the resource group is there, the template will use the trafficmanagerprofiles.bicep module to create the Traffic Manager in that group. Now that all the Bicep files are in place, let’s work on a YAML template that deploys the Traffic Manager.

7.6.1 Deploying the Traffic Manager

Before you can deploy the Traffic Manager, you first need to transpile the Bicep file just created into an ARM template. That process is identical to what you did earlier with the main.bicep template. Open the transpile-job.yml file and add the following snippet under the bash task that transpiles main.bicep, on line 7:

  - bash: az bicep build --file ./deployment/trafficmgr.bicep 
    displayName: "Transpile Traffic Manager Bicep"

Now that the Bicep file is transpiled and published in the pipeline artifact, you can use that to deploy it. The following listing will download the pipeline artifact and then start a new deployment (pipeline/templates/traffic-manager.yml).

Listing 7.15 Deploying the Traffic Manager

parameters:
  - name: serviceConnectionName
    type: string
  - name: subscriptionId
    type: string
  - name: systemName
    type: string
    default: "tomatoe"
  - name: environmentName
    type: string
  - name: artifactName
    type: string
    default: "arm-templates"
  - name: location
    type: string 
 
steps:
  - task: DownloadPipelineArtifact@0
    displayName: "Download Artifact: ${{ parameters.artifactName }}"
    inputs:
      artifactName: "${{ parameters.artifactName }}"
      targetPath:
           $(System.ArtifactsDirectory)/${{ parameters.artifactName }}
 
  - task: AzureResourceManagerTemplateDeployment@3
    displayName: Deploy Main Template
    inputs:
      azureResourceManagerConnection:
           "${{ parameters.serviceConnectionName }}"
      deploymentScope: "Subscription"
      subscriptionId: "${{ parameters.subscriptionId }}"
      location: ${{ parameters.location }}
      templateLocation: "Linked artifact"
      csmFile: "$(System.ArtifactsDirectory)/${{ parameters.artifactName }}/trafficmgr.json"
      overrideParameters: -environmentName ${{parameters.environmentName}}
      deploymentMode: "Incremental"

The preceding listing downloads the pipeline artifact containing the compiled Bicep files. Then it runs the deployment of the Traffic Manager. Again, this deployment will create a resource group and provision the Traffic Manager inside that group.

You have now learned how to create a pipeline, and how to combine operations in smaller files to make them more readable and reusable. If you followed along with this entire chapter, you now have a pipeline folder structure that looks like this:

+-- pipeline
|   +-- templates
|   |   +-- deploy-arm.yml
|   |   +-- traffic-manager.yml
|   |   +-- transpile-job.yml
|   +-- single-job-resource-deployment-ppl.yml
|   +-- multi-stage-resource-deployment-ppl.yml

The final section in this chapter will complete the Toma Toe case. You still need to deploy the application to multiple regions, use the Traffic Manager template, and deploy to the production environment. Let’s see how that can be done!

7.7 Creating a real-world example pipeline

Now that you have learned about the basic concepts of a pipeline in Azure DevOps, let’s create a fully functional multistage, multiregion pipeline that uses the templates created earlier in this chapter. The pipeline you will create looks like the schema in figure 7.12.

Figure 7.12 Schema of the desired pipeline

You can see that the pipeline is separated into three stages. The first is the build stage, which transpiles all the Bicep files. The second and third stages are deployment stages: first to test, and then to production. Using the legend, you can match the jobs in the three stages with the YAML snippets created earlier in this chapter. Let’s complete the pipeline.

7.7.1 Completing the pipeline

To complete the pipeline, you’ll need one additional pipeline file (pipeline/multi-region-resource-deployment-ppl.yml). This file will orchestrate the smaller pipeline templates created earlier in this chapter as shown in listing 7.16, and extend the multi-stage-resource-deployment-ppl.yml pipeline. Create a new file in the pipeline folder called multi-region-resource-deployment-ppl.yml.

Listing 7.16 The complete pipeline

trigger:
  - main
 
variables:                                                   
    TestSubscriptionId: "<your-subscription-id>"
    TestServiceConnectionName: "TestEnvironment"
 
    ProdSubscriptionId: "<your-subscription-id>"
    ProdServiceConnectionName: "ProductionEnvironment"
 
stages:
    - stage: build
      displayName: Publish Bicep Files
      jobs:
          - job: publishbicep
            displayName: Publish bicep files as pipeline artifacts
            steps:
                - template: ./templates/transpile-job.yml
    - stage: deployinfratest                                 
      dependsOn: build
      displayName: Deploy to test
 
      variables:
          environmentName: "test"
 
      jobs:
          - job: deploy_us_test
            displayName: Deploy infra to US region test
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.TestServiceConnectionName }}
                      subscriptionId: ${{ variables.TestSubscriptionId }}
                      location: "eastus"
                      locationAbbreviation: "us"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_eur_test
            displayName: Deploy infra to EUR region test
            dependsOn: deploy_us_test                        
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.TestServiceConnectionName }}
                      subscriptionId: ${{ variables.TestSubscriptionId }}
                      location: "westeurope"
                      locationAbbreviation: "we"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_asia_test
            displayName: Deploy infra to ASIA region test
            dependsOn: deploy_eur_test
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.TestServiceConnectionName }}
                      subscriptionId: ${{ variables.TestSubscriptionId }}
                      location: "eastasia"
                      locationAbbreviation: "asi"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_trafficmgr_test
            displayName: Deploy traffic manager test
            dependsOn: deploy_asia_test
            steps:
                - template: ./templates/traffic-manager.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.TestServiceConnectionName }}
                      subscriptionId: ${{ variables.TestSubscriptionId }}
                      location: "westeurope"
                      environmentName: ${{ variables.environmentName }}
 
    - stage: deployinfraprod                               
      dependsOn: deployinfratest
      displayName: Deploy to production
 
      variables:
          environmentName: "prod"
 
      jobs:
          - job: deploy_us_prod
            displayName: Deploy infra to US region prod
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.ProdServiceConnectionName }}
                      subscriptionId: ${{ variables.ProdSubscriptionId }}
                      location: "eastus"
                      locationAbbreviation: "us"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_eur_prod
            displayName: Deploy infra to EUR region prod
            dependsOn: deploy_us_prod
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.ProdServiceConnectionName }}
                      subscriptionId: ${{ variables.ProdSubscriptionId }}
                      location: "westeurope"
                      locationAbbreviation: "we"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_asia_prod
            displayName: Deploy infra to ASIA region prod
            dependsOn: deploy_eur_prod
            steps:
                - template: ./templates/deploy-arm.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.ProdServiceConnectionName }}
                      subscriptionId: ${{ variables.ProdSubscriptionId }}
                      location: "eastasia"
                      locationAbbreviation: "asi"
                      environmentName: ${{ variables.environmentName }}
 
          - job: deploy_trafficmgr_prod
            displayName: Deploy traffic manager prod
            dependsOn: deploy_asia_prod
            steps:
                - template: ./templates/traffic-manager.yml
                  parameters:
                      serviceConnectionName:
                           ${{ variables.ProdServiceConnectionName }}
                      subscriptionId: ${{ variables.ProdSubscriptionId }}
                      location: "westeurope"
                      environmentName: ${{ variables.environmentName }}

Variables are defined for values that are used multiple times.

This stage deploys the resources to the test environment.

The deployment to each region is not done in parallel.

This stage deploys the resources to the production environment.

The preceding listing describes the complete pipeline. The first stage, the build stage, contains one job with a single task that executes the transpile-job.yml file that will transpile the Bicep files and publish the products (ARM template JSON files) as pipeline artifacts.

Then two quite similar stages are executed; both use four jobs to deploy the infrastructure to either the test or the production environment. The first three jobs deploy the web app infrastructure in three different Azure regions, and the last job provisions the Traffic Manager, which is configured to distribute the load depending on the geographical location of the website visitors. This pipeline uses the templates multiple times; for example, deploy-arm.yml is used six times.

Some of the parameters that you need to pass along are not always different. For example, the subscriptionId parameter only has two distinct values, one for test and one for production, not six. To prevent having to list identical values multiple times, you can use variables, which can be defined at different scopes. This pipeline defines the serviceConnectionName and subscriptionId variables at the root scope, but defines the environmentName variable at the stage level. Depending on the scope, the variable holds a certain value. Variables can then be used in the pipeline with the ${{ }} notation. For the TestSubscriptionId variable, that becomes ${{ variables.TestSubscriptionId }}.

Note that a successful deployment of the preceding pipeline might not mean that everything is working. You might want to add steps to the pipeline to verify that. The next chapter will discuss testing in depth.

With this file committed and pushed to your remote Git repository, you can go back to the Azure DevOps portal and remove the previously created pipeline. Then add a new pipeline and use the pipeline/multi-region-resource-deployment-ppl.yml file. You can then run the pipeline. When you review the details of the running pipeline, you will see a representation similar to figure 7.13.

Figure 7.13 The visual representation of your pipeline in Azure DevOps

Figure 7.13 shows that the pipeline contains three stages. Each job in a stage is represented, and a status icon shows the current status of these jobs. You can click on a job to zoom in even further. This will take you to a new view showing all the details of your pipeline, including the tasks in the jobs and a log file showing you log information for each separate job.

It might be wise to now delete some of the resources you have just created. With six App Service plans, things get expensive quite quickly. Chapter 13 covers how to do that.
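
If you want to clean up right away, deleting the resource groups removes everything inside them. A sketch for the test environment (the group names follow the naming convention from main.bicep and trafficmgr.bicep):

# Remove the test resource groups and everything in them (no confirmation)
az group delete --name tomatoe-test-we --yes --no-wait
az group delete --name tomatoe-test-us --yes --no-wait
az group delete --name tomatoe-test-asi --yes --no-wait
az group delete --name tomatoe-test --yes --no-wait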

Summary