When writing software these days, it is common to also write tests. One of the main reasons for writing tests is to verify that the code works when you write it and that the code still works after making changes later on. You write tests to build confidence that a change does not negatively impact the quality of the code and that the software is ready for deployment into production.
The same applies to testing Infrastructure as Code. When you’re talking about writing software tests, you might be introduced to what is called the test pyramid. It describes various forms of testing and shows that each has its pros and cons. The layers within this pyramid can differ from application to application, depending on how you build yours. A simple example is shown in figure 9.1. In this pyramid, you see three layers: unit tests, integration tests, and UI tests. You could have more layers in your scenario; for example, you might add performance testing.
In software, unit tests make up the foundation of your test suite. Unit tests test a single unit in isolation, making sure that the specific unit works as expected. What is considered a unit can vary, but it will most likely be a single method or, at most, a single class. Unit tests are very fast to run, and they're the easiest to write of all the test layers in the pyramid. Therefore, unit tests will always, by far, outnumber the other test types.
The next layer in the pyramid is the integration tests. Most applications integrate with other systems, such as databases or filesystems. When you write unit tests, you leave these systems out to keep your tests isolated and your test runs fast. But because your application still communicates with these systems at runtime, you need to test how they work together. You'll want to have as few of these integration tests as possible, since they are harder to write than unit tests; for example, you'll need to set up a database for each test run. Integration tests also tend to be more error-prone than unit tests because more resources and network connections are involved, providing more areas where things can go wrong.
The top layer in this test pyramid is the UI tests, also known as end-to-end tests. UI tests validate whether the user interface works as expected. Tests could, for example, verify that user input triggers the right actions and that data is correctly shown to the user. The interface could be a web interface, but it could also be a REST API or a command-line interface. In the case of an API, you would talk to the API and validate, for example, the correctness of the returned JSON. These tests are often the hardest to build, as they involve the most moving parts and applications, and they take the most time to run.
Whenever you need to create a test, you should always find the lowest layer in the pyramid that can test what you need tested. The higher you climb in the pyramid, the slower the tests become to run. They also become harder to write and are more brittle. It also becomes increasingly harder to figure out why a test failed. That’s often easy to determine in a unit test, but it can take a day of work for a UI test.
Now let's apply this pyramid to IaC. You could create the test pyramid in figure 9.2 for testing IaC. It involves four types of tests: static analysis and validation, unit tests, integration tests, and end-to-end tests.
The following sections describe each of these layers in detail and will show you what tools are available. We’ll start at the bottom of the pyramid and look at some tools that fit the lowest category—static analysis and validation.
Tests that fall in this bottom layer of the test pyramid are the fastest to run. One of the main reasons is that they do not require your actual infrastructure to be deployed, which saves a lot of time. These tests only work on your IaC files.
Let’s start by looking at another Visual Studio Code extension. It may not strictly be a static analysis tool, but it definitely will help you create templates without errors.
As mentioned earlier in the book, Visual Studio Code is a great editor for writing ARM and Bicep templates. One of the reasons for that is the availability of extensions for both file types. In chapter 2 you were introduced to the extension for ARM templates and you saw how extensions are installed. In chapter 6, you learned about the Bicep extension.
Like the ARM template extension, the Bicep extension will help you by continuously validating the template while you’re writing. It will, for example, warn you when you use a particular parameter type in the wrong way. The extension also makes writing the templates easier, so you’ll make fewer mistakes.
To demonstrate how this works, we’ll create a simple storage account in Bicep. You will see how the extension can help you in various ways. Useful new features are being added all the time.
The required-properties autocompletion function is a useful feature that allows you to quickly write a working Bicep template. You start by declaring your resource, and when you type the = sign, you'll see the options shown in figure 9.3.

By default, the required-properties option will be selected. When you press Enter, Bicep will insert a snippet that contains all the properties you need to fill in, as shown in figure 9.4.
Now that all the required properties have been added, you can start filling them in. Using this function, you are sure never to miss a required property, and it saves you the time of looking them up in the documentation.
When you use the extension, you get what is called IntelliSense. It’s a general term for various features that help you write code more quickly with fewer errors. One of the features is called code completion. When you, for example, look for specific property names, the editor will show them to you, and you don’t need to look them up in the documentation, saving time and avoiding errors. Thanks to another IntelliSense feature, when you hover your cursor over an item, information on that property will be shown.
Figure 9.5 shows what you’d see if you were creating an output in Bicep to return the URL of the BLOB endpoint on a storage account. You could then use that value somewhere else in your deployment. It is easy to find the right property using autocompletion, and because you just need to press Enter or click the value, you won’t make any typing errors.
The extension also helps by showing available properties on resources that you create. On the storage account in figure 9.6, tags is not a required property, so it wasn't added previously. You can easily add it by creating an empty line in your template and then pressing Ctrl-spacebar. As you can see in figure 9.6, the extension will show you a list of available properties. Scroll down to the tags property and press Enter to insert it.
What is even more helpful is that the extension helps you find valid options for a property for the version of the resource you are creating. Let's take the sku's name as an example. If you press the spacebar after the property, you will be shown a list of options, as you can see in figure 9.7. This makes it much easier to find and enter the proper value for the sku's name property.
On the other hand, if you do enter the value manually and make a typo, the extension will alert you with a yellow squiggly line, as shown in figure 9.8.
Although these extensions are beneficial, you could go a step further in validating your templates before deploying them. Let’s see how that can be done using either PowerShell or the Azure CLI.
When you deploy a template to Azure using PowerShell, the Azure CLI, or Azure DevOps, the template is validated before it is deployed. When ARM validates a template, it checks whether the template is valid JSON and runs some other basic checks. However, you can also run that validation without deploying.
To run the check using PowerShell, you have the following commands at your disposal:

Test-AzResourceGroupDeployment
Test-AzDeployment
Test-AzManagementGroupDeployment
Test-AzTenantDeployment

If you have ever used PowerShell to deploy a template, you might recognize these commands. Here they start with Test-, but you use their New- equivalents during a deployment.
When you're using the Azure CLI, you use az deployment group create to deploy a template to a resource group. To run the validation, you just replace create with validate. The same applies to validating a deployment at the subscription, management group, or tenant level.
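For example, validating a subscription-level deployment looks like the following sketch; a location is required because subscription-level deployments store their metadata in a region, and the template filename here is just an illustration:

az deployment sub validate --location westeurope --template-file main.bicep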
Here is an example of an ARM template that would return an error on validation—it’s a storage account that has an incorrect name:
{ "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2019-04-01", "name": "ThisIsWrong", ❶ "location": "westeurope", "sku": { "name": "Premium_LRS" }, "kind": "StorageV2" }
❶ The name of this storage account is wrong.
As you may know, the name of a storage account can only contain lowercase letters or numbers. Using capitals, as in the preceding example, therefore throws an error when you run the validation command:
az deployment group validate --resource-group "BookExampleGroup" ➥ --template-file storageaccount.json
The preceding example uses the Azure CLI to run the validation on a resource group deployment. The output in figure 9.9 shows why the validation failed. It nicely specifies the rules for a storage account name so you can quickly fix the problem.
If you are using a newer version of the Azure CLI, the same approach would work for the following Bicep template:
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'ThisIsWrong'     ❶
  kind: 'StorageV2'
  sku: {
    name: 'Premium_LRS'
  }
  location: 'westeurope'
}
❶ The name of this storage account is invalid.
In the command, you just point to the Bicep file instead of an ARM template:
az deployment group validate --resource-group "BookExampleGroup" ➥ --template-file storageaccount.bicep
The Azure CLI runs a bicep build for you, and the output is identical.
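If you want to inspect the ARM template that Bicep generates, you can also run that build step yourself. The following sketch writes a storageaccount.json file next to the Bicep file:

az bicep build --file storageaccount.bicep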
Now that you know how to validate your templates, it is time to look at the ARM template test toolkit, which can help you enhance your static analysis of templates.
The ARM template test toolkit (ARM TTK) provides a set of default tests that check whether your templates use the recommended practices. This will help you avoid common problems in templates. When one of your templates does not comply with these recommended practices, a warning is returned. Often, a helpful suggestion is also presented to help you improve.
These are a few examples of the included tests:

Declared parameters must be referenced within the template.
Secure parameters should not have a hardcoded default value.
Locations should not be hardcoded but exposed through a parameter.
The apiVersion used for each resource should be a recent one.
A complete list of the included tests is available in Microsoft’s “Test cases for ARM templates” article (http://mng.bz/wopB).
ARM TTK runs on PowerShell, so make sure you have that installed on your machine. It runs on PowerShell Core, which means it works on Windows, Linux, and macOS.
To install ARM TTK, follow these steps:
Download the latest release of the test toolkit from https://aka.ms/arm-ttk-latest and extract it.
Open a PowerShell session and navigate to the folder where you just extracted the toolkit.
If your execution policy blocks scripts from the internet, you can unblock the script files by running the following command:
Get-ChildItem *.ps1, *.psd1, *.ps1xml, *.psm1 -Recurse | Unblock-File
Make sure you do that from the folder you extracted the toolkit to.
Import the module by running the following command:
Import-Module ./arm-ttk.psd1
Now that the toolkit is installed, you can run the tests. For example, you could run them against the template created in chapter 3. In that template, you created the various resources needed to run a web application: an App Service, a few storage accounts, a SQL Server, and more. If you haven't already downloaded those files, you can get them from the book's GitHub repository: http://mng.bz/J2QV. When you have them, navigate to the folder containing those files, and run the following command:
Test-AzTemplate -TemplatePath ./template.json
This command runs all the available tests against the template, and the result should be similar to the output in figure 9.10. Tests are being added to the tool all the time, and new versions of resources in Azure might be introduced, so you might get a slightly different result when running the tests.
In figure 9.10 you can see the result for each test that was run. Most of the tests succeeded, but there were also two errors, shown in red so they stand out. Below the name of the failing tests, you’ll find more information on why that particular test failed. In some cases, as with the first error in this example, it also provides hints on fixing it.
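While fixing a failing template, you don't have to rerun the whole suite each time. Test-AzTemplate also accepts a -Test parameter to run a single test by name, as in the following sketch:

Test-AzTemplate -TemplatePath ./template.json -Test "apiVersions Should Be Recent"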
In chapter 5 you learned about more advanced ARM templates, including linked templates. Linked templates allow you to break up a large ARM template into smaller pieces for better readability and reuse. ARM TTK can handle such a setup, but you need to cheat a little to make it work. Instead of specifying a specific template, you can point the tool to a particular folder or omit the -TemplatePath parameter altogether. The tool then searches for a default template named either azuredeploy.json or maintemplate.json. Unfortunately, the folder for chapter 5 contains neither, since all the templates are in the Composing or Resources folder, so the tool would throw an error if you ran it now. You can work around that by creating an empty file called maintemplate.json in the root folder for chapter 5. Then run the following command:
Test-AzTemplate
The output will be similar to what you saw when running the tests against a single file, but it now runs all tests against all the files by recursively going through the folder structure. After the testing is complete, you can go through the test results for all the files.
Running these tests on your local computer is very useful, but it’s even better to run them in a pipeline on every single change, using Azure DevOps for example. That way you are assured of high-quality templates all the time.
To run the tests, you can install the ARM TTK Extension by Sam Cogan from Visual Studio Marketplace (http://mng.bz/qYNN). Once you have installed it into your Azure DevOps environment, you can use it in a template as follows (armttk-pipeline.yml).
Listing 9.1 A YAML pipeline for running the ARM TTK tests
trigger:
- main

pool:
  vmImage: windows-latest     ❶

steps:
- task: RunARMTTKTests@1     ❷
  displayName: 'Run ARM TTK Tests'
  inputs:
    templatelocation: '$(System.DefaultWorkingDirectory)\Chapter_03\'
    resultLocation: '$(System.DefaultWorkingDirectory)\Chapter_03\Publish'
- task: PublishTestResults@2     ❸
  displayName: 'Publish ARM TTK Test results'
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**\*-armttk.xml'
    pathToSources: '$(System.DefaultWorkingDirectory)\Chapter_03\Publish'
    mergeTestResults: true
    failTaskOnFailedTests: true
  condition: always()     ❹
❶ Running this pipeline on Windows
❷ The ARM TTK task runs the tests using the template from chapter 3.
❸ The PublishTestResults task uploads the results to Azure DevOps.
❹ Set "condition" to "always()" so this task runs even when a test fails.
The first thing to note here is that this YAML pipeline needs to run on Windows, as the ARM TTK extension task does not support Linux or macOS. The pipeline then contains two steps. The first one runs the tests using the task that you just installed; in this example, it runs the tests against the ARM template created in chapter 3 of this book. The task outputs the test results in a format that you can push to Azure DevOps, and the second task in this pipeline takes care of that. Pay attention to the last line in that task, condition: always(). The first task, running the tests, fails when any of the tests fail. If you didn't set this condition, the second task would not run, but you always want to publish your test results, especially on a failure. The test results can then be viewed in Azure DevOps by going to the results of the pipeline and clicking Tests, as shown in figure 9.11.
This test run had three errors, and the results clearly show which tests failed. You can click on each of the three failing tests to display more details on the failure. As when running the ARM TTK tool locally using PowerShell, as shown earlier, you will then find more information on how to overcome the error.
You’ll sometimes want to go a bit further in your testing than what the tools we’ve discussed so far can do for you out of the box. When that’s the case, you’ll have to write your own tests. Pester is a helpful tool for doing that.
Pester is a tool that allows you to write tests in PowerShell. Since PowerShell is one of the easiest ways to interact with Azure and ARM templates, Pester is ideal for this job.
Before you can write your first test, you need to install Pester. To do that, use the following command in a PowerShell window on any platform:
Find-Module pester -Repository psgallery | Install-Module
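The examples in this chapter use the Pester 5 syntax, while Windows PowerShell ships with the much older version 3, so it's worth checking which versions are now available on your machine:

Get-Module -Name Pester -ListAvailable | Select-Object Name, Version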
Each template that you want to deploy will likely involve at least three files: the template itself, a parameter file for your test environment, and a parameter file for your production environment. The first simple test that you could write checks whether all the expected files are present; without them, starting a deployment is useless. The following listing shows what such a test looks like using Pester (file-validation.tests.ps1).
Listing 9.2 A Pester test to run file validation tests
Describe "Template Validation" { ❶ Context "Template Syntax" { ❷ It "Has a JSON template" { ❸ "azuredeploy.json" | Should -Exist } It "Has a test parameters file" { "azuredeploy.parameters.test.json" | Should -Exist } It "Has a production parameters file" { "azuredeploy.parameters.prod.json" | Should -Exist } } }
❶ The Describe keyword allows you to group your tests.
❷ The Context keyword enables you to group your tests even further.
❸ The It block is where you write an actual test.
The Describe keyword on the first line allows you to group tests, and you can have one or more Describes per file. The next line uses the Context keyword. In almost all cases, Describe and Context can be used interchangeably. You'll often name the Describe block after the function under test and then use one or more Contexts to group tests based on the aspect of the function you are testing. Inside the Context blocks, you write the actual tests using the It keyword. An It block should contain a single test. You can use the Should keyword to make an assertion inside an It block. The first test in this example checks whether the azuredeploy.json file exists by combining Should with -Exist.
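-Exist is only one of the assertion operators that Should supports. A few other common ones, sketched here with made-up values purely for illustration:

"strpestertest" | Should -Match "pester"
(1 + 1) | Should -Be 2
@("test", "prod") | Should -HaveCount 2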
You can run the tests in this example in a PowerShell session by navigating to the folder where the file resides, and then running the following command:
Invoke-Pester -Output Detailed
As long as your filename ends with tests.ps1, Pester will automatically discover the tests and run them. Automatic discovery allows you to quickly run all tests in all files in a particular folder. The -Output switch makes the command display detailed results. To run all the tests in a single file instead, use the following command:
Invoke-Pester -Script ./file-validation.tests.ps1 -Output Detailed
This command should show you the output in figure 9.12. It first shows how many tests were discovered in how many files. It then shows the outcome of each test, using your Describe and Context blocks to group the tests so you can easily find them in your files if you need to. In this example, all tests succeeded; if a test fails, it is shown in red instead of green. You can also see the duration of each test. A long-running test might indicate a problem, so it's good practice to keep an eye on that.
Another simple but helpful test is to verify the syntax of the template. Since an ARM template is valid JSON, you can use PowerShell to import the file and then create a PowerShell object from the JSON. That only works if the file contains valid JSON, so a missing comma or other syntax error will make the test fail. A file with such a test is shown in the following listing (syntax-validation.tests.ps1).
Listing 9.3 A Pester test to run file syntax tests
BeforeAll {     ❶
  $templateProperties = (Get-Content "azuredeploy.json" -ErrorAction SilentlyContinue |
    ConvertFrom-Json -ErrorAction SilentlyContinue)
}

Describe "Syntax Validation" {
  Context "The templates syntax is correct" {
    It "Converts from JSON" {
      $templateProperties | Should -Not -BeNullOrEmpty
    }
    It "should have a `$schema section" {
      $templateProperties."`$schema" | Should -Not -BeNullOrEmpty
    }
    It "should have a contentVersion section" {
      $templateProperties.contentVersion | Should -Not -BeNullOrEmpty
    }
    It "should have a parameters section" {
      $templateProperties.parameters | Should -Not -BeNullOrEmpty
    }
    It "must have a resources section" {
      $templateProperties.resources | Should -Not -BeNullOrEmpty
    }
  }
}
❶ The BeforeAll block allows you to run code before any test runs.
This example loads the file contents from disk and then tries to convert them into a PowerShell JSON object. It does that inside a BeforeAll block. As you can probably guess from the name, whatever code you put in this block is executed before any test runs. After the BeforeAll block, this example runs a series of tests to verify that certain properties of an ARM template are present. It does that by again using the Should keyword, this time combined with -Not and -BeNullOrEmpty.
Since these tests are written in PowerShell, you can utilize all the functionality available in PowerShell to create tests. That includes the template validation you saw earlier, allowing you to easily combine custom test logic and third-party tooling. The following example shows how you could use the Test-AzResourceGroupDeployment cmdlet in a custom Pester test (template-validation.tests.ps1).
Listing 9.4 A Pester test to run template validation tests
# This test requires an authenticated session.
# Use Connect-AzAccount to login.

BeforeAll {
  New-AzResourceGroup -Name "PesterRG" -Location "West Europe" | Out-Null
}

Describe "Content Validation" {
  Context "Template Validation" {
    It "Template azuredeploy.json passes validation" {
      $TemplateParameters = @{}
      $TemplateParameters.Add('storageAccountName', 'strpestertest')
      $output = Test-AzResourceGroupDeployment `
        -ResourceGroupName "PesterRG" `
        -TemplateFile "azuredeploy.json" @TemplateParameters
      $output | Should -BeNullOrEmpty
    }
  }
}

AfterAll {     ❶
  Remove-AzResourceGroup -Name "PesterRG" -Force | Out-Null
}
❶ The AfterAll keyword allows you to run code after all tests have run.
The preceding test first uses the BeforeAll block to create a new resource group, which the Test-AzResourceGroupDeployment cmdlet needs as input. It then contains one test, which creates a PowerShell object to hold the parameters you supply when running the validation. The last line in the test verifies that the output is empty, which means there were no validation errors. Finally, an AfterAll block is responsible for removing the resource group. Just as a BeforeAll block runs before all tests, the AfterAll block runs after all tests are complete, even if one or more have failed. As resources in Azure might cost money even when you don't use them, the AfterAll block is perfect for cleaning up after your tests and keeping costs to a minimum.
The next layer up in the test pyramid is the unit tests. Unit tests in IaC are a little different from unit tests in software. Let’s dive in.
In software, unit tests are supposed to test a single unit in isolation, but that makes less sense when it comes to infrastructure. While you may still want to test small pieces, you can't do so in isolation, as you always need to contact Azure. You only know whether your template is valid when you actually deploy it, not just a part of it.
In this book, we will consider a unit test to be a test that deploys a small part of the infrastructure and runs validations on that. Good candidates for that could be one of the linked templates from chapter 5 or a Bicep module from chapter 6. The following listing shows the creation of a storage account using Bicep (Unittests/storageaccount.bicep). It uses a custom expression that you could write a test for.
Listing 9.5 A Bicep template to create a storage account
param storageAccountName string
param location string

@allowed([
  'Premium'
  'Standard'
])
param sku string

var premiumSku = {
  name: 'Premium_LRS'
}
var standardSku = {
  name: 'Standard_LRS'
}
var skuCalculated = sku == 'Premium' ? premiumSku : standardSku     ❶

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: storageAccountName
  location: location
  sku: skuCalculated
  kind: 'StorageV2'
}
❶ A custom expression that specifies which SKU to use based on a parameter
The preceding Bicep module deploys a storage account. To do that, it allows you to supply a parameter to set the SKU as a string. That input is then used to select one of the two predefined SKU objects using two variables.
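To give an idea of how this module could be consumed, here is a minimal sketch of a parent template calling it. The module path, symbolic name, and parameter values are assumptions for illustration only:

module storage 'storageaccount.bicep' = {
  name: 'storageDeployment'
  params: {
    storageAccountName: 'strunittestexample'
    location: 'westeurope'
    sku: 'Standard'
  }
}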
The following test verifies this behavior by deploying the template, fetching the storage account details, and running some validation on that (Unittests/unit-test.tests.ps1).
Listing 9.6 A Pester unit test
# This test requires an authenticated session.
# Use Connect-AzAccount to login.

BeforeAll {     ❶
  $resourceGroupName = 'PesterRG'
  New-AzResourceGroup -Name $resourceGroupName -Location "West Europe" -Force | Out-Null

  $storageAccountName = 'strpestertest'
  $TemplateParameters = @{
    storageAccountName = $storageAccountName
    location = 'West Europe'
    sku = 'Premium'
  }
  New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
    -TemplateFile "storageaccount.bicep" @TemplateParameters

  $storageAccount = Get-AzStorageAccount -Name $storageAccountName `
    -ResourceGroupName $resourceGroupName
}

Describe "Deployment Validation" {
  Context "StorageAccount validation" {
    It "Storage account should exist" {
      $storageAccount | Should -not -be $null
    }
    It "Storage account should have name 'Premium_LRS'" {
      $storageAccount.Sku.Name | Should -Be "Premium_LRS"     ❷
    }
  }
}

AfterAll {
  Remove-AzResourceGroup -Name "PesterRG" -Force | Out-Null
}
❶ The BeforeAll block deploys the template.
❷ This test verifies whether the expression returns the correct value.
In the BeforeAll block of this test file, the test first creates a resource group to hold the test resources. Next, a PowerShell object is initialized to store the parameters that are passed to the New-AzResourceGroupDeployment cmdlet, which deploys the template from listing 9.5. The last step in the BeforeAll block is to retrieve the created storage account so the tests can run against it.

The test file contains two tests. The first one verifies that the storage account was created, using the Should -not -be $null syntax. The second test checks whether the value passed to the sku parameter translated correctly into the SKU name on the storage account.
You could easily add more tests to verify other properties of the created storage account. It is, however, important to always think about what you are testing. The preceding example is a valid scenario, as it tests an admittedly simple custom expression. In contrast, it would not be helpful to verify the kind of the storage account, as its value is hardcoded in the template. If you were to write a test for that, you would effectively be testing the Azure Resource Manager itself, and there is no need for that, since that's something you bought and did not build yourself. The declarative nature of ARM templates and Bicep also ensures that deployments are idempotent and thus guaranteed to always reach the same end state. The next level in the test pyramid is the integration tests.
In software, an integration test is used to verify the correctness of the relationship between two or more components. In infrastructure, you can use such tests to check how multiple parts of your infrastructure work together. Compared with unit tests, a downside of integration tests is that they need more infrastructure to be deployed, and they take more time to run.
The following example deploys two virtual networks, often abbreviated as vnets, with a peering between them. The peering connects the two vnets and lets traffic flow between them. You can write a test that verifies whether the state of the peering is correct after the deployment. First, though, you need to create some templates to deploy the resources (Integration-testing/vnet.bicep).
Listing 9.7 A Bicep template to create a virtual network
param vnetName string
param addressPrefixes array
param subnetName string
param subnetAddressPrefix string
param location string = resourceGroup().location

resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' = {     ❶
  location: location
  name: vnetName
  properties: {
    addressSpace: {
      addressPrefixes: addressPrefixes
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: subnetAddressPrefix
        }
      }
    ]
  }
}
❶ This definition creates a virtual network with a single subnet.
The preceding listing shows a Bicep module that deploys a single vnet. It accepts a few parameters, like the name and IP ranges for both the vnet and the first subnet.
Next, you need a template that deploys the peering between the two vnets, as shown in the following listing (Integration-testing/vnet-peering.bicep).
Listing 9.8 A Bicep template to create virtual network peering
param localVnetName string
param remoteVnetName string
param remoteVnetRg string

resource peer 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2020-05-01' = {     ❶
  name: '${localVnetName}/peering-to-remote-vnet'
  properties: {
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true
    allowGatewayTransit: false
    useRemoteGateways: false
    remoteVirtualNetwork: {
      id: resourceId(remoteVnetRg, 'Microsoft.Network/virtualNetworks', remoteVnetName)
    }
  }
}
❶ This resource creates peering between two virtual networks.
To create the peering between two virtual networks, you need to deploy the preceding resource twice: once on each virtual network, pointing to the other. The name of the resource determines which virtual network it is deployed onto, as it is prefixed with the local vnet's name. To assign the virtual network to connect to, you set the ID in the remoteVirtualNetwork object, using the resourceId() function to get that identifier.
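For the remote vnet in this example, the identifier that resourceId() returns looks like the following, with your own subscription ID in place of the placeholder:

/subscriptions/<subscription-id>/resourceGroups/rg-secondvnet/providers/Microsoft.Network/virtualNetworks/vnet-second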
Now that you have the two modules, you can use them in the main template as follows (Integration-testing/mainDeployment.bicep).
Listing 9.9 A Bicep template to deploy networking modules
targetScope = 'subscription'     ❶

param rg1Name string = 'rg-firstvnet'
param rg1Location string = 'westeurope'
param rg2Name string = 'rg-secondvnet'
param rg2Location string = 'westeurope'
param vnet1Name string = 'vnet-first'
param vnet2Name string = 'vnet-second'

resource rg1 'Microsoft.Resources/resourceGroups@2020-06-01' = {     ❷
  name: rg1Name
  location: rg1Location
}

resource rg2 'Microsoft.Resources/resourceGroups@2020-06-01' = {
  name: rg2Name
  location: rg2Location
}

module vnet1 'vnet.bicep' = {     ❸
  name: 'vnet1'
  scope: resourceGroup(rg1.name)
  params: {
    vnetName: vnet1Name
    addressPrefixes: [
      '10.1.1.0/24'
    ]
    subnetName: 'd-sne${vnet1Name}-01'
    subnetAddressPrefix: '10.1.1.0/24'
  }
}

module vnet2 'vnet.bicep' = {
  name: 'vnet2'
  scope: resourceGroup(rg2.name)
  params: {
    vnetName: vnet2Name
    addressPrefixes: [
      '10.2.1.0/24'
    ]
    subnetName: 'd-sne${vnet2Name}-01'
    subnetAddressPrefix: '10.2.1.0/24'
  }
}

module peering1 'vnet-peering.bicep' = {     ❹
  name: 'peering1'
  scope: resourceGroup(rg1.name)
  dependsOn: [
    vnet1
    vnet2
  ]
  params: {
    localVnetName: vnet1Name
    remoteVnetName: vnet2Name
    remoteVnetRg: rg2Name
  }
}

module peering2 'vnet-peering.bicep' = {
  name: 'peering2'
  scope: resourceGroup(rg2.name)
  dependsOn: [
    vnet2
    vnet1
  ]
  params: {
    localVnetName: vnet2Name
    remoteVnetName: vnet1Name
    remoteVnetRg: rg1Name
  }
}
❶ This template is deployed on the subscription scope.
❷ Two resource groups are created to hold the resources.
❸ Two individual virtual networks are created.
❹ The two virtual networks are connected by deploying peering.
On the first line of the preceding template, you set the targetScope to the subscription level. That allows you to first create the resource groups inside this template and then deploy the different modules into those resource groups. After the resource groups are created, the template continues by deploying the two virtual networks. When that is done, the peering between the two is deployed.
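Because of this subscription target scope, the template must be deployed with a subscription-level deployment command rather than a resource group deployment. Using PowerShell, that looks like this, which is exactly the command the test in listing 9.10 runs:

New-AzDeployment -Location "WestEurope" -TemplateFile "mainDeployment.bicep"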
With the templates in place, it is time to write the test as follows (Integration-testing/integrationtesting.tests.ps1).
Listing 9.10 A Pester test to validate network peering
# This test requires an authenticated session.
# Use Connect-AzAccount to login.

BeforeAll {
  New-AzDeployment -Location "WestEurope" -TemplateFile "mainDeployment.bicep"

  $vnetPeering = Get-AzVirtualNetworkPeering `
    -Name "peering-to-remote-vnet" `
    -VirtualNetworkName "vnet-second" `
    -ResourceGroupName "rg-secondvnet"
}

Describe "Deployment Validation" {
  Context "VNet peering validation" {
    It "Vnet peering should exist" {
      $vnetPeering | Should -not -be $null
    }
    It "Peering status should be 'Connected'" {
      $vnetPeering.PeeringState | Should -Be "Connected"     ❶
    }
  }
}

AfterAll {
  Remove-AzResourceGroup -Name "rg-firstvnet" -Force | Out-Null
  Remove-AzResourceGroup -Name "rg-secondvnet" -Force | Out-Null
}
❶ This test validates the virtual network peering.
In the BeforeAll block, the template is deployed. Here you see New-AzDeployment being used because the scope is set to the subscription level. When the deployment finishes, one of the virtual network peerings is retrieved for use in the tests. The first test then verifies that the virtual network peering exists. The second test checks whether the state of the peering is Connected.
You have probably noticed that running this test took a bit more time than the unit tests in the previous section. That is mainly because the deployment takes longer. In an integration test, you typically have more resources, and they depend on each other, so you can't deploy all of them in parallel. They are also a bit more expensive in terms of Azure cost, so you'll want to make sure you remove the resources automatically after each run. Now that you have seen an integration test, one level of the test pyramid remains: the end-to-end test.
In software, an end-to-end test could, for example, start at the UI of your application. You could enter some data into a form, save the form, and then verify whether that was successful by retrieving the data somehow. That would test your whole application, from the UI to the API and probably to some datastore like a database. You could use a similar approach for infrastructure.
In the following example, you'll build upon the integration test infrastructure. Imagine that you have a UI application that fetches data from an API. On Azure, you could pick the Azure App Service to run both. An additional requirement is that you don't want the API to be publicly available: it should only accept connections from your UI's App Service. To implement that, you could use the App Service's virtual network integration and then use a firewall to limit the traffic to the API. You could then write a test to verify whether that works after every deployment. This infrastructure is shown in figure 9.13.
While integration tests can take quite some time to run, end-to-end tests are even more time-intensive. You deploy more infrastructure and may even need to deploy an application as well. For that reason, you’ll want to make sure that you only write the tests that you need: those that add business value. As these tests run on complete infrastructures that take a long time to deploy, you may decide not to remove and recreate the infrastructure on each run but let it stay between runs. That will, of course, mean that you will have to pay for those resources, even when you are not actively running tests against them.
Let's dig into the example. You first need the two Azure App Services that host your UI and API. The following Bicep module creates those resources (End-2-end-testing/appservice.bicep).
Listing 9.11 A Bicep template to create an App Service
targetScope = 'resourceGroup'

param appName string
param subnetResourceId string
param location string = resourceGroup().location
param restrictAccess bool
param restrictAccessFromSubnetId string = ''

resource hosting 'Microsoft.Web/serverfarms@2019-08-01' = {     ❶
  name: 'hosting-${appName}'
  location: location
  sku: {
    name: 'S1'
  }
}

resource app 'Microsoft.Web/sites@2018-11-01' = {     ❷
  name: appName
  location: location
  properties: {
    serverFarmId: hosting.id
  }
}

resource netConfig 'Microsoft.Web/sites/networkConfig@2019-08-01' = {     ❸
  name: '${appName}/virtualNetwork'
  dependsOn: [
    app
  ]
  properties: {
    subnetResourceId: subnetResourceId
    swiftSupported: true
  }
}

resource config 'Microsoft.Web/sites/config@2020-12-01' = if (restrictAccess) {
  name: '${appName}/web'
  dependsOn: [
    app
  ]
  properties: {
    ipSecurityRestrictions: [
      {
        vnetSubnetResourceId: restrictAccessFromSubnetId
        action: 'Allow'
        priority: 100
        name: 'frontend'
      }
      {
        ipAddress: 'Any'
        action: 'Deny'
        priority: 2147483647
        name: 'Deny all'
        description: 'Deny all access'
      }
    ]
  }
}
❶ The definition for the Azure App Service plan
❷ The definition for the Azure App Service
❸ The definition for the network configuration on the Azure App Service
The preceding template is used as a module for both the UI and the API, but there are differences in how the resource should be deployed for each, which is why there is an if statement. That is explained in detail shortly.
The first resource created in the template is the App Service plan. An App Service plan is a container for one or more App Services. As you can see in the example, the performance of any App Service running within an App Service plan is defined on the plan itself; all App Services share its resources. In this example, you set that performance level to S1.
Then you deploy an App Service. That is the resource that holds either the UI or the API. It is again a relatively simple resource that needs a name and a location. You also need to reference the App Service plan to run on by setting the serverFarmId property. You then specify two resources that configure the networking part of the UI and API. First, you deploy a resource of type networkConfig, whose subnetResourceId property specifies the subnet in which you want the App Service to run. As you'll see later on, the UI and the API each use a separate subnet in two different virtual networks.
The second resource you need to deploy to configure the networking on the App Service is the config resource, which allows you to configure quite a lot of things on the App Service. In this example, you use the ipSecurityRestrictions array to configure network access; it allows you to define a set of rules that control incoming traffic. This template is used to create both the UI and the API, and since you only need to limit the incoming traffic on the API and not on the UI, an if statement acts on the restrictAccess Boolean. This Boolean lets you specify whether restrictions should be in place, which allows you to reuse the template. The ipSecurityRestrictions array in this example contains two entries. The first one allows traffic from a specific subnet: the one in which the UI resides. The second entry dictates that all other traffic should be denied.
Next, you need to make a few modifications to the vnet definition from the previous example (listing 9.7) to make this one work (End-2-end-testing/vnet.bicep).
Listing 9.12 A Bicep template to create a virtual network
targetScope = 'resourceGroup'

param vnetName string
param addressPrefixes array
param subnetName string
param subnetAddressPrefix string
param location string = resourceGroup().location

resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  location: location
  name: vnetName
  properties: {
    addressSpace: {
      addressPrefixes: addressPrefixes
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: subnetAddressPrefix
          serviceEndpoints: [     ❶
            {
              service: 'Microsoft.Web'
              locations: [
                '*'
              ]
            }
          ]
          delegations: [     ❷
            {
              name: 'Microsoft.Web.serverFarms'
              properties: {
                serviceName: 'Microsoft.Web/serverFarms'
              }
            }
          ]
          privateEndpointNetworkPolicies: 'Enabled'
          privateLinkServiceNetworkPolicies: 'Enabled'
        }
      }
    ]
  }
}

output vnetId string = vnet.id
output subnetId string = vnet.properties.subnets[0].id
❶ Enabling the service endpoint for Microsoft.Web
❷ Using delegation provides your App Service with a subnet to use.
Two settings were added to this definition. First, there is the serviceEndpoints section. A service endpoint in Azure allows you to state that traffic to a specific Azure resource should stay within the Microsoft network instead of routing over the public internet. A service endpoint is available for most Azure PaaS services. In this example, you use it to ensure that traffic from the UI to the API stays on the Microsoft backbone, which allows you to restrict the traffic, as you saw earlier. Since you are using an App Service in this example, the service property is set to Microsoft.Web. That value is different for every service you target; in the case of an Azure SQL database, for example, it would have been Microsoft.Sql.
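As a sketch, the equivalent serviceEndpoints entry for an Azure SQL database would differ only in the service name:

serviceEndpoints: [
  {
    service: 'Microsoft.Sql'
    locations: [
      '*'
    ]
  }
]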
The second added section is delegations, which you need to be able to run your App Service within your virtual network. Using a delegation, you effectively say to your App Service: "here is a subnet for you to use." Nothing else can then use that subnet, and since you are delegating this subnet to the App Service, the serviceName becomes Microsoft.Web/serverFarms.
With the two modules for the App Service (listing 9.11) and the virtual network (listing 9.12) in place, you can now write another template and use them. You can add the following configuration to the mainDeployment.bicep template (listing 9.9) used in the integration testing section (End-2-end-testing/mainDeployment.bicep).
Listing 9.13 A Bicep template using modules to create the infrastructure
module frontend 'appservice.bicep' = {
  name: 'frontend'
  scope: rg1
  dependsOn: [
    peering1
  ]
  params: {
    appName: 'bicepfrontend'
    subnetResourceId: vnet1.outputs.subnetId
    restrictAccess: false
  }
}

module api 'appservice.bicep' = {
  name: 'api'
  scope: rg2
  dependsOn: [
    peering2
  ]
  params: {
    appName: 'bicepapi'
    subnetResourceId: vnet2.outputs.subnetId
    restrictAccess: true
    restrictAccessFromSubnetId: vnet1.outputs.subnetId
  }
}
All that remains to be done is to call the appservice module twice: once for the frontend and once for the API. The difference between the two is in the parameter values: you specify two different virtual networks to integrate with, and only the API is set to restrict access to the UI's subnet.
With the infrastructure in place, it’s time to deploy your application and write some tests. This is a simple UI that makes a call to the API. You can create the application by following along, or you can find it in the repository of this book (http://mng.bz/7yGV).
To create the application and deploy it, you will need to have .NET Core installed. If you don't have it already, you can download it from Microsoft: https://dotnet.microsoft.com/download.
To create the app, run dotnet new web in a terminal window. Then open Startup.cs and change it to look like the following version (End-2-end-testing/MinimalFrontend/startup.cs).
Listing 9.14 The StartUp.cs file of the sample UI application
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace testweb
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddHttpClient();
        }

        public void Configure(IApplicationBuilder app,
            IWebHostEnvironment env,
            IHttpClientFactory clientFactory)     ❶
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/", async context =>     ❷
                {
                    var request = new HttpRequestMessage(HttpMethod.Get,
                        "https://bicepapi.azurewebsites.net");
                    var client = clientFactory.CreateClient();
                    var response = await client.SendAsync(request);
                    if (!response.IsSuccessStatusCode)
                    {
                        throw new Exception("Could not reach the API");
                    }
                    await context.Response.WriteAsync("API Reached!");
                });
            });
        }
    }
}
❶ The HttpClientFactory was added using dependency injection.
❷ A route that calls the API is added.
In the preceding listing, an IHttpClientFactory is injected into the Configure() method. That factory is used in the app.UseEndpoints definition to create a client that calls the API. If the API is reachable and returns an HTTP 200 status code, the UI returns a successful result. If the API is not reachable, the networking configuration was incorrect, and the UI throws an error.
The last thing left to do is to write the actual tests as follows (End-2-end-testing/end2endtesting.tests.ps1).
Listing 9.15 A Pester test that runs the end-to-end scenario
# This test requires an authenticated session;
# use Connect-AzAccount to login.

BeforeAll {     ❶
  New-AzDeployment -Location "WestEurope" -TemplateFile "mainDeployment.bicep"

  dotnet publish MinimalFrontend/testweb.csproj --configuration Release -o ./MinimalFrontend/myapp
  Compress-Archive -Path ./MinimalFrontend/myapp/* `
    -DestinationPath ./MinimalFrontend/myapp.zip -Force
  Publish-AzWebApp -ResourceGroupName rg-firstvnet -Name bicepfrontend `
    -ArchivePath $PSScriptRoot/MinimalFrontend/myapp.zip -Force
}

Describe "Deployment Validation" {
  Context "End 2 end test" {
    It "Frontend should be available and respond with 200" {
      $Result = try {
        Invoke-WebRequest -Uri "https://bicepfrontend.azurewebsites.net/" -Method GET
      } catch {
        $_.Exception.Response
      }
      $statusCodeInt = [int]$Result.StatusCode
      $statusCodeInt | Should -be 200     ❷
    }
    It "API should be locked and respond with 403" {
      $Result = try {
        Invoke-WebRequest -Uri "https://bicepapi.azurewebsites.net/" -Method GET
      } catch {
        $_.Exception.Response
      }
      $statusCodeInt = [int]$Result.StatusCode
      $statusCodeInt | Should -be 403     ❸
    }
  }
}

AfterAll {
  Remove-AzResourceGroup -Name "rg-firstvnet" -Force | Out-Null
  Remove-AzResourceGroup -Name "rg-secondvnet" -Force | Out-Null
}
❶ The BeforeAll block sets up the environment.
❷ This test verifies that the UI works properly.
❸ This test verifies that the API is not publicly accessible.
The BeforeAll block is again used to set up the environment. First, the template is deployed. After that, the simple UI application is published using dotnet publish. That command builds the application and generates the files you need to run it on the Azure App Service. Next, those files are combined into an archive using the PowerShell Compress-Archive cmdlet. The final step in BeforeAll is to publish the app to Azure using the Publish-AzWebApp PowerShell cmdlet, specifying the App Service and resource group to which you want to deploy the archive.
Next come the two tests. The first one reaches out to the UI and expects an HTTP 200 status code to be returned. As stated earlier, the UI only does that when it can reach the API; it otherwise throws an error and returns an HTTP 500 status code. The second test verifies that the API was successfully locked down: when calling the API directly, you expect an HTTP 403 status code because of the network restrictions you've put in place.
The AfterAll section then removes the resources. In the real world, you might decide to skip that step.
You have now seen that writing an end-to-end test isn't easy. It involves a lot of moving parts, it's easy to get wrong, and it takes a lot of time to run. You therefore need to be careful not to write too many of these tests. Always try to satisfy your test requirements first with the methods lower on the pyramid. Now that you have seen various ways you can use Pester to create tests, let's see how you can run them in an Azure DevOps pipeline to verify your changes automatically.
By running the tests in an Azure DevOps pipeline, you can easily verify your changes before deploying the template to production. This will help you get as close to guaranteeing a successful deployment as possible. To run the tests from a pipeline, you’ll need to write a pipeline definition.
There are two options for running Pester tests within an Azure DevOps pipeline: using a task from the Marketplace or using a custom PowerShell script. A task from the Marketplace is the easiest option, but such tasks don't support tests that need to connect to Azure. For example, the unit test shown earlier in this chapter won't work with this approach.
That leaves the other option, which is to use a custom PowerShell script. Within the pipeline, that script will need to run using an Azure PowerShell task, which will provide you with an Azure context and connection to work with. Let’s first look at the PowerShell script (RunPester.ps1).
Listing 9.16 A PowerShell helper script to run Pester tests in Azure DevOps
param (
    [Parameter(Mandatory = $true)]
    [string] $ModulePath,

    [Parameter(Mandatory = $false)]
    [string] $ResultsPath,

    [Parameter(Mandatory = $false)]
    [string] $Tag = "UnitTests"
)

# Install Bicep     ❶
curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
chmod +x ./bicep
sudo mv ./bicep /usr/local/bin/bicep

# Install Pester if needed     ❷
$pesterModule = Get-Module -Name Pester -ListAvailable |
    Where-Object { $_.Version -like '5.*' }
if (!$pesterModule) {
    try {
        Install-Module -Name Pester -Scope CurrentUser `
            -Force -SkipPublisherCheck -MinimumVersion "5.0"
        $pesterModule = Get-Module -Name Pester -ListAvailable |
            Where-Object { $_.Version -like '5.*' }
    }
    catch {
        Write-Error "Failed to install the Pester module."
    }
}

Write-Host "Pester version: $($pesterModule.Version)"
$pesterModule | Import-Module

if (!(Test-Path -Path $ResultsPath)) {
    New-Item -Path $ResultsPath -ItemType Directory -Force | Out-Null
}

Write-Host "Finding tests in $ModulePath"
$tests = (Get-ChildItem -Path $ModulePath "*tests.ps1" -Recurse).FullName
$container = New-PesterContainer -Path $tests

$configuration = [PesterConfiguration]@{     ❸
    Run = @{
        Container = $container
    }
    Output = @{
        Verbosity = 'Detailed'
    }
    Filter = @{
        Tag = $Tag
    }
    TestResult = @{     ❹
        Enabled = $true
        OutputFormat = "NUnitXml"
        OutputPath = "$($ResultsPath)\Test-Pester.xml"
    }
}

Invoke-Pester -Configuration $configuration
❶ Install the Bicep CLI, which is not available by default.
❷ Install Pester 5 if it is missing or outdated.
❸ Create a Pester configuration object.
❹ Configure how to report test results.
The preceding script consists of three parts. First, it installs Bicep, which is not available by default within the Azure PowerShell task. Second, Pester might not be installed, or an old version might be, so Pester is installed or upgraded. The last section of the script then runs the tests using the Invoke-Pester cmdlet. In contrast to previous invocations of Invoke-Pester in this chapter, here it is run by passing a configuration object. The main reason is that you want to collect the test results in a format that Azure DevOps understands, so you can upload the results and make them available in the Azure DevOps UI on each run. That requires a bit of extra configuration that is only available in this more advanced configuration object, not as switches on the Invoke-Pester command. The PowerShell script is then run within the pipeline, as shown in the following listing (pester-pipeline.yaml).
Listing 9.17 An Azure DevOps pipeline to run Pester tests
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: AzurePowerShell@5     ❶
  displayName: 'Run Pester Unit tests'
  inputs:
    azureSubscription: <your service connection>
    ScriptType: 'FilePath'
    ScriptPath: '$(System.DefaultWorkingDirectory)/Chapter_09/Pester/RunPester.ps1'
    ScriptArguments: '-ModulePath "$(System.DefaultWorkingDirectory)/Chapter_09/Pester" -ResultsPath "$(System.DefaultWorkingDirectory)\Chapter_09\Pester\Publish"'
    azurePowerShellVersion: 'LatestVersion'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Chapter_09/Pester'
- task: PublishTestResults@2     ❷
  displayName: 'Publish Pester Tests'
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**/Test-Pester.xml'
    pathToSources: '$(System.DefaultWorkingDirectory)\Chapter_09\Pester\Publish'
    mergeTestResults: true
    failTaskOnFailedTests: true
❶ Run the PowerShell script within an Azure context.
❷ Upload the test results to Azure DevOps.
This pipeline only has two steps. The first step runs the PowerShell script within an Azure context so your tests can also create and access resources in Azure. To do this, it needs a reference to a service connection with enough permissions. The second task uploads the test results that were generated in the first step to Azure DevOps. When a test does not succeed, the pipeline fails, and you can easily view the results, as shown in figure 9.14.
Figure 9.14 shows two failing tests. You can click on each of them to get more details on the failure. Besides failing tests, you can also find trends on your test runs to see how things are going over time.
As you have seen in this section, it’s very useful and fairly easy to run your Pester tests in Azure DevOps. That will help you feel more confident about the changes you’ve made and the likelihood of a successful deployment into your production environment.
There are different types of tests: unit tests, integration tests, and end-to-end tests. Each has its own pros and cons.
A unit test can verify the behavior of a single resource in isolation. It is the quickest test to run and is therefore favored over integration or end-to-end tests.
An integration test can be written to verify that two pieces of infrastructure work together correctly.
An end-to-end test can help you verify that an entire set of infrastructure, possibly with an application deployed on it, works together. This type of test takes a long time to run and is much more error prone than the other methods.
The ARM TTK tooling can be used to evaluate templates for best practices. It can be run using an Azure DevOps pipeline to give you feedback on every change.
Pester can be used to write custom tests. These tests can be structured using the Describe, Context, and It keywords. The BeforeAll and AfterAll blocks can be used to create and destroy resources before and after running tests.
Pester tests can be run in an Azure DevOps pipeline, and the results can be viewed in the Azure DevOps portal.