Imagine working at a large company with many teams, and all of them are running applications on Azure infrastructure. Most of these applications run on similar infrastructure, like virtual machines, Kubernetes, or App Service plans. No one will want to write the same ARM or Bicep templates over and over to create that infrastructure. And besides the repetitive work, most companies have standard approaches to this infrastructure that they will want to impose on every team. In this chapter, you’ll learn how to store ARM or Bicep templates in Azure and make them available for others to reuse.
Let’s say you work at Doush Bank, a bank serving millions of customers all over Europe. A bank like this has hundreds of applications maintained by hundreds of development and operations teams.
While every application is different, their architectures and the platforms that they run on are not that different in the end. Almost all of them run either on virtual machines, Azure App Service, or Azure Kubernetes Service. Doush Bank has strict guidelines on how these resources should be created and configured to comply with all the regulations that apply to banks in Europe. This poses two problems for Doush Bank that they want to solve:
How can they use economies of scale and supply something like default templates to help their teams create new infrastructure more rapidly?
How can they ensure that all infrastructure created by all the different teams stays compliant with the ever-changing guidelines?
When adopting Infrastructure as Code, a company like this will start looking for a way to reuse ARM or Bicep templates, or maybe snippets of templates, to solve problems like these.
One approach is to package and distribute templates in the same way that application libraries are distributed, using a binary management system or artifact store like Azure Artifacts or GitHub Packages. This approach is familiar to those with a development background and it’s sometimes adopted due to the lack of an alternative approach.
An alternative approach first became available in 2020, and went into general availability in 2021, with the arrival of Azure template specs. A template spec is an Azure resource that is created like any other. Figure 10.1 shows the relationship between a template, a template spec, and the Azure resources defined in the template.
This may sound like it’s just IaC, but there is a difference here. Instead of the resources in the template being created, the template itself becomes a resource. Later, a resource deployment is done, where the resources defined in the template are created.
This chapter discusses setting up template specs in various ways, versioning them, and building a reusable components repository. Next, you’ll learn how to deploy template specs in various ways, and when you should and should not consider using template specs. While template specs are the go-to option in many cases, an understanding of alternative approaches is beneficial as well, so we’ll also discuss using Azure Artifacts feeds or GitHub Packages for storing templates, and the chapter will conclude with some design considerations. First, let’s create a template spec.
Let’s continue with the Doush Bank example and assume you are part of the team that has to create a library of preapproved building blocks that are compliant by design.
In figure 10.2 you can see the central team and how it produces the template specs that other teams consume. Within this team, one of the first building blocks that must be created is the compliant Windows 2019 virtual machine. This building block should provide a minimal VM definition that includes all mandatory configuration. It might look something like the following listing (10-01.bicep).
Listing 10.1 Azure Bicep for a compliant VM
param virtualMachineName string

@allowed([
  'Standard_B2s'
  'Standard_D4ds_v4'
])
param virtualMachineSize string

param virtualMachineUsername string

@secure()
param virtualMachinePassword string

param virtualNetworkName string
param subnetName string
param virtualMachineIpAddress string

output compliantWindows2019VirtualMachineId string = compliantWindows2019VirtualMachine.id    ❶

resource compliantWindows2019VirtualMachine 'Microsoft.Compute/virtualMachines@2020-12-01' = {    ❷
  name: virtualMachineName
  location: resourceGroup().location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    hardwareProfile: {
      vmSize: virtualMachineSize
    }
    storageProfile: {
      imageReference: {
        publisher: 'MicrosoftWindowsServer'
        offer: 'WindowsServer'
        sku: '2019-Datacenter'
        version: 'latest'
      }
      osDisk: {
        osType: 'Windows'
        name: virtualMachineName
        createOption: 'FromImage'
        caching: 'ReadWrite'
        managedDisk: {
          storageAccountType: 'StandardSSD_LRS'
        }
      }
    }
    osProfile: {
      computerName: virtualMachineName
      adminUsername: virtualMachineUsername
      adminPassword: virtualMachinePassword
      windowsConfiguration: {
        timeZone: 'Central Europe Standard Time'
      }
    }
    networkProfile: {
      networkInterfaces: [
        {
          id: compliantNetworkCard.id
        }
      ]
    }
  }
}

resource compliantNetworkCard 'Microsoft.Network/networkInterfaces@2020-06-01' = {    ❸
  name: virtualMachineName
  location: resourceGroup().location
  properties: {
    ipConfigurations: [
      {
        name: 'IPConfiguration-NICE-VM1'
        properties: {
          privateIPAllocationMethod: 'Static'
          privateIPAddress: virtualMachineIpAddress
          subnet: {
            id: resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetworkName, subnetName)
          }
        }
      }
    ]
  }
}

resource azureAADLogin 'Microsoft.Compute/virtualMachines/extensions@2015-06-15' = {    ❹
  name: '${virtualMachineName}/azureAADLogin'
  location: resourceGroup().location
  properties: {
    type: 'AADLoginForWindows'
    publisher: 'Microsoft.Azure.ActiveDirectory'
    typeHandlerVersion: '0.4'
    autoUpgradeMinorVersion: true
  }
}

resource AzurePolicyforWindows 'Microsoft.Compute/virtualMachines/extensions@2015-06-15' = {    ❹
  name: '${virtualMachineName}/AzurePolicyforWindows'
  location: resourceGroup().location
  properties: {
    type: 'ConfigurationforWindows'
    publisher: 'Microsoft.GuestConfiguration'
    typeHandlerVersion: '1.1'
    autoUpgradeMinorVersion: true
  }
}
❶ This template returns the resourceId of the VM.
❷ The definition of the compliant VM itself.
❸ Every VM needs at least one network card.
❹ These two extensions make the VM compliant with standards.
This building block contains the VM, the network interface it requires, and two mandatory VM extensions that have to be installed on every VM at the company. It also enforces some good practices, like keeping usernames and passwords secure and limiting deployments to a small set of precleared VM SKUs. This is the essence of template specs: besides permitting reuse, they allow you to define the standards and default configuration that everyone should use when working in a certain context, like a company or business unit. Note that we explicitly said “should use,” as it is not possible to enforce the use of templates. To enforce standards, you should look into using Azure Policy.
Instead of deploying this VM directly, a template spec can be created out of this template with the following Azure CLI commands:
az group create --location westeurope --name TemplateSpecs
az ts create --resource-group TemplateSpecs --location westeurope
➥ --name compliantWindows2019Vm --version "1.0"
➥ --template-file 10-01.bicep
Or, if you are using an older version of the CLI that does not automatically transpile Bicep to JSON, you can use these commands:
az group create --location westeurope --name TemplateSpecs
az bicep build --file 10-01.bicep
az ts create --resource-group TemplateSpecs --location westeurope
➥ --name compliantWindows2019Vm --version "1.0"
➥ --template-file 10-01.json
With such an older version, it is not possible to deploy Azure Bicep as a template spec directly, so it is necessary to manually transpile the Bicep template into an ARM template first. Either way, the template spec is created by invoking the ts create command and providing parameter values that describe where the spec should be located, the source file, the spec name, and the version. If the name of a template spec is not descriptive enough, you can add a longer description using the --description parameter.
After creating one or more template specs, you can retrieve the list of available specs using az ts list. This command returns a JSON array containing all the available template specs.
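For example, to list the specs in a specific resource group in a more readable format, you could run something like the following; the --output table flag works here just as it does for other Azure CLI commands:

az ts list --resource-group TemplateSpecs --output table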
To retrieve the details of one of the listed specs, you can use the following command:
az ts show --resource-group TemplateSpecs --name compliantWindows2019Vm
This returns a JSON object detailing the template spec created before. It will look something like the following listing.
Listing 10.2 A template spec object as stored in Azure
{ "description": "longer description, if needed", "displayName": null, "id": "...", "location": "westeurope", "metadata": null, "name": "compliantWindows2019Vm", "resourceGroup": "TemplateSpecs", "systemData": { "createdAt": "2021-05-30T17:35:11.805488+00:00", "createdBy": "henry@azurespecialist.nl", "createdByType": "User", "lastModifiedAt": "2021-05-30T17:35:12.185520+00:00", "lastModifiedBy": "henry@azurespecialist.nl", "lastModifiedByType": "User" }, "tags": {}, "type": "Microsoft.Resources/templateSpecs", "versions": [ ❶ "1.0" ] }
❶ Multiple versions of a template spec can be listed here.
One thing to note in this object is the property called versions, which is typed as an array. This hints that it is possible to store multiple versions of a template spec. You can verify this by running the following commands:
az ts create --resource-group TemplateSpecs --location westeurope
➥ --name compliantWindows2019Vm --version "2.0"
➥ --template-file 10-01.json
az ts show --resource-group TemplateSpecs --name compliantWindows2019Vm
The object that is returned this time contains not only version 1.0 but also version 2.0. The great thing about versions is that they allow you to publish new, improved template specs, while existing users of older versions can still trust that their versions will continue working. That is an important property in a supplier-consumer relationship for shared components.
Currently it is not possible to mark templates or template versions as deprecated or obsolete. It is possible, though, to remove a template spec version using the Azure CLI:
az ts delete --resource-group TemplateSpecs --name compliantWindows2019Vm ➥ --version "1.0"
To remove a whole template spec, omit the --version parameter.
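For example, the following command removes the template spec created earlier, including all of its remaining versions:

az ts delete --resource-group TemplateSpecs --name compliantWindows2019Vm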
When building Bicep templates and transpiling them before deploying them as a template spec, Bicep modules are automatically included in the template. You do not have to worry about making your modules available as part of the template spec.
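For example, if the two VM extensions from listing 10.1 were extracted into a hypothetical module file called vm-extensions.bicep, referencing it like this would be enough; the module’s contents are compiled into the resulting ARM template when the spec is created:

module vmExtensions './vm-extensions.bicep' = {
  name: 'vmExtensionsDeployment'
  params: {
    virtualMachineName: virtualMachineName
  }
}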
But if you are still working with ARM templates rather than Bicep templates, you may be wondering about linked templates. Fortunately, you can use linked templates in combination with template specs. Assuming you have an ARM template available that contains the definitions for the VM extensions we discussed before, you can write an ARM template like the following listing (10-03.json). Here the relativePath property is used to reference an ARM file.
Listing 10.3 An ARM template that uses relativePath
{ "$schema": "https://schema.management.azure.com/schemas/ ➥ 2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "virtualMachineName": { "type": "string" }, ... }, "resources": [ { ... }, { "type": "Microsoft.Network/networkInterfaces", ... }, { "type": "Microsoft.Resources/deployments", ❶ "apiVersion": "2020-10-01", "name": "[concat('vmExtensions', parameters('virtualMachineName'))]", "properties": { "mode": "Incremental", "templateLink": { "relativePath": "10-03-vmExtensions.json" ❷ }, "parameters": { "virtualMachineName": { "value": "[parameters('virtualMachineName')]" } } } } ], "outputs": { "compliantWindows2019VirtualMachineId": { "type": "string", "value": "[resourceId('Microsoft.Compute/virtualMachines', ➥ parameters('virtualMachineName'))]" } } }
❶ At this point, the deployment of a linked template starts.
❷ The file can be a file on the local filesystem.
In the preceding listing, you can see roughly the same ARM template that you would get from transpiling the first Bicep template in this chapter (listing 10.1), except that the VM extensions have been left out. Instead, a new deployment is included that references another file on a relative path on the local filesystem, using the relativePath property. When you deploy this template as a template spec, that linked template is automatically discovered and included as part of the spec.
You can verify this by running the following commands, which create a third and final version of the template spec for compliant VMs and then show the details of that version:
az ts create --resource-group TemplateSpecs --location westeurope
➥ --name compliantWindows2019Vm --version "3.0"
➥ --template-file 10-03.json
az ts show --resource-group TemplateSpecs --name compliantWindows2019Vm
➥ --version "3.0"
Here you can see a new parameter (--version) in the ts show command. With this parameter added, the command shows all the details of a specific version of the template spec, including the actual template and all linked templates. In this case, you would get an object that has the following shape:
{ "description": null, "id": "...", "linkedTemplates": [ { // The template from 10-03-vmExtensions.json } ], "location": "westeurope", "mainTemplate": { // The template from 10-03.json }, "metadata": null, "name": "3.0", "resourceGroup": "TemplateSpecs", "systemData": { ... }, "tags": {}, "type": "Microsoft.Resources/templateSpecs/versions", "uiFormDefinition": null }
Here you can verify that both the main template and the linked template are available as part of the template spec. For your reference, the entire file of 200 lines is included in the GitHub repository under the name 10-03-deployed-as-spec.json.
Let’s circle back to your role in the central team responsible for creating a whole series of template specs that define preapproved, compliant resources that contain a good enough default configuration for Doush Bank. One of the things your team wants to do is use DevOps practices like IaC and CI/CD for building template specs.
To do that, you might want to investigate how you can create template specs using IaC. Unfortunately, this is not as straightforward as it might seem. Under the hood, a template spec is stored as a multitude of objects. Every version is its own Azure resource, and all linked templates are stored as separate resources (called artifacts) as well.
The current recommendation from Microsoft is to create template specs using the Azure portal, Azure PowerShell, or the Azure CLI. This advice also applies when you are creating template specs from a CI/CD pipeline like Azure DevOps or GitHub Actions. See Angel Perez’s article “ARM Template Specs is now Public Preview!” (http://mng.bz/J2xz) for more information.
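To sketch what that can look like in practice, the following Azure DevOps pipeline step wraps the az ts create command you saw earlier in an AzureCLI@2 task. The service connection name is a placeholder, and using the build number as the version is an assumption, not a requirement:

steps:
- task: AzureCLI@2
  displayName: 'Publish template spec'
  inputs:
    azureSubscription: 'myArmServiceConnection'   # placeholder service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az ts create --resource-group TemplateSpecs --location westeurope \
        --name compliantWindows2019Vm --version "$(Build.BuildNumber)" \
        --template-file 10-01.bicep --yes

Now that you have deployed a few versions of the template spec for a compliant VM with some default configuration, let’s switch sides and learn how to consume a template spec.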
Let’s now assume that you are in one of the teams required to follow the company standards, so you want to deploy the predefined compliant VM stored as a template spec. To make this possible, you must first find the complete ID of the template spec you want to deploy. You can find this ID by executing the az ts show command again, which you saw in the previous section:
az ts show --resource-group TemplateSpecs --name compliantWindows2019Vm ➥ --version "3.0"
This will show you the complete Azure resource for this template spec, including its id property, which will look something like this:
/subscriptions/{azure-subscription-id}/resourceGroups/TemplateSpecs ➥ /providers/Microsoft.Resources/templateSpecs ➥ /compliantWindows2019Vm/versions/3.0
With this id in hand, you can deploy the resources declared in the template spec using the following commands:
az group create --location westeurope --resource-group TemplatedResources
az deployment group create --resource-group TemplatedResources
➥ --template-spec /subscriptions/{azure-subscription-id}
➥ /resourceGroups/TemplateSpecs/providers/Microsoft.Resources
➥ /templateSpecs/compliantWindows2019Vm/versions/3.0
Looking closely, you’ll see that this is the same az deployment group create command you have used before to deploy a template, but this time another parameter set is used: instead of --template-file, the --template-spec parameter is used to point to the template spec you want to deploy. You’ll also see in the example that this reference includes a specific version.
If you execute this command yourself, don’t forget to specify the correct parameters for putting this VM into a virtual network. These parameters were declared as part of the template spec you created in listing 10.3. If you don’t have any Azure virtual networks, or you don’t want to create one for this example, just pass any random value. This will fail the deployment, but you’ll still see how to deploy a template spec. In the following subsection, you’ll deploy a complete architecture with a virtual network and two VMs to make up for this.
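Passing those parameters inline could look something like the following sketch; the network names and password shown here are placeholders for values from your own subscription:

az deployment group create --resource-group TemplatedResources
➥ --template-spec /subscriptions/{azure-subscription-id}
➥ /resourceGroups/TemplateSpecs/providers/Microsoft.Resources
➥ /templateSpecs/compliantWindows2019Vm/versions/3.0
➥ --parameters virtualMachineName=VM1 virtualMachineSize=Standard_B2s
➥ virtualMachineUsername=bicepadmin virtualMachinePassword=<password>
➥ virtualNetworkName=<your-vnet> subnetName=<your-subnet>
➥ virtualMachineIpAddress=10.0.0.10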
If you prefer using Azure PowerShell instead of the Azure CLI, you can accomplish the same result using these commands:
New-AzResourceGroupDeployment -ResourceGroupName TemplatedResources
➥ -TemplateSpecId /subscriptions/{azure-subscription-id}/resourceGroups
➥ /TemplateSpecs/providers/Microsoft.Resources/templateSpecs
➥ /compliantWindows2019Vm/versions/3.0
Finally, as you may have guessed, it is also possible to deploy template specs from the portal, as shown in figure 10.3. First, navigate to the list of all resource groups (1), optionally filter the list down if it is very long (2), and then open the resource group containing your template spec (3). Once you have found the spec, open its details by clicking the name (4).
This will open a new view in the portal, with a Deploy button at the top. Clicking this button will open up a third view, as shown in figure 10.4. Once you have filled in all the values, click Review + Create to start the deployment. No matter which approach you choose—Azure CLI, Azure PowerShell, or the Azure portal—you will end up with the same result.
After working with template specs like this, you may find that it feels a bit ad hoc to manage infrastructure this way. Maybe it doesn’t feel right to have your specs defined using IaC and to then deploy them using imperative commands, especially when you want to combine multiple specs into a single infrastructure. Don’t fear, it’s also possible to deploy a template spec from within an ARM or Bicep template.
To complete our discussion of how template specs work, let’s explore how you can reference or deploy specs from a template. To see how this can work in practice, let’s take the example from the previous section and expand on it a little bit. Instead of just deploying a single VM, suppose you are tasked with deploying a more elaborate infrastructure. Something like what’s shown in figure 10.5.
In figure 10.5, you can see an architecture that spans a virtual network with two virtual machines and one recovery services vault for backing those VMs up. The requirement for backups makes the deployment more interesting, because backups are configured using a specific resource type, the protected item, which requires both the ID of the virtual machine and the ID of the recovery services vault. The latter ID comes from your own deployment; the IDs of the virtual machines have to come from the template specs. In this example, you’ll see how to deploy template specs and how to use the information returned by the template spec further down the line.
To help with this deployment, the central IaC team has made the spec for a compliant virtual machine available. To deploy the complete infrastructure, a Bicep template is used. In that template, the following resources have to be declared: a virtual network, two virtual machines deployed from the template spec, a recovery services vault, and a protected item for each VM.
Let’s start by defining a virtual network (10-04.bicep).
Listing 10.4 Azure virtual network using Bicep
var virtualNetworkName = 'workloadvnet'
var subnetName = 'workloadsubnet'

resource virtualNetwork 'Microsoft.Network/virtualNetworks@2020-11-01' = {
  name: virtualNetworkName
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/24'
      ]
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: '10.0.0.0/24'
        }
      }
    ]
  }
}
This template contains the Bicep definition for a virtual network, using an address space containing 256 IP addresses from 10.0.0.0 to 10.0.0.255. The network contains a single subnet with the same address range, called workloadsubnet, for holding both VMs.
As both VMs are created from the template spec for a compliant VM, you first need to build a variable of type array that contains all the parameters needed by the template spec that differ between the two virtual machines.
@secure()
param vmPassword string

var virtualMachineDetails = [
  {
    ipAddress: '10.0.0.10'
    name: 'VM1'
    sku: 'Standard_B2s'
  }
  {
    ipAddress: '10.0.0.11'
    name: 'VM2'
    sku: 'Standard_D4ds_v4'
  }
]
This array contains two objects, each holding the variable values for one of the VMs. By now, you should be able to read and understand this Bicep code.
The following part of the template uses these objects to deploy the actual VMs, like this:
resource virtualMachines 'Microsoft.Resources/deployments@2021-01-01' = [for vm in virtualMachineDetails: {
  name: vm.name
  properties: {
    mode: 'Incremental'
    templateLink: {
      id: '/subscriptions/{azure-subscription-id}/resourceGroups/TemplateSpecs/providers/Microsoft.Resources/templateSpecs/compliantWindows2019Vm/versions/3.0'    ❶
    }
    parameters: {
      virtualNetworkName: {
        value: virtualNetworkName
      }
      subnetName: {
        value: subnetName
      }
      virtualMachineIpAddress: {
        value: vm.ipAddress
      }
      virtualMachineName: {
        value: vm.name
      }
      virtualMachineSize: {
        value: vm.sku
      }
      virtualMachineUsername: {
        value: 'bicepadmin'
      }
      virtualMachinePassword: {
        value: vmPassword
      }
    }
  }
}]
❶ A template spec is deployed as the linked template.
Combining these three snippets gives you the complete Azure Bicep file (10-04.bicep in the accompanying Git repository) for deploying a virtual network and two VMs. In this third snippet, the template spec is referenced in a way that is very similar to how it works from the Azure CLI or Azure PowerShell.
With the virtual network and VMs created, it is time to add another resource to your infrastructure, the recovery services vault:
var recoveryServicesVaultName = 'backupvault'

resource recoveryServicesVault 'Microsoft.RecoveryServices/vaults@2021-01-01' = {
  name: recoveryServicesVaultName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
  }
  properties: {}    ❶
}
❶ Adding a property called properties is mandatory.
The recovery services vault is a resource that requires only a name and sku to be configured. Other optional properties can be left out, but the properties property cannot, which is why it is set to a seemingly unnecessary empty object here.
The vault is where the backups are stored, and it is configured using a protected item. A protected item is a resource that enables the backup of a resource. It contains references to the resource to be backed up and the backup policy that governs when backups are taken and how long they are retained.
The final snippet is complex, with long expressions in it, so let’s slowly prepare for that. If you are not familiar with the protectedItems resource for VMs, this pseudo-specification may help:
resource protectedItems 'Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems@2016-06-01' = {
  name: complex name
  properties: {
    protectedItemType: 'Microsoft.Compute/virtualMachines'
    policyId: resource id of the backup policy
    sourceResourceId: resource id of the protected item
  }
}
Besides the complex, multipart resource type, a similarly complex name is created out of five mandatory parts: the name of the recovery services vault, the name of the containing resource group (twice), and the name of the protected virtual machine (also twice). The protectedItemType property contains a fixed string for virtual machines, and the other two properties reference the backup policy to use and the resource to back up. The policy specifies how often a backup should be taken and how long backups should be retained. The resource to be protected is identified by the ID of a virtual machine.
If we update the previous pseudocode with the proper expressions to add it to our template, we’ll get this:
resource protectedItems 'Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems@2016-06-01' = [for (vm, i) in virtualMachineDetails: {    ❶
  name: concat(recoveryServicesVaultName,
➥ '/Azure/iaasvmcontainer;iaasvmcontainerv2;vnetdemo;',
➥ virtualMachineDetails[i].name,
➥ '/vm;iaasvmcontainerv2;vnetdemo;', virtualMachineDetails[i].name)
  properties: {
    protectedItemType: 'Microsoft.Compute/virtualMachines'
    policyId: resourceId('Microsoft.RecoveryServices/vaults/backupPolicies', recoveryServicesVaultName, 'DefaultPolicy')    ❷
    sourceResourceId: reference(virtualMachines[i].name).outputs.compliantWindows2019VirtualMachineId.value    ❸
  }
}]
❶ For each entry in the list of details, a protected item is defined.
❷ The policyId references another resource, the backup policy.
❸ The sourceResourceId references the resource to back up.
Comparing this to the previous snippet, you can see how the different values are calculated. The first thing to note about this snippet is that it contains a for loop over the list of virtualMachineDetails created earlier. Within the loop, you can trace all the parts that make up the name, where the name of the resource group is added twice, along with the name of the virtual machine at position i.
You can also see how the different resourceIds are created. The backup policy ID is a fixed one, referencing the built-in policy called DefaultPolicy.
The final property is the resource ID of the protected item: sourceResourceId. This is the ID of the VM created by the template spec, and it is the reason the spec has an output parameter: so you can pick up that output here. The reference() function is used to translate the name of the linked template spec deployment into a resource object. Once the object is created, the outputs property navigates to the value for compliantWindows2019VirtualMachineId, which is the name of the output parameter.
When you’re trying to understand this, remember that the complete template of this section is available on GitHub under the name 10-04.bicep, where it is much easier to read.
Adding the protected items completes this example. You have built a complete infrastructure with one virtual network, two VMs created from a template spec, and a recovery services vault in which backups are stored for both virtual machines.
Once you start building deployments like the one in the previous section, you may find yourself in a bit of a pickle with versions after some time. Having different versions of the same template spec coexisting can be a double-edged sword.
Multiple versions give you, the consumer of a spec, the guarantee that older versions are still available even when the maintainer of the spec starts creating newer versions. (That’s assuming they don’t delete older versions unexpectedly.) Still, there is currently no mechanism for getting warnings from Azure when you aren’t using the latest version of a spec. Nor is there a way to quickly check if any of your recent deployments are using outdated specs.
Consequently, you’ll have to keep tabs on the maintainers of the specs you use to ensure that you do not miss service updates. And, of course, the opposite is also true: if you are the maintainer of one or more template specs, do not forget to notify your users when you release new versions, so that they can update their references as well.
As a maintainer, one thing you should be cautious about is removing older spec versions. There is, as of now, no Azure-provided way of finding out if scripts, templates, or users out there are still referencing an older version. If you delete a version, you may break the next release of a downstream team, unless you put your own process in place to prevent this.
These risks around template specs may be a reason for you to explore other options. Let’s take a look at using a Bicep registry for distributing reusable ARM and Bicep templates.
If your team and organization have fully adopted Bicep and are no longer working with ARM templates in JSON, you can consider using the Bicep registry. A Bicep registry is an Azure Container Registry (ACR) that is used for storing Bicep modules instead of Docker images. You can use these modules when composing a template by referencing their online location instead of a location on the filesystem.
One of the advantages of this registry is that any module you reference in the registry will be downloaded and inserted into the JSON template that results from the transpilation. This means that even if the contents of the module change, or are removed, between deployments, you can still transpile once and use that same outcome multiple times.
To work with a Bicep registry, you will first have to create a new ACR, like this:
az group create --resource-group BicepRegistry --location westeurope
az acr create --resource-group BicepRegistry
➥ --name BicepRegistryDemoUniqueName --sku Standard
The first line creates a new resource group for the Bicep registry, as you have seen before. The second command creates an ACR instance. The name of the instance needs to be unique, so you will have to choose your own name.
Note If you want to learn more about Azure Container Registry, you can start with Microsoft’s “Introduction to private Docker container registries in Azure” article (http://mng.bz/woRP).
Once the ACR is in place, it is time to start pushing and using modules. As you have already created a reusable module (10-01.bicep, in listing 10.1), you can reuse that and publish it as a module to your registry with the following command:
az bicep publish --file 10-01.bicep --target
➥ br:BicepRegistryDemoUniqueName.azurecr.io/bicep/modules/vm:v1
You start the target with br:, which indicates that you are pushing to a Bicep registry. The path used (/bicep/modules) is not mandatory, but it is a convention used throughout the Microsoft docs and adopted by many users. This makes it well known and can help distinguish between Bicep templates and other contents in the registry.
The part after the colon (v1) is used to identify the version of the template; in the context of ACR, it’s often called the tag. If you want to add a newer version when republishing the module, it is wise to increment this to v2, then v3, and so on.
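For example, republishing the module as a second version could look like this:

az bicep publish --file 10-01.bicep --target
➥ br:BicepRegistryDemoUniqueName.azurecr.io/bicep/modules/vm:v2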
With this module in place, you can now write another Bicep file to use this remote module as part of the deployment (10-05.bicep).
Listing 10.5 Using a module from a Bicep registry
var subnetName = 'subnetName'

resource virtualNetwork 'Microsoft.Network/virtualNetworks@2020-11-01' = {    ❶
  name: 'exampleVirtualNetwork'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/24'
      ]
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: '10.0.0.0/24'
        }
      }
    ]
  }
}

module vm1Deployment 'br:BicepRegistryDemoUniqueName.azurecr.io/bicep/modules/vm:v1' = {    ❷
  name: 'vm1Deployment'
  params: {
    subnetName: subnetName
    virtualMachineIpAddress: '10.0.0.1'
    virtualMachineName: 'vm1'
    virtualMachinePassword: 'dontputpasswordsinbooks'
    virtualMachineSize: 'Standard_B2s'
    virtualMachineUsername: 'bicepadmin'
    virtualNetworkName: virtualNetwork.name
  }
}
❶ First, a virtual network is created as a prerequisite for the VM.
❷ Referencing a module that is hosted in an ACR Bicep registry
This template first creates a virtual network, which is a prerequisite for deploying a VM. Once this is in place, the module for creating a VM can be deployed by referencing it through the syntax br:<acr-name>.azurecr.io, followed by the path that you used before when publishing the file.
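If you want to see the inlining behavior described earlier for yourself, you could transpile this template and inspect the output; the resulting 10-05.json contains the full module contents rather than a reference to the registry:

az bicep build --file 10-05.bicep

As an alternative to template specs or the Bicep registry, you may also want to consider using a package manager.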
You may have reasons not to host your reusable templates in template specs. Or, if you are using multiple Azure tenants, template specs will not be a viable option. Some organizations also built a repository of reusable templates before template specs were available. Whatever the reason, package repositories are an alternative to template specs and the Bicep registry for sharing templates.
Package repositories are used extensively in software development for sharing libraries between software teams. In the .NET ecosystem, the NuGet package manager is used; in the Java ecosystem, Maven is common; JavaScript and TypeScript have npm, and so forth. Depending on the context you are working in, one of these options is likely already being used.
The packages that are created and used by package managers are stored in package repositories, like Azure Artifacts, NuGet, MyGet, NPM, GitHub Packages, JFrog Artifactory, and many more.
You’ll learn to work with Azure Artifacts in this example, a common choice when working with Azure DevOps. Azure DevOps supports multiple package types, and in this case the universal package type works best.
Before you can publish an ARM template to a repository, that repository needs to be created. When you’re working with Azure Artifacts, a repository is called an Artifacts feed.
To create a feed, perform the following steps (see figure 10.6):
Log in to your Azure DevOps account and open your working project.
Navigate to Azure Artifacts and choose to create a new feed.
Give the feed a name and optionally configure other options.
Once the feed is created, you can publish packages to it. Publishing packages can be done from an Azure pipeline, as you have seen before. For this example, assume there’s a file called 10-06.json that contains a simple ARM template for an Azure App Service. With that template in place, you can start working on an Azure pipeline that checks this file out of source control and publishes it as a universal package (10-07.yml).
Listing 10.6 A YAML pipeline for creating and publishing a package
steps:
- task: CopyFiles@2                                            ❶
  displayName: 'Copy template spec files to staging location'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '*.json'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: UniversalPackages@0                                    ❷
  displayName: 'Publish package'
  inputs:
    command: 'publish'
    publishDirectory: '$(Build.ArtifactStagingDirectory)'      ❸
    feedsToUsePublish: 'internal'                              ❹
    vstsFeedPublish: '<your-feed-id>'                          ❺
    vstsFeedPackagePublish: 'app-service-plan-template'        ❻
    versionOption: 'patch'                                     ❼
❶ Staging the files to publish
❷ Publish the staged files as a universal package.
❸ The directory that contains the files to publish
❹ This is an internal feed: an Azure Artifacts feed.
❺ ID of the feed to publish to
❻ The name of the package
❼ Automatically increment the patch part of the version number.
In this pipeline, two steps are defined. The first step, CopyFiles, is used to copy all the files that match *.json from the sources directory to a specific location outside the sources directory from which they can be packaged. In this example, all JSON files are copied, but in larger solutions you would limit this to only the folder containing your templates. The location you copy to is called the Artifacts staging directory, and it’s available using the variable Build.ArtifactStagingDirectory. Copying the files to a separate location ensures that only the desired files are packaged and not the complete contents of the sources directory.
After copying the files to the staging directory, that directory is published as a universal package. To make this possible, you have to specify the name of the package, a version increment strategy, and the ID that identifies your Artifacts feed. The feedsToUsePublish option is set to internal, which indicates that this is an Azure Artifacts feed and not an externally hosted NuGet feed.
The ID for the Artifacts feed is the trickiest to find. The easiest way is to store the file in source control with a dummy value, and then fill in the value when you create the pipeline, as shown in figure 10.7. First, click the Settings link (1) just before the definition of the second task. Next, in the Destination Feed drop-down list (2), select the correct feed. When you click Add (3) to update the pipeline definition, the correct ID will be inserted automatically.
After completing the settings in figure 10.7, click the Run button at the top right, and your package, containing an ARM template, will be created. You can verify this by navigating back to your Artifacts feed, where the welcome screen will have been replaced by something like figure 10.8.
If you are wondering whether rerunning the pipeline would add a new version of the same artifact, the answer is yes. The version in the screenshot is 0.0.1, and if the pipeline were to rerun, the new version would be 0.0.2. With this package in place, let’s switch gears again and see how to consume this package and use it from an Azure pipeline.
To deploy an ARM template that is packaged and uploaded to an Azure Artifacts feed, you can also use an Azure pipeline. Such a pipeline can look like the following listing (10-07.yml).
Listing 10.7 A YAML pipeline for downloading a package
steps:
- task: UniversalPackages@0                                    ❶
  displayName: Download ARM Template Package
  inputs:
    command: download
    feedsToUse: 'internal'
    vstsFeed: '<your-feed-id>'                                 ❷
    vstsFeedPackage: 'app-service-plan-template'
    vstsPackageVersion: '*'
    downloadDirectory: '$(Pipeline.Workspace)'
- task: AzureResourceManagerTemplateDeployment@3               ❸
  displayName: Deploy ARM Template
  inputs:
    azureResourceManagerConnection: 'myArmServiceConnection'
    deploymentScope: 'Resource Group'
    subscriptionId: '<your-subscription-id>'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'myResourceGroup'
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/10-06.json'
    overrideParameters: '-appServicePlanName myAppServicePlan'
❶ Download the package from the Artifacts feed.
❷ The ID of the feed to download the package from
❸ Deploy the downloaded artifact files.
If you look carefully at this pipeline definition, you’ll see that the first task is the inverse of the last task in the previous pipeline. As you are downloading from the same feed you have uploaded to before, you should use the same feed ID. Instead of uploading a package, it is now downloaded. The path it is downloaded to again uses a built-in variable to reference the current execution directory of the pipeline. The second task is similar to what you have seen before: it takes the downloaded ARM template and deploys it to Azure.
Now that you have seen how to use template specs, the Bicep registry, and packaging templates into a package repository, you have seen the most common private approaches. But if you ever find yourself in a situation where you do not want to use any of these, there is one more straightforward solution.
Any discussion of approaches to sharing ARM and Bicep templates would not be complete without the final option: hosting them in a public repository. Maybe there are no reasons to keep your templates hidden. This may sound ridiculous, especially from a business perspective, but why would you want to keep your templates for a virtual machine, storage account, or other resource hidden? All possible configurations are already available in the documentation, and maybe there is nothing special enough to hide.
Warning This suggestion to make your templates public may not resonate with everyone, and especially with security experts. Weigh your options and make informed choices.
One particular situation where this line of thought applies is open source projects. Just use the GitHub (or similar platform) link to a specific commit of any file in any publicly accessible Git repository, and you are good to go. On GitHub, you can open the history of any file, navigate to a specific version, and then grab the URL to its raw version.
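Deploying a template straight from such a URL can be done with the --template-uri parameter of the Azure CLI; the URL below is a hypothetical example of such a raw link:

az deployment group create --resource-group TemplatedResources
➥ --template-uri https://raw.githubusercontent.com/<your-org>/<your-repo>
➥ /<commit-hash>/templates/app-service-plan.json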
With all these options for reusing templates available, you might also want to consider how you want to write your reusable templates: not too big, but also not too small.
First, you’ll want to avoid making templates so specific that they fit only very narrow use cases, making them hard to reuse. For example, a template containing an App Service plan, three App Services, an Application Insights account, and a Cosmos DB database is probably too specific to be reused by anyone else.
You’ll also want to avoid making templates so small that they provide downstream users hardly any benefit over writing templates from scratch. For example, a template with only a storage account that disables public containers is not very valuable. Your users will probably spend more time reusing that template than they would setting that Boolean value themselves.
The sweet spot is somewhere in the middle: semi-finished products that are valuable building blocks. For example, a template with an App Service plan, a Function App Service, and a corresponding storage account would be a common structure that could be reused by other teams.
Another thing you should think about is your versioning strategy:
How will you deprecate versions you want to remove at some point?
When should you remove older versions, and how should you manage the downstream impact?
Besides these conceptual choices, you’ll also have to choose an approach, so let’s talk about that next.
You have learned about several tactics for sharing templates between teams, or more generally between producers and consumers. But with all these approaches available, how do you select one for your project?
Let’s get the obvious out of the way: if there are no objections to hosting your templates at publicly accessible URLs, such as on GitHub, you should probably do that. It’s the easiest, cheapest, and most straightforward approach.
If hosting at public URLs is not an option, you can choose between using template specs and a package manager. Let’s explore the pros and cons of both approaches.
Template specs are Azure’s recommended and built-in approach to sharing templates. For that reason, you should probably choose template specs over using a package manager if there are no compelling reasons to do otherwise.
The first pro of template specs is that a template spec is just an Azure resource, like any other. This means that Azure RBAC applies, so you can govern access the same way you do for your other Azure resources. There’s no extra hassle.
But also consider one of the downsides of Azure RBAC: it does not allow you to assign a role to “everyone in my organization.” This means that you’ll have to set up your own “everyone” group if you want organization-wide specs.
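For example, granting such a group read access to all specs in the TemplateSpecs resource group could look like the following, using the built-in Template Spec Reader role; the group object ID is a placeholder:

az role assignment create --assignee <group-object-id>
➥ --role "Template Spec Reader"
➥ --scope /subscriptions/{azure-subscription-id}/resourceGroups/TemplateSpecs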
Another pro of template specs over packages is that they allow for maximum control by the template specs writer. If the writer finds that a specific spec or version should no longer be available, they can remove it, and that’s the end of it. The consequence here is that this can have downstream impacts, and if it’s not communicated to consuming teams correctly, it can be frowned upon.
The considerations for using a Bicep registry are very similar to those surrounding template specs. One difference is that a Bicep registry is not backward compatible with ARM templates, which can be a con. As a pro, it enables a workflow where modules are pulled down to your local machine or the build server and transpiled along with your own IaC into a single template. When you are looking for repeatability in your deployments, creating a snapshot or immutable artifact in this way can be an important ingredient.
What if you choose to use a package manager instead? This approach can be a big pro if a package manager is already used in your team or organization. If you reuse that package manager, you’ll have solved a problem without introducing a new tool. In many organizations, that’s a big pro.
Another benefit that most package managers support is creating local copies or downstream repositories. A downstream repository is another package repository that is not owned by the team that creates the reusable templates but by the team that uses the reusable templates. Whenever they want to start reusing a new template or a new version, the first step is to copy the package from the producing team’s repository to their own downstream repository. This way, they ensure that all their dependencies are in their control and that packages stay available to them, as long as they do not remove the copies themselves.
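With Azure Artifacts, for example, making such a downstream copy could look like the following sketch, which downloads a package from the producing team’s feed and republishes it to your own. This assumes the Azure DevOps CLI extension is installed; the organization URLs and feed names are placeholders:

az artifacts universal download
➥ --organization https://dev.azure.com/<producing-org> --feed <their-feed>
➥ --name app-service-plan-template --version 0.0.1 --path .
az artifacts universal publish
➥ --organization https://dev.azure.com/<your-org> --feed <downstream-feed>
➥ --name app-service-plan-template --version 0.0.1 --path .
➥ --description "Downstream copy"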
Consequently, creators of reusable templates can no longer control which templates and versions their users are using. Once a template version with errors is released, they can only communicate an update, and they’ll have to rely on the consuming teams to stop using the faulty version.
Depending on the package manager used, one downside can be that fine-grained access control is more difficult or bespoke compared to Azure RBAC. On the other hand, it is often easier to allow everyone in the organization to read packages from a feed.
Template specs are written and versioned to enable others to reuse your templates.
Template specs can be deployed directly, or by referencing them from ARM and Bicep templates.
It is possible to build deep integrations with template specs in your ARM and Bicep templates by passing parameters and outputs back and forth.
As an alternative to template specs, you can use the Bicep registry. It is Bicep-specific, but it enables workflows that require local transpilation of templates before deployment.
Finally, you can use a package manager. Package managers are already in use in many organizations, which makes it unnecessary to introduce yet another tool. Package managers also help limit the downstream impact of removing template specs, as they allow consumers to create their own copies of templates.