If you are reading this, then you probably already know that Azure Resource Manager must be able to access linked ARM templates via a URI during the deployment of the master template. You cannot use local files, and the URI cannot require password-based authentication or HTTP authentication headers.
The Microsoft-suggested approach to working with linked templates is to copy them to a storage account and make them externally accessible with SAS tokens. Sounds simple enough! However, the official samples are somewhat inconsistent from my point of view: some parts are performed in ARM templates, some require PowerShell scripting, and the life cycle of the storage account for linked templates is not addressed at all.
So, how do you create a deployment solution for “private” linked ARM templates that is simple, consistent, easily repeatable, and fully automated with Azure DevOps pipelines?
Prerequisites
You will need four primary building blocks for your solution:
- a main or “master” ARM template for deploying the services you need from the corresponding linked templates;
- linked ARM templates for your specific Azure services or sub-parts of your infrastructure;
- an ARM template to create a storage account for deployment artifacts, i.e., linked templates and other needed files;
- an ARM template for the resource group you are going to deploy into.
That’s all. No PowerShell is required unless you need to perform pre- or post-deployment tasks that are relevant to your specific case.
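For illustration, the storage account template (the third building block) can be as small as the sketch below; the API version, SKU, and security properties shown here are one reasonable set of choices rather than a prescription:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "Name of the storage account that holds deployment artifacts"
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {
        "allowBlobPublicAccess": false,
        "minimumTlsVersion": "TLS1_2"
      }
    }
  ],
  "outputs": {
    "storageAccountName": {
      "type": "string",
      "value": "[parameters('storageAccountName')]"
    }
  }
}
```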
Deployment order
The whole deployment process will be performed according to the following order:
- Create an empty resource group for your deployment.
- Create a storage account in that resource group to store deployment artifacts.
- Copy the linked templates and any other needed deployment artifacts to one or more containers in that storage account, and capture the storage account URI and SAS token.
- Deploy your main ARM template that references linked templates in the storage account.
A sample Azure DevOps pipeline implementing this deployment might look like the following.
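The snippet below is a minimal sketch of such a pipeline; the service connection name, subscription ID, resource group name, storage account name, environment, and template paths are placeholders you would replace with your own values:

```yaml
trigger:
- main

variables:
  azureServiceConnection: 'my-azure-service-connection'   # placeholder service connection
  subscriptionId: '00000000-0000-0000-0000-000000000000'  # placeholder subscription ID
  resourceGroupName: 'rg-sample-deployment'
  location: 'westeurope'
  artifactsStorageAccount: 'stdeployartifacts'            # placeholder storage account name

stages:
- stage: Deploy
  displayName: Deploy infrastructure
  jobs:
  - deployment: DeployTemplates
    displayName: Deploy ARM templates
    environment: 'dev'                  # enables approvals, checks, and deployment history
    pool:
      vmImage: 'windows-latest'         # Azure File Copy runs on Windows agents only
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self              # deployment jobs do not check out sources by default

          # 1. Create an empty resource group (subscription-scoped deployment)
          - task: AzureResourceManagerTemplateDeployment@3
            displayName: 'Create resource group'
            inputs:
              deploymentScope: 'Subscription'
              azureResourceManagerConnection: '$(azureServiceConnection)'
              subscriptionId: '$(subscriptionId)'
              location: '$(location)'
              csmFile: 'templates/resource-group.json'
              overrideParameters: '-rgName $(resourceGroupName) -rgLocation $(location)'

          # 2. Create a storage account for deployment artifacts
          - task: AzureResourceManagerTemplateDeployment@3
            displayName: 'Create artifacts storage account'
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: '$(azureServiceConnection)'
              subscriptionId: '$(subscriptionId)'
              resourceGroupName: '$(resourceGroupName)'
              location: '$(location)'
              csmFile: 'templates/storage-account.json'
              overrideParameters: '-storageAccountName $(artifactsStorageAccount)'
              deploymentMode: 'Incremental'
              deploymentOutputs: 'storageDeploymentOutputs'

          # 3. Copy linked templates and capture the URI and SAS token as pipeline variables
          - task: AzureFileCopy@4
            displayName: 'Copy linked templates'
            inputs:
              SourcePath: 'templates/linked/*'
              azureSubscription: '$(azureServiceConnection)'
              Destination: 'AzureBlob'
              storage: '$(artifactsStorageAccount)'
              ContainerName: 'templates'
              sasTokenTimeOutInMinutes: '30'   # short-lived SAS, valid only for the deployment
              outputStorageUri: 'artifactsLocation'
              outputStorageContainerSasToken: 'artifactsLocationSasToken'

          # 4. Deploy the main template, overriding the artifacts location parameters on the fly
          - task: AzureResourceManagerTemplateDeployment@3
            displayName: 'Deploy main template'
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: '$(azureServiceConnection)'
              subscriptionId: '$(subscriptionId)'
              resourceGroupName: '$(resourceGroupName)'
              location: '$(location)'
              csmFile: 'templates/main.json'
              overrideParameters: '-_artifactsLocation $(artifactsLocation) -_artifactsLocationSasToken "$(artifactsLocationSasToken)"'
              deploymentMode: 'Incremental'
```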
Let’s look at the pipeline flow in detail.
First of all, pay attention to the job type: it is good practice to run the tasks that perform the actual deployment inside a deployment job. With the deployment job type, you can implement different deployment strategies, target an environment, create gated deployments that require manual approval, and so on.
Secondly, to pass the outputs of ARM deployments along the pipeline, specify a deployment outputs variable name in the Azure Resource Manager (ARM) Template Deployment task and use simple PowerShell to parse those outputs and create pipeline variables you can reference in subsequent steps. This is very handy if you dynamically create disposable environments for testing or implement versioned immutable infrastructure.
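A minimal sketch of such a parsing step is shown below; it assumes the preceding ARM deployment task set `deploymentOutputs: 'storageDeploymentOutputs'`, as in the pipeline above:

```yaml
# Re-publish every ARM template output as a pipeline variable for later steps.
- task: PowerShell@2
  displayName: 'Parse ARM deployment outputs'
  inputs:
    targetType: 'inline'
    script: |
      # deploymentOutputs is a JSON string: { "outputName": { "type": "...", "value": "..." } }
      $outputs = $env:DEPLOYMENT_OUTPUTS | ConvertFrom-Json
      foreach ($output in $outputs.PSObject.Properties) {
        Write-Host "##vso[task.setvariable variable=$($output.Name)]$($output.Value.value)"
      }
  env:
    DEPLOYMENT_OUTPUTS: $(storageDeploymentOutputs)
```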
Thirdly, the Azure File Copy task provides options to create pipeline variables for the storage account URI and SAS token. You can also set an expiration time for the generated SAS token, so it is relatively short-lived and valid only for the time needed to run your deployment.
Lastly, you can use the ‘overrideParameters’ option in the Azure Resource Manager (ARM) Template Deployment task to override the deployment parameters for the storage account URI and SAS token on the fly.
Steps to test and validate ARM templates are not included in the example for the sake of simplicity; as a rule of thumb, remember to extend this starter pipeline with additional stages that perform automated testing.
Optionally, you can also add tasks to delete the deployment artifacts from the storage account when performing incremental deployments, or a task to remove the storage account itself, as sketched below.
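For example, a hypothetical cleanup step using the Azure CLI might look like this; the storage account name and service connection reuse the placeholder variables from the pipeline above:

```yaml
# Optional cleanup: remove the artifacts storage account after a successful deployment.
- task: AzureCLI@2
  displayName: 'Remove artifacts storage account'
  inputs:
    azureSubscription: '$(azureServiceConnection)'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az storage account delete `
        --name $(artifactsStorageAccount) `
        --resource-group $(resourceGroupName) `
        --yes
```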
Template linking
To repeat the approach described above for deploying linked ARM templates, design your main templates to take the storage account URI and SAS token as input parameters. In addition, to make the templates more readable, you can construct the URIs for the linked templates from those parameters in the variables section.
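A minimal sketch of a main template following this pattern is shown below; the linked template name (storage.json) and the nested deployment name are placeholders, and the SAS token parameter is assumed to include the leading question mark, as produced by the Azure File Copy task:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "_artifactsLocation": {
      "type": "string",
      "metadata": {
        "description": "Base URI of the container that holds the linked templates"
      }
    },
    "_artifactsLocationSasToken": {
      "type": "securestring",
      "metadata": {
        "description": "SAS token (including the leading '?') granting read access to the linked templates"
      }
    }
  },
  "variables": {
    "storageTemplateUri": "[concat(parameters('_artifactsLocation'), '/storage.json', parameters('_artifactsLocationSasToken'))]"
  },
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "storageDeployment",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[variables('storageTemplateUri')]"
        }
      }
    }
  ]
}
```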
Implementing this approach as a standard for your main templates will help you repeat this deployment pattern with confidence and spend less time debugging referencing errors.
Boilerplate
You can use the described approach in your Azure DevOps pipelines or use the sample repository on GitHub as a draft solution for your deployment stages.
P.S. Depending on your requirements, you might decide to keep the storage account for deployment artifacts outside of the deployment resource group, but that might add unnecessary complexity to your deployment process. If I were you, I would use a single resource group as a logical boundary for your deployments whenever possible.