It has been a while since I last updated my Ghost on Azure project, and many changes have been introduced to Azure services during that time. I decided to use the break in my work to update the project deployment templates to include those changes and to use it as a learning opportunity to catch up on new cloud service features.
Ghost on Azure is a one-click Ghost deployment on Azure Web App for Containers. It’s written as a Bicep template that spins up a custom Ghost Docker container on Azure App Service, with Azure Database for MySQL as the backend. It can be a good starting point for anyone wanting to self-host the Ghost platform on the Microsoft Azure cloud. Plus, it leverages many Azure-native services and their features, so it can also serve as a showcase of their practical usage.
The project started as a simple multi-container deployment and later transformed into a comprehensive solution using PaaS services such as Azure Key Vault, Front Door, and Web Application Firewall. Now, we are focusing on further security improvements and migrating from deprecated Azure services.
New Ghost 5 container image
I use a custom Ghost Docker image in this project, which is based on the official Ghost Alpine image and extended to support Azure Monitor Application Insights. I’ve updated it to Ghost 5 and removed the explicit database connectivity check with wait-for-it, as it proved unreliable in testing whether the MySQL server was ready to accept connections. Since a new instance of Azure Database for MySQL takes some time to become available, Azure App Service restarts the container a few times until the database backend is ready and the Ghost app can connect to the server to create its database.
Note. The initial deployment might take a few minutes before the Ghost container successfully starts and is ready to serve the content.
MySQL Flexible Server
The initial project configuration used a multi-container deployment, placing a MySQL container alongside the application container running Ghost using Docker Compose. Later, I switched to a single container deployment and used Azure Database for MySQL for the database, as the support for multi-container deployment on App Service degraded and became unstable. Recently, Microsoft deprecated Azure Database for MySQL – Single Server, and I needed to migrate my project to MySQL – Flexible Server.
As Ghost 5 supports MySQL version 8 as its primary database option, I also switched from the previous MySQL version 5, which was used with the MySQL – Single Server, to version 8 with the Flexible Server deployment. However, the most notable change in the context of this project is probably that Azure Database for MySQL - Flexible Server now enforces encrypted connections by default.
I think encrypting data in transit, such as database connections, is a good thing, even if the data transfer happens in an internal network. It fully reflects the core principles of Zero Trust Architecture, as a malicious actor might also operate inside your network. The tricky part was configuring the encrypted connection options in the MySQL client used by Ghost. Baking the public certificate used by MySQL - Flexible Server into the container image or placing it on a file share wasn’t a good option and would introduce more unwanted dependencies. So, I ended up putting the content of that public certificate into an environment variable using multi-line string support in Bicep:
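A minimal sketch of that approach is below. The setting name follows Ghost’s double-underscore convention for mapping environment variables to configuration keys; the certificate content is a placeholder, and the resource and variable names are illustrative:

```bicep
// Multi-line string holding the MySQL server's public CA certificate.
// The actual certificate content goes between the triple quotes.
var mysqlCaCert = '''
-----BEGIN CERTIFICATE-----
...certificate content...
-----END CERTIFICATE-----
'''

// Pass the certificate to Ghost as an app setting so the MySQL client
// can verify the encrypted connection.
resource ghostAppSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: '${webAppName}/appsettings'
  properties: {
    'database__client': 'mysql'
    'database__connection__ssl__ca': mysqlCaCert
  }
}
```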
Azure Private Link
Another notable change is that I decided to leverage Azure Private Link to further restrict access to the database server at the network level and completely block access to it over the public network.
Configuring Azure Private Link and private endpoints for Azure services might seem like a daunting task at first, but when you grasp the core concepts of how it works under the hood, it should become your no-brainer option to reduce the attack surface of your cloud infrastructure. Yes, it requires a good knowledge of core network concepts, understanding how DNS resolution works, and configuring corresponding private DNS zones using Azure Private DNS zones or your other existing DNS services. Plus, it adds complexity to your deployments and requires extra work to automate its configuration using Bicep or Terraform templates. However, from the security standpoint, I would name locking down network access to your cloud resources the number one recommendation after setting up proper authorization and encryption controls.
Here is how the project network topology looked before using Azure Private Link:
Communication between the app service and dependencies happened over their public endpoints. The backend services enforced encrypted connections. Plus, you could put some restrictions using firewalls on their public endpoints to limit access from other Azure services only. Still, the information was traversing over the Internet, and any misconfiguration of service firewall rules could expose them to external attacks.
After configuring private endpoints for Azure Database for MySQL, Key Vault and Storage, and configuring virtual network integration for App Service, all traffic to backend services is isolated in a private Azure Virtual Network:
Connectivity to the public endpoints of backend services is completely locked down, making it much easier to control and enforce at enterprise scale using Azure Policy. Now, you don’t have to validate and control myriad firewall rules on various Azure resources.
As you can now integrate App Service with a virtual network even on the Basic tier, Azure Private Link and private networking in Azure became even more accessible. When your App Service is integrated with a virtual network, it can connect to other services deployed in that network, like private endpoints, without going over the public network. Plus, you can apply all other security measures available in Azure Virtual Network to further segment and restrict network access using subnets, Network Security Groups (NSG), network policies for private endpoints, etc.
Azure Private DNS zones also greatly simplify working with Azure Private Link, as they automate the creation of the corresponding DNS records for your private endpoints.
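As an illustration, here is a rough sketch of a private endpoint for the MySQL flexible server wired to a private DNS zone group. The resource names, subnet reference, and symbolic identifiers (`mysqlServer`, `privateEndpointSubnetId`, `mysqlPrivateDnsZone`) are assumptions for the example, not the project’s exact code:

```bicep
// Private endpoint placing the MySQL server's network interface
// into a subnet of the virtual network.
resource mysqlPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-04-01' = {
  name: 'pe-mysql'
  location: location
  properties: {
    subnet: {
      id: privateEndpointSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'mysql-connection'
        properties: {
          privateLinkServiceId: mysqlServer.id
          groupIds: [ 'mysqlServer' ]
        }
      }
    ]
  }
}

// DNS zone group that auto-creates the A record for the endpoint
// in the privatelink.mysql.database.azure.com private DNS zone.
resource mysqlDnsZoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2023-04-01' = {
  parent: mysqlPrivateEndpoint
  name: 'default'
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'mysql'
        properties: {
          privateDnsZoneId: mysqlPrivateDnsZone.id
        }
      }
    ]
  }
}
```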
Azure Key Vault role-based access (RBAC)
The next improvement is related to configuring access to Azure Key Vault using Azure roles instead of legacy Key Vault access policies. In my opinion, it greatly simplifies access management at scale, as you no longer need to control access at two different levels, and you can configure access to both management and data plane using Azure built-in and custom roles.
From the Bicep code perspective, we now create a separate Azure role assignment resource:
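A sketch of such an assignment, granting the App Service managed identity read access to Key Vault secrets, might look like this. The GUID is the well-known ID of the built-in Key Vault Secrets User role; the `keyVault` and `webApp` symbolic names are illustrative:

```bicep
// Reference the built-in 'Key Vault Secrets User' role definition.
resource keyVaultSecretsUser 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
  scope: subscription()
  name: '4633458b-17de-408a-b874-0445c86b69e6' // Key Vault Secrets User
}

// Assign the role to the web app's managed identity, scoped to the vault.
// guid() makes the assignment name deterministic and idempotent.
resource kvRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(keyVault.id, webApp.id, keyVaultSecretsUser.id)
  scope: keyVault
  properties: {
    roleDefinitionId: keyVaultSecretsUser.id
    principalId: webApp.identity.principalId
    principalType: 'ServicePrincipal'
  }
}
```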
Although you can configure Microsoft Entra authentication for Azure Database for MySQL and leverage an App Service managed identity for authorization, authentication with managed identities doesn’t seem to be supported by the MySQL client library used by Ghost. So, I still rely on MySQL authentication and use Key Vault to securely store the database password for the Ghost database connection.
Also, when I tried to migrate the Storage account used to host the file share as persistent storage for the Ghost container to the role-based access model, I faced a similar issue, as that scenario is not supported by custom-mounted storage.
Azure Front Door Standard and App Service access restrictions
The initial project version used Azure CDN for traffic offloading. Later, I added an option to deploy the solution with Azure Front Door, which used a managed Web Application Firewall (WAF) policy for inbound traffic inspection and site protection. Those Azure services, now labeled as classic, are scheduled for retirement in 2027. Microsoft released a comprehensive guide on migrating from Azure Front Door (classic) to the Standard/Premium tier. I would consider that service update a breaking change, as service features don’t map one-to-one between the classic and new service offerings. Plus, the pricing model of the new offerings is quite different, which requires careful consideration and cost estimation for your specific use case.
Having said that, I’ve removed the option to deploy the solution with deprecated Azure CDN (Microsoft CDN (classic)) and updated the configuration to deploy Azure Front Door Standard as a more reasonably priced service for such a project. Unfortunately, the managed WAF policies are now supported only by the Premium tier, so Azure Front Door Standard now works more like a CDN, but you can still enhance it with custom WAF policies.
Apart from that, Azure App Service now supports more targeted access restrictions. In addition to restricting access to your Azure App Service using the Azure Front Door service tag, you can now narrow it to a specific Front Door instance with HTTP headers. The Bicep template part for configuring such restrictions might look like the following:
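Here is a sketch of that restriction: it allows only traffic from the `AzureFrontDoor.Backend` service tag and matches the `X-Azure-FDID` header against the specific Front Door profile’s ID. The `webAppName` and `frontDoorProfile` symbolic names are illustrative:

```bicep
// Restrict inbound traffic to a single Front Door instance by combining
// the service tag with an X-Azure-FDID header match.
resource webAppAccessRestrictions 'Microsoft.Web/sites/config@2022-09-01' = {
  name: '${webAppName}/web'
  properties: {
    ipSecurityRestrictions: [
      {
        name: 'AllowFrontDoorOnly'
        action: 'Allow'
        priority: 100
        tag: 'ServiceTag'
        ipAddress: 'AzureFrontDoor.Backend'
        headers: {
          'x-azure-fdid': [
            frontDoorProfile.properties.frontDoorId
          ]
        }
      }
    ]
  }
}
```

Without the header match, any Front Door instance in Azure (including someone else’s) could reach your app through the service tag, so pinning the instance ID closes that gap.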
The master project template still contains conditional logic to deploy the solution with an Azure App Service as a public endpoint or use an additional Front Door profile to serve the incoming traffic to your Ghost on Azure deployment.
Other minor tweaks
In addition to updating Azure Resource Manager API versions for resources, removing discontinued pricing tiers for some services, and updating categories for Azure Monitor Logs diagnostic settings, I’ve also reduced the number of output values passed between the Bicep modules in the master deployment template and used references to existing resources whenever possible, as it provides more flexibility in referencing their properties in other resources. For example, instead of passing (aka exposing) Storage account access keys through output values, you can just reference them using the corresponding function on the referenced resource:
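A minimal sketch of that pattern, with illustrative names, declares the Storage account as an `existing` resource inside the module and calls `listKeys()` on it directly:

```bicep
// Reference an already-deployed Storage account instead of receiving
// its keys through module outputs.
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: storageAccountName
}

// Retrieve the access key at deployment time, right where it is consumed.
resource storageMountSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: '${webAppName}/appsettings'
  properties: {
    STORAGE_ACCOUNT_NAME: storageAccountName
    STORAGE_ACCOUNT_KEY: storageAccount.listKeys().keys[0].value
  }
}
```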
For the complete deployment configuration details, check the source code in my GitHub repo.
To be continued
As you might have noticed, a few design decisions in this project originated from overcoming Azure service limitations. I think hosting a containerized app on Azure App Service is still quite limiting, at least for production use, as it probably creates more challenges than it solves. I’m considering trying out Azure Container Apps as an alternative.
Another planned modification is configuring the Azure Private link for Azure Monitor components used in the solution. It’s different from private links to other Azure services, so I plan to explore its specifics in a separate post using my Ghost on Azure project as a playground for that implementation.
Have you used Azure Container Apps or Azure Monitor Private Link Scopes in your projects? What was your experience with them? Please share your thoughts in the comments 👇