Using Azure Functions Flex Consumption Plan
Introduction
Azure Functions Flex Consumption has been in Public Preview for a couple of months now. Before this, there were several options available for running Azure Functions:
- Consumption Plan
- (Elastic) Premium Plan
- Dedicated Plan
- Container Apps
Before the introduction of this plan you could either scale to zero (using the Consumption Plan) or were required to have at least some commitment (Elastic Premium and Dedicated plans). My container experience is somewhat limited (sadly), so I cannot really compare either to the Container Apps option. Considering containers are a different beast altogether, I am leaving them out of this.
The trigger for this article was seeing whether I could limit costs for an existing Azure Functions deployment. At the time, the deployment consisted of about 9 Azure Function Apps running on a two-instance P1V3 plan. The deployment also needed to be moved to a new environment with more stringent Azure Policy requirements. While private endpoints were not (yet) specifically required, there was a need to turn on the firewall of the Azure Storage Account.
This introduces a very specific and, honestly, pretty annoying Azure issue:
When you enable the firewall on an Azure Storage Account in the same region as an Azure Function App (or Web App), the app will no longer be able to reach the Storage Account. Your first thought (and mine) would be to just get the outbound IP addresses for the app and whitelist those in the firewall of the Storage Account. Sadly, this does not work: when the App Service (whichever kind) and the Storage Account exist in the same region, the traffic will actually be internal (within the Microsoft internal network of that region). If you check the audit logging of the Storage Account, you will see traffic from internal IP addresses (unknown to you). When confronted with this situation, you really only have a couple of options:
- Inject the Azure Functions into a VNet and connect the storage account with private endpoints
- Move the Storage Account to a different region
- Disable the firewall on the Storage Account (why would we have to :( )
If you chose to inject the Azure Functions into a VNet you have two options:
- Using a Service Endpoint. This will allow you to whitelist the VNet in the firewall of each Storage Account
- Using Private Endpoints. Private endpoints only cost around ~€7,= per month (without data transfer), but still. For 9 functions, each with a dedicated Storage Account, as is best practice, it would increase the cost by €63,= per month.
Microsoft recommends the use of Private Link and private endpoints over Service Endpoints though. Just moving the environment as is would allow for it to be VNet injected and the Storage Account to use Private Endpoints. But the environment already seemed expensive for the amount of executions and this means it would cost even more because of the price of Private Endpoints.
This led me to investigate whether the environment could be converted to the Azure Functions Flex Consumption Plan. I first tried to convert a single function as-is, but I ran into all sorts of issues, complicated by the already existing deployment and configuration. So, I thought about it, and it seemed to me the simplest approach would be to try it from scratch: start simple and make it more complex along the way.
Basically, considering a Function App with a simple HTTP trigger:
1. Basic: using an Azure Storage Account connection string for authentication, no firewalls or anything
2. Basic: the same as 1, but using Managed Identity for authentication to storage
3. Intermediate: the same as 2, but now with the Function App injected into a VNet, the Storage Account using Private Endpoints, and the firewall set to Deny by default
Setup
Because the deployment I was looking to potentially rework was set up using Bicep and Azure DevOps pipelines, I did my testing using an Azure DevOps environment as well. The code I used to test the deployments below can be found on my GitHub account.
I created a very simple pipeline for performing the three tests, deploy-fn-test1.yml, with a couple of simple steps:
- A task using ‘AzureResourceManagerTemplateDeployment@3’ to deploy the Bicep templates
- A ‘PowerShell@2’ task to convert some output variables from the deployment to Azure DevOps variables
- An ‘ArchiveFiles@2’ task to zip the example PowerShell code (out of the box HttpTrigger)
- And finally, ‘AzureFunctionApp@2’ to upload the content to the Azure Function App
Initial deployment
The initial deployment (test1) is a straightforward Bicep template, with all the basic resources for an Azure Function App:
- Azure Storage account which is publicly accessible
- A blob container for uploading the deployment package (more on this a little later)
- A Log Analytics Workspace and linked Application Insights (not strictly needed for the test, I suppose)
- An Azure Function App and App Settings
The resources in the linked template are mostly as bare-bones as they can be. What is important to note is the inclusion of the functionAppConfig property within the Azure Function App resource. This is new and, as far as I can find, not particularly well documented yet:
functionAppConfig: {
  deployment: {
    storage: {
      type: 'blobContainer'
      // storage.properties.primaryEndpoints.blob ends with '/' so do not add another '/'
      value: '${storage.properties.primaryEndpoints.blob}app-package'
      authentication: {
        type: 'StorageAccountConnectionString'
        storageAccountConnectionStringName: 'AzureWebJobsStorage'
      }
    }
  }
  scaleAndConcurrency: {
    maximumInstanceCount: 40
    instanceMemoryMB: 2048
  }
  runtime: {
    name: 'powershell'
    version: '7.4'
  }
}
- deployment/storage: This section contains the storage settings for where the app can find the content for the Function App. Couple of things to note:
- When linking to the storage endpoint of the storage resource in the template, the URL actually ends with a ‘/’. Do not add another ‘/’ or package deployments will fail (the template will run without issue)
- value needs to contain the path to the deployment container. This container has to be empty and has to exist at the time of deployment
- authentication
- type can be ‘StorageAccountConnectionString’ or ‘SystemAssignedIdentity’.
- storageAccountConnectionStringName is required when using StorageAccountConnectionString. This should contain the name of the app setting that contains the Azure Storage Connection string
- scaleAndConcurrency has two options:
- maximumInstanceCount: has a minimum of 40
- instanceMemoryMB: can be 2048 or 4096
- runtime sets the runtime, as the name would indicate. Keep in mind, however, that you have to remove the corresponding options (FUNCTIONS_WORKER_RUNTIME and FUNCTIONS_EXTENSION_VERSION) from the App Settings
The storage section will change for the other two tests, but scaleAndConcurrency and runtime stay the same.
After I figured out how the configuration actually went together, the deployment was not an issue (any more).
Adding Entra Id authentication for storage
The second deployment (test2) adds a little security. In most Azure environments where Azure Policy frameworks are enforced, shared key authentication is either frowned upon or explicitly denied. So as a first step, I upgraded the Azure Function to use Entra ID authentication. A System Assigned identity was already configured in the first step, so really only a couple of small changes are required:
- Storage Account is updated with ‘allowSharedKeyAccess’ set to false. This disables the use of shared keys for the account
- App Settings are updated to use AzureWebJobsStorage__accountName. The value of this setting is the name of the storage account. When used instead of AzureWebJobsStorage, the function will use the managed identity for access to storage. If you use a custom domain name, you will need a separate app setting for each storage service used by your app.
- As described in the first step, we also need to reconfigure the storage section within functionAppConfig:
storage: {
  type: 'blobContainer'
  value: '${storage.properties.primaryEndpoints.blob}app-package'
  authentication: {
    type: 'SystemAssignedIdentity'
  }
}
- Finally, a Role Assignment resource was added to the deployment to give the system-assigned managed identity of the Function App access to the data plane of the Storage Account.
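A minimal sketch of such a role assignment in Bicep. This assumes the built-in Storage Blob Data Owner role and reuses the `storage` and `functionApp` symbolic names from the earlier template; adjust the role to whatever your policy framework prescribes:

```bicep
// Built-in role definition ID for Storage Blob Data Owner (assumption: this role fits your needs)
var storageBlobDataOwnerRoleId = subscriptionResourceId(
  'Microsoft.Authorization/roleDefinitions',
  'b7e6dc6d-f1e8-4753-8033-0f276bb0955b'
)

// Scope the assignment to the Storage Account so the Function App identity can reach the data plane
resource blobRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storage.id, functionApp.id, storageBlobDataOwnerRoleId)
  scope: storage
  properties: {
    roleDefinitionId: storageBlobDataOwnerRoleId
    principalId: functionApp.identity.principalId
    principalType: 'ServicePrincipal'
  }
}
```

Using guid() over the scope, principal, and role keeps the assignment name deterministic, so redeploying the template does not try to create a duplicate assignment.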
With the lessons learned from the first test about how to configure the Function App, this was actually pretty straight forward and did not cause much headache.
Using Private Endpoints and VNet Injection
As a final test (test3), we get around to setting up the resources in such a way as to allow private access to our Storage Account. This requires us to make a couple of changes.
First, let's add some additional resources:
- VNet: Obviously we need a simple virtual network, with 2 subnets:
- webInjection: subnet delegated to the ‘Microsoft.App/environments’ resource provider
- privateEndpoints: generic subnet where the private endpoint for the storage account can be joined
- Private DNS Zone: for this example we only need blob storage, so one DNS zone will suffice. Remember that if your app uses other services (or if you use Azure Durable Functions) you may need additional Private DNS Zones (and private endpoints linked to the Storage Account).
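The new resources above can be sketched in Bicep roughly as follows. The names, address ranges, and the `location` parameter are illustrative assumptions; the delegation on the injection subnet is the part that matters:

```bicep
resource vnet 'Microsoft.Network/virtualNetworks@2023-11-01' = {
  name: 'vnet-fn-test' // assumption: illustrative name
  location: location
  properties: {
    addressSpace: { addressPrefixes: [ '10.0.0.0/16' ] }
    subnets: [
      {
        name: 'webInjection'
        properties: {
          addressPrefix: '10.0.1.0/24'
          // Flex Consumption requires delegating the injection subnet to Microsoft.App/environments
          delegations: [
            {
              name: 'flex-delegation'
              properties: { serviceDelegation: { serviceName: 'Microsoft.App/environments' } }
            }
          ]
        }
      }
      {
        name: 'privateEndpoints'
        properties: { addressPrefix: '10.0.2.0/24' }
      }
    ]
  }
}

// One Private DNS zone for blob storage; environment() keeps the suffix cloud-agnostic
resource blobDnsZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.blob.${environment().suffixes.storage}'
  location: 'global'
}
```

The DNS zone would still need a virtualNetworkLinks child resource pointing at the VNet for name resolution to work from within the injected subnet.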
We also need to update the resources we have deployed:
- Storage Account
- networkAcls/defaultAction: We’ll set this to ‘Deny’ to make sure non-internal traffic is no longer (by default) allowed
- privateEndpoints: We add a private endpoint resource targeting the Storage Account
- privateDnsZoneGroup: Parented to the Private Endpoint, we also need to link the Private DNS zone to make sure our Function App will be able to resolve an internal connection to the Storage Account
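A hedged sketch of those last two resources in Bicep, assuming the `storage` and `blobDnsZone` symbolic names from earlier and an illustrative subnet ID:

```bicep
resource blobPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-11-01' = {
  name: 'pe-blob-fn-test' // assumption: illustrative name
  location: location
  properties: {
    subnet: {
      // assumption: 'vnet' is the symbolic name of the virtual network in the same template
      id: '${vnet.id}/subnets/privateEndpoints'
    }
    privateLinkServiceConnections: [
      {
        name: 'blob-connection'
        properties: {
          privateLinkServiceId: storage.id
          groupIds: [ 'blob' ] // one endpoint per storage sub-service (blob, queue, table, file)
        }
      }
    ]
  }
}

// Link the Private DNS zone so the Function App resolves the storage FQDN to the private IP
resource blobDnsZoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2023-11-01' = {
  parent: blobPrivateEndpoint
  name: 'default'
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'blob'
        properties: { privateDnsZoneId: blobDnsZone.id }
      }
    ]
  }
}
```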
This is really all we need and after deployment we can confirm that the Function App works as intended.
Of course, in an enterprise setting, you’d likely not allow the Function App to be publicly available either. If you want to extend the example further, you could also create a Private Endpoint for the Function App(s), set the resource to route all traffic through the VNet, and disable public access in the network settings of the app. You’d then have to use another resource to make the Function App publicly available, like an API Management gateway or Application Gateway (or both).
Some lessons learned
Everything considered, deploying Azure Functions using a Flex Consumption Plan was not overly different from using another plan. Some things that I stumbled over:
- When using VNet injection: delegate the subnet to ‘Microsoft.App/environments’. This is different from the other plans; the reason is that the Flex Consumption Plan uses Project Legion as a backend, rather than a traditional App Service Plan
- Your Function App will be running on Linux. Make especially sure to set this correctly in your Azure DevOps pipeline when using ‘AzureFunctionApp@2’. If you don’t, there is not a very clear error (you get a Kudu 404 error, if I remember correctly)
- managedDependency within the Function App is not available. This means that your app package needs to provide all the dependencies (for instance, Azure PowerShell modules).
- If you want to test your Azure Functions Flex Consumption Plan code, especially on Windows, I would highly suggest using VS Code Dev Containers. This way you can test your code in an isolated environment on the proper OS. This also allows you (especially when developing in PowerShell) to ensure you have included all the needed dependencies.