---
title: "Tutorial: Durable text analysis with a mounted Azure Files share in Azure Functions"
description: Learn how to deploy a Python Azure Functions app that uses Durable Functions to orchestrate parallel text file analysis by using a mounted Azure Files share on a Flex Consumption plan.
ms.service: azure-functions
ms.topic: tutorial
ms.date: 03/24/2026
ms.custom:
---
In this tutorial, you deploy a Python Azure Functions app that uses Durable Functions to orchestrate parallel text file analysis. Your function app mounts an Azure Files share, analyzes multiple text files in parallel (fan-out), aggregates the results (fan-in), and returns them to the caller. This approach demonstrates a key advantage of storage mounts: shared file access across multiple function instances without per-request network overhead.
In this tutorial, you:
> [!div class="checklist"]
> - Use Azure Developer CLI to deploy a Durable Functions app in a Flex Consumption plan with a mounted Azure Files share
> - Trigger an orchestration to process sample text files in parallel
> - Verify the aggregated analysis results
[!INCLUDE functions-azure-files-samples-note]
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
- Azure Developer CLI (`azd`), version 1.9.0 or later.
- Git.
The CLI examples in this tutorial use Bash syntax and have been tested in Azure Cloud Shell (Bash) and Linux/macOS terminals.
## Get the sample

You can find the sample code for this tutorial in the [Azure Functions Flex Consumption with Azure Files OS Mount Samples](https://github.com/Azure-Samples/Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples) GitHub repository. The `durable-text-analysis` folder contains the function app code, a Bicep template that provisions the required Azure resources, and a post-deployment script that uploads sample text files.
1. Open a terminal and go to the directory where you want to clone the repository.

1. Clone the repository:

    ```bash
    git clone https://github.com/Azure-Samples/Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples.git
    ```

1. Go to the project folder:

    ```bash
    cd Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples/durable-text-analysis
    ```

1. Initialize the `azd` environment. When prompted, enter an environment name such as `durable-text`:

    ```bash
    azd init
    ```
## Review the sample

The three key pieces that make this sample work are the infrastructure that creates the mount, the script that uploads the sample files, and the function code that orchestrates the analysis.
The `mounts.bicep` module configures an Azure Files SMB mount on the function app. The `mountPath` value determines the local path where the files appear at runtime. You pass the storage account access key as a parameter, and the platform resolves it at runtime through a Key Vault reference:
:::code language="bicep" source="~/functions-flex-azure-files-samples/durable-text-analysis/infra/app/mounts.bicep" :::
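Because the share is surfaced at the configured `mountPath`, function code can treat it as an ordinary local directory. As a minimal sketch (the `MOUNT_PATH` setting name and the `/mounts/content` fallback below are assumptions for illustration, not values from the sample):

```python
import os
from pathlib import Path

# Assumed app setting name and fallback path; the real location is
# whatever mountPath is set to in mounts.bicep.
MOUNT_PATH = os.environ.get("MOUNT_PATH", "/mounts/content")

def list_text_files(mount_path: str = MOUNT_PATH) -> list[str]:
    """Return the names of the .txt files visible on the mounted share."""
    return sorted(p.name for p in Path(mount_path).glob("*.txt"))
```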
Because Azure Files SMB mounts don't yet support managed identity authentication, you need a storage account key. As a best practice, store this key in Azure Key Vault and use a Key Vault reference in an app setting. The mount configuration references that app setting by using `@AppSettingRef()`, so the key never appears in your Bicep templates. The `keyvault.bicep` module creates the vault, stores the key, and grants RBAC roles:
:::code language="bicep" source="~/functions-flex-azure-files-samples/durable-text-analysis/infra/app/keyvault.bicep" :::
The `main.bicep` file invokes the mount and Key Vault modules:
:::code language="bicep" source="~/functions-flex-azure-files-samples/durable-text-analysis/infra/main.bicep" range="173-206" :::
After `azd up` deploys the infrastructure and code, a post-deployment script creates sample text files, uploads them to the Azure Files share, and runs a health check:
:::code language="bash" source="~/functions-flex-azure-files-samples/durable-text-analysis/scripts/post-up.sh" range="33-88" :::
The HTTP starter in `function_app.py` starts a Durable Functions orchestration. The orchestrator in `orchestrator.py` lists all `.txt` files on the mount, fans out to analyze each file in parallel, and aggregates the results:
:::code language="python" source="~/functions-flex-azure-files-samples/durable-text-analysis/src/orchestrator.py" :::
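The real orchestrator uses the Durable Functions Python SDK, but the fan-out/fan-in shape it applies can be sketched as a plain local analogue you can run anywhere. The `analyze_file` helper and thread-pool approach below are illustrative stand-ins, not the sample's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def analyze_file(path: Path) -> dict:
    # Per-file "activity": standard file I/O against the (mounted) directory.
    text = path.read_text(encoding="utf-8")
    return {"file": path.name, "word_count": len(text.split()), "char_count": len(text)}

def analyze_all(mount_dir: Path) -> dict:
    files = sorted(mount_dir.glob("*.txt"))      # list the files on the "mount"
    with ThreadPoolExecutor() as pool:           # fan out: one task per file
        results = list(pool.map(analyze_file, files))
    return {                                     # fan in: aggregate the results
        "results": results,
        "total_words": sum(r["word_count"] for r in results),
        "total_chars": sum(r["char_count"] for r in results),
    }
```

In the Durable Functions version, the thread pool is replaced by yielding a list of activity tasks from the orchestrator, and the framework handles the parallelism and checkpointing.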
Each activity function reads directly from the mounted share by using standard file I/O. It doesn't need any SDK or network calls:
:::code language="python" source="~/functions-flex-azure-files-samples/durable-text-analysis/src/activities.py" range="30-51" :::
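To make the "no SDK, no network" point concrete, a hypothetical stand-in for such an activity might look like the following. The keyword lists and the neutral tie-breaking are assumptions for illustration, not the sample's actual sentiment logic:

```python
from pathlib import Path

# Assumed keyword lists; the sample's real sentiment logic may differ.
POSITIVE = {"great", "good", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "sad"}

def analyze_text_file(path: str) -> dict:
    """Analyze one file on the mounted share with standard file I/O only."""
    text = Path(path).read_text(encoding="utf-8")
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {
        "file": Path(path).name,
        "word_count": len(words),
        "char_count": len(text),
        "sentiment": sentiment,
    }
```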
## Deploy the sample

This sample is an Azure Developer CLI (`azd`) template. A single `azd up` command provisions the infrastructure, deploys the function code, and uploads sample text files to the Azure Files share.
1. Sign in to Azure. The post-deployment script uses Azure CLI commands, so you need to authenticate by using both tools:

    ```bash
    azd auth login
    az login
    ```

1. Provision and deploy everything:

    ```bash
    azd up
    ```
When prompted, select the Azure subscription and location to use. The command then:
- Creates a resource group, storage account, Key Vault, Flex Consumption function app with a Durable Functions configuration, Application Insights instance, and managed identity
- Deploys the Python function code
- Uploads sample text files to the Azure Files share
- Runs a health check
> [!NOTE]
> Because Azure Files SMB mounts don't yet support managed identity authentication, you need a storage account key. As a best practice, the deployment stores this key in Azure Key Vault and uses a Key Vault reference so the key is never exposed in app settings. This approach provides centralized secret management, auditing, and support for key rotation.
The deployment takes a few minutes. When it completes, you see a summary of the created resources.
## Run the sample

1. Save resource names as shell variables for the remaining steps:

    ```bash
    RESOURCE_GROUP=$(azd env get-value AZURE_RESOURCE_GROUP)
    FUNCTION_APP_NAME=$(azd env get-value AZURE_FUNCTION_APP_NAME)
    FUNCTION_APP_URL=$(azd env get-value AZURE_FUNCTION_APP_URL)
    ```

1. Get the function host key:

    ```bash
    HOST_KEY=$(az functionapp keys list \
      --resource-group $RESOURCE_GROUP \
      --name $FUNCTION_APP_NAME \
      --query "functionKeys.default" \
      -o tsv)
    ```
1. Start the orchestration:

    ```bash
    curl -s -X POST "${FUNCTION_APP_URL}/api/start-analysis?code=${HOST_KEY}" | jq .
    ```

    The response includes an instance ID and status query URIs:

    ```json
    {
      "id": "abc123def456",
      "statusQueryGetUri": "https://...",
      "sendEventPostUri": "https://...",
      "terminatePostUri": "https://..."
    }
    ```
1. Check the orchestration status. Use the `statusQueryGetUri` from the previous response, or construct the URL manually:

    ```bash
    INSTANCE_ID="<instance-id-from-trigger-response>"
    curl -s "${FUNCTION_APP_URL}/api/orchestrators/TextAnalysisOrchestrator/${INSTANCE_ID}?code=${HOST_KEY}" | jq .
    ```

    While the orchestration is running, the `runtimeStatus` is `Running`. When it completes, the response looks like this example:

    ```json
    {
      "name": "TextAnalysisOrchestrator",
      "instanceId": "abc123def456",
      "runtimeStatus": "Completed",
      "output": {
        "results": [
          { "file": "sample1.txt", "word_count": 15, "char_count": 98, "sentiment": "positive" },
          { "file": "sample2.txt", "word_count": 18, "char_count": 120, "sentiment": "positive" },
          { "file": "sample3.txt", "word_count": 12, "char_count": 85, "sentiment": "neutral" }
        ],
        "total_words": 45,
        "total_chars": 303,
        "analysis_duration_seconds": 2.34
      }
    }
    ```
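Instead of checking by hand, you can poll the status URI until the orchestration reaches a terminal state. This is a hedged sketch of that pattern: the `fetch_status` callable is injected so the loop stays testable, and in a real script it would issue an HTTP GET (for example, with `urllib.request`) against the `statusQueryGetUri` returned by the trigger response:

```python
import time

# Terminal runtime states reported by the Durable Functions status endpoint.
TERMINAL = {"Completed", "Failed", "Terminated"}

def wait_for_completion(fetch_status, interval_seconds=2.0, max_polls=30) -> dict:
    """Poll until the orchestration reaches a terminal state, then return it."""
    for _ in range(max_polls):
        status = fetch_status()  # e.g., GET statusQueryGetUri and parse the JSON
        if status.get("runtimeStatus") in TERMINAL:
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("Orchestration did not reach a terminal state in time.")
```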
> [!TIP]
> Your function app reads all three files in parallel directly from the mounted share by using standard file I/O, with no per-request network calls. This result demonstrates the power of storage mounts combined with Durable Functions.
## Clean up resources

To avoid ongoing charges, delete all the resources created by this tutorial:

```bash
azd down --purge
```

> [!WARNING]
> This command deletes the resource group and all the resources in it, including the function app, storage account, and Application Insights instance.