Commit 8375eab

add performance cap and CSI driver prereq

1 parent 891beae commit 8375eab

1 file changed: 20 additions & 19 deletions

File tree

articles/storage/files/azure-kubernetes-service-workloads.md

@@ -12,44 +12,45 @@ ai-usage: ai-generated
# Azure Files guidance for Azure Kubernetes Service (AKS) workloads

-Azure Files provides file shares (Azure Files SMB/NFS endpoints) accessible via SMB 3.x or NFS 4.1 protocols. When integrated with Azure Kubernetes Service (AKS), Azure Files enables persistent, shared storage for containerized applications with `ReadWriteMany` access mode, allowing multiple pods (Kubernetes container groups) to mount the same share concurrently.
+Azure Files provides file shares (Azure Files SMB/NFS endpoints) accessible via SMB 3.x or NFS 4.1 protocols. When integrated with Azure Kubernetes Service (AKS), Azure Files enables persistent, shared storage for containerized applications with `ReadWriteMany` (RWX) access mode, allowing multiple pods (Kubernetes container groups) to mount the same share concurrently.
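A minimal sketch of the RWX pattern this paragraph describes (the claim name and size are illustrative; `azurefile-csi` is the built-in storage class that the AKS-managed Azure Files CSI driver provides):

```yaml
# Illustrative PVC: any pod on any node that references this claim
# mounts the same Azure file share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data               # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # RWX: multiple pods mount the share concurrently
  storageClassName: azurefile-csi # built-in class backed by the Azure Files CSI driver
  resources:
    requests:
      storage: 100Gi
```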
## AKS overview: managed Kubernetes on Azure

Azure Kubernetes Service is a managed Kubernetes service for deploying and scaling containerized applications on Azure. AKS manages control plane components (API server, etcd, scheduler); you manage worker node pools. AKS 1.21+ includes the Azure Files CSI driver by default.

## Azure Files benefits for AKS storage
-Azure Files supports `ReadWriteMany` (RWX) access mode required for multi-pod shared storage. Use the following SKU guidance:
+Azure Files supports `ReadWriteMany` access mode required for multi-pod shared storage. Azure Files has two media tiers: solid state drives (SSD) and hard disk drives (HDD). It also offers three different [billing models](understanding-billing.md): provisioned v2, pay-as-you-go, and the legacy provisioned v1 billing model.
+
+> [!IMPORTANT]
+> To use the provisioned v2 billing model for Azure Files, you must use the Azure Files CSI driver [version 1.35.0](https://github.com/kubernetes-sigs/azurefile-csi-driver/releases/tag/v1.35.0) or later.
+
+Use the following SKU guidance:

| Workload type | File share type | Storage account kind | Storage account SKU |
|-|-|-|-|
-| Logging, moderate I/O | SSD provisioned v2 with Local redundancy | `FileStorage` | `PremiumV2_LRS` |
-| Media/content, high throughput | SSD provisioned v2 with Zone redundancy | `FileStorage` | `PremiumV2_ZRS` |
-| Config files, low I/O | SSD provisioned v2, HDD provisioned v2, or HDD pay-as-you-go with Local redundancy | `FileStorage` (provisioned v2) or `StorageV2` (pay-as-you-go) | `PremiumV2_LRS`, `StandardV2_LRS`, `Standard_LRS` |
+| Logging, moderate I/O | SSD provisioned v2 with locally redundant storage (LRS) | `FileStorage` | `PremiumV2_LRS` |
+| Media/content, high throughput | SSD provisioned v2 with zone-redundant storage (ZRS) | `FileStorage` | `PremiumV2_ZRS` |
+| Config files, low I/O | SSD provisioned v2, HDD provisioned v2, or HDD pay-as-you-go with LRS | `FileStorage` (provisioned v2) or `StorageV2` (pay-as-you-go) | `PremiumV2_LRS`, `StandardV2_LRS`, `Standard_LRS` |
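The SKU column in the table above maps to the `skuName` parameter of a CSI driver StorageClass. A sketch for the logging row, assuming the standard `file.csi.azure.com` provisioner (the class name is illustrative, and provisioned v2 SKUs assume a driver version that supports them):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-ssd-v2   # illustrative name
provisioner: file.csi.azure.com
parameters:
  skuName: PremiumV2_LRS   # SSD provisioned v2 with LRS, per the table above
  protocol: smb
reclaimPolicy: Delete
allowVolumeExpansion: true
```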

-For complete scalability and performance information, see [Scalability and performance targets for Azure Files](storage-files-scale-targets.md).
+There's a [per-file performance cap](storage-files-scale-targets.md#classic-file-share-scale-targets-for-individual-files) on Azure file shares. For complete scalability and performance information, see [Scalability and performance targets for Azure Files](storage-files-scale-targets.md).

-Deploy the storage account in the same Azure region as your AKS cluster to minimize network latency.
+Deploy the storage account in the same Azure region as your AKS cluster to minimize network latency. Cross-region mounts add 50–100+ ms latency.

### Persistent shared storage

Unlike local storage that's tied to individual nodes (Kubernetes worker VMs), Azure Files provides persistent storage that survives pod restarts, node failures, and cluster (AKS) scaling events. Multiple pods across different nodes can simultaneously access the same file share, enabling shared data scenarios and stateful applications.

### Kubernetes native integration

-Azure Files integrates with Kubernetes through the Container Storage Interface (CSI) driver. You provision and manage file shares using persistent volumes (PV) and persistent volume claims (PVC). The CSI driver handles Azure API calls, authentication via managed identity or storage account key, and mount operations.
+Azure Files integrates with Kubernetes through the Azure Files Container Storage Interface (CSI) driver. You provision and manage file shares using persistent volumes (PV) and persistent volume claims (PVC). The CSI driver handles Azure API calls, authentication via managed identity or storage account key, and mount operations.

### SSD file shares for optimal performance

-Azure Files has two media tiers. For new deployments, SSD provisioned v2 is recommended for most workloads:
+For new deployments, we recommend the SSD media tier combined with the provisioned v2 billing model for most workloads:

- **SSD** (recommended): Suitable for logging, media serving, databases, and latency-sensitive workloads. Available with the provisioned v2 billing model (recommended, `PremiumV2_LRS` / `PremiumV2_ZRS`) or the legacy provisioned v1 billing model (`Premium_LRS` / `Premium_ZRS`). Up to 102,400 IOPS and 10,340 MiB/sec throughput per share.
-- **HDD**: Suitable for config files and infrequent access. Available with the provisioned v2 billing model (`StandardV2_LRS` / `StandardV2_ZRS`) or the pay-as-you-go billing model (`Standard_LRS` / `Standard_ZRS`). Up to 50,000 IOPS and 5,120 MiB/sec throughput per share with provisioned v2. For very small shares, HDD pay-as-you-go (`Standard_LRS` / `Standard_ZRS`) may be more cost-effective because HDD provisioned v2 requires a minimum amount of provisioned IOPS and throughput with no free baseline. For most other HDD workloads, SSD provisioned v2 is actually more cost-effective at small share sizes due to its included baseline IOPS and throughput.
-
-For complete scalability and performance information, see [Scalability and performance targets for Azure Files](storage-files-scale-targets.md).
-
-Deploy file shares in the same region as your AKS cluster. Cross-region mounts add 50–100+ ms latency.
+- **HDD**: Suitable for config files and infrequent access. Available with the provisioned v2 billing model (`StandardV2_LRS` / `StandardV2_ZRS`) or the pay-as-you-go billing model (`Standard_LRS` / `Standard_ZRS`). Up to 50,000 IOPS and 5,120 MiB/sec throughput per share with provisioned v2. For very small shares, HDD pay-as-you-go (`Standard_LRS` / `Standard_ZRS`) might be more cost-effective because HDD provisioned v2 requires a minimum amount of provisioned IOPS and throughput with no free baseline. For most other HDD workloads, SSD provisioned v2 is more cost-effective at small share sizes due to its included baseline IOPS and throughput.

### Protocol support

@@ -62,7 +63,7 @@ Azure Files security features: AES-256 encryption at rest, TLS 1.2+ encryption i

## Azure Files CSI driver: Kubernetes integration

-The Azure Files Container Storage Interface (CSI) driver connects Azure Files to Kubernetes clusters. The CSI specification defines a standard interface for storage systems to expose capabilities to containerized workloads. For configuration details, see [Use Azure Files CSI driver in AKS](/azure/aks/azure-files-csi).
+The Azure Files CSI driver connects Azure Files to Kubernetes clusters. The CSI specification defines a standard interface for storage systems to expose capabilities to containerized workloads. For configuration details, see [Use Azure Files CSI driver in AKS](/azure/aks/azure-files-csi).

### How the CSI driver works

@@ -123,9 +124,9 @@ Ensure the following are in place before creating a StorageClass for dynamic pro
### Steps to configure dynamic provisioning

1. **Create the StorageClass** – Define the provisioning parameters (SKU, protocol, mount options).
-2. **Create a PersistentVolumeClaim (PVC)** – Reference the StorageClass; the CSI driver auto-creates the Azure file share.
-3. **Deploy your workload** – Mount the PVC in your pod spec.
-4. **Verify** – Confirm PVC is `Bound` and the mount path is accessible.
+1. **Create a PersistentVolumeClaim (PVC)** – Reference the StorageClass; the CSI driver auto-creates the Azure file share.
+1. **Deploy your workload** – Mount the PVC in your pod spec.
+1. **Verify** – Confirm PVC is `Bound` and the mount path is accessible.
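The steps above can be sketched as two manifests (all names are illustrative; the StorageClass from step 1 is assumed to be called `azurefile-ssd-v2`):

```yaml
# Step 2: the PVC references the StorageClass; the CSI driver
# auto-creates the backing Azure file share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile-ssd-v2   # StorageClass created in step 1
  resources:
    requests:
      storage: 100Gi
---
# Step 3: the workload mounts the PVC in its pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

For step 4, `kubectl get pvc app-data` should report the claim as `Bound`.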

### StorageClass parameters for dynamic provisioning

@@ -475,7 +476,7 @@ Ensure the following are in place before configuring private endpoints for Azure
5. **Deploy your workload** – Mount the PVC in your pod spec.
6. **Verify** – Confirm the PVC binds and that DNS resolves to a private IP (`nslookup <storageaccount>.file.core.windows.net`).

-This YAML example demonstrates how to create Azure file storage with private endpoint configuration for enhanced security. The CSI driver automatically discovers the virtual network from the AKS cluster configuration, so `vnetResourceGroup`, `vnetName`, and `subnetName` are optional if the virtual network is in the same resource group as the AKS cluster. Specify them explicitly for cross-resource-group or multi-VNet scenarios. For Linux mount options, see [SMB mount options reference](#smb-mount-options-reference-linux).
+This YAML example demonstrates how to create Azure file storage with private endpoint configuration for enhanced security. The CSI driver automatically discovers the virtual network from the AKS cluster configuration, so `vnetResourceGroup`, `vnetName`, and `subnetName` are optional if the virtual network is in the same resource group as the AKS cluster. Specify them explicitly for cross-resource group or scenarios with multiple virtual networks. For Linux mount options, see [SMB mount options reference](#smb-mount-options-reference-linux).

```yaml
apiVersion: storage.k8s.io/v1
