# Azure Files guidance for Azure Kubernetes Service (AKS) workloads
Azure Files provides file shares (Azure Files SMB/NFS endpoints) accessible via SMB 3.x or NFS 4.1 protocols. When integrated with Azure Kubernetes Service (AKS), Azure Files enables persistent, shared storage for containerized applications with `ReadWriteMany` (RWX) access mode, allowing multiple pods (Kubernetes container groups) to mount the same share concurrently.
## AKS overview: managed Kubernetes on Azure
Azure Kubernetes Service is a managed Kubernetes service for deploying and scaling containerized applications on Azure. AKS manages control plane components (API server, etcd, scheduler); you manage worker node pools. AKS 1.21+ includes the Azure Files CSI driver by default.
## Azure Files benefits for AKS storage
Azure Files supports `ReadWriteMany` access mode required for multi-pod shared storage. Azure Files has two media tiers: solid state drives (SSD) and hard disk drives (HDD). It also offers three different [billing models](understanding-billing.md): provisioned v2, pay-as-you-go, and the legacy provisioned v1 billing model.
> [!IMPORTANT]
> To use the provisioned v2 billing model for Azure Files, you must use the Azure Files CSI driver [version 1.35.0](https://github.com/kubernetes-sigs/azurefile-csi-driver/releases/tag/v1.35.0) or later.
27
+
28
+
Use the following SKU guidance:
| Workload type | File share type | Storage account kind | Storage account SKU |
|-|-|-|-|
| Logging, moderate I/O | SSD provisioned v2 with locally redundant storage (LRS)|`FileStorage`|`PremiumV2_LRS`|
| Media/content, high throughput | SSD provisioned v2 with zone-redundant storage (ZRS)|`FileStorage`|`PremiumV2_ZRS`|
| Config files, low I/O | SSD provisioned v2, HDD provisioned v2, or HDD pay-as-you-go with LRS|`FileStorage` (provisioned v2) or `StorageV2` (pay-as-you-go) |`PremiumV2_LRS`, `StandardV2_LRS`, `Standard_LRS`|
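As a sketch of how the table's SKU choice surfaces in Kubernetes, a StorageClass can pass the SKU through the CSI driver's `skuName` parameter. The class name below is illustrative, and `protocol: smb` is the driver default, shown here for clarity:

```yaml
# Illustrative StorageClass for an SSD provisioned v2 share.
# Provisioned v2 SKUs such as PremiumV2_LRS require CSI driver v1.35.0 or later.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-ssd-v2      # hypothetical name
provisioner: file.csi.azure.com
parameters:
  skuName: PremiumV2_LRS      # or PremiumV2_ZRS, StandardV2_LRS, Standard_LRS
  protocol: smb
reclaimPolicy: Delete
allowVolumeExpansion: true
```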
There's a [per-file performance cap](storage-files-scale-targets.md#classic-file-share-scale-targets-for-individual-files) on Azure file shares. For complete scalability and performance information, see [Scalability and performance targets for Azure Files](storage-files-scale-targets.md).
Deploy the storage account in the same Azure region as your AKS cluster to minimize network latency. Cross-region mounts add 50–100+ ms latency.
### Persistent shared storage
Unlike local storage that's tied to individual nodes (Kubernetes worker VMs), Azure Files provides persistent storage that survives pod restarts, node failures, and cluster (AKS) scaling events. Multiple pods across different nodes can simultaneously access the same file share, enabling shared data scenarios and stateful applications.
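For example, a Deployment with more than one replica can mount a single RWX claim so that every pod, regardless of node, reads and writes the same share. The claim name `shared-data` and the busybox image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-writer          # hypothetical workload name
spec:
  replicas: 2                  # pods may land on different nodes
  selector:
    matchLabels:
      app: shared-writer
  template:
    metadata:
      labels:
        app: shared-writer
    spec:
      containers:
        - name: app
          image: busybox       # placeholder image
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: shared
              mountPath: /mnt/shared
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data   # an existing ReadWriteMany PVC
```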
### Kubernetes native integration
Azure Files integrates with Kubernetes through the Azure Files Container Storage Interface (CSI) driver. You provision and manage file shares using persistent volumes (PV) and persistent volume claims (PVC). The CSI driver handles Azure API calls, authentication via managed identity or storage account key, and mount operations.
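Alongside dynamic provisioning, an existing share can be attached statically through a PersistentVolume. This is a hedged sketch: the account, share, and secret names are placeholders, and the referenced secret is assumed to hold the storage account key:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-static         # hypothetical PV name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: mysa_myshare   # must be unique across the cluster
    volumeAttributes:
      shareName: myshare         # pre-created Azure file share
    nodeStageSecretRef:
      name: azure-secret         # secret containing the storage account key
      namespace: default
```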
### SSD file shares for optimal performance
For new deployments, we recommend the SSD media tier combined with the provisioned v2 billing model for most workloads:
- **SSD** (recommended): Suitable for logging, media serving, databases, and latency-sensitive workloads. Available with the provisioned v2 billing model (recommended, `PremiumV2_LRS` / `PremiumV2_ZRS`) or the legacy provisioned v1 billing model (`Premium_LRS` / `Premium_ZRS`). Up to 102,400 IOPS and 10,340 MiB/sec throughput per share.
- **HDD**: Suitable for config files and infrequent access. Available with the provisioned v2 billing model (`StandardV2_LRS` / `StandardV2_ZRS`) or the pay-as-you-go billing model (`Standard_LRS` / `Standard_ZRS`). Up to 50,000 IOPS and 5,120 MiB/sec throughput per share with provisioned v2. For very small shares, HDD pay-as-you-go (`Standard_LRS` / `Standard_ZRS`) might be more cost-effective because HDD provisioned v2 requires a minimum amount of provisioned IOPS and throughput with no free baseline. For most other HDD workloads, SSD provisioned v2 is more cost-effective at small share sizes due to its included baseline IOPS and throughput.
### Protocol support
## Azure Files CSI driver: Kubernetes integration
The Azure Files CSI driver connects Azure Files to Kubernetes clusters. The CSI specification defines a standard interface for storage systems to expose capabilities to containerized workloads. For configuration details, see [Use Azure Files CSI driver in AKS](/azure/aks/azure-files-csi).
### How the CSI driver works
### Steps to configure dynamic provisioning
1. **Create the StorageClass** – Define the provisioning parameters (SKU, protocol, mount options).
1. **Create a PersistentVolumeClaim (PVC)** – Reference the StorageClass; the CSI driver auto-creates the Azure file share.
1. **Deploy your workload** – Mount the PVC in your pod spec.
1. **Verify** – Confirm the PVC is `Bound` and the mount path is accessible.
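The steps above can be sketched end to end. The class, claim, pod, and image names are illustrative, and `Premium_LRS` is used here because it also works with older driver versions:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-csi-demo     # hypothetical class name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteMany            # RWX: the share can be mounted by many pods
  storageClassName: azurefile-csi-demo
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox           # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/azure
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```

To verify, `kubectl get pvc demo-pvc` should report a `STATUS` of `Bound` once the driver creates the share.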
### StorageClass parameters for dynamic provisioning
5. **Deploy your workload** – Mount the PVC in your pod spec.
6. **Verify** – Confirm the PVC binds and that DNS resolves to a private IP (`nslookup <storageaccount>.file.core.windows.net`).
This YAML example demonstrates how to create Azure file storage with private endpoint configuration for enhanced security. The CSI driver automatically discovers the virtual network from the AKS cluster configuration, so `vnetResourceGroup`, `vnetName`, and `subnetName` are optional if the virtual network is in the same resource group as the AKS cluster. Specify them explicitly for cross-resource-group scenarios or for deployments that span multiple virtual networks. For Linux mount options, see [SMB mount options reference](#smb-mount-options-reference-linux).