| title | Use Azure Container Storage with Local NVMe |
|---|---|
| description | Configure Azure Container Storage for use with local NVMe on the Azure Kubernetes Service (AKS) cluster nodes. Create a storage class and deploy a pod using standard Kubernetes patterns. |
| author | khdownie |
| ms.service | azure-container-storage |
| ms.topic | how-to |
| ms.date | 09/10/2025 |
| ms.author | kendownie |
| ms.custom | references_regions |
Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use local NVMe disk as backend storage for your Kubernetes workloads. NVMe is designed for high-speed data transfer between storage and CPU, providing high IOPS and throughput.
Important
This article applies to Azure Container Storage (version 2.x.x), which supports local NVMe disk and Azure Elastic SAN as backing storage types. For details about earlier versions, see Azure Container Storage (version 1.x.x) documentation.
When your application needs sub-millisecond storage latency and high throughput, you can use local NVMe disks with Azure Container Storage to meet your performance requirements. These disks are ephemeral, meaning they're deployed on the local virtual machine (VM) hosting the AKS cluster and aren't saved to an Azure storage service. Data on these disks is lost if you stop or deallocate the VM. Local NVMe disks are offered on select Azure VM families, such as storage-optimized VMs.
By default, Azure Container Storage creates generic ephemeral volumes when using local NVMe disks. For use cases that require persistent volume claims, you can add the annotation localdisk.csi.acstor.io/accept-ephemeral-storage: "true" in your persistent volume claim template.
To maximize performance, Azure Container Storage automatically stripes data across all available local NVMe disks on a per-VM basis. Striping is a technique where data is divided into small chunks and evenly written across multiple disks simultaneously, which increases throughput and improves overall I/O performance. This behavior is enabled by default and cannot be disabled.
Because performance aggregates across those striped devices, larger VM sizes that expose more NVMe drives can unlock substantially higher IOPS and bandwidth. Selecting a larger VM family lets your workloads benefit from the extra aggregate throughput without additional configuration.
For example, the Lsv3 series scales from a single 1.92-TB NVMe drive on Standard_L8s_v3 (around 400,000 IOPS and 2,000 MB/s) up to 10 NVMe drives on Standard_L80s_v3 (about 3.8 million IOPS and 20,000 MB/s).
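As a rough illustration of how striped performance scales, you can multiply the per-disk Lsv3 figures from the example above by the disk count. This is a linear upper bound; real aggregate numbers (such as the roughly 3.8 million IOPS on Standard_L80s_v3) land slightly below it.

```shell
# Rough linear scaling estimate using the per-disk Lsv3 figures from above
# (~400,000 IOPS and ~2,000 MB/s per NVMe disk). Actual aggregate numbers
# fall slightly below this upper bound.
disks=10                 # Standard_L80s_v3 exposes 10 NVMe disks
per_disk_iops=400000
per_disk_mbps=2000
echo "$((disks * per_disk_iops)) IOPS, $((disks * per_disk_mbps)) MB/s upper bound"
```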
[!INCLUDE container-storage-prerequisites]
- Review the installation instructions and ensure Azure Container Storage is properly installed.
Local NVMe disks are only available on certain VM types, such as storage-optimized VMs or GPU-accelerated VMs. If you plan to use local NVMe capacity, choose one of these VM sizes.
Run the following command to get the VM type that's used with your node pool. Replace <resource group> and <cluster name> with your own values. PoolName and VmSize are output column names defined by the query, so keep the query as shown here.
az aks nodepool list --resource-group <resource group> --cluster-name <cluster name> --query "[].{PoolName:name, VmSize:vmSize}" -o table
The following output is an example.
PoolName    VmSize
----------  ---------------
nodepool1   standard_l8s_v3
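If you're unsure whether a given VM size exposes local NVMe disks, you can inspect its SKU capabilities with the Azure CLI. This is a sketch: replace <region> with your region, and note that capability names vary by SKU, so review the full capabilities list for your size.

```shell
# Inspect the capabilities of a candidate VM size to confirm local NVMe
# support. Capability names vary by SKU; review the full list.
az vm list-skus --location <region> --size Standard_L8s_v3 \
  --query "[0].capabilities" -o table
```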
Note
In Azure Container Storage (version 2.x.x), you can now use clusters with fewer than three nodes.
In scenarios where VM sizes with a single local NVMe disk are used alongside ephemeral OS disks, the local NVMe disk is allocated for the OS, leaving no capacity for Azure Container Storage to use. To ensure optimal performance and availability of local NVMe disks for high-performance data processing, we recommend that you do the following:
- Select VM sizes with two or more local NVMe disks.
- Use managed disks for the OS, freeing up all local NVMe disks for data processing.
For more information, refer to Best practices for ephemeral NVMe data disks in Azure Kubernetes Service.
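Following those recommendations, you can add a node pool that uses a storage-optimized VM size with a managed OS disk, so all local NVMe disks remain available to Azure Container Storage. This is an example sketch: the pool name nvmepool, the VM size, and the node count are placeholders to adjust for your environment.

```shell
# Example: add a node pool with a managed OS disk so all local NVMe disks
# stay free for data. Pool name, VM size, and node count are placeholders.
az aks nodepool add \
  --resource-group <resource group> \
  --cluster-name <cluster name> \
  --name nvmepool \
  --node-vm-size Standard_L16s_v3 \
  --node-osdisk-type Managed \
  --node-count 3
```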
If you don't already have Azure Container Storage installed, install it.
Azure Container Storage (version 2.x.x) presents local NVMe as a standard Kubernetes storage class. Create the local-csi storage class once per cluster and reuse it for both generic ephemeral volumes and persistent volume claims.
- Use your favorite text editor to create a YAML manifest file such as storageclass.yaml, then paste in the following specification.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-csi
provisioner: localdisk.csi.acstor.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
- Apply the manifest to create the storage class.
kubectl apply -f storageclass.yaml
Alternatively, you can create the storage class using Terraform.
- Use Terraform to manage the storage class by creating a configuration like the following main.tf. Update the provider version or kubeconfig path as needed for your environment.

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 3.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_storage_class_v1" "local_csi" {
  metadata {
    name = "local-csi"
  }
  storage_provisioner    = "localdisk.csi.acstor.io"
  reclaim_policy         = "Delete"
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = true
}
- Initialize, review, and apply the configuration to create the storage class.

terraform init
terraform plan
terraform apply
Run the following command to verify that the storage class is created:
kubectl get storageclass local-csi
You should see output similar to:
NAME        PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-csi   localdisk.csi.acstor.io   Delete          WaitForFirstConsumer   true                   10s
Follow these steps to create and attach a generic ephemeral volume using Azure Container Storage. Make sure Azure Container Storage is installed and the local-csi storage class exists before you continue.
Create a pod that runs Fio (Flexible I/O Tester) for benchmarking and workload simulation, using a generic ephemeral volume.
- Use your favorite text editor to create a YAML manifest file such as fiopod.yaml.

- Paste in the following code and save the file.

kind: Pod
apiVersion: v1
metadata:
  name: fiopod
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - name: fio
      image: mayadata/fio
      args: ["sleep", "1000000"]
      volumeMounts:
        - mountPath: "/volume"
          name: ephemeralvolume
  volumes:
    - name: ephemeralvolume
      ephemeral:
        volumeClaimTemplate:
          spec:
            volumeMode: Filesystem
            accessModes: ["ReadWriteOnce"]
            storageClassName: local-csi
            resources:
              requests:
                storage: 10Gi
- Apply the YAML manifest file to deploy the pod.
kubectl apply -f fiopod.yaml
Check that the pod is running:
kubectl get pod fiopod
You should see the pod in the Running state. Once running, you can execute a Fio benchmark test:
kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
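You can also confirm that the ephemeral volume was provisioned through the local-csi storage class. Kubernetes names the PersistentVolumeClaim backing a generic ephemeral volume `<pod name>-<volume name>`, so for this example the claim is fiopod-ephemeralvolume.

```shell
# A generic ephemeral volume is backed by a PVC named <pod name>-<volume name>.
kubectl get pvc fiopod-ephemeralvolume
```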
While generic ephemeral volumes are recommended for ephemeral storage, Azure Container Storage also supports persistent volumes with ephemeral storage when needed for compatibility with existing workloads.
Note
Azure Container Storage (version 2.x.x) uses the new annotation localdisk.csi.acstor.io/accept-ephemeral-storage: "true" instead of the previous acstor.azure.com/accept-ephemeral-storage: "true".
Make sure Azure Container Storage is installed and the local-csi storage class you created earlier is available before deploying workloads that use it.
If you need to use persistent volume claims that aren't tied to the pod lifecycle, you must add the localdisk.csi.acstor.io/accept-ephemeral-storage: "true" annotation. The data on the volume is local to the node and is lost if the node is deleted or the pod is moved to another node.
Here's an example stateful set using persistent volumes with the ephemeral storage annotation:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-lcd-lvm-annotation
  labels:
    app: busybox
spec:
  podManagementPolicy: Parallel
  serviceName: statefulset-lcd
  replicas: 10
  template:
    metadata:
      labels:
        app: busybox
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: statefulset-lcd
          image: mcr.microsoft.com/azurelinux/busybox:1.36
          command:
            - "/bin/sh"
            - "-c"
            - set -euo pipefail; trap exit TERM; while true; do date -u +"%Y-%m-%dT%H:%M:%SZ" >> /mnt/lcd/outfile; sleep 1; done
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/lcd
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
        annotations:
          localdisk.csi.acstor.io/accept-ephemeral-storage: "true"
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-csi
        resources:
          requests:
            storage: 10Gi

Save and apply this YAML to create the stateful set with persistent volumes:
kubectl apply -f statefulset-pvc.yaml
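After applying the manifest, you can check that the replicas and their claims were created. In line with the StatefulSet naming convention, the claims are named `<template name>-<pod name>`, for example persistent-storage-statefulset-lcd-lvm-annotation-0.

```shell
# List the stateful set's pods and the PVCs created from its template.
# PVC names follow <template name>-<pod name>.
kubectl get pods -l app=busybox
kubectl get pvc | grep statefulset-lcd-lvm-annotation
```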
In this section, you learn how to check node ephemeral disk capacity, expand storage capacity, and delete storage resources.
An ephemeral volume is allocated on a single node, so the volume size you configure must not exceed the available ephemeral disk capacity of that node.
Make sure a StorageClass for localdisk.csi.acstor.io exists. Run the following command to check the available capacity of ephemeral disk for each node.
kubectl get csistoragecapacities.storage.k8s.io -n kube-system -o custom-columns=NAME:.metadata.name,STORAGE_CLASS:.storageClassName,CAPACITY:.capacity,NODE:.nodeTopology.matchLabels."topology\.localdisk\.csi\.acstor\.io/node"
You should see output similar to this example:
NAME          STORAGE_CLASS   CAPACITY    NODE
csisc-2pkx4   local-csi       1373172Mi   aks-storagepool-31410930-vmss000001
csisc-gnmm9   local-csi       1373172Mi   aks-storagepool-31410930-vmss000000
If the capacity output is empty, confirm that a StorageClass for the localdisk.csi.acstor.io provisioner exists. The csistoragecapacities.storage.k8s.io resources are generated only after such a StorageClass exists.
Because ephemeral disk storage uses local resources on the AKS cluster nodes, expanding storage capacity requires adding nodes to the cluster.
To add a node to your cluster, run the following command. Replace <cluster-name>, <nodepool-name>, <resource-group>, and <new-count> with your values.
az aks nodepool scale --cluster-name <cluster-name> --name <nodepool-name> --resource-group <resource-group> --node-count <new-count>
To clean up storage resources, you must first delete all PersistentVolumeClaims and/or PersistentVolumes. Deleting the Azure Container Storage StorageClass doesn't automatically remove your existing PersistentVolumes/PersistentVolumeClaims.
To delete a storage class named local-csi, run the following command:
kubectl delete storageclass local-csi
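A full cleanup sketch, using the example names from this article, deletes the workloads first so their volumes are released, then the remaining claims, and finally the storage class. Adjust the names if your resources differ.

```shell
# Delete workloads first so their volumes are released, then claims, then
# the storage class. Names match the examples in this article.
kubectl delete pod fiopod --ignore-not-found
kubectl delete statefulset statefulset-lcd-lvm-annotation --ignore-not-found
# Remove the PVCs created by the stateful set's volume claim template.
kubectl get pvc -o name | grep statefulset-lcd-lvm-annotation | xargs -r kubectl delete
kubectl delete storageclass local-csi
```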
- What is Azure Container Storage?
- Install Azure Container Storage with AKS
- Use Azure Container Storage (version 1.x.x) with local NVMe
- Overview of deploying a highly available PostgreSQL database on Azure Kubernetes Service (AKS)
- Best practices for ephemeral NVMe data disks in Azure Kubernetes Service (AKS)