
Commit 9d6e521

Merge pull request #311466 from fhryo-msft/master

Add guidance for local CSI driver placement

2 parents ae42d6b + 974862e

2 files changed: 122 additions & 1 deletion

File tree

articles/storage/container-storage/TOC.yml

Lines changed: 3 additions & 1 deletion
```diff
@@ -11,6 +11,8 @@
     href: install-container-storage-aks.md
   - name: Use with local NVMe
     href: use-container-storage-with-local-disk.md
+  - name: Manage Local CSI driver placement
+    href: manage-local-container-storage-interface-driver-placement.md
   - name: Use with Elastic SAN
     href: use-container-storage-with-elastic-san.md
   - name: Remove Azure Container Storage
@@ -119,4 +121,4 @@
   - name: Understand billing
     href: container-storage-billing-version-1.md
   - name: FAQ
-    href: container-storage-faq.md
+    href: container-storage-faq.md
```

articles/storage/container-storage/manage-local-container-storage-interface-driver-placement.md

Lines changed: 119 additions & 0 deletions

---
title: Manage Local CSI driver placement in Azure Container Storage
description: Manage local CSI driver placement through node affinity configuration in the local NVMe storage class.
author: fhryo-msft
ms.service: azure-container-storage
ms.topic: how-to
ms.date: 2/5/2026
ms.author: fryu
# Customer intent: As a Kubernetes administrator, I want to manage local CSI driver placement through node affinity configuration in the local NVMe storage class.
---

# Manage local CSI driver placement with node affinity

In Kubernetes clusters, CSI drivers are typically deployed as DaemonSets, running on all nodes by default. In production environments, however, certain nodes have specialized hardware (such as local NVMe disks), specific instance types, or designated roles that make them better suited to particular storage workloads.

Azure Container Storage uses the local CSI driver to manage local NVMe volumes. By configuring node affinity in the local NVMe storage class, you can control the placement of local CSI drivers so that they run only on nodes that meet the specified conditions. This approach helps optimize resource utilization and minimizes the impact on other nodes in the cluster.

## When to consider managing local CSI driver placement

Managing the placement of local CSI drivers is essential in the following scenarios:

- Scenario 1: Mixed node pools with different capabilities. Clusters often contain multiple node pools with different instance types. Without node affinity, local CSI driver pods might be scheduled onto nodes that don't have local NVMe disks and can't successfully service storage requests.

- Scenario 2: Mixed node pools for distinct workloads. In large clusters, it's common to have multiple node pools, each tailored for specific types of workloads. Without node affinity, local CSI driver pods might be scheduled on node pools that aren't meant to use local NVMe disks, even if local NVMe disks are present on those nodes.

## Node affinity via StorageClass annotations

The local CSI driver placement mechanism uses:

- [Kubernetes nodeAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). The `preferredDuringSchedulingIgnoredDuringExecution` option isn't supported.
- Storage class annotations to express placement requirements.

Only the creation or modification of a storage class triggers nodeAffinity recomputation.

You can define a nodeAffinity rule for a local NVMe StorageClass using the `storageoperator.acstor.io/nodeAffinity` annotation. These rules ensure that local CSI driver pods are scheduled only on nodes that meet the specified criteria. If no nodeAffinity rule is defined, local CSI driver pods are deployed across all nodes in the cluster by default.
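
To see whether a rule is already in place, you can read the annotation back from an existing storage class. A minimal sketch, assuming a StorageClass named `local-nvme` like the ones created in the following sections:

```bash
# Print the nodeAffinity annotation on the storage class, if set.
# Dots in the annotation key must be escaped for JSONPath.
kubectl get storageclass local-nvme \
  -o jsonpath='{.metadata.annotations.storageoperator\.acstor\.io/nodeAffinity}'
```

An empty result means no placement rule is set, so the driver pods run on every node.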

## Ensure local CSI drivers are placed on nodes with local NVMe disks

To ensure that local CSI drivers are deployed only on nodes equipped with local NVMe disks, you can configure node affinity based on instance type. The following example shows a StorageClass configuration:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
  annotations:
    storageoperator.acstor.io/nodeAffinity: |
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values: [standard_l8s_v3, Standard_L16s_v3]
provisioner: localdisk.csi.acstor.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```
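
After you apply the storage class, you can check where the local CSI driver pods actually run. The namespace and pod names depend on your Azure Container Storage installation, so this is a generic sketch:

```bash
# List DaemonSets cluster-wide, then check which nodes host CSI pods.
kubectl get daemonsets --all-namespaces
kubectl get pods --all-namespaces -o wide | grep -i csi
```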

Match expressions are case-sensitive. We recommend verifying the actual instance type values on your nodes before configuring node affinity. Use the following command to validate:

```bash
$ kubectl get nodes -o custom-columns="NAME:.metadata.name,INSTANCE-TYPE:.metadata.labels.node\.kubernetes\.io/instance-type"
NAME                             INSTANCE-TYPE
aks-mycpu-32605643-vmss000000    Standard_D4ds_v5
aks-mygpu-23116656-vmss000000    standard_l8s_v3
aks-mygpu2-37383660-vmss000000   Standard_L16s_v3
```
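
As an end-to-end check, you can create a test claim against the class. Because the class uses `volumeBindingMode: WaitForFirstConsumer`, the claim stays `Pending` until a pod that consumes it is scheduled onto a matching node. A minimal sketch; the claim name and requested size are illustrative:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-nvme-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-nvme
  resources:
    requests:
      storage: 100Gi
EOF
```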

## Ensure local CSI drivers are placed in specific node pools

You can ensure that local CSI drivers are deployed only in selected node pools by configuring node affinity based on the `agentpool` label. The following example shows a StorageClass configuration:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
  annotations:
    storageoperator.acstor.io/nodeAffinity: |
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.azure.com/agentpool
            operator: In
            values: [mygpu, mygpu2]
provisioner: localdisk.csi.acstor.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```

Match expressions are case-sensitive. We recommend verifying the actual node pool names on your nodes before configuring node affinity. Use the following command to validate:

```bash
$ kubectl get nodes -o custom-columns="NAME:.metadata.name,AGENTPOOL:.metadata.labels.kubernetes\.azure\.com/agentpool"
NAME                             AGENTPOOL
aks-mycpu-32605643-vmss000000    mycpu
aks-mygpu-23116656-vmss000000    mygpu
aks-mygpu2-37383660-vmss000000   mygpu2
```
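
If you manage the cluster with the Azure CLI, you can cross-check pool names and VM sizes there as well. A sketch with placeholder resource names:

```bash
# Placeholders: substitute your own resource group and cluster name.
az aks nodepool list \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --query "[].{name:name, vmSize:vmSize}" \
  -o table
```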

## Best practices

- Always label nodes explicitly before using node affinity.
- Keep StorageClasses consistent and avoid mixing annotated and nonannotated classes unless intentional.
- Use multiple nodeSelectorTerms to express OR-style placement (see the sketch after this list).
- Validate node labels before deploying StorageClasses.
- Learn more capabilities in [Kubernetes nodeAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/).
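
Multiple `nodeSelectorTerms` entries are ORed together, while `matchExpressions` within a single term are ANDed. The following sketch places the driver on nodes that either match an instance type or belong to a given node pool; the class name and values are illustrative:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme-or
  annotations:
    storageoperator.acstor.io/nodeAffinity: |
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        # Term 1: nodes with a matching instance type ...
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values: [Standard_L16s_v3]
        # ... OR term 2: nodes in a matching agent pool.
        - matchExpressions:
          - key: kubernetes.azure.com/agentpool
            operator: In
            values: [mygpu]
provisioner: localdisk.csi.acstor.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```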

## See also

- [What is Azure Container Storage?](./container-storage-introduction.md)
- [Install Azure Container Storage with AKS](./install-container-storage-aks.md)
- [Use Azure Container Storage with local NVMe](./use-container-storage-with-local-disk.md)
- [Best practices for ephemeral NVMe data disks in Azure Kubernetes Service (AKS)](/azure/aks/best-practices-storage-nvme)
