support/azure/azure-kubernetes/create-upgrade-delete/aks-increased-memory-usage-cgroup-v2.md
## Cause
This increase is caused by a change in memory accounting within version 2 of the Linux control group (`cgroup`) API. [Cgroup v2](https://kubernetes.io/docs/concepts/architecture/cgroups/) is now the default cgroup version for Kubernetes 1.25 on AKS.
> [!NOTE]
> This issue is distinct from the memory saturation in nodes that's caused by applications or frameworks that aren't aware of `cgroup` v2. For more information, see [Memory saturation occurs in pods after cluster upgrade to Kubernetes 1.25](./aks-memory-saturation-after-upgrade.md).
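Before changing anything, it can help to confirm which cgroup version a node is actually running. A minimal check, assuming you have a shell on the node (for example, through `kubectl debug node/<node-name> -it --image=busybox`):

```shell
# Print the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" indicates cgroup v2 (the default on AKS with Kubernetes 1.25+);
# "tmpfs" indicates the node is still using cgroup v1.
stat -fc %T /sys/fs/cgroup/
```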
## Solution
- If you observe frequent memory pressure on the nodes, upgrade your subscription to increase the amount of memory that's available to your virtual machines (VMs).
- If you see a higher eviction rate on the pods, [use higher limits and requests for pods](/azure/aks/developer-best-practices-resource-management#define-pod-resource-requests-and-limits).
- `cgroup` v2 uses a different API than `cgroup` v1. If there are any applications that directly access the `cgroup` file system, update them to later versions that support `cgroup` v2. For example:
  - **Third-party monitoring and security agents**: Some monitoring and security agents depend on the `cgroup` file system. Update these agents to versions that support `cgroup` v2.
  - **Java applications**: Use versions that fully support `cgroup` v2:
    - OpenJDK/HotSpot: `jdk8u372`, `11.0.16`, `15`, and later versions.
    - IBM Semeru Runtimes: `8.0.382.0`, `11.0.20.0`, `17.0.8.0`, and later versions.
    - IBM Java: `8.0.8.6` and later versions.
  - **uber-go/automaxprocs**: If you're using the `uber-go/automaxprocs` package, ensure the version is `v1.5.1` or later.
- An alternative temporary solution is to revert the `cgroup` version on your nodes by using the DaemonSet. For more information, see [Revert to cgroup v1 DaemonSet](https://github.com/Azure/AKS/blob/master/examples/cgroups/revert-cgroup-v1.yaml).
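The minimum-version checks above can be scripted. A small sketch in shell, assuming GNU `sort` is available; the `version_ge` helper name is illustrative, and in practice you would feed it a version reported by a tool such as `go list -m go.uber.org/automaxprocs` or `java -version`:

```shell
# Returns success (0) if version $1 is greater than or equal to $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example: compare an automaxprocs version against the v1.5.1 minimum.
if version_ge "v1.5.2" "v1.5.1"; then
  echo "cgroup v2 supported"
fi
```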
> [!IMPORTANT]
> - Use the DaemonSet cautiously. Test it in a lower environment before applying to production to ensure compatibility and prevent disruptions.
> - By default, the DaemonSet applies to all nodes in the cluster and reboots them to implement the `cgroup` change.
> - To control how the DaemonSet is applied, configure a `nodeSelector` to target specific nodes.
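As a sketch of that targeting, a `nodeSelector` in the DaemonSet's pod template restricts it to nodes that carry a chosen label. The label key and value here are illustrative, not part of the linked manifest:

```yaml
# Fragment of a DaemonSet spec: only nodes labeled cgroup=v1 are targeted.
spec:
  template:
    spec:
      nodeSelector:
        cgroup: v1
```

You could then opt individual nodes in with `kubectl label node <node-name> cgroup=v1` before applying the DaemonSet.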