support/azure/azure-container-instances/connectivity/web-socket-is-closed-or-could-not-be-opened.md (+6 -4)
@@ -1,11 +1,11 @@
 ---
 title: Error - Web socket is closed or could not be opened
 description: Learn how to resolve the (Web socket is closed or could not be opened) error. This error prevents you from connecting to your container from a virtual network.
-ms.date: 12/28/2023
+ms.date: 02/24/2025
 author: tysonfms
 ms.author: tysonfreeman
 editor: v-jsitser
-ms.reviewer: v-leedennis
+ms.reviewer: albarqaw, v-weizhu, v-leedennis
 ms.service: azure-container-instances
 ms.custom: sap:Connectivity
 #Customer intent: As an Azure administrator, I want to learn how to resolve the "Web socket is closed or could not be opened" error so that I can successfully deploy an image onto a container instance.
@@ -22,10 +22,12 @@ When you try to connect to your container from the Azure portal, you receive the
 
 ## Cause
 
-Your firewall blocks access to port 19390. This port is required to connect to Container Instances from the Azure portal when container groups are deployed in virtual networks.
+Your firewall or corporate proxy blocks access to port 19390. This port is required to connect to Container Instances from the Azure portal when container groups are deployed in virtual networks.
 
 ## Solution
 
-Allow ingress to TCP port 19390 in your firewall. At a minimum, make sure that your firewall gives access to that port for all public client IP addresses that the Azure portal has to connect to.
+To resolve this error, allow ingress to TCP port 19390 in your firewall. At a minimum, make sure that your firewall gives access to that port for all public client IP addresses that the Azure portal has to connect to.
+
+In some scenarios where a corporate proxy blocks port 19390, allow this port on the proxy, and then verify the traffic by using the **Network** tab in the browser developer tools.
 
 [!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
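
If the firewall in question is an Azure network security group (NSG) on the container group's subnet, a rule along the lines of the following sketch opens the port. This is an illustration only; the resource group, NSG name, and priority are hypothetical values, not names taken from the article:

```bash
# Hypothetical resource names; substitute your own resource group and NSG.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myAciSubnetNsg \
    --name AllowPortalContainerConnect \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 19390
```

After the rule is in place, a client-side check such as `nc -vz <container-group-address> 19390` can help confirm that the port is reachable before you retry the portal connection.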
support/azure/azure-kubernetes/connectivity/insufficientsubnetsize-error-advanced-networking.md (+1 -1)
@@ -1,7 +1,7 @@
 ---
 title: InsufficientSubnetSize error code
 description: Learn how to fix an InsufficientSubnetSize error that occurs when you deploy an Azure Kubernetes Service (AKS) cluster that uses advanced networking.
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
@@ -26,8 +26,7 @@ sections:
   - question: |
       Can I move my cluster to a different subscription, or move my subscription with my cluster to a new tenant?
     answer: |
-      If you've moved your AKS cluster to a different subscription or the cluster's subscription to a new tenant, the cluster won't function because of missing cluster identity permissions. AKS doesn't support moving clusters across subscriptions or tenants because of this constraint.
-
+      No. If you've moved your AKS cluster to a different subscription or the cluster's subscription to a new tenant, the cluster won't function because of missing cluster identity permissions. AKS doesn't support moving clusters across subscriptions or tenants because of this constraint. For more information, see [Operations FAQ](/azure/aks/faq#operations).
   - question: |
       What naming restrictions are enforced for AKS resources and parameters?
     answer: |
@@ -42,7 +41,10 @@ sections:
     - AKS node pool names must be all lowercase. The names must be 1-12 characters in length for Linux node pools and 1-6 characters for Windows node pools. A name must start with a letter, and the only allowed characters are letters and numbers.
 
     - The *admin-username*, which sets the administrator user name for Linux nodes, must start with a letter. This user name may only contain letters, numbers, hyphens, and underscores. It has a maximum length of 32 characters.
-
+
+    For more information about naming conventions, see the following resources:
+    - [Naming rules and restrictions for Azure resources](/azure/azure-resource-manager/management/resource-name-rules#microsoftcontainerservice)
+    - [Abbreviation recommendations for Azure resources](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-abbreviations#containers)
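
To illustrate the node pool naming rules above, here's a hedged sketch of adding compliant pools; the resource group and cluster names are placeholders, not values from this change:

```bash
# "nodepool2" satisfies the Linux rules: lowercase, 1-12 characters, starts with a letter.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool2

# Windows node pool names are limited to 6 characters, so "win1" works but "windowspool1" doesn't.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name win1 \
    --os-type Windows
```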
support/azure/azure-kubernetes/create-upgrade-delete/aks-increased-memory-usage-cgroup-v2.md (+31 -5)
@@ -1,8 +1,8 @@
 ---
 title: Increased memory usage reported in Kubernetes 1.25 or later versions
 description: Resolve an increase in memory usage that's reported after you upgrade an Azure Kubernetes Service (AKS) cluster to Kubernetes 1.25.x.
-ms.date: 07/13/2023
-editor: v-jsitser
+ms.date: 03/03/2025
+editor: momajed
 ms.reviewer: aritraghosh, cssakscic, v-leedennis
 ms.service: azure-kubernetes-service
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
@@ -23,23 +23,49 @@ You experience one or more of the following symptoms:
 
 ## Cause
 
-This increase is caused by a change in memory accounting within version 2 of the Linux control group (cgroup) API. [Cgroup v2](https://kubernetes.io/docs/concepts/architecture/cgroups/) is now the default cgroup version for Kubernetes 1.25 on AKS.
+This increase is caused by a change in memory accounting within version 2 of the Linux control group (`cgroup`) API. [Cgroup v2](https://kubernetes.io/docs/concepts/architecture/cgroups/) is now the default `cgroup` version for Kubernetes 1.25 on AKS.
 
 > [!NOTE]
-> This issue is distinct from the memory saturation in nodes that's caused by applications or frameworks that aren't aware of cgroup v2. For more information, see [Memory saturation occurs in pods after cluster upgrade to Kubernetes 1.25](./aks-memory-saturation-after-upgrade.md).
+> This issue is distinct from the memory saturation in nodes that's caused by applications or frameworks that aren't aware of `cgroup` v2. For more information, see [Memory saturation occurs in pods after cluster upgrade to Kubernetes 1.25](./aks-memory-saturation-after-upgrade.md).
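
Before acting on the solutions below, it can help to confirm which `cgroup` version a node actually runs. One common check, shown here as a sketch with a hypothetical node name, reads the file-system type of `/sys/fs/cgroup`:

```bash
# Open a debugging shell on the node (the node name is hypothetical).
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=ubuntu
# Inside the debug pod, the node's root file system is mounted at /host.
chroot /host stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1.
```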
 
 ## Solution
 
 - If you observe frequent memory pressure on the nodes, upgrade your subscription to increase the amount of memory that's available to your virtual machines (VMs).
 
 - If you see a higher eviction rate on the pods, [use higher limits and requests for pods](/azure/aks/developer-best-practices-resource-management#define-pod-resource-requests-and-limits).
 
+- `cgroup` v2 uses a different API than `cgroup` v1. If any applications directly access the `cgroup` file system, update them to later versions that support `cgroup` v2. For example:
+
+  - **Third-party monitoring and security agents**:
+
+    Some monitoring and security agents depend on the `cgroup` file system. Update these agents to versions that support `cgroup` v2.
+
+  - **Java applications**:
+
+    Use versions that fully support `cgroup` v2:
+    - OpenJDK/HotSpot: `jdk8u372`, `11.0.16`, `15`, and later versions.
+    - IBM Semeru Runtimes: `8.0.382.0`, `11.0.20.0`, `17.0.8.0`, and later versions.
+    - IBM Java: `8.0.8.6` and later versions.
+
+  - **uber-go/automaxprocs**:
+    If you're using the `uber-go/automaxprocs` package, make sure that the version is `v1.5.1` or later.
+
+- An alternative temporary solution is to revert the `cgroup` version on your nodes by using a DaemonSet. For more information, see [Revert to cgroup v1 DaemonSet](https://github.com/Azure/AKS/blob/master/examples/cgroups/revert-cgroup-v1.yaml).
+
+  > [!IMPORTANT]
+  > - Use the DaemonSet cautiously. Test it in a lower environment before you apply it to production to ensure compatibility and prevent disruptions.
+  > - By default, the DaemonSet applies to all nodes in the cluster and reboots them to implement the `cgroup` change.
+  > - To control how the DaemonSet is applied, configure a `nodeSelector` to target specific nodes.
+
+
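
The second bullet above links to guidance on pod requests and limits; as a quick sketch (the deployment name and values are hypothetical), they can be raised in place with:

```bash
# Raise memory requests and limits for an existing workload (names and values are examples).
kubectl set resources deployment myapp --requests=memory=256Mi --limits=memory=512Mi
```

The preceding **Important** note mentions configuring a `nodeSelector` to scope the revert DaemonSet. One hedged way to do that, assuming hypothetical node and DaemonSet names, is to label the target nodes and patch the DaemonSet's pod template to match:

```bash
# Label only the nodes that should revert to cgroup v1 (the node name is hypothetical).
kubectl label node aks-nodepool1-12345678-vmss000000 cgroup-version=v1

# Patch the DaemonSet (assumed here to be deployed as "revert-cgroup-v1" in kube-system)
# so that its pods are scheduled only on the labeled nodes.
kubectl patch daemonset revert-cgroup-v1 -n kube-system --type merge \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"cgroup-version":"v1"}}}}}'
```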
 > [!NOTE]
 > If you experience only an increase in memory use without any of the other symptoms that are mentioned in the "Symptoms" section, you don't have to take any action.
 
 ## Status
 
-We're actively working with the Kubernetes community to fix the underlying issue, and we'll keep you updated on our progress. We also plan to change the eviction thresholds or [resource reservations](/azure/aks/concepts-clusters-workloads#resource-reservations), depending on the outcome of the fix.
+We're actively working with the Kubernetes community to resolve the underlying issue. Progress on this effort can be tracked at [kubernetes/kubernetes issue #118916](https://github.com/kubernetes/kubernetes/issues/118916).
+
+As part of the resolution, we plan to adjust the eviction thresholds or update [resource reservations](/azure/aks/concepts-clusters-workloads#resource-reservations), depending on the outcome of the fix.