**deployment/openshift/README.md** (2 lines changed: 0 additions, 2 deletions)

@@ -86,8 +86,6 @@

**Note:** The Citrix ADC MPX or VPX can be deployed in *[standalone](https://docs.citrix.com/en-us/citrix-adc/12-1/getting-started-with-citrix-adc.html)*, *[high-availability](https://docs.citrix.com/en-us/citrix-adc/12-1/getting-started-with-citrix-adc/configure-ha-first-time.html)*, or *[clustered](https://docs.citrix.com/en-us/citrix-adc/12-1/clustering.html)* modes.

**Note:** In the latest versions of OpenShift, when the OVN CNI is used, `--feature-node-watch` might not work. In that case, you must manually configure the static routes on Citrix ADC VPX.

### Prerequisites

- Determine the IP address needed by the Citrix ingress controller to communicate with the Citrix ADC appliance. The IP address might be any one of the following depending on the type of Citrix ADC deployment:
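Where `--feature-node-watch` cannot add routes automatically, the route toward each node's pod network has to be added by hand on the Citrix ADC VPX. A minimal sketch of the conventional Citrix ADC CLI form (all addresses are placeholders, not values from this document):

```
add route <pod-subnet-network> <pod-subnet-netmask> <node-IP>
```

Repeat this for each cluster node, using that node's pod subnet and a node IP address that is reachable from the VPX.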
**docs/deploy/deploy-cic-openshift.md** (2 lines changed: 0 additions, 2 deletions)

@@ -119,8 +119,6 @@

**Note:** The Citrix ADC MPX or VPX can be deployed in *[standalone](https://docs.citrix.com/en-us/citrix-adc/12-1/getting-started-with-citrix-adc.html)*, *[high-availability](https://docs.citrix.com/en-us/citrix-adc/12-1/getting-started-with-citrix-adc/configure-ha-first-time.html)*, or *[clustered](https://docs.citrix.com/en-us/citrix-adc/12-1/clustering.html)* modes.

**Note:** In the latest versions of OpenShift, when the OVN CNI is used, `--feature-node-watch` might not work. In that case, you must manually configure the static routes on Citrix ADC VPX.

### Prerequisites

- Determine the IP address needed by the Citrix ingress controller to communicate with the Citrix ADC appliance. The IP address might be any one of the following depending on the type of Citrix ADC deployment:
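This IP address is typically handed to the controller through its deployment manifest. A minimal sketch, assuming the conventional `NS_IP` environment variable of the Citrix ingress controller (the value is a placeholder):

```yaml
# Illustrative excerpt from a Citrix ingress controller deployment spec;
# the variable name assumes the controller's conventional environment settings.
env:
- name: "NS_IP"
  value: "192.0.2.10"   # management or SNIP address, depending on the deployment type
```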
**docs/multicluster/multi-cluster.md** (28 lines changed: 27 additions, 1 deletion)

@@ -397,12 +397,13 @@ The following table explains the GTP CRD attributes.

|`serviceType`| Specifies the protocol to which multi-cluster support is applied. |
|`host`| Specifies the domain for which multi-cluster support is applied. |
|`trafficPolicy`| Specifies the traffic distribution policy supported in a multi-cluster deployment. |
|`sourceIpPersistenceId`| Specifies the unique source IP persistence ID. This attribute enables persistence based on the source IP address of inbound packets. The `sourceIpPersistenceId` value must be a multiple of 100 and must be unique. For a sample configuration, see [Example: source IP persistence](#example-source-ip-persistence). |
|`secLbMethod`| Specifies the traffic distribution policy supported among clusters under a group in local-first, canary, or failover deployments. |
|`destination`| Specifies the Ingress or LoadBalancer service endpoint in each cluster. The destination name must match the name of the GSE. |
|`weight`| Specifies the proportion of traffic to be distributed across clusters. For canary deployments, the proportion is specified as a percentage. |
|`CIDR`| Specifies the CIDR to be used in local-first deployments to determine the scope of locality. |
|`primary`| Specifies whether the destination is a primary cluster or a backup cluster in a failover deployment. |
|`monType`| Specifies the type of probe used to determine the health of the multi-cluster endpoint. When the monitor type is HTTPS, SNI is enabled by default during the TLS handshake. |
|`uri`| Specifies the path to be probed for the health of the multi-cluster endpoint for HTTP and HTTPS. |
|`respCode`| Specifies the response code expected to mark the multi-cluster endpoint as healthy for HTTP and HTTPS. |

@@ -545,3 +546,28 @@ Following is a sample traffic policy for the static proximity deployment.

      uri: ''
      respCode: 200

## Example: source IP persistence

The following traffic policy provides an example of enabling source IP persistence. Source IP persistence is enabled by setting the `sourceIpPersistenceId` parameter, and it can be combined with any of the supported traffic policies.
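A sketch of what such a policy could look like, built only from the attributes in the table of GTP CRD attributes (the API version, metadata, and destination names are assumptions, not values from this document; `sourceIpPersistenceId` must be a unique multiple of 100):

```yaml
# Illustrative GTP sketch; apiVersion, names, and destinations are assumed.
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp-sample
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app.example.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      sourceIpPersistenceId: 300      # unique, multiple of 100
      targets:
      - destination: 'app.default.east.cluster1'
        weight: 1
      - destination: 'app.default.west.cluster2'
        weight: 1
      monitor:
      - monType: http
        uri: ''
        respCode: 200
```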
**docs/troubleshooting/troubleshooting.md** (54 lines changed: 54 additions, 0 deletions)

@@ -28,3 +28,57 @@ The following table describes some of the common issues and workarounds.

|------|-----|-----|
|Grafana dashboard has no plots|If the graphs on the Grafana dashboards do not have any values plotted, Grafana is unable to obtain statistics from its data source.|Check whether the Prometheus data source is saved and working properly. After you save the data source with its name and IP address, a green "Data source is working" message indicates that the data source is reachable and detected. <br>If the dashboard is created using `sample_grafana_dashboard.json`, ensure that the name given to the Prometheus data source begins with the word "prometheus" in lowercase. <br>Check the Targets page of Prometheus to see whether the required target exporter is in the `DOWN` state.|
|DOWN: Context deadline exceeded|If this message appears against any of the exporter targets in Prometheus, Prometheus is either unable to connect to the exporter or unable to fetch all the metrics within the configured `scrape_timeout`.|If you are using the Prometheus Operator, `scrape_timeout` is adjusted automatically, and this error means that the exporter itself is not reachable. <br>If a standalone Prometheus container or pod is used, increase the `scrape_interval` and `scrape_timeout` values in the `/etc/prometheus/prometheus.cfg` file to allow more time for collecting the metrics.|
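For the standalone case, the relevant part of the Prometheus configuration might look like the following sketch (the job name, target address, port, and interval values are illustrative, not taken from this document):

```yaml
# Illustrative standalone Prometheus scrape settings; increase these
# values if targets report "context deadline exceeded".
global:
  scrape_interval: 30s   # how often targets are scraped
  scrape_timeout: 25s    # must not exceed scrape_interval
scrape_configs:
  - job_name: 'citrix-adc-exporter'          # hypothetical job name
    static_configs:
      - targets: ['exporter-service:8888']   # hypothetical exporter target
```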
## Troubleshooting - OpenShift feature node watch

**Problem 1**: While using the OpenShift OVN CNI, `feature-node-watch` does not add the correct routes.

**Description**: The Citrix ingress controller reads node annotations to fetch the details needed to add the static routes.

**Workaround**:

1. Make sure that the following RBAC permission is provided to the Citrix ingress controller, along with `route.openshift.io`, so that it can run in an OpenShift environment with the OVN CNI:

    ```yaml
    - apiGroups: ["config.openshift.io"]
      resources: ["networks"]
      verbs: ["get", "list"]
    ```

2. The Citrix ingress controller looks for two annotations added by OVN; make sure that they exist on the node.
3. If the annotations do not exist, `feature-node-watch` might not work for the OVN CNI. In that case, you must manually configure the static routes on Citrix ADC VPX.

**Problem 2**: While using the OpenShift SDN CNI, `feature-node-watch` does not add the correct routes.

**Description**: The Citrix ingress controller reads the HostSubnet CRD to fetch the details needed to add the static routes.

**Workaround**:

1. Make sure that the following RBAC permissions are provided to the Citrix ingress controller, along with `route.openshift.io`, so that it can run in an OpenShift environment with the SDN CNI:

    ```yaml
    - apiGroups: ["network.openshift.io"]
      resources: ["hostsubnets"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["config.openshift.io"]
      resources: ["networks"]
      verbs: ["get", "list"]
    ```

2. The Citrix ingress controller looks for the following CRD and specification:

    ```
    oc get hostsubnets.network.openshift.io <cluster node-name> -o json

    {
      "apiVersion": "network.openshift.io/v1",
      "host": "<cluster node-name>",
      "hostIP": "x.x.x.x",
      "kind": "HostSubnet",
      "metadata": {
        "annotations": {
          ...
        }
      },
      "subnet": "10.129.0.0/23"
    }
    ```

3. If the CRD does not exist with the expected specification, `feature-node-watch` might not work for the OpenShift SDN CNI. In that case, you must manually configure the static routes on Citrix ADC VPX.
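The `subnet` field in the HostSubnet output is a CIDR, while a manual static route on the Citrix ADC VPX is conventionally expressed as a network, netmask, and gateway. A small sketch of the conversion, using the sample subnet shown above (the `add route` form is the conventional Citrix ADC CLI syntax; the host IP is a placeholder):

```python
import ipaddress

# Values based on the sample HostSubnet output; hostIP is a placeholder.
subnet = ipaddress.ip_network("10.129.0.0/23")
host_ip = "x.x.x.x"

# Equivalent manual route on the Citrix ADC VPX:
print(f"add route {subnet.network_address} {subnet.netmask} {host_ip}")
# -> add route 10.129.0.0 255.255.254.0 x.x.x.x
```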