# Enable request retry feature using AppQoE for Citrix ingress controller
When a Citrix ADC appliance receives an HTTP request and forwards it to a back-end server, the connection to the back-end server may fail. You can configure the request-retry feature on Citrix ADC to forward the request to the next available server, instead of sending a reset to the client. Hence, the client saves the round trip time incurred when Citrix ADC initiates the same request to the next available service. For more information on the request retry feature, see the [Citrix ADC documentation](https://docs.citrix.com/en-us/citrix-adc/current-release/system/request-retry/request_retry_if_back-end_server_resets_tcp_connection.html).

Now, you can configure request retry on Citrix ADC with the Citrix ingress controller.

Custom Resource Definitions (CRDs) are the primary way of configuring policies in cloud native deployments. Using the AppQoE CRD provided by Citrix, you can configure request-retry policies on Citrix ADC with the Citrix ingress controller. The AppQoE CRD enables communication between the Citrix ingress controller and Citrix ADC for enforcing AppQoE policies.
## AppQoE CRD definition
The AppQoE CRD is available in the Citrix ingress controller GitHub repo at: [appqoe-crd.yaml](https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/crd/appqoe/appqoe-crd.yaml). The AppQoE CRD provides attributes for the various options that are required to define the AppQoE policy on Citrix ADC.
The following are the attributes provided in the AppQoE CRD:
| Attribute | Description |
| --------- | ----------- |
| `servicenames` | Specifies the list of Kubernetes services to which you want to apply the AppQoE policies. |
| `on-reset` | Specifies whether to retry when the back-end server resets the connection. |
| `on-timeout` | Specifies the timeout in milliseconds after which the request is retried. |
| `number-of-retries` | Specifies the number of retries. |
| `appqoe-criteria` | Specifies the expression for evaluating traffic. |
| `direction` | Specifies the bind point for binding the AppQoE policy. |
## Deploy the AppQoE CRD
Perform the following to deploy the AppQoE CRD:
1. Download the [AppQoE CRD](https://github.com/citrix/citrix-k8s-ingress-controller/blob/master/crd/appqoe/appqoe-crd.yaml).
2. Deploy the AppQoE CRD using the following command:

    ```
    kubectl create -f appqoe-crd.yaml
    ```

### How to write an AppQoE policy configuration

After you have deployed the AppQoE CRD provided by Citrix in the Kubernetes cluster, you can define the AppQoE policy configuration in a `.yaml` file. In the `.yaml` file, set the `kind` field to `appqoepolicy` and, in the `spec` section, add the AppQoE CRD attributes required for your policy configuration.

The following YAML file applies the AppQoE policy to the services listed in the `servicenames` field. You must configure the AppQoE action to retry on timeout and define the number of retry attempts.
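
The YAML itself is not reproduced here, so the sketch below is built only from the attributes listed in the table above. The API group and version (`citrix.com/v1`), the policy and service names, and the exact nesting under `spec` are assumptions; verify them against the deployed `appqoe-crd.yaml` before use.

```
apiVersion: citrix.com/v1   # assumed API group/version; check appqoe-crd.yaml
kind: appqoepolicy
metadata:
  name: retry-on-timeout    # hypothetical policy name
spec:
  servicenames:
    - frontend-svc          # hypothetical Kubernetes service
  on-timeout: 30            # wait 30 ms before retrying the request
  number-of-retries: 3      # retry at most 3 times
```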
You can configure the following parameters under `NS_ANALYTICS_CONFIG` using a ConfigMap:

-`samplingrate`: Specifies the OpenTracing sampling rate in percentage. The default value is 100.

-`endpoint`: Specifies the IP address or DNS address of the analytics server.

  -`server`: Set this value as the IP address or DNS address of the server.

  -`service`: Specifies the IP address or service name of the Citrix ADC observability exporter service, depending on whether the service is running on a virtual machine or as a Kubernetes service. If the Citrix ADC observability exporter instance is running on a virtual machine, this parameter specifies the IP address. If the instance is running as a service in the Kubernetes cluster, this parameter specifies the instance as namespace/service name.

-`timeseries`: Enables exporting time series data from Citrix ADC. You can specify the following attributes for the time series configuration.

  -`port`: Specifies the port number of the time series endpoint of the analytics server. The default value is 5563.

  -`metrics`: Enables exporting metrics from Citrix ADC.

    -`enable`: Set this value to `true` to enable sending metrics. The default value is `false`.

-`transactions`: Enables exporting transactions from Citrix ADC.

  -`enable`: Set this value to `true` to enable sending transactions. The default value is `false`.

  -`port`: Specifies the port number of the transactional endpoint of the analytics server. The default value is 5557.

The following configurations cannot be changed while the Citrix ingress controller is running, and you need to restart the Citrix ingress controller to apply these settings.

You can change other ConfigMap settings at runtime while the Citrix ingress controller is running.

**Note:**
When you specify the value for a service as `namespace/service name`, the Citrix ingress controller derives the endpoints associated with that service and dynamically binds them to the transactional service group in the Citrix tier-1 ADC. If you specify the value for a service as an IP address, the IP address is directly bound to the transactional service group. The Citrix ingress controller is enhanced to create default web or TCP based analytics profiles and bind them to the logging virtual server. The default analytics profiles are bound to all load balancing virtual servers of applications if the Citrix ADC observability exporter is enabled in the cluster. If you want to change the analytics profile, use the `analyticsprofile` annotation.

The attributes of `NS_ANALYTICS_CONFIG` should follow a well-defined schema. If any value provided does not conform to the schema, the entire configuration is rejected. For reference, see the schema file [ns_analytics_config_schema.yaml](#Schema-for-NSANALYTICSCONFIG).

## Creating a ConfigMap for analytics configuration

```
data:
  samplingrate: 100
  endpoint:
    server: '1.1.1.1'
    service: 'default/coe-kafka'
  timeseries:
    port: 5563
    metrics:
      # ...
  transactions:
    enable: 'true'
    port: 5557
```

For more information on how to configure ConfigMap support on the Citrix ingress controller, [see configuring ConfigMap support for the Citrix ingress controller](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/config-map/#configuring-configmap-support-for-the-citrix-ingress-controller).
# Configuring consistent hashing algorithm using Citrix ingress controller
Load balancing algorithms define the criteria that the Citrix ADC appliance uses to select the service to which to redirect each client request. Different load balancing algorithms use different criteria, and consistent hashing is one of the load balancing algorithms supported by Citrix ADC.

Consistent hashing algorithms are often used to load balance when the back-end is a caching server to achieve stateless persistency.
Consistent hashing can ensure that when a cache server is removed, only the requests cached on that specific server are rehashed and the rest of the requests are not affected. For more information on the consistent hashing algorithm, see the [Citrix ADC documentation](https://docs.citrix.com/en-us/citrix-adc/current-release/load-balancing/load-balancing-customizing-algorithms/hashing-methods.html#consistent-hashing-algorithms).

You can now configure the consistent hashing algorithm on Citrix ADC using the Citrix ingress controller. This configuration is enabled within the Citrix ingress controller using a ConfigMap.

## Configure hashing algorithm
A new parameter, `NS_LB_HASH_ALGO`, is introduced in the Citrix ingress controller ConfigMap to support hashing algorithms.
The following environment variables are supported under the `NS_LB_HASH_ALGO` parameter of the ConfigMap for the consistent hashing algorithm:

-`hashFingers`: Specifies the number of fingers to be used for the hashing algorithm. Possible values are from 1 to 1024. Increasing the number of fingers provides better distribution of traffic at the expense of extra memory.
-`hashAlgorithm`: Specifies the hashing algorithm. Supported algorithms are `default`, `jarh`, and `prac`.

The following example shows a sample ConfigMap for configuring the consistent hashing algorithm using the Citrix ingress controller. In this example, the hashing algorithm is Prime Re-Shuffled Assisted CARP (PRAC) and the number of fingers to be used in PRAC is set to 50.
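
As a hedged sketch of such a ConfigMap: the name and namespace below are placeholders, and rendering `NS_LB_HASH_ALGO` as a block scalar holding the two parameters listed above is an assumption; compare against your deployed Citrix ingress controller ConfigMap before use.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: cic-configmap       # placeholder name
  namespace: default        # placeholder namespace
data:
  NS_LB_HASH_ALGO: |
    hashAlgorithm: prac     # Prime Re-Shuffled Assisted CARP
    hashFingers: 50         # number of fingers used by the algorithm
```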