Release 1.2.0
What's New
Expose services of type LoadBalancer
You can create a service of type LoadBalancer and expose it externally using the ingress Citrix ADC. You can manually assign an IP address to the service using the service.citrix.com/frontend-ip annotation, or you can have an IP address assigned to the service automatically by the IPAM controller provided by Citrix. The Citrix ingress controller configures the assigned IP address as a virtual IP (VIP) on the ingress Citrix ADC, and the service is exposed through that IP address. For more information, see Expose services of type LoadBalancer.
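For example, a minimal service manifest that uses the service.citrix.com/frontend-ip annotation to assign the VIP manually might look as follows; the service name, labels, and IP address are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-service             # hypothetical service name
  annotations:
    # Manually assigns this IP address as the VIP on the ingress Citrix ADC.
    # Omit this annotation to let the Citrix IPAM controller assign one instead.
    service.citrix.com/frontend-ip: "192.0.2.10"   # placeholder IP address
spec:
  type: LoadBalancer
  selector:
    app: apache                    # hypothetical pod label
  ports:
  - port: 80
    targetPort: 80
```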
RedHat OpenShift router sharding support
OpenShift router sharding allows distributing a set of routes among multiple OpenShift routers. By default, an OpenShift router selects all routes from all namespaces. In router sharding, labels are added to routes or namespaces, and label selectors are added to routers to filter routes. Each router shard selects only the routes with specific labels that match its label selection parameters.
Citrix ADC supports OpenShift router sharding when you deploy it as an OpenShift router. For more information, see Deploy the Citrix ingress controller with OpenShift router sharding support.
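As an illustration, a route can carry a label so that only the router shard whose label selector matches it picks it up; the route name, host, and label values below are hypothetical:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hotdrink-route            # hypothetical route name
  labels:
    # A router shard configured with a matching route label selector
    # selects only routes carrying this label.
    shard: citrix                 # hypothetical label key/value
spec:
  host: hotdrink.example.com      # placeholder host name
  to:
    kind: Service
    name: hotdrink-service        # hypothetical backend service
```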
Establish network connectivity between Kubernetes nodes and Ingress Citrix ADC using Citrix node controller
In Kubernetes environments, when you expose services for external access through the Ingress device, you must configure the network between the Kubernetes nodes and the Ingress device appropriately to route traffic into the cluster. Configuring the network is challenging because the pods use private IP addresses assigned by the CNI framework. Without proper network configuration, the Ingress device cannot reach these private IP addresses, and manually configuring the network to ensure such reachability is cumbersome in Kubernetes environments.
Citrix provides a microservice called the Citrix k8s node controller that you can use to create the network between the cluster and the Ingress device. For more information, see Citrix node controller and Establish network between Kubernetes nodes and Ingress Citrix ADC using Citrix node controller.
Ability to match the ingress path
The Citrix ingress controller now provides the ingress.citrix.com/path-match-method annotation, which you can use to specify whether the Citrix ingress controller treats the path string in the ingress definition as a prefix expression or as an exact match. For more information, see Annotations.
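Assuming the annotation accepts values such as "prefix" and "exact" (see the Annotations documentation for the authoritative list), an ingress definition might use it as follows; the ingress name, host, and backend service are placeholders:

```yaml
apiVersion: extensions/v1beta1          # ingress API group depends on your Kubernetes release
kind: Ingress
metadata:
  name: sample-ingress                  # hypothetical ingress name
  annotations:
    # "exact" is an assumed value; "prefix" is the other documented behavior.
    ingress.citrix.com/path-match-method: "exact"
spec:
  rules:
  - host: app.example.com               # placeholder host name
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-service      # hypothetical backend service
          servicePort: 80
```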
Ability to customize the prefix for Citrix ADC entities
By default, the Citrix ingress controller adds "k8s" as a prefix to the Citrix ADC entities, such as the content switching (CS) virtual server, the load balancing (LB) virtual server, and so on. You can now customize the prefix using the NS_APPS_NAME_PREFIX environment variable in the Citrix ingress controller deployment YAML file. The prefix can contain alphanumeric characters and must not exceed 8 characters.
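For example, the environment variable can be set in the container spec of the Citrix ingress controller deployment; the container name, image reference, and prefix value below are placeholders:

```yaml
# Fragment of a Citrix ingress controller Deployment spec (names are placeholders)
spec:
  containers:
  - name: cic                           # hypothetical container name
    image: citrix-k8s-ingress-controller:1.2.0   # assumed image reference
    env:
    # Entities created on the Citrix ADC are named with this prefix instead of "k8s".
    # Alphanumeric characters only, 8 characters maximum.
    - name: "NS_APPS_NAME_PREFIX"
      value: "acme01"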
Fixed issues
- Preconfigured certificates with a "." in the certificate name (for example, hotdrink.cert) are not supported.
- The Citrix ingress controller fails to configure a Citrix ADC deployed in standalone mode after the Citrix ADC VPX is rebooted.
Known issues
- Red Hat OpenShift support:
  - Automatic route configuration using the Citrix ingress controller (feature-node-watch) is not supported in OpenShift.
  - When you frequently modify the OpenShift route configuration, the Citrix ingress controller might crash with the following SSL exception: SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC.
  - After modifying the OpenShift route configuration, applying those changes using the oc apply command does not work.
    Workaround: Delete the existing OpenShift route and recreate the route.
- Rewrite policy CRD:
  - When you apply the rewrite policy CRD deployment file on the Kubernetes cluster, the Citrix ingress controller requires 12 seconds to process the CRD deployment file.