Currently, you can monitor performance at the application level using Citrix ADC.
Citrix Observability Exporter supports collecting transactions and streaming them to endpoints. Currently, Citrix Observability Exporter supports Elasticsearch and Kafka as transaction endpoints.
### Time series data support
Citrix Observability Exporter supports collecting time series data (metrics) from Citrix ADC instances and exporting them to Prometheus. Prometheus is a monitoring solution for storing time series data such as metrics. You can then add Prometheus as a data source in Grafana and graphically view and analyze the Citrix ADC metrics.
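For example, Prometheus can be registered in Grafana through a data source provisioning file. The following is a minimal sketch; the Prometheus service URL is an assumption for illustration:

```yml
# Grafana data source provisioning sketch (for example, placed under
# /etc/grafana/provisioning/datasources/). The URL below is an assumed
# in-cluster Prometheus service address, not a value from this guide.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-service:9090
    isDefault: true
```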
## How does Citrix Observability Exporter work
### Distributed tracing with Zipkin using Citrix Observability Exporter
When Elasticsearch is specified as the transaction endpoint, Citrix Observability Exporter converts the transaction data to the JSON format and sends it to the Elasticsearch server.
When Kafka is specified as the transaction endpoint, Citrix Observability Exporter converts the transaction data to the [Avro](http://avro.apache.org/docs/current/) format and streams it to Kafka.
### Citrix Observability Exporter with Prometheus as the endpoint for time series data
When Prometheus is specified as the format for time series data, Citrix Observability Exporter collects various metrics from Citrix ADC instances, converts them to the Prometheus format, and exports them to the Prometheus server. These metrics include counters of the virtual servers and services to which the analytics profile is bound, and global counters for HTTP, TCP, and so on.
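Metrics in the Prometheus format are exposed as plain-text name, label, and value lines. The following fragment only illustrates that exposition format; the metric and label names are hypothetical, not the exporter's actual names:

```
# Illustration of the Prometheus text exposition format.
# Metric and label names below are hypothetical examples.
http_requests_total{lb_vserver="vserver1"} 1027
tcp_current_client_connections{lb_vserver="vserver1"} 3
```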
## Deployment
You can deploy Citrix Observability Exporter using Kubernetes YAML or Helm charts. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md).
## Questions
For questions and support, the following channels are available:
The custom header logging feature enables logging of all HTTP headers of a transaction. As part of this feature, a new option called `allHttpHeaders` is introduced under the Web Insight analytics profile. When this option is set, Citrix ADC uploads all the request and response headers under `httpAllReqHdrs` and `httpAllResHdrs` respectively. This option is disabled by default. Custom header logging is currently supported only for the Kafka endpoint.
With this feature enabled, Citrix Observability Exporter transaction records contain two additional fields:
- `httpAllReqHdrs`: This field is a string containing all the request header lines separated by `\r\n`.
- `httpAllResHdrs`: This field is a string containing all the response header lines separated by `\r\n`.
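A consumer reading transaction records from the Kafka endpoint can split these fields back into individual headers. The following is a minimal sketch; the record layout and sample values are illustrative, while the two field names and the `\r\n` separator come from the description above:

```python
# Sketch: parse the custom header logging fields of a transaction record.
# The record dict below is illustrative; only the field names
# (httpAllReqHdrs, httpAllResHdrs) and the \r\n separator are
# taken from the feature description.

def parse_header_field(field: str) -> dict:
    """Split a \r\n-separated header string into a name -> value dict."""
    headers = {}
    for line in field.split("\r\n"):
        if ":" not in line:
            continue  # skip blank or malformed lines
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers

record = {
    "httpAllReqHdrs": "Host: example.com\r\nUser-Agent: curl/7.68.0",
    "httpAllResHdrs": "Content-Type: text/html\r\nContent-Length: 42",
}

request_headers = parse_header_field(record["httpAllReqHdrs"])
print(request_headers["Host"])  # → example.com
```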
## Citrix ADC configuration to enable custom header logging
### Using the Citrix ADC command line
Use the following command from the Citrix ADC command line.
```
set analytics profile ns_analytics_default_http_profile -allHttpHeaders ENABLED
```
### Using the Citrix ingress controller
To enable `allHttpHeaders` using the Citrix ingress controller, set the analytics profile as a smart annotation on the Ingress resource that is applied to the Citrix ADC.
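For example, an analytics profile can be set through the `ingress.citrix.com/analyticsprofile` smart annotation. The following is a sketch only; the exact parameter key for `allHttpHeaders` and the backend service name are assumptions, so check the smart annotation reference for the supported keys:

```yml
# Illustrative Ingress with an analytics-profile smart annotation.
# The "allhttpheaders" key and the backend service name are assumed
# values for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    ingress.citrix.com/analyticsprofile: '{"webinsight": {"allhttpheaders": "ENABLED"}}'
spec:
  rules:
    - host: www.samplewebserver.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-service
                port:
                  number: 80
```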
The following diagram shows a deployment of Citrix Observability Exporter with a supported endpoint.

Ensure that you have the following Docker images installed in the Kubernetes cluster:
- [Zipkin](https://zipkin.io/)
- (Optional) [Elasticsearch](https://www.elastic.co/products/elasticsearch) as back-end for Zipkin. Elasticsearch is required if you want to visualize your tracing data in [Kibana](https://www.elastic.co/products/kibana). You can also use Elasticsearch as an endpoint for transactions.
- [Kibana](https://www.elastic.co/products/kibana) is required to visualize your tracing data.
**Note:**
You can use [zipkin.yaml](../examples/zipkin.yaml), [elasticsearch.yaml](../examples/elasticsearch.yaml), and [kibana.yaml](../examples/kibana.yaml) for installing Zipkin, Elasticsearch, and Kibana.
- If Elasticsearch is used as the endpoint for transactions, ensure that you have Elasticsearch installed and configured.
- If Kafka is used as the endpoint for transactions, ensure that the Kafka server is installed and configured.
- If Prometheus is used as the endpoint for time series data, ensure that Prometheus is installed and configured.
## Deploy Citrix Observability Exporter using YAML
To deploy Citrix Observability Exporter using Kubernetes YAML, perform the following steps:
2. If you use Citrix ADC VPX or MPX in the deployment, create the necessary login credentials.
3. Create a Kubernetes ConfigMap, Deployment, and Service with Log stream configuration for the required endpoint:
- For Citrix Observability Exporter with Zipkin tracing support:
You can specify the tracing server in the ConfigMap using environment variables.
Enable the Kafka endpoint by setting the value of `EnableKafka` to `yes`. Also, set the Kafka broker details in `KafkaBroker` and the topic details in `KafkaTopic`. You must also specify the Kafka cluster host IP mapping under `hostAliases` in the [Kubernetes Pod specification](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases).
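Put together, these Kafka settings might be laid out as follows. This is a sketch, not the shipped ConfigMap format: the key names follow the variables above, while the resource names, broker address, topic, and host name are illustrative assumptions:

```yml
# Illustrative ConfigMap carrying the Kafka settings named above.
# Key names follow the variables in this step; the broker address,
# topic, and resource names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coe-kafka-config
data:
  EnableKafka: "yes"
  KafkaBroker: "10.0.0.10:9092"
  KafkaTopic: "HTTP"
---
# Fragment of the Pod specification: hostAliases mapping for the
# Kafka cluster host, as required above. The IP and host name are
# placeholders.
spec:
  hostAliases:
    - ip: "10.0.0.10"
      hostnames:
        - "kafka-broker.local"
```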
- For Citrix Observability Exporter with Prometheus as the endpoint for time series data:
You can enable Prometheus support by specifying the following annotations in the YAML files used to deploy Zipkin, Kafka, or Elasticsearch, and by exposing the time series port.
```yml
prometheus.io/scrape: "true"
prometheus.io/port: "5563"
```
The following command deploys Citrix Observability Exporter with both Elasticsearch and Prometheus as endpoints, using the [coe-es-prometheus.yaml](coe-es-prometheus.yaml) file. In this YAML file, the annotations for Prometheus support are enabled and port 5563, which is used for the time series data, is exposed.
```
kubectl create -f coe-es-prometheus.yaml
```
You must configure Prometheus to scrape the data from the Citrix Observability Exporter time series port. No specific configuration is required on Citrix Observability Exporter to enable time series data processing: if time series data is pushed to Citrix Observability Exporter, it is processed automatically, and the time series port is enabled by default.
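A scrape job for the time series port might look like the following fragment of `prometheus.yml`. Port 5563 is the time series port mentioned above; the target host name and job name are placeholders:

```yml
# prometheus.yml fragment: scrape the Citrix Observability Exporter
# time series port (5563). The target host name is a placeholder for
# the exporter's service address.
scrape_configs:
  - job_name: "citrix-observability-exporter"
    scrape_interval: 30s
    static_configs:
      - targets: ["coe-service:5563"]
```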
**Note:**
Once you deploy a Citrix Observability Exporter instance with a specific endpoint, you cannot modify it. To change the endpoint, you must bring down the Citrix Observability Exporter instance and deploy it again with the new endpoint.
In this procedure, a Citrix ADC CPX is deployed with the Citrix ingress controller as a sidecar.
Depending on the endpoint you are using, you can choose the YAML file for deploying Citrix ADC CPX. These YAML files include the configuration required for Citrix Observability Exporter.
Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Observability Exporter support enabled.
1. Download the YAML file for deploying Citrix ADC CPX according to the endpoint.
- For Elasticsearch as the transaction endpoint: [cpx-ingress-es.yaml](../examples/elasticsearch/cpx-ingress-es.yaml)
- For Kafka as the transaction endpoint: [cpx-ingress-kafka.yaml](../examples/kafka/cpx-ingress-kafka.yaml)
2. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file.
3. Deploy Citrix ADC CPX with the Citrix ingress controller as a sidecar using the following command.
```
kubectl create -f cpx-ingress-kafka.yaml
```
**Note:**
Using [smart annotations](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/), you can define the specific parameters to be imported by specifying them in the YAML file for deploying Citrix ADC CPX. You can also define the parameters to import using smart annotations for services, by specifying them in the YAML file for deploying Citrix Observability Exporter. However, you can use service annotations only when the service type is `LoadBalancer`.
### Deploy the Citrix ingress controller with Citrix Observability Exporter support for Citrix ADC MPX or VPX
In this deployment, the Citrix ingress controller is deployed as a standalone pod in the Kubernetes cluster. It controls the Citrix ADC MPX or VPX appliance deployed outside the cluster. The Citrix Observability Exporter support is enabled in the Citrix ingress controller configuration.
You need to complete the [prerequisites](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/deploy/deploy-cic-yaml/#prerequisites) for deploying the Citrix ingress controller as a standalone pod.
1. Download the [vpx-ingress.yaml](../examples/vpx-ingress.yaml) file.
2. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file.
```
kubectl create -f cic-configmap.yaml
```
3. Deploy the [vpx-ingress.yaml](../examples/vpx-ingress.yaml) file using the following command.
```
kubectl create -f vpx-ingress.yaml -n tracing
```
This sample web application is added as a service in the Ingress.
1. Get `NodePort` information for `cpx-service` using the following command.
```
kubectl describe service cpx-service
```
2. Access `http://www.samplewebserver.com:<NodePort>` from a web browser to open the sample web application.
3. Send multiple requests to the application as shown in the following sample image.
**Note:**
You can generate different types of response status codes for different HTTP methods (for example, GET, POST, DELETE, and so on).
4. All transactions are uploaded to the Elasticsearch server and you can view them using the Kibana dashboard.
You can use the following sample Kibana dashboard to visualize transactions.
You can import the Kibana dashboard template from [dashboards](../dashboards/KibanaAppTrans.ndjson).
Before importing the Kibana dashboard, you must define an index pattern named `*http*` using the information in the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html).
### Sample Grafana dashboard for Prometheus
The following is a sample Grafana dashboard that visualizes time series data from Prometheus. In this example, Kafka is used as the transaction endpoint.