README.md
Logstream is a Citrix-owned protocol that is used as one of the transport modes to efficiently transfer transactions from Citrix ADC instances. Citrix Observability Exporter collects tracing data as Logstream records from multiple Citrix ADCs and aggregates them. Citrix Observability Exporter converts the data into a format understood by the tracer and then uploads it to the tracer (Zipkin in this case). For Zipkin, the data is converted into JSON with Zipkin-specific key values.
You can view the traces using the Zipkin user interface. However, you can also enhance the trace analysis by using [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [Kibana](https://www.elastic.co/products/kibana) with Zipkin. Elasticsearch provides long-term retention of the trace data and Kibana allows you to get much deeper insight into the data.
### Citrix Observability Exporter with Elasticsearch as the transaction endpoint
When Elasticsearch is specified as the transaction endpoint, Citrix Observability Exporter converts the data to JSON format. On the Elasticsearch server, Citrix Observability Exporter creates Elasticsearch indexes for each ADC on an hourly basis. These indexes are based on the date, hour, UUID of the ADC, and the type of HTTP data (http_event or http_error). Citrix Observability Exporter then uploads the data in JSON format under the Elasticsearch indexes for each ADC. All regular transactions are placed into the http_event index and any anomalies are placed into the http_error index.
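As a concrete illustration, an hourly index name could be composed from these fields as follows. This is a sketch only: the `adc_coe` prefix appears later in this guide as the default index prefix, but the exact field ordering and separators are assumptions, and the UUID, date, and hour values are placeholders.

```shell
# Sketch only: compose a plausible hourly index name from the fields above.
# The adc_coe prefix is the documented default; the field order and separators
# here are assumptions, and all values are placeholders.
ADC_UUID="1a2b3c4d"        # UUID of the ADC (placeholder)
DAY="2020-01-15"           # date of the transactions
HOUR="09"                  # hour of the transactions
DATATYPE="http_event"      # http_event for regular transactions, http_error for anomalies
INDEX="adc_coe_${DATATYPE}_${ADC_UUID}_${DAY}_${HOUR}"
echo "${INDEX}"
```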
Effective with Citrix Observability Exporter release 1.2.001, when Citrix Observability Exporter sends data to the Elasticsearch server, some of the fields are available in string format. Index configuration options have also been added for Elasticsearch. For more information on the fields that are in string format and on how to configure the Elasticsearch index, see [Elasticsearch support enhancements](./es-enhancements/README.md).
### Citrix Observability Exporter with Kafka as the transaction endpoint
When Kafka is specified as the transaction endpoint, Citrix Observability Exporter converts the transaction data to the [Avro](http://avro.apache.org/docs/current/Avro) format and streams it to Kafka.
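For reference, an Avro record for such a transaction might be described by a schema along these lines. The record and field names below are purely illustrative assumptions, not the exporter's actual schema:

```json
{
  "type": "record",
  "name": "HttpTransaction",
  "fields": [
    {"name": "adc_uuid",    "type": "string"},
    {"name": "client_ip",   "type": "string"},
    {"name": "http_method", "type": "string"},
    {"name": "http_status", "type": "int"},
    {"name": "latency_ms",  "type": "long"}
  ]
}
```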
### Citrix Observability Exporter with Prometheus as the endpoint for time series data
When Prometheus is specified as the format for time series data, Citrix Observability Exporter collects various metrics from Citrix ADCs and converts them to the appropriate Prometheus format and exports them to the Prometheus server. These metrics include counters of the virtual servers, services to which the analytics profile is bound and global counters of HTTP, TCP, and so on.
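Metrics in the Prometheus format are plain-text lines of the shape `metric_name{labels} value`. The sample below is a sketch with an invented metric and label name (not taken from the exporter) to show the shape of what a Prometheus server scrapes:

```shell
# Hypothetical sample in the Prometheus exposition format; the metric and
# label names are invented for illustration.
SAMPLE='netscaler_http_tot_requests{lb_vserver="vs1"} 1024'
METRIC=$(printf '%s' "$SAMPLE" | cut -d'{' -f1)     # name before the label block
VALUE=$(printf '%s' "$SAMPLE" | awk '{print $NF}')  # value after the labels
echo "$METRIC $VALUE"
```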
## Deployment
You can deploy Citrix Observability Exporter using Kubernetes YAML or Helm charts. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md).
deployment/README.md
# Deploy Citrix Observability Exporter
This topic provides information on how to deploy Citrix Observability Exporter using Kubernetes YAML files.
<!---
You can deploy Citrix Observability Exporter using Kubernetes YAML files or using Helm charts.
-->
## Prerequisites
- Ensure that you have a Kubernetes cluster with the `kube-dns` or `CoreDNS` add-on enabled.
- If Zipkin is used as the distributed tracer, ensure that you have the following Docker images installed in the Kubernetes cluster:
  - [Zipkin](https://zipkin.io/)
  - (Optional) [Elasticsearch](https://www.elastic.co/products/elasticsearch) as the back end for Zipkin. Elasticsearch is required if you want to visualize your tracing data in [Kibana](https://www.elastic.co/products/kibana). You can also use Elasticsearch as an endpoint for transactions.
  - [Kibana](https://www.elastic.co/products/kibana) is required to visualize your tracing data.
**Note:**
You can use [zipkin.yaml](../examples/zipkin.yaml), [elasticsearch.yaml](../examples/elasticsearch.yaml), and [kibana.yaml](../examples/kibana.yaml) for installing Zipkin, Elasticsearch, and Kibana.
- If Elasticsearch is used as the endpoint for transactions, ensure that you have Elasticsearch installed and configured.
- If Kafka is used as the endpoint for transactions, ensure that the Kafka server is installed and configured.
- If Prometheus is used as the endpoint for time series data, ensure that Prometheus is installed and configured.
## Deploy Citrix Observability Exporter using YAML
To deploy Citrix Observability Exporter using Kubernetes YAML, perform the following:
- For Citrix Observability Exporter with Zipkin tracing support:
Deploy Citrix Observability Exporter using the [coe-zipkin.yaml](coe-zipkin.yaml) file.
    kubectl create -f coe-zipkin.yaml
You can specify the Zipkin server information in ConfigMap using environment variables in two ways:
- Specify the IP address or DNS name of the tracing server (Zipkin):
    ServerUrl=<ip-address> or <dns-name>
If you specify only the IP address, Citrix Observability Exporter considers the port as the default Zipkin port (9411) and takes the default upload path (`/api/v1/spans`).
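The defaulting behavior described above can be sketched as follows. This is an assumption about how a bare IP address is expanded, shown with a placeholder address; it is not the exporter's actual code:

```shell
# Sketch: expand a bare IP address with the default Zipkin port (9411)
# and upload path (/api/v1/spans). The address is a placeholder.
SERVER="10.0.0.5"
case "$SERVER" in
  *:*) ZIPKIN_URL="$SERVER" ;;                    # port and path given explicitly
  *)   ZIPKIN_URL="$SERVER:9411/api/v1/spans" ;;  # apply the defaults
esac
echo "$ZIPKIN_URL"
```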
- Explicitly provide the tracer IP address or DNS name, port, and the upload path information:
    ServerUrl=<ip-address>:<port>/api/v1/spans
- For Citrix Observability Exporter with Elasticsearch as the endpoint:
Deploy Citrix Observability Exporter using the [coe-es.yaml](coe-es.yaml) file.
    kubectl create -f coe-es.yaml
Set the Elasticsearch server details in the `ServerUrl` environment variable, using either the IP address or the DNS name, along with the port information.
- For Citrix Observability Exporter with Kafka as the endpoint:
Deploy Citrix Observability Exporter using the [coe-kafka.yaml](coe-kafka.yaml) file.
    kubectl create -f coe-kafka.yaml
Enable the Kafka endpoint by setting the Kafka broker details in the `ServerUrl` environment variable, using either the IP address or the DNS name, along with the port information. Then specify the Kafka topic details in `KafkaTopic`. You must also specify the Kafka cluster host IP mapping under HostAliases in the [Kubernetes Pod specification](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases).
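Putting these settings together, the relevant parts of the Pod specification might look like the following sketch. The broker address, hostname, and topic name are placeholders, and the container layout is an assumption; the [coe-kafka.yaml](coe-kafka.yaml) file is the authoritative reference:

```yaml
# Sketch only: placeholders throughout; see coe-kafka.yaml for the real file.
spec:
  hostAliases:                        # map the Kafka broker hostname to its IP
  - ip: "10.0.0.20"
    hostnames:
    - "kafka-broker-0"
  containers:
  - name: coe
    env:
    - name: ServerUrl                 # Kafka broker, IP address or DNS name with port
      value: "kafka-broker-0:9092"
    - name: KafkaTopic                # topic that receives the Avro records
      value: "adc-transactions"
```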
- For Citrix Observability Exporter with Prometheus as the endpoint for time series data:
You can enable Prometheus support by specifying the following annotations in the YAML files used to deploy Zipkin, Kafka, or Elasticsearch and exposing the time series port. You must also specify the time series parameter, with metrics enabled (`true`) and the mode set to `prometheus`, in the respective `cic-configmap.yaml` file for the endpoint.
    prometheus.io/scrape: "true"
    prometheus.io/port: "5563"
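The time series settings mentioned above might look like the following fragment of `cic-configmap.yaml`. The key names here are an assumption based on the description; treat the `cic-configmap.yaml` shipped in the respective endpoint example directory as authoritative:

```yaml
# Sketch only: assumed key names; the shipped cic-configmap.yaml is authoritative.
timeseries:
  metrics:
    enable: 'true'
    mode: 'prometheus'
```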
The following command deploys Citrix Observability Exporter with both Elasticsearch and Prometheus as endpoints, using the [coe-es-prometheus.yaml](coe-es-prometheus.yaml) file. In this YAML file, annotations for Prometheus support are enabled and port 5563, which is used for the time series data, is exposed.
    kubectl create -f coe-es-prometheus.yaml
Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Observability Exporter:
- For tracing support with Zipkin: [cpx-ingress-tracing.yaml](../examples/tracing/cpx-ingress-tracing.yaml)
- For Elasticsearch as the transaction endpoint: [cpx-ingress-es.yaml](../examples/elasticsearch/cpx-ingress-es.yaml)
- For Kafka as the transaction endpoint: [cpx-ingress-kafka.yaml](../examples/kafka/cpx-ingress-kafka.yaml)
- For Prometheus as the time series data endpoint: [cpx-ingress-prometheus.yaml](../examples/prometheus/cpx-ingress-prometheus.yaml)
2. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file or the one available in the respective endpoint example directory.
    kubectl create -f cic-configmap.yaml
3. Deploy Citrix ADC CPX with the Citrix ingress controller as a sidecar using the following command.
- For tracing support with Zipkin:
    kubectl create -f cpx-ingress-tracing.yaml
- For Elasticsearch as the transaction endpoint:
    kubectl create -f cpx-ingress-es.yaml
- For Kafka as the transaction endpoint:
    kubectl create -f cpx-ingress-kafka.yaml
- For Prometheus as the time series data endpoint:
    kubectl create -f cpx-ingress-prometheus.yaml
**Note:**
Using [smart annotations](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/), you can define the specific parameters that you must import by specifying them in the YAML file for deploying Citrix ADC CPX.
This sample web application is added as a service in the Ingress.
    kubectl create -f webserver-es.yaml
1. Create a host entry for the web application in the Citrix ADC CPX hosts file and map it to the IP address of the Kubernetes master node for DNS resolution.
    www.samplewebserver.com ip-address
**Note:**
You can import the Kibana dashboard template from [dashboards](../dashboards/KibanaAppTrans.ndjson).
This Kibana dashboard uses the default index prefix `adc_coe`. Before importing the dashboard, you must define an index pattern named `adc_coe*` using the information in the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html).
0 commit comments