
Commit 5f51ef8

Version 1.2.001
1 parent 466b981 commit 5f51ef8

45 files changed

Lines changed: 2376 additions & 566 deletions


README.md

Lines changed: 6 additions & 10 deletions
@@ -37,30 +37,26 @@ Citrix Observability Exporter supports collecting time series data (metrics) fro

 Logstream is a Citrix-owned protocol that is used as one of the transport modes to efficiently transfer transactions from Citrix ADC instances. Citrix Observability Exporter collects tracing data as Logstream records from multiple Citrix ADCs and aggregates them. Citrix Observability Exporter converts the data into a format understood by the tracer and then uploads it to the tracer (Zipkin in this case). For Zipkin, the data is converted into JSON with Zipkin-specific key values.

-You can view the traces using Zipkin user interface. However, you can also enhance the trace analysis by using [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [Kibana](https://www.elastic.co/products/kibana) with Zipkin. Elasticsearch provides long-term retention of the trace data and Kibana allows you to get much deeper insight into the data.
+You can view the traces using the Zipkin user interface. However, you can also enhance the trace analysis by using [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [Kibana](https://www.elastic.co/products/kibana) with Zipkin. Elasticsearch provides long-term retention of the trace data and Kibana allows you to get much deeper insight into the data.

 ### Citrix Observability Exporter with Elasticsearch as the transaction endpoint

+
 When Elasticsearch is specified as the transaction endpoint, Citrix Observability Exporter converts the data to JSON format. On the Elasticsearch server, Citrix Observability Exporter creates Elasticsearch indexes for each ADC on an hourly basis. These indexes are based on the date, hour, UUID of the ADC, and the type of HTTP data (http_event or http_error). Then, Citrix Observability Exporter uploads the data in JSON format under the Elasticsearch indexes for each ADC. All regular transactions are placed into the http_event index and any anomalies are placed into the http_error index.

+Effective with Citrix Observability Exporter release 1.2.001, when Citrix Observability Exporter sends data to the Elasticsearch server, some of the fields are available in string format. Index configuration options are also added for Elasticsearch. For more information on the fields that are in string format and on how to configure the Elasticsearch index, see [Elasticsearch support enhancements](./es-enhancements/README.md).
+
 ### Citrix Observability Exporter with Kafka as the transaction endpoint

 When Kafka is specified as the transaction endpoint, Citrix Observability Exporter converts the transaction data to [Avro](http://avro.apache.org/docs/current/Avro) format and streams it to Kafka.

 ### Citrix Observability Exporter with Prometheus as the endpoint for time series data

-When Prometheus is specified as the format for time series data, Citrix Observability Exporter collects various metrics from Citrix ADCs and converts them to appropriate Prometheus format and exports them to the Prometheus server. These metrics include counters of the virtual servers, services to which the analytics profile is bound and global counters of HTTP, TCP and so on.
+When Prometheus is specified as the format for time series data, Citrix Observability Exporter collects various metrics from Citrix ADCs, converts them to the appropriate Prometheus format, and exports them to the Prometheus server. These metrics include counters of the virtual servers, services to which the analytics profile is bound, and global counters of HTTP, TCP, and so on.

 ## Deployment

-You can deploy Citrix Observability Exporter using Kubernetes YAML or Helm charts. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md). To deploy Citrix Observability Exporter using Helm charts, see [Deploy using Helm charts](https://github.com/citrix/citrix-helm-charts/tree/master/citrix-observability-exporter).
-
-## Features
-
-### Custom header logging
-
-Custom header logging enables logging of all HTTP headers of a transaction and currently supported on the Kafka endpoint.
-For more information, see [Custom header logging](https://github.com/citrix/citrix-observability-exporter/tree/master/custom-header).
+You can deploy Citrix Observability Exporter using Kubernetes YAML or Helm charts. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md).

 ## Questions
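The hourly, per-ADC index scheme described in the Elasticsearch section above can be pictured as a name-building rule. The commit only names the components (date, hour, ADC UUID, and http_event or http_error); the field order and separators below are an assumption for illustration, not the exporter's documented layout:

```shell
# Hypothetical sketch of the hourly per-ADC index naming. The components
# (prefix, record type, ADC UUID, date, hour) come from the text above;
# their order and the "_" separator are illustrative assumptions.
coe_index_name() {
  local prefix="$1" record_type="$2" adc_uuid="$3" date="$4" hour="$5"
  printf '%s_%s_%s_%s_%s\n' "$prefix" "$record_type" "$adc_uuid" "$date" "$hour"
}
```

For example, `coe_index_name adc_coe http_event 9f8e7d6c 2019-11-05 17` yields `adc_coe_http_event_9f8e7d6c_2019-11-05_17`, which would match the `adc_coe*` index pattern used later for the Kibana dashboard.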

deployment/README.md

Lines changed: 24 additions & 21 deletions
@@ -1,7 +1,6 @@
 # Deploy Citrix Observability Exporter

 This topic provides information on how to deploy Citrix Observability Exporter using Kubernetes YAML files.
-To deploy Citrix Observability Exporter using Helm charts, see [Deploy using Helm charts](https://github.com/citrix/citrix-helm-charts/tree/master/citrix-observability-exporter).
 <!---
 You can deploy Citrix Observability Exporter using Kubernetes YAML files or using Helm charts.
 -->
@@ -15,18 +14,18 @@ The following diagram shows a deployment of Citrix Observability Exporter with a

 ## Prerequisites

-- Ensure that you have a Kubernetes cluster with `kube-dns` or `CoreDNS`.
+- Ensure that you have a Kubernetes cluster with the `kube-dns` or `CoreDNS` add-on enabled.
 - If Zipkin is used as the distributed tracer,
   ensure that you have the following Docker images installed in the Kubernetes cluster:
   - [Zipkin](https://zipkin.io/)
-  - [Elasticsearch](https://www.elastic.co/products/elasticsearch) as back-end for Zipkin and to visualize your tracing data in [Kibana](https://www.elastic.co/products/kibana). You can also use Elasticsearch as an endpoint for transactions.
+  - (Optional) [Elasticsearch](https://www.elastic.co/products/elasticsearch) as a back end for Zipkin. Elasticsearch is required if you want to visualize your tracing data in [Kibana](https://www.elastic.co/products/kibana). You can also use Elasticsearch as an endpoint for transactions.
   - [Kibana](https://www.elastic.co/products/kibana) is required to visualize your tracing data.

 **Note:**
 You can use [zipkin.yaml](../examples/zipkin.yaml), [elasticsearch.yaml](../examples/elasticsearch.yaml), and [kibana.yaml](../examples/kibana.yaml) for installing Zipkin, Elasticsearch, and Kibana.

 - If Elasticsearch is used as the endpoint for transactions, ensure that you have Elasticsearch installed and configured.
-- If Kafka is used as the endpoint for transactions, ensure that Kafka server is installed and configured.
+- If Kafka is used as the endpoint for transactions, ensure that the Kafka server is installed and configured.
 - If Prometheus is used as the endpoint for time series data, ensure that Prometheus is installed and configured.

 ## Deploy Citrix Observability Exporter using YAML
@@ -45,48 +44,47 @@ To deploy Citrix Observability Exporter using Kubernetes YAML, perform the follo

 - For Citrix Observability Exporter with Zipkin tracing support:

-  Deploy Citrix Observability Exporter using the [coe-tracing.yaml](coe-tracing.yaml) file.
+  Deploy Citrix Observability Exporter using the [coe-zipkin.yaml](coe-zipkin.yaml) file.

-      kubectl create -f coe-tracing.yaml
+      kubectl create -f coe-zipkin.yaml

-  Set the `EnableTracing` option to `yes` and provide the Zipkin server information using `TracingServer`.
-  You can specify the tracing server in ConfigMap using environment variables in two ways:
+  You can specify the Zipkin server information in the ConfigMap using environment variables in two ways:

   - Specify the IP address or DNS name of the tracing server (Zipkin):

-        TRACING_SERVER=<ip-address> or <dns-name>
+        ServerUrl=<ip-address> or <dns-name>

     If you specify only the IP address, Citrix Observability Exporter uses the default Zipkin port (9411) and the default upload path (`/api/v1/spans`).

   - Explicitly provide the tracer IP address or DNS name, port, and the upload path information:

-        TRACING_SERVER=<ip-address>:<port>/api/v1/spans
+        ServerUrl=<ip-address>:<port>/api/v1/spans
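The defaulting rule just described, where a bare IP address or DNS name gets the standard Zipkin port and upload path, can be sketched as a small helper. This is an illustration of the rule only, not code shipped with the exporter:

```shell
# Illustrative only: normalize a ServerUrl the way the text above describes.
# A value without an explicit port gets Zipkin's default port 9411 and the
# default upload path /api/v1/spans; anything with a port is kept as-is.
normalize_zipkin_url() {
  local url="$1"
  case "$url" in
    *:*) printf '%s\n' "$url" ;;                    # port (and path) given explicitly
    *)   printf '%s:9411/api/v1/spans\n' "$url" ;;  # bare host: apply the defaults
  esac
}
```

For example, `normalize_zipkin_url 10.102.40.20` prints `10.102.40.20:9411/api/v1/spans`.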

 - For Citrix Observability Exporter with Elasticsearch as the endpoint:

   Deploy Citrix Observability Exporter using the [coe-es.yaml](coe-es.yaml) file.

       kubectl create -f coe-es.yaml

-  Set the Elasticsearch server details in the `ELKServer` environment variable either based on IP address or DNS name, along with port information.
+  Set the Elasticsearch server details in the `ServerUrl` environment variable, either as an IP address or a DNS name, along with the port information.

 - For Citrix Observability Exporter with Kafka as the endpoint:

   Deploy Citrix Observability Exporter using the [coe-kafka.yaml](coe-kafka.yaml) file.

       kubectl create -f coe-kafka.yaml

-  Enable the Kafka endpoint by setting the value of `EnableKafka` as `yes`. Also, set Kafka broker details in `KafkaBroker` and topic details in `KafkaTopic`. You also must specify the Kafka cluster host IP mapping under HostAliases in the [Kubernetes Pod specification](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases).
+  Enable the Kafka endpoint by setting the Kafka broker details in the `ServerUrl` environment variable, either as an IP address or a DNS name, along with the port information. Then, specify the Kafka topic details in `KafkaTopic`. You must also specify the Kafka cluster host IP mapping under `hostAliases` in the [Kubernetes Pod specification](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases).
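The `hostAliases` requirement above maps Kafka broker hostnames to IP addresses inside the Pod. A minimal Pod-spec sketch follows; the broker names and IPs are placeholders for illustration, not values from this commit:

```yaml
# Illustrative fragment of a Pod specification; hostnames and IPs are examples.
spec:
  hostAliases:
  - ip: "192.0.2.10"
    hostnames:
    - "kafka-broker-0"
  - ip: "192.0.2.11"
    hostnames:
    - "kafka-broker-1"
```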

 - For Citrix Observability Exporter with Prometheus as the endpoint for time series data:

-  You can enable Prometheus support by specifying the following annotations in the YAML files to deploy Zipkin, Kafka, or Elasticsearch and exposing the time series port.
+  You can enable Prometheus support by specifying the following annotations in the YAML files used to deploy Zipkin, Kafka, or Elasticsearch, and by exposing the time series port. You also need to set the time series parameters, with metrics enabled set to `true` and the mode set to `prometheus`, in the respective `cic-configmap.yaml` file for the endpoint.

       prometheus.io/scrape: "true"
       prometheus.io/port: "5563"

-  The following command deploys Citrix Observability Exporter with both Elasticsearch and Prometheus as endpoints, using the [coe-es-prometheus.yaml](coe-es-prometheus.yaml) file. In this YAML file, annotations for Prometheus support are enabled and port 5563 is exposed which is used for the time series data.
+  The following command deploys Citrix Observability Exporter with both Elasticsearch and Prometheus as endpoints, using the [coe-es-prometheus.yaml](coe-es-prometheus.yaml) file. In this YAML file, annotations for Prometheus support are enabled and port 5563, which is used for the time series data, is exposed.

       kubectl create -f coe-es-prometheus.yaml

@@ -112,22 +110,27 @@ Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Obse
    - For tracing support with Zipkin: [cpx-ingress-tracing.yaml](../examples/tracing/cpx-ingress-tracing.yaml)
    - For Elasticsearch as the transaction endpoint: [cpx-ingress-es.yaml](../examples/elasticsearch/cpx-ingress-es.yaml)
    - For Kafka as the transaction endpoint: [cpx-ingress-kafka.yaml](../examples/kafka/cpx-ingress-kafka.yaml)
+   - For Prometheus as the time series data endpoint: [cpx-ingress-prometheus.yaml](../examples/prometheus/cpx-ingress-prometheus.yaml)

-2. Create and deploy a ConfigMap with the required key-value pairs in the ConfigMap. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file.
+2. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file or the one available in the respective endpoint example directory.

       kubectl create -f cic-configmap.yaml

 3. Deploy Citrix ADC CPX with the Citrix ingress controller as a sidecar using the following command.

    - For tracing support with Zipkin:

-      kubectl create -f cpx-ingress-tracing.yaml
+        kubectl create -f cpx-ingress-tracing.yaml
    - For Elasticsearch as the transaction endpoint:

-      kubectl create -f cpx-ingress-es.yaml
-   - For Kafka as the transaction endpoint:
+        kubectl create -f cpx-ingress-es.yaml
+
+   - For Kafka as the transaction endpoint:

-      kubectl create -f cpx-ingress-kafka.yaml
+        kubectl create -f cpx-ingress-kafka.yaml
+
+   - For Prometheus as the time series data endpoint:

+        kubectl create -f cpx-ingress-prometheus.yaml

 **Note:**
 Using [smart annotations](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/), you can define the specific parameters that you must import by specifying them in the YAML file for deploying Citrix ADC CPX.
@@ -206,7 +209,7 @@ file. This sample web application is added as a service in the Ingress.

       kubectl create -f webserver-es.yaml

-1. Create a host entry for the web application in Citrix ADC CPX hosts file and map it to the IP address of Kubernetes master node for DNS resolution.
+1. Create a host entry for the web application in the Citrix ADC CPX hosts file and map it to the IP address of the Kubernetes master node for DNS resolution.

       www.samplewebserver.com ip-address

@@ -231,7 +234,7 @@ file. This sample web application is added as a service in the Ingress.

 **Note:**
 You can import the Kibana dashboard template from [dashboards](../dashboards/KibanaAppTrans.ndjson).
-Before importing the Kibana dashboard, you must define an index pattern named `*http*` using the information in the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html).
+This Kibana dashboard uses the default index prefix `adc_coe`; you must define an index pattern named `adc_coe*` using the information in the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html).

 ### Sample Grafana dashboard for Prometheus

Lines changed: 32 additions & 26 deletions
@@ -5,30 +5,32 @@ metadata:
 data:
   lstreamd_default.conf: |
     {
-      "RecordType": {
-        "HTTP": "all",
-        "TCP": "all",
-        "SWG": "all",
-        "VPN": "all",
-        "NGS": "all",
-        "ICA": "all",
-        "APPFW": "all",
-        "BOT": "none",
-        "VIDEOOPT": "none",
-        "BURST_CQA": "none",
-        "SLA": "none"
-      },
-      "EnableTracing": "no",
-      "ProcessAlways": "no",
-      "ProcessorMode": "json",
-      "FileSizeMax": "40",
-      "ElkServer": "elasticsearch.default.svc.cluster.local:9200",
-      "ElkMaxConnections": "512",
-      "ElkMaxSendBuffersPerSec": "128",
-      "ElkBufferingLimit": "1024*1024",
-      "SkipAvro": "yes",
-      "ProcessYieldTimeOut": "500",
-      "FileStorageLimit": "1000"
+      "Endpoints": {
+        "ES": {
+          "ServerUrl": "elasticsearch.default.svc.cluster.local:9200",
+          "IndexPrefix": "adc_coe",
+          "IndexInterval": "daily",
+          "RecordType": {
+            "HTTP": "all",
+            "TCP": "all",
+            "SWG": "all",
+            "VPN": "all",
+            "NGS": "all",
+            "ICA": "all",
+            "APPFW": "none",
+            "BOT": "none",
+            "VIDEOOPT": "none",
+            "BURST_CQA": "none",
+            "SLA": "none",
+            "MONGO": "all"
+          },
+          "ProcessAlways": "no",
+          "ProcessYieldTimeOut": "500",
+          "MaxConnections": "512",
+          "ElkMaxSendBuffersPerSec": "64",
+          "JsonFileDump": "no"
+        }
+      }
     }
 ---
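Because `lstreamd_default.conf` is embedded as a string in the ConfigMap, a stray comma or brace in the JSON block only surfaces at runtime. A quick pre-apply check can catch that early; this is a sketch assuming `python3` is available, not a tool provided by the exporter:

```shell
# Validate the embedded lstreamd_default.conf JSON read from stdin.
# Prints "valid JSON" or "invalid JSON"; illustrative helper only.
validate_lstreamd_conf() {
  if python3 -m json.tool > /dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "invalid JSON"
  fi
}
```

For example, copy the JSON block to a file and run `validate_lstreamd_conf < lstreamd_default.conf` before applying the manifest.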

@@ -51,7 +53,7 @@ spec:
     spec:
       containers:
       - name: coe-es
-        image: "quay.io/citrix/citrix-observability-exporter:1.0.001"
+        image: "quay.io/citrix/citrix-observability-exporter:1.2.001"
         imagePullPolicy: Always
         ports:
         - containerPort: 5557
@@ -60,10 +62,14 @@ spec:
         - name: lstreamd-config-es
           mountPath: /var/logproxy/lstreamd/conf/lstreamd_default.conf
           subPath: lstreamd_default.conf
+        - name: core-data
+          mountPath: /cores/
       volumes:
       - name: lstreamd-config-es
         configMap:
           name: coe-config-es
+      - name: core-data
+        emptyDir: {}
 ---
 # Citrix-observability-exporter headless service
 apiVersion: v1
@@ -93,4 +99,4 @@ spec:
   - port: 5557
     protocol: TCP
   selector:
-    app: coe-es
+    app: coe-es

deployment/coe-es-prometheus.yaml

Lines changed: 35 additions & 29 deletions
@@ -5,32 +5,32 @@ metadata:
 data:
   lstreamd_default.conf: |
     {
-      "RecordType": {
-        "HTTP": "all",
-        "TCP": "all",
-        "SWG": "all",
-        "VPN": "all",
-        "NGS": "all",
-        "ICA": "all",
-        "APPFW": "none",
-        "BOT": "none",
-        "VIDEOOPT": "none",
-        "BURST_CQA": "none",
-        "SLA": "none"
-      },
-      "EnableTracing": "yes",
-      "TracingServer": "zipkin.default.svc.cluster.local:9411/api/v1/spans",
-      "ProcessAlways": "yes",
-      "ProcessorMode": "json",
-      "FileSizeMax": "40",
-      "ElkServer": "elasticsearch.default.svc.cluster.local:9200",
-      "ElkMaxConnections": "512",
-      "ElkMaxSendBuffersPerSec": "128",
-      "ElkBufferingLimit": "1024*1024",
-      "ELKFileDump": "no",
-      "SkipAvro": "yes",
-      "ProcessYieldTimeOut": "500",
-      "FileStorageLimit": "1000"
+      "Endpoints": {
+        "ES": {
+          "ServerUrl": "elasticsearch.default.svc.cluster.local:9200",
+          "IndexPrefix": "adc_coe",
+          "IndexInterval": "daily",
+          "RecordType": {
+            "HTTP": "all",
+            "TCP": "all",
+            "SWG": "all",
+            "VPN": "all",
+            "NGS": "all",
+            "ICA": "all",
+            "APPFW": "none",
+            "BOT": "none",
+            "VIDEOOPT": "none",
+            "BURST_CQA": "none",
+            "SLA": "none",
+            "MONGO": "none"
+          },
+          "ProcessAlways": "no",
+          "ProcessYieldTimeOut": "500",
+          "MaxConnections": "512",
+          "ElkMaxSendBuffersPerSec": "64",
+          "JsonFileDump": "no"
+        }
+      }
     }
 ---

@@ -56,8 +56,10 @@ spec:
     spec:
      containers:
       - name: coe-es
-        image: "quay.io/citrix/citrix-observability-exporter:1.1.001"
+        image: "quay.io/citrix/citrix-observability-exporter:1.2.001"
         imagePullPolicy: Always
+        securityContext:
+          privileged: true
         ports:
         - containerPort: 5557
           name: lstream
@@ -67,12 +69,16 @@ spec:
         - name: lstreamd-config-es
           mountPath: /var/logproxy/lstreamd/conf/lstreamd_default.conf
           subPath: lstreamd_default.conf
+        - name: core-data
+          mountPath: /cores/
       volumes:
       - name: lstreamd-config-es
         configMap:
           name: coe-config-es
+      - name: core-data
+        emptyDir: {}
 ---
-# Citrix-observability-exporter headless service
+# Citrix-observability-exporter headless service
 apiVersion: v1
 kind: Service
 metadata:
@@ -108,4 +114,4 @@ spec:
     protocol: TCP
     name: rest
   selector:
-    app: coe-es
+    app: coe-es

0 commit comments
