Commit 2e8a47e

Version 1.1.001 release docs

1 parent 607651b, commit 2e8a47e
4 files changed: 86 additions and 49 deletions

README.md

Lines changed: 12 additions & 2 deletions
@@ -8,7 +8,8 @@
 
 - [Zipkin](https://zipkin.io/)
 - [Kafka](https://kafka.apache.org/)
-- [Elasticsearch](https://www.elastic.co/products/elasticsearch)
+- [Elasticsearch](https://www.elastic.co/products/elasticsearch)
+- [Prometheus](https://prometheus.io/docs/introduction/overview/)
 
 ## Overview
 
@@ -26,6 +27,10 @@ Currently, you can monitor performance at the application level using Citrix ADC
 
 Citrix Observability Exporter supports collecting transactions and streaming them to endpoints. Currently, Citrix Observability Exporter supports Elasticsearch and Kafka as transaction endpoints.
 
+### Time series data support
+
+Citrix Observability Exporter supports collecting time series data (metrics) from Citrix ADC instances and exporting it to Prometheus. Prometheus is a monitoring solution for storing time series data such as metrics. You can then add Prometheus as a data source in Grafana to graphically view and analyze the Citrix ADC metrics.
+
 ## How does Citrix Observability Exporter work
 
 ### Distributed tracing with Zipkin using Citrix Observability Exporter
@@ -42,14 +47,19 @@ When Elasticsearch is specified as the transaction endpoint, Citrix Observabilit
 
 When Kafka is specified as the transaction endpoint, Citrix Observability Exporter converts the transaction data to the [Avro](http://avro.apache.org/docs/current/Avro) format and streams it to Kafka.
 
+### Citrix Observability Exporter with Prometheus as the endpoint for time series data
+
+When Prometheus is specified as the format for time series data, Citrix Observability Exporter collects various metrics from Citrix ADC instances, converts them to the appropriate Prometheus format, and exports them to the Prometheus server. These metrics include counters of the virtual servers and services to which the analytics profile is bound, and global counters for HTTP, TCP, and so on.
+
 ## Deployment
 
-You can deploy Citrix Observability Exporter using Kubernetes YAML. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md).
+You can deploy Citrix Observability Exporter using Kubernetes YAML or Helm charts. To deploy Citrix Observability Exporter using Kubernetes YAML, see [Deployment](deployment/README.md).
 
 ## Questions
 
 For questions and support, the following channels are available:
+
 - [Citrix Discussion Forum](https://discussions.citrix.com/)
 - [Citrix ADC Cloud Native Slack Channel](https://citrixadccloudnative.slack.com/)
 
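The README change above says you can add Prometheus as a data source in Grafana. One declarative way to do that is Grafana's datasource provisioning file; a minimal sketch, assuming a hypothetical in-cluster Prometheus service URL (not part of this commit):

```yaml
# Grafana datasource provisioning sketch (hypothetical file path and URL).
# Place under provisioning/datasources/ in the Grafana container.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-service:9090   # assumed in-cluster service name
    isDefault: true
```

Grafana loads such files at startup; the same data source can also be added manually in the Grafana UI.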

custom-header/README.md

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
+# Custom header logging
+
+This feature enables logging of all HTTP headers of a transaction. As part of this feature, a new option called `allHttpHeaders` is introduced under the Web Insight analytics profile. When this option is set, Citrix ADC uploads all the request and response headers under `httpAllReqHdrs` and `httpAllResHdrs` respectively. This option is disabled by default. Custom header logging is currently supported only for the Kafka endpoint.
+
+Now, Citrix Observability Exporter transaction records contain two additional fields:
+
+- `httpAllReqHdrs`: a string containing all the request header lines, separated by `\r\n`.
+- `httpAllResHdrs`: a string containing all the response header lines, separated by `\r\n`.
+
+## Citrix ADC configuration to enable custom header logging
+
+### Using the Citrix ADC command line
+
+Use the following command from the Citrix ADC command line:
+
+        set analytics profile ns_analytics_default_http_profile -allHttpHeaders ENABLED
+
+### Using the Citrix ingress controller
+
+To enable `allHttpHeaders` using the Citrix ingress controller, set the analytics profile as an annotation on the Ingress that is applied to the Citrix ADC:
+
+        ingress.citrix.com/analyticsprofile: '{"webinsight": {"httpurl":"ENABLED", "httpuseragent":"ENABLED", "httphost":"ENABLED", "allhttpheaders":"ENABLED", "httpmethod":"ENABLED", "httpcontenttype":"ENABLED"}, "tcpinsight": {"tcpBurstReporting":"DISABLED"}}'
+
+Following is a sample transaction record snapshot:
+
+        {"http_transid": "avro_HTTP_TF_0_c222660a_0_0_T_1577278789_1", "recType": "HTTP_A", "actualtemplatecode": 51, "httpReqMethod": "GET", "httpReqUrl": "/status/500", "httpReqUserAgent": "curl/7.58.0", "httpContentType": "", "httpReqHost": "10.102.34.201", "httpReqAuthorization": "", "httpReqCookie": "", "httpReqReferer": "", "httpResSetCookie": "", "httpAllReqHdrs": "Test: test500get\r\nAccept: */*\r\nHost: 10.102.34.201\r\nUser-Agent: curl/7.58.0\r\n", "httpAllResHdrs": "Connection: keep-alive\r\nContent-Length: 0\r\nContent-Type: text/html; charset=utf-8\r\nDate: Wed, 25 Dec 2019 13:16:39 GMT\r\nServer: gunicorn/19.9.0\r\nAccess-Control-Allow-Origin: *\r\nAccess-Control-Allow-Credentials: true\r\n", "icContGrpName": "", "icFlags": 0, "icNostoreFlags": 0, "icPolicyName": "", "responseMediaType": 0,
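For orientation, the `ingress.citrix.com/analyticsprofile` annotation shown in this new file sits under `metadata.annotations` of an Ingress resource. A minimal sketch, in which only the annotation key comes from the commit; the resource, host, and service names are illustrative:

```yaml
# Hypothetical Ingress showing where the analyticsprofile annotation sits.
# Resource, host, and backend names are assumptions, not from the commit.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    ingress.citrix.com/analyticsprofile: '{"webinsight": {"allhttpheaders":"ENABLED"}}'
spec:
  rules:
    - host: www.samplewebserver.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-service
                port:
                  number: 80
```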

deployment/README.md

Lines changed: 47 additions & 47 deletions
@@ -19,13 +19,14 @@ The following diagram shows a deployment of Citrix Observability Exporter with a
 ensure that you have the following Docker images installed in the Kubernetes cluster:
 - [Zipkin](https://zipkin.io/)
 - (Optional) [Elasticsearch](https://www.elastic.co/products/elasticsearch) as the back end for Zipkin. Elasticsearch is required if you want to visualize your tracing data in [Kibana](https://www.elastic.co/products/kibana). You can also use Elasticsearch as an endpoint for transactions.
-- (Optional) [Kibana](https://www.elastic.co/products/kibana) is required to visualize your tracing data.
+- [Kibana](https://www.elastic.co/products/kibana) is required to visualize your tracing data.
 
 **Note:**
 You can use [zipkin.yaml](../examples/zipkin.yaml), [elasticsearch.yaml](../examples/elasticsearch.yaml), and [kibana.yaml](../examples/kibana.yaml) for installing Zipkin, Elasticsearch, and Kibana.
 
 - If Elasticsearch is used as the endpoint for transactions, ensure that you have Elasticsearch installed and configured.
 - If Kafka is used as the endpoint for transactions, ensure that the Kafka server is installed and configured.
+- If Prometheus is used as the endpoint for time series data, ensure that Prometheus is installed and configured.
 
 ## Deploy Citrix Observability Exporter using YAML
 
@@ -38,6 +39,7 @@ To deploy Citrix Observability Exporter using Kubernetes YAML, perform the follo
 2. If you use Citrix ADC VPX or MPX in the deployment, create the necessary login credentials.
 
        kubectl create secret generic nslogin --from-literal=username='nsroot' --from-literal=password='nsroot'
+
 3. Create a Kubernetes ConfigMap, Deployment, and Service with the Log stream configuration for the required endpoint:
 
    - For Citrix Observability Exporter with Zipkin tracing support:
@@ -76,6 +78,19 @@ You can specify the tracing server in ConfigMap using environment variables in t
 
 Enable the Kafka endpoint by setting the value of `EnableKafka` to `yes`. Also, set the Kafka broker details in `KafkaBroker` and the topic details in `KafkaTopic`. You must also specify the Kafka cluster host IP mapping under `HostAliases` in the [Kubernetes Pod specification](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases).
 
+- For Citrix Observability Exporter with Prometheus as the endpoint for time series data:
+
+  You can enable Prometheus support by specifying the following annotations in the YAML files used to deploy Citrix Observability Exporter with Zipkin, Kafka, or Elasticsearch, and by exposing the time series port.
+
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "5563"
+
+  The following command deploys Citrix Observability Exporter with both Elasticsearch and Prometheus as endpoints, using the [coe-es-prometheus.yaml](coe-es-prometheus.yaml) file. In this YAML file, the annotations for Prometheus support are enabled, and port 5563, which is used for the time series data, is exposed.
+
+        kubectl create -f coe-es-prometheus.yaml
+
+  Configure Prometheus to scrape the data from the Citrix Observability Exporter time series port. No specific configuration is required on Citrix Observability Exporter to enable time series data processing: the time series port is enabled by default, and any time series data pushed to Citrix Observability Exporter is processed automatically.
+
 **Note:**
 Once you deploy a Citrix Observability Exporter instance with a specific endpoint, you cannot modify it. To change the endpoint, you must bring down the Citrix Observability Exporter instance and deploy it again with the new endpoint.
 
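On the Prometheus side, the exporter's time series port must appear as a scrape target. A minimal static `scrape_configs` sketch, assuming a hypothetical in-cluster service name for the exporter (the `prometheus.io/*` annotations above take effect only if your Prometheus uses the usual Kubernetes service-discovery relabeling):

```yaml
# prometheus.yml fragment (sketch); 'coe-service' is an assumed service name.
scrape_configs:
  - job_name: 'citrix-observability-exporter'
    static_configs:
      - targets: ['coe-service:5563']   # 5563 is the time series port from the doc
```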
@@ -89,9 +104,6 @@ In this procedure, a Citrix ADC CPX is deployed with the Citrix ingress controll
 
 Depending on the endpoint you are using, you can choose the YAML file for deploying Citrix ADC CPX. These YAML files include the configuration required for Citrix Observability Exporter.
 
-**Note:**
-Any usage of the environment variable ``NS_LOGPROXY`` in this procedure refers to ``Citrix Observability Exporter`` only.
-
 Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Observability Exporter support enabled.
 
 1. Download the YAML file for deploying Citrix ADC CPX according to the endpoint.
@@ -100,34 +112,9 @@ Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Obse
    - For Elasticsearch as the transaction endpoint: [cpx-ingress-es.yaml](../examples/elasticsearch/cpx-ingress-es.yaml)
    - For Kafka as the transaction endpoint: [cpx-ingress-kafka.yaml](../examples/kafka/cpx-ingress-kafka.yaml)
 
-2. Edit the YAML file and specify the environment variables in the Citrix ingress controller configuration according to the endpoint you are using:
-   - For tracing support with Zipkin:
-
-        - name: "NS_LOGPROXY"
-          value: "<abc.com>"
-        - name: "NS_DISTRIBUTED_TRACING"
-          value: "yes"
-
-   - For Elasticsearch or Kafka as the transaction endpoint:
-
-        - name: "NS_LOGPROXY"
-          value: "<abc.com>"
-
-   **Note:**
-   Using [smart annotations](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/), you can define the specific parameters to import by specifying them in the YAML file for deploying Citrix ADC CPX.
-
-   For example:
-
-        ingress.citrix.com/analyticsprofile: '{"webinsight": {"httpurl":"ENABLED", "httpuseragent":"ENABLED", "httphost":"ENABLED", "httpmethod":"ENABLED", "httpcontenttype":"ENABLED"}, "tcpinsight": {"tcpBurstReporting":"DISABLED"}}'
-
-   **Note:**
-   You can also define the parameters to import using smart annotations for services. You can specify the parameters in the YAML file for deploying Citrix Observability Exporter. However, you can use service annotations only when the service type is `LoadBalancer`.
-
-   For example:
-
-        service.citrix.com/analyticsprofile: '{"<service name>": {"webinsight": {"httpurl":"ENABLED", "httpuseragent":"ENABLED"}}}'
+2. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file.
+
+        kubectl create -f cic-configmap.yaml
 
 3. Deploy Citrix ADC CPX with the Citrix ingress controller as a sidecar using the following command.
 
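This commit replaces the per-deployment `NS_LOGPROXY` environment variables with a ConfigMap. As a rough sketch of the shape such a ConfigMap could take, reusing the key names from the environment variables removed elsewhere in this commit (the authoritative keys are in `../examples/cic-configmap.yaml`, which this page does not show):

```yaml
# Hypothetical sketch only; key names and values are assumptions based on the
# NS_LOGPROXY / NS_DISTRIBUTED_TRACING variables removed in this commit.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cic-configmap
data:
  NS_LOGPROXY: "coe-service"        # address of Citrix Observability Exporter
  NS_DISTRIBUTED_TRACING: "yes"     # enable tracing export
```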
@@ -141,6 +128,22 @@ Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Obse
 
        kubectl create -f cpx-ingress-kafka.yaml
 
+   **Note:**
+   Using [smart annotations](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/), you can define the specific parameters to import by specifying them in the YAML file for deploying Citrix ADC CPX.
+
+   For example:
+
+        ingress.citrix.com/analyticsprofile: '{"webinsight": {"httpurl":"ENABLED", "httpuseragent":"ENABLED", "httphost":"ENABLED", "httpmethod":"ENABLED", "httpcontenttype":"ENABLED"}, "tcpinsight": {"tcpBurstReporting":"DISABLED"}}'
+
+   **Note:**
+   You can also define the parameters to import using smart annotations for services. You can specify the parameters in the YAML file for deploying Citrix Observability Exporter. However, you can use service annotations only when the service type is `LoadBalancer`.
+
+   For example:
+
+        service.citrix.com/analyticsprofile: '{"<service name>": {"webinsight": {"httpurl":"ENABLED", "httpuseragent":"ENABLED"}}}'
+
 ### Deploy the Citrix ingress controller with Citrix Observability Exporter support for Citrix ADC MPX or VPX
 
 In this deployment, the Citrix ingress controller is deployed as a standalone pod in the Kubernetes cluster. It controls the Citrix ADC MPX or VPX appliance deployed outside the cluster. The Citrix Observability Exporter support is enabled in the Citrix ingress controller configuration.
@@ -150,24 +153,12 @@ Perform the following steps to deploy a Citrix ADC CPX instance with Citrix Obse
 You need to complete the [prerequisites](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/deploy/deploy-cic-yaml/#prerequisites) for deploying the Citrix ingress controller as a standalone pod.
 
 1. Download the [vpx-ingress.yaml](../examples/vpx-ingress.yaml) file.
-2. Edit the `vpx-ingress.yaml` file and modify the values for the environment variables as provided in [deploying the Citrix ingress controller](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/deploy/deploy-cic-yaml/#deploy-citrix-ingress-controller-as-a-pod).
-
-3. Specify environment variables for Citrix Observability Exporter in the Citrix ingress controller configuration.
-
-   - For tracing support with Zipkin:
-
-        - name: "NS_LOGPROXY"
-          value: "<abc.com>"
-        - name: "NS_DISTRIBUTED_TRACING"
-          value: "yes"
-
-   - For Elasticsearch or Kafka as the transaction endpoint:
-
-        - name: "NS_LOGPROXY"
-          value: "<abc.com>:5557"
-
-4. Once you update the environment variables, save the [vpx-ingress.yaml](../examples/vpx-ingress.yaml) file and deploy it using the following command.
+1. Create and deploy a ConfigMap with the required key-value pairs. You can use the [cic-configmap.yaml](../examples/cic-configmap.yaml) file.
+
+        kubectl create -f cic-configmap.yaml
+
+2. Deploy the [vpx-ingress.yaml](../examples/vpx-ingress.yaml) file using the following command.
 
        kubectl create -f vpx-ingress.yaml -n tracing
 
@@ -221,6 +212,7 @@ file. This sample web application is added as a service in the Ingress.
 1. Get `NodePort` information for `cpx-service` using the following command.
 
        kubectl describe service cpx-service
+
 2. Access `http://www.samplewebserver.com:NodePort` from a web browser to open the sample web application.
 
 3. Send multiple requests to the application as shown in the following sample image.
@@ -230,7 +222,7 @@ file. This sample web application is added as a service in the Ingress.
 **Note:**
 You can generate different types of response status codes for different HTTP methods (for example, GET, POST, and DELETE).
 
-1. All transactions are uploaded to the Elasticsearch server and you can view them using the Kibana dashboard.
+1. All transactions are uploaded to the Elasticsearch server and you can view them using the Kibana dashboard.
 
 You can use the following sample Kibana dashboard to visualize transactions.
 
@@ -240,3 +232,11 @@ file. This sample web application is added as a service in the Ingress.
 You can import the Kibana dashboard template from [dashboards](../dashboards/KibanaAppTrans.ndjson).
 Before importing the Kibana dashboard, you must define an index pattern named `*http*` using the information in the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html).
 
+### Sample Grafana dashboard for Prometheus
+
+Following is a sample Grafana dashboard that visualizes time series data from Prometheus. Kafka is used as the transaction endpoint.
+
+![Grafana-dashboard](../media/COE-GrafanaDashboard.png)
+
+**Note:**
+You can import the Grafana dashboard template from [dashboards](../dashboards).
[Binary file: 24 KB, not rendered]