docs/field-descriptions.md (7 additions, 3 deletions)
@@ -77,16 +77,20 @@ This topic contains descriptions of the `lstreamd_default.conf` file parameters.
 - `EVENTS`:

-  Citrix ADC Observability Exporter allows exporting time series (events and audit logs) to Splunk.
+  Citrix ADC Observability Exporter allows exporting time series (events and audit logs) to Splunk and Kafka.

   Set this field to `yes` to allow exporting events.
   The default value is `no`.

 - `AUDITLOGS`:

-  You can export audit logs to Splunk.
+  You can export audit logs to Splunk and Kafka.

   Set this field to `yes` to allow exporting audit logs.
   The default value is `no`.

+- `ConnectionPoolSize`:
+
+  Alters the size of the connection pool for Splunk. `ConnectionPoolSize` and `MaxConnections` can be used together to control the rate at which data is exported to the endpoint.
+
 - `ElkMaxSendBuffersPerSec`:

   The maximum rate at which the data is exported to Elasticsearch.
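As a rough sketch of where these fields live, the fragment below shows a hypothetical Splunk section of `lstreamd_default.conf`. Only the field names `EVENTS`, `AUDITLOGS`, `ConnectionPoolSize`, and `MaxConnections` come from the descriptions above; the surrounding keys and values are illustrative assumptions, not the real schema:

```json
{
    "Splunk": {
        "ServerUrl": "https://splunk.example.com:8088",
        "AuthToken": "placeholder-token",
        "EVENTS": "yes",
        "AUDITLOGS": "yes",
        "ConnectionPoolSize": "100",
        "MaxConnections": "512"
    }
}
```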
@@ -174,7 +178,7 @@ Following are the guidelines while configuring the `lstreamd_default.conf` file
 Prometheus is always `ON`, and metrics can be exported to it in parallel to transactions, audit logs, and events.

-- Currently, you can only export time series like audit logs and events to Splunk, but in parallel to transactions and metrics.
+- Currently, you can export time series such as audit logs and events only to Splunk and Kafka; they are exported in parallel to transactions and metrics.

 - You must not configure multiple endpoints of the same type in the `lstreamd_default.conf` file for one Citrix ADC Observability Exporter. For example, it is not possible to configure two Splunk instances, two Kafka instances, two Elasticsearch instances, or one Splunk and one Elasticsearch, and so on.

   For Zipkin, although you can configure it in parallel to Splunk and Elasticsearch, you cannot configure multiple instances of Zipkin. For example, it is not possible to have two Zipkin instances in parallel.
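The single-endpoint-per-type guideline above can be sketched as a small validation step. This is not part of the real exporter; the function and the flat list of endpoint type names are assumptions for illustration:

```python
# Endpoint types that, per the guideline above, must not appear
# more than once in one lstreamd_default.conf.
SINGLETON_TYPES = {"Splunk", "Kafka", "ElasticSearch", "Zipkin"}

def validate_endpoint_types(configured_types):
    """Raise if the same endpoint type is configured twice.

    `configured_types` is a hypothetical flat list of endpoint type
    names parsed out of the config file.
    """
    seen = set()
    for ep_type in configured_types:
        if ep_type in SINGLETON_TYPES and ep_type in seen:
            raise ValueError(f"multiple {ep_type} endpoints are not supported")
        seen.add(ep_type)
    return True
```

Zipkin in parallel to Splunk passes this check (`["Splunk", "Zipkin"]`), while two Zipkin instances (`["Zipkin", "Zipkin"]`) do not.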
docs/index.md (2 additions, 2 deletions)
@@ -44,9 +44,9 @@ You can view the traces using the Zipkin user interface. However, you can also e
 When Elasticsearch is specified as the transaction endpoint, Citrix ADC Observability Exporter converts the data to JSON format. On the Elasticsearch server, Citrix ADC Observability Exporter creates Elasticsearch indexes for each ADC on an hourly basis. These indexes are based on the date, hour, UUID of the ADC, and the type of HTTP data (http_event or http_error). Citrix ADC Observability Exporter then uploads the data in JSON format under the Elasticsearch indexes for each ADC. All regular transactions are placed in the http_event index, and any anomalies are placed in the http_error index.
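The hourly index scheme can be illustrated with a short sketch. The document does not give the actual index-name format, so the pattern produced below (prefix, separators, field order) is purely hypothetical; only the ingredients (date, hour, ADC UUID, http_event/http_error) come from the paragraph above:

```python
from datetime import datetime, timezone

def hypothetical_index_name(adc_uuid, data_type, when):
    """Combine the ingredients the document lists into an
    illustrative hourly index name (made-up format)."""
    assert data_type in ("http_event", "http_error")
    return f"adc_{adc_uuid}_{when:%Y%m%d}_{when:%H}_{data_type}"

stamp = datetime(2024, 5, 4, 13, tzinfo=timezone.utc)
print(hypothetical_index_name("9f8c", "http_event", stamp))
# adc_9f8c_20240504_13_http_event
```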
-### Citrix ADC Observability Exporter with Kafka as the transaction endpoint
+### Citrix ADC Observability Exporter with Kafka as the endpoint

-When Kafka is specified as the transaction endpoint, Citrix ADC Observability Exporter converts the transaction data to [Avro](http://avro.apache.org/docs/current/Avro) format and streams it to Kafka.
+NetScaler Observability Exporter exports transactions to Kafka in [Avro](http://avro.apache.org/docs/current/Avro) or JSON format. Audit logs and events are exported as JSON.

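As a sketch of the JSON export path described above: Kafka message values are bytes, so exporting a record as JSON amounts to serializing it before producing it to a topic. The record fields here are invented for illustration (the real transaction schema is not shown in this document), and the Avro path would serialize against an Avro schema instead:

```python
import json

# Invented example record; real exported fields are not listed here.
record = {"adc_uuid": "9f8c", "type": "http_event", "status": 200}

# JSON export path: serialize the record to bytes for use as the
# Kafka message value.
value = json.dumps(record).encode("utf-8")

# Round-tripping shows a consumer can recover the record.
assert json.loads(value.decode("utf-8")) == record
```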
### Citrix ADC Observability Exporter with Prometheus as the endpoint for time series data