# Troubleshoot high data ingestion in Application Insights
Billing charges for Application Insights or Log Analytics often result from high data ingestion. This article helps you troubleshoot this issue in Application Insights resources and Log Analytics workspaces.
## General troubleshooting steps
Similar to the record count queries, the queries above can help you identify the most active tables and pinpoint specific tables for further investigation.
- Using Log Analytics workbooks
In the Azure portal, navigate to your Log Analytics workspace, select **Workbooks**, and select **Usage** under **Log Analytics Workspace Insights**.
This workbook provides valuable insights, such as the percentage of data ingestion for each table and detailed ingestion statistics for each resource reporting to the same workspace.
### Step 3: Determine factors contributing to high data ingestion
After identifying the tables with high data ingestion, focus on the table with the highest activity and determine what's driving the excess telemetry. The cause could be a specific application that generates more data than the others, an exception message that gets logged too frequently, or a new logger category that emits too much information.
Here are some sample queries you can use for this identification:
```Kusto
// Surface the noisiest exceptions (the problemId grouping field is illustrative)
exceptions
| summarize count() by problemId
| sort by count_ desc
```
You can try out different telemetry fields. For example, perhaps you first run the following query and observe there is no obvious cause for the excessive telemetry:
```Kusto
dependencies
| summarize count() by target
| sort by count_ desc
```
However, you can try another telemetry field instead of `target`, such as `type`.
```Kusto
dependencies
| summarize count() by type
| sort by count_ desc
```

### Step 4: Investigate evolution of ingestion over time
Examine the evolution of ingestion over time based on the factors identified previously. This helps you determine whether the behavior has been consistent or whether it changed at a specific point. By analyzing data this way, you can pinpoint when the change happened and gain a clearer understanding of the causes behind the high data ingestion. This insight is important for addressing the issue and implementing effective solutions.
In the following queries, the [bin()](/kusto/query/bin-function) Kusto Query Language (KQL) scalar function is used to segment data into one-day intervals. This approach facilitates trend analysis because you can see how the data changes over time.
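
For instance, here's a minimal sketch of this kind of trend query, reusing the `dependencies` table and `type` field from the earlier examples:

```Kusto
// Count records per day and per dependency type to spot when ingestion changed
dependencies
| summarize count() by bin(timestamp, 1d), type
| sort by timestamp asc
```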
## Troubleshooting steps for specific scenarios
### Scenario 1: High data ingestion in Log Analytics
1. Query all tables within a Log Analytics workspace.
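
   A minimal sketch of such a query, assuming the common `union withsource` pattern for counting records per table:

   ```Kusto
   // Count records per table across the workspace; tt holds the source table name
   union withsource = tt *
   | summarize count() by tt
   | sort by count_ desc
   ```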
:::image type="content" source="media/troubleshoot-high-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::
### Scenario 2: High data ingestion in Application Insights
To determine the factors contributing to the costs, follow these steps:
1. Query the telemetry across all tables and obtain a record count per table and SDK version:
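
   A minimal sketch of such a query, assuming the classic Application Insights `itemType` and `sdkVersion` columns:

   ```Kusto
   // Count records per table (itemType) and per SDK version across all telemetry
   union *
   | summarize count() by itemType, sdkVersion
   | sort by count_ desc
   ```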
You can then drill into the individual messages:

```Kusto
// Plausible reconstruction: the traces table is an assumption based on the result shown next
traces
| summarize count() by message
| sort by count_ desc
```
The result can show the specific message increasing ingestion costs:
:::image type="content" source="media/troubleshoot-high-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records for each individual message.":::
To see when each custom event first appeared, query the `customEvents` table for a count and the earliest timestamp per event name:

```Kusto
customEvents
| summarize count(), min(timestamp) by name
```
This analysis indicates that certain events started being ingested on September 4 and quickly became noisy.
:::image type="content" source="media/troubleshoot-high-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::
## Reduce data ingestion costs
After identifying the factors in the Azure Monitor tables responsible for the unexpected data ingestion, reduce data ingestion costs by using the following methods, depending on your scenario: