
Commit 886ef45

update
1 parent 9fcb71c commit 886ef45

1 file changed

Lines changed: 14 additions & 14 deletions

support/azure/azure-monitor/app-insights/telemetry/troubleshoot-high-ingestion.md

@@ -8,7 +8,7 @@ ms.custom: sap:Application Insights
---
# Troubleshoot high data ingestion in Application Insights

-This article helps you troubleshoot high data ingestion that occurs in Application Insights resources or Log Analytics workspaces.
+Billing charges for Application Insights or Log Analytics often occur due to high data ingestion. This article helps you troubleshoot this issue in Application Insights resources or Log Analytics workspaces.

## General troubleshooting steps

@@ -54,17 +54,17 @@ Once you've identified an Application Insights resource or a Log Analytics works

Similar to the record count queries, the queries above can assist in identifying the most active tables, allowing you to pinpoint specific tables for further investigation.

-- Using Log Analytics Usage workbooks
+- Using Log Analytics workbooks

In the Azure portal, navigate to your Log Analytics workspace, select **Workbooks**, and select **Usage** under **Log Analytics Workspace Insights**.

:::image type="content" source="media/troubleshoot-high-ingestion/log-analytics-usage-workbook.png" alt-text="A screenshot that shows the Log Analytics workbook pane." lightbox="media/troubleshoot-high-ingestion/log-analytics-usage-workbook.png" border="false":::

This workbook provides valuable insights, such as the percentage of data ingestion for each table and detailed ingestion statistics for each resource reporting to the same workspace.

-### Step 3: Identify driving factors in high data ingestion
+### Step 3: Determine factors contributing to high data ingestion

-Once you've identified the tables with high data ingestion, take the table with the highest activity and identify the driving factors for that excess telemetry. This could be a specific application that generates more data than the others, an exception message that gets logged too frequently, or a new logger category that emits too information.
+After identifying the tables with high data ingestion, focus on the table with the highest activity and determine the factors contributing to that excess telemetry. This could be a specific application that generates more data than the others, an exception message that gets logged too frequently, or a new logger category that emits too much information.

Here are some sample queries you can use for this identification:

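The sample queries themselves fall outside the changed lines. Purely as an illustration of the kind of query this identification could use (a hypothetical sketch assuming the classic Application Insights `exceptions` table and its standard `cloud_RoleName` and `problemId` columns, not necessarily one of the article's own samples):

```Kusto
// Hypothetical sketch: find which application (cloud role) and which exception
// account for most of the ingested exception telemetry.
exceptions
| summarize count() by cloud_RoleName, problemId
| sort by count_ desc
```

Swapping `problemId` for a field such as `outerMessage` narrows the result to a specific exception message.
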
@@ -111,7 +111,7 @@ exceptions
```

-You can try out different telemetry fields. For example, perhaps you first run the query below and see no evident culprit for the excess of telemetry:
+You can try out different telemetry fields. For example, perhaps you first run the following query and observe there is no obvious cause for the excessive telemetry:

```Kusto
dependencies
@@ -120,7 +120,7 @@ dependencies
| sort by count_ desc
```

-However, you can try another telemetry field instead of `target`, such as `type`. This might show more compelling results to help your investigation.
+However, you can try another telemetry field instead of `target`, such as `type`.

```Kusto
dependencies
@@ -149,7 +149,7 @@ exceptions

### Step 4: Investigate evolution of ingestion over time

-Examine the evolution of ingestion over time based on the driving factors identified previously. This way can determine whether this behavior has been consistent or if changes occurred at a specific point. By analyzing data in this way, you can pinpoint when the change happened and provide a clearer understanding of the causes behind the high data ingestion. This insight will be important for addressing the issue and implementing effective solutions.
+Examine the evolution of ingestion over time based on the factors identified previously. This way, you can determine whether this behavior has been consistent or whether changes occurred at a specific point. By analyzing data in this way, you can pinpoint when the change happened and gain a clearer understanding of the causes behind the high data ingestion. This insight is important for addressing the issue and implementing effective solutions.

In the following queries, the [bin()](/kusto/query/bin-function) Kusto Query Language (KQL) scalar function is used to segment data into 1-day intervals. This approach facilitates trend analysis because you can see how the data has changed or stayed the same over time.
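The per-day queries themselves aren't shown in the changed lines. As an illustration of the `bin()` pattern described above (a hypothetical sketch using the `dependencies` table that appears elsewhere in the article, not the article's exact query):

```Kusto
// Hypothetical sketch: count dependency records in 1-day intervals
// to see whether ingestion volume changed at a specific point in time.
dependencies
| where timestamp > ago(30d)
| summarize count() by bin(timestamp, 1d)
| sort by timestamp asc
```

Replacing `dependencies` with another high-volume table, or adding the factor identified in step 3 to the `summarize ... by` clause, segments the trend by that factor.
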

@@ -172,7 +172,7 @@ dependencies

## Troubleshooting steps for specific scenarios

-### Scenario 1: High ingestion in Log Analytics
+### Scenario 1: High data ingestion in Log Analytics

1. Query all tables within a Log Analytics workspace.
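The query for this step isn't included in the changed lines. One common way to compare billable volume across all tables in a workspace is the `Usage` table; the following is a sketch under that assumption, not necessarily the query the article uses:

```Kusto
// Hypothetical sketch: billable ingestion per table (DataType) over the last 30 days.
// Quantity is reported in MB, so divide by 1024 to get GB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| sort by IngestedGB desc
```
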

@@ -222,9 +222,9 @@ dependencies

:::image type="content" source="media/troubleshoot-high-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::

-### Scenario 2: High ingestion in Application Insight
+### Scenario 2: High data ingestion in Application Insights

-To identify what specifically is driving the costs, follow these steps:
+To determine the factors contributing to the costs, follow these steps:

1. Query the telemetry across all tables and obtain a record count per table and SDK version:

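The query body for this step isn't shown in the changed lines. In the classic Application Insights query experience, a record count per table and SDK version could be sketched as follows (an assumed approach for illustration, not the article's actual query):

```Kusto
// Hypothetical sketch: union all Application Insights tables and count records
// per table (itemType) and per SDK version.
union *
| where timestamp > ago(7d)
| summarize count() by itemType, sdkVersion
| sort by count_ desc
```
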
@@ -264,7 +264,7 @@ To identify what specifically is driving the costs, follow these steps:
| sort by count_ desc
```

-The result can show the specific message driving up ingestion costs:
+The result can show the specific message increasing ingestion costs:

:::image type="content" source="media/troubleshoot-high-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records for each individual message.":::

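Only the tail of the query for this step (`| sort by count_ desc`) appears in the changed lines. As an illustration only, and assuming the classic `traces` table holds these messages, a per-message record count might look like:

```Kusto
// Hypothetical sketch: count trace records per distinct message text
// to surface the message that generates the most ingestion.
traces
| summarize count() by message
| sort by count_ desc
```
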
@@ -278,13 +278,13 @@ customEvents
| summarize count(), min(timestamp) by name
```

-This analysis revealed that certain events started ingested on September 4th and subsequently became noisy very quickly.
+This analysis indicates that certain events started being ingested on September 4 and quickly became noisy.

:::image type="content" source="media/troubleshoot-high-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::

-## Methods to reduce costs
+## Reduce data ingestion costs

-After identifying the driving factors in the Azure Monitor tables that explain the unexpected data ingestion, reduce costs by using the following methods per your scenario:
+After identifying the factors in the Azure Monitor tables that are responsible for the unexpected data ingestion, reduce data ingestion costs by using the following methods, depending on your scenario:

### Update daily cap configuration

