support/azure/azure-monitor/app-insights/telemetry/troubleshoot-high-data-ingestion.md (9 additions, 9 deletions)
@@ -16,7 +16,7 @@ Billing charges for Application Insights or Log Analytics often occur due to hig
In the Azure portal, navigate to cost analysis for your scope. For example: **Cost Management + Billing** > **Cost Management** > **Cost analysis**. This blade offers cost analysis views to chart costs per resource, as follows:
This workbook provides valuable insights, such as the percentage of data ingestion for each table and detailed ingestion statistics for each resource reporting to the same workspace.
@@ -188,7 +188,7 @@ dependencies
You can identify which table is the biggest contributor to costs. Here's an example of `AppTraces`:
- :::image type="content" source="media/troubleshoot-high-ingestion/apptraces-table.png" alt-text="A screenshot that shows that the AppTraces table is the biggest contributor to costs.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/apptraces-table.png" alt-text="A screenshot that shows that the AppTraces table is the biggest contributor to costs.":::
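The full query behind this step isn't visible in the hunk. As a rough sketch, a workspace-level query that ranks tables by billable volume (assuming the standard hidden `_BilledSize` and `_IsBillable` columns) could look like this:

```kusto
// Rank the tables in a Log Analytics workspace by billable volume over the last 30 days.
union withsource=TableName *
| where TimeGenerated > ago(30d) and _IsBillable == true
| summarize BilledGB = round(sum(_BilledSize) / pow(1024, 3), 2) by TableName
| sort by BilledGB desc
```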
2. Query the specific application driving the costs for traces:
@@ -202,7 +202,7 @@ dependencies
| project-away TotalBilledSize
```
- :::image type="content" source="media/troubleshoot-high-ingestion/application-driving-costs-for-traces.png" alt-text="A screenshot that shows the specific application driving the costs for traces.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/application-driving-costs-for-traces.png" alt-text="A screenshot that shows the specific application driving the costs for traces.":::
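Only the final `project-away` of the step-2 query appears in this hunk. A minimal sketch, assuming a workspace-based resource where `AppTraces` carries `AppRoleName` and the hidden `_BilledSize` column, could attribute trace volume to applications like this:

```kusto
// Attribute AppTraces volume to the application (role) that produced it.
AppTraces
| where TimeGenerated > ago(7d)
| summarize TotalBilledSize = sum(_BilledSize) by AppRoleName
| sort by TotalBilledSize desc
| project-away TotalBilledSize // keep only the ranked role names, as in the step above
```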
3. Run the following query, scoped to that application, to look further into the specific logger categories sending telemetry to the `AppTraces` table:
@@ -220,7 +220,7 @@ dependencies
The result shows two main categories responsible for the costs:
- :::image type="content" source="media/troubleshoot-high-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::
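The step-3 query itself is elided here. A sketch that groups `AppTraces` records by logger category, assuming the SDK writes the category into the `Properties` bag under `CategoryName` (that key name and the role-name filter below are assumptions):

```kusto
// Count AppTraces records per logger category for a single application.
AppTraces
| where TimeGenerated > ago(7d)
| where AppRoleName == "<your-app-role-name>" // hypothetical placeholder
| extend Category = tostring(Properties["CategoryName"])
| summarize RecordCount = count() by Category
| sort by RecordCount desc
```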
### Scenario 2: High data ingestion in Application Insights
@@ -237,7 +237,7 @@ To determine the factors contributing to the costs, follow these steps:
Here's an example that shows Azure Functions generating lots of trace and exception telemetry:
- :::image type="content" source="media/troubleshoot-high-ingestion/table-sdkversion-count.png" alt-text="A screenshot that shows which table and SDK version generate most of the trace and exception telemetry.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/table-sdkversion-count.png" alt-text="A screenshot that shows which table and SDK version generate most of the trace and exception telemetry.":::
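The query for this step is elided by the hunk. In the classic Application Insights query schema, a sketch that counts trace and exception telemetry per table and SDK version (field names such as `itemType` and `sdkVersion` follow the standard schema) could look like this:

```kusto
// Count trace and exception records by table (itemType) and SDK version.
union traces, exceptions
| where timestamp > ago(7d)
| summarize RecordCount = count() by itemType, sdkVersion
| sort by RecordCount desc
```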
2. Run the following query to identify the specific app that's generating more traces than the others:
@@ -251,7 +251,7 @@ To determine the factors contributing to the costs, follow these steps:
```
- :::image type="content" source="media/troubleshoot-high-ingestion/app-generating-more-traces.png" alt-text="A screenshot that shows which app is generating the most traces.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/app-generating-more-traces.png" alt-text="A screenshot that shows which app is generating the most traces.":::
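Only the closing code fence of the step-2 query survives in this hunk. A sketch that ranks applications by trace volume, assuming `cloud_RoleName` identifies the app:

```kusto
// Rank applications (cloud roles) by the number of trace records they emit.
traces
| where timestamp > ago(7d)
| summarize TraceCount = count() by cloud_RoleName
| sort by TraceCount desc
```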
3. Refine the query to include that specific app and generate a count of records for each individual message:
@@ -266,7 +266,7 @@ To determine the factors contributing to the costs, follow these steps:
The result can show the specific message that's increasing ingestion costs:
- :::image type="content" source="media/troubleshoot-high-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records for each individual message.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records for each individual message.":::
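The refined step-3 query isn't shown in the hunk. A sketch, again assuming `cloud_RoleName` identifies the app (the role name below is a hypothetical placeholder):

```kusto
// Count trace records per distinct message for one application.
traces
| where timestamp > ago(7d)
| where cloud_RoleName == "<your-app-role-name>" // hypothetical placeholder
| summarize RecordCount = count() by message
| sort by RecordCount desc
```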
### Scenario 3: Reach the daily cap unexpectedly
@@ -280,7 +280,7 @@ customEvents
This analysis indicates that certain events started being ingested on September 4 and quickly became noisy.
- :::image type="content" source="media/troubleshoot-high-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::
+ :::image type="content" source="media/troubleshoot-high-data-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::
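The `customEvents` query is only partially visible (the hunk header above shows its first line). A sketch that charts daily custom-event volume per event name, which would surface the kind of jump described here:

```kusto
// Daily count of custom events by name, to spot when a noisy event first appeared.
customEvents
| where timestamp > ago(30d)
| summarize EventCount = count() by name, bin(timestamp, 1d)
| render timechart
```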