Commit 1ddc63f

committed
update
1 parent 886ef45 commit 1ddc63f

11 files changed

Lines changed: 11 additions & 9 deletions

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/app-generating-more-traces.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/app-generating-more-traces.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/app-message-count.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/app-message-count.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/application-driving-costs-for-traces.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/application-driving-costs-for-traces.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/apptraces-table.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/apptraces-table.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/cost-analysis.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/cost-analysis.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/custom-events.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/custom-events.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/log-analytics-usage-workbook.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/log-analytics-usage-workbook.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/logger-categories-sending-telemetry-to-apptraces.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/logger-categories-sending-telemetry-to-apptraces.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-ingestion/table-sdkversion-count.png renamed to support/azure/azure-monitor/app-insights/telemetry/media/troubleshoot-high-data-ingestion/table-sdkversion-count.png

File renamed without changes.

support/azure/azure-monitor/app-insights/telemetry/troubleshoot-high-ingestion.md renamed to support/azure/azure-monitor/app-insights/telemetry/troubleshoot-high-data-ingestion.md

Lines changed: 9 additions & 9 deletions
@@ -16,7 +16,7 @@ Billing charges for Application Insights or Log Analytics often occur due to hig

In the Azure portal, navigate to cost analysis for your scope. For example: **Cost Management + Billing** > **Cost Management** > **Cost analysis**. This blade offers cost analysis views to chart costs per resource, as follows:

-:::image type="content" source="media/troubleshoot-high-ingestion/cost-analysis.png" alt-text="A screenshot that shows the 'cost analysis' blade." border="false":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/cost-analysis.png" alt-text="A screenshot that shows the 'cost analysis' blade." border="false":::

### Step 2: Identify costly tables with high data ingestion
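
The body of this step sits outside the diff context. A table-level ingestion breakdown is commonly built on the `Usage` meta-table; a minimal sketch (not necessarily the article's own query):

```kusto
// Billable ingestion per table over the last 30 days.
// Usage reports Quantity in MB, so divide by 1,000 for GB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| sort by IngestedGB desc
```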
@@ -58,7 +58,7 @@ Once you've identified an Application Insights resource or a Log Analytics works

In the Azure portal, navigate to your Log Analytics workspace, select **Workbooks**, and select **Usage** under **Log Analytics Workspace Insights**.

-:::image type="content" source="media/troubleshoot-high-ingestion/log-analytics-usage-workbook.png" alt-text="A screenshot that shows the Log Analytics workbook pane." lightbox="media/troubleshoot-high-ingestion/log-analytics-usage-workbook.png" border="false":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/log-analytics-usage-workbook.png" alt-text="A screenshot that shows the Log Analytics workbook pane." lightbox="media/troubleshoot-high-data-ingestion/log-analytics-usage-workbook.png" border="false":::

This workbook provides valuable insights, such as the percentage of data ingestion for each table and detailed ingestion statistics for each resource reporting to the same workspace.
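
The percentage-per-table view that the workbook surfaces can be approximated directly in KQL; a sketch, again assuming the standard `Usage` meta-table:

```kusto
// Each table's share of total billable ingestion over the last 30 days.
Usage
| where TimeGenerated > ago(30d) and IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| extend PercentOfTotal = round(100.0 * IngestedMB / toscalar(
    Usage
    | where TimeGenerated > ago(30d) and IsBillable == true
    | summarize sum(Quantity)), 1)
| sort by IngestedMB desc
```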
@@ -188,7 +188,7 @@ dependencies

You can see which table is the biggest contributor to costs. Here's an example of `AppTraces`:

-:::image type="content" source="media/troubleshoot-high-ingestion/apptraces-table.png" alt-text="A screenshot that shows that the AppTraces table is the biggest contributor to costs.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/apptraces-table.png" alt-text="A screenshot that shows that the AppTraces table is the biggest contributor to costs.":::

2. Query the specific application driving the costs for traces:
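
The query for this step is elided from the diff; only its tail (`| project-away TotalBilledSize`) survives in the next hunk. A sketch of the likely shape, assuming the workspace-based `AppTraces` table with its standard `_BilledSize` and `AppRoleName` columns:

```kusto
// Billed bytes per application role; sort by volume, then hide the helper column.
AppTraces
| where TimeGenerated > ago(30d)
| summarize TotalBilledSize = sum(_BilledSize) by AppRoleName
| sort by TotalBilledSize desc
| project-away TotalBilledSize
```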
@@ -202,7 +202,7 @@ dependencies

| project-away TotalBilledSize
```

-:::image type="content" source="media/troubleshoot-high-ingestion/application-driving-costs-for-traces.png" alt-text="A screenshot that shows the specific application driving the costs for traces.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/application-driving-costs-for-traces.png" alt-text="A screenshot that shows the specific application driving the costs for traces.":::

3. Run the following query specific to that application and look further into the specific logger categories sending telemetry to the `AppTraces` table:
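
This step's query is likewise not in the diff context. ILogger category names typically land in the `Properties` bag of `AppTraces`; a hypothetical sketch (the `CategoryName` key and the role-name filter are assumptions):

```kusto
// Record count and billed bytes per logger category for one application.
AppTraces
| where TimeGenerated > ago(30d)
| where AppRoleName == "<your-app-role>" // hypothetical placeholder
| extend Category = tostring(Properties["CategoryName"])
| summarize Count = count(), BilledBytes = sum(_BilledSize) by Category
| sort by BilledBytes desc
```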
@@ -220,7 +220,7 @@ dependencies

The result shows two main categories responsible for the costs:

-:::image type="content" source="media/troubleshoot-high-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/logger-categories-sending-telemetry-to-apptraces.png" alt-text="A screenshot that shows the specific logger categories sending telemetry to the AppTraces table.":::

### Scenario 2: High data ingestion in Application Insights
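
Scenario 2's first query falls between these hunks; per the `table-sdkversion-count` screenshot that follows, it groups telemetry by table and SDK version. A sketch in the classic (resource-based) Application Insights schema:

```kusto
// Which telemetry types and SDKs produce the most records.
union traces, exceptions
| where timestamp > ago(30d)
| summarize Count = count() by itemType, sdkVersion
| sort by Count desc
```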
@@ -237,7 +237,7 @@ To determine the factors contributing to the costs, follow these steps:

Here's an example that shows Azure Functions is generating lots of trace and exception telemetry:

-:::image type="content" source="media/troubleshoot-high-ingestion/table-sdkversion-count.png" alt-text="A screenshot that shows which table and SDK are generating the most trace and exception telemetry.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/table-sdkversion-count.png" alt-text="A screenshot that shows which table and SDK are generating the most trace and exception telemetry.":::

2. Run the following query to get the specific app generating more traces than the others:
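
The step 2 query is elided from the diff (only its closing fence opens the next hunk); a sketch of a per-app trace count, assuming the classic `cloud_RoleName` column:

```kusto
// Trace volume per application role; the noisiest app sorts to the top.
traces
| where timestamp > ago(30d)
| summarize Count = count() by cloud_RoleName
| sort by Count desc
```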
@@ -251,7 +251,7 @@ To determine the factors contributing to the costs, follow these steps:

```

-:::image type="content" source="media/troubleshoot-high-ingestion/app-generating-more-traces.png" alt-text="A screenshot that shows which app is generating the most traces.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/app-generating-more-traces.png" alt-text="A screenshot that shows which app is generating the most traces.":::

3. Refine the query to include that specific app and generate a count of records per each individual message:
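
A sketch of the refined step 3 query described above (the role-name filter is a placeholder):

```kusto
// Count of records per distinct message for the noisy app.
traces
| where timestamp > ago(30d)
| where cloud_RoleName == "<noisy-app>" // hypothetical placeholder
| summarize Count = count() by message
| sort by Count desc
```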
@@ -266,7 +266,7 @@ To determine the factors contributing to the costs, follow these steps:

The result can show the specific message increasing ingestion costs:

-:::image type="content" source="media/troubleshoot-high-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records per each individual message.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/app-message-count.png" alt-text="A screenshot that shows a count of records per each individual message.":::

### Scenario 3: Reach daily cap unexpectedly
@@ -280,7 +280,7 @@ customEvents

This analysis indicates that certain events started being ingested on September 4 and quickly became noisy:

-:::image type="content" source="media/troubleshoot-high-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::
+:::image type="content" source="media/troubleshoot-high-data-ingestion/custom-events.png" alt-text="A screenshot that shows a count of custom events.":::

## Reduce data ingestion costs
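
The `customEvents` query is only hinted at by the hunk header; a sketch that would surface the September 4 spike as a daily count per event name:

```kusto
// Daily record count per custom event name; a new high-volume name stands out.
customEvents
| where timestamp > ago(30d)
| summarize Count = count() by bin(timestamp, 1d), name
| sort by timestamp asc, Count desc
```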
