### 2.1 Establish a Monitoring Operating Model

A strong operational model ensures consistency, ownership, and accountability.

#### Key components:
* Defined roles (Ops team, product owners, data engineers, architects)
* Process workflows for incident response
* Standardized metric definitions (creating a baseline with trends)
* Log review cadence (daily/weekly/monthly)
* Change management and version tracking
* Documentation of expected agent behaviors and constraints
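As a minimal sketch of the "standardized metric definitions" component, the snippet below models a metric registry with agreed baselines for trend comparison. The metric names, units, and baseline values are illustrative assumptions, not prescribed by this module.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """A standardized metric definition with an agreed baseline."""
    name: str
    unit: str
    baseline: float      # agreed "normal" value that trends are compared against
    review_cadence: str  # daily / weekly / monthly

# Illustrative registry; real names and baselines come from your operating model.
METRICS = [
    MetricDefinition("agent_latency_p95", "ms", baseline=1200.0, review_cadence="daily"),
    MetricDefinition("tool_invocation_failures", "count/day", baseline=5.0, review_cadence="daily"),
    MetricDefinition("escalation_rate", "%", baseline=8.0, review_cadence="weekly"),
]

def deviation_from_baseline(metric: MetricDefinition, observed: float) -> float:
    """Relative deviation of an observed value from the metric's baseline."""
    return (observed - metric.baseline) / metric.baseline
```

Keeping definitions like these in one reviewed place is what makes "creating a baseline with trends" repeatable across teams.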
### 2.2 Configure Guardrails and Threshold Alerts

* Set thresholds for latency, exception volume, and unusual activity.
* Create automated alerts for guardrail triggers or tool invocation failures.
* Monitor for unexpected spikes in prompts indicating potential misuse.
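The guardrail bullets above can be sketched as a simple threshold check. The field names and limit values are assumptions; in a real deployment these checks would typically live in Azure Monitor alert rules rather than application code.

```python
# Illustrative thresholds; tune from your baseline metrics.
THRESHOLDS = {
    "latency_ms": 2000,         # p95 latency ceiling
    "exceptions_per_hour": 10,  # exception-volume guardrail
    "prompts_per_minute": 120,  # a spike here may indicate misuse
}

def evaluate_guardrails(sample: dict) -> list[str]:
    """Return the names of any thresholds the telemetry sample breaches."""
    return [name for name, limit in THRESHOLDS.items()
            if sample.get(name, 0) > limit]

alerts = evaluate_guardrails(
    {"latency_ms": 2500, "exceptions_per_hour": 3, "prompts_per_minute": 150}
)
# alerts -> ["latency_ms", "prompts_per_minute"]
```

Each breached name would then drive an automated alert to the owning team defined in the operating model.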
### 2.3 Conduct Regular Quality Evaluations

* Human-in-the-loop spot checks
* Scenario-based evaluations
* Review low-confidence outputs
* Validate alignment with business rules or compliance requirements
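Routing low-confidence outputs to a human-in-the-loop review queue can be sketched as below; the 0.7 confidence cutoff and the record shape are assumptions for illustration.

```python
# Assumed cutoff: outputs scoring below this go to a human reviewer.
REVIEW_THRESHOLD = 0.7

def select_for_review(outputs: list[dict]) -> list[dict]:
    """Pick outputs whose model confidence falls below the review threshold."""
    return [o for o in outputs if o.get("confidence", 0.0) < REVIEW_THRESHOLD]

queue = select_for_review([
    {"id": 1, "confidence": 0.95},
    {"id": 2, "confidence": 0.55},  # routed to a human reviewer
    {"id": 3, "confidence": 0.62},  # routed to a human reviewer
])
```

The same filter can feed scenario-based evaluations by sampling from the review queue instead of from all traffic.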
### 2.4 Continuously Improve Based on Insights

* Analyze logs and telemetry to find failure patterns.
* Identify training needs for users.
* Recommend prompt engineering improvements.
* Propose workflow adjustments or retraining of custom models (if applicable).
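Finding failure patterns in logs, the first bullet above, can be sketched as a frequency count over telemetry records; the `status` and `failure_reason` field names are assumptions about your log schema.

```python
from collections import Counter

def failure_patterns(logs: list[dict], top_n: int = 3) -> list[tuple[str, int]]:
    """Count failure reasons across log records and return the most common."""
    reasons = Counter(r["failure_reason"] for r in logs
                      if r.get("status") == "failure")
    return reasons.most_common(top_n)

logs = [
    {"status": "failure", "failure_reason": "tool_timeout"},
    {"status": "success"},
    {"status": "failure", "failure_reason": "tool_timeout"},
    {"status": "failure", "failure_reason": "guardrail_block"},
]
# failure_patterns(logs) -> [("tool_timeout", 2), ("guardrail_block", 1)]
```

The top recurring reasons then drive the other bullets: user training, prompt improvements, or workflow changes.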
## 3. Recommended Tools for Monitoring AI Agents

#### Azure Monitor provides:
* Application and agent telemetry
* Dashboards for real-time metrics
* Alert rules for anomalies
* Integration with Log Analytics Workspaces
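As a sketch of the Log Analytics integration, the snippet below builds a KQL query (here just a string) that counts agent exceptions per hour. `AppExceptions` and `TimeGenerated` are standard workspace-based Application Insights names, but the role-name filter is a hypothetical value, and the commented client call is only an outline of how the `azure-monitor-query` package would run it.

```python
# Illustrative KQL for a Log Analytics workspace; "my-agent" is a
# hypothetical role name for the agent's app component.
KQL_EXCEPTIONS_PER_HOUR = """
AppExceptions
| where AppRoleName == "my-agent"  // hypothetical role name
| summarize exceptions = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
""".strip()

# With the azure-monitor-query package, this would be executed roughly as:
#   from azure.identity import DefaultAzureCredential
#   from azure.monitor.query import LogsQueryClient
#   client = LogsQueryClient(DefaultAzureCredential())
#   client.query_workspace(workspace_id, KQL_EXCEPTIONS_PER_HOUR, timespan=...)
```

The same query, saved as an alert rule, gives the anomaly alerting listed above without any custom code.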
#### Use cases:

* Monitor agent workflows built with Power Platform or custom services.