Commit 56d1110 ("Bullet fix"), 1 parent: ec28922

1 file changed, 52 additions and 52 deletions:
learn-pr/wwl/analyze-monitor-tune-ai-powered-business-solutions/includes/2-recommend-process-tools-monitoring-agents.md
### 2.1 Establish a Monitoring Operating Model

A strong operational model ensures consistency, ownership, and accountability.
#### Key components:

* Defined roles (Ops team, product owners, data engineers, architects)
* Process workflows for incident response
* Standardized metric definitions (creating a baseline with trends)
* Log review cadence (daily/weekly/monthly)
* Change management and version tracking
* Documentation of expected agent behaviors and constraints
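
The components above can be captured as data so ownership and cadence stay documented and reviewable. The sketch below is a minimal illustration; every role name, log category, and frequency in it is a hypothetical assumption, not a prescribed standard.

```python
# A monitoring operating model captured as a reviewable data structure.
# All names and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MonitoringOperatingModel:
    roles: dict            # responsibility -> owning team
    review_cadence: dict   # log/report category -> review frequency
    version: str = "1.0"   # tracked under change management


model = MonitoringOperatingModel(
    roles={
        "incident_response": "Ops team",
        "metric_definitions": "Data engineers",
        "agent_behavior_docs": "Architects",
    },
    review_cadence={
        "agent_logs": "daily",
        "usage_trends": "weekly",
        "compliance_reports": "monthly",
    },
)

print(model.review_cadence["agent_logs"])  # daily
```

Keeping the model in version control makes ownership changes auditable alongside the agents themselves.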
### 2.2 Configure Guardrails and Threshold Alerts

* Set thresholds for latency, exception volume, and unusual activity.
* Create automated alerts for guardrail triggers or tool invocation failures.
* Monitor for unexpected spikes in prompts indicating potential misuse.
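
A threshold check of this kind can be sketched in a few lines. The metric names, limits, and sample values below are illustrative assumptions; in practice the thresholds would come from the baseline established in section 2.1.

```python
# Flag metrics that breached their configured threshold in one interval.
def check_thresholds(metrics, thresholds):
    """Return the names of metrics whose value exceeds the limit."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]


# Hypothetical snapshot for one agent over one monitoring interval.
metrics = {"latency_ms_p95": 2400, "exceptions": 3, "prompts_per_min": 180}
thresholds = {"latency_ms_p95": 2000, "exceptions": 10, "prompts_per_min": 120}

breaches = check_thresholds(metrics, thresholds)
print(breaches)  # ['latency_ms_p95', 'prompts_per_min'] -> raise an alert
```

The prompt-rate breach here is the "unexpected spike" case: a sudden jump in prompts per minute can indicate misuse and warrants an automated alert rather than a manual review.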
### 2.3 Conduct Regular Quality Evaluations

* Human-in-the-loop spot checks
* Scenario-based evaluations
* Review low-confidence outputs
* Validate alignment with business rules or compliance requirements
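
Routing low-confidence outputs to human review can be as simple as filtering on a confidence score. The 0.7 cutoff and record fields below are illustrative assumptions.

```python
# Select agent outputs below a confidence cutoff for human-in-the-loop review.
def select_for_review(outputs, min_confidence=0.7):
    """Return the outputs whose confidence falls below the cutoff."""
    return [o for o in outputs if o["confidence"] < min_confidence]


# Hypothetical scored outputs from one evaluation run.
outputs = [
    {"id": 1, "confidence": 0.95},
    {"id": 2, "confidence": 0.40},
    {"id": 3, "confidence": 0.65},
]

review_queue = select_for_review(outputs)
print([o["id"] for o in review_queue])  # [2, 3]
```

Tuning the cutoff trades reviewer workload against the risk of unreviewed errors reaching users.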
6464

6565
### 2.4 Continuously Improve Based on Insights
6666

67-
*Analyze logs and telemetry to find failure patterns.
67+
* Analyze logs and telemetry to find failure patterns.
6868

69-
*Identify training needs for users.
69+
* Identify training needs for users.
7070

71-
*Recommend prompt engineering improvements.
71+
* Recommend prompt engineering improvements.
7272

73-
*Propose workflow adjustments or retraining of custom models (if applicable).
73+
* Propose workflow adjustments or retraining of custom models (if applicable).
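
The first step above, mining logs for failure patterns, often starts with a simple frequency count over error categories. The log records and category names below are hypothetical.

```python
# Count recurring failure categories across agent telemetry records.
from collections import Counter

# Hypothetical log records; error is None for successful runs.
logs = [
    {"agent": "invoice-bot", "error": "connector_timeout"},
    {"agent": "invoice-bot", "error": "connector_timeout"},
    {"agent": "hr-bot", "error": "guardrail_trigger"},
    {"agent": "invoice-bot", "error": None},
]

failures = Counter(r["error"] for r in logs if r["error"])
print(failures.most_common(1))  # [('connector_timeout', 2)]
```

A dominant category (here, connector timeouts) points at where to focus the next improvement cycle, whether that is a workflow adjustment, a prompt change, or user training.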
## 3. Recommended Tools for Monitoring AI Agents

#### Azure Monitor provides:

* Application and agent telemetry
* Dashboards for real-time metrics
* Alert rules for anomalies
* Integration with Log Analytics Workspaces
#### Use cases:

* Monitor agent workflows built with Power Platform or custom services.
* Track errors, latency, throughput, and connector failures.
* Build KQL-based queries for deep diagnostics.
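
As a sense of what such a diagnostic surfaces, the sketch below computes a tail-latency percentile in plain Python over sample telemetry, the same figure a KQL `percentile()` aggregation would return over Log Analytics data. The latency values are illustrative.

```python
# Nearest-rank percentile over latency samples, to expose the slow tail.
def percentile(values, p):
    """Return the nearest-rank p-th percentile of a list of samples."""
    ordered = sorted(values)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]


# Hypothetical per-request latencies (milliseconds) for one agent.
latencies_ms = [120, 180, 150, 900, 200, 170, 160, 2500, 140, 130]

print(percentile(latencies_ms, 95))  # 2500 -> the slow tail an average would hide
```

Averages hide outliers like the 2500 ms request; percentile-based diagnostics are why deep queries matter beyond dashboard summaries.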
### 3.2 Microsoft 365 Admin Analytics (Usage & Adoption Trends)

#### Useful for:

* Understanding agent usage volume
* Tracking adoption and engagement
* Identifying departments with low usage or operational barriers
* Measuring improvements week-over-week
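
Week-over-week measurement reduces to a percent-change calculation over two usage counts. The counts below are illustrative.

```python
# Percent change in agent usage from last week to this week.
def wow_change(previous, current):
    """Return the week-over-week percent change, rounded to one decimal."""
    return round((current - previous) / previous * 100, 1)


# Hypothetical weekly invocation counts for one department.
print(wow_change(400, 460))  # 15.0 -> adoption is trending up
```

Tracking this per department is what makes the "low usage or operational barriers" bullet actionable: a flat or negative trend in one department flags where to investigate.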
### 3.3 Copilot & Agent Analytics Dashboards

#### When available in an organization's tenant, Copilot analytics can provide:

* Agent invocation frequency
* Task completion trends
* Common user queries
* Productivity pattern insights
* Error or guardrail-trigger events
### 3.4 Power Platform Admin Center (Environment-Level Monitoring)

#### Provides:

* Environment health
* Connector usage and limits
* Flow telemetry (for agents using workflows)
* DLP rule impact visibility
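
Connector usage against limits is worth checking before a limit is actually hit. The sketch below flags connectors approaching their cap; the connector names, call counts, limits, and the 80% warning ratio are all illustrative assumptions, not actual Power Platform quotas.

```python
# Flag connectors whose usage is approaching the environment limit.
def near_limit(usage, limits, warn_ratio=0.8):
    """Return {connector: usage ratio} for connectors at/above warn_ratio."""
    return {name: calls / limits[name]
            for name, calls in usage.items()
            if calls / limits[name] >= warn_ratio}


# Hypothetical call counts and per-connector limits for one environment.
usage = {"SharePoint": 950, "SQL": 300, "HTTP": 40}
limits = {"SharePoint": 1000, "SQL": 1000, "HTTP": 500}

print(near_limit(usage, limits))  # {'SharePoint': 0.95}
```

Surfacing the 95% SharePoint ratio ahead of time lets the Ops team raise the limit or throttle the agent before workflows start failing.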
### 3.5 Foundry or Organizational Observability Platforms

#### Enterprises may adopt centralized observability platforms (example: Foundry-like solutions, if present in the environment) to unify:

* Multi-system logs
* Event traces
* Cross-environment dashboards
* AI model execution insights

These platforms reduce fragmentation and provide a single-pane-of-glass view for complex agent ecosystems.
### 3.6 Custom Dashboards for Enterprise AI Agents

#### Solution architects often design:

* KPI dashboards in Power BI
* Heatmaps of usage
* Drift detection visualizations
* Compliance trend reports

#### Example: Agent Health Summary
#### Best Practices

* Always centralize logs.
* Standardize naming conventions.
* Define clear SLAs for agent responsiveness.
* Automate alerting for critical business workflows.
* Integrate monitoring outputs into monthly operational reviews.
## References