---
title: Auditing & Logging for the Employee Self-Service agent
f1.keywords: NOCSH
ms.author: semani
author: sthilkumar
manager: swgulati
ms.reviewer: semani
ms.date: 02/23/2026
audience: Admin
ms.topic: article
ms.service: microsoft-365-copilot
ms.custom: ess-agent
ms.localizationpriority: medium
ms.collection: m365copilot
description: Learn how to set up auditing and logging for the Employee Self-Service agent.
appliesto:
- ✅ Microsoft 365 Copilot
---
# Auditing and logging

The Employee Self-Service agent is built on Copilot and Power Platform. You can use the auditing capabilities of these platforms to log and monitor usage.

## Audit with Microsoft Purview

Microsoft Purview provides a comprehensive audit record of end-user activities when interacting with agents. [Learn how to audit user interactions with agents in Microsoft Purview.](/power-platform/release-plan/2024wave2/microsoft-copilot-studio/audit-user-interactions-agents-purview)

For more information, see [Audit logs for Copilot and AI activities](/purview/audit-copilot).

## Audit with Azure Application Insights

You can [use Azure Application Insights to capture telemetry for custom Copilot agents.](/microsoft-copilot-studio/advanced-bot-framework-composer-capture-telemetry?tabs=webApp)

## Security information and event management (SIEM)

For SIEM integrations, use Application Insights or the Power Platform Dataverse auditing capabilities. [Integrate Microsoft Sentinel and Power Platform to better monitor and protect your low-code solutions.](https://www.microsoft.com/power-platform/blog/power-apps/integrating-microsoft-sentinel-and-power-platform-to-better-monitor-and-protect-your-low-code-solutions/)
---
title: Reviewing end-user feedback for Employee Self-Service agent
f1.keywords: NOCSH
ms.author: semani
author: sthilkumar
manager: swgulati
ms.reviewer: semani
ms.date: 02/24/2026
audience: Admin
ms.topic: article
ms.service: microsoft-365-copilot
ms.custom: ess-agent
ms.localizationpriority: medium
ms.collection: m365copilot
description: Learn how to review end-user feedback for the Employee Self-Service agent.
appliesto:
- ✅ Microsoft 365 Copilot
---
# Reviewing end-user feedback

Feedback about Copilot is collected from Copilot experiences, such as when a user selects a thumbs up or thumbs down on a response from Copilot in Microsoft 365 apps. After a user selects one of the thumb options, the feedback pane appears and asks for more information, including what the user liked (thumbs up) or what went wrong (thumbs down).

> [!NOTE]
> If you're using Copilot for work or school, your IT admin can turn off feedback. If the thumb options don't appear next to Copilot responses, or if selecting a thumb doesn't open the feedback pane, your organization's IT admin may have feedback turned off.

You can view feedback in the [Microsoft feedback portal](https://feedbackportal.microsoft.com/feedback). To view feedback submitted to Microsoft, select **My feedback** and sign in. If you're using Copilot at work or school, your IT admin might not allow use of the feedback portal.

Administrators can view end-user feedback on the **Product feedback** page in the Microsoft admin center (MAC) under the **Health** section.
description: Learn how to access usage analytics for your Employee Self-Service agent. Administrators can monitor usage for any Employee Self-Service agents configured in their environment.
appliesto:
- ✅ Microsoft 365 Copilot
---
Monitoring the usage analytics of the Employee Self-Service agent should be part of your ongoing operations.

There are two approaches to consuming analytics:

1. Systematically monitor agent usage, effectiveness, quality, and satisfaction, which falls within the responsibilities of the service owner and/or creator. [Learn more](/microsoft-copilot-studio/analytics-overview) about Copilot Studio analytics.
1. Review agent usage, satisfaction scores, and other metrics to assess the agent's return on investment. [Learn more](/viva/insights/org-team-insights/copilot-dashboard) about the Copilot Analytics Dashboard.

## Measure what matters

Employee Self-Service telemetry is designed to help organizations move beyond basic usage reporting and toward **operational clarity, trust, and continuous improvement**. While Employee Self-Service collects a single, consistent telemetry stream, **different stakeholders interpret that telemetry through different lenses**, depending on the decisions they're responsible for making.

This article explains how to interpret Employee Self-Service telemetry for each stakeholder in your organization, using the same storytelling model that Microsoft's product and engineering teams use internally. This information allows customers and product teams to align on what "good" looks like and what actions to take next.

## One telemetry stream, different lenses

Employee Self-Service telemetry follows a simple but powerful narrative framework:

**Intent > Behavior > Outcome > Action**

This model ensures telemetry is decision-oriented as well as descriptive. The same data answers different questions depending on who is looking at it.

|Story element |What it means in Employee Self-Service |
|---|---|
|**Intent**|What employees are trying to accomplish. |
|**Behavior**|How the Employee Self-Service agent is used. |
|**Outcome**|Business and experience impact. |
|**Action**|What should change as a result. |

## Stakeholder-based interpretation guide

### 1. Executive & Business Leaders

**Primary question**

*Is the Employee Self-Service agent compounding value at scale, and where should we invest next?*

Executives should focus on **outcome-level telemetry**, not raw interaction counts.

**Recommended telemetry signals**

- Active usage and rollout progression
- Conversion success rate
- Reduction in assisted support
- Trend alignment with business goals (for example, ticket deflection, time saved, productivity gain, employee satisfaction)

**How to interpret**

- Rising usage without corresponding success improvements may indicate trust gaps
- Stable success rates with expanding usage suggest the Employee Self-Service agent is scaling reliably
- Drops in success or spikes in assisted support signal investment needs

**Typical actions**

- Prioritize funding toward friction areas surfaced by telemetry
- Align Employee Self-Service agent expansion to scenarios with measurable business value

**Common business goals**

- Ticket deflection across implemented business verticals to reduce operational cost
- Time saved and assisted savings for employees by reducing manual support interactions
- Return on investment (ROI) from Employee Self-Service deployment and expansion
- Sustained adoption beyond pilot phases (trust and repeat usage)
- Predictable scale without increasing support or incident load
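The rollup from raw interactions to these outcome-level signals can be sketched as follows. This is an illustrative example only: the record shape and field names (`resolved`, `escalated_to_support`) are assumptions for the sketch, not the actual Employee Self-Service telemetry schema.

```python
# Hypothetical, simplified telemetry records; the real Employee Self-Service
# schema and field names will differ.
records = [
    {"session": "s1", "resolved": True,  "escalated_to_support": False},
    {"session": "s2", "resolved": True,  "escalated_to_support": False},
    {"session": "s3", "resolved": False, "escalated_to_support": True},
    {"session": "s4", "resolved": True,  "escalated_to_support": False},
]

def outcome_summary(records):
    """Roll raw interactions up to the outcome-level signals executives review."""
    total = len(records)
    resolved = sum(r["resolved"] for r in records)
    escalated = sum(r["escalated_to_support"] for r in records)
    return {
        "active_sessions": total,
        "success_rate": resolved / total,            # conversion success rate
        "assisted_support_rate": escalated / total,  # lower is better (deflection)
    }

summary = outcome_summary(records)
print(summary)
```

Tracking these two ratios over time, rather than session counts alone, is what distinguishes scaling reliably from scaling with hidden trust gaps.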
### 2. Product & Service Owners

**Primary question**

*What should we fix or improve next to move outcomes, not just metrics?*

Product and service stakeholders focused on each business domain or vertical, such as HR, IT, and Facilities, should interpret telemetry as **signals of friction**, not performance grades.

**Recommended telemetry signals**

- Scenario-level success and failure patterns
- Drop-off and retry behaviors
- Error and fallback indicators
- Evaluation (eval) regression trends

**How to interpret**

- Concentrated failures often indicate missing configuration, connector gaps, or unclear responses
- Repeated retries imply intent is understood but fulfillment is failing
- Regression after changes indicates quality or performance tradeoffs

**Typical actions**

- Create targeted backlog items (operational guidance, evaluations, fixes)
- Expand evaluation coverage for high-risk scenarios
- Adjust prompts, orchestration, or data sources

**Common business goals**

- Improve conversation success rate for high-volume scenarios
- Reduce deployment and adoption friction detected in telemetry
- Prevent regressions when prompts, knowledge, or integrations change
- Focus investment on highest-impact scenarios, not vanity metrics
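A friction review like the one described above can be sketched as a scenario-level report. The scenario names, field names, and thresholds below are illustrative assumptions, not values from the Employee Self-Service product.

```python
# Hypothetical scenario-level counts; names and thresholds are illustrative.
scenarios = {
    "hr.leave_balance":  {"attempts": 200, "failures": 10, "retries": 12},
    "it.password_reset": {"attempts": 150, "failures": 60, "retries": 90},
    "fac.desk_booking":  {"attempts": 80,  "failures": 6,  "retries": 5},
}

def friction_report(scenarios, fail_threshold=0.2, retry_threshold=0.3):
    """Surface scenarios whose failure or retry rates suggest friction."""
    flagged = {}
    for name, s in scenarios.items():
        fail_rate = s["failures"] / s["attempts"]
        retry_rate = s["retries"] / s["attempts"]
        reasons = []
        if fail_rate > fail_threshold:
            reasons.append("concentrated failures")  # config or connector gaps?
        if retry_rate > retry_threshold:
            reasons.append("repeated retries")       # fulfillment failing?
        if reasons:
            flagged[name] = reasons
    return flagged

print(friction_report(scenarios))
```

The output feeds directly into the typical actions listed above: each flagged scenario becomes a targeted backlog item rather than a generic "improve quality" task.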
#### Interpreting telemetry by verticals

This section is designed to help you operationalize Employee Self-Service telemetry across your organization by anchoring analytics to real scenarios, clear stakeholder questions, and concrete actions.

**How to interpret for facilities**

- **High completion + low retries** = Clear guidance and fulfillment
- **Repeated queries** = Outdated or unclear facilities content
- **Frequent handoffs** = Integration or workflow gaps
- **Negative sentiment** = Experience or clarity issues

**Recommended actions for facilities**

- Improve knowledge freshness and clarity
- Identify top "must-be-right" facilities scenarios for evals
- Use telemetry to justify integration investments
- Track sentiment trends as a proxy for workplace trust

### IT Administrators & Makers

**Primary question**

*Is the Employee Self-Service agent configured correctly, stable, and ready to scale?*

Administrators and makers should interpret telemetry as **health and readiness indicators**, not adoption metrics.

**Recommended telemetry signals**

- Error rates and configuration warnings
- Connector and dependency health
- Performance and latency indicators
- ALM and environment readiness signals

**How to interpret**

- Persistent errors usually indicate misconfiguration or environment issues
- Latency spikes suggest throttling, dependency, or orchestration issues
- Clean telemetry during pilots increases confidence to expand rollout

**Typical actions**

- Address configuration or dependency gaps
- Validate environments before promoting to production
- Coordinate changes using ALM and readiness checks
## Employee Self-Service analytics and evaluations: A unified playbook to accelerate time-to-value (TTV)

### Why combine telemetry and evaluations?

Telemetry and evaluations solve different (but complementary) problems:

- Telemetry tells you what is happening in real usage: what employees are trying to do, what they actually do, and what outcomes are being produced.
- Evaluations (evals) tell you whether the agent behaves the way you expect: accurately, consistently, and safely, using repeatable, automated test cases that help validate improvements and catch regressions.

When used together, they create a practical loop:

- Telemetry identifies where to focus > evaluations verify quality and prevent regressions > telemetry confirms impact at scale > repeat.

This loop is what accelerates TTV: you don't just *look at dashboards* or *run tests*; you continuously turn signals into actions.

The shared operating model:

**Intent ➡️ Behavior ➡️ Outcome ➡️ Action**

Evals plug into the same model by providing repeatable evidence about whether the agent can reliably deliver the intended outcomes before you expose a change to broad employee usage.
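A minimal eval harness of the kind described above can be sketched as golden prompts with expected key facts. Everything here is a hypothetical stand-in: `ask_agent` is a stub, and a real deployment would call the actual agent and use richer scoring than substring checks.

```python
def ask_agent(prompt: str) -> str:
    """Stand-in stub; a real harness would call the deployed agent."""
    canned = {
        "How many vacation days do I get?":
            "Full-time employees accrue 20 vacation days per year.",
        "How do I reset my password?":
            "Use the self-service portal linked from the IT help desk page.",
    }
    return canned.get(prompt, "I'm not sure.")

# Golden cases: prompts that must always contain a key fact in the answer.
EVAL_CASES = [
    {"prompt": "How many vacation days do I get?", "must_contain": "20 vacation days"},
    {"prompt": "How do I reset my password?", "must_contain": "self-service portal"},
]

def run_evals(cases, agent):
    """Return pass rate and failing prompts; run before exposing changes broadly."""
    failures = [c["prompt"] for c in cases
                if c["must_contain"].lower() not in agent(c["prompt"]).lower()]
    return 1 - len(failures) / len(cases), failures

pass_rate, failures = run_evals(EVAL_CASES, ask_agent)
print(pass_rate, failures)
```

Running this suite on every prompt, knowledge, or integration change closes the loop: telemetry nominates the "must-be-right" scenarios, and the eval harness guards them against regression.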