Commit 0f6d069

Merge pull request #30894 from MicrosoftDocs/main
[AutoPublish] main to live - 02/24 07:42 PST | 02/24 21:12 IST
2 parents c597833 + ea69a20 commit 0f6d069

3 files changed: 294 additions & 20 deletions

Lines changed: 40 additions & 0 deletions
---
title: Auditing & Logging for the Employee Self-Service agent
f1.keywords: NOCSH
ms.author: semani
author: sthilkumar
manager: swgulati
ms.reviewer: semani
ms.date: 02/23/2026
audience: Admin
ms.topic: article
ms.service: microsoft-365-copilot
ms.custom: ess-agent
ms.localizationpriority: medium
ms.collection: m365copilot
description: Learn how to set up auditing and logging for the Employee Self-Service agent.
appliesto:
- ✅ Microsoft 365 Copilot
---

# Auditing and logging

The Employee Self-Service agent is built on Copilot and Power Platform. You can use auditing capabilities from these platforms to log and monitor usage.

## Audit with Microsoft Purview

Microsoft Purview provides a comprehensive audit record of end-user activities when users interact with agents. [Learn how to audit user interactions with agents in Microsoft Purview.](/power-platform/release-plan/2024wave2/microsoft-copilot-studio/audit-user-interactions-agents-purview)

[Audit logs for Copilot and AI activities](/purview/audit-copilot).

## Audit with Azure Application Insights

You can [use Azure Application Insights to capture telemetry for custom Copilot agents.](/microsoft-copilot-studio/advanced-bot-framework-composer-capture-telemetry?tabs=webApp)

## Security information and event management (SIEM)

For SIEM integrations, use Application Insights or the Power Platform Dataverse auditing capabilities.

[Manage Dataverse auditing](/power-platform/admin/manage-dataverse-auditing)

[Integrate Microsoft Sentinel and Power Platform to better monitor and protect your low-code solutions.](https://www.microsoft.com/power-platform/blog/power-apps/integrating-microsoft-sentinel-and-power-platform-to-better-monitor-and-protect-your-low-code-solutions/?msockid=1614e9ffd18265002a76fcabd0016456)
Lines changed: 29 additions & 0 deletions
---
title: Reviewing end-user feedback for Employee Self-Service agent
f1.keywords: NOCSH
ms.author: semani
author: sthilkumar
manager: swgulati
ms.reviewer: semani
ms.date: 02/24/2026
audience: Admin
ms.topic: article
ms.service: microsoft-365-copilot
ms.custom: ess-agent
ms.localizationpriority: medium
ms.collection: m365copilot
description: Learn how to review end-user feedback for the Employee Self-Service agent.
appliesto:
- ✅ Microsoft 365 Copilot
---

# Reviewing end-user feedback

Feedback about Copilot is collected from Copilot experiences, such as when a user selects thumbs up or thumbs down on a response from Copilot in Microsoft 365 apps. After a user selects one of the thumb options, the feedback pane appears and asks for more information, including what the user liked (thumbs up) or what went wrong (thumbs down).

> [!NOTE]
> If you use Copilot for work or school, the IT admin can turn off feedback. If the thumb options don't appear next to Copilot responses, or if selecting a thumb doesn't open the feedback pane, your organization's IT admin may have turned off feedback.

You can view feedback in the [Microsoft feedback portal](https://feedbackportal.microsoft.com/feedback). To view feedback submitted to Microsoft, select **My feedback** and sign in. If you use Copilot at work or school, your IT admin might not allow use of the feedback portal.

Administrators can view end-user feedback on the **Product feedback** page in the Microsoft admin center (MAC), under the **Health** section.

copilot/employee-self-service/usage-analytics.md

Lines changed: 225 additions & 20 deletions

ms.author: heidip
author: MicrosoftHeidi
manager: dansimp
ms.reviewer: semani
ms.date: 02/24/2026
audience: Admin
ms.topic: article
ms.service: microsoft-365-copilot
ms.custom: ess-agent
ms.localizationpriority: medium
ms.collection: m365copilot
description: Learn how to access usage analytics for your Employee Self-Service agent. Administrators can monitor usage for any Employee Self-Service agents configured in their environment.
appliesto:
- ✅ Microsoft 365 Copilot
---
Monitoring the usage analytics of the Employee Self-Service agent should be part of …

There are two approaches to consuming analytics:

1. Systematically monitor agent usage, effectiveness, quality, and satisfaction. These tasks fall within the responsibilities of the service owner and/or creator. [Learn more](/microsoft-copilot-studio/analytics-overview) about Copilot Studio analytics.
1. Review agent usage, satisfaction scores, and other metrics to assess the agent's return on investment. [Learn more](/viva/insights/org-team-insights/copilot-dashboard) about the Copilot dashboard.

## Measure what matters

Employee Self-Service telemetry is designed to help organizations move beyond basic usage reporting and toward **operational clarity, trust, and continuous improvement**. While Employee Self-Service collects a single, consistent telemetry stream, **different stakeholders interpret that telemetry through different lenses**, depending on the decisions they're responsible for making.

This article explains how to interpret Employee Self-Service telemetry for each stakeholder in your organization, using the same storytelling model that Microsoft's product and engineering teams use internally. This shared model lets customers and product teams align on what "good" looks like and what actions to take next.

## One telemetry stream, different lenses

Employee Self-Service telemetry follows a simple but powerful narrative framework:

**Intent > Behavior > Outcome > Action**

This model ensures telemetry is decision-oriented as well as descriptive. The same data answers different questions depending on who is looking at it.

|Story element |What it means in Employee Self-Service |
|--------------|---------------------------------------------|
|**Intent** |What employees are trying to do. |
|**Behavior** |How the Employee Self-Service agent is used. |
|**Outcome** |Business and experience impact. |
|**Action** |What should change as a result. |
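The framework in the table can be sketched as a small roll-up over flat telemetry events. Everything below is illustrative: the event shape, field names, and the 0.8 success threshold are assumptions for the sketch, not actual Employee Self-Service telemetry fields.

```python
# Hypothetical flat telemetry events; real Employee Self-Service
# telemetry fields will differ -- this only illustrates the framing.
events = [
    {"intent": "reset_password", "completed": True,  "handed_off": False},
    {"intent": "reset_password", "completed": False, "handed_off": True},
    {"intent": "leave_balance",  "completed": True,  "handed_off": False},
]

def summarize(events):
    """Roll events up into the Intent > Behavior > Outcome > Action story."""
    story = {}
    for intent in {e["intent"] for e in events}:
        rows = [e for e in events if e["intent"] == intent]
        completed = sum(e["completed"] for e in rows)
        outcome = completed / len(rows)                       # Outcome: success rate
        action = "investigate" if outcome < 0.8 else "scale"  # Action: next step
        story[intent] = {
            "behavior": len(rows),   # Behavior: how often the agent is used
            "outcome": round(outcome, 2),
            "handoffs": sum(e["handed_off"] for e in rows),
            "action": action,
        }
    return story

print(summarize(events))
```

The point of the frame is that the same events yield a per-intent story, not just a usage count.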
## Stakeholder-based interpretation guide

### 1. Executive and business leaders

**Primary question**

*Is the Employee Self-Service agent compounding value at scale, and where should we invest next?*

Executives should focus on **outcome-level telemetry**, not raw interaction counts.

**Recommended telemetry signals**

- Active usage and rollout progression
- Conversation success rate
- Reduction in assisted support
- Trend alignment with business goals (for example, ticket deflection, time saved, productivity gain, employee satisfaction)

**How to interpret**

- Rising usage without corresponding success improvements may indicate trust gaps
- Stable success rates with expanding usage suggest the Employee Self-Service agent is scaling reliably
- Drops in success, or spikes in assisted support, signal investment needs

**Typical actions**

- Prioritize funding toward friction areas surfaced by telemetry
- Align Employee Self-Service agent expansion to scenarios with measurable business value

**Common business goals**

- Ticket deflection across implemented business verticals to reduce operational cost
- Time saved and assisted savings for employees by reducing manual support interactions
- Return on investment (ROI) from Employee Self-Service deployment and expansion
- Sustained adoption beyond pilot phases (trust and repeat usage)
- Predictable scale without increasing support or incident load

**What to use when**

[Copilot Analytics introduction](/viva/insights/copilot-analytics-introduction#which-tool-should-i-use-when)
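The executive interpretation heuristics can be sketched as a small triage function. The deltas, thresholds, and labels below are illustrative assumptions, not product-defined metrics.

```python
def interpret_trends(usage_delta, success_delta, assisted_delta):
    """Map period-over-period deltas (fractions, e.g. 0.10 = +10%)
    to the interpretation heuristics above. Thresholds are illustrative."""
    if success_delta < -0.05 or assisted_delta > 0.05:
        return "investment needed"      # success dropping or assisted support spiking
    if usage_delta > 0.10 and success_delta < 0.0:
        return "possible trust gap"     # usage rising, success not keeping up
    if usage_delta > 0.0 and abs(success_delta) <= 0.05:
        return "scaling reliably"       # stable success while usage expands
    return "monitor"

# Usage up 25%, success slightly down, assisted support flat.
print(interpret_trends(0.25, -0.02, 0.0))
```

A triage like this keeps executive reviews focused on outcomes rather than raw interaction counts.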
### 2. Product and service owners

**Primary question**

*What should we fix or improve next to move outcomes, not just metrics?*

Product and service stakeholders focused on a business vertical such as HR, IT, or Facilities should interpret telemetry as **signals of friction**, not performance grades.

**Recommended telemetry signals**

- Scenario-level success and failure patterns
- Drop-off and retry behaviors
- Error and fallback indicators
- Evaluation (eval) regression trends

**How to interpret**

- Concentrated failures often indicate missing configuration, connector gaps, or unclear responses
- Repeated retries imply intent is understood but fulfillment is failing
- Regression after changes indicates quality or performance tradeoffs

**Typical actions**

- Create targeted backlog items (operational guidance, evaluations, fixes)
- Expand evaluation coverage for high-risk scenarios
- Adjust prompts, orchestration, or data sources

**Common business goals**

- Improve conversation success rate for high-volume scenarios
- Reduce deployment and adoption friction detected in telemetry
- Prevent regressions when prompts, knowledge, or integrations change
- Focus investment on highest-impact scenarios, not vanity metrics
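The "concentrated failures vs. repeated retries" distinction can be sketched as a simple grouping over per-turn records. The record shape and the retry-dominance rule are illustrative assumptions.

```python
from collections import Counter

# Hypothetical per-turn records; field names are illustrative only.
turns = [
    {"scenario": "request_access", "failed": True,  "retry": True},
    {"scenario": "request_access", "failed": True,  "retry": True},
    {"scenario": "request_access", "failed": True,  "retry": False},
    {"scenario": "leave_balance",  "failed": False, "retry": False},
]

failures = Counter(t["scenario"] for t in turns if t["failed"])
retries  = Counter(t["scenario"] for t in turns if t["retry"])

# Failures dominated by retries suggest fulfillment (not intent) is breaking;
# otherwise suspect configuration or connector gaps.
for scenario, count in failures.most_common():
    kind = "fulfillment gap" if retries[scenario] >= count / 2 else "config/connector gap"
    print(scenario, count, kind)
```

Grouping failures by scenario before triage keeps the backlog targeted rather than metric-driven.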
#### Interpreting telemetry by vertical

This section helps you operationalize Employee Self-Service telemetry across your organization by anchoring analytics to real scenarios, clear stakeholder questions, and concrete actions.

***Human Resources (HR)***

**Typical scenarios**

- "How many vacation days do I have left?"
- "When is my next payroll date?"
- "What benefits am I eligible for?"
- "How do I apply for parental leave?"
- "Can I add a dependent to my benefits?"

**HR stakeholders and questions**

|Stakeholder |Key question |
|---------------------------|---------------------------------------------------------------|
|**HR Business Owner** |Are employees getting accurate answers without HR tickets? |
|**HR Operations** |Which topics still require manual follow-up? |
|**Change & Adoption Lead** |Are employees trusting Employee Self-Service for HR questions? |

**Telemetry signals to review for HR**

- Conversation success rate (HR intents)
- Assisted support rate for HR topics
- Repeated queries or retries on the same HR topic
- Evaluation (eval) pass/fail trends for HR scenarios

**How to interpret HR telemetry**

- **High usage + high success** = The Employee Self-Service agent is deflecting HR tickets effectively.
- **High usage + low success** = Knowledge gaps or response clarity issues.
- **Repeated retries** = Policy ambiguity or missing personalization (user context).
- **Eval regressions** = Risk of inconsistent answers.

**Recommended actions for HR**

- Prioritize eval coverage for high-volume HR scenarios.
- Improve response specificity for policy-driven questions.
- Align HR telemetry reviews with organizational event cadences such as payroll and benefits cycles.
- Track success trends before expanding the Employee Self-Service agent to new HR domains.
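The HR interpretation matrix above lends itself to a small decision table. The thresholds (100 sessions for "high usage", 0.8 success, 0.3 retry rate) are illustrative assumptions, not product defaults.

```python
def diagnose_hr_topic(usage, success_rate, retry_rate, high_usage=100):
    """Apply the usage/success heuristics above. Thresholds are illustrative."""
    if usage >= high_usage and success_rate >= 0.8:
        return "deflecting tickets effectively"
    if usage >= high_usage and success_rate < 0.8:
        return "knowledge gap or unclear responses"
    if retry_rate > 0.3:
        return "policy ambiguity or missing user context"
    return "monitor"

# A busy HR topic where only 62% of conversations succeed.
print(diagnose_hr_topic(usage=250, success_rate=0.62, retry_rate=0.1))
```

Encoding the matrix this way makes review meetings consistent: the same numbers always produce the same diagnosis.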
***Information Technology (IT)***

**Typical IT scenarios**

- "Reset my password"
- "Unlock my account"
- "Request access to an application"
- "Install approved software"
- "Check my device compliance status"

**IT stakeholders and questions**

|Stakeholder |Key question |
|---------------------|--------------------------------------------------------------------------------|
|**IT Service Owner** |Is the Employee Self-Service agent reducing ticket volume for common IT issues? |
|**IT Operations** |Are failures due to configuration or platform issues? |
|**Security / IAM** |Are requests handled securely and consistently? |

**Telemetry signals to review for IT**

- Ticket deflection indicators
- Error and fallback rates
- Connector and dependency health
- Latency and timeout signals for IT actions

**How to interpret IT telemetry**

- **Low assisted support + high completion** = Effective IT self-service
- **Errors clustered by scenario** = Configuration or connector issues
- **Latency spikes** = Throttling, dependency, or orchestration bottlenecks
- **Security-related fallbacks** = IAM or policy misalignment

**Recommended actions for IT**

- Focus telemetry reviews on top ticket-deflecting scenarios
- Use diagnostics to distinguish configuration issues from product gaps
- Validate performance telemetry before scaling rollout
- Pair telemetry with readiness and ALM checks for production moves
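One way to spot the "latency spikes" signal mentioned above is to compare tail latency against typical latency. The sample values and the 3x-median rule below are illustrative assumptions, not a recommended production threshold.

```python
import statistics

# Hypothetical per-request latencies (ms) for one IT action; values are made up.
latencies = [420, 450, 480, 460, 430, 2900, 440, 455, 470, 3100]

baseline = statistics.median(latencies)
p95 = sorted(latencies)[int(0.95 * (len(latencies) - 1))]

# Flag a spike when the tail sits far above typical latency --
# a cue to look for throttling, dependency, or orchestration bottlenecks.
spike = p95 > 3 * baseline
print(f"median={baseline}ms p95={p95}ms spike={spike}")
```

Comparing p95 to the median (rather than the mean) keeps a few slow outliers from masking themselves by dragging the baseline up.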
***Facilities and workplace services***

**Typical workplace scenarios**

- "Report a facilities issue"
- "Request building access"
- "Find office policies or amenities"
- "Book a workspace or room"
- "Check office hours or closures"

**Facilities stakeholders and questions**

|Stakeholder |Key question |
|-------------------------|-------------------------------------------------------------------------|
|**Workplace Operations** |Are facilities requests resolved without manual triage? |
|**Facilities Managers** |Which requests still require human intervention? |
|**Employee Experience** |Is the Employee Self-Service agent improving day-to-day workplace trust? |

**Telemetry signals to review for facilities**

- Scenario completion vs. handoff rates
- Repeated questions about the same facility topic
- Time-to-resolution proxies
- Satisfaction indicators (thumbs up/down, sentiment)

**How to interpret facilities telemetry**

- **High completion + low retries** = Clear guidance and fulfillment
- **Repeated queries** = Outdated or unclear facilities content
- **Frequent handoffs** = Integration or workflow gaps
- **Negative sentiment** = Experience or clarity issues

**Recommended actions for facilities**

- Improve knowledge freshness and clarity
- Identify the top "must-be-right" facilities scenarios for evals
- Use telemetry to justify integration investments
- Track sentiment trends as a proxy for workplace trust
### 3. IT administrators and makers

**Primary question**

*Is the Employee Self-Service agent configured correctly, stable, and ready to scale?*

Administrators and makers should interpret telemetry as **health and readiness indicators**, not adoption metrics.

**Recommended telemetry signals**

- Error rates and configuration warnings
- Connector and dependency health
- Performance and latency indicators
- ALM and environment readiness signals

**How to interpret**

- Persistent errors usually indicate misconfiguration or environment issues
- Latency spikes suggest throttling, dependency, or orchestration issues
- Clean telemetry during pilots increases confidence to expand rollout

**Typical actions**

- Address configuration or dependency gaps
- Validate environments before promoting to production
- Coordinate changes using ALM and readiness checks
## Employee Self-Service analytics and evaluations: A unified playbook to accelerate time-to-value (TTV)

### Why combine telemetry and evaluations?

Telemetry and evaluations solve different but complementary problems:

- Telemetry tells you what is happening in real usage: what employees are trying to do, how they do it, and what outcomes are being produced.
- Evaluations (evals) tell you whether the agent behaves the way you expect (accurately, consistently, and safely), using repeatable, automated test cases that help validate improvements and catch regressions.

When used together, they create a practical loop:

- Telemetry identifies where to focus > evaluations verify quality and prevent regressions > telemetry confirms impact at scale > repeat.

This loop is what accelerates TTV: you don't just *look at dashboards* or *run tests*; you continuously turn signals into actions.

The shared operating model:

**Intent ➡️ Behavior ➡️ Outcome ➡️ Action**

Evals plug into this same model by providing repeatable evidence that the agent can reliably deliver the intended outcomes before you expose a change to broad employee usage.
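The telemetry-to-evals loop can be sketched as a simple release gate: telemetry picks the hotspots, evals decide what is safe to ship. The scenario names, the 0.9 threshold, and the `run_eval` stub are all hypothetical; a real setup would call your actual eval suite.

```python
def run_eval(scenario):
    """Stand-in for a real eval suite; returns a pass rate in [0, 1]."""
    canned = {"reset_password": 0.97, "request_access": 0.71}  # made-up results
    return canned.get(scenario, 0.0)

def release_gate(telemetry_hotspots, pass_threshold=0.9):
    """Telemetry identifies where to focus; evals verify quality before rollout."""
    decisions = {}
    for scenario in telemetry_hotspots:
        pass_rate = run_eval(scenario)
        decisions[scenario] = "ship" if pass_rate >= pass_threshold else "fix first"
    return decisions

# Telemetry surfaced these two scenarios as high-volume hotspots.
print(release_gate(["reset_password", "request_access"]))
```

After shipping, post-change telemetry closes the loop by confirming (or refuting) the impact at scale.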