copilot/agent-essentials/m365-agents-admin-guide.md (2 additions, 2 deletions)
@@ -4,7 +4,7 @@ f1.keywords:
ms.author: erikre
author: ErikRe
manager: dansimp
- ms.date: 03/12/2026
+ ms.date: 03/26/2026
ms.update-cycle: 180-days
audience: Admin
ms.topic: concept-article
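The change above only updates the `ms.date` field in the YAML front matter. As a rough illustration of how such a metadata refresh can be scripted (the document content and date here are hypothetical examples, not part of any Microsoft tooling), a minimal sketch:

```python
import re

def bump_ms_date(markdown_text: str, new_date: str) -> str:
    """Update the ms.date field inside a Markdown file's YAML front matter.

    Only the first front-matter block (between the leading '---' fences)
    is touched; any 'ms.date' text in the document body is left alone.
    """
    match = re.match(r"(?s)\A(---\n.*?\n---\n)", markdown_text)
    if not match:
        return markdown_text  # no front matter; nothing to do
    front = match.group(1)
    # Replace the whole ms.date line, whatever its current value.
    updated = re.sub(r"(?m)^ms\.date:\s*.*$", f"ms.date: {new_date}", front)
    return updated + markdown_text[len(front):]

doc = "---\nms.author: erikre\nms.date: 03/12/2026\n---\n\n# Title\n"
print(bump_ms_date(doc, "03/26/2026"))
```

Restricting the substitution to the front-matter block avoids accidentally rewriting dates mentioned in the article body.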
@@ -81,7 +81,7 @@ Microsoft 365 Copilot offers chat grounded in both web-based and work-based data
By default, Microsoft and Microsoft partners provide ready-to-use agents that you can quickly integrate and deploy with Microsoft 365 Copilot Chat and Microsoft 365 Copilot. In addition, you can integrate and deploy agents created by members of your organization.

- When using a Microsoft 365 subscription, you have agents available with your Microsoft 365 apps, such as Word and Excel. You can also view agents directly in the Microsoft 365 Copilot app. For more information, see [Welcome to the Microsoft 365 Copilot app](https://support.microsoft.com/topic/welcome-to-the-microsoft-365-copilot-app-092599f1-5917-4bd6-bd59-58af628bbc39).
+ You can view agents directly in the Microsoft 365 Copilot app. For more information about using the Microsoft 365 Copilot app, see [What is the Microsoft 365 Copilot app?](https://support.microsoft.com/topic/welcome-to-the-microsoft-365-copilot-app-092599f1-5917-4bd6-bd59-58af628bbc39) For information about agents, see [Get started with agents in the Microsoft 365 Copilot app](https://support.microsoft.com/topic/get-started-with-agents-in-the-microsoft-365-copilot-app-943e563d-602d-40fa-bdd1-dbc83f582466).

✅ **Task: Understand how to view agents in Microsoft 365 Copilot and Microsoft Teams.**
copilot/employee-self-service/customize.md (8 additions, 0 deletions)
@@ -420,6 +420,14 @@ For information on SharePoint knowledge filtering, see [SharePoint Advanced Filt
You can optionally customize how the Employee Self‑Service (ESS) agent appears and how users start conversations by configuring tenant‑level settings in the Microsoft 365 admin center. These settings complement Copilot Studio configuration and apply to deployed agents.

+ ## Roles that can access these settings in the Microsoft 365 admin center
+
+ - AI Admin
+ - Global Admin
+
+ > [!IMPORTANT]
+ > Microsoft recommends that you use roles with the fewest permissions. Using lower-permissioned accounts helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+
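Following the least-privilege guidance above, admins sometimes script a quick audit of who depends on a broad role for this kind of configuration. A minimal sketch, under the assumption that role assignments have been exported as `(user, role)` pairs (the account names and export format are hypothetical; a real tenant would pull assignments from Microsoft Entra reporting):

```python
# Roles that can manage these ESS settings, per the list above.
ALLOWED_ROLES = {"AI Administrator", "Global Administrator"}

# Roles broader than necessary for day-to-day ESS configuration.
HIGHLY_PRIVILEGED = {"Global Administrator"}

def flag_overprivileged(assignments: list[tuple[str, str]]) -> list[str]:
    """Return users who rely solely on a highly privileged role.

    assignments: (user, role) pairs, e.g. from a directory role export.
    Users who also hold a scoped allowed role aren't flagged.
    """
    scoped_users = {
        user for user, role in assignments
        if role in ALLOWED_ROLES - HIGHLY_PRIVILEGED
    }
    return sorted({
        user for user, role in assignments
        if role in HIGHLY_PRIVILEGED and user not in scoped_users
    })

# Hypothetical assignment export.
sample = [
    ("ava@contoso.com", "Global Administrator"),
    ("raj@contoso.com", "AI Administrator"),
    ("mei@contoso.com", "Global Administrator"),
    ("mei@contoso.com", "AI Administrator"),
]
print(flag_overprivileged(sample))  # only the user with no scoped role
```

Users flagged this way are candidates for reassignment to the scoped AI Administrator role, reserving Global Administrator for emergencies.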
The **Rich landing page** gives you control over how your Employee Self-Service agent looks and feels to employees. On the agent’s landing page, you can:
copilot/employee-self-service/overview.md (38 additions, 4 deletions)
@@ -20,9 +20,9 @@ appliesto:
# Employee Self-Service

As part of our general availability release, access is rolling out in waves, starting with managed customers and expanding to all customers over time. If you’d like to explore access sooner, reach out to your Microsoft account team. If you don’t currently have an account team, keep an eye on [this blog post](https://aka.ms/ess/gablog) for updates on when the Employee Self-Service Agent is available to all customers in Copilot Studio.

The Employee Self-Service agent is designed as a unified, customer-facing, AI-powered interface for handling employee requests and automating routine tasks within enterprise environments. The Employee Self-Service agent, built on Copilot Studio, is designed for you to customize. Once you customize it for your organization's needs, the Employee Self-Service agent streamlines access to HR, IT, and operational systems, reducing manual intervention and improving process efficiency.

## Technical Architecture
@@ -53,7 +53,7 @@ Data security and compliance are enforced through:
## Customization, scalability, and extensibility

- Copilot Studio and Power Platform provide extensive customization options, including low-code and pro-code development environments. Organizations can modify agent behavior, extend dialog flows, and integrate additional data sources. The platform supports scaling across regions and business units, with centralized management and version control for agent configurations.
+ Copilot Studio and Power Platform provide extensive customization options, including low-code and pro-code development environments. Organizations can modify agent behavior, extend dialog flows, and integrate more data sources. The platform supports scaling across regions and business units, with centralized management and version control for agent configurations.

Each starter comes with default content and accelerators to get you started, such as:
@@ -69,7 +69,7 @@ The agent ships with a few solution accelerators to integrate with external syst
- Flows
- Templates

- In addition to the shipped solution accelerators, the agent is extensible within Copilot Studio by adding components to support additional business scenarios.
+ In addition to the shipped solution accelerators, the agent is extensible within Copilot Studio by adding components to support other business scenarios.

[Learn more](https://github.com/microsoft/CopilotStudioSamples/tree/main/EmployeeSelfServiceAgent) about Copilot Studio samples and adding more scenarios.
@@ -110,3 +110,37 @@ The following matrix provides an overview of the various external systems integr
- [Learn more](workday.md#topics) about Workday preconfigured scenarios.
- [Learn more](servicenow-hrsd-itsm.md#topics) about ServiceNow HR preconfigured scenarios.
- [Learn more](servicenow-hrsd-itsm.md#topics-1) about ServiceNow IT preconfigured scenarios.
+
+ ## Use Employee Self-Service on mobile
+
+ Employee Self-Service is available on mobile through the Microsoft 365 Copilot app on iOS and Android, so employees can access HR and IT support wherever they work.
+
+ ### What Employee Self-Service on mobile supports
+
+ - Access to the Employee Self-Service agent from the Microsoft 365 Copilot mobile app
+ - Core Employee Self-Service scenarios aligned with the web experience
+
+ ### How access works
+
+ If your organization has enabled Employee Self-Service on web, users can access it on mobile by signing in to the Microsoft 365 Copilot app and selecting the Employee Self-Service agent from the agent list.
+
+ No other configuration is required to enable mobile access.
+
+ Users who can't see the Employee Self-Service agent should update to the latest version of the app.
+
+ ### Current limitations
+
+ The Employee Self-Service mobile experience supports core self-service scenarios available through the Microsoft 365 Copilot mobile app. Some capabilities currently available on web aren't yet supported on mobile.
+
+ | Capability area | Limitation on mobile | Recommended behavior |
+ |---|---|---|
+ | Agent handoff | Handoff to another agent or a live agent isn't supported in mobile experiences. | Users are redirected to complete the interaction on Employee Self-Service web. |
+ | Starter prompts | Employee Self-Service on mobile currently displays six starter prompts on its main landing page. iOS and Android don't currently offer access to the Prompt Gallery or the ability to add references or modify values in a starter prompt. | — |
+ | Rich landing page | Rich landing page elements configured in the Microsoft 365 admin center (for example, starter prompts, quick links, accent color) may not render on mobile. | Use Employee Self-Service on web for the full landing page experience; configure essential prompts in Copilot Studio when needed. |
+ | Multi-agent support | Multi-agent orchestration scenarios may have limited functionality on mobile. | Continue the interaction on web for complex agent routing. |
+ | Official sources | Official sources on mobile provide the same content as the web experience, but visual elements, such as the official source header and badge, aren't shown. | — |
+ | Official answers | On mobile, the Official Answer label doesn't appear in the response. Official Answers provide the same content as the web experience, but users need to select an adaptive card to access the content. | — |
+
+ ### What's next
+
+ The Employee Self-Service mobile experience continues to evolve as platform and configuration capabilities converge. Future updates will support more scenarios and improve parity across surfaces.
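The Official Answers limitation above notes that mobile users open the content through an adaptive card. As a rough illustration of what such a payload looks like, here is a minimal card built against the public Adaptive Cards schema (adaptivecards.io); the titles, text, and URL are hypothetical, and the actual cards returned by the agent are produced by the platform:

```python
import json

def official_answer_card(title: str, body: str, url: str) -> dict:
    """Build a minimal Adaptive Card dict with a title, body, and link."""
    return {
        "type": "AdaptiveCard",
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.5",
        "body": [
            # 'wrap' lets long text flow onto multiple lines on small screens.
            {"type": "TextBlock", "text": title, "weight": "Bolder", "wrap": True},
            {"type": "TextBlock", "text": body, "wrap": True},
        ],
        "actions": [
            {"type": "Action.OpenUrl", "title": "Open full answer", "url": url}
        ],
    }

card = official_answer_card(
    "Expense policy",
    "Receipts are required for expenses above a set threshold.",
    "https://contoso.example/policy",
)
print(json.dumps(card, indent=2))
```

Because the card carries its own `Action.OpenUrl`, users can reach the full source even though the mobile surface doesn't render the Official Answer label.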
copilot/microsoft-365-ai-disclaimers.md (2 additions, 2 deletions)
@@ -29,9 +29,9 @@ To turn on the Copilot AI disclaimer, you need to be assigned the AI Administrat
## Turn on Microsoft 365 Copilot AI disclaimers

- 1. In the [Microsoft 365 admin center](https://admin.microsoft.com), go to **Copilot** -> **Settings** -> **Copilot actions** -> **Copilot AI disclaimer**.
+ 1. In the [Microsoft 365 admin center](https://admin.microsoft.com), go to **Copilot** -> **Settings** -> **View all** -> **Copilot AI disclaimer**.
2. On the **Copilot AI disclaimer** page, select **Standard** or **Bold** to select the font for the disclaimer.
- 3. Optional: Create a page with your organization’s internal AI policy and add the URL under Provide a web address that's available from the tooltip. Or leave this field blank if you want to keep the default Microsoft Copilot AI disclaimer.
+ 3. Optional: Create a page with your organization’s internal AI policy and add the URL under **Provide a web address that is available from the tooltip**. Or leave this field blank if you want to keep the default Microsoft Copilot AI disclaimer.
4. Review the disclaimer and select **Save** to apply the setting.
copilot/microsoft-365-copilot-application-card.md (5 additions, 5 deletions)
@@ -15,7 +15,7 @@ ms.collection:
- must-keep
hideEdit: true
ms.update-cycle: 180-days
- ms.date: 03/24/2026
+ ms.date: 03/26/2026
---

# Application card: Microsoft 365 Copilot
@@ -24,7 +24,7 @@ ms.date: 03/24/2026
Microsoft’s Application and Platform cards are intended to help you understand how our AI technology works, the choices application owners can make that influence application performance and behavior, and the importance of considering the whole application, including the technology, the people, and the environment. Application cards are created for AI applications and platform cards are created for AI platform services. These resources can support the development or deployment of your own applications and can be shared with users or stakeholders impacted by them.

- As part of its commitment to responsible AI, Microsoft adheres to [six core principles](https://www.microsoft.com/ai/principles-and-approach/?msockid=3da790040c776d6f2b5485e40de56c06#ai-principles): fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are embedded in the [Responsible AI Standard](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf), which guides teams in designing, building, and testing AI applications. Application and Platform cards play a key role in operationalizing these principles by offering transparency around capabilities, intended uses, and limitations. For further insight, readers are encouraged to explore Microsoft’s [Responsible AI Transparency Report](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Responsible-AI-Transparency-Report-2025.pdf) and either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals), both of which outline how to engage with AI responsibly.
+ As part of its commitment to responsible AI, Microsoft adheres to [six core principles](https://www.microsoft.com/ai/principles-and-approach/?msockid=3da790040c776d6f2b5485e40de56c06#ai-principles): fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are embedded in the [Responsible AI Standard](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf), which guides teams in designing, building, and testing AI applications. Application and Platform cards play a key role in operationalizing these principles by offering transparency around capabilities, intended uses, and limitations. For further insight, readers are encouraged to explore Microsoft’s [Responsible AI Transparency Report](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Responsible-AI-Transparency-Report-2025.pdf) and either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals), both of which outline how to engage with AI responsibly.

## Overview
@@ -48,7 +48,7 @@ The following table provides a glossary of key terms related to Microsoft 365 Co
|Large language model (LLM)|Large language models (LLMs) in this context are AI models that are trained on large amounts of text data to predict words in sequences. LLMs are capable of performing a variety of tasks, such as text generation, summarization, translation, classification, and more.|
|Microsoft Graph |Microsoft Graph is the gateway to data and intelligence in Microsoft 365. It includes information about the relationships between users, activities, and an organization’s data. |
|Post-processing|The processing Microsoft 365 Copilot does after it receives a response from the LLM. This post-processing includes additional grounding calls to Microsoft Graph, responsible AI, security, compliance, and privacy checks.|
- |Processing |Processing of a user prompt in Microsoft 365 Copilot involves several steps, including responsible AI checks, to help Microsoft 365 Copilot provides relevant and actionable responses. |
+ |Processing |Processing of a user prompt in Microsoft 365 Copilot involves several steps, including responsible AI checks, to help Microsoft 365 Copilot provide relevant and actionable responses. |
|Prompt |A Prompt is the text sent to Microsoft 365 Copilot to execute a specific task or provide information. For example, a user might input the following prompt: Write an email congratulating my team on the end of the fiscal year. |
|Red team testing|Techniques used by experts to assess the limitations and vulnerabilities of a system and to test the effectiveness of planned mitigations. Red team testing is used to identify potential risks and is distinct from systematic measurement of risks. |
|Response|The content generated by the LLM and returned to Microsoft 365 Copilot as a reply to a prompt.|
@@ -125,7 +125,7 @@ Microsoft 365 Copilot doesn't require web content or organizational data to prov
## Limitations

- Understanding Microsoft 365 Copilot’s limitations is crucial to determine it's used within safe and effective boundaries. While we encourage customers to leverage Microsoft 365 Copilot in their innovative solutions or applications, it’s important to note that Microsoft 365 Copilot wasn't designed for every possible scenario. We encourage users to refer to either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals) as well as the following considerations when choosing a use case:
+ Understanding Microsoft 365 Copilot’s limitations is crucial to ensuring it's used within safe and effective boundaries. While we encourage customers to leverage Microsoft 365 Copilot in their innovative solutions or applications, it’s important to note that Microsoft 365 Copilot wasn't designed for every possible scenario. We encourage users to refer to either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals) as well as the following considerations when choosing a use case:

- **Compatibility:** While Microsoft 365 Copilot is designed to work seamlessly with Microsoft 365 applications, there can be limitations or issues with compatibility in certain environments, especially with third-party (non-Microsoft) apps and customized or nonstandard configurations.
@@ -223,7 +223,7 @@ To improve the performance in relation to the accuracy of Microsoft 365 Copilot
- **Be aware of the risk of overreliance:** Overreliance on AI happens when users accept incorrect or incomplete AI outputs, mainly because mistakes in AI outputs may be hard to detect. For the end user, overreliance could result in decreased productivity, loss of trust, product abandonment, financial loss, psychological harm, or physical harm, among others (for example, a doctor accepting an incorrect AI output). For Microsoft 365 Copilot, we help mitigate this risk by adding disclaimers to our products, but users should still review the accuracy of the answers.

- - **Exercise caution when designing agentic AI in sensitive domains:** Users should exercise caution when designing and/or deploying agentic AI systems in sensitive domains where agent actions are irreversible or highly consequential. Additional precautions should also be taken when creating autonomous agentic AI as described further in either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals).
+ - **Exercise caution when designing agentic AI in sensitive domains:** Users should exercise caution when designing and/or deploying agentic AI systems in sensitive domains where agent actions are irreversible or highly consequential. Additional precautions should also be taken when creating autonomous agentic AI as described further in either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals).
0 commit comments