
Commit 1f3e338
Merge branch 'EE_genUp' of https://github.com/MicrosoftDocs/microsoft-365-docs-pr into EE_genUp
2 parents 2e62ed7 + 5c23c92

26 files changed: 97 additions & 52 deletions

copilot/agent-essentials/m365-agents-admin-guide.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ f1.keywords:
 ms.author: erikre
 author: ErikRe
 manager: dansimp
-ms.date: 03/12/2026
+ms.date: 03/26/2026
 ms.update-cycle: 180-days
 audience: Admin
 ms.topic: concept-article
@@ -81,7 +81,7 @@ Microsoft 365 Copilot offers chat grounded in both web-based and work-based data
 
 By default, Microsoft and Microsoft partners provide ready-to-use agents that you can quickly integrate and deploy with Microsoft 365 Copilot Chat and Microsoft 365 Copilot. In addition, you can integrate and deploy agents created by members of your organization.
 
-When using a Microsoft 365 subscription, you have agents available with your Microsoft 365 apps, such as Word and Excel. You can also view agents directly in the Microsoft 365 Copilot app. For more information, see [Welcome to the Microsoft 365 Copilot app](https://support.microsoft.com/topic/welcome-to-the-microsoft-365-copilot-app-092599f1-5917-4bd6-bd59-58af628bbc39).
+You can view agents directly in the Microsoft 365 Copilot app. For more information about using the Microsoft 365 Copilot app, see [What is the Microsoft 365 Copilot app?](https://support.microsoft.com/topic/welcome-to-the-microsoft-365-copilot-app-092599f1-5917-4bd6-bd59-58af628bbc39). For information about agents, see [Get started with agents in the Microsoft 365 Copilot app](https://support.microsoft.com/topic/get-started-with-agents-in-the-microsoft-365-copilot-app-943e563d-602d-40fa-bdd1-dbc83f582466).
 
 **Task: Understand how to view agents in Microsoft 365 Copilot and Microsoft Teams.**
 
copilot/employee-self-service/customize.md

Lines changed: 8 additions & 0 deletions
@@ -420,6 +420,14 @@ For information on SharePoint knowledge filtering, see [SharePoint Advanced Filt
 You can optionally customize how the Employee Self‑Service (ESS) agent appears and how users start conversations by configuring tenant‑level settings in the Microsoft 365 admin center. These settings complement Copilot Studio configuration and apply to deployed agents.
 
+## Roles that can access these settings in the Microsoft 365 admin center
+
+- AI Admin
+- Global Admin
+
+> [!IMPORTANT]
+> Microsoft recommends that you use roles with the fewest permissions. Using lower-permissioned accounts helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+
 The **Rich landing page** gives you control over how your Employee Self-Service agent looks and feels to employees. On the agent’s landing page, you can:
 
 - Add accent colors to reflect your brand.

copilot/employee-self-service/overview.md

Lines changed: 38 additions & 4 deletions
@@ -20,9 +20,9 @@ appliesto:
 
 # Employee Self-Service
 
-As part of our general availability release, access is rolling out in waves, starting with managed customers and expanding to all customers over time. If you’d like to explore access sooner, reach out to your Microsoft account team. If you don’t currently have an account team, keep an eye on [this blog post](https://aka.ms/ess/gablog) for updates on when the Employee Self-Service Agent is available to all customers in Copilot Studio.
+As part of our general availability release, access is rolling out in waves, starting with managed customers and expanding to all customers over time. If you’d like to explore access sooner, reach out to your Microsoft account team. If you don’t currently have an account team, keep an eye on [this blog post](https://aka.ms/ess/gablog) for updates on when the Employee Self-Service Agent is available to all customers in Copilot Studio.
 
-The Employee Self-Service agent is designed as a unified, customer-facing, AI-powered interface for handling employee requests and automating routine tasks within enterprise environments. The Employee Self-Service agent, built on Copilot Studio, is designed for you to customize. Once you customize it for your organization's needs, the Employee Self-Service agent streamlines access to HR, IT, and operational systems, reducing manual intervention and improving process efficiency.
+The Employee Self-Service agent is designed as a unified, customer-facing, AI-powered interface for handling employee requests and automating routine tasks within enterprise environments. The Employee Self-Service agent, built on Copilot Studio, is designed for you to customize. Once you customize it for your organization's needs, the Employee Self-Service agent streamlines access to HR, IT, and operational systems, reducing manual intervention and improving process efficiency.
 
 ## Technical Architecture
 
@@ -53,7 +53,7 @@ Data security and compliance are enforced through:
 
 ## Customization, scalability, and extensibility
 
-Copilot Studio and Power Platform provide extensive customization options, including low-code and pro-code development environments. Organizations can modify agent behavior, extend dialog flows, and integrate additional data sources. The platform supports scaling across regions and business units, with centralized management and version control for agent configurations.
+Copilot Studio and Power Platform provide extensive customization options, including low-code and pro-code development environments. Organizations can modify agent behavior, extend dialog flows, and integrate more data sources. The platform supports scaling across regions and business units, with centralized management and version control for agent configurations.
 
 Each starter comes with default content and accelerators to get you started like:
 
@@ -69,7 +69,7 @@ The agent ships with a few solution accelerators to integrate with external syst
 - Flows
 - Templates
 
-In addition to the shipped solution accelerators, the agent is extensible within Copilot Studio by adding components to support additional business scenarios.
+In addition to the shipped solution accelerators, the agent is extensible within Copilot Studio by adding components to support other business scenarios.
 
 [Learn more](https://github.com/microsoft/CopilotStudioSamples/tree/main/EmployeeSelfServiceAgent) about Copilot Studio samples and adding more scenarios.
 
@@ -110,3 +110,37 @@ The following matrix provides an overview of the various external systems integr
 - [Learn more](workday.md#topics) about Workday preconfigured scenarios.
 - [Learn more](servicenow-hrsd-itsm.md#topics) about ServiceNow HR preconfigured scenarios.
 - [Learn more](servicenow-hrsd-itsm.md#topics-1) about ServiceNow IT preconfigured scenarios.
+
+## Use Employee Self-Service on mobile
+
+Employee Self-Service is available on mobile through the Microsoft 365 Copilot app on iOS and Android, so employees can access HR and IT support wherever they work.
+
+### What Employee Self-Service on mobile supports
+
+- Access to the Employee Self-Service agent from the Microsoft 365 Copilot mobile app
+- Core Employee Self-Service scenarios aligned with the web experience
+
+### How access works
+
+If your organization has enabled Employee Self-Service on the web, users can access it on mobile by signing in to the Microsoft 365 Copilot app and selecting the Employee Self-Service agent from the agent list.
+
+No other configuration is required to enable mobile access.
+
+If users can't see the Employee Self-Service agent, they should update to the latest version of the app.
+
+### Current limitations
+
+The Employee Self-Service mobile experience supports the core self-service scenarios available through the Microsoft 365 Copilot mobile app. Some capabilities currently available on the web aren't yet supported on mobile.
+
+| Capability area | Limitation on mobile | Recommended behavior |
+|---|---|---|
+| Agent handoff | Handoff to another agent or a live agent isn't supported on mobile. | Users are redirected to complete the interaction in Employee Self-Service on the web. |
+| Starter prompts | Mobile currently displays six starter prompts on the main landing page. iOS and Android don't currently offer access to the Prompt Gallery or the ability to add references or modify values in a starter prompt. | |
+| Rich landing page | Rich landing page elements configured in the Microsoft 365 admin center (for example, starter prompts, quick links, accent color) may not render on mobile. | Use Employee Self-Service on the web for the full landing page experience; configure essential prompts in Copilot Studio when needed. |
+| Multi-agent support | Multi-agent orchestration scenarios may have limited functionality on mobile. | Continue the interaction on the web for complex agent routing. |
+| Official sources | Official sources on mobile provide the same content as the web experience, but visual elements, such as the official source header and badge, aren't shown. | |
+| Official answers | On mobile, the Official Answer label doesn't appear in the response. Official Answers provide the same content as the web experience, but users need to select an adaptive card to access the content. | |
+
+### What's next
+
+The Employee Self-Service mobile experience continues to evolve as platform and configuration capabilities converge. Future updates will support more scenarios and improve parity across surfaces.

copilot/microsoft-365-ai-disclaimers.md

Lines changed: 2 additions & 2 deletions
@@ -29,9 +29,9 @@ To turn on the Copilot AI disclaimer, you need to be assigned the AI Administrat
 
 ## Turn on Microsoft 365 Copilot AI disclaimers
 
-1. In the [Microsoft 365 admin center](https://admin.microsoft.com), go to **Copilot** -> **Settings** -> **Copilot actions** -> **Copilot AI disclaimer**.
+1. In the [Microsoft 365 admin center](https://admin.microsoft.com), go to **Copilot** -> **Settings** -> **View all** -> **Copilot AI disclaimer**.
 2. On the **Copilot AI disclaimer** page, select **Standard** or **Bold** to select the font for the disclaimer.
-3. Optional: Create a page with your organization’s internal AI policy and add the URL under Provide a web address that's available from the tooltip. Or leave this field blank if you want to keep the default Microsoft Copilot AI disclaimer.
+3. Optional: Create a page with your organization’s internal AI policy and add the URL under **Provide a web address that is available from the tooltip**. Or leave this field blank if you want to keep the default Microsoft Copilot AI disclaimer.
 4. Review the disclaimer and select **Save** to apply the setting.
 
 > [!NOTE]

copilot/microsoft-365-copilot-application-card.md

Lines changed: 5 additions & 5 deletions
@@ -15,7 +15,7 @@ ms.collection:
 - must-keep
 hideEdit: true
 ms.update-cycle: 180-days
-ms.date: 03/24/2026
+ms.date: 03/26/2026
 ---
 
 # Application card: Microsoft 365 Copilot
@@ -24,7 +24,7 @@ ms.date: 03/24/2026
 
 Microsoft’s Application and Platform cards are intended to help you understand how our AI technology works, the choices application owners can make that influence application performance and behavior, and the importance of considering the whole application, including the technology, the people, and the environment. Application cards are created for AI applications and platform cards are created for AI platform services. These resources can support the development or deployment of your own applications and can be shared with users or stakeholders impacted by them.
 
-As part of its commitment to responsible AI, Microsoft adheres to [six core principles](https://www.microsoft.com/ai/principles-and-approach/?msockid=3da790040c776d6f2b5485e40de56c06#ai-principles): fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are embedded in the [Responsible AI Standard](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf), which guides teams in designing, building, and testing AI applications. Application and Platform cards play a key role in operationalizing these principles by offering transparency around capabilities, intended uses, and limitations. For further insight, readers are encouraged to explore Microsoft’s [Responsible AI Transparency Report](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Responsible-AI-Transparency-Report-2025.pdf) and either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals), both of which outline how to engage with AI responsibly.
+As part of its commitment to responsible AI, Microsoft adheres to [six core principles](https://www.microsoft.com/ai/principles-and-approach/?msockid=3da790040c776d6f2b5485e40de56c06#ai-principles): fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are embedded in the [Responsible AI Standard](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf), which guides teams in designing, building, and testing AI applications. Application and Platform cards play a key role in operationalizing these principles by offering transparency around capabilities, intended uses, and limitations. For further insight, readers are encouraged to explore Microsoft’s [Responsible AI Transparency Report](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Responsible-AI-Transparency-Report-2025.pdf) and either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals), both of which outline how to engage with AI responsibly.
 
 ## Overview
 
@@ -48,7 +48,7 @@ The following table provides a glossary of key terms related to Microsoft 365 Co
 |Large language model (LLM)|Large language models (LLMs) in this context are AI models that are trained on large amounts of text data to predict words in sequences. LLMs are capable of performing a variety of tasks, such as text generation, summarization, translation, classification, and more.|
 |Microsoft Graph |Microsoft Graph is the gateway to data and intelligence in Microsoft 365. It includes information about the relationships between users, activities, and an organization’s data. |
 |Post-processing|The processing Microsoft 365 Copilot does after it receives a response from the LLM. This post-processing includes additional grounding calls to Microsoft Graph, responsible AI, security, compliance, and privacy checks.|
-|Processing |Processing of a user prompt in Microsoft 365 Copilot involves several steps, including responsible AI checks, to help Microsoft 365 Copilot provides relevant and actionable responses. |
+|Processing |Processing of a user prompt in Microsoft 365 Copilot involves several steps, including responsible AI checks, to help Microsoft 365 Copilot provide relevant and actionable responses. |
 |Prompt |A Prompt is the text sent to Microsoft 365 Copilot to execute a specific task or provide information. For example, a user might input the following prompt: Write an email congratulating my team on the end of the fiscal year. |
 |Red team testing|Techniques used by experts to assess the limitations and vulnerabilities of a system and to test the effectiveness of planned mitigations. Red team testing is used to identify potential risks and is distinct from systematic measurement of risks. |
 |Response|The content generated by the LLM and returned to Microsoft 365 Copilot as a reply to a prompt.|
@@ -125,7 +125,7 @@ Microsoft 365 Copilot doesn't require web content or organizational data to prov
 
 ## Limitations
 
-Understanding Microsoft 365 Copilot’s limitations is crucial to determine it's used within safe and effective boundaries. While we encourage customers to leverage Microsoft 365 Copilot in their innovative solutions or applications, it’s important to note that Microsoft 365 Copilot wasn't designed for every possible scenario. We encourage users to refer to either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals) as well as the following considerations when choosing a use case:
+Understanding Microsoft 365 Copilot’s limitations is crucial to ensuring it's used within safe and effective boundaries. While we encourage customers to leverage Microsoft 365 Copilot in their innovative solutions or applications, it’s important to note that Microsoft 365 Copilot wasn't designed for every possible scenario. We encourage users to refer to either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals) as well as the following considerations when choosing a use case:
 
 - **Compatibility:** While Microsoft 365 Copilot is designed to work seamlessly with Microsoft 365 applications, there can be limitations or issues with compatibility in certain environments, especially with third party (non-Microsoft) apps and customized or nonstandard configurations.
 
@@ -223,7 +223,7 @@ To improve the performance in relation to the accuracy of Microsoft 365 Copilot
 
 - **Be aware of the risk of overreliance:** Overreliance on AI happens when users accept incorrect or incomplete AI outputs, mainly because mistakes in AI outputs may be hard to detect. For the end-user, overreliance could result in decreased productivity, loss of trust, product abandonment, financial loss, psychological harm, or physical harm (for example, a doctor accepting an incorrect AI output). For Microsoft 365 Copilot, we help mitigate this risk by adding disclaimers to our products, but users should still review the accuracy of the answers.
 
-- **Exercise caution when designing agentic AI in sensitive domains:** Users should exercise caution when designing and/or deploying agentic AI systems in sensitive domains where agent actions are irreversible or highly consequential. Additional precautions should also be taken when creating autonomous agentic AI as described further in either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals).
+- **Exercise caution when designing agentic AI in sensitive domains:** Users should exercise caution when designing and/or deploying agentic AI systems in sensitive domains where agent actions are irreversible or highly consequential. Additional precautions should also be taken when creating autonomous agentic AI as described further in either the [Microsoft Enterprise AI Services Code of Conduct](/legal/ai-code-of-conduct) (for organizations) or the [Code of Conduct section in the Microsoft Services Agreement](https://www.microsoft.com/servicesagreement#3_codeOfConduct) (for individuals).
 
 ### Deployers should:
 