Commit a79eeb2

Add section about model changes

1 parent 4871ab4 commit a79eeb2

1 file changed: copilot/microsoft-365-copilot-privacy.md
Lines changed: 5 additions & 1 deletion
@@ -16,7 +16,7 @@ ms.collection:
   - trust-pod
 hideEdit: true
 ms.update-cycle: 180-days
-ms.date: 08/22/2025
+ms.date: 09/05/2025
 ms.custom: [copilot-learning-hub]
 ---

@@ -192,6 +192,10 @@ Yes, Microsoft 365 Copilot provides detection for protected materials, which inc
 
 Jailbreak attacks are prompts designed to bypass Copilot's safeguards or induce non-compliant behavior. Microsoft 365 Copilot helps mitigate these attacks by using proprietary jailbreak and cross-prompt injection attack (XPIA) classifiers. These classifiers analyze inputs to the Copilot service and help block high-risk prompts prior to model execution.
 
+### What happens when foundation model changes occur?
+
+The AI models that power Microsoft 365 Copilot are regularly updated and enhanced. Model updates bring performance improvements, more advanced reasoning, and expanded capabilities, but they don't change your security, privacy, or compliance settings. For more information, see [Microsoft 365 Blog: Understanding foundation model changes in Microsoft 365 Copilot](https://techcommunity.microsoft.com/blog/microsoft_365blog/understanding-foundation-model-changes-in-microsoft-365-copilot/4440322).
+
 ### Committed to responsible AI
 
 As AI is poised to transform our lives, we must collectively define new rules, norms, and practices for the use and impact of this technology. Microsoft has been on a Responsible AI journey since 2017, when we defined our principles and approach to ensuring this technology is used in a way that is driven by ethical principles that put people first.
