## Content streaming

This section describes the Azure OpenAI content streaming experience and options. With approval, you have the option to receive content from the API as it's generated, instead of waiting for chunks of content that have been verified to pass your content filters.
### Default filtering
The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered and the content filtering system runs on the buffered content. Depending on the content filtering configuration, content is either returned to the user if it doesn't violate the content filtering policy (Microsoft's default or a custom user configuration), or it's immediately blocked and a content filtering error is returned, without the harmful completion content. This process is repeated until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user. Content isn't returned token-by-token in this case, but in "content chunks" of the respective buffer size.
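To make the default behavior concrete, the following is a minimal sketch (not a sample from this article) of consuming a streamed chat completion. It assumes the `openai` Python package (v1.x); the endpoint, API version, and deployment name are placeholders. A prompt that violates policy is rejected with an error before any content streams, while a completion that violates policy ends the stream with a `content_filter` finish reason.

```python
# A minimal sketch, assuming the `openai` Python package (v1.x) and an
# Azure OpenAI resource; the deployment name below is hypothetical.
import os
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumption: any version that supports chat completions
)

try:
    stream = client.chat.completions.create(
        model="my-gpt-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "Write a short poem about the sea."}],
        stream=True,
    )
    for chunk in stream:
        if not chunk.choices:
            continue
        choice = chunk.choices[0]
        # With default filtering, content arrives in already-vetted "content chunks".
        if choice.delta and choice.delta.content:
            print(choice.delta.content, end="")
        # A completion that violates policy ends the stream with this finish reason.
        if choice.finish_reason == "content_filter":
            print("\n[Response truncated by the content filtering policy]")
except BadRequestError as err:
    # A prompt that violates policy is rejected before any content streams.
    print(f"Request blocked by the content filtering policy: {err}")
```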
### Asynchronous modified filter
Customers who have been approved for modified content filters can choose the asynchronous modified filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for zero latency.
> [!NOTE]
> Customers must be aware that while the feature improves latency, it's a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend you consume annotations in your app and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
**Content filtering signal**: The content filtering error signal is delayed. In case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within a ~1,000-character window of the policy-violating content.
Approval for modified content filtering is required for access to the asynchronous modified filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Modified Filter** in the Streaming section.
| | Default filtering | Asynchronous modified filter |
|---|---|---|
| How to enable | Enabled by default, no action needed | Customers approved for modified content filtering can configure it directly in Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
| Modality and availability | Text; all GPT models | Text; all GPT models except gpt-4-vision |
| Streaming experience | Content is buffered and returned in chunks | Zero latency (no buffering, filters run asynchronously) |
| Content filtering signal | Immediate filtering signal | Delayed filtering signal (in up to ~1,000-character increments) |
| Content filtering configurations | Supports default and any customer-defined filter setting (including optional models) | Supports default and any customer-defined filter setting (including optional models) |
### Annotations and sample responses
#### Prompt annotation message
```json
data: {
    ...
}
```
#### Annotation message
The text field will always be an empty string, indicating no new tokens. Annotations will only be relevant to already-sent tokens. There may be multiple annotation messages referring to the same tokens.
`"start_offset"` and `"end_offset"` are low-granularity offsets in text (with 0 being the beginning of the prompt) to mark which text the annotation is relevant to.
`"check_offset"` represents how much text has been fully moderated. It is an exclusive lower bound on the `"end_offset"` values of future annotations. It is non-decreasing.
```json
data: {
    ...
}
```
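As an illustration of how an app might act on these offsets, here is a minimal sketch that tracks `"check_offset"` to separate fully moderated text from text still awaiting vetting. It assumes annotation payloads already parsed into dicts, with the offsets nested under `choices[*]["content_filter_offsets"]` (an assumed nesting; adjust to the payload shape you actually receive) and, per the description above, offsets measured from the beginning of the prompt.

```python
# A minimal sketch, assuming parsed `data:` payloads as dicts and that the
# offsets live under choices[*]["content_filter_offsets"] (assumed nesting).
def track_moderation(events, prompt_offset):
    """prompt_offset: offset where completion text starts, since offsets
    are measured from the beginning of the prompt (0 = start of prompt)."""
    completion_text = ""   # all completion tokens received so far
    checked_offset = 0     # text before this offset has been fully moderated

    for event in events:
        for choice in event.get("choices", []):
            # Token message: new, not-yet-vetted completion text.
            content = choice.get("delta", {}).get("content")
            if content:
                completion_text += content
            # Annotation message: empty text, refers to already-sent tokens.
            offsets = choice.get("content_filter_offsets")
            if offsets:
                # "check_offset" is non-decreasing; max() keeps this robust.
                checked_offset = max(checked_offset, offsets["check_offset"])
        vetted_len = max(0, checked_offset - prompt_offset)
        # An app could display completion_text[:vetted_len] as vetted and
        # hold back or visually mark the remainder until it's moderated.
        yield completion_text[:vetted_len], completion_text[vetted_len:]
```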
#### Sample response stream (passes filters)
Below is a real chat completion response using the asynchronous modified filter. Note how the prompt annotations are not changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens; they are instead associated with certain content filter offsets.
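Each line of such a response stream is a server-sent event prefixed with `data:`, and the stream ends with `data: [DONE]`. The following is a minimal sketch of decoding those lines into JSON payloads (for example, to feed the tracker sketched earlier), assuming an iterable of raw SSE lines:

```python
# A minimal sketch: decode raw SSE lines such as 'data: {...}' into dicts.
import json

def iter_events(lines):
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and any non-data fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # terminal sentinel for the stream
        yield json.loads(payload)
```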
As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms: