
Commit 816f928

Typo fix, correct heading style
1 parent 76618b6 commit 816f928

1 file changed

Lines changed: 3 additions & 3 deletions

File changed: learn-pr/wwl-data-ai/develop-generative-ai-vision-apps/includes/3-develop-visual-chat-app.md
@@ -4,7 +4,7 @@ The key difference is that prompts for a vision-based chat include multi-part us
 
 ![Diagram of a multi-part prompt being submitted to a model.](../media/multi-part-prompt.png)
 
-## Submitting an image-based prompt using the *Responses* API
+## Submit an image-based prompt using the *Responses* API
 
 To include an image in a prompt using the *Responses* API, specify a URL for a web-based image file, or load a local image, encode its data in Base64 format, and submit a URL in the format `data:image/jpeg;base64,{image_data}` (replacing "jpeg" with "png" or other formats as appropriate).
 

@@ -33,9 +33,9 @@ response = client.responses.create(
 print(response.output_text)
 ```
 
-## Submitting an image-based prompt using the *ChatCompletions* API
+## Submit an image-based prompt using the *ChatCompletions* API
 
-When using the Azure OpenAI endpoint to submit prompts to models that don;t support the *Responses* API, you can use the *CatCompletions* API; like this:
+When using the Azure OpenAI endpoint to submit prompts to models that don't support the *Responses* API, you can use the *ChatCompletions* API, like this:
 
 ```python
 # Read the image data from a local file
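
The Base64 data-URL approach described in the diffed text can be sketched as follows. This is a minimal illustration: the helper names `image_to_data_url` and `build_image_message` are hypothetical (not part of either API), though the multi-part message shape follows the OpenAI chat format the lesson uses:

```python
import base64
from pathlib import Path


def image_to_data_url(path: str) -> str:
    """Encode a local image as a URL of the form data:image/<fmt>;base64,<data>."""
    # Derive the image format from the file extension ("jpg" maps to the "jpeg" media type)
    fmt = Path(path).suffix.lstrip(".").lower()
    if fmt == "jpg":
        fmt = "jpeg"
    # Read the raw bytes and Base64-encode them
    image_data = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return f"data:image/{fmt};base64,{image_data}"


def build_image_message(text: str, data_url: str) -> dict:
    """Build a multi-part user message combining text and an image URL (structure only)."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }
```

The resulting data URL can be passed wherever the API expects a web-based image URL, for example in the `messages` list of a *ChatCompletions* request.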
