Commit 3a741f3

Clarify AI response handling in RAG application
Updated the description of the core function to clarify the handling of AI-generated responses.
1 parent: 8d24238

1 file changed: 1 addition & 1 deletion

File: learn-pr/wwl-data-ai/build-rag-applications-azure-database-postgresql/includes/5-build-rag-application-postgresql-python.md
```diff
@@ -117,7 +117,7 @@ def generate_answer(question, chunks):
 > - Provide Azure settings via env vars and constructor: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, and `azure_deployment`, `api_version`.
 > - Keep `temperature=0` for factual answers. Larger values increase creativity but might reduce accuracy.
 
-This function is the core of the RAG application, handling the interaction with the language model. Notice how retrieved chunks are formatted and included in the context. Additionally, the system prompt is designed to ensure the model adheres to the context provided and reduces hallucination. Finally, the messages are processed by the language model to generate a response using the *LangChain* **invoke** method. The `invoke` method is called with the formatted messages, and the model's response is returned as natural language text.
+This function is the core of the RAG application, handling the interaction with the language model. Notice how retrieved chunks are formatted and included in the context. Additionally, the system prompt is designed to ensure the model adheres to the context provided and reduces AI-generated responses that might be incorrect. Finally, the messages are processed by the language model to generate a response using the *LangChain* **invoke** method. The `invoke` method is called with the formatted messages, and the model's response is returned as natural language text.
 
 ### Tie it together
 
```
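The changed paragraph describes the flow the lesson's `generate_answer` follows: format retrieved chunks into a context block, pair it with a grounding system prompt, and pass the messages to the model via LangChain's `invoke`. A minimal sketch of that shape is below; the function and variable names here are illustrative assumptions, not the module's actual code, and a real run would pass in a configured `AzureChatOpenAI` instance:

```python
def format_context(chunks):
    """Join retrieved chunks into a single context block for the prompt."""
    return "\n\n".join(f"- {chunk}" for chunk in chunks)

# A grounding system prompt: instructs the model to stay within the
# retrieved context, which is what reduces incorrect AI-generated answers.
SYSTEM_PROMPT = (
    "Answer using ONLY the provided context. "
    "If the context does not contain the answer, say you don't know."
)

def build_messages(question, chunks):
    """Build (role, content) pairs in the form LangChain chat models accept."""
    context = format_context(chunks)
    return [
        ("system", SYSTEM_PROMPT),
        ("human", f"Context:\n{context}\n\nQuestion: {question}"),
    ]

def generate_answer(question, chunks, llm):
    """Send the formatted messages to the model with `invoke`.

    `llm` is assumed to be a LangChain chat model (e.g. AzureChatOpenAI
    configured from AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT); the
    returned message's `.content` is the natural-language answer.
    """
    return llm.invoke(build_messages(question, chunks)).content
```

Keeping `build_messages` separate from the `invoke` call makes the prompt assembly easy to inspect and test without hitting the model endpoint.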