Commit 14e8886

Address PR comments by switching Foundry Tools back to AI services
1 parent 2e49382 commit 14e8886

13 files changed

Lines changed: 42 additions & 42 deletions

articles/app-service/includes/deploy-intelligent-apps/deploy-intelligent-apps-linux-dotnet-pivot.md

Lines changed: 2 additions & 2 deletions
@@ -160,9 +160,9 @@ To initialize the kernel, add the following code to the `OpenAI.razor` file.
 
 In this step, you add the using statement and create the kernel in a method that you can use when you send the request to the service.
 
-## 4. Add your Foundry Tool
+## 4. Add your AI service
 
-After the kernel is initialized, you can add your chosen Foundry Tool to the kernel. You define your model and pass in your key and endpoint information that the chosen model consumes. If you plan to use a managed identity with Azure OpenAI, add the service by using the example in the next section.
+After the kernel is initialized, you can add your chosen AI service to the kernel. You define your model and pass in your key and endpoint information that the chosen model consumes. If you plan to use a managed identity with Azure OpenAI, add the service by using the example in the next section.
 
 Use the following code for Azure OpenAI:
 
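For context on the section renamed above: after the kernel is initialized, the tutorial registers an AI service on it by deployment name, endpoint, and key. The sketch below illustrates that registration pattern in plain Python; the class and method names are hypothetical stand-ins, not the Semantic Kernel SDK API.

```python
# Illustrative sketch of "add your AI service to the kernel".
# All names here are hypothetical, not the Semantic Kernel API.
from dataclasses import dataclass, field

@dataclass
class ChatService:
    deployment: str   # model deployment name
    endpoint: str     # Azure OpenAI endpoint URL
    api_key: str      # key, or a managed-identity token in real apps

@dataclass
class Kernel:
    services: dict = field(default_factory=dict)

    def add_service(self, name: str, service: ChatService) -> None:
        # Later requests resolve the registered service by name.
        self.services[name] = service

kernel = Kernel()
kernel.add_service(
    "azure-openai",
    ChatService(
        deployment="gpt-4o-mini",                      # hypothetical deployment
        endpoint="https://example.openai.azure.com/",  # hypothetical endpoint
        api_key="<key-or-managed-identity>",
    ),
)
print("azure-openai" in kernel.services)  # True
```

The point of the pattern is that the kernel owns service resolution, so request-handling code never constructs clients directly.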
articles/app-service/scenario-ai-local-small-language-model.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 ---
 title: Use local small language models (SLMs) in Azure App Service
-description: Deploy a web app with a local small language model (SLM) as a sidecar container to run AI models entirely within your App Service environment. No outbound calls or external Foundry Tool dependencies required.
+description: Deploy a web app with a local small language model (SLM) as a sidecar container to run AI models entirely within your App Service environment. No outbound calls or external AI service dependencies required.
 author: cephalin
 ms.author: cephalin
 ms.service: azure-app-service
@@ -14,7 +14,7 @@ ms.update-cycle: 180-days
 
 # Use a local SLM (sidecar container)
 
-Deploy a web app with a local small language model (SLM) as a sidecar container to run AI models entirely within your App Service environment. No outbound calls or external Foundry Tool dependencies required. This approach is ideal if you have strict data privacy or compliance requirements, as all AI processing and data remain local to your app. App Service offers high-performance, memory-optimized pricing tiers needed for running SLMs in sidecars.
+Deploy a web app with a local small language model (SLM) as a sidecar container to run AI models entirely within your App Service environment. No outbound calls or external AI service dependencies required. This approach is ideal if you have strict data privacy or compliance requirements, as all AI processing and data remain local to your app. App Service offers high-performance, memory-optimized pricing tiers needed for running SLMs in sidecars.
 
 ## Overview
 
@@ -23,7 +23,7 @@ Small Language Models (SLMs) are compact AI models, such as Microsoft's Phi-3 an
 This architecture provides several advantages:
 
 - **Complete data privacy**: All data and AI processing stays within your App Service environment
-- **Zero external dependencies**: No reliance on external Foundry Tools or internet connectivity
+- **Zero external dependencies**: No reliance on external AI services or internet connectivity
 - **Predictable latency**: Responses are consistently fast with no network overhead
 - **Cost control**: Pay only for App Service compute resources, with no per-token charges
 - **Regulatory compliance**: Meet strict data residency and privacy requirements

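The sidecar architecture described in this file means the app talks to the SLM over localhost inside the same App Service instance. The sketch below shows what building such a request might look like; the port, path, and model name are hypothetical and depend on the SLM server packaged in the sidecar image (no request is actually sent here).

```python
# Sketch of an app calling an SLM running as a sidecar container.
# The sidecar listens on localhost, so traffic never leaves the
# App Service environment. URL and model name are hypothetical.
import json

SIDECAR_URL = "http://localhost:11434/v1/chat/completions"  # local only

def build_chat_request(prompt: str, model: str = "phi-3-mini") -> dict:
    """Build an OpenAI-style chat payload for the local SLM endpoint."""
    return {
        "url": SIDECAR_URL,
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("Summarize our return policy.")
print(req["url"].startswith("http://localhost"))  # True: traffic stays local
```

Because the endpoint is local, latency is bounded by the instance's compute rather than by network hops, which is what the "predictable latency" bullet refers to.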
articles/app-service/tutorial-ai-openai-search-dotnet.md

Lines changed: 2 additions & 2 deletions
@@ -17,14 +17,14 @@ ms.update-cycle: 180-days
 
 # Tutorial: Build a retrieval augmented generation app in Azure App Service with Azure OpenAI and Azure AI Search (.NET)
 
-In this tutorial, you'll create a .NET retrieval augmented generation (RAG) application using .NET Blazor, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages Foundry Tools to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
+In this tutorial, you'll create a .NET retrieval augmented generation (RAG) application using .NET Blazor, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages AI services in Azure to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
 
 :::image type="content" source="media/tutorial-ai-openai-search-dotnet/chat-interface.png" alt-text="Screenshot showing the Blazor chat interface in introduction.":::
 
 In this tutorial, you learn how to:
 
 > [!div class="checklist"]
-> * Deploy a Blazor application that uses RAG pattern with Foundry Tools.
+> * Deploy a Blazor application that uses RAG pattern with AI services in Azure.
 > * Configure Azure OpenAI and Azure AI Search for hybrid search.
 > * Upload and index documents for use in your AI-powered application.
 > * Use managed identities for secure service-to-service communication.

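All four RAG tutorials touched by this commit implement the same data flow: retrieve the most relevant document for a question, then ground the model prompt in that document with a citation. The sketch below shows that flow in plain Python; the real tutorials use Azure AI Search for retrieval and Azure OpenAI for generation, and the documents and scoring here are stand-ins to illustrate the shape only.

```python
# Minimal illustration of the RAG pattern the tutorials implement.
# Retrieval is a toy word-overlap score standing in for Azure AI Search;
# the grounded prompt would be sent to Azure OpenAI in the real apps.
DOCS = {
    "benefits.md": "Employees receive 20 vacation days per year.",
    "security.md": "All laptops must use full-disk encryption.",
}

def retrieve(question: str) -> tuple[str, str]:
    """Return (name, text) of the document with the most word overlap."""
    words = set(question.lower().split())
    return max(DOCS.items(),
               key=lambda item: len(words & set(item[1].lower().split())))

def build_prompt(question: str) -> str:
    name, text = retrieve(question)
    # Grounding the prompt in the source enables answers with citations.
    return f"Answer using only this source.\n[{name}]: {text}\nQ: {question}"

prompt = build_prompt("How many vacation days do employees get?")
print("[benefits.md]" in prompt)  # True
```

Hybrid search in the tutorials combines keyword and vector scoring instead of this toy overlap, but the retrieve-then-ground structure is the same.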
articles/app-service/tutorial-ai-openai-search-java.md

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ ms.update-cycle: 180-days
 
 # Tutorial: Build a retrieval augmented generation app in Azure App Service with Azure OpenAI and Azure AI Search (Spring Boot)
 
-In this tutorial, you'll create a Java Retrieval Augmented Generation (RAG) application using Spring Boot, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages Foundry Tools to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
+In this tutorial, you'll create a Java Retrieval Augmented Generation (RAG) application using Spring Boot, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages AI services in Azure to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
 
 > [!TIP]
 > While this tutorial uses Spring Boot, the core concepts of building a RAG application with Azure OpenAI and Azure AI Search apply to any Java web application. If you're using a different hosting option on App Service, such as Tomcat or JBoss EAP, you can adapt the authentication patterns and Azure SDK usage shown here to your preferred framework.
@@ -28,7 +28,7 @@ In this tutorial, you'll create a Java Retrieval Augmented Generation (RAG) appl
 In this tutorial, you learn how to:
 
 > [!div class="checklist"]
-> * Deploy a Spring Boot application that uses RAG pattern with Foundry Tools.
+> * Deploy a Spring Boot application that uses RAG pattern with AI services in Azure.
 > * Configure Azure OpenAI and Azure AI Search for hybrid search.
 > * Upload and index documents for use in your AI-powered application.
 > * Use managed identities for secure service-to-service communication.

articles/app-service/tutorial-ai-openai-search-nodejs.md

Lines changed: 2 additions & 2 deletions
@@ -18,14 +18,14 @@ ms.update-cycle: 180-days
 
 # Tutorial: Build a retrieval augmented generation app in Azure App Service with Azure OpenAI and Azure AI Search (Express.js)
 
-In this tutorial, you'll create a Node.js Retrieval Augmented Generation (RAG) application using Express.js, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages Foundry Tools to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
+In this tutorial, you'll create a Node.js Retrieval Augmented Generation (RAG) application using Express.js, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages AI services in Azure to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
 
 :::image type="content" source="media/tutorial-ai-openai-search-dotnet/chat-interface.png" alt-text="Screenshot showing the Express.js chat interface in introduction.":::
 
 In this tutorial, you learn how to:
 
 > [!div class="checklist"]
-> * Deploy an Express.js application that uses RAG pattern with Foundry Tools.
+> * Deploy an Express.js application that uses RAG pattern with AI services in Azure.
 > * Configure Azure OpenAI and Azure AI Search for hybrid search.
 > * Upload and index documents for use in your AI-powered application.
 > * Use managed identities for secure service-to-service communication.

articles/app-service/tutorial-ai-openai-search-python.md

Lines changed: 2 additions & 2 deletions
@@ -17,14 +17,14 @@ ms.update-cycle: 180-days
 
 # Tutorial: Build a retrieval augmented generation app in Azure App Service with Azure OpenAI and Azure AI Search (FastAPI)
 
-In this tutorial, you'll create a Python Retrieval Augmented Generation (RAG) application using FastAPI, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages Foundry Tools to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
+In this tutorial, you'll create a Python Retrieval Augmented Generation (RAG) application using FastAPI, Azure OpenAI, and Azure AI Search and deploy it to Azure App Service. This application demonstrates how to implement a chat interface that retrieves information from your own documents and leverages AI services in Azure to provide accurate, contextually aware answers with proper citations. The solution uses managed identities for passwordless authentication between services.
 
 :::image type="content" source="media/tutorial-ai-openai-search-dotnet/chat-interface.png" alt-text="Screenshot showing the FastAPI chat interface in introduction.":::
 
 In this tutorial, you learn how to:
 
 > [!div class="checklist"]
-> * Deploy a FastAPI application that uses RAG pattern with Foundry Tools.
+> * Deploy a FastAPI application that uses RAG pattern with AI services in Azure.
 > * Configure Azure OpenAI and Azure AI Search for hybrid search.
 > * Upload and index documents for use in your AI-powered application.
 > * Use managed identities for secure service-to-service communication.
