Commit 2c05c92

Fixed Feedback bug
Revised explanation of the cost and complexity challenges associated with dedicated Azure AI Search instances, emphasizing the benefits of hub-level shared connections.
1 parent d3d8c85 commit 2c05c92

1 file changed

Lines changed: 1 addition & 1 deletion

File tree

learn-pr/wwl-azure/configure-ai-ready-infrastructure-microsoft-foundry/includes/4-implement-hub-level-shared-connections-azure.md

@@ -1,6 +1,6 @@
 ## The cost and complexity challenge of dedicated search
 
-When organizations first adopt AI applications, teams provision separate Azure AI Search instances for each project—one for the customer support chatbot, another for the product recommendation engine, and a third for the knowledge management system. This pattern creates three immediate problems. First, costs multiply unnecessarily: five projects each deploying a Basic tier search instance at **$75** per month generate **$375** in monthly charges when a single Standard tier instance at $250 could serve all five projects with capacity to spare. Second, operational overhead scales linearly with instance count: your operations team monitors performance metrics, applies service updates, and reviews diagnostic logs across five separate services instead of one centralized instance. Third, configuration drift emerges as teams independently adjust query timeouts, scoring profiles, and index schemas without cross-project coordination.
+When organizations first adopt AI applications, teams provision separate Azure AI Search instances for each project—one for the customer support chatbot, another for the product recommendation engine, and a third for the knowledge management system. This pattern creates three immediate problems. First, costs multiply unnecessarily: five projects each deploying a Basic tier search instance generate monthly charges when a single Standard tier instance could serve all five projects with capacity to spare. Second, operational overhead scales linearly with instance count: your operations team monitors performance metrics, applies service updates, and reviews diagnostic logs across five separate services instead of one centralized instance. Third, configuration drift emerges as teams independently adjust query timeouts, scoring profiles, and index schemas without cross-project coordination.
 
 Hub-level shared connections eliminate these problems by connecting one Azure AI Search service to the hub, making it available to all child projects through inherited configuration. With this approach, your five projects query the shared search service using the hub's managed identity for authentication, eliminating the need to distribute and rotate API keys across project code. At the same time, each project maintains logical isolation through separate search indexes: the chatbot project queries the Support Knowledge Base index, the recommendation engine queries the Product Catalog index, and teams never access each other's data despite using the same underlying service.
 
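The shared-connection pattern in the unchanged paragraph above (one search service, managed-identity auth, one index per project) can be sketched in Python. This is an illustrative sketch, not the module's code: the endpoint, project keys, and index names are assumptions, and the Azure SDK calls (`DefaultAzureCredential`, `SearchClient`) are imported lazily so the index-mapping logic stands alone.

```python
# Logical isolation on one shared Azure AI Search service: each child
# project maps to its own index. Project keys and index names here are
# hypothetical, chosen to mirror the examples in the lesson text.
PROJECT_INDEXES = {
    "support-chatbot": "support-knowledge-base",
    "recommendation-engine": "product-catalog",
}


def index_for_project(project: str) -> str:
    """Resolve the only index a given project is allowed to query."""
    return PROJECT_INDEXES[project]


def query_shared_search(project: str, query_text: str):
    """Query the hub's shared search service for one project.

    Authenticates with the hub's managed identity via
    DefaultAzureCredential, so no API keys live in project code.
    Imports are local so the mapping above is testable without the
    Azure SDK installed; the endpoint below is a placeholder.
    """
    from azure.identity import DefaultAzureCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint="https://shared-search.search.windows.net",  # hypothetical
        index_name=index_for_project(project),
        credential=DefaultAzureCredential(),  # hub's managed identity
    )
    return list(client.search(search_text=query_text))
```

Because each project resolves to exactly one index, a chatbot query can never touch the recommendation engine's data even though both hit the same service.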
