
Commit 99ba19a

Merge pull request #306859 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-13 22:00 UTC
2 parents 3ac07dd + 2ed7a91 commit 99ba19a

16 files changed

Lines changed: 529 additions & 82 deletions

articles/azure-netapp-files/maxfiles-concept.md

Lines changed: 5 additions & 5 deletions
@@ -5,13 +5,13 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 07/04/2025
+ms.date: 10/13/2025
 ms.author: anfdocs
 # Customer intent: "As a cloud storage administrator, I want to understand the `maxfiles` limits for Azure NetApp Files, so that I can effectively manage volume capacity and avoid 'out of space' errors when creating new files."
 ---
 # Understand `maxfiles` limits in Azure NetApp Files
 
-Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume, where the service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size and uses the following guidelines.
+Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume. The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size and uses the following guidelines.
 
 - For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.
 - For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a maximum of 2,147,483,632.
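For illustration, the sizing guideline above can be modeled as a quick estimate. This is only a sketch of the documented rule of thumb; the service computes the actual per-volume limit, and the helper name is ours:

```python
# Rough model of the default maxfiles guideline for regular volumes.
# Illustrative only: the service determines the actual per-volume limit.
KIB = 1024
GIB = 1024 ** 3

DEFAULT_MAXFILES = 21_251_126   # default for volumes <= 683 GiB
MAXFILES_CEILING = 2_147_483_632

def estimated_maxfiles(quota_gib: float) -> int:
    """Estimate the default maxfiles limit from a volume quota in GiB."""
    if quota_gib <= 683:
        return DEFAULT_MAXFILES
    # Approximately one file (inode) per 32 KiB of allocated capacity,
    # capped at the documented maximum.
    return min(int(quota_gib * GIB) // (32 * KIB), MAXFILES_CEILING)
```

Under this model, a 1,024 GiB volume works out to roughly 33.5 million inodes.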
@@ -47,14 +47,14 @@ To see the `maxfiles` allocation for a specific volume size, check the **Maximum
 :::image type="content" source="./media/azure-netapp-files-resource-limits/maximum-number-files.png" alt-text="Screenshot of volume overview menu." lightbox="./media/azure-netapp-files-resource-limits/maximum-number-files.png":::
 
 >[!NOTE]
->The maximum number of files metric is reported against the `maxfiles` account quota limit. The metric in Azure Mmonitor might reflect fewer files than metrics provided by the operating system mounting the volume. This behavior is expected.
+>The maximum number of files metric is reported against the `maxfiles` account quota limit. The metric in Azure Monitor might reflect fewer files than metrics provided by the operating system mounting the volume. This behavior is expected.
 
 When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. Adjusting your quota based on this information can create greater inode availability.
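As the note above says, the Azure Monitor metric can differ from what a client reports. On a Linux client that mounts the volume, standard tools show inode consumption; for example (the mount path is a placeholder you substitute):

```shell
# Show inode (file/folder) usage for a mounted filesystem.
# Replace / with your Azure NetApp Files mount path, e.g. /mnt/anfvol.
df -i /
```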

 >[!NOTE]
->If you want to increase the `maxfiles` limit, you must increase the corresponding volume size accordingly. To increase the `maxfiles` limit, contact Microsoft technical support.
+>If you want to increase the `maxfiles` limit for a volume, you must increase the volume's size. If your volume is at the [maximum size](azure-netapp-files-resource-limits.md) and you still need to increase the `maxfiles` limit, contact Microsoft technical support.
 
-You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](azure-netapp-files-resource-limits.md#request-limit-increase) for the volume.
+You can't set `maxfiles` limits for data protection volumes with a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](azure-netapp-files-resource-limits.md#request-limit-increase) for the volume.
 
 ## Next steps

articles/container-apps/TOC.yml

Lines changed: 24 additions & 20 deletions
@@ -62,14 +62,6 @@
 - name: Jobs
   href: jobs.md
-- name: Dynamic sessions
-  items:
-  - name: Overview
-    href: sessions.md
-  - name: Usage
-    href: sessions-usage.md
-  - name: Session pools
-    href: session-pool.md
 - name: Microservices
   href: microservices.md
 - name: Planned maintenance
@@ -188,6 +180,30 @@
   href: troubleshoot-container-start-failures.md
 - name: Reliability in Azure Container Apps
   href: ../reliability/reliability-azure-container-apps.md?toc=/azure/container-apps/toc.json&bc=/azure/container-apps/breadcrumb/toc.json
+- name: AI integration
+  items:
+  - name: AI integration
+    href: ai-integration.md
+  - name: GPUs
+    items:
+    - name: Serverless GPUs
+      href: gpu-serverless-overview.md
+    - name: GPU types
+      href: gpu-types.md
+    - name: Tutorials
+      items:
+      - name: Generate images with serverless GPUs
+        href: gpu-image-generation.md
+      - name: Deploy an NVIDIA Llama3 NIM
+        href: serverless-gpu-nim.md
+  - name: Dynamic sessions
+    items:
+    - name: Overview
+      href: sessions.md
+    - name: Usage
+      href: sessions-usage.md
+    - name: Session pools
+      href: session-pool.md
 - name: Observability
   items:
   - name: Overview
@@ -300,18 +316,6 @@
   href: workload-profiles-manage-cli.md
 - name: Portal
   href: workload-profiles-manage-portal.md
-- name: GPUs
-  items:
-  - name: Serverless GPUs
-    href: gpu-serverless-overview.md
-  - name: GPU types
-    href: gpu-types.md
-  - name: Tutorials
-    items:
-    - name: Generate images with serverless GPUs
-      href: gpu-image-generation.md
-    - name: Deploy an NVIDIA Llama3 NIM
-      href: serverless-gpu-nim.md
 - name: Microservices
   items:
   - name: Developing with Dapr
Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
+---
+title: AI integration with Azure Container Apps
+description: Examples for running AI workloads in Azure Container Apps, including GPU-powered inference, dynamic sessions, and deploying Azure AI Foundry models.
+author: jefmarti
+ms.author: jefmarti
+ms.service: azure-container-apps
+ms.date: 10/03/2025
+ms.topic: article
+---
+
+# AI integration with Azure Container Apps
+
+Azure Container Apps is a serverless container platform that simplifies the deployment and scaling of microservices and AI-powered applications. With native support for GPU workloads, seamless integration with Azure AI services, and flexible deployment options, it is an ideal platform for building intelligent, cloud-native solutions.
+
+## GPU-powered inference
+
+Use GPU-accelerated workload profiles to meet a variety of your AI workload needs, including:
+
+- **[Serverless GPUs](./gpu-serverless-overview.md)**: Ideal for variable traffic scenarios and cost-sensitive inference workloads.
+- **Dedicated GPUs**: Best for continuous, low-latency inference scenarios.
+- **Scale to zero**: Automatically scale down idle GPU resources to minimize costs.
+
+## Dynamic sessions for AI-generated code
+
+Dynamic sessions provide a secure, isolated environment for executing AI-generated code. They're a good fit for scenarios like sandboxed execution, code evaluation, and AI agents.
+
+Supported session types include:
+- **[Platform-managed built-in containers](./sessions-code-interpreter.md)**: A platform-managed container that supports executing code in Python and Node.js.
+- **[Custom containers](./sessions-custom-container.md)**: Create a session pool using a custom container for specialized workloads or additional language support.
+
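Session pools are typically driven over a REST endpoint. As a rough sketch, a request for executing inline code against a code-interpreter session pool could be built as follows; the endpoint path, `api-version`, and payload keys are assumptions based on the preview API, so verify them against the current dynamic sessions documentation:

```python
import json

def build_code_execute_request(pool_endpoint: str, session_id: str, code: str) -> tuple:
    """Build (url, json_body) for a synchronous inline code-execution call.

    ASSUMPTIONS: the path, api-version, and payload keys below mirror the
    preview dynamic sessions API and may differ from the current release.
    """
    url = (
        f"{pool_endpoint}/code/execute"
        f"?api-version=2024-02-02-preview&identifier={session_id}"
    )
    body = {
        "properties": {
            "codeInputType": "inline",
            "executionType": "synchronous",
            "code": code,
        }
    }
    return url, json.dumps(body)
```

You would POST the body to the URL with a Microsoft Entra bearer token; `pool_endpoint` stands for the pool management endpoint shown on the session pool resource.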
+## Deploying Azure AI Foundry models
+
+Azure Container Apps integrates with Azure AI Foundry, which enables you to deploy curated AI models directly into your containerized environments. This integration simplifies model deployment and management, making it easier to build intelligent applications on Container Apps.
+
+### Sample projects
+
+The following examples demonstrate AI integration with Azure Container Apps. These samples showcase various AI capabilities, including OpenAI integration, multi-agent coordination, and retrieval-augmented generation (RAG) using Azure AI Search. For more samples, visit the [template library](https://azure-sdk.github.io/awesome-azd/?name=azure+container+apps).
+
+| Sample | Description |
+|--------|-------------|
+| [Chat app with Azure OpenAI](https://github.com/Azure-Samples/container-apps-openai) | ChatGPT-like apps using OpenAI, LangChain, ChromaDB, and Chainlit deployed to ACA using Terraform. |
+| [Host an MCP server](https://github.com/Azure-Samples/azure-container-apps-ai-mcp) | Demonstrates multi-agent coordination using the MCP protocol with Azure OpenAI and GitHub models in Container Apps. |
+| [MCP client and server](https://github.com/Azure-Samples/openai-mcp-agent-dotnet) | .NET-based MCP agent app using Azure OpenAI with a TypeScript MCP server, both hosted on ACA. |
+| [Remote MCP server](https://github.com/Azure-Samples/mcp-container-ts) | TypeScript-based MCP server template for Container Apps, ideal for building custom AI toolchains. |
+| [Dynamic session Python code interpreter](https://github.com/Azure-Samples/aca-python-code-interpreter-session) | Dynamic session for executing Python code in a secure environment. |
+
+## Related content
+- [Multiple-agent workflow automation](/azure/architecture/ai-ml/idea/multiple-agent-workflow-automation)
articles/planetary-computer/TOC.yml

Lines changed: 2 additions & 0 deletions
@@ -10,6 +10,8 @@
   href: stac-overview.md
 - name: Supported data types
   href: supported-data-types.md
+- name: Service usage meters
+  href: service-usage-meters.md
 expanded: true
 - name: Deploy & set up
   items:

articles/planetary-computer/data-cube-overview.md

Lines changed: 43 additions & 22 deletions
@@ -13,64 +13,85 @@ ms.custom:
 ---
 # Data cubes in Microsoft Planetary Computer Pro
 
-As mentioned in [Supported Data Types](./supported-data-types.md), Microsoft Planetary Computer Pro supports ingestion, cloud optimization, and visualization of data cube files in NetCDF, HDF5, and GRIB2 formats. Though complex and historically cumbersome on local storage, these assets are optimized for cloud environments with Planetary Computer Pro, further empowering them as efficient tools to structure and store multidimensional data like satellite imagery and climate models.
+As mentioned in [Supported Data Types](./supported-data-types.md), Microsoft Planetary Computer Pro supports ingestion, cloud optimization, and visualization of data cube files in NetCDF, HDF5, Zarr, and GRIB2 formats. Though complex and historically cumbersome on local storage, these assets are optimized for cloud environments with Planetary Computer Pro, further empowering them as efficient tools to structure and store multidimensional data like satellite imagery and climate models.
 
-## Handling data cubes in Planetary Computer Pro
+## Ingestion of data cubes
 
-Data cube files can be ingested into Planetary Computer Pro in the same way as other raster data types. As with other date formats, assets and associated Spatio Temporal Asset Catalog (STAC) Items must first be stored in Azure Blob Storage. Unlike other two-dimensional raster assets, however, additional processing will occur upon ingestion of certain data cube formats (NetCDF and HDF5).
+Data cube files can be ingested into Planetary Computer Pro in the same way as other raster data types. As with other data formats, assets and associated SpatioTemporal Asset Catalog (STAC) items must first be stored in Azure Blob Storage. Unlike other two-dimensional raster assets, however, more cloud optimization steps occur upon ingestion of certain data cube formats (NetCDF and HDF5).
 
 > [!NOTE]
-> GRIB2 data will be ingested in the same way as other two-dimensional raster data (with no additional enrichment), as they are essentially a collection of 2D rasters with an associated index file that references the data efficiently in cloud environments.
+> GRIB2 data is ingested in the same way as other two-dimensional raster data (with no other cloud optimization steps), as GRIB2 files are essentially a collection of 2D rasters with an associated index file that references the data efficiently in cloud environments. Similarly, Zarr is already a cloud-native format, so no optimization takes place upon ingestion.
 
-## Enabling data cube enrichment of STAC assets
+## Cloud optimization of data cubes
 
-When a STAC Item containing NetCDF or HDF5 assets is ingested, those assets can be enriched with data cube functionality. When data cube functionality is enabled, a Kerchunk manifest is generated and stored in blob storage alongside the asset, enabling more efficient data access.
+When a STAC item containing NetCDF or HDF5 assets is ingested, the assets are cloud optimized, not by transforming the data itself, but rather by generation of reference files that enable more efficient data access.
 
-### Data cube enrichment and Kerchunk manifests
+### Cloud optimization via Kerchunk manifests
 
-For STAC assets in **NetCDF** or **HDF5** formats, Planetary Computer can apply **Data cube enrichment** during ingestion. This process generates a **Kerchunk manifest**, which is stored in blob storage alongside the asset. The Kerchunk manifest enables efficient access to chunked dataset formats.
+Unlike 2D raster data, which is transformed into Cloud Optimized GeoTIFFs (COGs) when ingested into Planetary Computer Pro, data cube assets are optimized by generation of reference files, or Kerchunk manifests. [Kerchunk](https://fsspec.github.io/kerchunk/) is an open-source Python library that creates these chunk manifests: JSON files that describe the structure of the data cube and its chunks using Zarr-style chunk keys that map to the byte ranges in the original file where those chunks reside. Once generated, the Kerchunk files are stored in blob storage alongside the assets, and the STAC items are enriched to include references to these manifests, optimizing data access for cloud environments.
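To make the reference-file idea concrete, here's a minimal sketch of what a Kerchunk v1 manifest contains: Zarr-style keys map either to inline metadata or to `[url, offset, length]` byte ranges in the original file. The URL, variable name, and offsets below are invented for illustration:

```python
import json

# Hypothetical Kerchunk v1 reference manifest for a NetCDF/HDF5 asset.
# Chunk keys map to [source_url, byte_offset, byte_length], so a reader
# fetches only the byte ranges it needs instead of the whole file.
manifest = {
    "version": 1,
    "refs": {
        ".zgroup": json.dumps({"zarr_format": 2}),
        "temperature/.zarray": json.dumps({
            "shape": [2, 4], "chunks": [1, 4], "dtype": "<f4",
            "compressor": None, "filters": None, "fill_value": None,
            "order": "C", "zarr_format": 2,
        }),
        "temperature/0.0": ["https://example.blob.core.windows.net/data/asset.nc", 4096, 16],
        "temperature/1.0": ["https://example.blob.core.windows.net/data/asset.nc", 4112, 16],
    },
}

def chunk_byte_range(manifest: dict, key: str) -> tuple:
    """Return the (offset, length) a reader would request for one chunk."""
    _, offset, length = manifest["refs"][key]
    return offset, length
```

Readers built on fsspec and Zarr can open a manifest like this and issue ranged HTTP reads against the original blob instead of downloading the whole file.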

-### Enabling data cube enrichment
+### STAC item properties that trigger cloud optimization
 
-Data cube enrichment is **enabled** for applicable assets in the STAC item JSON. For each asset, enrichment is triggered if both of the following conditions are met:
+Within the collection's STAC items, the following conditions must be true for a data cube asset to be cloud optimized:
 
 * The asset format is one of the following types:
   - `application/netcdf`
   - `application/x-netcdf`
   - `application/x-hdf5`
 * The asset has a `roles` field that includes either `data` or `visual` within its list of roles.
 
-If these conditions are met, a **Kerchunk manifest** (`assetid-kerchunk.json`) is generated in blob storage alongside the asset.
+If these conditions are met, a Kerchunk manifest (`assetid-kerchunk.json`) is generated in blob storage alongside the asset.
 
 > [!NOTE]
 > The asset format type `application/x-hdf` often corresponds to HDF4 assets. GeoCatalog ingestion doesn't currently support creating virtual Kerchunk manifests for HDF4 due to its added complexity and multiple variants.
 
-### Data cube enrichment modifies the STAC item JSON
+### STAC item enrichment
 
-For each enriched asset within the **STAC item JSON**, the following fields are added:
+For each optimized asset within the STAC item, the following fields are added:
 
 - `msft:datacube_converted: true` – Indicates that enrichment was applied.
 - `cube:dimensions` – A dictionary listing dataset dimensions and their properties.
 - `cube:variables` – A dictionary describing dataset variables and their properties.
 
+These variables should be used for render configurations to ensure that your visualization of data cube assets in the Explorer reads and renders your data most efficiently.
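As a concrete illustration of those fields, an optimized asset's entry might be enriched with properties shaped like the following; the dimension and variable names here are invented, and the actual values are derived from your dataset:

```json
{
  "msft:datacube_converted": true,
  "cube:dimensions": {
    "time": { "type": "temporal", "extent": ["2020-01-01T00:00:00Z", "2020-12-31T00:00:00Z"] },
    "lat": { "type": "spatial", "axis": "y" },
    "lon": { "type": "spatial", "axis": "x" }
  },
  "cube:variables": {
    "pr": { "dimensions": ["time", "lat", "lon"], "type": "data" }
  }
}
```

This mirrors the shape defined by the STAC Datacube extension, which specifies `cube:dimensions` and `cube:variables`.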

-### Disabling data cube enrichment
+### Benefits of cloud optimized data cubes
 
-To **disable enrichment** for an asset, remove `data` and `visual` from the asset’s `roles` list in the STAC item JSON before ingestion.
+Data cube cloud optimization improves data access performance, especially for visualization workflows. When a Kerchunk manifest is present, it allows faster access compared to loading the entire dataset file.
 
-### Handling enrichment failures
+The Microsoft Planetary Computer Pro Explorer and tiling APIs preferentially use the Kerchunk manifest for data read operations if one exists in the same blob storage directory as the original asset.
 
-If Data cube enrichment fails, the asset can be **re-ingested** with enrichment disabled by updating the STAC item JSON to exclude the `data` or `visual` role before retrying ingestion.
+Reading data using a chunked, reference-based approach is faster because it avoids reading the entire file into memory.
 
-### Why enable data cube enrichment?
+### Disabling data cube cloud optimization
 
-Enabling Data cube enrichment improves **data access performance**, especially for visualization workflows. When a Kerchunk manifest is present, it allows **faster access** compared to loading the entire dataset file.
+If you decide you don't want to work with cloud optimized data cube assets, disable cloud optimization by removing `data` and `visual` from the asset’s `roles` list in the STAC item JSON before ingestion.
 
-### Faster dataset access for data APIs and visualization with Kerchunk
+## Zarr ingestion and data updates
 
-The Data Explorer and tiling APIs preferentially use the **Kerchunk manifest (`.json`)** for data read operations if one exists in the same blob storage directory as the original asset. Instead of opening the full `.nc` file, we use a **Zarr with reference files** to access only the necessary data.
+As previously mentioned, Zarr is inherently a cloud-native format, so no extra optimization occurs when it's ingested and no modification of its STAC items is necessary. However, if you plan to dynamically update your Zarr assets and reingest STAC items to work with the latest version, you need to be aware of two update methods: **Append** and **Sync**.
 
-Reading data using a chunked, reference-based approach is faster because it avoids reading the entire file into memory.
+### Append
+
+If you add new data to a locally stored Zarr store and want to update the version stored in Planetary Computer Pro, you need to reingest the STAC item. When that item is reingested, the default behavior is to review the assets for any new data and add it to the data stored in the cloud. No modification to the STAC item is necessary before reingestion.
+
+### Sync
+
+If you remove data from a locally stored Zarr store, reingesting the same STAC item won't make the cloud-based version match the version on your machine, because the **append** functionality looks for new data but doesn't adjust for missing data. That's where **sync** comes into play. By modifying the STAC item to include a parameter that indicates you want to sync the existing data with the new, and then reingesting that modified STAC item, only the most up-to-date data from the Zarr store is available in Planetary Computer Pro. The modification to the STAC item should appear as follows:
+
+```json
+{
+  ...
+  "assets": {
+    "pr": {
+      "href": "https://managedstorage.azure.com/collection-container/somestuff/pr.zarr",
+      "msft:ingestion": {
+        "directory": "sync"
+      }
+    }
+  }
+}
+```
 
 ## Related content
