articles/azure-functions/durable/durable-functions-best-practice-reference.md
8 additions & 6 deletions
@@ -15,7 +15,7 @@ This article details some best practices when using Durable Functions. It also d
### Use the latest version of the Durable Functions extension and SDK
There are two components that a function app uses to execute Durable Functions. One is the *Durable Functions SDK* that allows you to write orchestrator, activity, and entity functions using your target programming language. The other is the *Durable extension*, which is the runtime component that actually executes the code. Except for .NET in-process apps, the SDK and the extension are versioned independently.
Staying up to date with the latest extension and SDK ensures your application benefits from the latest performance improvements, features, and bug fixes. Upgrading to the latest versions also ensures that Microsoft can collect the latest diagnostic telemetry to help accelerate the investigation process when you open a support case with Azure.
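For non-.NET apps that use extension bundles, staying current is typically a matter of referencing a recent bundle range in host.json. A minimal sketch (the version range shown is illustrative; check the latest supported bundle range for your app):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```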
@@ -31,17 +31,17 @@ The [replay](durable-functions-orchestrations.md#reliability) behavior of orches
### Familiarize yourself with your programming language's Azure Functions performance settings
_Using default settings_, the language runtime you select may impose strict concurrency restrictions on your functions. For example: only allowing one function to execute at a time on a given VM. These restrictions can usually be relaxed by _fine tuning_ the concurrency and performance settings of your language. If you're looking to optimize the performance of your Durable Functions application, you need to familiarize yourself with these settings.
The following nonexhaustive list covers languages that often benefit from fine-tuning their performance and concurrency settings, along with guidelines for doing so.
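As one hedged illustration for Python apps, worker concurrency is commonly adjusted through app settings such as `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT`. The values below are illustrative, not recommendations; shown as a local.settings.json fragment, though in Azure you would set these as application settings:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_PROCESS_COUNT": "4",
    "PYTHON_THREADPOOL_THREAD_COUNT": "8"
  }
}
```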
Multiple Durable Function apps can share the same storage account. By default, the name of the app is used as the task hub name, which ensures that accidental sharing of task hubs won't happen. If you need to explicitly configure task hub names for your apps in host.json, you must ensure that the names are [*unique*](durable-functions-task-hubs.md#multiple-function-apps). Otherwise, the multiple apps compete for messages, which could result in undefined behavior, including orchestrations getting unexpectedly "stuck" in the Pending or Running state.
The only exception is if you deploy *copies* of the same app in [multiple regions](durable-functions-disaster-recovery-geo-distribution.md); in this case, you can use the same task hub for the copies.
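If you do configure explicit task hub names, a minimal host.json sketch looks like the following (the hub name is a placeholder; it must be unique per app sharing the storage account):

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "MyUniqueTaskHubName"
    }
  }
}
```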
@@ -55,13 +55,15 @@ You can run into memory issues if you provide large inputs and outputs to and fr
Inputs and outputs to Durable Functions APIs are serialized into the orchestration history. This means that large inputs and outputs can, over time, greatly contribute to an orchestrator history growing unbounded, which risks causing memory exceptions during [replay](durable-functions-orchestrations.md#reliability).
Activity functions returning complex API responses (such as Microsoft Graph result sets) can cause extreme memory usage during serialization. Selecting only required fields and returning a simple DTO avoids this issue.
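A hedged sketch of this trimming step in plain Python (the field names are illustrative assumptions, not a real Graph schema):

```python
# Trim a large API response (for example, a Microsoft Graph user object)
# down to a small DTO before returning it from an activity function.
# REQUIRED_FIELDS is a hypothetical selection, not a real schema.

REQUIRED_FIELDS = ("id", "displayName", "mail")

def to_dto(api_result: dict) -> dict:
    """Keep only the fields the orchestrator actually needs."""
    return {field: api_result.get(field) for field in REQUIRED_FIELDS}

# A large raw response with fields the orchestrator never uses.
raw = {
    "id": "42",
    "displayName": "Ada",
    "mail": "ada@example.com",
    "aboutMe": "x" * 10_000,      # large payload we don't want in history
    "skills": ["math"] * 500,
}

dto = to_dto(raw)  # small object: cheap to serialize into the history
```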
To mitigate the impact of large inputs and outputs to APIs, you may choose to delegate some work to sub-orchestrators. This helps load balance the history memory burden from a single orchestrator to multiple ones, therefore keeping the memory footprint of individual histories small.
That said, the best practice for dealing with _large_ data is to keep it in external storage and to only materialize that data inside activities, when needed. With this approach, instead of communicating the data itself as inputs and outputs of Durable Functions APIs, you pass a lightweight identifier that lets you retrieve that data from external storage when needed in your activities.
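The identifier-passing approach can be sketched as follows. This is a minimal illustration: the dict stands in for external storage such as Azure Blob Storage, and `store_payload`/`load_payload` are hypothetical names, not SDK APIs:

```python
# Hedged sketch of the "pass an identifier, not the data" pattern.
import uuid

_external_store = {}  # stand-in for external storage (e.g., Blob Storage)

def store_payload(data: bytes) -> str:
    """Persist data externally and return a lightweight identifier."""
    blob_id = str(uuid.uuid4())
    _external_store[blob_id] = data
    return blob_id

def load_payload(blob_id: str) -> bytes:
    """Materialize the data inside an activity, only when it's needed."""
    return _external_store[blob_id]

# Orchestrations pass only blob_id between functions, keeping histories small.
blob_id = store_payload(b"large report" * 1_000)
report = load_payload(blob_id)
```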
### Keep Entity data small
Just like for inputs and outputs to Durable Functions APIs, if an entity's explicit state is too large, you may run into memory issues. In particular, an Entity state needs to be serialized and deserialized from storage on any request, so large states add serialization latency to each invocation. Therefore, if an Entity needs to track large data, it's recommended to offload the data to external storage and track some lightweight identifier in the entity that allows you to materialize the data from storage when needed.
### Fine tune your Durable Functions concurrency settings
@@ -110,7 +112,7 @@ Starting in v2.3.0 of the Durable extension, logs emitted by the underlying Dura
Azure Function App Diagnostics is a useful resource in the Azure portal for monitoring and diagnosing potential issues in your application. It also provides suggestions to help resolve problems based on the diagnosis. For more information, see [Azure Function App Diagnostics](function-app-diagnostics.md).
#### Durable Functions Orchestration traces
The Azure portal provides orchestration trace details to help you understand the status of each orchestration instance and trace the end-to-end execution. When you look at the list of functions inside your Azure Functions app, you see a **Monitor** column that contains links to the traces. You must have Application Insights enabled for your app to get this information.
articles/azure-vmware/architecture-api-management.md
2 additions & 2 deletions
@@ -3,14 +3,14 @@ title: Architecture - API Management
description: Learn how API Management protects APIs running on Azure VMware Solution virtual machines (VMs)
ms.topic: concept-article
ms.service: azure-vmware
ms.date: 1/14/2026
ms.custom: engagement-fy23
# Customer intent: As a DevOps engineer, I want to implement API Management for Azure VMware Solution VMs, so that I can securely publish and protect APIs for both internal and external consumers while ensuring optimal traffic flow and management using Azure services.
---
# Publish and protect APIs running on Azure VMware Solution VMs
Microsoft Azure [API Management](https://azure.microsoft.com/services/api-management/) lets you securely publish to external or internal consumers. Only the Developer (development) and Premium (production) SKUs allow Azure Virtual Network integration to publish APIs that run on Azure VMware Solution workloads. In addition, both SKUs enable the connectivity between the API Management service and the backend.
The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. API Management also configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on Azure VMware Solution.
description: Options for Azure VMware Solution Internet Connectivity.
ms.topic: concept-article
ms.service: azure-vmware
ms.date: 1/14/2026
ms.custom: engagement-fy23
# Customer intent: As a network architect, I want to evaluate different methods for enabling internet connectivity for Azure VMware Solution, so that I can make an informed decision based on security, visibility, and capacity requirements for my organization's cloud infrastructure.
---
@@ -31,7 +31,7 @@ Use any of these patterns to provide an outbound SNAT service with the ability t
The same service can also consume an Azure Public IP and create an inbound DNAT from the Internet towards targets in Azure VMware Solution.
You can also build an environment that uses multiple paths for Internet traffic: one for outbound SNAT (for example, a third-party security NVA), and another for inbound DNAT (such as a third-party load balancer NVA using SNAT pools for return traffic).
## Azure VMware Solution Managed SNAT
@@ -57,19 +57,19 @@ Features include:
- Scale – you can request to increase the soft limit of 64 Azure Public IPv4 addresses to thousands of Azure Public IPs allocated if an application requires it.
- Flexibility – an Azure Public IPv4 address can be applied anywhere in the NSX ecosystem. It can be used to provide SNAT or DNAT, on load balancers like VMware’s NSX Advanced Load Balancer, or third-party Network Virtual Appliances. It can also be used on third-party Network Virtual Security Appliances on VMware segments or directly on VMs.
- Regionality – the Azure Public IPv4 address to NSX Edge is unique to the local SDDC. For *multi private cloud in distributed regions* with local exit to Internet intentions, it’s easier to direct traffic locally versus trying to control default route propagation for a security or SNAT service hosted in Azure. If you have two or more Azure VMware Solution private clouds connected with a Public IP configured, they can both have a local exit.
## Considerations for selecting an option
The option that you select depends on the following factors:
- To add an Azure VMware Solution private cloud to a security inspection point provisioned in native Azure that inspects all Internet traffic from Azure native endpoints, use an Azure native construct and leak a default route from Azure to your Azure VMware Solution private cloud.
- If you need to run a third-party Network Virtual Appliance to conform to existing standards for security inspection or streamlined operating expenses, you have two options. You can run your Azure Public IPv4 address in Azure native with the default route method or run it in Azure VMware Solution using Azure Public IPv4 address to NSX Edge.
- There are scale limits on how many Azure Public IPv4 addresses can be allocated to a Network Virtual Appliance running in native Azure or provisioned on Azure Firewall. The Azure Public IPv4 address to NSX Edge option allows for higher allocations (thousands versus hundreds).
- Use an Azure Public IPv4 address to the NSX Edge for a localized exit to the internet from each private cloud in its local region. When multiple Azure VMware Solution private clouds in several Azure regions need to communicate with each other and the internet, it can be challenging to match an Azure VMware Solution private cloud with a security service in Azure. The difficulty is due to the way a default route from Azure works.
> [!IMPORTANT]
> By design, Public IPv4 Address with NSX doesn't allow the exchange of Azure/Microsoft owned Public IP Addresses over ExpressRoute Private Peering connections. This means you can't advertise the Public IPv4 addresses to your customer virtual network or on-premises network via ExpressRoute. All Public IPv4 Addresses with NSX traffic must take the internet path even if the Azure VMware Solution private cloud is connected via ExpressRoute. For more information, visit [ExpressRoute Circuit Peering](../expressroute/expressroute-circuit-peerings.md).