articles/api-management/breaking-changes/trusted-service-connectivity-retirement-march-2026.md (1 addition, 1 deletion)
@@ -31,7 +31,7 @@ First, check for an Azure Advisor recommendation:
**If you don't see a recommendation**, your API Management gateway isn't affected by the change.
- **If you see a recommendation**, your API Management gateway is affected by the breaking change and you need to take action:
+ **If you see a recommendation**, your API Management gateway has previously sent traffic to the listed Azure services. Because of this, it is considered affected by the breaking change and you need to take action:
1. Determine if your API Management gateway relies on trusted service connectivity to Azure services.
1. If it does, update the networking configuration to eliminate the dependency on trusted service connectivity. If it doesn’t, proceed to the next step.
articles/azure-netapp-files/troubleshoot-volumes.md (6 additions, 0 deletions)
@@ -74,6 +74,12 @@ This section explains the causes of some of the common allocation failures and s
|Out of storage or networking capacity in a region for regular volumes. <br> Error message: `There are currently insufficient resources available to create [or extend] a volume in this region. Please retry the operation. If the problem persists, contact Support.`| The error indicates that there are insufficient resources available in the region to create or resize volumes. <br> Try one of the following workarounds: <ul><li>Create the volume under a new VNet to avoid hitting networking-related resource limits.</li> <li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
|Out of storage capacity when creating a volume with network features set to `Standard`. <br> Error message: `No storage available with Standard network features, for the provided VNet.`| The error indicates that there are insufficient resources available in the region to create volumes with `Standard` networking features. <br> Try one of the following workarounds: <ul><li>If `Standard` network features aren't required, create the volume with `Basic` network features.</li> <li>Try creating the volume under a new VNet to avoid hitting networking-related resource limits</li><li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
+ ## Errors for Access Control List
+
+ | Error conditions | Resolutions |
+ |-|-|
+ | Error when attempting to set NTFS ACLs through the Windows Security tab. <br> Error message: `The program cannot open the required dialogue box because it cannot determine whether the computer is joined to a domain.`| This error indicates that the Azure NetApp Files server can't retrieve domain information from the domain controllers because SYSVOL synchronization is missing among the domain controllers. To resolve this issue, [perform a non-authoritative synchronization of DFSR-replicated SYSVOL](/troubleshoot/windows-server/group-policy/force-authoritative-non-authoritative-synchronization). |
articles/backup/blob-backup-support-matrix.md (1 addition, 1 deletion)
@@ -24,7 +24,7 @@ Operational backup for blobs is available in all public cloud regions, except Fr
# [Vaulted backup](#tab/vaulted-backup)
- Vaulted backup for blobs is available in all public cloud regions. It's also available in China East, China East 2, China East 3, China North 2, China North 3, US GOV Arizona, US GOV Texas, US GOV Virginia, US DoD East, US DoD Central.
+ Vaulted backup for blobs is available in all public cloud regions. It's also available in China East 2, China East 3, China North 2, China North 3, US GOV Arizona, US GOV Texas, US GOV Virginia, US DoD East, US DoD Central.
articles/cost-management-billing/benefits/macc/manage-consumption-commitment.md (26 additions, 8 deletions)
@@ -1,13 +1,13 @@
---
title: Manage a Microsoft Azure Consumption Commitment resource
description: Learn how to manage your Microsoft Azure Consumption Commitment (MACC) resource, including moving it across resource groups or subscriptions.
When you accept a Microsoft Azure Consumption Commitment (MACC) in a Microsoft Customer Agreement, the MACC resource is placed in a subscription and resource group. The resource contains metadata related to the MACC, including its status, commitment amount, start date, end date, and System ID. You can view the metadata in the Azure portal.
The MACC's resource name is part of its Uniform Resource Identifier (URI) and can't be changed. However, you can use [tags](../../../azure-resource-manager/management/tag-resources.md) to help identify the MACC resource based on a nomenclature relevant to your organization.
@@ -62,6 +61,25 @@ A MACC resource may only be deleted if its status is _failed_ or _canceled_. Del
## Cancel MACC
Contact your Microsoft account team if you have questions about canceling your MACC.
+ ## Track your MACC
+
+ If your organization has a MACC associated with a Microsoft Customer Agreement (MCA) or Enterprise Agreement (EA) billing account, you can track key details, including start and end dates, remaining balance, and eligible spend, through the Azure portal or REST APIs. For more information, see [Track your Microsoft Azure Consumption Commitment (MACC)](track-consumption-commitment.md).
+
+ ### View MACC milestones
+
+ If your MACC includes milestones, you can view milestone details in the Azure portal. Navigate to your MACC resource and select the **Milestones** tab to see a detailed breakdown of your commitment milestones. For more information about milestones, see [MACC Milestones](track-consumption-commitment.md#macc-milestones).
+
+ The milestones view displays the following information for each milestone:
+
+ - **End Date**: The deadline for reaching the milestone commitment amount.
+ - **Commitment amount**: The amount that needs to be consumed by the end date.
+ - **Status**: Current status of the milestone (such as Active, Completed, or Failed).
+ - **Automatic Shortfall**: Whether automatic shortfall applies to the milestone.
+ - **Shortfall Amount**: Any shortfall amount if the commitment isn't met (displays when applicable).
articles/data-factory/connector-azure-database-for-postgresql.md (2 additions, 2 deletions)
@@ -6,7 +6,7 @@ ms.author: jianleishen
author: jianleishen
ms.subservice: data-movement
ms.topic: conceptual
- ms.date: 08/05/2025
+ ms.date: 02/09/2026
ms.custom:
- synapse
- sfi-image-nochange
@@ -91,7 +91,7 @@ The following properties are supported for the Azure Database for PostgreSQL lin
| server | Specifies the host name and optionally port on which Azure Database for PostgreSQL is running. | Yes |
| port |The TCP port of the Azure Database for PostgreSQL server. The default value is `5432`. |No |
| database| The name of the Azure Database for PostgreSQL database to connect to. |Yes |
- | sslMode | Controls whether SSL is used, depending on server support. <br/>- **Disable**: SSL is disabled. If the server requires SSL, the connection fails.<br/>- **Allow**: Prefer non-SSL connections if the server allows them, but allow SSL connections.<br/>- **Prefer**: Prefer SSL connections if the server allows them, but allow connections without SSL.<br/>- **Require**: The connection fails if the server doesn't support SSL.<br/>- **Verify-ca**: The connection fails if the server doesn't support SSL. Also verifies server certificate.<br/>- **Verify-full**: The connection fails if the server doesn't support SSL. Also verifies server certificate with host's name. <br/>Options: Disable (0) / Allow (1) / Prefer (2) **(Default)** / Require (3) / Verify-ca (4) / Verify-full (5) | No |
+ | sslMode | Controls whether SSL is used, depending on server support. <br/>- **Disabled**: SSL is disabled. If the server requires SSL, the connection fails.<br/>- **Allow**: Prefer non-SSL connections if the server allows them, but allow SSL connections.<br/>- **Preferred**: Prefer SSL connections if the server allows them, but allow connections without SSL.<br/>- **Required**: The connection fails if the server doesn't support SSL.<br/>- **Verify_ca**: The connection fails if the server doesn't support SSL. Also verifies server certificate.<br/>- **Verify_full**: The connection fails if the server doesn't support SSL. Also verifies server certificate with host's name. <br/>Options: Disabled (0) / Allow (1) / Preferred (2) **(Default)** / Required (3) / Verify_ca (4) / Verify_full (5) | No |
| connectVia | This property represents the [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime.|No|
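These properties appear under `typeProperties` in the linked service definition. The following sketch is illustrative only (values are placeholders, and the exact schema should be checked against the article's full JSON sample); note that `sslMode` takes one of the integer codes from the table:

```json
{
    "name": "AzurePostgreSqlLinkedService",
    "properties": {
        "type": "AzurePostgreSql",
        "typeProperties": {
            "server": "<server host name>",
            "port": 5432,
            "database": "<database name>",
            "username": "<username>",
            "sslMode": 3,
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

Here `"sslMode": 3` corresponds to **Required** in the table above; omit the property to get the default, Preferred (2).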
articles/data-factory/connector-postgresql.md (2 additions, 2 deletions)
@@ -5,7 +5,7 @@ description: Learn how to copy data from PostgreSQL V2 to supported sink data st
author: jianleishen
ms.subservice: data-movement
ms.topic: conceptual
- ms.date: 01/26/2026
+ ms.date: 02/09/2026
ms.author: jianleishen
10
10
ms.custom:
11
11
- synapse
@@ -82,7 +82,7 @@ The following properties are supported for PostgreSQL linked service:
| database | The PostgreSQL database to connect to. | Yes |
| username | The username to connect with. | Yes |
| password | The password to connect with. | Yes |
- | sslMode | Controls whether SSL is used, depending on server support. <br/>- **Disable**: SSL is disabled. If the server requires SSL, the connection will fail.<br/>- **Allow**: Prefer non-SSL connections if the server allows them, but allow SSL connections.<br/>- **Prefer**: Prefer SSL connections if the server allows them, but allow connections without SSL.<br/>- **Require**: Fail the connection if the server doesn't support SSL.<br/>- **Verify-ca**: Fail the connection if the server doesn't support SSL. Also verifies server certificate.<br/>- **Verify-full**: Fail the connection if the server doesn't support SSL. Also verifies server certificate with host's name. <br/>Options: Disable (0) / Allow (1) / Prefer (2) **(Default)** / Require (3) / Verify-ca (4) / Verify-full (5) | No |
+ | sslMode | Controls whether SSL is used, depending on server support. <br/>- **Disabled**: SSL is disabled. If the server requires SSL, the connection will fail.<br/>- **Allow**: Prefer non-SSL connections if the server allows them, but allow SSL connections.<br/>- **Preferred**: Prefer SSL connections if the server allows them, but allow connections without SSL.<br/>- **Required**: Fail the connection if the server doesn't support SSL.<br/>- **Verify_ca**: Fail the connection if the server doesn't support SSL. Also verifies server certificate.<br/>- **Verify_full**: Fail the connection if the server doesn't support SSL. Also verifies server certificate with host's name. <br/>Options: Disabled (0) / Allow (1) / Preferred (2) **(Default)** / Required (3) / Verify_ca (4) / Verify_full (5) | No |
| authenticationType | Authentication type for connecting to the database. Only supports **Basic**. | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
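Because `sslMode` is supplied as an integer code, it's easy to pick the wrong number when authoring linked service JSON by hand. The following hypothetical helper (not part of the connector; names and codes come straight from the table above) makes the mapping explicit:

```python
# Map sslMode option names to the integer codes the linked service expects.
# Codes from the table: Disabled (0) ... Verify_full (5); Preferred (2) is the default.
SSL_MODES = {
    "Disabled": 0,
    "Allow": 1,
    "Preferred": 2,
    "Required": 3,
    "Verify_ca": 4,
    "Verify_full": 5,
}

def ssl_mode_code(name: str = "Preferred") -> int:
    """Return the integer code for an sslMode option name."""
    try:
        return SSL_MODES[name]
    except KeyError:
        raise ValueError(f"Unknown sslMode {name!r}; expected one of {sorted(SSL_MODES)}")

print(ssl_mode_code())            # default: 2 (Preferred)
print(ssl_mode_code("Required"))  # 3
```

A lookup like this can be pasted into whatever script generates your linked service definitions, so a typo fails fast instead of producing a silently wrong SSL setting.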
articles/event-hubs/includes/event-hubs-partition-count.md (17 additions, 2 deletions)
@@ -10,6 +10,21 @@ ms.custom: "include file"
---
- A [partition](../event-hubs-features.md#partitions) is a data organization mechanism that enables parallel publishing and consumption. While it supports parallel processing and scaling, total capacity remains limited by the namespace's scaling allocation. We recommend that you balance scaling units (throughput units for the standard tier, processing units for the premium tier, or capacity units for the dedicated tier) and partitions to achieve optimal scale. In general, we recommend a maximum throughput of 1 MB/s per partition. Therefore, a rule of thumb for calculating the number of partitions would be to divide the maximum expected throughput by 1 MB/s. For example, if your use case requires 20 MB/s, we recommend that you choose at least 20 partitions to achieve the optimal throughput.
+ A [partition](../event-hubs-features.md#partitions) is a data organization mechanism that enables parallel publishing and consumption. While it supports parallel processing and scaling, total capacity remains limited by the namespace's scaling allocation. Balance scaling units (throughput units for the standard tier, processing units for the premium tier, or capacity units for the dedicated tier) and partitions to achieve optimal scale.

- However, if you have a model in which your application has an affinity to a particular partition, increasing the number of partitions isn't beneficial. For more information, see [availability and consistency](../event-hubs-availability-and-consistency.md).
+ Start with your workload profile: average payload size, events per second, and sensitivity to throughput drops or latency spikes. Use the per-partition throughput below as a starting point, then validate with load tests:
+
+ - **Standard tier**: ~1 MB/s ingress and ~2 MB/s egress per partition.
+ - **Premium and Dedicated tiers**: ~1-2 MB/s ingress and ~2-5 MB/s egress per partition.
+
+ Estimate partitions by dividing your expected ingress and egress by the applicable per-partition rates and taking the larger result. If observed throughput or latency doesn't meet expectations, increase partitions (Premium and Dedicated tiers only) and retest.
+
+ Partitions also set the ceiling for consumer parallelism. How that ceiling works depends on the consumer type:
+
+ - **Epoch (exclusive) consumers**: Used by `EventProcessorClient` (.NET, Java) and `EventHubConsumerClient` (Python, JavaScript), which is the recommended pattern for production AMQP workloads. Only one epoch consumer can own a given partition in a consumer group at a time. If you deploy more processor instances than partitions, the extra instances aren't assigned any partitions and sit idle until an existing owner releases one. If a new epoch consumer connects with a higher owner level, the service disconnects the current owner with a `ConsumerDisconnected` error, and the new consumer takes over.
+ - **Non-epoch consumers**: Up to five non-epoch receivers can read the same partition concurrently within a consumer group. Each receiver sees the same events (fan-out), so this mode doesn't increase processing throughput per partition. Connecting an epoch consumer to a partition disconnects all non-epoch consumers on that partition.
+ - **Kafka consumers**: Kafka consumers use the group coordination protocol (`group.id`) instead of AMQP epochs, but the partition-ownership model is equivalent: each partition is assigned to exactly one consumer member within a consumer group at a time. When a new member joins or an existing member leaves, the group rebalances and redistributes partition assignments. If there are more consumer members than partitions, the excess members receive no assignments and remain idle until a future rebalance frees up a partition. To reduce unnecessary rebalancing from transient disconnections, set a unique `group.instance.id` per consumer instance (static membership).
+
+ In practice, **the number of partitions equals the maximum number of parallel consumers per consumer group**, regardless of whether you use AMQP epoch consumers or Kafka consumers. Factor this into your partition count when you plan for scale-out.
+
+ If your application has an affinity to a particular partition, increasing the number of partitions isn't beneficial. For more information, see [availability and consistency](../event-hubs-availability-and-consistency.md).
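The sizing and parallelism guidance in the added text reduces to a quick back-of-the-envelope calculation. This sketch uses the Standard-tier per-partition rates from the text as defaults; the function names are illustrative, not part of any SDK:

```python
import math

def estimate_partitions(ingress_mbps: float, egress_mbps: float,
                        ingress_per_partition: float = 1.0,
                        egress_per_partition: float = 2.0) -> int:
    """Divide expected ingress and egress by the per-partition rates and
    take the larger result (Standard-tier defaults: ~1 MB/s in, ~2 MB/s out)."""
    return max(math.ceil(ingress_mbps / ingress_per_partition),
               math.ceil(egress_mbps / egress_per_partition))

def active_consumers(partitions: int, members: int) -> int:
    """Partitions cap parallel consumers per consumer group; extra members sit idle."""
    return min(partitions, members)

# 20 MB/s ingress, 30 MB/s egress -> max(ceil(20/1), ceil(30/2)) = 20 partitions.
print(estimate_partitions(20, 30))   # 20
# With 20 partitions and 25 consumer instances, only 20 receive assignments.
print(active_consumers(20, 25))      # 20
```

For Premium or Dedicated namespaces, pass the higher per-partition rates from the text and rerun the estimate, then confirm with a load test before settling on a count.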