|Data types | Windows registry <br> Windows services <br> Linux Daemons <br> Files <br> Software |
+> [!NOTE]
+> Change Tracking and Inventory (CTI) currently does not support configuration to collect data from only specific services (such as selected Windows services or Linux daemons).
+> The service is designed to collect data from all services, and this behavior cannot be customized.
+> Additionally, DCR transformations are not supported for Change Tracking DCRs.
+
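Because collection cannot be scoped to specific services, filtering is done at query time instead. The sketch below, using the Azure CLI against the Log Analytics workspace the machines report to, narrows the collected records to Linux daemon changes; the workspace GUID and the one-day window are placeholders, and the `ConfigurationChange` table/column names are assumptions based on the standard CTI schema.

```azurecli
# Sketch: list recent Linux daemon changes recorded by CTI.
# <workspace-guid> is a placeholder for your Log Analytics workspace ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ConfigurationChange
    | where ConfigChangeType == 'Daemons'
    | project TimeGenerated, Computer, SvcName, SvcState
    | order by TimeGenerated desc
    | take 20" \
  --timespan "P1D"
```

The same pattern applies to the other data types by changing the `ConfigChangeType` filter (for example, `'Software'` or `'Files'`).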
## Limits
The following table shows the tracked item limits per machine for Azure CTI.
@@ -68,6 +73,10 @@ The next table shows the data collection frequency for the types of changes supp
| Linux software | 5 minutes |
| Linux Daemons | 5 minutes |

+> [!NOTE]
+> The ability to customize data collection frequency is limited.
+> Currently, this option is available only for Windows Files and Windows Services, and must adhere to the ranges specified in the preceding table.
+
The following table shows the tracked item limits per machine for Azure CTI.
articles/azure-vmware/azure-vmware-solution-platform-updates.md (+6 -1)
@@ -12,8 +12,13 @@ ms.date: 09/12/2025
Microsoft regularly applies important updates to the Azure VMware Solution for new features and software lifecycle management. You should receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](azure-vmware-solution-private-cloud-maintenance-best-practices.md#host-maintenance-and-lifecycle-management).
-## November 2025
+## December 2025
+
+**Resource Health for Azure VMware Solution**

+Resource Health for Azure VMware Solution is now generally available! Resource Health, an Azure native feature, monitors the health of an Azure VMware Solution private cloud and suggests actions for existing issues. Set up Azure Monitor notifications on top of these alerts to notify stakeholders to remediate issues and ensure the private cloud stays in a healthy and maintainable state. [Learn more](resource-health-for-azure-vmware-solution-overview.md)
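The Azure Monitor notifications mentioned above can be wired up as an activity log alert on Resource Health events. A minimal Azure CLI sketch follows; every name, subscription ID, and the action group are placeholders you would replace with your own resources.

```azurecli
# Sketch: route Resource Health events for an AVS private cloud to an
# existing action group. All names and IDs below are placeholders.
az monitor activity-log alert create \
  --name "avs-resource-health-alert" \
  --resource-group "my-rg" \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.AVS/privateClouds/my-avs" \
  --condition "category=ResourceHealth" \
  --action-group "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/oncall"
```

Scoping the alert to the private cloud resource keeps notifications limited to that private cloud's health events rather than the whole subscription.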
articles/backup/sap-hana-database-instances-backup.md (+13 -7)
@@ -2,7 +2,7 @@
title: Back up SAP HANA database instances on Azure VMs
description: In this article, you'll learn how to back up SAP HANA database instances that are running on Azure virtual machines.
ms.topic: how-to
-ms.date: 11/13/2025
+ms.date: 12/15/2025
ms.service: azure-backup
author: AbhishekMallick-MS
ms.author: v-mallicka
@@ -17,9 +17,7 @@ Azure Backup now performs an SAP HANA storage snapshot-based backup of an entire
>[!Note]
->- Currently, the snapshots are stored on your storage account/operational tier, and aren't stored in the Recovery Services vault. Thus, vault features such as cross-region restore, cross-subscription restore, and security capabilities aren't supported.
->- Original Location Restore (OLR) isn't supported.
->- HANA System Replication (HSR) isn't supported.
+>- You can now store snapshot backups in a Recovery Services vault by using the **Enhanced backup policy (preview)** for HANA snapshot backup. This provides all the vault-level features, such as immutability, soft delete, cross-region restore, and more, for SAP HANA snapshot backups. This policy also ensures faster restores from the **instant tier**.
>- For pricing, as per SAP advisory, you must do a weekly full backup plus logs streaming/Backint-based backup, so the existing protected instance fee and storage cost apply. For snapshot backup, the snapshot data created by Azure Backup is saved in your storage account and incurs snapshot storage charges. Thus, in addition to streaming/Backint backup charges, you're charged per GB of data stored in your snapshots, which is billed separately. Learn more about [Snapshot pricing](https://azure.microsoft.com/pricing/details/managed-disks/) and [Streaming/Backint based backup pricing](https://azure.microsoft.com/pricing/details/backup/?ef_id=_k_CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE_k_&OCID=AIDcmmf1elj9v5_SEM__k_CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE_k_&gclid=CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE).
@@ -85,7 +83,12 @@ To create a policy for the SAP HANA database instance backup, follow these steps
   :::image type="content" source="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png" alt-text="Screenshot that shows a list of policy types." lightbox="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png":::

-1. On the **Create policy** pane, do the following:
+1. On the **Create policy** pane, select the **Policy sub type**. Select the **Enhanced (Preview)** policy to retain your snapshot backups long term in a Recovery Services vault and to use additional security features such as immutability, soft delete, and multi-user authorization (MUA).
+
+   :::image type="content" source="./media/sap-hana-database-instances-backup/hana-vaulted-snapshot-enhanced-policy.png" alt-text="Screenshot that shows policy sub types." lightbox="./media/sap-hana-database-instances-backup/hana-vaulted-snapshot-enhanced-policy.png":::
+
+1. On the **Create policy** pane, enter the following details:

   :::image type="content" source="./media/sap-hana-database-instances-backup/create-policy.png" alt-text="Screenshot that shows the 'Create policy' pane for configuring backup and restore." lightbox="./media/sap-hana-database-instances-backup/create-policy.png":::
@@ -96,7 +99,10 @@ To create a policy for the SAP HANA database instance backup, follow these steps
   >Azure Backup currently supports **Daily** backup only.

1. **Instant Restore**: Set the retention of recovery snapshots from *1* to *35* days. The default value is *2*.
-1. **Resource group**: Select the appropriate resource group in the drop-down list.
+1. **Resource group**: Select the appropriate resource group in the drop-down list.
+1. **Retention range**: If you selected **Enhanced (Preview)** as the policy sub type, you can now provide retention durations for backups stored in the Recovery Services vault. Provide the retention duration for your daily, weekly, monthly, and yearly backup points as required.
+
+   :::image type="content" source="./media/sap-hana-database-instances-backup/create-policy.png" alt-text="Screenshot that shows the 'Create policy' pane for configuring backup and restore." lightbox="./media/sap-hana-database-instances-backup/create-policy.png":::
1. **Managed Identity**: Select a managed identity in the dropdown list to assign permissions for taking snapshots of the managed disks and placing them in the resource group that you selected in the policy.
You can also create a new managed identity for snapshot backup and restore. To create a managed identity and assign it to the VM with SAP HANA database, follow these steps:
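The portal steps referenced here are elided from this diff view; as an alternative, the same identity setup can be sketched with the Azure CLI. Every resource name and the subscription ID below are placeholders, not values from the article.

```azurecli
# Sketch: create a user-assigned managed identity for snapshot backup.
az identity create --resource-group "my-rg" --name "hana-snapshot-identity"

# Attach the identity to the VM that runs the SAP HANA database.
az vm identity assign \
  --resource-group "my-rg" \
  --name "my-hana-vm" \
  --identities "/subscriptions/<sub-id>/resourcegroups/my-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/hana-snapshot-identity"
```

The identity still needs the disk-snapshot permissions on the target resource group, as described in the policy's **Managed Identity** step.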
@@ -149,4 +155,4 @@ Learn how to:
- [Restore SAP HANA database instance snapshots on Azure VMs using the Azure portal](sap-hana-database-instances-restore.md).
- [Manage SAP HANA databases on Azure VMs using the Azure portal](sap-hana-database-manage.md).
- [Manage SAP HANA databases that are backed up by Azure Backup using the Azure CLI](tutorial-sap-hana-manage-cli.md).
-- [Troubleshoot SAP HANA snapshot backup jobs on Azure Backup](sap-hana-database-instance-troubleshoot.md).
+- [Troubleshoot SAP HANA snapshot backup jobs on Azure Backup](sap-hana-database-instance-troubleshoot.md).
articles/data-factory/connector-amazon-redshift.md (+8 -5)
@@ -6,7 +6,7 @@ ms.author: jianleishen
author: jianleishen
ms.subservice: data-movement
ms.topic: conceptual
-ms.date: 11/19/2025
+ms.date: 12/09/2025
ms.custom:
  - synapse
  - sfi-image-nochange
@@ -45,7 +45,7 @@ The connector supports the Windows versions in this [article](create-self-hosted
## Prerequisites

-If you are copying data to an on-premises data store using a [Self-hosted Integration Runtime](create-self-hosted-integration-runtime.md), grant the integration runtime (use the IP address of the machine) access to the Amazon Redshift cluster. See [Authorize access to the cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-authorize-cluster-access.html) for instructions. For version 2.0, your self-hosted integration runtime version should be 5.60 or above.
+If you are copying data to an on-premises data store using a [Self-hosted Integration Runtime](create-self-hosted-integration-runtime.md), grant the integration runtime (use the IP address of the machine) access to the Amazon Redshift cluster. See [Authorize access to the cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-authorize-cluster-access.html) for instructions. For version 2.0, your self-hosted integration runtime version should be 5.61 or above.

If you are copying data to an Azure data store, see [Azure Data Center IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653) for the Compute IP address and SQL ranges used by the Azure data centers.
@@ -96,7 +96,10 @@ The following properties are supported for Amazon Redshift linked service:
| database | Name of the Amazon Redshift database. | Yes |
| username | Name of the user who has access to the database. | Yes |
| password | Password for the user account. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
-| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. For version 2.0, your self-hosted integration runtime version should be 5.60 or above. | No |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No |
+
+> [!Note]
+> Version 2.0 supports Azure Integration Runtime and Self-hosted Integration Runtime version 5.61 or above. Driver installation is no longer needed with Self-hosted Integration Runtime version 5.61 or above.
**Example: version 2.0**
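The JSON payload for this example is elided from the diff view. As an illustrative sketch only — not the article's example — a version 2.0 linked service of the shape described by the property table above could be created with the Azure CLI like this; the factory, resource group, linked service name, and all connection values are placeholders.

```azurecli
# Sketch: create an Amazon Redshift version 2.0 linked service.
# All names and connection values are placeholders.
az datafactory linked-service create \
  --resource-group "my-rg" \
  --factory-name "my-adf" \
  --name "AmazonRedshiftLinkedService" \
  --properties '{
    "type": "AmazonRedshift",
    "version": "2.0",
    "typeProperties": {
      "server": "<redshift-cluster-endpoint>",
      "port": 5439,
      "database": "<database-name>",
      "username": "<user>",
      "password": { "type": "SecureString", "value": "<password>" }
    }
  }'
```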
@@ -303,15 +306,15 @@ The following table shows the release stage and change logs for different versio
| Version | Release stage | Change log |
| :----------- | :------- | :------- |
| Version 1.0 | GA version available | / |
-| Version 2.0 | GA version available | • The self-hosted integration runtime version should be 5.60 or above. <br><br>• BOOLEAN is read as Boolean data type. |
+| Version 2.0 | GA version available | • Supports Azure Integration Runtime and Self-hosted Integration Runtime version 5.61 or above. Driver installation is no longer needed with Self-hosted Integration Runtime version 5.61 or above. <br><br>• BOOLEAN is read as Boolean data type. |
### <a name="upgrade-the-amazon-redshift-connector"></a> Upgrade the Amazon Redshift connector from version 1.0 to version 2.0
1. On the **Edit linked service** page, select version 2.0 and configure the linked service by referring to [linked service properties](#linked-service-properties).

2. The data type mapping for the Amazon Redshift linked service version 2.0 is different from that for version 1.0. To learn the latest data type mapping, see [Data type mapping for Amazon Redshift](#data-type-mapping-for-amazon-redshift).

-3. Apply a self-hosted integration runtime with version 5.60 or above.
+3. Apply a self-hosted integration runtime with version 5.61 or above. Driver installation is no longer needed with Self-hosted Integration Runtime version 5.61 or above.
## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
articles/data-factory/connector-hubspot.md (+2 -2)
@@ -17,7 +17,7 @@ ms.custom:
This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from HubSpot. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.

> [!IMPORTANT]
-> The HubSpot connector version 2.0 provides improved native HubSpot support. If you are using HubSpot connector version 1.0 in your solution, [upgrade the HubSpot connector](#upgrade-the-hubspot-connector-from-version-10-to-version-20), as version 1.0 is at the [End of Support stage](connector-release-stages-and-timelines.md). Your pipeline will fail after **November 22, 2025** if not upgraded. Refer to this [section](#hubspot-connector-lifecycle-and-upgrade) for details on the difference between version 2.0 and version 1.0.
+> The HubSpot connector version 1.0 is at the [removal stage](connector-release-stages-and-timelines.md). You are recommended to [upgrade the HubSpot connector](#hubspot-connector-lifecycle-and-upgrade) from version 1.0 to 2.0.
## Supported capabilities
@@ -305,7 +305,7 @@ The following table shows the release stage and change logs for different versio
| Version | Release stage | Change log |
| :----------- | :------- | :------- |
-| Version 1.0 | End of support | / |
+| Version 1.0 | Removed | Not applicable. |
| Version 2.0 | General availability | • The `tableName` value is `<HubSpot Category>.<Sub Category>.<Object Name>`, for example: `CRM.Commerce.Discounts`. <br><br>• date is read as DateTime data type. <br><br>• object is read as String data type. <br><br>• `useEncryptedEndpoints`, `useHostVerification`, `usePeerVerification` are not supported in the linked service. <br><br>• `query` is not supported. <br><br>• Supports specific HubSpot tables. For the supported table list, go to [Dataset properties](#dataset-properties). |
### <a name="upgrade-the-hubspot-connector-from-version-10-to-version-20"></a> Upgrade the HubSpot connector from version 1.0 to version 2.0