Commit 9bd5b49

Merge pull request #310651 from MicrosoftDocs/main
Auto Publish – main to live - 2026-01-20 06:00 UTC
2 parents bbb5767 + a0f7ae8 commit 9bd5b49

6 files changed

Lines changed: 103 additions & 15 deletions

articles/azure-netapp-files/migrate-data.md

Lines changed: 6 additions & 5 deletions
@@ -5,7 +5,7 @@ ms.service: azure-netapp-files
 ms.topic: conceptual
 author: b-ahibbard
 ms.author: anfdocs
-ms.date: 09/04/2025
+ms.date: 01/20/2026
 ---
 # Migrating data into Azure NetApp Files volumes
 

@@ -15,9 +15,10 @@ Azure NetApp Files supports several methods to migrate data. You can migrate dat
 
 The Azure NetApp Files [migration assistant](migrate-volumes.md) feature helps you accelerate and simplify migrations of business-critical applications and data to Azure. Benefits include:
 
-* Efficient and cost-effective data migration leveraging ONTAP's built-in replication engine for seamless transition from on-premises or Cloud Volumes ONTAP storage to Azure NetApp Files.
-* Storage-efficient data transfer that reduces network transfer costs for both baseline and incremental updates.
-* Low cutover/downtime window, ensuring faster and more efficient final updates, thus minimizing disruption to your operations.
+* Efficient, cost-effective migration powered by ONTAP's built-in replication engine for a seamless transition from on-premises or Cloud Volumes ONTAP to Azure NetApp Files.
+* Storage-optimized data migration that lowers network costs for both baseline and incremental updates.
+* Minimal cutover window for faster final syncs, reducing downtime and keeping your business running smoothly.
+* Existing volume snapshots included to safeguard data integrity and reduce risk, delivering a reliable and worry-free migration experience.
 
 To use Azure NetApp Files migration assistant, you need to establish connectivity between your on-premises storage cluster and the target volume in your Azure NetApp Files region of choice. For detailed instructions, see [Migrate volumes to Azure NetApp Files](migrate-volumes.md).
 
@@ -41,4 +42,4 @@ To migrate data from one Azure region to another Azure region, use [Azure NetApp
 * [Migrate volumes to Azure NetApp Files](migrate-volumes.md)
 * [Azure NetApp Files replication](replication.md)
 * [Data migration and protection FAQs for Azure NetApp Files](faq-data-migration-protection.md)
-
+

articles/azure-netapp-files/replication-requirements.md

Lines changed: 1 addition & 3 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 01/08/2026
+ms.date: 01/20/2026
 ms.author: anfdocs
 ms.custom: references_regions
 ---
@@ -70,8 +70,6 @@ If you use [cross-zone-region replication](replication.md#cross-zone-region-repl
 
 * You can revert a source or destination volume of a cross-region replication to a snapshot if the snapshot is newer than the most recent SnapMirror snapshot. You can't use snapshots that are older than the SnapMirror snapshot for a volume revert operation. For more information, see [Revert a volume by using snapshot revert](snapshots-revert-volume.md).
 
-* You can revert a source or destination volume of a cross-region replication to a snapshot if the snapshot is newer than the most recent SnapMirror snapshot. You can't use snapshots that are older than the SnapMirror snapshot for a volume revert operation. For more information, see [Revert a volume by using snapshot revert](snapshots-revert-volume.md).
-
 * If you copy large datasets into a volume that has cross-region replication enabled and you have spare capacity in the capacity pool, you should set the replication interval to 10 minutes, increase the volume size to allow for the changes to be stored, and temporarily disable replication.
 
 * If you use the cool access feature, understand the considerations in [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations).
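The revert rule described in the first bullet reduces to a simple timestamp comparison. The sketch below is purely illustrative (not Azure NetApp Files API code); the function and variable names are invented:

```python
from datetime import datetime

def can_revert(candidate_snapshot_time, latest_snapmirror_time):
    """A snapshot is usable for a volume revert only if it is newer than
    the most recent SnapMirror snapshot (per the rule above)."""
    return candidate_snapshot_time > latest_snapmirror_time

snapmirror = datetime(2026, 1, 19, 0, 0)
print(can_revert(datetime(2026, 1, 19, 12, 0), snapmirror))  # → True
print(can_revert(datetime(2026, 1, 18, 12, 0), snapmirror))  # → False
```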

articles/backup/quick-backup-postgresql-flexible-server-terraform.md

Lines changed: 4 additions & 4 deletions
@@ -31,7 +31,7 @@ Before you configure backup for Azure Database for PostgreSQL - Flexible Server,
 >[!Note]
 >Terraform only supports authenticating to Azure with the Azure CLI. Authenticating using Azure PowerShell isn't supported. Therefore, while you can use the Azure PowerShell module when doing your Terraform work, you first need to authenticate to Azure.
 
-## Implement the Terraform code
+## Implement the Terraform code for PostgreSQL Flexible Server backup configuration
 
 > [!NOTE]
 > See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
@@ -227,15 +227,15 @@ variable "retention_duration_in_months" {
 ```
 
 
-## Initialize Terraform
+## Initialize Terraform for PostgreSQL Flexible Server backup
 
 [!INCLUDE [terraform-init.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-init.md)]
 
-## Create a Terraform execution plan
+## Create a Terraform execution plan for PostgreSQL Flexible Server backup
 
 [!INCLUDE [terraform-plan.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-plan.md)]
 
-## Apply a Terraform execution plan
+## Apply a Terraform execution plan for PostgreSQL Flexible Server backup
 
 [!INCLUDE [terraform-apply-plan.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-apply-plan.md)]
241241

articles/data-factory/connector-amazon-rds-for-sql-server.md

Lines changed: 46 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.author: jianleishen
 author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
-ms.date: 06/17/2024
+ms.date: 12/31/2025
 ms.custom:
 - synapse
 - sfi-image-nochange
@@ -495,6 +495,51 @@ If the table has physical partition, you would see "HasPartition" as "yes" like
 
 :::image type="content" source="./media/connector-azure-sql-database/sql-query-result.png" alt-text="Sql query result":::
 
+## Data type mapping for Amazon RDS for SQL Server
+
+When copying data from Amazon RDS for SQL Server, the following mappings are used from Amazon RDS for SQL Server data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn how copy activity maps the source schema and data type to the sink.
+
+| Amazon RDS for SQL Server data type | Interim service data type |
+| ------ | ------ |
+| bigint | Int64 |
+| binary | Byte[] |
+| bit | Boolean |
+| char | String, Char[] |
+| date | DateTime |
+| datetime | DateTime |
+| datetime2 | DateTime |
+| datetimeoffset | DateTimeOffset |
+| decimal | Decimal |
+| FILESTREAM attribute (varbinary(max)) | Byte[] |
+| float | Double |
+| image | Byte[] |
+| int | Int32 |
+| money | Decimal |
+| nchar | String, Char[] |
+| ntext | String, Char[] |
+| numeric | Decimal |
+| nvarchar | String, Char[] |
+| real | Single |
+| rowversion | Byte[] |
+| smalldatetime | DateTime |
+| smallint | Int16 |
+| smallmoney | Decimal |
+| sql_variant | Object |
+| text | String, Char[] |
+| time | TimeSpan |
+| timestamp | Byte[] |
+| tinyint | Int16 |
+| uniqueidentifier | Guid |
+| varbinary | Byte[] |
+| varchar | String, Char[] |
+| xml | String |
+
+>[!NOTE]
+> For data types that map to the Decimal interim type, the Copy activity currently supports precision up to 28. If your data requires precision greater than 28, consider converting it to a string in a SQL query.
+>
+> When copying data from Amazon RDS for SQL Server using Azure Data Factory, the bit data type is mapped to the Boolean interim data type. If you have data that needs to be kept as the bit data type, use queries with [T-SQL CAST or CONVERT](/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15&preserve-view=true).
+
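The added mapping and its Decimal caveat can be sketched as a simple lookup table. This is a minimal, hypothetical illustration (not Azure Data Factory code); the function name and return strings are invented:

```python
# Hypothetical sketch: a subset of the documented type mapping as a lookup
# table, plus the precision-28 limit noted for the Decimal interim type.
SQL_TO_INTERIM = {
    "bigint": "Int64",
    "bit": "Boolean",
    "decimal": "Decimal",
    "float": "Double",
    "int": "Int32",
    "money": "Decimal",
    "numeric": "Decimal",
    "smallint": "Int16",
    "tinyint": "Int16",
    "uniqueidentifier": "Guid",
    "varchar": "String",
    "xml": "String",
}

MAX_DECIMAL_PRECISION = 28  # Copy activity's documented limit for Decimal

def interim_type(sql_type, precision=None):
    """Return the interim type; flag Decimal columns whose precision exceeds 28."""
    interim = SQL_TO_INTERIM[sql_type.lower()]
    if interim == "Decimal" and precision is not None and precision > MAX_DECIMAL_PRECISION:
        # Per the note above: convert to a string in the source SQL query instead.
        return "String (cast in source query)"
    return interim

print(interim_type("numeric", precision=38))  # → String (cast in source query)
print(interim_type("bigint"))                 # → Int64
```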
 ## Lookup activity properties
 
 To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).

articles/data-factory/connector-azure-cosmos-db-mongodb-api.md

Lines changed: 23 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.author: jianleishen
 author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
-ms.date: 01/05/2024
+ms.date: 12/25/2025
 ms.custom:
 - synapse
 - sfi-image-nochange
@@ -266,6 +266,28 @@ After copy activity execution, below BSON ObjectId is generated in sink:
 }
 ```
 
+## Data type mapping for Azure Cosmos DB for MongoDB
+
+When copying data from Azure Cosmos DB for MongoDB, the following mappings are used from Azure Cosmos DB for MongoDB data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn how copy activity maps the source schema and data type to the sink.
+
+| Azure Cosmos DB for MongoDB data type | Interim service data type |
+| ------ | ------ |
+| Date | Int64 |
+| ObjectId | String |
+| Decimal128 | String |
+| TimeStamp | The most significant 32 bits -> Int64<br>The least significant 32 bits -> Int64 |
+| String | String |
+| Double | Double |
+| Int32 | Int64 |
+| Int64 | Int64 |
+| Boolean | Boolean |
+| Null | Null |
+| JavaScript | String |
+| Regular Expression | String |
+| Min key | Int64 |
+| Max key | Int64 |
+| Binary | String |
+
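The TimeStamp row in the added table splits one 64-bit BSON timestamp into two 32-bit halves. A minimal sketch of that split (the function name is invented; in BSON, the high half holds seconds since the epoch and the low half an increment counter):

```python
def split_bson_timestamp(ts64):
    """Split a 64-bit BSON timestamp into its two 32-bit halves."""
    high = (ts64 >> 32) & 0xFFFFFFFF  # most significant 32 bits (seconds since epoch)
    low = ts64 & 0xFFFFFFFF           # least significant 32 bits (increment counter)
    return high, low

# 1700000000 seconds with increment 7, packed into one 64-bit value:
packed = (1700000000 << 32) | 7
print(split_bson_timestamp(packed))  # → (1700000000, 7)
```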
 ## Related content
 
 For a list of data stores that Copy Activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).

articles/data-factory/connector-mongodb-atlas.md

Lines changed: 23 additions & 1 deletion
@@ -6,7 +6,7 @@ author: jianleishen
 ms.author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
-ms.date: 09/20/2023
+ms.date: 12/25/2025
 ms.custom:
 - synapse
 - sfi-image-nochange
@@ -247,5 +247,27 @@ To achieve such schema-agnostic copy, skip the "structure" (also called *schema*
 
 To copy data from MongoDB Atlas to tabular sink or reversed, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
 
+## Data type mapping for MongoDB Atlas
+
+When copying data from MongoDB Atlas, the following mappings are used from MongoDB Atlas data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn how copy activity maps the source schema and data type to the sink.
+
+| MongoDB Atlas data type | Interim service data type |
+| ------ | ------ |
+| Date | String |
+| ObjectId | String |
+| Decimal128 | String |
+| TimeStamp | The most significant 32 bits -> Int64<br>The least significant 32 bits -> Int64 |
+| String | String |
+| Double | String |
+| Int32 | String |
+| Int64 | String |
+| Boolean | Boolean |
+| Null | Null |
+| JavaScript | String |
+| Regular Expression | String |
+| Min key | Int64 |
+| Max key | Int64 |
+| Binary | String |
+
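Because the added table surfaces Double, Int32, and Int64 values as the String interim type, a pipeline landing MongoDB Atlas data in a tabular sink typically casts them back. A minimal, hypothetical sketch (the field names and function are invented for illustration, not ADF code):

```python
def coerce_row(raw_row):
    """Cast string-typed interim values back to numbers for a tabular sink."""
    return {
        "order_id": raw_row["order_id"],             # ObjectId arrives as String; keep as-is
        "quantity": int(raw_row["quantity"]),        # Int32 arrives as String
        "unit_price": float(raw_row["unit_price"]),  # Double arrives as String
    }

row = coerce_row({"order_id": "65f0c0ffee", "quantity": "3", "unit_price": "19.99"})
print(row)  # → {'order_id': '65f0c0ffee', 'quantity': 3, 'unit_price': 19.99}
```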
 ## Related content
 For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
