articles/azure-netapp-files/migrate-data.md (6 additions & 5 deletions)

@@ -5,7 +5,7 @@ ms.service: azure-netapp-files
 ms.topic: conceptual
 author: b-ahibbard
 ms.author: anfdocs
-ms.date: 09/04/2025
+ms.date: 01/20/2026
 ---
 # Migrating data into Azure NetApp Files volumes

@@ -15,9 +15,10 @@ Azure NetApp Files supports several methods to migrate data. You can migrate dat

 The Azure NetApp Files [migration assistant](migrate-volumes.md) feature helps you accelerate and simplify migrations of business-critical applications and data to Azure. Benefits include:

-* Efficient and cost-effective data migration leveraging ONTAP's built-in replication engine for seamless transition from on-premises or Cloud Volumes ONTAP storage to Azure NetApp Files.
-* Storage-efficient data transfer that reduces network transfer costs for both baseline and incremental updates.
-* Low cutover/downtime window, ensuring faster and more efficient final updates, thus minimizing disruption to your operations.
+* Efficient, cost-effective migration powered by ONTAP’s built-in replication engine for a seamless transition from on-premises or Cloud Volumes ONTAP to Azure NetApp Files.
+* Storage-optimized data migration that lowers network costs for both baseline and incremental updates.
+* Minimal cutover window for faster final syncs, reducing downtime and keeping your business running smoothly.
+* Existing volume snapshots included to safeguard data integrity and reduce risk, delivering a reliable and worry-free migration experience.

 To use Azure NetApp Files migration assistant, you need to establish connectivity between your on-premises storage cluster and the target volume in your Azure NetApp Files region of choice. For detailed instructions, see [Migrate volumes to Azure NetApp Files](migrate-volumes.md).

@@ -41,4 +42,4 @@ To migrate data from one Azure region to another Azure region, use [Azure NetApp
 *[Migrate volumes to Azure NetApp Files](migrate-volumes.md)
 *[Azure NetApp Files replication](replication.md)
 *[Data migration and protection FAQs for Azure NetApp Files](faq-data-migration-protection.md)
articles/azure-netapp-files/replication-requirements.md (1 addition & 3 deletions)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 01/08/2026
+ms.date: 01/20/2026
 ms.author: anfdocs
 ms.custom: references_regions
 ---

@@ -70,8 +70,6 @@ If you use [cross-zone-region replication](replication.md#cross-zone-region-repl

 * You can revert a source or destination volume of a cross-region replication to a snapshot if the snapshot is newer than the most recent SnapMirror snapshot. You can't use snapshots that are older than the SnapMirror snapshot for a volume revert operation. For more information, see [Revert a volume by using snapshot revert](snapshots-revert-volume.md).

-* You can revert a source or destination volume of a cross-region replication to a snapshot if the snapshot is newer than the most recent SnapMirror snapshot. You can't use snapshots that are older than the SnapMirror snapshot for a volume revert operation. For more information, see [Revert a volume by using snapshot revert](snapshots-revert-volume.md).
-
 * If you copy large datasets into a volume that has cross-region replication enabled and you have spare capacity in the capacity pool, you should set the replication interval to 10 minutes, increase the volume size to allow for the changes to be stored, and temporarily disable replication.

 * If you use the cool access feature, understand the considerations in [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations).
articles/backup/quick-backup-postgresql-flexible-server-terraform.md (4 additions & 4 deletions)

@@ -31,7 +31,7 @@ Before you configure backup for Azure Database for PostgreSQL - Flexible Server,
 >[!Note]
 >Terraform only supports authenticating to Azure with the Azure CLI. Authenticating using Azure PowerShell isn't supported. Therefore, while you can use the Azure PowerShell module when doing your Terraform work, you first need to authenticate to Azure.

-## Implement the Terraform code
+## Implement the Terraform code for PostgreSQL Flexible Server backup configuration

 > [!NOTE]
 > See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
+## Data type mapping for Amazon RDS for SQL Server
+
+When copying data from Amazon RDS for SQL Server, the following mappings are used from Amazon RDS for SQL Server data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how the copy activity maps the source schema and data type to the sink.
+
+| Amazon RDS for SQL Server data type | Interim service data type |
+| ------ | ------ |
+| bigint | Int64 |
+| binary | Byte[] |
+| bit | Boolean |
+| char | String, Char[] |
+| date | DateTime |
+| datetime | DateTime |
+| datetime2 | DateTime |
+| datetimeoffset | DateTimeOffset |
+| decimal | Decimal |
+| FILESTREAM attribute (varbinary(max)) | Byte[] |
+| float | Double |
+| image | Byte[] |
+| int | Int32 |
+| money | Decimal |
+| nchar | String, Char[] |
+| ntext | String, Char[] |
+| numeric | Decimal |
+| nvarchar | String, Char[] |
+| real | Single |
+| rowversion | Byte[] |
+| smalldatetime | DateTime |
+| smallint | Int16 |
+| smallmoney | Decimal |
+| sql_variant | Object |
+| text | String, Char[] |
+| time | TimeSpan |
+| timestamp | Byte[] |
+| tinyint | Int16 |
+| uniqueidentifier | Guid |
+| varbinary | Byte[] |
+| varchar | String, Char[] |
+| xml | String |
+
+>[!NOTE]
+> For data types that map to the Decimal interim type, the Copy activity currently supports precision up to 28. If you have data that requires precision larger than 28, consider converting it to a string in a SQL query.
+>
+> When copying data from Amazon RDS for SQL Server using Azure Data Factory, the bit data type is mapped to the Boolean interim data type. If you have data that needs to be kept as the bit data type, use queries with [T-SQL CAST or CONVERT](/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15&preserve-view=true).
+
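The precision limit described in the note above can be illustrated with a short Python sketch (purely illustrative; the 32-digit value is made up): rounding a decimal to a 28-digit context loses information, while round-tripping the value through a string does not, which is why the note recommends converting high-precision values to strings in the source query.

```python
from decimal import Decimal, localcontext

# A made-up value with 32 significant digits, more than the interim
# Decimal type's supported precision of 28.
value = Decimal("1234567890123456789012345678901.5")

with localcontext() as ctx:
    ctx.prec = 28
    rounded = +value  # unary plus applies the context's precision, rounding the value

# Rounding to 28 digits loses information; a string round-trip is lossless.
assert rounded != value
assert Decimal(str(value)) == value
```

This is why casting such columns to `varchar` at the source preserves the exact digits through the copy.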
 ## Lookup activity properties

 To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
articles/data-factory/connector-azure-cosmos-db-mongodb-api.md (23 additions & 1 deletion)

@@ -6,7 +6,7 @@ ms.author: jianleishen
 author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
-ms.date: 01/05/2024
+ms.date: 12/25/2025
 ms.custom:
 - synapse
 - sfi-image-nochange

@@ -266,6 +266,28 @@ After copy activity execution, below BSON ObjectId is generated in sink:
 }
 ```

+## Data type mapping for Azure Cosmos DB for MongoDB
+
+When copying data from Azure Cosmos DB for MongoDB, the following mappings are used from Azure Cosmos DB for MongoDB data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how the copy activity maps the source schema and data type to the sink.
+
+| Azure Cosmos DB for MongoDB data type | Interim service data type |
+| ------ | ------ |
+| Date | Int64 |
+| ObjectId | String |
+| Decimal128 | String |
+| TimeStamp | The most significant 32 bits -> Int64<br>The least significant 32 bits -> Int64 |
+| String | String |
+| Double | Double |
+| Int32 | Int64 |
+| Int64 | Int64 |
+| Boolean | Boolean |
+| Null | Null |
+| JavaScript | String |
+| Regular Expression | String |
+| Min key | Int64 |
+| Max key | Int64 |
+| Binary | String |
+
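The TimeStamp row in the table above exposes the two 32-bit halves of a BSON timestamp as separate Int64 values. A minimal Python sketch of that split, using a made-up packed value (the function name is illustrative, not part of the service):

```python
# A BSON Timestamp is a single 64-bit value: the most significant 32 bits
# hold seconds since the epoch, the least significant 32 bits an ordinal
# increment. The mapping table surfaces each half as its own Int64.

def split_bson_timestamp(packed: int) -> tuple[int, int]:
    """Return (most significant 32 bits, least significant 32 bits)."""
    return packed >> 32, packed & 0xFFFFFFFF

# Hypothetical packed timestamp: 1,700,000,000 seconds, increment 7.
packed = (1_700_000_000 << 32) | 7
seconds, increment = split_bson_timestamp(packed)
assert seconds == 1_700_000_000
assert increment == 7
```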
 
 ## Related content

 For a list of data stores that Copy Activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
articles/data-factory/connector-mongodb-atlas.md (23 additions & 1 deletion)

@@ -6,7 +6,7 @@ author: jianleishen
 ms.author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
-ms.date: 09/20/2023
+ms.date: 12/25/2025
 ms.custom:
 - synapse
 - sfi-image-nochange

@@ -247,5 +247,27 @@ To achieve such schema-agnostic copy, skip the "structure" (also called *schema*

 To copy data from MongoDB Atlas to tabular sink or reversed, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).

+## Data type mapping for MongoDB Atlas
+
+When copying data from MongoDB Atlas, the following mappings are used from MongoDB Atlas data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how the copy activity maps the source schema and data type to the sink.
+
+| MongoDB Atlas data type | Interim service data type |
+| ------ | ------ |
+| Date | String |
+| ObjectId | String |
+| Decimal128 | String |
+| TimeStamp | The most significant 32 bits -> Int64<br>The least significant 32 bits -> Int64 |
+| String | String |
+| Double | String |
+| Int32 | String |
+| Int64 | String |
+| Boolean | Boolean |
+| Null | Null |
+| JavaScript | String |
+| Regular Expression | String |
+| Min key | Int64 |
+| Max key | Int64 |
+| Binary | String |
+
 ## Related content
 For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).