
Commit 4eeab14

Merge pull request #306649 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-08 11:00 UTC
2 parents bf10a06 + 6930ac0 commit 4eeab14

12 files changed

Lines changed: 252 additions & 24 deletions

articles/backup/scripts/backup-powershell-script-find-recovery-services-vault.md

Lines changed: 8 additions & 6 deletions
@@ -2,7 +2,7 @@
 title: PowerShell Script - find Vault for Storage Account
 description: Learn how to use an Azure PowerShell script to find the Recovery Services vault where your storage account is registered.
 ms.topic: sample
-ms.date: 10/20/2024
+ms.date: 10/08/2025
 ms.service: azure-backup
 ms.custom: devx-track-azurepowershell
 author: AbhishekMallick-MS
@@ -14,7 +14,7 @@ ms.author: v-mallicka
 
 This script helps you to find the Recovery Services vault where your storage account is registered.
 
-## Sample script
+## Sample script to find the Recovery Services vault
 
 ```powershell
 Param(
@@ -47,9 +47,11 @@ if(!$found)
 }
 ```
 
-## How to execute the script
+## Execute the script to find the Recovery Services vault
 
-1. Save the script above on your machine with a name of your choice. In this example, we saved it as *FindRegisteredStorageAccount.ps1*.
+To execute the script for finding the Recovery Services vault where your storage account is registered, follow these steps:
+
+1. Save the preceding script on your machine with a name of your choice. In this example, we saved it as *FindRegisteredStorageAccount.ps1*.
 2. Execute the script by providing the following parameters:
 
 * **-ResourceGroupName** - Resource Group of the storage account
@@ -62,9 +64,9 @@ The following example tries to find the Recovery Services vault where the *afsac
 .\FindRegisteredStorageAccount.ps1 -ResourceGroupName AzureFiles -StorageAccountName afsaccount -SubscriptionId aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e
 ```
 
-## Output
+## Output of the script
 
-The output will display the complete path of the Recovery Services vault where the storage account is registered. Here is a sample output:
+The output shows the complete path of the Recovery Services vault where the storage account is registered. Here's a sample output:
 
 ```output
 Found Storage account afsaccount registered in vault: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault123

articles/migrate/tutorial-discover-mysql-database-instances.md

Lines changed: 51 additions & 2 deletions
@@ -11,7 +11,7 @@ monikerRange:
 # Customer intent: As a database administrator, I want to discover MySQL database instances in my datacenter using an agentless solution, so that I can assess and manage my databases efficiently before migrating to the cloud.
 ---
 
-# Tutorial: Discover MySQL database instances running in your datacenter (preview)
+# Discover MySQL database instances running in your datacenter (preview)
 
 
 This article describes how to discover MySQL database instances running on servers in your datacenter, using **Azure Migrate appliance**. The discovery process is agentless; no agents are installed on the target servers.
@@ -53,7 +53,7 @@ The following table lists the regions that support MySQL Discovery and Assessmen
 
 1. Open the appliance configuration manager, complete the prerequisite checks and registration of the appliance.
 2. Navigate to the Manage credentials and discovery sources panel.
-1. In Step 3: Select **MySQL authentication** credential type, provide a friendly name, input the MySQL username, and password and select **Save**.
+3. In Step 3: Select the **MySQL authentication** credential type, provide a friendly name, input the MySQL username and password, and select **Save**.
 
 > [!NOTE]
 > - Ensure that the user corresponding to the added MySQL credentials have the following privileges:
@@ -69,6 +69,55 @@ The following table lists the regions that support MySQL Discovery and Assessmen
 > GRANT SELECT ON information_schema.* TO 'username'@'ip';
 > GRANT SELECT ON performance_schema.* TO 'username'@'ip';
 
+To enable Discovery and Assessment in Azure Migrate, you can create a custom MySQL user account with the minimum required permissions. Use the following script to create the account and grant access from the appliance machine. The account that runs the script needs:
+- CREATE USER privilege → to create the new user.
+- GRANT OPTION privilege → to grant privileges to the new user.
+- SELECT on mysql.user → required for the existence check.
+- PROCESS privilege → if you want to verify process-related grants after creation.
+
+```
+
+-- MySQL Script to Create a Least-Privilege User for Azure Migrate
+-- Replace @username, @password, and @ip with actual values before execution.
+
+SET @username = 'your_username';
+SET @password = 'your_password';
+SET @ip = 'your_appliance_ip';
+
+-- Check if the user already exists
+SELECT CASE
+    WHEN EXISTS (SELECT 1 FROM mysql.user WHERE user = @username AND host = @ip)
+    THEN CONCAT('User ', @username, '@', @ip, ' already exists, skipping creation')
+    ELSE
+    CONCAT('User ', @username, '@', @ip, ' does not exist, proceeding with creation')
+END AS user_check;
+
+-- Create the user if not exists
+CREATE USER IF NOT EXISTS @username@'@ip' IDENTIFIED BY @password;
+
+-- Grant minimal required privileges
+GRANT USAGE ON *.* TO @username@'@ip';
+GRANT PROCESS ON *.* TO @username@'@ip';
+
+-- Grant SELECT on specific columns in mysql.user
+GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv)
+ON mysql.user TO @username@'@ip';
+
+-- Grant SELECT on information_schema and performance_schema
+GRANT SELECT ON information_schema.* TO @username@'@ip';
+GRANT SELECT ON performance_schema.* TO @username@'@ip';
+
+-- Apply changes
+FLUSH PRIVILEGES;
+
+-- Log success
+SELECT CONCAT('Azure Migrate user ', @username, '@', @ip, ' created successfully with least privileges.') AS result;
+```
+Execute the script using the following command through your MySQL client.
+```
+mysql -u root -p -e "SET @username='myuser'; SET @password='mypassword'; SET @ip='appliance_ip'; SOURCE CreateUser.sql;"
+```
+
 You can review the discovered MySQL databases after around 24 hours of discovery initiation, through the **Discovered servers** view. To expedite the discovery of your MySQL instances follow the steps:
 
 - After adding the MySQL credentials on the appliance configuration manager restart the discovery services on appliance.
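One caveat with the user-creation script this commit adds: MySQL doesn't expand user variables such as `@username` inside `CREATE USER` or `GRANT` statements, so the `SET`-based placeholders may not substitute as written. A minimal shell sketch of the same least-privilege setup with the values substituted client-side instead (the user name, password, and appliance IP below are hypothetical placeholders, not values from the article):

```shell
# Hypothetical placeholder values -- replace with your own before running.
MYSQL_USER="azmigrate"
MYSQL_PASS="ChangeMe_123"
APPLIANCE_IP="10.0.0.5"

# Build the SQL with literal values, since SET @username = ... is not
# expanded inside CREATE USER / GRANT statements.
SQL=$(cat <<EOF
CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'${APPLIANCE_IP}' IDENTIFIED BY '${MYSQL_PASS}';
GRANT USAGE, PROCESS ON *.* TO '${MYSQL_USER}'@'${APPLIANCE_IP}';
GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv)
  ON mysql.user TO '${MYSQL_USER}'@'${APPLIANCE_IP}';
GRANT SELECT ON information_schema.* TO '${MYSQL_USER}'@'${APPLIANCE_IP}';
GRANT SELECT ON performance_schema.* TO '${MYSQL_USER}'@'${APPLIANCE_IP}';
FLUSH PRIVILEGES;
EOF
)

# Review the generated SQL, then pipe it to the server, for example:
#   printf '%s\n' "$SQL" | mysql -u root -p
printf '%s\n' "$SQL"
```

This is a sketch under the stated assumptions, not the documented procedure; adjust the grant list to your security policy.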

articles/migrate/vm-assessment-properties.md

Lines changed: 3 additions & 3 deletions
@@ -20,8 +20,8 @@ This section describes the components that are part of an assessment.
 
 | **Setting Category** | **Setting** | **Details** |
 |-------------------|---------|-------- |
-| **Target settings** | **Target VM series** | The Azure VM series that you want to consider for rightsizing. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series. The availability of VM series depends on the target location selected. [Learn more](/azure/virtual-machines/sizes/overview?branch=main&branchFallbackFrom=release-migrate-new-structure&tabs=breakdownseries,generalsizelist,computesizelist,memorysizelist,storagesizelist,gpusizelist,fpgasizelist,hpcsizelist). |
-| **Target settings** | **Target storage disk** | Specifies the type of target storage disk as Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk. <br> **Premium or Standard or Ultra disk**: The assessment recommends a disk SKU within the storage type selected. <br>If you want a single-instance VM service level agreement (SLA) of 99.9%, consider using Premium-managed disks. This ensures that all disks are recommended as Premium-managed disks. <br> If you're looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks. <br> Azure Migrate supports only Managed disks for migration assessment. |
+| **Target settings** | **Target VM series** | The Azure VM series that you want to consider for rightsizing. B-series VM families aren't selected by default when the _Production_ environment type is selected in general settings. If you want to assess the VMs in the assessment scope for burstable VM targets, add them explicitly. [Learn more](https://learn.microsoft.com/azure/virtual-machines/sizes/general-purpose/b-family). Cobalt 100 VMs are recommended only when the on-premises VMs are on Arm64 CPU architecture. [Learn more](https://learn.microsoft.com/azure/virtual-machines/sizes/cobalt-overview). For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series. The availability of VM series depends on the target location selected. [Learn more](/azure/virtual-machines/sizes/overview?branch=main&branchFallbackFrom=release-migrate-new-structure&tabs=breakdownseries,generalsizelist,computesizelist,memorysizelist,storagesizelist,gpusizelist,fpgasizelist,hpcsizelist). |
+| **Target settings** | **Target storage disk** | Specifies the type of target storage disk as Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk. If you want a single-instance VM service level agreement (SLA) of 99.9%, consider using Premium-managed disks. This ensures that all disks are recommended as Premium-managed disks. <br> If you're looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks. <br> Azure Migrate supports only Managed disks for migration assessment. |
 | **Right-Sizing** | **Sizing criteria** | This attribute is used for right-sizing the target recommendations. <br> Use **as-is on-premises** sizing if you don't want to right size the targets and identify the targets according to your configuration for on-premises workloads. Use **performance-based** sizing to calculate compute recommendation based on CPU and memory utilization data and storage recommendation based on the input/output operations per second (IOPS) and throughput of the on-premises disks. |
 | | **VM uptime** | The duration in days per month and hours per day for Azure VMs that won't run continuously. Cost estimates are based on that duration. The default values are 31 days per month and 24 hours per day. |
 | | **Azure Hybrid Benefit**| Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) to use your existing OS licenses. For Azure VM assessments, you can bring in both Windows and Linux licenses. If the setting is enabled, Azure prices for selected operating systems aren't considered for VM costing. |
@@ -31,4 +31,4 @@ This section describes the components that are part of an assessment.
 > * In Azure Government, it is recommended to review the [supported target](supported-geographies.md) assessment locations. VM size recommendations in assessments will use the VM series specifically designed for Government Cloud regions. [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&products=virtual-machines).
 
 ## Next Steps
-[Review](best-practices-assessment.md) the best practices for creating an assessment with Azure Migrate.
+[Review](best-practices-assessment.md) the best practices for creating an assessment with Azure Migrate.

articles/sentinel/billing.md

Lines changed: 3 additions & 6 deletions
@@ -67,25 +67,22 @@ There are two ways to pay for the analytics tier: **Pay-As-You-Go** and **Commit
 
 #### Data lake tier
 
-Microsoft Sentinel data lake tier is a cost-effective option for ingesting high volume, low fidelity data. They're charged at a flat, low rate per gigabyte (GB). The data lake tier provides querying and jobs scheduling capabilities and, once enabled, mirrors all eligible data available in the analytics tier.
-
-For more information, see [Microsoft Sentinel data lake](datalake/sentinel-lake-overview.md)
+To learn more about the Microsoft Sentinel data lake, see [Microsoft Sentinel data lake](datalake/sentinel-lake-overview.md).
 
 The data lake tier incurs charges based on usage of various data lake capabilities.
 - **Data lake ingestion** is charged per GB for all data ingested into tables with retention set to data lake tier only. Data lake ingestion charges don't apply when data is ingested into tables with retention set to include both analytic and data lake tiers.
 - **Data processing** is charged per GB for data ingested into tables with retention set to data lake tier only. It supports transformations like redaction, splitting, filtering, and normalization. Data processing charges don't apply when data is ingested into tables with retention set to include both analytic and data lake tiers.
-- **Data lake storage** charges are applied per GB per month for any data that remains in the data lake tier after the analytic tier retention period ends. Charges are based on data compressed at a 6X rate. For example, if you retain 600 GB of raw data, it's billed as 100 GB of compressed data.
+- **Data lake storage** charges are applied per GB per month for any data that remains in the data lake tier after the analytic tier retention period ends. Charges are based on a simple and uniform data compression rate of 6:1. For example, if you retain 600 GB of raw data, it's billed as 100 GB of compressed data.
 - **Data lake query** charges apply per GB of uncompressed data analyzed using Kusto Query Language (KQL) queries or KQL jobs.
 - **Advanced data insights** charges apply per compute hour used when using data lake exploration notebook sessions or running data lake exploration notebook jobs. Compute hours are calculated by multiplying the number of cores in the pool selected for the notebook with the amount of time a session was active or a job was running. Data lake notebook sessions and jobs are available in pools of four, eight, and 16 cores.
 
 Once onboarded, usage from Microsoft Sentinel workspaces begins to be billed through the previously described meters rather than existing long-term retention (formerly known as Archive), search, or auxiliary logs ingestion meters.
 
 > [!IMPORTANT]
-> Existing Microsoft Sentinel customers currently using and billed for auxiliary logs ingestion, long-term retention, and search will see charges transition to the new data lake ingestion, data lake storage, and data lake query meters, respectively once they onboard to Microsoft Sentinel data lake. Pricing from previous meters doesn't carry over. For more information on pricing, see [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+> Existing Microsoft Sentinel customers currently using and billed for auxiliary logs ingestion, long-term retention, and search will see charges transition to the new data lake ingestion, data lake storage, and data lake query meters respectively, once they onboard to Microsoft Sentinel data lake. Pricing from previous meters doesn't carry over. For more information on pricing, see [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
 
 For customers that haven't onboarded to Microsoft Sentinel data lake and are currently using auxiliary or basic logs, see [Manage data retention in a Log Analytics workspace](/azure/azure-monitor/logs/data-retention-archive) and [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for relevant information.
 
-
 ### Simplified pricing tiers
 
 Simplified pricing tiers combine the data analysis costs for Microsoft Sentinel and ingestion storage costs of Log Analytics into a single pricing tier. The following screenshot shows the simplified pricing tier that all new workspaces use.
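The storage-compression and notebook compute-hour rules described in this file's diff reduce to simple arithmetic; a quick sketch (sizes and pool choices are illustrative examples, not Microsoft Sentinel list prices):

```shell
# Data lake storage is billed on compressed size, at a uniform 6:1 ratio.
raw_gb=600
billed_gb=$((raw_gb / 6))
echo "Storage billed: ${billed_gb} GB"    # 600 GB raw is billed as 100 GB

# Advanced data insights: compute hours = pool cores x hours active.
cores=8            # notebook pools come in 4, 8, or 16 cores
hours_active=2
compute_hours=$((cores * hours_active))
echo "Compute hours: ${compute_hours}"    # an 8-core pool active for 2 hours
```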

articles/sentinel/includes/sap-agentless-prerequisites.md

Lines changed: 6 additions & 3 deletions
@@ -9,8 +9,9 @@ ms.topic: include
 **To run the tool**:
 
 1. Open the integration package, navigate to the artifacts tab, and select the **Prerequisite checker** iflow > **Configure**.
-1. Set the target RFC destination to the SAP system you want to check.
-1. Deploy the iflow as you would otherwise for your SAP systems. For example, use the following sample PowerShell script, modifying the sample placeholder values for your environment:
+1. Set the target destination name for the remote function call (RFC) to the SAP system you want to check. For example, `A4H-100-Sentinel-RFC`.
+1. Deploy the iflow as you would otherwise for your SAP systems.
+1. Trigger the iflow from any REST client. For example, use the following sample PowerShell script, modifying the sample placeholder values for your environment:
 
 ```powershell
 $cpiEndpoint = "https://my-cpi-uri.it-cpi012-rt.cfapps.eu01-010.hana.ondemand.com" # CPI endpoint URL
@@ -37,4 +38,6 @@ ms.topic: include
 Write-Host $response.RawContent
 ```
 
-Make sure that the prerequisites checker runs successfully before connecting to Microsoft Sentinel.
+Make sure that the prerequisites checker runs successfully (status code 200) with no warnings in the response output before connecting to Microsoft Sentinel.
+
+If there are any findings, consult the response details for guidance on remediation steps. Legacy SAP systems often require extra SAP notes. Also see the [troubleshooting section](../sap/sap-deploy-troubleshoot.md) for common issues and resolutions.
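The success check the diff adds (status code 200, no warnings) can be scripted from any REST client, not just PowerShell. A hedged curl sketch; the iflow endpoint path and the `CPI_USER`/`CPI_PASS` credential variables are assumptions, not values from the package:

```shell
# Hypothetical endpoint; use the address of your deployed Prerequisite checker iflow.
cpiEndpoint="https://my-cpi-uri.it-cpi012-rt.cfapps.eu01-010.hana.ondemand.com"
iflowPath="/http/prerequisitechecker"   # assumed path, not from the package

# Interpret the HTTP status code returned by the iflow trigger.
check_status() {
  if [ "$1" = "200" ]; then
    echo "Prerequisite checks passed"
  else
    echo "Prerequisite checks failed with status $1" >&2
    return 1
  fi
}

# Real invocation (requires network access and CPI credentials):
#   status=$(curl -s -o response.json -w '%{http_code}' -u "$CPI_USER:$CPI_PASS" \
#     -X POST "${cpiEndpoint}${iflowPath}")
#   check_status "$status"

check_status 200   # demo with a successful status code
```

Even with a 200 status, inspect the response body for warnings before proceeding, as the include text above advises.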

articles/sentinel/sap/preparing-sap.md

Lines changed: 2 additions & 2 deletions
@@ -203,11 +203,11 @@ This procedure has steps both in Microsoft Sentinel and your SAP system, and req
 1. Download the [integration package](https://aka.ms/SAPAgentlessPackage) and upload it to your SAP Integration Suite. For more information, see the [SAP documentation](https://help.sap.com/docs/integration-suite/sap-integration-suite/importing-integration-packages).
 1. Open the package and go to the **Artifacts** tab. Then select the **Data Collector** configuration. For more information, see the [SAP documentation](https://help.sap.com/docs/integration-suite/sap-integration-suite/importing-integration-packages).
 1. Configure the integration flow with the **LogIngestionURL** and the **DCRImmutableID**.
-1. Deploy the i-flow using SAP Cloud Integration as the runtime service.
+1. Deploy the iflow using SAP Cloud Integration as the runtime service.
 
 
 ## Run the prerequisite checker
-1. The **Prerequisite checker** iflow is included in the package. We recommend running this iflow before continuing to the next step to ensure that your SAP system meets the system prerequisites.
+1. The **Prerequisite checker** iflow is included in the package. We recommend running this iflow **manually** before continuing to the next step, to ensure that your SAP system meets the system prerequisites before you attempt integration from Microsoft Sentinel.
 
 [!INCLUDE [sap-agentless-prerequisites](../includes/sap-agentless-prerequisites.md)]

articles/storage/blobs/TOC.yml

Lines changed: 3 additions & 0 deletions
@@ -529,10 +529,13 @@ items:
 href: ../common/storage-failover-customer-managed-unplanned.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Initiate account failover
 href: ../common/storage-initiate-account-failover.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
+- name: Failover FAQs
+href: ../common/storage-failover-faq.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Check the Last Sync Time property
 href: ../common/last-sync-time-get.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Failover considerations for storage accounts with private endpoints
 href: ../common/storage-failover-private-endpoints.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
+
 - name: Performance and scale
 items:
 - name: Performance and scalability checklist