The output shows the complete path of the Recovery Services vault where the storage account is registered. Here's a sample output:

```output
Found Storage account afsaccount registered in vault: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault123
```
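If you need the vault's components in automation, the path in the sample output is a standard Azure resource ID and can be split with plain string handling. A minimal sketch in Python (the helper name `parse_vault_id` is our own, not part of any Azure SDK):

```python
def parse_vault_id(resource_id: str) -> dict:
    """Split an Azure Recovery Services vault resource ID into its parts."""
    parts = resource_id.strip("/").split("/")
    # Resource IDs alternate key/value segments:
    # subscriptions/<sub>/resourceGroups/<rg>/providers/<ns>/vaults/<name>
    fields = dict(zip(parts[::2], parts[1::2]))
    return {
        "subscription_id": fields["subscriptions"],
        "resource_group": fields["resourceGroups"],
        "vault_name": fields["vaults"],
    }

# Sample values from the output above.
vault_id = ("/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
            "/resourceGroups/azurefiles/providers"
            "/Microsoft.RecoveryServices/vaults/azurefilesvault123")
print(parse_vault_id(vault_id))
```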
articles/migrate/tutorial-discover-mysql-database-instances.md
# Customer intent: As a database administrator, I want to discover MySQL database instances in my datacenter using an agentless solution, so that I can assess and manage my databases efficiently before migrating to the cloud.
---

# Discover MySQL database instances running in your datacenter (preview)
This article describes how to discover MySQL database instances running on servers in your datacenter by using the **Azure Migrate appliance**. The discovery process is agentless; no agents are installed on the target servers.
1. Open the appliance configuration manager, and complete the prerequisite checks and appliance registration.
2. Navigate to the **Manage credentials and discovery sources** panel.
3. In Step 3, select the **MySQL authentication** credential type, provide a friendly name, enter the MySQL username and password, and select **Save**.
> [!NOTE]
> - Ensure that the user corresponding to the added MySQL credentials has the following privileges:
> GRANT SELECT ON information_schema.* TO 'username'@'ip';
> GRANT SELECT ON performance_schema.* TO 'username'@'ip';
To enable Discovery and Assessment in Azure Migrate, you can create a custom MySQL user account with the minimum required permissions. Use the following script to create the account and grant it access from the appliance machine. The account that runs the script needs the following privileges:

- CREATE USER, to create the new user.
- GRANT OPTION, to grant privileges to the new user.
- SELECT on mysql.user, required for the existence check.
- PROCESS, if you want to verify process-related grants after creation.
```sql
-- MySQL script to create a least-privilege user for Azure Migrate.
-- Replace your_username, your_password, and your_appliance_ip with actual values before execution.

SET @username = 'your_username';
SET @password = 'your_password';
SET @ip = 'your_appliance_ip';

-- Check whether the user already exists.
SELECT CASE
    WHEN EXISTS (SELECT 1 FROM mysql.user WHERE user = @username AND host = @ip)
        THEN CONCAT('User ', @username, '@', @ip, ' already exists')
    ELSE CONCAT('User ', @username, '@', @ip, ' does not exist, proceeding with creation')
END AS user_check;

-- User variables can't appear directly in CREATE USER or GRANT statements,
-- so build and run each statement as a prepared statement.
SET @sql = CONCAT('CREATE USER IF NOT EXISTS ''', @username, '''@''', @ip, ''' IDENTIFIED BY ''', @password, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

-- Grant minimal required privileges.
SET @sql = CONCAT('GRANT USAGE ON *.* TO ''', @username, '''@''', @ip, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

SET @sql = CONCAT('GRANT PROCESS ON *.* TO ''', @username, '''@''', @ip, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

-- Grant SELECT on specific columns in mysql.user.
SET @sql = CONCAT('GRANT SELECT (User, Host, Super_priv, File_priv, Create_tablespace_priv, Shutdown_priv) ',
                  'ON mysql.user TO ''', @username, '''@''', @ip, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

-- Grant SELECT on information_schema and performance_schema.
SET @sql = CONCAT('GRANT SELECT ON information_schema.* TO ''', @username, '''@''', @ip, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

SET @sql = CONCAT('GRANT SELECT ON performance_schema.* TO ''', @username, '''@''', @ip, '''');
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;

-- Apply changes.
FLUSH PRIVILEGES;

-- Log success.
SELECT CONCAT('Azure Migrate user ', @username, '@', @ip, ' created successfully with least privileges.') AS result;
```

Save the script as `CreateUser.sql` with your values substituted, then execute it through your MySQL client:

```
mysql -u root -p -e "SOURCE CreateUser.sql;"
```
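After the script runs, you can confirm that the grants took effect. A quick check, assuming you created a hypothetical user `myuser` with appliance IP `appliance_ip` (substitute your own values):

```sql
SHOW GRANTS FOR 'myuser'@'appliance_ip';
```

The output should list the USAGE and PROCESS grants plus the three SELECT grants from the script.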
You can review the discovered MySQL databases in the **Discovered servers** view around 24 hours after discovery starts. To expedite the discovery of your MySQL instances, follow these steps:
- After adding the MySQL credentials in the appliance configuration manager, restart the discovery services on the appliance.
articles/migrate/vm-assessment-properties.md
|**Setting Category**|**Setting**|**Details**|
|-------------------|---------|-------- |
| **Target settings** | **Target VM series** | The Azure VM series that you want to consider for rightsizing. B-series VM families aren't selected by default when the _Production_ environment type is selected in general settings; if you want to assess the VMs in the assessment scope against burstable VM targets, add them explicitly. [Learn more](https://learn.microsoft.com/azure/virtual-machines/sizes/general-purpose/b-family). Cobalt 100 VMs are only recommended when the on-premises VMs are on Arm64 CPU architecture. [Learn more](https://learn.microsoft.com/azure/virtual-machines/sizes/cobalt-overview). For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series. The availability of VM series depends on the target location selected. [Learn more](/azure/virtual-machines/sizes/overview?branch=main&branchFallbackFrom=release-migrate-new-structure&tabs=breakdownseries,generalsizelist,computesizelist,memorysizelist,storagesizelist,gpusizelist,fpgasizelist,hpcsizelist). |
|**Target settings**|**Target storage disk**| Specifies the type of target storage disk as Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk. <br> If you want a single-instance VM service level agreement (SLA) of 99.9%, consider using Premium-managed disks. This ensures that all disks are recommended as Premium-managed disks. <br> If you're looking to run data-intensive workloads that need high throughput, high IOPS, and consistently low-latency disk storage, consider using Ultra disks. <br> Azure Migrate supports only Managed disks for migration assessment. |
|**Right-Sizing**|**Sizing criteria**| This attribute is used for right-sizing the target recommendations. <br> Use **as-is on-premises** sizing if you don't want to right size the targets and identify the targets according to your configuration for on-premises workloads. Use **performance-based** sizing to calculate compute recommendation based on CPU and memory utilization data and storage recommendation based on the input/output operations per second (IOPS) and throughput of the on-premises disks. |
||**VM uptime**| The duration in days per month and hours per day for Azure VMs that won't run continuously. Cost estimates are based on that duration. The default values are 31 days per month and 24 hours per day. |
||**Azure Hybrid Benefit**| Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) to use your existing OS licenses. For Azure VM assessments, you can bring in both Windows and Linux licenses. If the setting is enabled, Azure prices for selected operating systems aren't considered for VM costing. |
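The **VM uptime** setting above prorates compute cost estimates by the scheduled hours. The arithmetic can be sketched as follows; the hourly rate here is a made-up illustration, not an Azure price:

```python
def monthly_compute_cost(hourly_rate: float,
                         days_per_month: int = 31,
                         hours_per_day: int = 24) -> float:
    """Estimate monthly VM compute cost from an uptime schedule.

    Defaults match the assessment defaults: 31 days/month, 24 hours/day.
    """
    return hourly_rate * days_per_month * hours_per_day

# Hypothetical $0.10/hour VM size.
full_time = monthly_compute_cost(0.10)  # always on
business_hours = monthly_compute_cost(0.10, days_per_month=22, hours_per_day=10)
print(full_time, business_hours)
```

Trimming uptime to 22 weekdays at 10 hours each cuts the estimate to under a third of the always-on figure, which is why the setting matters for non-production workloads.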
> * In Azure Government, we recommend reviewing the [supported target](supported-geographies.md) assessment locations. VM size recommendations in assessments use the VM series specifically designed for Government Cloud regions. [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&products=virtual-machines).
## Next Steps
[Review](best-practices-assessment.md) the best practices for creating an assessment with Azure Migrate.
articles/sentinel/billing.md
#### Data lake tier
The Microsoft Sentinel data lake tier is a cost-effective option for ingesting high-volume, low-fidelity data. Data is charged at a flat, low rate per gigabyte (GB). The data lake tier provides querying and job scheduling capabilities and, once enabled, mirrors all eligible data available in the analytics tier.

To learn more about the Microsoft Sentinel data lake, see [Microsoft Sentinel data lake](datalake/sentinel-lake-overview.md).
The data lake tier incurs charges based on usage of various data lake capabilities.
- **Data lake ingestion** is charged per GB for all data ingested into tables with retention set to the data lake tier only. Data lake ingestion charges don't apply when data is ingested into tables with retention set to include both the analytics and data lake tiers.
- **Data processing** is charged per GB for data ingested into tables with retention set to the data lake tier only. It supports transformations like redaction, splitting, filtering, and normalization. Data processing charges don't apply when data is ingested into tables with retention set to include both the analytics and data lake tiers.
- **Data lake storage** charges are applied per GB per month for any data that remains in the data lake tier after the analytics tier retention period ends. Charges are based on a simple and uniform data compression ratio of 6:1. For example, if you retain 600 GB of raw data, it's billed as 100 GB of compressed data.
- **Data lake query** charges apply per GB of uncompressed data analyzed using Kusto Query Language (KQL) queries or KQL jobs.
- **Advanced data insights** charges apply per compute hour when you use data lake exploration notebook sessions or run data lake exploration notebook jobs. Compute hours are calculated by multiplying the number of cores in the pool selected for the notebook by the amount of time a session was active or a job was running. Data lake notebook sessions and jobs are available in pools of four, eight, and 16 cores.
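The storage and notebook meters above are simple multiplications. A sketch of the arithmetic, using the 6:1 compression ratio and the example figures from the text (no prices are included; those come from the Microsoft Sentinel pricing page):

```python
def billed_storage_gb(raw_gb: float, compression_ratio: float = 6.0) -> float:
    """Data lake storage is billed on compressed size at a uniform 6:1 ratio."""
    return raw_gb / compression_ratio

def compute_hours(pool_cores: int, active_hours: float) -> float:
    """Advanced data insights: compute hours = pool cores x session/job hours."""
    assert pool_cores in (4, 8, 16), "notebook pools come in 4, 8, or 16 cores"
    return pool_cores * active_hours

print(billed_storage_gb(600))  # 600 GB raw -> 100.0 GB billed
print(compute_hours(8, 2.5))   # 8-core pool active 2.5 hours -> 20.0 compute hours
```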
Once onboarded, usage from Microsoft Sentinel workspaces begins to be billed through the previously described meters rather than the existing long-term retention (formerly known as Archive), search, or auxiliary logs ingestion meters.
> [!IMPORTANT]
> Existing Microsoft Sentinel customers currently using and billed for auxiliary logs ingestion, long-term retention, and search will see charges transition to the new data lake ingestion, data lake storage, and data lake query meters, respectively, once they onboard to Microsoft Sentinel data lake. Pricing from previous meters doesn't carry over. For more information on pricing, see [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
For customers that haven't onboarded to Microsoft Sentinel data lake and are currently using auxiliary or basic logs, see [Manage data retention in a Log Analytics workspace](/azure/azure-monitor/logs/data-retention-archive) and [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for relevant information.
### Simplified pricing tiers
Simplified pricing tiers combine the data analysis costs for Microsoft Sentinel and ingestion storage costs of Log Analytics into a single pricing tier. The following screenshot shows the simplified pricing tier that all new workspaces use.
articles/sentinel/includes/sap-agentless-prerequisites.md
**To run the tool**:
1. Open the integration package, navigate to the **Artifacts** tab, and select the **Prerequisite checker** iflow > **Configure**.
1. Set the target destination name for the remote function call (RFC) to the SAP system you want to check. For example, `A4H-100-Sentinel-RFC`.
1. Deploy the iflow as you would otherwise for your SAP systems.
1. Trigger the iflow from any REST client. For example, use the following sample PowerShell script, modifying the sample placeholder values for your environment:
Make sure that the prerequisites checker runs successfully (status code 200), with no warnings in the response output, before connecting to Microsoft Sentinel.

If there are any findings, consult the response details for guidance on remediation steps. Legacy SAP systems often require extra SAP notes. Also see the [troubleshooting section](../sap/sap-deploy-troubleshoot.md) for common issues and resolutions.
articles/sentinel/sap/preparing-sap.md
1. Download the [integration package](https://aka.ms/SAPAgentlessPackage) and upload it to your SAP Integration Suite. For more information, see the [SAP documentation](https://help.sap.com/docs/integration-suite/sap-integration-suite/importing-integration-packages).
1. Open the package and go to the **Artifacts** tab. Then select the **Data Collector** configuration. For more information, see the [SAP documentation](https://help.sap.com/docs/integration-suite/sap-integration-suite/importing-integration-packages).
1. Configure the integration flow with the **LogIngestionURL** and the **DCRImmutableID**.
1. Deploy the iflow using SAP Cloud Integration as the runtime service.
## Run the prerequisite checker
1. The **Prerequisite checker** iflow is included in the package. We recommend running this iflow **manually** before continuing to the next step, to ensure that your SAP system meets the system prerequisites before attempting integration from Microsoft Sentinel.