
Commit bc1b4e6

Merge pull request #311046 from MicrosoftDocs/main
Auto Publish – main to live - 2026-01-29 06:00 UTC
Parents: 90c12f9 + 7208d1a

40 files changed: 527 additions & 353 deletions


articles/active-directory-b2c/phone-based-mfa.md

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ author: kengaderdus
 manager: CelesteDG
 ms.service: azure-active-directory
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 1/23/2025
 ms.author: kengaderdus
 ms.subservice: b2c
 ms.custom: sfi-image-nochange
@@ -144,8 +144,8 @@ To help prevent fraudulent sign-ups, remove any country/region codes that do not
 </RelyingParty>
 </TrustFrameworkPolicy>
 ```
-> [!IMPORTANT]
->Add the code in step 2 to the _relying party policy_ to enforce country/region code restrictions on the server side. You must not define these elements only in parent policies; put them in the relying party policy.
+> [!IMPORTANT]
+>Add the code in step 2 to the _relying party policy_ to enforce country/region code restrictions on the server side. You must not define these elements only in parent policies; put them in the relying party policy.
 
 1. In the `BuildingBlocks` section of this policy file, add the following code. Make sure to include only the country/region codes relevant to your organization:

articles/azure-netapp-files/TOC.yml

Lines changed: 2 additions & 0 deletions
@@ -257,6 +257,8 @@
 href: azacsnap-troubleshoot.md
 - name: Preview release features of AzAcSnap
 href: azacsnap-preview.md
+- name: Understand advanced ransomware protection
+href: advanced-ransomware-protection.md
 - name: Solutions and benefits
 items:
 - name: Solution architectures using Azure NetApp Files

articles/azure-netapp-files/advanced-ransomware-protection.md

Lines changed: 138 additions & 0 deletions
Large diffs are not rendered by default.
Two binary image files added (179 KB and 246 KB; not rendered).

articles/backup/azure-data-lake-storage-backup-support-matrix.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ Vaulted backups of Azure Data Lake Storage are available in the following region
 
 | Availability type | Region |
 | --- | --- |
-| **General availability** | Australia East, Central US, East Asia, France South, Germany West Central, Southeast US, Switzerland North, Switzerland West, UAE North, UK West, West India, Central India, North Central US, South India, UK South, West Central US, West US 3, West Europe, North Europe, West US, West US 2, East US, East US 2, Southeast Asia. |
+| **General availability** | Australia East, Central US, East Asia, France South, Germany West Central, Southeast US, Switzerland North, Switzerland West, UAE North, UK West, West India, Central India, North Central US, South India, UK South, West Central US, West US 3, West Europe, North Europe, West US, West US 2, East US, East US 2, Southeast Asia, Australia Central, Australia Southeast, Brazil South, Brazil Southeast, Canada Central, Canada East, Denmark East, East US 3, France Central, Germany North, Indonesia Central, Jio India West, Japan East, Korea Central, Korea South, Malaysia South, Malaysia West, Norway East, New Zealand North, South Central US, Sweden Central, Sweden South, Spain Central, Southwest US, UAE Central, South Central US 2, Southeast US 5. |
 
 ## Supported storage accounts

articles/backup/blob-backup-support-matrix.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ Operational backup for blobs is available in all public cloud regions, except Fr
 
 # [Vaulted backup](#tab/vaulted-backup)
 
-Vaulted backup for blobs is available in all public cloud regions.
+Vaulted backup for blobs is available in all public cloud regions. It's also available in China East, China East 2, China East 3, China North 2, China North 3, US GOV Arizona, US GOV Texas, US GOV Virginia, US DoD East, US DoD Central.
 
 
 ---

articles/data-factory/automatic-connector-upgrade.md

Lines changed: 6 additions & 4 deletions
@@ -9,7 +9,7 @@ ms.topic: concept-article
 ms.custom:
 - references_regions
 - build-2025
-ms.date: 11/19/2025
+ms.date: 01/08/2026
 ---
 
 # Automatic connector upgrade
@@ -71,8 +71,9 @@ You can find more details from the table below on the connector list that is pla
 | [Amazon Redshift](connector-amazon-redshift.md) | If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.60 or above.|
 | [Google BigQuery](connector-google-bigquery.md) | Scenario that doesn't rely on below capability in Google BigQuery V1:<br><br> • Use `trustedCertsPath`, `additionalProjects`, `requestgoogledrivescope` connection properties.<br> • Set `useSystemTrustStore` connection property as `false`.<br> • Use **STRUCT** and **ARRAY** data types. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above. |
 | [Greenplum](connector-greenplum.md) | If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.56 or above. |
-| [Hive](connector-hive.md) | Scenario that doesn't rely on below capability in Hive (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.|
-| [Impala](connector-impala.md) | Scenario that doesn't rely on below capability in Impala (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• SASL Username<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above. |
+| [Hive](connector-hive.md) | Scenario that doesn't rely on below capability in Hive (version 1.0):<br><br>• Use Username authentication type.<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.|
+| [Impala](connector-impala.md) | Scenario that doesn't rely on below capability in Impala (version 1.0):<br><br>• Use SASL Username authentication type.<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above. |
+| [Jira](connector-jira.md) | Scenario that doesn't rely on below capability in Jira (version 1.0):<br><br>• Use `useEncryptedEndpoints`, `useHostVerification` and `usePeerVerification` as connection properties. <br>• Use `query`. <br><br>The following Jira tables are supported for automatic upgrade:<br>&nbsp;&nbsp;Platform.Api_Groups_Picker, Platform.Api_Issue_Type, Platform.Api_Project, Platform.Api_Field, Platform.Api_Status, Platform.Api_Status_Category, Platform.Api_Project_Type, Platform.Api_Resolution, Platform.Api_Priority, Platform.ApiAllUsers, Platform.Api_Issue_Link_Type, Platform.Api_Role, Platform.Api_Project_Versions, Platform.Api_Component, Platform.Api_Project_IssueTypes, Agile.Agile_Board_Epic, Agile.Agile_Board, Agile.Agile_Board_Sprint, Agile.Agile_Board_Issue, Agile.Agile_Board_Epic_Issue. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.63 or above. |
 | [MariaDB](connector-mariadb.md) | If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above. |
 | [MySQL](connector-mysql.md) | If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above. |
 | [Netezza](connector-netezza.md) | If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above. |
@@ -83,9 +84,10 @@ You can find more details from the table below on the connector list that is pla
 | [Salesforce Service Cloud](connector-salesforce-service-cloud.md) | Scenario that doesn't rely on capability below in Salesforce Service Cloud V1:<br><br>• Use the following SOQL queries and your pipeline runs on the self-hosted integration runtime with a version below 5.59.<br>&nbsp;&nbsp;• TYPEOF clauses<br>&nbsp;&nbsp;• Compound address/geolocations fields<br>• Use the following SQL-92 queries and your pipeline runs on the self-hosted integration runtime.<br>&nbsp;&nbsp;• Timestamp ts keyword<br>&nbsp;&nbsp;• Top keyword<br>&nbsp;&nbsp;• Comments with -- or /*<br>&nbsp;&nbsp;• Group By and Having <br>• Report query {call "\<report name>"}|
 | [ServiceNow](connector-servicenow.md) | Scenario that doesn't use the custom SQL query in dataset in ServiceNow V1. <br><br>Ensure that you have a role with at least read access to *sys_db_object*, *sys_db_view* and *sys_dictionary* tables in ServiceNow. To access views in ServiceNow, you need to have a role with at least read access to *sys_db_view_table* and *sys_db_view_table_field* tables.<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above. |
 | [Snowflake](connector-snowflake.md) | Scenario that doesn't rely on capability below in Snowflake V1:<br><br>• Use any of below<br>&nbsp;&nbsp;properties: connection_timeout, disableocspcheck, enablestaging, on_error, query_tag, quoted_identifiers_ignore_case, skip_header, stage, table, timezone, token, validate_utf8, no_proxy, nonproxyhosts, noproxy. <br>• Use multi-statement query in script activity or lookup activity. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above. |
-| [Spark](connector-spark.md) | Scenario that doesn't rely on below capability in Spark (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.|
+| [Spark](connector-spark.md) | Scenario that doesn't rely on below capability in Spark (version 1.0):<br><br>• Use Username authentication type. <br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.|
 | [Teradata](connector-teradata.md) | Scenario that doesn't rely on below capability in Teradata (version 1.0):<br><br> • Set below value for **CharacterSet**:<br>&nbsp;&nbsp;• BIG5 (TCHBIG5_1R0)<br>&nbsp;&nbsp;• EUC (Unix compatible, KANJIEC_0U)<br>&nbsp;&nbsp;• GB (SCHGB2312_1T0)<br>&nbsp;&nbsp;• IBM Mainframe (KANJIEBCDIC5035_0I)<br>&nbsp;&nbsp;• NetworkKorean (HANGULKSC5601_2R4)<br>&nbsp;&nbsp;• Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.|
 | [Vertica](connector-vertica.md) | Scenario that doesn't rely on below capability in Vertica (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime (version 5.56 or above).<br><br> For more information, go to [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20). |
+| [Xero](connector-xero.md) | Scenario that doesn't rely on below capability in Xero (version 1.0):<br><br>• Use OAuth 1.0 authentication type. <br>• Use `query`. <br><br>The following Xero tables are supported for automatic upgrade: <br>&nbsp;&nbsp;Accounts, Bank_Transaction_Line_Item_Tracking, Bank_Transaction_Line_Items, Bank_Transactions, Bank_Transfers, Budgets, Contact_Group_Contacts, Contact_Groups, Contacts, Contacts_Addresses, Credit_Note_Line_Items, Credit_Notes, Credit_Notes_Allocations, Credit_Notes_Line_Items_Tracking, Currencies, Employees, Invoice_Line_Items, Invoices, Invoices_Credit_Notes, Invoices_Line_Items_Tracking, Items, Journal_Lines, Journals, Manual_Journal_Line_Tracking, Manual_Journal_Lines, Manual_Journals, Organisations, Overpayments, Payments, Prepayments, Projects, ProjectTasks, ProjectUsers, Purchase_Order_Line_Items, Purchase_Orders, Receipts, Tax_Rates, Tracking_Categories, Tracking_Category_Options, Users. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.62 or above. |
 
 ## Related content

articles/energy-data-services/release-notes.md

Lines changed: 13 additions & 0 deletions
@@ -23,6 +23,19 @@ This page is updated with the details about the upcoming release approximately a
 
 <hr width = 100%>
 
+## January 2026
+### Rock and Fluid Samples (RAFS) DDMS - Generally Available
+The Rock and Fluid Samples (RAFS) DDMS is now generally available in Azure Data Manager for Energy. RAFS provides a standardized, scalable foundation for storing, querying, and analyzing geological and engineering sample data from subsurface and surface locations. These insights are essential for key energy workflows, including reservoir modeling, drilling planning, and facility design.
+
+To learn more about RAFS DDMS, see the official documentation: [Rock and Fluid Samples (RAFS) DDMS APIs](tutorial-rock-and-fluid-samples-ddms.md).
+
+### Dangerous Query Rate Limit Enforcement
+To strengthen service resiliency, Azure Data Manager for Energy now applies targeted rate limiting to a narrow class of high-risk wildcard queries that can negatively impact cluster performance. This guardrail affects only queries that use the fully unbounded pattern \*\:\*\:\*\:\* in the kind field of requests sent to /api/search/v2/query and /api/search/v2/query_with_cursor. Typical ingestion, search, and operational workloads are not impacted.
+
+When a query is rate limited, clients receive an HTTP 429 Too Many Requests response. The response body provides a clear explanation and guidance. The enforcement logic uses a conservative default configuration of 2 burst tokens and a refill rate of 1 token every 5 seconds, for an effective allowance of approximately 12 such wildcard queries per minute.
+
+Users can avoid rate-limit conditions by issuing bounded queries with explicit kind values rather than relying on fully open wildcard patterns.
+
 ## December 2025
 ### Reference Data Values Automatic Sync - Generally Available

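The rate-limit policy in the release note above (2 burst tokens, 1 token refilled every 5 seconds, roughly 12 wildcard queries per minute) is a classic token bucket. A minimal sketch of that behavior follows; the class and parameter names are hypothetical illustrations, not the service's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter matching the documented defaults:
    capacity of 2 burst tokens, one token refilled every 5 seconds
    (sustained rate of about 12 requests per minute)."""

    def __init__(self, capacity=2, refill_interval=5.0, clock=time.monotonic):
        self.capacity = capacity
        self.refill_interval = refill_interval  # seconds per refilled token
        self.clock = clock                      # injectable for testing
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, False if it should be
        rejected (the server would answer HTTP 429 Too Many Requests)."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) / self.refill_interval)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With these defaults, two back-to-back wildcard queries succeed, a third is rejected until a token refills 5 seconds later, and long idle periods never accumulate more than the 2-token burst.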
articles/event-grid/handler-azure-monitor-alerts.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ author: robece
 ms.author: robece
 ---
 
-# How to send events to Azure monitor alerts (Preview)
+# How to send events to Azure monitor alerts
 
 This article describes how Azure Event Grid delivers Azure Key Vault events as Azure Monitor alerts.
 
