articles/active-directory-b2c/custom-domain.md (3 additions, 1 deletion)
@@ -5,7 +5,6 @@ description: Learn how to enable custom domains in your redirect URLs for Azure
services: active-directory-b2c
author: kengaderdus
manager: CelesteDG
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
@@ -399,6 +398,9 @@ Copy the URL, change the domain name manually, and then paste it back to your br

Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy.

> [!IMPORTANT]
> If the client sends an `x-forwarded-for` header to Azure Front Door, Azure AD B2C uses the originator's `x-forwarded-for` value as the user's IP address for [Conditional Access evaluation](./conditional-access-identity-protection-overview.md) and the `{Context:IPAddress}` [claims resolver](./claim-resolver-overview.md).

### Can I use a third-party Web Application Firewall (WAF) with B2C?
Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). However, you must test your WAF to ensure that it doesn't block or alert on legitimate requests to Azure AD B2C user flows or custom policies. Learn how to configure [Akamai WAF](partner-akamai.md) and [Cloudflare WAF](partner-cloudflare.md) with Azure AD B2C.
articles/app-service/configure-common.md (3 additions, 0 deletions)
@@ -510,6 +510,9 @@ Set-AzWebApp $webapp
By default, App Service starts your app from the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. Such an app would be accessible at `http://contoso.com/public`, for example, but you typically want to direct `http://contoso.com` to the `public` directory instead. If your app's startup file is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories.

> [!IMPORTANT]
> Mapping a virtual directory to a physical path is available only on Windows apps.

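As an illustrative sketch only (the resource group, app name, and paths below are hypothetical, and this is not the documented procedure), on a Windows app you could remap the root virtual application to a subdirectory such as Laravel's `public` folder with Azure PowerShell:

```powershell
# Sketch under stated assumptions: names and paths are placeholders.
# Fetch the app, then point the root virtual application at the public subfolder.
$webapp = Get-AzWebApp -ResourceGroupName "MyResourceGroup" -Name "MyLaravelApp"

# SiteConfig.VirtualApplications holds the virtual path -> physical path mappings.
$root = $webapp.SiteConfig.VirtualApplications | Where-Object { $_.VirtualPath -eq "/" }
$root.PhysicalPath = "site\wwwroot\public"

# Push the modified configuration back to App Service.
Set-AzWebApp -WebApp $webapp
```

The tabs below show the supported portal flow for the same change.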
# [Azure portal](#tab/portal)
1. In the [Azure portal], search for and select **App Services**, and then select your app.
articles/azure-monitor/app/azure-web-apps-nodejs.md (7 additions, 2 deletions)
@@ -17,8 +17,11 @@ Monitoring of your Node.js web applications running on [Azure App Services](../.
The easiest way to enable application monitoring for Node.js applications running on Azure App Services is through the Azure portal.
Turning on application monitoring in the Azure portal automatically instruments your application with Application Insights, and doesn't require any code changes.

> [!NOTE]
> You can configure the automatically attached agent by using the `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT` environment variable in the App Service environment variables blade. For details on the configuration options that can be passed via this environment variable, see [Node.js Configuration](https://github.com/microsoft/ApplicationInsights-node.js#Configuration).

> [!NOTE]
> If both automatic instrumentation and manual SDK-based instrumentation are detected, only the manual instrumentation settings are honored. This is to prevent duplicate data from being sent. For more information, see the [troubleshooting section](#troubleshooting) in this article.

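For illustration, the value passed through `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT` is a JSON document. The sketch below assumes the `samplingPercentage` and `enableAutoCollectConsole` options described in the linked Node.js configuration reference; check that reference for the options your SDK version supports:

```json
{
  "samplingPercentage": 50,
  "enableAutoCollectConsole": true
}
```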
### Autoinstrumentation through Azure portal
@@ -106,7 +109,6 @@ Below is our step-by-step troubleshooting guide for extension/agent based monito
If `SDKPresent` is true, the extension detected that some aspect of the SDK is already present in the application and will back off.

# [Linux](#tab/linux)
1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
@@ -134,6 +136,8 @@ Below is our step-by-step troubleshooting guide for extension/agent based monito
If `SDKPresent` is true, the extension detected that some aspect of the SDK is already present in the application and will back off.
articles/cosmos-db/continuous-backup-restore-introduction.md (1 addition, 1 deletion)
@@ -132,7 +132,7 @@ Currently the point in time restore functionality has the following limitations:
* Multi-region write accounts aren't supported.

* Currently, Azure Synapse Link can be enabled in continuous backup database accounts. The opposite isn't supported yet: it's not possible to turn on continuous backup in Synapse Link enabled database accounts. Analytical store isn't included in backups. For more information about backup and analytical store, see [analytical store backup](analytical-store-introduction.md#backup).

* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
articles/cosmos-db/postgresql/concepts-colocation.md (4 additions, 4 deletions)
@@ -7,7 +7,7 @@ ms.service: cosmos-db
ms.subservice: postgresql
ms.custom: ignite-2022
ms.topic: conceptual
ms.date: 10/01/2023
---

# Table colocation in Azure Cosmos DB for PostgreSQL
@@ -18,13 +18,13 @@ Colocation means storing related information together on the same nodes. Queries
## Data colocation for hash-distributed tables
In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables. The concept of hash-distributed tables is also known as [row-based sharding](concepts-sharding-models.md#row-based-sharding). In [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), tables within a distributed schema are always colocated.
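As a sketch (the table and column names here are hypothetical), two hash-distributed tables that share a distribution column can be explicitly colocated with the `colocate_with` option of `create_distributed_table`:

```sql
-- Hypothetical tables; distributing both on tenant_id keeps a given
-- tenant's rows from the two tables on the same node, so joins on
-- tenant_id stay node-local.
SELECT create_distributed_table('events', 'tenant_id');
SELECT create_distributed_table('page_views', 'tenant_id', colocate_with => 'events');
```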
:::image type="content" source="media/concepts-colocation/colocation-shards.png" alt-text="Diagram shows shards with the same hash range placed on the same node for events shards and page shards." border="false":::
## A practical example of colocation

Consider the following tables that might be part of a multitenant web analytics SaaS:
@@ -153,4 +153,4 @@ In some cases, queries and table schemas must be changed to include the tenant I
## Next steps
- See how tenant data is colocated in the [multitenant tutorial](tutorial-design-database-multi-tenant.md).
articles/cosmos-db/postgresql/concepts-nodes.md (17 additions, 10 deletions)
@@ -6,7 +6,7 @@ author: jonels-msft
ms.service: cosmos-db
ms.subservice: postgresql
ms.topic: conceptual
ms.date: 09/29/2023
---

# Nodes and tables in Azure Cosmos DB for PostgreSQL
@@ -25,23 +25,22 @@ allows the database to scale by adding more nodes to the cluster.

Every cluster has a coordinator node and multiple workers. Applications send their queries to the coordinator node, which relays them to the relevant workers and accumulates their results.

Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables and schemas, storing different rows on different worker nodes. Distributed tables and schemas are the key to Azure Cosmos DB for PostgreSQL performance. Tables that aren't distributed stay entirely on the coordinator node and can't take advantage of cross-machine parallelism.

For each query on distributed tables, the coordinator either routes it to a single worker node or parallelizes it across several, depending on whether the required data lives on a single node or on multiple nodes. With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), the coordinator routes queries directly to the node that hosts the schema. In both schema-based sharding and [row-based sharding](concepts-sharding-models.md#row-based-sharding), the coordinator decides what to do by consulting metadata tables. These tables track the DNS names and health of worker nodes, and the distribution of data across nodes.

## Table types

There are five types of tables in a cluster, each stored differently on nodes and used for different purposes.

### Type 1: Distributed tables
@@ -77,7 +76,15 @@ values like order statuses or product categories.
When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.

A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a `users` table for application sign-in and authentication.
### Type 4: Local managed tables

Azure Cosmos DB for PostgreSQL might automatically add local tables to metadata if a foreign key reference exists between a local table and a reference table. You can also create locally managed tables manually by executing the [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) function on regular local tables. Tables present in metadata are considered managed tables and can be queried from any node; Citus knows to route to the coordinator to obtain data from the local managed table. Such tables are displayed as local in the [citus_tables](reference-metadata.md#distributed-tables-view) view.
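A minimal sketch (the table name is hypothetical) of promoting a regular local table to a managed table:

```sql
-- An ordinary local table on the coordinator.
CREATE TABLE plans (id int PRIMARY KEY, name text);

-- Add it to Citus metadata so it becomes a local managed table,
-- queryable from any node.
SELECT citus_add_local_table_to_metadata('plans');
```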
### Type 5: Schema tables

With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), introduced in Citus 12.0, distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are automatically converted to colocated distributed tables without a shard key. Such tables are considered schema tables and are displayed as schema in the [citus_tables](reference-metadata.md#distributed-tables-view) view.
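A minimal sketch, assuming Citus 12.0 or later and hypothetical schema and table names:

```sql
-- Enable schema-based sharding for schemas created in this session.
SET citus.enable_schema_based_sharding TO on;

-- The new schema becomes a distributed schema with its own colocation group;
-- tables created in it become colocated distributed tables without a shard key.
CREATE SCHEMA tenant_a;
CREATE TABLE tenant_a.orders (id bigint PRIMARY KEY, total numeric);
```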
0 commit comments