articles/cosmos-db/postgresql/concepts-nodes.md (+3 −3)

@@ -78,11 +78,11 @@ When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to
A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a `users` table for application sign-in and authentication.

-### Type 4: Local Managed Tables
+### Type 4: Local managed tables

-Azure Cosmos DB for PostgreSQL may automatically add local tables to metadata if a foreign key reference exists between a local table and a reference table. Additionally locally managed tables can be manually created by executing [create_reference_table](reference-functions.md#citus_add_local_table_to_metadata) citus_add_local_table_to_metadata function on regular local tables. Tables present in metadata are considered managed tables and can be queried from any node, Citus knows to route to the coordinator to obtain data from the local managed table. Such tables are displayed as local in [citus_tables](reference-metadata.md#distributed-tables-view) view.
+Azure Cosmos DB for PostgreSQL might automatically add local tables to metadata if a foreign key reference exists between a local table and a reference table. Additionally, locally managed tables can be created manually by executing the [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) function on regular local tables. Tables present in metadata are considered managed tables and can be queried from any node; Citus knows to route to the coordinator to obtain data from the local managed table. Such tables are displayed as local in the [citus_tables](reference-metadata.md#distributed-tables-view) view.

-### Type 5: Schema Tables
+### Type 5: Schema tables

With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) introduced in Citus 12.0, distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are automatically converted to colocated distributed tables without a shard key. Such tables are considered schema tables and are displayed as schema in [citus_tables](reference-metadata.md#distributed-tables-view) view.
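
To make the local managed table behavior concrete, here's a minimal sketch; the table name is hypothetical, while `citus_add_local_table_to_metadata` and the `citus_tables` view are the objects named in the text above:

```sql
-- A regular PostgreSQL table that lives only on the coordinator
-- (hypothetical name).
CREATE TABLE app_settings (key text PRIMARY KEY, value text);

-- Register it in Citus metadata so it becomes a local managed table
-- that can be queried from any node.
SELECT citus_add_local_table_to_metadata('app_settings');

-- The table now appears with type 'local' in the citus_tables view.
SELECT table_name, citus_table_type FROM citus_tables;
```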

articles/cosmos-db/postgresql/concepts-sharding-models.md (+1 −1)

@@ -36,7 +36,7 @@ Drawbacks:
## Schema-based sharding

-Available with Citus 12.0 in Azure Cosmos DB for PostgreSQL, schema-based sharding is the shared database, separate schema model, the schema becomes the logical shard within the database. Multi-tenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes aren't required and the application only needs a small modification to set the proper search_path when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that can't undergo the changes required to onboard row-based sharding.
+Available with Citus 12.0 in Azure Cosmos DB for PostgreSQL, schema-based sharding is the shared database, separate schema model; the schema becomes the logical shard within the database. Multitenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes aren't required, and the application only needs a small modification to set the proper search_path when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that can't undergo the changes required to onboard row-based sharding.
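
A short sketch of the tenant-switching pattern described above, assuming the `citus.enable_schema_based_sharding` setting that ships with Citus 12.0; the tenant schema and table names are hypothetical:

```sql
-- Make newly created schemas distributed schemas (Citus 12.0+).
SET citus.enable_schema_based_sharding TO on;

-- Each tenant gets its own schema, which becomes the logical shard
-- (hypothetical names).
CREATE SCHEMA tenant_42;
CREATE TABLE tenant_42.orders (order_id bigserial PRIMARY KEY, total numeric);

-- The application switches tenants by setting search_path;
-- the queries themselves stay unchanged.
SET search_path TO tenant_42;
SELECT count(*) FROM orders;
```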

articles/cosmos-db/postgresql/concepts-upgrade.md (+3 −3)

@@ -16,7 +16,7 @@ ms.date: 10/01/2023
The Azure Cosmos DB for PostgreSQL managed service can handle upgrades of both the
PostgreSQL server, and the Citus extension. All clusters are created with [the latest Citus version](./reference-extensions.md#citus-extension) available for the major PostgreSQL version you select during cluster provisioning. When you select a PostgreSQL version such as PostgreSQL 15 for in-place cluster upgrade, the latest Citus version supported for selected PostgreSQL version is going to be installed.

-If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you may want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading Postgres version.
+If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you might want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading the Postgres version.

## Upgrade precautions
@@ -36,8 +36,8 @@ Noteworthy Citus 12 changes:
Noteworthy Citus 11 changes:

-* Table shards may disappear in your SQL client. Their visibility
-is now controlled by
+* Table shards might disappear in your SQL client. You can control their visibility
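
As a hedged illustration of that visibility control, assuming the `citus.show_shards_for_app_name_prefixes` setting Citus 11 introduced for shard visibility (the setting isn't named in the text above):

```sql
-- Shards are hidden from catalog queries by default in Citus 11.
-- Reveal them to clients whose application_name starts with 'psql'.
-- citus.show_shards_for_app_name_prefixes is an assumption here.
SET citus.show_shards_for_app_name_prefixes TO 'psql';
```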

articles/cosmos-db/postgresql/quickstart-build-scalable-apps-overview.md (+1 −1)

@@ -18,7 +18,7 @@ ms.date: 10/01/2023
There are three steps involved in building scalable apps with Azure Cosmos DB for PostgreSQL:

1. Classify your application workload. There are use cases where Azure Cosmos DB for PostgreSQL
-shines: multi-tenant SaaS, microservices, real-time operational analytics, and high
+shines: multitenant SaaS, microservices, real-time operational analytics, and high
throughput OLTP. Determine whether your app falls into one of these categories.
2. Based on the workload, use [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) or identify the optimal shard key for the distributed
tables. Classify your tables as reference, distributed, or local.
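
A quick sketch of step 2 with hypothetical table names, using the standard `create_distributed_table` and `create_reference_table` calls:

```sql
-- Distribute the main table along the tenant dimension (hypothetical names).
SELECT create_distributed_table('orders', 'tenant_id');

-- Replicate a small lookup table to every node for local joins.
SELECT create_reference_table('countries');
```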

articles/cosmos-db/postgresql/reference-functions.md

**schemaname:** Name of the schema that needs to be distributed.

-#### Return Value
+#### Return value

N/A

@@ -54,7 +54,7 @@ Converts an existing distributed schema back into a regular schema. The process

**schemaname:** Name of the schema that needs to be undistributed.

-#### Return Value
+#### Return value

N/A

@@ -119,7 +119,7 @@ or colocation group, use the [alter_distributed_table](#alter_distributed_table)

Possible values for `shard_count` are between 1 and 64000. For guidance on
choosing the optimal value, see [Shard Count](howto-shard-count.md).

-#### Return Value
+#### Return value

N/A

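For example, a hedged use of alter_distributed_table (referenced in the hunk header above) with a hypothetical table:

```sql
-- Change a hypothetical table to 64 shards; valid values are 1-64000.
SELECT alter_distributed_table('orders', shard_count => 64);
```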
@@ -175,7 +175,7 @@ distribution.

**table_name:** Name of the distributed table whose local counterpart on the
coordinator node should be truncated.

-#### Return Value
+#### Return value

N/A

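Assuming this entry documents truncate_local_data_after_distributing_table (the function name isn't visible in these lines), a minimal usage sketch:

```sql
-- After distributing a hypothetical table, remove the now-redundant
-- copy of its rows that remains on the coordinator.
SELECT truncate_local_data_after_distributing_table('orders');
```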
@@ -198,7 +198,7 @@ worker node.

**table\_name:** Name of the small dimension or reference table that
needs to be distributed.

-#### Return Value
+#### Return value

N/A

@@ -225,7 +225,7 @@ When you undistribute the table, Citus removes the resulting local tables from m

**cascade\_via\_foreign\_keys:** (Optional) When this argument is set to "true," citus_add_local_table_to_metadata automatically adds into metadata any other tables that are in a foreign key relationship with the given table. Use caution with this parameter, because it can potentially affect many tables.

-#### Return Value
+#### Return value

N/A

@@ -261,7 +261,7 @@ tables that were previously colocated with the table, and the colocation will

be preserved. If it is "false", the current colocation of this table will be
broken.

-#### Return Value
+#### Return value

N/A

@@ -300,7 +300,7 @@ This function doesn't move any data around physically.

If you want to break the colocation of a table, you should specify
`colocate_with => 'none'`.

-#### Return Value
+#### Return value

N/A

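Assuming this entry is update_distributed_table_colocation (not named in the visible lines), a hedged sketch of breaking colocation:

```sql
-- Break the colocation of a hypothetical table; no data moves physically.
SELECT update_distributed_table_colocation('orders', colocate_with => 'none');
```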
@@ -357,7 +357,7 @@ undistribute_table also undistributes all tables that are related to table_name

through foreign keys. Use caution with this parameter, because it can
potentially affect many tables.

-#### Return Value
+#### Return value

N/A

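A brief usage sketch for undistribute_table, which the lines above describe; the table name is hypothetical:

```sql
-- Convert a hypothetical distributed table back to a regular local table;
-- cascade_via_foreign_keys also undistributes tables related through
-- foreign keys.
SELECT undistribute_table('orders', cascade_via_foreign_keys => true);
```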
@@ -406,7 +406,7 @@ a distributed table (or, more generally, colocation group), be sure to name

that table using the `colocate_with` parameter. Then each invocation of the
function will run on the worker node containing relevant shards.

-#### Return Value
+#### Return value

N/A

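Assuming this entry is create_distributed_function (not named in the visible lines), a hedged sketch with hypothetical names:

```sql
-- Distribute a function and colocate it with the orders table, so each
-- call is routed to the worker holding the matching shard.
-- process_order and orders are hypothetical names.
SELECT create_distributed_function(
  'process_order(bigint)',
  distribution_arg_name => '$1',
  colocate_with => 'orders'
);
```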
@@ -460,11 +460,11 @@ overridden with these GUCs:

**table_name:** Name of the columnar table.

**chunk_row_count:** (Optional) The maximum number of rows per chunk for
-newly inserted data. Existing chunks of data won't be changed and may have
+newly inserted data. Existing chunks of data won't be changed and might have
more rows than this maximum value. The default value is 10000.

**stripe_row_count:** (Optional) The maximum number of rows per stripe for
-newly inserted data. Existing stripes of data won't be changed and may have
+newly inserted data. Existing stripes of data won't be changed and might have
more rows than this maximum value. The default value is 150000.

**compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
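
A hedged example of applying these settings with alter_columnar_table_set, using the parameter names documented above and a hypothetical table:

```sql
-- Tune a hypothetical columnar table: larger chunks for new data and
-- zstd compression; existing chunks and stripes are left unchanged.
SELECT alter_columnar_table_set(
  'events_columnar',
  chunk_row_count => 20000,
  compression => 'zstd'
);
```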
@@ -500,7 +500,7 @@ The alter_table_set_access_method() function changes access method of a table

**access_method:** Name of the new access method.

-#### Return Value
+#### Return value

N/A

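A minimal sketch of alter_table_set_access_method(), named in the hunk header above; the table name is hypothetical:

```sql
-- Convert a hypothetical row-based (heap) table to columnar storage.
SELECT alter_table_set_access_method('events', 'columnar');
```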
@@ -529,7 +529,7 @@ will contain the point end_at, and no later partitions will be created.

**start_from:** (timestamptz, optional) pick the first partition so that it
contains the point start_from. The default value is `now()`.

-#### Return Value
+#### Return value

True if it needed to create new partitions, false if they all existed already.

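Assuming this entry is create_time_partitions (not named in the visible lines), a hedged sketch:

```sql
-- Create weekly partitions on a hypothetical events table, covering
-- the range from now() (the start_from default) until one month out.
SELECT create_time_partitions(
  table_name => 'events',
  partition_interval => '1 week',
  end_at => now() + interval '1 month'
);
```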
@@ -562,7 +562,7 @@ be partitioned on one column, of type date, timestamp, or timestamptz.

**older_than:** (timestamptz) drop partitions whose upper range is less than or
equal to older_than.

-#### Return Value
+#### Return value

N/A

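Assuming this entry is the drop_old_time_partitions procedure (not named in the visible lines), a hedged sketch:

```sql
-- Drop partitions of a hypothetical events table whose range ends a
-- year or more ago; it's a procedure, so invoke it with CALL.
CALL drop_old_time_partitions('events', now() - interval '12 months');
```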
@@ -591,7 +591,7 @@ or equal to older_than.

**new_access_method:** (name) either 'heap' for row-based storage, or
'columnar' for columnar storage.

-#### Return Value
+#### Return value

N/A

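Assuming this entry is the alter_old_partitions_set_access_method procedure (not named in the visible lines), a hedged sketch:

```sql
-- Switch year-old partitions of a hypothetical events table to
-- columnar storage to compress them; invoked with CALL as a procedure.
CALL alter_old_partitions_set_access_method(
  'events', now() - interval '12 months', 'columnar'
);
```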
@@ -623,7 +623,7 @@ doesn't work for the append distribution.

**distribution\_value:** The value of the distribution column.

-#### Return Value
+#### Return value

The shard ID Azure Cosmos DB for PostgreSQL associates with the distribution column value
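
This return-value description matches get_shard_id_for_distribution_column; assuming that's the function documented here, a hedged sketch with hypothetical values:

```sql
-- Which shard holds rows whose distribution column equals 42?
-- (hypothetical table and value)
SELECT get_shard_id_for_distribution_column('orders', 42);
```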