Commit c8c3aa4

Add note on Spark table column limit for SQL queries

Added important note about Spark tables with over 1,024 columns and provided a workaround.

1 parent 19c1eb4, commit c8c3aa4

1 file changed: 4 additions & 0 deletions

File tree

articles/synapse-analytics/metadata/overview.md

```diff
@@ -16,6 +16,10 @@ Azure Synapse Analytics allows the different workspace computational engines to
 
 The sharing supports the so-called modern data warehouse pattern and gives the workspace SQL engines access to databases and tables created with Spark. It also allows the SQL engines to create their own objects that aren't being shared with the other engines.
 
+> [!IMPORTANT]
+> Tables created in Spark with more than 1,024 columns may appear in Object Explorer but can't be queried from the serverless SQL pool due to incomplete metadata synchronization.
+>
+> **Workaround**: Avoid creating Spark tables with more than 1,024 columns if they need to be queried from the serverless SQL pool. Redesign the schema and recreate the table.
 ## Support the modern data warehouse
 
 The shared metadata model supports the modern data warehouse pattern in the following way:
```
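The workaround in the added note amounts to keeping every Spark table under the serverless SQL pool's 1,024-column ceiling. One way to redesign a wider schema is to split it into several narrower tables that share a key column for joins. The sketch below is an illustrative, Spark-free outline of that chunking logic; the `MAX_COLS` constant and `split_columns` helper are hypothetical names, not part of any Synapse or Spark API:

```python
# Illustrative only: partition a wide column list into groups that each
# stay under the serverless SQL pool's 1,024-column limit. The key column
# is repeated in every group so the narrower tables can be joined back.
MAX_COLS = 1024

def split_columns(columns, key, max_cols=MAX_COLS):
    """Partition `columns` into chunks of at most `max_cols`, placing `key` first in each."""
    rest = [c for c in columns if c != key]
    chunk = max_cols - 1  # reserve one slot in each group for the key column
    return [[key] + rest[i:i + chunk] for i in range(0, len(rest), chunk)]

# Example: a 2,500-column schema becomes three tables of <= 1,024 columns each.
cols = ["id"] + [f"c{i}" for i in range(2499)]
groups = split_columns(cols, key="id")
print([len(g) for g in groups])  # [1024, 1024, 454]
```

Each group could then be written out as its own Spark table (for example with `saveAsTable`), and the original wide shape recovered in SQL by joining the narrower tables on the key column.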
