Commit c16fdeb
committed
Update from SAP DITA CMS (squashed):
commit a421c63b64d4a106a677822ba63d83932fe475c9
Author: REDACTED
Date: Tue Mar 18 08:55:25 2025 +0000
Update from SAP DITA CMS 2025-03-18 08:55:25
Project: dita-all/tfo1739922175240
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loioc25299a38b6448f889a43b42c9e5897d
Language: en-US
Builddable map: 678695d903b546e5947af69e56ed42b8.ditamap

commit 64adee127352f4ac861de4c28a33c1a92d11812f
Author: REDACTED
Date: Tue Mar 18 08:55:21 2025 +0000
Update from SAP DITA CMS 2025-03-18 08:55:21
Project: dita-all/tfo1739922175240
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loiob8faae83b519439fb4ea9d0eb1a5f26e
Language: en-US
Builddable map: 4e1c1e1d5d1947f5875e93e7597c4f4c.ditamap

commit bdd7f204a78139d257303007f7f29ba9f02d83d1
Author: REDACTED
Date: Tue Mar 18 08:52:14 2025 +0000
Update from SAP DITA CMS 2025-03-18 08:52:14
Project: dita-all/tfo1739922175240
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap

[Remaining squash message was removed before commit...]
1 parent c9e66ff commit c16fdeb

109 files changed

Lines changed: 1302 additions & 419 deletions


docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-and-preparing-data-in-the-object-store-2a6bc3f.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ A user with an administrator role can create a space with SAP HANA data lake fil

 ## Load Data with Replication Flows

-Users with a modeler role can use replication flows to load data in local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(specific folder in file storage\) of a target local table \(File\). To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See [Monitoring Local Tables (File)](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
+Users with a modeler role can use replication flows to load data in local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(specific folder in file storage\) of a target local table \(File\). To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-the-source-for-a-replication-flow-7496380.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ Define the source for your replication flow \(connection, container, and objects

 - For standard CDS views, the container is the CDS root folder \(CDS\_EXTRACTION\).

-  If a standard CDS view for which replication is enabled is not shown in the CDS\_EXTRACTION folder, make sure that the user in the source connection has the required authorizations. For connections to an SAP S/4HANA Cloud source system, this might mean that the user must be assigned to an authorization group that contains the CDS view \(as described in [Integrating CDS Views Using ABAP CDS Pipeline](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/f509eddda867452db9631dae1ae442a3.html?version=2308.503)\).
+  If a standard CDS view for which replication is enabled is not shown in the CDS\_EXTRACTION folder, make sure that the user in the source connection has the required authorizations. For connections to an SAP S/4HANA Cloud source system, this might mean that the user must be assigned to an authorization group that contains the CDS view as described in [Integrating CDS Views Using ABAP CDS Pipeline](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/f509eddda867452db9631dae1ae442a3.html?).

 - For database tables, the container is the schema that includes the table.

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-the-target-for-a-replication-flow-ab490fb.md

Lines changed: 61 additions & 4 deletions
@@ -48,10 +48,67 @@ If you are using an existing table as the target object, this table may contain

 - If you activate the property *Skip Unmapped Columns*, the system ignores the unmapped target columns during the replication flow. Their existing content remains as-is.

-  - If you deactivate it, you get an error message for each unmapped target column when saving the replication flow.
-
-  For replication flows created before version 2025.01 of SAP Datasphere, the property is deactivated by default, but you can activate it as required. For replication flows created with version 2025.01 or later, this property is activated by default, and you can deactivate it as required.
+  The way these additional columns will appear when you perform a data preview depends on your target type:
+
+  <table>
+  <tr>
+  <th valign="top">Target</th>
+  <th valign="top">How Additional Columns are Displayed</th>
+  </tr>
+  <tr>
+  <td valign="top">SAP Datasphere \(SAP HANA space\)</td>
+  <td valign="top">Additional columns will appear with configured default value or null.</td>
+  </tr>
+  <tr>
+  <td valign="top">SAP Datasphere \(File Space\)</td>
+  <td valign="top">Additional columns will not appear</td>
+  </tr>
+  <tr>
+  <td valign="top">Confluent / Kafka</td>
+  <td valign="top">Additional columns will not appear</td>
+  </tr>
+  </table>
+
+  > ### Note:
+  > Changing the toggle on an active replication flow will have no effect.
+
+  - If you deactivate it, you get an error message for each unmapped target column when saving the replication flow. However, you can still manually set each column to *Skip Mapping* in the *Mapping* tab.
+
+  For replication flows created before version 2025.01 of SAP Datasphere, the property is deactivated by default, but you can activate it as required. For replication flows created with version 2025.02 or later, this property is activated by default, and you can deactivate it as required.

 If a projection is defined for a target column that doesn't exist in the source, this projection takes precedence over the skipping setting. If there are no unmapped columns in the target, activating or deactivating this property is of no effect.
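The target-dependent preview behavior this hunk documents can be condensed into a small sketch. The function and dictionary names below are illustrative only, not an SAP Datasphere API; they merely encode the table added by the commit:

```python
# Conceptual sketch (hypothetical names, not an SAP API): whether unmapped
# ("additional") target columns appear in a data preview, per target type.
ADDITIONAL_COLUMNS_SHOWN = {
    "SAP Datasphere (SAP HANA space)": True,   # shown with default value or null
    "SAP Datasphere (File Space)": False,      # not shown
    "Confluent / Kafka": False,                # not shown
}

def additional_columns_visible(target_type: str) -> bool:
    """Return True if unmapped target columns appear in the data preview."""
    return ADDITIONAL_COLUMNS_SHOWN[target_type]

print(additional_columns_visible("SAP Datasphere (File Space)"))  # False
```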

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-a-replication-flow-3f5ba0c.md

Lines changed: 9 additions & 4 deletions
@@ -19,21 +19,26 @@ Define settings and properties for your replication flow and individual replicat

 > ### Note:
 > - A replication flow that contains objects with load type *Initial and Delta* does not have an end date. Once started, it remains in status *Active* until it is stopped or paused or an issue occurs.
 >
+> > ### Caution:
+> > You must always stop or pause a running replication flow before a source system downtime. For more information, see [Working With Existing Replication Flow Runs](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/da62e1ee746448e8bc043e1be4377cbe.html "You can pause a replication flow run and resume it at a later point in time, or you can stop it completely.") :arrow_upper_right:
+>
 > - The system load caused by the delta load operations can vary substantially depending on the frequency of changes in your data source in combination with the interval length you define. Make sure that your tenant configuration supports your settings. For more information, see [Configure the Size of your SAP Datasphere Tenant](https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/33f8ef4ec359409fb75925a68c23ebc3.html).
 >
 > - The next interval starts after all changes from the previous interval have been replicated. For example, if replicating a set of changes starts at 10:30 a.m. and takes until 10:45 a.m., and you have defined one-hour intervals, the next delta replication starts at 11:45 a.m.
 >
 > - If your source object is a local table, you can only use load type *Initial and Delta* if *Delta Capture* is switched on for the local table \(see [Capturing Delta Changes in Your Local Table](https://help.sap.com/docs/SAP_DATASPHERE/c8a54ee704e94e15926551293243fd1d/154bdffb35814d5481d1f6de143a6b9e.html)\).

-2. On the *Settings* tab of the canvas, review the *Truncate* setting and change it as required. This setting is only relevant if the target structure already exists and contains data. If the target structure does not yet exist or is empty, you can ignore the *Truncate* setting.
+2. On the *Settings* tab of the canvas, review the *Delete All Before Loading* setting and change it as required. This setting is only relevant if the target structure already exists and contains data. If the target structure does not yet exist or is empty, you can ignore the *Delete All Before Loading* setting.

-   - If *Truncate* is activated for a **database table**, when you start the replication run, the system deletes the table content, but leaves the table structure intact and fills it with the relevant data from the source.
+   - If *Delete All Before Loading* is activated for a **database table**, when you start the replication run, the system deletes the table content, but leaves the table structure intact and fills it with the relevant data from the source.

      If not, the system inserts new data records after the existing data in the target. For data records that already exist in the target and have been changed in the source, the system updates the target records with the changed data from the source using the UPSERT mode.

-   - For cloud storage provider targets, *Truncate* must always be set. \(If you still try to run a replication flow for an existing target without the *Truncate* option, you get an error message.\) When you start the replication run, the system deletes the object completely \(data and structure\) and re-creates it based on the source data.
-   - For Apache Kafka and Confluent Kafka, when *Truncate* is enabled, the target topic is re-created. This means that all existing records in that topic are deleted as well. Truncation has no effect on the schema registry.
+   - For cloud storage provider targets, *Delete All Before Loading* must always be set. \(If you still try to run a replication flow for an existing target without the *Delete All Before Loading* option, you get an error message.\) When you start the replication run, the system deletes the object completely \(data and structure\) and re-creates it based on the source data.
+   - For Apache Kafka and Confluent Kafka, when *Delete All Before Loading* is enabled, the target topic is re-created. This means that all existing records in that topic are deleted as well. Truncation has no effect on the schema registry.

 3. Click <span class="FPA-icons-V3"></span> \(Browse source settings\) to review the source settings and change them as appropriate.
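The interval rule described in the note above (the next delta run is scheduled relative to when the previous replication of changes finished, not when it started) can be sketched in a few lines. This is a minimal illustration of the arithmetic, not an SAP Datasphere API:

```python
from datetime import datetime, timedelta

def next_delta_start(previous_run_end: datetime, interval: timedelta) -> datetime:
    # The next interval is counted from the end of the previous replication
    # of changes, not from its scheduled start time.
    return previous_run_end + interval

# A delta run starting at 10:30 takes until 10:45; with one-hour intervals,
# the next delta replication is scheduled for 11:45.
print(next_delta_start(datetime(2025, 3, 18, 10, 45), timedelta(hours=1)))
```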

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configuring-email-notification-7ff6a4e.md renamed to docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-email-notification-7ff6a4e.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@

 <link rel="stylesheet" type="text/css" href="../css/sap-icons.css"/>

-# Configuring Email Notification
+# Configure Email Notification

 After creating and deploying a task chain, set up email notification of users for completion of task chain runs.

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-file-d21881b.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ SAP HANA Cloud, data lake allows SAP Datasphere to store and manage mass-data ef

 As a local table \(file\) is capturing delta changes via flows, it creates different entities in the repository after it is deployed:

 - An active records entity for accessing the delta capture entity through a virtual table. It excludes the delta capture columns and deleted records, and keeps only the active records.
-- A delta capture entity that stores information on changes found in the delta capture table. It serves as target for flows at design time. In addition, every local table \(File\) has a specific folder in file storage \(inbound buffer\) to which a replication flow writes data files to a specific target object. To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See and [Monitoring Local Tables (File)](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\)
+- A delta capture entity that stores information on changes found in the delta capture table. It serves as target for flows at design time. In addition, every local table \(File\) has a specific folder in file storage \(inbound buffer\) to which a replication flow writes data files to a specific target object. To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\).

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md

Lines changed: 2 additions & 2 deletions
@@ -268,7 +268,7 @@ For more information about available connection types, sources, and targets, see

 <tr>
 <td valign="top">

-Truncate
+Delete All Before Loading

 </td>
 <td valign="top">

@@ -326,7 +326,7 @@ For more information about available connection types, sources, and targets, see

 8. Click <span class="FPA-icons-V3"></span> \(Run\) to start your replication flow.

-   For more information about how to monitor your replication flow run, see [Monitoring Flows](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:.
+   For more information about how to monitor your replication flow run, see [Monitoring Flows](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:.

 9. The tools in the editor toolbar help you work with your object throughout its lifecycle:
