Commit c9e66ff
Update from SAP DITA CMS (squashed):
commit 354aab5bc400ba3aacbf99a80da7c41f6e08e001
Author: REDACTED
Date: Tue Mar 4 10:44:34 2025 +0000
Update from SAP DITA CMS 2025-03-04 10:44:34
Project: dita-all/kky1738583959055
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loiod3d776bb52294a17b48298443a286f55
Language: en-US
Builddable map: 89ab8c0ed18c432d8fb87551823e7de7.ditamap

commit f29e340c9c28d82f735cbbc0ee12970e38e71d3a
Author: REDACTED
Date: Tue Mar 4 10:43:28 2025 +0000
Update from SAP DITA CMS 2025-03-04 10:43:28
Project: dita-all/kky1738583959055
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loiob8faae83b519439fb4ea9d0eb1a5f26e
Language: en-US
Builddable map: 4e1c1e1d5d1947f5875e93e7597c4f4c.ditamap

commit 2add16f40ec227b00cf75ae7c0e069382ffe6f01
Author: REDACTED
Date: Tue Mar 4 10:40:20 2025 +0000
Update from SAP DITA CMS 2025-03-04 10:40:20
Project: dita-all/kky1738583959055
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
##################################################
[Remaining squash message was removed before commit...]
1 parent 371fcc1 commit c9e66ff

140 files changed

Lines changed: 2520 additions & 1094 deletions


.reuse/dep5

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: sap-datasphere
+Upstream-Contact: Venet Cindy ([email protected])
+Source: https://github.com/sap-docs/sap-datasphere
+
+Files: *
+Copyright: 2023 SAP SE or an SAP affiliate company and sap-datasphere contributors
+License: CC-BY-4.0
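The `dep5` file added above uses the machine-readable Debian copyright format, whose stanzas are RFC 822-style field/value paragraphs. As a minimal sketch (not part of the commit), the header stanza can be read with Python's stdlib email parser:

```python
# Minimal sketch: dep5 stanzas are RFC 822-style, so the header
# paragraph parses with the stdlib email parser. The sample text
# below reproduces fields from the committed .reuse/dep5 file.
from email import message_from_string

dep5_header = """\
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: sap-datasphere
Source: https://github.com/sap-docs/sap-datasphere
"""

msg = message_from_string(dep5_header)
print(msg["Upstream-Name"])  # sap-datasphere
print(msg["Format"])
```

Tools such as the FSFE's `reuse` CLI consume this file to verify repository-wide licensing metadata.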

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-and-preparing-data-in-the-object-store-2a6bc3f.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ A user with an administrator role can create a space with SAP HANA data lake fil
 
 ## Load Data with Replication Flows
 
-Users with a modeler role can use replication flows to load data in local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(specific folder in file storage\) of a target local table \(File\). To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
+Users with a modeler role can use replication flows to load data in local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(a specific folder in file storage\) of a target local table \(file\). To process data updates from this inbound buffer to the local table \(file\), and therefore make the data visible, a merge task must run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(see [Monitoring Local Tables (File)](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
 
 
 
docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-data-in-the-data-builder-1f15a29.md

Lines changed: 14 additions & 0 deletions
@@ -29,6 +29,20 @@ Space administrators and integrators prepare connections and other sources to al
 
 Many connections \(including most connections to SAP systems\) support importing remote tables to federate or replicate data \(see [Integrating Data via Connections](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/eb85e157ab654152bd68a8714036e463.html "Users with a space administrator or integrator role can create connections to SAP and non-SAP source systems, including cloud and on-premise systems and partner tools, and to target systems for outbound replication flows. Users with modeler roles can import data via connections for preparation and modeling in SAP Datasphere.") :arrow_upper_right:\).
 
+SAP Datasphere uses two types of adapters to connect to remote tables:
+
+- SAP HANA smart data integration \(used in connections with *Data Provisioning* option = *Data Provisioning Agent*\).
+
+- SAP HANA smart data access \(used in connections with no *Data Provisioning* option, or with *Data Provisioning* option = *Cloud Connector* or *Direct*\).
+
+> ### Note:
+> If your source data comes from an SAP HANA on-premise system, select the adapter that matches your use case:
+>
+> - You want to access the data remotely: SAP HANA smart data access \(Data Provisioning Option: Direct\) is the recommended adapter for reading the data. It allows a higher degree of query pushdown to the remote database, leading to better response times and lower resource consumption.
+> - You want to replicate the data into SAP Datasphere: The preferred option for this is to use replication flows, see [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md). If you require replication for remote tables, SAP HANA smart data integration \(Data Provisioning Option: Data Provisioning Agent\) is the recommended adapter for pushing the data. It offers more options when loading the data, such as applying filter conditions or data partitioning.
+>
+> For more information on these adapters, see [Connecting SAP HANA Cloud, SAP HANA Database to Remote Data Sources](https://help.sap.com/docs/HANA_CLOUD/db19c7071e5f4101837e23f06e576495/afa3769a2ecb407695908cfb4e3a9463.html).
+
 You can import remote tables to make the data available in your space from the *Data Builder* start page, in an entity-relationship model, or directly as a source in a view.
 
 - To get started: In the side navigation area, click <span class="FPA-icons-V3"></span> \(*Data Builder*\), select a space if necessary, and click *Import* \> *Import Remote Tables*. See [Import Remote Tables](import-remote-tables-fd04efb.md).

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/cloud-storage-provider-targets-43d93a2.md

Lines changed: 5 additions & 0 deletions
@@ -185,6 +185,11 @@ The `.sap.partfile.metadata` objects include metadata information for the replic
 
 The replication flow creates multiple files \(<code>part-*.<i class="varname">&lt;extension&gt;</i></code>\) during initial and delta loading. The number and size of these files depend on the source table size and structure, as well as the change frequency \(during delta loading\).
 
+> ### Note:
+> Parquet file names are generated using this logic:
+>
+> The name pattern is part-<Replication Flow Task ID \(UUID, SAP internal\)\>-<sequence number of the task \(01 to 60, SAP internal\)\>.parquet. This is a random UUID+NN that is unique to the whole replication task process, to avoid accidentally overwriting an existing file.
+
 Each file contains the source columns as defined in the mapping for the replication object in the replication flow. The system appends the following columns:
 
 - *\_\_operation\_type*: Identifies the type of target row:

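The naming logic described in the note above can be sketched as follows. This is an illustration only: `parquet_part_name` is a hypothetical helper, since the real names are generated internally by SAP and neither the UUID nor the task number is user-controllable.

```python
import uuid

def parquet_part_name(task_id: uuid.UUID, task_no: int) -> str:
    """Hypothetical helper mirroring the documented pattern:
    part-<replication flow task ID (UUID)>-<task number 01..60>.parquet
    """
    if not 1 <= task_no <= 60:
        raise ValueError("the task number is documented as ranging from 01 to 60")
    # Zero-pad the task number to two digits, as in the observed file names.
    return f"part-{task_id}-{task_no:02d}.parquet"

# Fixed UUID for a reproducible example:
name = parquet_part_name(uuid.UUID("12345678-1234-5678-1234-567812345678"), 7)
print(name)  # part-12345678-1234-5678-1234-567812345678-07.parquet
```

Because the UUID+NN pair is unique per replication task process, concurrently running tasks cannot overwrite each other's part files.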
docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md

Lines changed: 17 additions & 2 deletions
@@ -341,8 +341,9 @@ This procedure explains how to create an empty table by defining its columns. Yo
 
 Delete records:
 
-- Standard table - Delete all records
-- Delta capture table- Delete all records, Delete all records \(mark as "deleted"\) or Delete all records marked for deletion which are older than the specified number of days.
+- Standard table: *Delete All Records*
+- Delta capture table: *Delete All Records*, *Delete Records Marked as "Deleted"*, or *Delete all records marked for deletion which are older than the specified number of days*.
+- Local table \(file\): *Delete All Records \(Mark as Deleted\)* or *Delete previous versions \(Vacuum\), which are older than the specified number of days*
 
 See [Load or Delete Local Table Data](load-or-delete-local-table-data-870401f.md).
 
@@ -379,6 +380,20 @@ This procedure explains how to create an empty table by defining its columns. Yo
 <tr>
 <td valign="top">
 
+Versions
+
+</td>
+<td valign="top">
+
+Open the *Version History* dialog for the object.
+
+See [Reviewing and Restoring Object Versions](../reviewing-and-restoring-object-versions-4f717cc.md).
+
+</td>
+</tr>
+<tr>
+<td valign="top">
+
 Details
 
 </td>

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-file-d21881b.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ SAP HANA Cloud, data lake allows SAP Datasphere to store and manage mass-data ef
 As a local table \(file\) is capturing delta changes via flows, it creates different entities in the repository after it is deployed:
 
 - An active records entity for accessing the delta capture entity through a virtual table. It excludes the delta capture columns and deleted records, and keeps only the active records.
-- A delta capture entity that stores information on changes found in the delta capture table. It serves as target for flows at design time. In addition, every local table \(File\) has a specific folder in file storage \(inbound buffer\) to which a replication flow writes data files to a specific target object. To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See and [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\)
+- A delta capture entity that stores information on changes found in the delta capture table. It serves as a target for flows at design time. In addition, every local table \(file\) has a specific folder in file storage \(inbound buffer\) to which a replication flow writes data files for a specific target object. To process data updates from this inbound buffer to the local table \(file\), and therefore make the data visible, a merge task must run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(see [Monitoring Local Tables (File)](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
 
 
 
docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md

Lines changed: 15 additions & 1 deletion
@@ -326,7 +326,7 @@ For more information about available connection types, sources, and targets, see
 
 8. Click <span class="FPA-icons-V3"></span> \(Run\) to start your replication flow.
 
-   For more information about how to monitor your replication flow run, see [Monitoring Flows](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:.
+   For more information about how to monitor your replication flow run, see [Monitoring Flows](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:.
 
 9. The tools in the editor toolbar help you work with your object throughout its lifecycle:
 
@@ -429,6 +429,20 @@ For more information about available connection types, sources, and targets, see
 <tr>
 <td valign="top">
 
+Versions
+
+</td>
+<td valign="top">
+
+Open the *Version History* dialog for the object.
+
+See [Reviewing and Restoring Object Versions](../reviewing-and-restoring-object-versions-4f717cc.md).
+
+</td>
+</tr>
+<tr>
+<td valign="top">
+
 Details
 
 </td>
