Commit ec0d5f1 ("Uncomment")
1 parent 7c63884

10 files changed: 34 additions & 36 deletions

articles/iot-operations/connect-to-cloud/concept-dataflow-enrich.md

Lines changed: 2 additions & 2 deletions

@@ -15,8 +15,8 @@ ms.service: azure-iot-operations
 
 [!INCLUDE [kubernetes-management-preview-note](../includes/kubernetes-management-preview-note.md)]
 
-<!-- > [!TIP]
-> Data flow graphs support enrichment with expanded capabilities including enrichment in filter and branch transforms. For new projects that use MQTT, Kafka, or OpenTelemetry endpoints, see [Enrich with external data in data flow graphs](howto-dataflow-graphs-enrich.md) (preview). -->
+> [!TIP]
+> Data flow graphs support enrichment with expanded capabilities including enrichment in filter and branch transforms. For new projects that use MQTT, Kafka, or OpenTelemetry endpoints, see [Enrich with external data in data flow graphs](howto-dataflow-graphs-enrich.md) (preview).
 
 You can enrich data by using the *contextualization datasets* function. When incoming records are processed, you can query these datasets based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
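As a sketch of the contextualization-dataset flow this article describes, a built-in transformation can join each incoming record against a dataset and copy a matched value into the output. This is an illustrative sketch only: the dataset key, field names, and overall manifest shape are assumptions, not the article's own example.

```yaml
# Hedged sketch: enrich records with a contextualization dataset (illustrative names).
builtInTransformationSettings:
  datasets:
    - key: assetDataset            # hypothetical dataset key
      inputs:
        - $source.assetId          # field from the incoming record
        - $context.assetId         # field from the stored dataset
      expression: $1 == $2         # join condition relating record to dataset
  map:
    - inputs:
        - $context(assetDataset).location   # value from the matched dataset entry
      output: location                      # supplements the output record
```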

articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md

Lines changed: 4 additions & 4 deletions

@@ -16,8 +16,8 @@ ms.service: azure-iot-operations
 
 [!INCLUDE [kubernetes-management-preview-note](../includes/kubernetes-management-preview-note.md)]
 
-<!-- > [!TIP]
-> Data flow graphs offer an expanded mapping language with additional functions, composable transforms, and features like conditional routing and time-based aggregation. For new projects that use MQTT, Kafka, or OpenTelemetry endpoints, see [Transform data with map in data flow graphs](howto-dataflow-graphs-map.md) (preview). -->
+> [!TIP]
+> Data flow graphs offer an expanded mapping language with additional functions, composable transforms, and features like conditional routing and time-based aggregation. For new projects that use MQTT, Kafka, or OpenTelemetry endpoints, see [Transform data with map in data flow graphs](howto-dataflow-graphs-map.md) (preview).
 
 Use the data flow mapping language to transform data in Azure IoT Operations. The syntax is a simple, yet powerful, way to define mappings that transform data from one format to another. This article provides an overview of the data flow mapping language and key concepts.

@@ -958,7 +958,7 @@ In this example, the last known value of `Temperature` is tracked. If a subseque
 
 ## Related content
 
-<!-- - [Expressions reference](concept-dataflow-graphs-expressions.md) - Operators, functions, data types, and type conversion rules for all data flow transforms.
-- [Filter data in a data flow](howto-dataflow-filter.md) -->
+- [Expressions reference](concept-dataflow-graphs-expressions.md) - Operators, functions, data types, and type conversion rules for all data flow transforms.
+- [Filter data in a data flow](howto-dataflow-filter.md)
 - [Enrich data by using data flows](concept-dataflow-enrich.md)
 - [Create a data flow](howto-create-dataflow.md)
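To make the mapping language concrete, a map entry pairs `inputs` with an `output` path and an optional `expression`, where `$1` refers to the first input. The field names and unit conversion below are hypothetical; this is a hedged sketch, not an example from the changed article.

```yaml
# Hedged sketch of a single map entry (illustrative fields).
builtInTransformationSettings:
  map:
    - inputs:
        - temperature            # assumed source field, in Fahrenheit
      output: temperatureCelsius # destination field
      expression: ($1 - 32) / 1.8   # $1 is the first (and only) input
```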

articles/iot-operations/connect-to-cloud/concept-schema-registry.md

Lines changed: 6 additions & 6 deletions

@@ -25,8 +25,8 @@ Data flows use schemas in three places:
 - **Transformation**: The operations experience uses the source schema as a starting point when you build transformations.
 - **Destination**: Specify an output schema and serialization format when sending data to storage endpoints.
 
-<!-- > [!NOTE]
-> For data flow graphs, schemas are configured differently. See [Use schemas in data flow graphs](concept-dataflow-graphs-schema.md). -->
+> [!NOTE]
+> For data flow graphs, schemas are configured differently. See [Use schemas in data flow graphs](concept-dataflow-graphs-schema.md).
 
 ## Schema formats

@@ -85,15 +85,15 @@ Asset sources have a predefined schema created by the connector for OPC UA. For
 
 :::image type="content" source="./media/concept-schema-registry/upload-schema.png" alt-text="Screenshot that shows uploading a message schema in the operations experience web UI.":::
 
-<!-- To reference a schema in your data flow source configuration, use the `schemaRef` field. For more information, see [Configure a data flow source](howto-configure-dataflow-source.md#specify-source-schema). -->
+To reference a schema in your data flow source configuration, use the `schemaRef` field. For more information, see [Configure a data flow source](howto-configure-dataflow-source.md#specify-source-schema).
 
 ## Configure an output schema
 
 Output schemas control how data is serialized before it reaches the destination. Storage endpoints (ADLS Gen2, Fabric OneLake, Azure Data Explorer, local storage) require a schema and support Parquet and Delta serialization formats. MQTT and Kafka destinations use JSON by default.
 
 In the operations experience, when you select a storage destination, the UI applies any transformations to the source schema and generates a Delta schema automatically. The generated schema is stored in the schema registry and referenced by the data flow.
 
-<!-- For Bicep or Kubernetes deployments, specify the schema and serialization format in the transformation settings. For more information, see [Configure a data flow destination](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema). -->
+For Bicep or Kubernetes deployments, specify the schema and serialization format in the transformation settings. For more information, see [Configure a data flow destination](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema).
 
 ## Upload a schema

@@ -181,7 +181,7 @@ az deployment group create --resource-group <RESOURCE_GROUP> --template-file sch
 
 ## Related content
 
-<!-- - [Use schemas in data flow graphs](concept-dataflow-graphs-schema.md)
+- [Use schemas in data flow graphs](concept-dataflow-graphs-schema.md)
 - [Configure a data flow source](howto-configure-dataflow-source.md)
-- [Configure a data flow destination](howto-configure-dataflow-destination.md) -->
+- [Configure a data flow destination](howto-configure-dataflow-destination.md)
 - [Create a data flow](howto-create-dataflow.md)
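The restored text says to put the schema and serialization format in the transformation settings for Bicep or Kubernetes deployments. A rough sketch of that wiring follows; the endpoint name, container, and the `aio-sr://name:version` reference format are assumptions here, so treat the linked destination article as authoritative.

```yaml
# Hedged sketch: serialize output with a registered schema (illustrative values).
destinationSettings:
  endpointRef: adls-endpoint            # hypothetical storage endpoint name
  dataDestination: telemetry-container  # hypothetical container
builtInTransformationSettings:
  serializationFormat: Delta            # storage endpoints support Parquet and Delta
  schemaRef: aio-sr://output-delta:1    # schema name and version in the registry (assumed format)
```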

articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md

Lines changed: 2 additions & 2 deletions

@@ -32,10 +32,10 @@ Use the following table to choose the endpoint type to configure:
 > [!IMPORTANT]
 > **Data flow graphs limitation**: [Data flow graphs (WASM)](howto-dataflow-graph-wasm.md) currently only support MQTT, Kafka, and OpenTelemetry endpoints. OpenTelemetry endpoints can only be used as destinations in data flow graphs. Other endpoint types are not supported for data flow graphs. For more information, see [Known issues](../troubleshoot/known-issues.md#data-flow-graphs-only-support-specific-endpoint-types).
 
-<!-- > [!IMPORTANT]
+> [!IMPORTANT]
 > Storage endpoints require a [schema for serialization](./concept-schema-registry.md). To use data flow with Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage, you must [specify a schema reference](./howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema).
 >
-> To generate the schema from a sample data file, use the [Schema Gen Helper](https://azure-samples.github.io/explore-iot-operations/schema-gen-helper/). -->
+> To generate the schema from a sample data file, use the [Schema Gen Helper](https://azure-samples.github.io/explore-iot-operations/schema-gen-helper/).
 
 ## Data flows must use local MQTT broker endpoint

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md

Lines changed: 2 additions & 2 deletions

@@ -1004,8 +1004,8 @@ kubectl create configmap client-ca-configmap --from-file root_ca.crt -n azure-io
 
 The consumer group ID is used to identify the consumer group that the data flow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker.
 
-<!-- > [!IMPORTANT]
-> When the Kafka endpoint is used as [source](howto-configure-dataflow-source.md), the consumer group ID is required. Otherwise, the data flow can't read messages from the Kafka topic, and you get an error "Kafka type source endpoints must have a consumerGroupId defined". -->
+> [!IMPORTANT]
+> When the Kafka endpoint is used as [source](howto-configure-dataflow-source.md), the consumer group ID is required. Otherwise, the data flow can't read messages from the Kafka topic, and you get an error "Kafka type source endpoints must have a consumerGroupId defined".
 
 # [Operations experience](#tab/portal)
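Restating the restored requirement as a manifest sketch: a Kafka endpoint that serves as a data flow source must carry a consumer group ID. Only the `consumerGroupId` field name is confirmed by the quoted error message; the API version, kind, and surrounding fields below are illustrative assumptions.

```yaml
# Hedged sketch of a Kafka data flow endpoint (field layout assumed).
apiVersion: connectivity.iotoperations.azure.com/v1   # assumed API version
kind: DataflowEndpoint
metadata:
  name: kafka-source
  namespace: azure-iot-operations
spec:
  endpointType: Kafka
  kafkaSettings:
    host: <BOOTSTRAP_SERVER>:9092
    consumerGroupId: aio-dataflow-1   # required when this endpoint is used as a source
```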

articles/iot-operations/connect-to-cloud/howto-configure-local-storage-endpoint.md

Lines changed: 1 addition & 1 deletion

@@ -181,7 +181,7 @@ To write your data to the cloud, follow the instructions in [Cloud Ingest Edge V
 
 Then, when configuring your local storage data flow endpoint, input the PVC name under `persistentVolumeClaimRef`.
 
-<!-- Finally, when you create the data flow, the [data destination](howto-configure-dataflow-destination.md#configure-the-data-destination-topic-container-or-table) parameter must match the `spec.path` parameter you created for your subvolume during configuration. -->
+Finally, when you create the data flow, the [data destination](howto-configure-dataflow-destination.md#configure-the-data-destination-topic-container-or-table) parameter must match the `spec.path` parameter you created for your subvolume during configuration.
 
 ## Next steps
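The relationship restored above can be sketched in two fragments: the endpoint references the PVC, and the data flow's destination path equals the subvolume's `spec.path`. The names and the exact fragment layout here are hypothetical; only `persistentVolumeClaimRef` and `spec.path` come from the text.

```yaml
# Hedged sketch: local storage endpoint fragment (illustrative names).
localStorageSettings:
  persistentVolumeClaimRef: edge-volume-pvc   # PVC created for the Edge Volume

# Hedged sketch: matching data flow destination fragment.
destinationSettings:
  endpointRef: local-storage
  dataDestination: /telemetry   # must match the subvolume's spec.path
```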

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 7 additions & 9 deletions

@@ -20,7 +20,6 @@ ms.custom:
 
 A data flow is the path that data takes from the source to the destination with optional transformations. You can configure the data flow by creating a *Data flow* custom resource or using the operations experience web UI. A data flow is made up of three parts: the **source**, the **transformation**, and the **destination**.
 
-<!--
 ```mermaid
 flowchart LR
 subgraph Source

@@ -37,7 +36,6 @@ flowchart LR
 Source - -> BuiltInTransformation
 BuiltInTransformation - -> Destination
 ```
--->
 
 :::image type="content" source="media/howto-create-dataflow/dataflow.svg" alt-text="Diagram of a data flow showing flow from source to transform then destination.":::

@@ -229,13 +227,13 @@ Review the following sections to learn how to configure the operation types of t
 
 Configure the source endpoint and data sources (topics) for the data flow. You can use the default MQTT broker, an asset, or a custom MQTT or Kafka endpoint as the source.
 
-<!-- For complete configuration details, including MQTT topic wildcards, shared subscriptions, Kafka topics, and source schema, see [Configure a data flow source](howto-configure-dataflow-source.md). -->
+For complete configuration details, including MQTT topic wildcards, shared subscriptions, Kafka topics, and source schema, see [Configure a data flow source](howto-configure-dataflow-source.md).
 
 If you don't use the default endpoint as the source, you must use it as the [destination](#destination). For more information about using the local MQTT broker endpoint, see [Data flows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#data-flows-must-use-local-mqtt-broker-endpoint).
 
 ## Request disk persistence
 
-<!-- Disk persistence keeps data flow processing state across restarts. For configuration details, see [Configure disk persistence](howto-configure-disk-persistence.md). -->
+Disk persistence keeps data flow processing state across restarts. For configuration details, see [Configure disk persistence](howto-configure-disk-persistence.md).
 
 ## Transformation

@@ -388,7 +386,7 @@ For more information about condition syntax, see [Enrich data by using data flow
 
 Use the filter stage to drop messages that don't meet a condition. You can define multiple filter rules with input fields and boolean expressions.
 
-<!-- For complete configuration details and examples, see [Filter data in a data flow](howto-dataflow-filter.md). -->
+For complete configuration details and examples, see [Filter data in a data flow](howto-dataflow-filter.md).
 
 ### Map: Move data from one field to another

@@ -693,18 +691,18 @@ To learn more, see [Map data by using data flows](concept-dataflow-mapping.md).
 
 ### Serialize data according to a schema
 
-<!-- If you want to serialize the data before sending it to the destination, specify a schema and serialization format. For details, see [Serialize the output with a schema](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema). -->
+If you want to serialize the data before sending it to the destination, specify a schema and serialization format. For details, see [Serialize the output with a schema](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema).
 
 ## Destination
 
 Configure the destination endpoint and data destination (topic, container, or table) for the data flow. You can use any supported endpoint type as the destination, including MQTT, Kafka, Azure Data Lake Storage, Microsoft Fabric, Azure Data Explorer, and local storage.
 
-<!-- For complete configuration details, including the data destination table, dynamic destination topics, and output serialization, see [Configure a data flow destination](howto-configure-dataflow-destination.md). -->
+For complete configuration details, including the data destination table, dynamic destination topics, and output serialization, see [Configure a data flow destination](howto-configure-dataflow-destination.md).
 
 To send data to a destination other than the local MQTT broker, create a data flow endpoint. To learn how, see [Configure data flow endpoints](howto-configure-dataflow-endpoint.md).
 
-<!-- > [!IMPORTANT]
-> Storage endpoints require a [schema for serialization](./concept-schema-registry.md). To use data flow with Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage, you must [specify a schema reference](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema). -->
+> [!IMPORTANT]
+> Storage endpoints require a [schema for serialization](./concept-schema-registry.md). To use data flow with Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage, you must [specify a schema reference](howto-configure-dataflow-destination.md#serialize-the-output-with-a-schema).
 
 ## Example
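Putting the three parts together, a *Data flow* custom resource lists source, transformation, and destination operations. This is a hedged sketch of that structure: the API version, endpoint names, topics, and filter fields are all assumptions for illustration, not values from the changed article.

```yaml
# Hedged sketch of a Data flow resource with all three parts (illustrative values).
apiVersion: connectivity.iotoperations.azure.com/v1   # assumed API version
kind: Dataflow
metadata:
  name: my-dataflow
  namespace: azure-iot-operations
spec:
  profileRef: default
  operations:
    - operationType: Source
      sourceSettings:
        endpointRef: default              # local MQTT broker endpoint
        dataSources:
          - thermostats/+/telemetry       # hypothetical MQTT topic filter
    - operationType: BuiltInTransformation
      builtInTransformationSettings:
        filter:
          - inputs:
              - temperature               # hypothetical input field
            expression: $1 > 20           # drop messages that don't meet the condition
    - operationType: Destination
      destinationSettings:
        endpointRef: kafka-endpoint       # hypothetical Kafka endpoint
        dataDestination: factory-telemetry
```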

articles/iot-operations/connect-to-cloud/howto-dataflow-graph-wasm.md

Lines changed: 4 additions & 4 deletions

@@ -15,10 +15,10 @@ ai-usage: ai-assisted
 
 [!INCLUDE [kubernetes-management-preview-note](../includes/kubernetes-management-preview-note.md)]
 
-<!-- Azure IoT Operations [data flow graphs](concept-dataflow-graphs.md) include built-in transforms for common processing tasks like mapping, filtering, and aggregation. When you need custom logic beyond what the built-in transforms provide, you can deploy WebAssembly (WASM) modules as custom transforms in your data flow graph pipelines.
+Azure IoT Operations [data flow graphs](concept-dataflow-graphs.md) include built-in transforms for common processing tasks like mapping, filtering, and aggregation. When you need custom logic beyond what the built-in transforms provide, you can deploy WebAssembly (WASM) modules as custom transforms in your data flow graph pipelines.
 
 > [!TIP]
-> For most data processing scenarios, start with the [built-in transforms](concept-dataflow-graphs.md#available-transforms). Use WASM transforms when you need custom business logic, specialized algorithms, or processing that the built-in options don't cover. -->
+> For most data processing scenarios, start with the [built-in transforms](concept-dataflow-graphs.md#available-transforms). Use WASM transforms when you need custom business logic, specialized algorithms, or processing that the built-in options don't cover.
 
 > [!TIP]
 > Want to run AI in-band? See [Run ONNX inference in WebAssembly data flow graphs](../develop-edge-apps/howto-wasm-onnx-inference.md) to package and execute small ONNX models inside your WASM operators.

@@ -33,8 +33,8 @@ ai-usage: ai-assisted
 - **Quick start with public registry**: Create a registry endpoint pointing to `ghcr.io/azure-samples/explore-iot-operations` with anonymous authentication. For instructions, see [Use prebuilt modules from a public registry](../develop-edge-apps/howto-deploy-wasm-graph-definitions.md#use-prebuilt-modules-from-a-public-registry).
 - **Private registry**: Set up your own container registry and push the sample modules by following guidance in [Deploy WebAssembly (WASM) modules and graph definitions](../develop-edge-apps/howto-deploy-wasm-graph-definitions.md).
 
-<!-- > [!NOTE]
-> **Data flows vs. data flow graphs**: A *data flow* is a pipeline that moves and transforms data between endpoints by using built-in transformations. A *data flow graph* extends data flows with composable processing steps. Azure IoT Operations provides [built-in data flow graphs](concept-dataflow-graphs.md) for common operations like mapping, filtering, branching, and aggregation. For custom processing logic, you can implement WebAssembly modules as described in this article. Data flow graphs use YAML graph definitions that specify how operators connect. The data flow graph resource wraps this definition and maps its abstract source and sink operations to concrete endpoints, like MQTT topics and Kafka topics. -->
+> [!NOTE]
+> **Data flows vs. data flow graphs**: A *data flow* is a pipeline that moves and transforms data between endpoints by using built-in transformations. A *data flow graph* extends data flows with composable processing steps. Azure IoT Operations provides [built-in data flow graphs](concept-dataflow-graphs.md) for common operations like mapping, filtering, branching, and aggregation. For custom processing logic, you can implement WebAssembly modules as described in this article. Data flow graphs use YAML graph definitions that specify how operators connect. The data flow graph resource wraps this definition and maps its abstract source and sink operations to concrete endpoints, like MQTT topics and Kafka topics.
 
 ## Overview

articles/iot-operations/connect-to-cloud/overview-dataflow.md

Lines changed: 4 additions & 4 deletions

@@ -45,8 +45,8 @@ You can apply transformations to data during the processing stage to perform var
 - **Standardizing values**: Scale property values to a user-defined range.
 - **Contextualizing data**: Add reference data to messages for enrichment and driving insights.
 
-<!-- > [!TIP]
-> For richer processing capabilities including conditional routing, time-based aggregation, and composable transform pipelines, see [Data flow graphs](concept-dataflow-graphs.md) (preview). -->
+> [!TIP]
+> For richer processing capabilities including conditional routing, time-based aggregation, and composable transform pipelines, see [Data flow graphs](concept-dataflow-graphs.md) (preview).
 
 ### Configuration and deployment

@@ -77,9 +77,9 @@ The local MQTT broker message queue is stored in memory by default. You can conf
 
 ## Related content
 
-<!-- - [Data flows vs. data flow graphs](overview-dataflow-comparison.md)
+- [Data flows vs. data flow graphs](overview-dataflow-comparison.md)
 - [Data flow graphs overview](concept-dataflow-graphs.md)
 - [Create a data flow](howto-create-dataflow.md)
-- [Configure a data flow source](howto-configure-dataflow-source.md) -->
+- [Configure a data flow source](howto-configure-dataflow-source.md)
 - [Create a data flow endpoint](howto-configure-dataflow-endpoint.md)
 - [Tutorial: Send messages from assets to the cloud using a data flow](../end-to-end-tutorials/tutorial-upload-messages-to-cloud.md)
