Commit 358cc7f

Merge pull request #312716 from dominicbetts/release-aio-2603-mqtt-connector

AIO 2603: MQTT connector GA

2 parents b8e6211 + 608390b

13 files changed: 176 additions & 60 deletions

articles/iot-operations/discover-manage-assets/howto-connect-kafka.md

Lines changed: 10 additions & 10 deletions
@@ -1,30 +1,30 @@
 ---
 title: Connect to a Kafka source
-description: Use a data flow and the connector for MQTT (preview) to ingest data from a Kafka-compatible source such as Azure Event Hubs, discover topics as assets, and route data through the MQTT broker in Azure IoT Operations.
+description: Use a data flow and the connector for MQTT to ingest data from a Kafka-compatible source such as Azure Event Hubs, discover topics as assets, and route data through the MQTT broker in Azure IoT Operations.
 author: dominicbetts
 ms.author: dobett
 ms.service: azure-iot-operations
 ms.topic: how-to
-ms.date: 02/23/2026
+ms.date: 03/06/2026
 
 #CustomerIntent: As an industrial edge IT or operations user, I want to ingest data from an Azure Event Hubs namespace into Azure IoT Operations using the Kafka protocol so that I can manage event hubs as assets and route the data through the MQTT broker for processing.
 ---
 
 # Connect to a Kafka source
 
-Many messaging services expose a Kafka-compatible endpoint, including Azure Event Hubs, Confluent Cloud, and self-hosted Apache Kafka clusters. Azure IoT Operations doesn't include a dedicated southbound Kafka connector, but you can ingest data from any Kafka-compatible source by using a data flow and the connector for MQTT (preview). For simplicity, this article uses an Azure Event Hubs namespace as the Kafka source.
+Many messaging services expose a Kafka-compatible endpoint, including Azure Event Hubs, Confluent Cloud, and self-hosted Apache Kafka clusters. Azure IoT Operations doesn't include a dedicated southbound Kafka connector, but you can ingest data from any Kafka-compatible source by using a data flow and the connector for MQTT. For simplicity, this article uses an Azure Event Hubs namespace as the Kafka source.
 
 To connect to a Kafka source, you combine:
 
 - A **data flow** with a Kafka inbound endpoint that connects to the Event Hubs namespace and routes messages to the internal MQTT broker. You can configure the data flow to use the Event Hubs topic name as the destination topic in the MQTT broker. This mapping lets you route messages from multiple event hubs to corresponding MQTT broker topics without a separate configuration for each one.
 
-- The **connector for MQTT (preview)** configured with a device that uses discovery to find the MQTT broker topics that correspond to your event hubs. The connector creates assets for each discovered topic. You can then configure each asset to route data to your own named topics in the MQTT broker, and apply any individual data processing you need as the data flows out of the system.
+- The **connector for MQTT** configured with a device that uses discovery to find the MQTT broker topics that correspond to your event hubs. The connector creates assets for each discovered topic. You can then configure each asset to route data to your own named topics in the MQTT broker, and apply any individual data processing you need as the data flows out of the system.
 
 The following diagram illustrates this architecture:
 
 :::image type="content" source="media/howto-connect-kafka/kafka.svg" alt-text="Diagram that shows the architecture of the solution." lightbox="media/howto-connect-kafka/kafka.png":::
 
-Messages enter from the Kafka source, such as an Azure Event Hubs namespace, and are ingested into Azure IoT Operations through a data flow with a Kafka source endpoint. The data flow routes messages to topics in the internal MQTT broker. The connector for MQTT (preview) detects the topics in the MQTT broker and lets you create assets based on the topic names. Each asset can be configured to route data to specific topics in the MQTT broker. You can then perform any routing and custom processing to the messages.
+Messages enter from the Kafka source, such as an Azure Event Hubs namespace, and are ingested into Azure IoT Operations through a data flow with a Kafka source endpoint. The data flow routes messages to topics in the internal MQTT broker. The connector for MQTT detects the topics in the MQTT broker and lets you create assets based on the topic names. Each asset can be configured to route data to specific topics in the MQTT broker. You can then perform any routing and custom processing to the messages.
 
 
 > [!TIP]
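The dynamic topic mapping the data flow bullet describes (reusing the Event Hubs topic name as the MQTT destination topic) can be sketched in Python. The `factory/` prefix and the sample event hub name come from later steps in the article; the function itself is illustrative, not part of Azure IoT Operations:

```python
def kafka_to_mqtt_topic(event_hub_name: str, prefix: str = "factory") -> str:
    """Map an Event Hubs (Kafka) topic name to its MQTT broker destination topic.

    Mirrors the data flow's dynamic destination: one rule covers every event
    hub, so no per-topic configuration is needed.
    """
    return f"{prefix}/{event_hub_name}"

# One mapping rule handles any number of event hubs.
assert kafka_to_mqtt_topic("warehouse-w1-machine-m1") == "factory/warehouse-w1-machine-m1"
```

Because the destination is derived from the source topic, adding another event hub to the namespace requires no change to the data flow configuration.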
@@ -92,9 +92,9 @@ Create a data flow that reads from your Event Hubs namespace using the Kafka pro
 
 Once the data flow is running, messages from your Event Hubs topics appear in the MQTT broker under the `factory/` topic prefix.
 
-## Step 3: Configure the connector for MQTT (preview) device
+## Step 3: Configure the connector for MQTT device
 
-Set up a device in the connector for MQTT (preview) that subscribes to the MQTT broker topics where the Kafka data lands. Configure the device to use topic discovery so that each Kafka topic appears as an asset.
+Set up a device in the connector for MQTT that subscribes to the MQTT broker topics where the Kafka data lands. Configure the device to use topic discovery so that each Kafka topic appears as an asset.
 
 1. In the operations experience web UI, select **Devices** in the left navigation pane. Then select **Create new**.
 
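Topic discovery depends on the device's MQTT subscription matching the topics the data flow publishes to. As an illustration only (not connector code), standard MQTT wildcard matching against a `factory/#` filter works like this:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Simplified MQTT topic-filter matching: '+' matches exactly one level,
    '#' matches the remainder of the topic. ($-prefixed system topics and
    other edge cases from the MQTT spec are ignored in this sketch.)"""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

# A single wildcard subscription covers every topic the data flow creates.
assert topic_matches("factory/#", "factory/warehouse-w1-machine-m1")
assert not topic_matches("factory/+", "factory/w1/m1")
```

This is why one device configuration is enough: every Kafka-derived topic lands under the same prefix, so the same subscription discovers them all.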
@@ -120,7 +120,7 @@ Set up a device in the connector for MQTT (preview) that subscribes to the MQTT
 
 ## Step 4: Send test messages using Data Explorer
 
-Before you can discover assets, messages must arrive at the MQTT broker so the connector for MQTT (preview) can detect the topics. You can use the **Data Explorer** feature built into each event hub in the Azure portal to send test messages.
+Before you can discover assets, messages must arrive at the MQTT broker so the connector for MQTT can detect the topics. You can use the **Data Explorer** feature built into each event hub in the Azure portal to send test messages.
 
 1. In the [Azure portal](https://portal.azure.com), navigate to your Event Hubs namespace, then select the event hub you want to test, such as `warehouse-w1-machine-m1`.
 
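The diff doesn't include a sample message body. The snippet below builds a purely hypothetical telemetry payload of the kind you might paste into Data Explorer; every field name is illustrative, shown only to make the test step concrete:

```python
import json

# Hypothetical telemetry payload for a test message; the article's diff
# doesn't show a message body, so these fields are illustrative only.
test_message = {
    "machine": "m1",
    "temperature": 21.5,
}

# Data Explorer accepts a JSON string as the event body.
payload = json.dumps(test_message)
assert json.loads(payload)["machine"] == "m1"
```

Any valid payload works for discovery purposes: the connector detects the topic path, not the message contents.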
@@ -138,7 +138,7 @@ Before you can discover assets, messages must arrive at the MQTT broker so the c
 
 ## Step 5: Discover and create assets
 
-When the data flow forwards Event Hubs messages to the MQTT broker, the connector for MQTT (preview) detects the topic paths and creates a _discovered asset_ for each one. You can then import each discovered asset to manage it individually.
+When the data flow forwards Event Hubs messages to the MQTT broker, the connector for MQTT detects the topic paths and creates a _discovered asset_ for each one. You can then import each discovered asset to manage it individually.
 
 1. In the operations experience web UI, select **Discovery** in the left navigation pane. Discovered assets from the Event Hubs topics appear in the list with the asset name derived from the topic path:
 
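The diff doesn't show how the connector derives an asset name from a topic path, so the following is a hypothetical sketch of one plausible derivation (lowercasing and replacing `/` with `-`); the connector's real naming rule may differ:

```python
def asset_name_from_topic(topic: str) -> str:
    """Hypothetical sketch: flatten an MQTT topic path into a single
    hyphenated name. This only illustrates the idea of deriving the asset
    name from the topic path; it is not the connector's actual rule."""
    return topic.lower().replace("/", "-")

assert asset_name_from_topic("factory/warehouse-w1-machine-m1") == "factory-warehouse-w1-machine-m1"
```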
@@ -172,4 +172,4 @@ To verify that the asset is routing messages correctly, you can use an MQTT clie
 
 - [Configure Azure Event Hubs and Kafka data flow endpoints](../connect-to-cloud/howto-configure-kafka-endpoint.md)
 - [Use Azure Event Hubs from Apache Kafka applications](/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview)
-- [Configure the connector for MQTT (preview)](howto-use-mqtt-connector.md)
+- [Configure the connector for MQTT](howto-use-mqtt-connector.md)

articles/iot-operations/discover-manage-assets/howto-use-http-connector.md

Lines changed: 1 addition & 19 deletions
@@ -275,22 +275,4 @@ resource asset 'Microsoft.DeviceRegistry/namespaces/assets@2025-10-01' = {
 
 ## Transform incoming data
 
-To transform the incoming data by using a WASM module and graph, complete the following steps:
-
-1. Develop a WASM module to perform the custom transformation. For more information, see [Develop WebAssembly (WASM) modules and graph definitions](../develop-edge-apps/howto-develop-wasm-modules.md) or [Build WASM modules for data flows in VS Code](../develop-edge-apps/howto-build-wasm-modules-vscode.md).
-
-1. Configure your transformation graph. For more information, see [Configure WebAssembly (WASM) graph definitions](../develop-edge-apps/howto-configure-wasm-graph-definitions.md).
-
-1. Deploy both the module and graph to your container registry. For more information, see [Deploy WebAssembly (WASM) modules and graph definitions](../develop-edge-apps/howto-deploy-wasm-graph-definitions.md).
-
-1. Set up authentication and connection details so Azure IoT Operations can access the container registry.
-
-1. Configure your asset's dataset with the URL of the deployed WASM graph in the **Transform** field:
-
-    :::image type="content" source="media/howto-use-http-connector/configure-transform.png" alt-text="Screenshot that shows how to add a WASM transform to a dataset." lightbox="media/howto-use-http-connector/configure-transform.png":::
-
-A data transformation in the HTTP/REST connector only requires a [single map operator](../develop-edge-apps/howto-develop-wasm-modules.md#quickstart-build-deploy-and-verify-a-wasm-module), but WASM graphs are fully supported with the following restrictions:
-
-- The graph must have a single `source` node and a single `sink` node.
-- The graph must consume and emit the `DataModel::Message` datatype.
-- The graph must be stateless. Currently, this restriction means that accumulate operators aren't supported.
+[!INCLUDE [connector-transform-incoming-data](../includes/connector-transform-incoming-data.md)]
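The graph restrictions listed in the removed text (single `source` node, single `sink` node, stateless operators) amount to a simple structural check. A sketch, assuming a graph represented as `(node_name, node_type)` pairs; this representation is illustrative, not the product's schema:

```python
def validate_graph(nodes: list[tuple[str, str]]) -> list[str]:
    """Check the documented restrictions on a WASM transformation graph:
    exactly one 'source' node, exactly one 'sink' node, and no stateful
    (accumulate) operators. Returns a list of violations (empty = valid)."""
    errors = []
    types = [node_type for _, node_type in nodes]
    if types.count("source") != 1:
        errors.append("graph must have a single source node")
    if types.count("sink") != 1:
        errors.append("graph must have a single sink node")
    if "accumulate" in types:
        errors.append("graph must be stateless: accumulate operators aren't supported")
    return errors

# A single map operator between source and sink satisfies all the rules.
assert validate_graph([("in", "source"), ("xform", "map"), ("out", "sink")]) == []
```

The `DataModel::Message` datatype restriction is a type-level constraint on the nodes themselves and isn't captured by this structural sketch.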
