---
title: Filter and route data in data flow graphs
description: Learn how to filter, branch, and merge messages using data flow graphs in Azure IoT Operations.
author: sethmanheim
ms.author: sethm
ms.service: azure-iot-operations
ms.subservice: azure-data-flows
ms.topic: how-to
ms.date: 04/02/2026
ai-usage: ai-assisted
---

# Filter and route data in data flow graphs
Data flow graphs provide two ways to control which messages flow through your pipeline: filter transforms drop unwanted messages, and branch transforms route each message down one of two paths based on a condition. After branching, a concatenate transform merges the paths back together.
For an overview of data flow graphs and how transforms compose in a pipeline, see Data flow graphs overview.
## Prerequisites

- An Azure IoT Operations instance deployed on an Arc-enabled Kubernetes cluster. For more information, see Deploy Azure IoT Operations.
- A default registry endpoint named `default` that points to `mcr.microsoft.com` is automatically created during deployment. The built-in transforms use this endpoint.
## Filter transform

A filter transform evaluates each incoming message against one or more rules and decides whether the message continues through the pipeline or gets dropped.
Each filter rule has these properties:
| Property | Required | Description |
|---|---|---|
| `inputs` | Yes | List of field paths to read from the incoming message. |
| `expression` | Yes | Formula applied to the input values. Must return a boolean. |
| `description` | No | Human-readable label used in error messages. |
Inputs are assigned positional variables based on their order: the first input is `$1`, the second is `$2`, and so on.
When you define multiple rules, they use OR logic: if any rule evaluates to true, the message is dropped. The engine short-circuits once a rule matches.
Key constraints:

- **Expression is required.** Every filter rule must include an `expression`.
- **No wildcard inputs.** Each input must reference a specific field path.
- **Missing fields cause errors.** If a field referenced in `inputs` doesn't exist, the filter returns an error rather than silently passing the message.
- **Non-boolean results cause errors.** If an expression returns a non-boolean value (such as a string or number), the filter returns an error.
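The rule semantics above — positional input binding, OR logic across rules with short-circuiting, and errors for missing fields or non-boolean results — can be modeled in a short Python sketch. This is an illustrative model of the documented behavior, not the data flow engine's implementation; expressions are represented as plain Python callables.

```python
def apply_filter(message, rules):
    """Return True if the message passes, False if a rule drops it.

    Raises ValueError for a missing input field or a non-boolean
    expression result, matching the documented error behavior.
    """
    for rule in rules:
        # Bind positional variables: first input -> $1, second -> $2, ...
        values = []
        for path in rule["inputs"]:
            if path not in message:
                raise ValueError(f"missing field: {path}")
            values.append(message[path])
        result = rule["expression"](*values)
        if not isinstance(result, bool):
            raise ValueError("expression must return a boolean")
        if result:          # OR logic: any matching rule drops the message
            return False    # short-circuit: later rules aren't evaluated
    return True

# The two rules from the multiple-rules example below.
rules = [
    {"inputs": ["temperature"], "expression": lambda t: t > 100},
    {"inputs": ["humidity"], "expression": lambda h: h > 95},
]
```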
To drop messages where the temperature exceeds 100:
In the filter transform configuration, add a rule:
| Setting | Value |
|---|---|
| Input | `temperature` |
| Expression | `$1 > 100` |
```bicep
filter: [
  {
    inputs: [ 'temperature' ]
    expression: '$1 > 100'
  }
]
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
- inputs:
    - temperature # $1
  expression: "$1 > 100"
```

Messages where the temperature is 100 or less pass through. Messages above 100 are dropped.
When you define more than one rule, the filter drops the message if any rule matches:
Add two rules:
| Input | Expression | Description |
|---|---|---|
| `temperature` | `$1 > 100` | Drop high temperature |
| `humidity` | `$1 > 95` | Drop high humidity |
```bicep
filter: [
  {
    inputs: [ 'temperature' ]
    expression: '$1 > 100'
    description: 'Drop high temperature'
  }
  {
    inputs: [ 'humidity' ]
    expression: '$1 > 95'
    description: 'Drop high humidity'
  }
]
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
- inputs:
    - temperature # $1
  expression: "$1 > 100"
  description: "Drop high temperature"
- inputs:
    - humidity # $1
  expression: "$1 > 95"
  description: "Drop high humidity"
```

| Message | temperature rule | humidity rule | Result |
|---|---|---|---|
| `{"temperature": 150, "humidity": 60}` | true | false | Dropped |
| `{"temperature": 80, "humidity": 98}` | false | true | Dropped |
| `{"temperature": 80, "humidity": 60}` | false | false | Passes |
> [!TIP]
> Use multiple inputs in one rule when you need AND logic across fields. Use multiple rules when you need OR logic across independent conditions.
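The AND-versus-OR distinction in the tip can be made concrete with a small sketch: one rule with two inputs drops a message only when both conditions hold, while two single-input rules drop it when either holds. The evaluation function here is an illustrative model, not the engine's implementation.

```python
def drops(message, rules):
    """True if any rule matches -- the message is dropped (OR across rules)."""
    return any(
        rule["expression"](*[message[path] for path in rule["inputs"]])
        for rule in rules
    )

# One rule, two inputs ($1 = temperature, $2 = humidity): AND across fields.
and_rule = [{"inputs": ["temperature", "humidity"],
             "expression": lambda t, h: t > 30 and h < 60}]

# Two rules, one input each: OR across independent conditions.
or_rules = [{"inputs": ["temperature"], "expression": lambda t: t > 30},
            {"inputs": ["humidity"], "expression": lambda h: h < 60}]

# A hot but humid reading: the AND rule keeps it, the OR rules drop it.
hot_humid = {"temperature": 35, "humidity": 80}
```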
Reference multiple fields in a single rule and combine them with logical operators:
Add a rule with inputs `temperature` and `humidity`, and expression `$1 > 30 && $2 < 60`.
```bicep
filter: [
  {
    inputs: [ 'temperature', 'humidity' ]
    expression: '$1 > 30 && $2 < 60'
    description: 'Drop hot and dry readings'
  }
]
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
- inputs:
    - temperature # $1
    - humidity # $2
  expression: "$1 > 30 && $2 < 60"
  description: "Drop hot and dry readings"
```

For the full list of operators and functions, see Expressions reference.
### Validate messages against a schema

You can configure a filter transform to validate incoming messages against a JSON schema before filter rules run. Messages that don't conform to the schema are dropped immediately.

To enable schema validation, set `validateSchema` to `true` in the filter configuration. When enabled, the filter retrieves the schema from the dataflow source's `schemaRef` setting.
Guidelines:
- Use only one validating filter per pipeline.
- Place the validating filter first so that invalid messages are dropped before other processing.
- Filter rules still apply after schema validation passes. If you only need schema validation, leave the filter rules empty.
To learn about configuring schemas, see Understand message schemas.
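The ordering guarantee — validation first, then filter rules — can be sketched as follows. This is a simplified stand-in for JSON Schema validation (it checks only required fields and their types), intended to illustrate the processing order, not the validator itself.

```python
def conforms(message, schema):
    """Minimal schema check: required keys present with the right type."""
    for field, ftype in schema.items():
        if field not in message or not isinstance(message[field], ftype):
            return False
    return True

def process(message, schema, rules):
    """Return 'dropped:schema', 'dropped:rule', or 'passed'."""
    if not conforms(message, schema):   # schema validation runs first
        return "dropped:schema"
    for rule in rules:                  # filter rules apply only after it passes
        if rule["expression"](*[message[p] for p in rule["inputs"]]):
            return "dropped:rule"
    return "passed"

# Hypothetical schema and the temperature rule from earlier examples.
schema = {"temperature": (int, float)}
rules = [{"inputs": ["temperature"], "expression": lambda t: t > 100}]
```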
Filter rules support datasets, which let you compare values against data from an external state store. For details on configuring datasets, see Enrich with external data.
In the filter transform configuration, add one or more rules with inputs and boolean expressions. Optionally enable schema validation and configure datasets for enrichment lookups.
The filter rules JSON is passed as the value for the `rules` key:
```bicep
configuration: [
  {
    key: 'rules'
    value: '{"datasets":[{"key":"device_limits as limits","inputs":["$source.deviceId","$context.deviceId"],"expression":"$1 == $2"}],"filter":[{"inputs":["temperature"],"expression":"$1 > 100","description":"Drop high temperature readings"},{"inputs":["rawValue","$context(limits).maxValue"],"expression":"$1 > $2","description":"Drop readings above device-specific limit"}]}'
  }
]
```

[!INCLUDE kubernetes-debug-only-note]
```json
{
  "datasets": [
    {
      "key": "device_limits as limits",
      "inputs": ["$source.deviceId", "$context.deviceId"],
      "expression": "$1 == $2"
    }
  ],
  "filter": [
    {
      "inputs": ["temperature"],
      "expression": "$1 > 100",
      "description": "Drop high temperature readings"
    },
    {
      "inputs": ["rawValue", "$context(limits).maxValue"],
      "expression": "$1 > $2",
      "description": "Drop readings above device-specific limit"
    }
  ]
}
```

| Key | Required | Description |
|---|---|---|
| `filter` | Yes | Array of filter rules. |
| `datasets` | No | Array of dataset definitions for enrichment lookups. |
| `validateSchema` | No | When `true`, validates messages against a JSON schema before filter rules run. Defaults to `false`. |
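Combining these keys, a `rules` value that enables schema validation alongside a filter rule might look like the following sketch. The filter rule shown is the earlier temperature example; placing `validateSchema` at the top level alongside `filter` follows the key table above.

```json
{
  "validateSchema": true,
  "filter": [
    {
      "inputs": ["temperature"],
      "expression": "$1 > 100",
      "description": "Drop high temperature readings"
    }
  ]
}
```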
## Branch transform

A branch transform evaluates a condition on each incoming message and routes it to one of two output paths: true or false. Unlike a filter (which drops messages), a branch preserves every message and directs it down the appropriate path.
Every message goes to exactly one of the two paths. Nothing is dropped.
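This routing behavior can be sketched in a few lines: every message lands in exactly one of the two paths, and the total count is preserved. As before, this is an illustrative model, not the engine's implementation.

```python
def branch(messages, rule):
    """Split messages into true/false paths based on one boolean condition."""
    paths = {"true": [], "false": []}
    for msg in messages:
        result = rule["expression"](*[msg[p] for p in rule["inputs"]])
        # Nothing is dropped: every message goes to exactly one path.
        paths["true" if result else "false"].append(msg)
    return paths

# The severity condition from the example below.
rule = {"inputs": ["severity"], "expression": lambda s: s > 5}
msgs = [{"severity": 8}, {"severity": 3}, {"severity": 6}]
paths = branch(msgs, rule)
```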
Key constraints:

- **The branch expression must return a boolean.** Non-boolean results cause an error, just as they do in a filter.
- **No wildcard inputs.** Each input must reference a specific field path.
- **Exactly one branch rule.** The `branch` key takes a single object, not an array.
> [!IMPORTANT]
> Branching splits messages into separate processing paths, but all paths must merge back together using a concatenate transform before reaching the destination. Think of branching as a way to apply different transformations to different messages, not as a way to route to multiple endpoints.
To branch messages based on a severity threshold:
In the branch transform configuration, set:
| Setting | Value |
|---|---|
| Input | `severity` |
| Expression | `$1 > 5` |
```bicep
configuration: [
  {
    key: 'rules'
    value: '{"branch":{"inputs":["severity"],"expression":"$1 > 5","description":"Route high-severity messages"}}'
  }
]
```

[!INCLUDE kubernetes-debug-only-note]
```json
{
  "branch": {
    "inputs": ["severity"],
    "expression": "$1 > 5",
    "description": "Route high-severity messages"
  }
}
```

Messages where `severity` is greater than 5 go to the true path. All others go to the false path.
In the pipeline configuration, use the node name followed by `.output.true` or `.output.false` to wire each path to a downstream transform.
In the data flow graph editor, drag connections from the branch transform's true and false outputs to the appropriate downstream transforms.
```bicep
nodeConnections: [
  { from: { name: 'sensors' }, to: { name: 'severity-check' } }
  { from: { name: 'severity-check.output.true' }, to: { name: 'alert-transform' } }
  { from: { name: 'severity-check.output.false' }, to: { name: 'normal-transform' } }
]
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
nodeConnections:
  - from: { name: sensors }
    to: { name: severity-check }
  - from: { name: severity-check.output.true }
    to: { name: alert-transform }
  - from: { name: severity-check.output.false }
    to: { name: normal-transform }
```

## Concatenate transform

All branch paths must converge before reaching a destination. A concatenate transform merges them. It has no configuration and no rules. Messages from all connected inputs pass through unmodified.
Add a concatenate transform to the canvas and connect both branch paths to it, then connect the concatenate to the destination.
```bicep
{
  nodeType: 'Graph'
  name: 'merge'
  graphSettings: {
    registryEndpointRef: 'default'
    artifact: 'azureiotoperations/graph-dataflow-concatenate:1.0.0'
  }
}
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
- nodeType: Graph
  name: merge
  graphSettings:
    registryEndpointRef: default
    artifact: azureiotoperations/graph-dataflow-concatenate:1.0.0
```

## Example: filter, branch, and merge

This end-to-end example filters out bad readings, branches by severity, applies different map transforms to each path, and merges the results.
:::image type="content" source="media/howto-dataflow-graphs-filter-route/filter-branch-pipeline.png" alt-text="Screenshot of the operations experience canvas showing a filter, branch, map, concat, and destination pipeline." lightbox="media/howto-dataflow-graphs-filter-route/filter-branch-pipeline.png":::
To build this pipeline in the Operations experience:
1. Create a data flow graph and add a source that reads from `telemetry/sensors`.
1. Add a filter transform. Configure a rule that drops messages where `temperature > 1000`.
1. Add a branch transform. Configure the condition `severity > 5` to route high-severity messages to the true path.
1. Add a map transform on the true path. Configure rules to rename `deviceId` to `id`, `temperature` to `temp`, and add a field `alert` set to `true`.
1. Add a map transform on the false path. Configure rules to rename `deviceId` to `id` and `temperature` to `temp`.
1. Add a concatenate transform to merge both paths.
1. Add a destination that sends to `telemetry/processed`.
1. Connect the elements: source → filter → branch → (true path: alert map, false path: normal map) → concatenate → destination.
```bicep
resource dataflowGraph 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflowGraphs@2025-10-01' = {
  name: 'alert-routing'
  parent: dataflowProfile
  properties: {
    profileRef: dataflowProfileName
    mode: 'Enabled'
    nodes: [
      {
        nodeType: 'Source'
        name: 'sensors'
        sourceSettings: {
          endpointRef: 'default'
          dataSources: [ 'telemetry/sensors' ]
        }
      }
      {
        nodeType: 'Graph'
        name: 'remove-bad-data'
        graphSettings: {
          registryEndpointRef: 'default'
          artifact: 'azureiotoperations/graph-dataflow-filter:1.0.0'
          configuration: [
            {
              key: 'rules'
              value: '{"filter":[{"inputs":["temperature"],"expression":"$1 > 1000","description":"Drop impossible temperature readings"}]}'
            }
          ]
        }
      }
      {
        nodeType: 'Graph'
        name: 'severity-check'
        graphSettings: {
          registryEndpointRef: 'default'
          artifact: 'azureiotoperations/graph-dataflow-branch:1.0.0'
          configuration: [
            {
              key: 'rules'
              value: '{"branch":{"inputs":["severity"],"expression":"$1 > 5","description":"Route high-severity messages"}}'
            }
          ]
        }
      }
      {
        nodeType: 'Graph'
        name: 'alert-transform'
        graphSettings: {
          registryEndpointRef: 'default'
          artifact: 'azureiotoperations/graph-dataflow-map:1.0.0'
          configuration: [
            {
              key: 'rules'
              value: '{"map":[{"inputs":["deviceId"],"output":"id"},{"inputs":["temperature"],"output":"temp"},{"inputs":[],"output":"alert","expression":"true"}]}'
            }
          ]
        }
      }
      {
        nodeType: 'Graph'
        name: 'normal-transform'
        graphSettings: {
          registryEndpointRef: 'default'
          artifact: 'azureiotoperations/graph-dataflow-map:1.0.0'
          configuration: [
            {
              key: 'rules'
              value: '{"map":[{"inputs":["deviceId"],"output":"id"},{"inputs":["temperature"],"output":"temp"}]}'
            }
          ]
        }
      }
      {
        nodeType: 'Graph'
        name: 'merge'
        graphSettings: {
          registryEndpointRef: 'default'
          artifact: 'azureiotoperations/graph-dataflow-concatenate:1.0.0'
        }
      }
      {
        nodeType: 'Destination'
        name: 'output'
        destinationSettings: {
          endpointRef: 'default'
          dataDestination: 'telemetry/processed'
        }
      }
    ]
    nodeConnections: [
      { from: { name: 'sensors' }, to: { name: 'remove-bad-data' } }
      { from: { name: 'remove-bad-data' }, to: { name: 'severity-check' } }
      { from: { name: 'severity-check.output.true' }, to: { name: 'alert-transform' } }
      { from: { name: 'severity-check.output.false' }, to: { name: 'normal-transform' } }
      { from: { name: 'alert-transform' }, to: { name: 'merge' } }
      { from: { name: 'normal-transform' }, to: { name: 'merge' } }
      { from: { name: 'merge' }, to: { name: 'output' } }
    ]
  }
}
```

[!INCLUDE kubernetes-debug-only-note]
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1
kind: DataflowGraph
metadata:
  name: alert-routing
  namespace: azure-iot-operations
spec:
  profileRef: default
  nodes:
    - nodeType: Source
      name: sensors
      sourceSettings:
        endpointRef: default
        dataSources:
          - telemetry/sensors
    - nodeType: Graph
      name: remove-bad-data
      graphSettings:
        registryEndpointRef: default
        artifact: azureiotoperations/graph-dataflow-filter:1.0.0
        configuration:
          - key: rules
            value: |
              {
                "filter": [
                  {
                    "inputs": ["temperature"],
                    "expression": "$1 > 1000",
                    "description": "Drop impossible temperature readings"
                  }
                ]
              }
    - nodeType: Graph
      name: severity-check
      graphSettings:
        registryEndpointRef: default
        artifact: azureiotoperations/graph-dataflow-branch:1.0.0
        configuration:
          - key: rules
            value: |
              {
                "branch": {
                  "inputs": ["severity"],
                  "expression": "$1 > 5",
                  "description": "Route high-severity messages"
                }
              }
    - nodeType: Graph
      name: alert-transform
      graphSettings:
        registryEndpointRef: default
        artifact: azureiotoperations/graph-dataflow-map:1.0.0
        configuration:
          - key: rules
            value: |
              {
                "map": [
                  { "inputs": ["deviceId"], "output": "id" },
                  { "inputs": ["temperature"], "output": "temp" },
                  { "inputs": [], "output": "alert", "expression": "true" }
                ]
              }
    - nodeType: Graph
      name: normal-transform
      graphSettings:
        registryEndpointRef: default
        artifact: azureiotoperations/graph-dataflow-map:1.0.0
        configuration:
          - key: rules
            value: |
              {
                "map": [
                  { "inputs": ["deviceId"], "output": "id" },
                  { "inputs": ["temperature"], "output": "temp" }
                ]
              }
    - nodeType: Graph
      name: merge
      graphSettings:
        registryEndpointRef: default
        artifact: azureiotoperations/graph-dataflow-concatenate:1.0.0
    - nodeType: Destination
      name: output
      destinationSettings:
        endpointRef: default
        dataDestination: telemetry/processed
  nodeConnections:
    - from: { name: sensors }
      to: { name: remove-bad-data }
    - from: { name: remove-bad-data }
      to: { name: severity-check }
    - from: { name: severity-check.output.true }
      to: { name: alert-transform }
    - from: { name: severity-check.output.false }
      to: { name: normal-transform }
    - from: { name: alert-transform }
      to: { name: merge }
    - from: { name: normal-transform }
      to: { name: merge }
    - from: { name: merge }
      to: { name: output }
```