description: This article provides a list of frequently asked questions (FAQ) for Azure Event Hubs and their answers.
ms.topic: faq
ms.date: 02/09/2026
ms.custom: sfi-ropc-nochange
title: Event Hubs frequently asked questions
summary: |
- question: |
    What configuration changes need to be done for my existing application to talk to Event Hubs?
  answer: |
    To connect to an event hub, you need to update the Kafka client configs. Create an Event Hubs namespace and obtain the [connection string](event-hubs-get-connection-string.md). Change bootstrap.servers to point to the Event Hubs FQDN and port 9093. Update sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (the connection string you obtained), with the correct authentication as shown here:
    ```properties
    bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
    ```

    > [!NOTE]
    > If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the Simple Authentication and Security Layer (SASL) username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.
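For frameworks that take separate SASL username and password settings, the mapping from an Event Hubs connection string to client properties is mechanical. Here's a minimal Python sketch; the helper name is ours, and the property keys follow librdkafka-style clients such as confluent-kafka (which use `sasl.username`/`sasl.password`), so adapt them to your client:

```python
# Illustrative helper (not part of any Azure SDK): derives the Kafka client
# properties described above from an Event Hubs connection string.
def kafka_config_from_connection_string(conn_str: str) -> dict:
    # Segments look like: Endpoint=sb://<namespace>.servicebus.windows.net/;...
    parts = dict(
        segment.split("=", 1)
        for segment in conn_str.strip().rstrip(";").split(";")
    )
    fqdn = parts["Endpoint"].removeprefix("sb://").rstrip("/")
    return {
        "bootstrap.servers": f"{fqdn}:9093",  # Kafka endpoint uses port 9093
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        # Username is the literal string $ConnectionString; the password is
        # the full connection string itself.
        "sasl.username": "$ConnectionString",
        "sasl.password": conn_str,
    }

example = (
    "Endpoint=sb://mynamespace.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abc123"
)
config = kafka_config_from_connection_string(example)
```

The `split("=", 1)` matters because shared access keys are base64 and can contain `=` characters of their own.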
- question: |
    Can I increase the partition count in the Standard tier of Event Hubs?
  answer: |
    No, it's not possible because partitions are immutable in the Standard tier. Dynamic addition of partitions is available only in the Premium and Dedicated tiers of Event Hubs.
- question: |
    How are ingress events calculated?
  answer: |
    Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that's less than or equal to 64 KB. Any event that's less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.

    Events consumed from an event hub, and management operations and control calls such as checkpoints, aren't counted as billable ingress events, but accrue up to the throughput unit allowance.
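The 64-KB rule can be expressed as a short calculation. This is a sketch of the billing arithmetic only (the function name is ours, not an Azure API; actual charges also depend on tier and throughput units):

```python
import math

def billable_events(event_size_bytes: int) -> int:
    """Number of billable ingress events for one published event."""
    unit = 64 * 1024  # one ingress event covers up to 64 KB
    return max(1, math.ceil(event_size_bytes / unit))

print(billable_events(8 * 1024))   # 8-KB event -> 1
print(billable_events(96 * 1024))  # 96-KB event -> 2
```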

articles/event-hubs/schema-registry-overview.md
---
title: Azure Schema Registry in Azure Event Hubs
description: This article provides an overview of Schema Registry support by Azure Event Hubs and how it can be used from your Apache Kafka and other apps.
ms.topic: concept-article
ms.date: 02/09/2026
ms.custom: references_regions
# Customer intent: As an Azure Event Hubs user, I want to know how Azure Event Hubs supports registering schemas and using them in sending and receiving events.
---
Event streaming and messaging scenarios often deal with structured data in the event or message payload. However, the structured data is of little value to the event broker, which only deals with bytes. Schema-driven formats such as [Apache Avro](https://avro.apache.org/), [JSONSchema](https://json-schema.org/), or [Protobuf](https://protobuf.dev/) are often used to serialize or deserialize such structured data to/from binary.
An event producer uses a schema definition to serialize the event payload and publish it to an event broker such as Event Hubs. Event consumers read the event payload from the broker and deserialize it by using the same schema definition.
Both producers and consumers can validate the integrity of the data by using the same schema.
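As a toy illustration of that round trip (plain Python struct packing, not Avro, Protobuf, or Azure Schema Registry; the record layout is invented for the example), the schema lives outside the message, and producer and consumer must agree on the same one:

```python
import struct

# Shared schema: field names and type codes live OUTSIDE the message,
# so the serialized payload carries only the values.
SCHEMA = [("id", "i"), ("temperature", "d")]

def serialize(record, schema):
    fmt = "<" + "".join(code for _, code in schema)
    return struct.pack(fmt, *(record[name] for name, _ in schema))

def deserialize(payload, schema):
    fmt = "<" + "".join(code for _, code in schema)
    names = [name for name, _ in schema]
    return dict(zip(names, struct.unpack(fmt, payload)))

event = {"id": 42, "temperature": 21.5}
payload = serialize(event, SCHEMA)        # producer side
restored = deserialize(payload, SCHEMA)   # consumer side, same schema
```

If the consumer used a different schema, the bytes would unpack into the wrong fields, which is exactly the agreement problem a schema registry exists to manage.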

:::image type="content" source="./media/schema-registry-overview/schema-driven-ser-de.svg" alt-text="Diagram showing producers and consumers serializing and deserializing event payload using schemas from the Schema Registry." lightbox="./media/schema-registry-overview/schema-driven-ser-de.svg":::

## What is Azure Schema Registry?
**Azure Schema Registry** is a feature of Event Hubs that provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationships between schemas through a logical grouping construct (schema groups).
:::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Diagram showing a producer and a consumer serializing and deserializing event payload using a schema from the Schema Registry." lightbox="./media/schema-registry-overview/schema-registry.svg" border="false":::
With schema-driven serialization frameworks like Apache Avro, JSONSchema, and Protobuf, moving serialization metadata into shared schemas can also help **reduce the per-message overhead**. Each message doesn't need to include the metadata (type information and field names) as it does with tagged formats such as JSON.
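To make the overhead point concrete, here's a rough comparison in plain Python (invented record; actual savings depend on the format and schema):

```python
import json
import struct

record = {"device_id": 7, "temperature": 21.5, "humidity": 0.43}

# Tagged format: every message repeats the field names.
json_payload = json.dumps(record).encode()

# Schema-driven binary: the shared schema holds names and types, so the
# message carries only the values (4 + 8 + 8 = 20 bytes).
binary_payload = struct.pack(
    "<idd", record["device_id"], record["temperature"], record["humidity"]
)

print(len(json_payload), len(binary_payload))
```

The gap widens with longer field names and higher message volumes, which is where moving metadata into a shared schema pays off.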

> [!NOTE]
> The feature is available in the **Standard**, **Premium**, and **Dedicated** tiers.
>
Storing schemas alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization or deserialization is always available and schemas can't be misplaced.