
Commit 03582f3

committed
Event Hubs - Freshness Review
1 parent c4ab977 commit 03582f3

2 files changed

Lines changed: 16 additions & 15 deletions

File tree

articles/event-hubs/event-hubs-faq.yml

Lines changed: 6 additions & 6 deletions
@@ -3,7 +3,7 @@ metadata:
   title: Frequently asked questions - Azure Event Hubs | Microsoft Docs
   description: This article provides a list of frequently asked questions (FAQ) for Azure Event Hubs and their answers.
   ms.topic: faq
-  ms.date: 09/30/2024
+  ms.date: 02/09/2026
   ms.custom: sfi-ropc-nochange
 title: Event Hubs frequently asked questions
 summary: |
@@ -84,7 +84,7 @@ sections:
   - question: |
       What configuration changes need to be done for my existing application to talk to Event Hubs?
     answer: |
-      To connect to an event hub, you'll need to update the Kafka client configs. It's done by creating an Event Hubs namespace and obtaining the [connection string](event-hubs-get-connection-string.md). Change the bootstrap.servers to point the Event Hubs FQDN and the port to 9093. Update the sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (which is the connection string you've obtained), with correct authentication as shown below:
+      To connect to an event hub, you need to update the Kafka client configs. It's done by creating an Event Hubs namespace and obtaining the [connection string](event-hubs-get-connection-string.md). Change the bootstrap.servers to point to the Event Hubs FQDN and the port to 9093. Update the sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (which is the connection string you've obtained), with correct authentication as shown here:

       ```properties
       bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093
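The hunk above shows the Java-style properties file. As a rough sketch, the same settings can be expressed as a Python dictionary in the librdkafka/confluent-kafka key convention, where there's no `sasl.jaas.config` and the SASL username/password settings are used instead (the namespace name and connection string below are hypothetical placeholders):

```python
def event_hubs_kafka_config(namespace: str, connection_string: str) -> dict:
    """Kafka client settings for an Event Hubs namespace.

    Mirrors the properties snippet above: bootstrap.servers points at the
    namespace FQDN on port 9093, and SASL PLAIN carries "$ConnectionString"
    as the username with the connection string itself as the password.
    """
    return {
        "bootstrap.servers": f"{namespace}.servicebus.windows.net:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "$ConnectionString",
        "sasl.password": connection_string,
    }

# Hypothetical usage; pass the resulting dict to your Kafka client's constructor.
conf = event_hubs_kafka_config("mynamespace", "<your-connection-string>")
print(conf["bootstrap.servers"])  # mynamespace.servicebus.windows.net:9093
```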
@@ -105,7 +105,7 @@ sections:
       ```

       > [!NOTE]
-      > If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the SASL username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.
+      > If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the Simple Authentication and Security Layer (SASL) username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.

   - question: |
       What is the message/event size for Event Hubs?
@@ -142,7 +142,7 @@ sections:
   - question: |
       How does Autoinflate feature of Event Hubs work?
     answer: |
-      The autoinflate feature lets you scale up your throughput units (TUs). It means that you can start by purchasing low TUs and autoinflate scales up your TUs as your ingress increases. It gives you a cost-effective option and complete control of the number of TUs to manage. This feature is a **scale-up only** feature, and you can completely control the scaling down of the number of TUs by updating it.
+      The autoinflate feature lets you scale up your throughput units (TUs). It means that you can start by purchasing low TUs and autoinflate scales up your TUs as your ingress increases. It gives you a cost-effective option and complete control of the number of TUs to manage. This feature is a **scale-up only** feature, and you can scale down the number of TUs only by manually updating it.

       You might want to start with low throughput units (TUs), for example, 2 TUs. If you predict that your traffic might grow to 15 TUs, enable the auto inflate feature on your namespace, and set the max limit to 15 TUs. You can now grow your TUs automatically as your traffic grows.

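The scale-up-only behavior this hunk describes can be sketched as follows. This is purely illustrative, not the service's actual scaling logic:

```python
def next_tu_count(current_tu: int, required_tu: int, max_tu: int) -> int:
    """Auto-inflate only ever raises the TU count, capped at max_tu.

    Scaling down never happens automatically; the owner must lower
    the TU count manually.
    """
    if required_tu > current_tu:
        return min(required_tu, max_tu)
    return current_tu  # traffic dropped: count stays where it is

print(next_tu_count(2, 10, 15))   # ingress grew, inflates to 10
print(next_tu_count(10, 4, 15))   # ingress dropped, stays at 10
print(next_tu_count(10, 20, 15))  # capped at the configured max, 15
```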
@@ -195,7 +195,7 @@ sections:
       [!INCLUDE [event-hubs-partition-count](./includes/event-hubs-partition-count.md)]

   - question: |
-      Can partition count be increased in the Standard tier of Event Hubs?
+      Can I increase the partition count in the Standard tier of Event Hubs?
     answer: |
       No, it's not possible because partitions are immutable in the Standard tier. Dynamic addition of partitions is available only in premium and dedicated tiers of Event Hubs.

@@ -219,7 +219,7 @@ sections:
   - question: |
       How are ingress events calculated?
     answer: |
-      Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that is less than or equal to 64 KB. Any event that is less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.
+      Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that's less than or equal to 64 KB. Any event that's less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.

       Events consumed from an event hub, and management operations and control calls such as checkpoints, aren't counted as billable ingress events, but accrue up to the throughput unit allowance.

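The 64-KB billing rule in this hunk amounts to a ceiling division; a minimal sketch:

```python
import math

def billable_events(event_size_kb: float) -> int:
    """Number of billable ingress events for a single event of the given size.

    Events up to 64 KB count as one event; larger events are billed
    in 64-KB multiples, rounding up.
    """
    return max(1, math.ceil(event_size_kb / 64))

print(billable_events(8))    # 1
print(billable_events(96))   # 2
```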

articles/event-hubs/schema-registry-overview.md

Lines changed: 10 additions & 9 deletions
@@ -1,8 +1,9 @@
 ---
 title: Azure Schema Registry in Azure Event Hubs
 description: This article provides an overview of Schema Registry support by Azure Event Hubs and how it can be used from your Apache Kafka and other apps.
+#customer intent: As an Azure Event Hubs user, I want to understand how to use Azure Schema Registry to manage schemas for event-driven applications so that I can ensure data consistency between producers and consumers.
 ms.topic: concept-article
-ms.date: 12/02/2024
+ms.date: 02/09/2026
 ms.custom: references_regions
 # Customer intent: As an Azure Event Hubs user, I want to know how Azure Event Hubs supports registering schemas and using them in sending and receiving events.
 ---
@@ -11,24 +12,24 @@ ms.custom: references_regions
 
 Event streaming and messaging scenarios often deal with structured data in the event or message payload. However, the structured data is of little value to the event broker, which only deals with bytes. Schema-driven formats such as [Apache Avro](https://avro.apache.org/), [JSONSchema](https://json-schema.org/), or [Protobuf](https://protobuf.dev/) are often used to serialize or deserialize such structured data to/from binary.

-An event producer uses a schema definition to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and deserialize it using the same schema definition.
+An event producer uses a schema definition to serialize the event payload and publish it to an event broker such as Event Hubs. Event consumers read the event payload from the broker and deserialize it by using the same schema definition.

-So, both producers and consumers can validate the integrity of the data with the same schema.
+Both producers and consumers can validate the integrity of the data by using the same schema.

-:::image type="content" source="./media/schema-registry-overview/schema-driven-ser-de.svg" alt-text="Image showing producers and consumers serializing and deserializing event payload using schemas from the Schema Registry.":::
+:::image type="content" source="./media/schema-registry-overview/schema-driven-ser-de.svg" alt-text="Diagram showing producers and consumers serializing and deserializing event payload using schemas from the Schema Registry." lightbox="./media/schema-registry-overview/schema-driven-ser-de.svg":::

 ## What is Azure Schema Registry?
-**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a logical grouping construct (schema groups).
+**Azure Schema Registry** is a feature of Event Hubs that provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationships between schemas through a logical grouping construct (schema groups).

-:::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Image showing a producer and a consumer serializing and deserializing event payload using a schema from the Schema Registry." border="false":::
+:::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Diagram showing a producer and a consumer serializing and deserializing event payload using a schema from the Schema Registry." lightbox="./media/schema-registry-overview/schema-registry.svg" border="false":::

-With schema-driven serialization frameworks like Apache Avro, JSONSchema and Protobuf, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. It's because each message doesn't need to have the metadata (type information and field names) as it's the case with tagged formats such as JSON.
+With schema-driven serialization frameworks like Apache Avro, JSONSchema, and Protobuf, moving serialization metadata into shared schemas can also help **reduce the per-message overhead**. Each message doesn't need to include the metadata (type information and field names) as it does with tagged formats such as JSON.

 > [!NOTE]
-> The feature is available in the **Standard**, **Premium**, and **Dedicated** tier.
+> The feature is available in the **Standard**, **Premium**, and **Dedicated** tiers.
 >

-Having schemas stored alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization or deserialization is always in reach and schemas can't be misplaced.
+Storing schemas alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization or deserialization is always available and schemas can't be misplaced.

 ## Related content
