
Commit e77c16c

Merge pull request #311598 from spelluru/ehubfreshness0209
Event Hubs - Freshness Review
2 parents e258215 + 2f9c789

2 files changed

Lines changed: 14 additions & 14 deletions

File tree

articles/event-hubs/event-hubs-faq.yml

Lines changed: 5 additions & 5 deletions
@@ -3,7 +3,7 @@ metadata:
   title: Frequently asked questions - Azure Event Hubs | Microsoft Docs
   description: This article provides a list of frequently asked questions (FAQ) for Azure Event Hubs and their answers.
   ms.topic: faq
-  ms.date: 09/30/2024
+  ms.date: 02/09/2026
   ms.custom: sfi-ropc-nochange
 title: Event Hubs frequently asked questions
 summary: |
@@ -84,7 +84,7 @@ sections:
   - question: |
       What configuration changes need to be done for my existing application to talk to Event Hubs?
     answer: |
-      To connect to an event hub, you'll need to update the Kafka client configs. It's done by creating an Event Hubs namespace and obtaining the [connection string](event-hubs-get-connection-string.md). Change the bootstrap.servers to point the Event Hubs FQDN and the port to 9093. Update the sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (which is the connection string you've obtained), with correct authentication as shown below:
+      To connect to an event hub, you need to update the Kafka client configs. It's done by creating an Event Hubs namespace and obtaining the [connection string](event-hubs-get-connection-string.md). Change the bootstrap.servers to point the Event Hubs FQDN and the port to 9093. Update the sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (which is the connection string you've obtained), with correct authentication as shown here:

       ```properties
       bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093
@@ -105,7 +105,7 @@ sections:
       ```

       > [!NOTE]
-      > If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the SASL username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.
+      > If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the Simple Authentication and Security Layer (SASL) username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.

   - question: |
       What is the message/event size for Event Hubs?
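The note in the hunk above tells readers to map the SASL username and password directly when sasl.jaas.config isn't available in their framework. As a hedged sketch of that mapping (the namespace name and connection string are placeholders, and the key names follow the kafka-python naming convention as an illustrative assumption, not something stated in this commit), the equivalent settings might look like:

```python
# Sketch: Event Hubs Kafka-endpoint settings expressed as separate SASL
# username/password configs, as the note suggests for frameworks without
# sasl.jaas.config support. All values below are placeholders.

def kafka_settings_for_event_hubs(fqdn: str, connection_string: str) -> dict:
    """Build Kafka client settings for an Event Hubs Kafka endpoint.

    Key names follow the kafka-python convention (sasl_plain_username /
    sasl_plain_password); other clients expose equivalent settings.
    """
    return {
        "bootstrap_servers": f"{fqdn}:9093",         # Event Hubs Kafka port
        "security_protocol": "SASL_SSL",             # TLS is required
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": "$ConnectionString",  # literal string, per the FAQ
        "sasl_plain_password": connection_string,    # the namespace connection string
    }

settings = kafka_settings_for_event_hubs(
    "mynamespace.servicebus.windows.net",
    "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=PLACEHOLDER",
)
```

The dict can then be splatted into whatever client constructor the framework provides; the essential point from the note is only the username/password mapping.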
@@ -195,7 +195,7 @@ sections:
       [!INCLUDE [event-hubs-partition-count](./includes/event-hubs-partition-count.md)]

   - question: |
-      Can partition count be increased in the Standard tier of Event Hubs?
+      Can I increase the partition count in the Standard tier of Event Hubs?
     answer: |
       No, it's not possible because partitions are immutable in the Standard tier. Dynamic addition of partitions is available only in premium and dedicated tiers of Event Hubs.

@@ -219,7 +219,7 @@ sections:
   - question: |
       How are ingress events calculated?
     answer: |
-      Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that is less than or equal to 64 KB. Any event that is less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.
+      Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that's less than or equal to 64 KB. Any event that's less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.

       Events consumed from an event hub, and management operations and control calls such as checkpoints, aren't counted as billable ingress events, but accrue up to the throughput unit allowance.
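The billing rule quoted in the hunk above (one billable event per 64-KB unit, rounded up) can be sketched as a one-line calculation; the function name is ours, not Azure's:

```python
import math

# Illustrative sketch of the ingress billing arithmetic: events are billed
# in 64-KB units, rounded up, with a minimum of one billable event.

KB = 1024
INGRESS_UNIT = 64 * KB

def billable_events(event_size_bytes: int) -> int:
    """Number of billable ingress events for a single published event."""
    return max(1, math.ceil(event_size_bytes / INGRESS_UNIT))

print(billable_events(8 * KB))    # 8-KB event -> 1 billable event
print(billable_events(96 * KB))   # 96-KB event -> 2 billable events
```

This reproduces the FAQ's own examples: 8 KB bills as one event, 96 KB as two.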

articles/event-hubs/schema-registry-overview.md

Lines changed: 9 additions & 9 deletions
@@ -2,7 +2,7 @@
 title: Azure Schema Registry in Azure Event Hubs
 description: This article provides an overview of Schema Registry support by Azure Event Hubs and how it can be used from your Apache Kafka and other apps.
 ms.topic: concept-article
-ms.date: 12/02/2024
+ms.date: 02/09/2026
 ms.custom: references_regions
 # Customer intent: As an Azure Event Hubs user, I want to know how Azure Event Hubs supports registering schemas and using them in sending and receiving events.
 ---
@@ -11,24 +11,24 @@ ms.custom: references_regions

 Event streaming and messaging scenarios often deal with structured data in the event or message payload. However, the structured data is of little value to the event broker, which only deals with bytes. Schema-driven formats such as [Apache Avro](https://avro.apache.org/), [JSONSchema](https://json-schema.org/), or [Protobuf](https://protobuf.dev/) are often used to serialize or deserialize such structured data to/from binary.

-An event producer uses a schema definition to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and deserialize it using the same schema definition.
+An event producer uses a schema definition to serialize the event payload and publish it to an event broker such as Event Hubs. Event consumers read the event payload from the broker and deserialize it by using the same schema definition.

-So, both producers and consumers can validate the integrity of the data with the same schema.
+Both producers and consumers can validate the integrity of the data by using the same schema.

-:::image type="content" source="./media/schema-registry-overview/schema-driven-ser-de.svg" alt-text="Image showing producers and consumers serializing and deserializing event payload using schemas from the Schema Registry. ":::
+:::image type="content" source="./media/schema-registry-overview/schema-driven-ser-de.svg" alt-text="Diagram showing producers and consumers serializing and deserializing event payload using schemas from the Schema Registry. " lightbox="./media/schema-registry-overview/schema-driven-ser-de.svg":::

 ## What is Azure Schema Registry?
-**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a logical grouping construct (schema groups).
+**Azure Schema Registry** is a feature of Event Hubs that provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a logical grouping construct (schema groups).

-:::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Image showing a producer and a consumer serializing and deserializing event payload using a schema from the Schema Registry." border="false":::
+:::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Diagram showing a producer and a consumer serializing and deserializing event payload using a schema from the Schema Registry." lightbox="./media/schema-registry-overview/schema-registry.svg" border="false":::

-With schema-driven serialization frameworks like Apache Avro, JSONSchema and Protobuf, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. It's because each message doesn't need to have the metadata (type information and field names) as it's the case with tagged formats such as JSON.
+With schema-driven serialization frameworks like Apache Avro, JSONSchema, and Protobuf, moving serialization metadata into shared schemas can also help **reduce the per-message overhead**. Each message doesn't need to include the metadata (type information and field names) as it does with tagged formats such as JSON.

 > [!NOTE]
-> The feature is available in the **Standard**, **Premium**, and **Dedicated** tier.
+> The feature is available in the **Standard**, **Premium**, and **Dedicated** tiers.
 >

-Having schemas stored alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization or deserialization is always in reach and schemas can't be misplaced.
+Storing schemas alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization or deserialization is always available and schemas can't be misplaced.

 ## Related content

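The per-message-overhead claim in this hunk can be made concrete with a rough, stdlib-only illustration (this is not a real Avro or Protobuf encoder; the record and the `struct` layout are our own assumptions): tagged JSON repeats the field names in every payload, while a schema-driven binary encoding sends only the values and relies on the shared schema for names and types.

```python
import json
import struct

# Rough illustration of why moving metadata into a shared schema shrinks
# each message. The record below is hypothetical example data.
record = {"device_id": 42, "temperature_c": 21.5, "humidity_pct": 40.0}

# Tagged format: every message carries the field names as text.
tagged = json.dumps(record).encode()

# Schema-driven stand-in: values only (int64 + two float64s, little-endian);
# field names and types live in the schema, shared out of band.
schema_driven = struct.pack("<qdd", 42, 21.5, 40.0)

print(len(tagged), len(schema_driven))  # the binary form is much smaller
```

The fixed binary layout here is 24 bytes regardless of field-name length, while the JSON form grows with every name it repeats, which is the overhead the paragraph describes.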