---
title: Event Hubs features and terminology
description: Learn about the core concepts, features, and terminology of Azure Event Hubs including namespaces, partitions, consumers, and protocols.
ms.topic: concept-article
ms.date: 01/12/2026
---
This article explains the core concepts and terminology of Azure Event Hubs. For a high-level overview, see What is Event Hubs?
| Concept | Description |
|---|---|
| Namespace | Management container for one or more event hubs. Controls network access and scaling. |
| Event hub | An append-only log that stores events. Equivalent to a Kafka topic. |
| Partition | Ordered sequence of events within an event hub. Enables parallel processing. |
| Producer/Publisher | Application that sends events to an event hub. |
| Consumer | Application that reads events from an event hub. |
| Consumer group | Independent view of the event stream. Multiple groups can read the same data separately. |
| Offset | Position of an event within a partition. Used to track reading progress. |
| Checkpointing | Saving the current offset so consumers can resume from where they left off. |
An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides network endpoints and controls access through features like IP filtering, virtual network service endpoints, and Private Link.
:::image type="content" source="./media/event-hubs-features/namespace.png" alt-text="Diagram showing an Event Hubs namespace containing multiple event hubs.":::
[!INCLUDE event-hubs-partitions]
A producer (or publisher) is any application that sends events to an event hub.
| Method | Description |
|---|---|
| Azure SDKs | .NET, Java, Python, JavaScript, Go |
| REST API | HTTP POST requests for lightweight clients |
| Kafka clients | Use existing Kafka producers without code changes |
| AMQP 1.0 | Any AMQP client such as Apache Qpid |
- Batch or individual: Publish events one at a time or in batches. Maximum 1 MB per publish operation.
- Partition keys: Specify a partition key to group related events in the same partition, ensuring ordered delivery.
- Authorization: Use Microsoft Entra ID (OAuth2) or Shared Access Signatures (SAS) for access control.
:::image type="content" source="./media/event-hubs-features/partition_keys.png" alt-text="Diagram showing how partition keys map events to specific partitions.":::
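The grouping behavior of partition keys can be sketched in plain Python. The hash function below is purely illustrative (the Event Hubs service uses its own internal hashing, so real assignments will differ); the property that matters is that the same key always maps to the same partition, which is what preserves per-key ordering.

```python
import hashlib

def assign_partition(partition_key: str, partition_count: int) -> int:
    """Map a partition key to a partition index.

    Illustrative stand-in for the service's internal hashing: the same
    key always lands on the same partition, so events that share a key
    keep their relative order.
    """
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# Events from the same device share a key, so they land on one partition.
events = [("device-42", "temp=21.0"), ("device-7", "temp=19.5"), ("device-42", "temp=21.3")]
placements = [(key, assign_partition(key, 4)) for key, _ in events]
```

Because assignment depends only on the key, a producer never needs to know how many consumers exist or which partition a key maps to.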
Publisher policies enable granular control when you have many independent publishers. Each publisher uses a unique identifier:
```
//<my namespace>.servicebus.windows.net/<event hub name>/publishers/<my publisher name>
```

The publisher name must match the SAS token used for authentication. When you use publisher policies, the `PartitionKey` value must match the publisher name.
A consumer is any application that reads events from an event hub. Event Hubs uses a pull model—consumers request events rather than having events pushed to them.
A consumer group is an independent view of the event stream. Multiple consumer groups can read the same event hub simultaneously, each tracking its own position.
| Guideline | Recommendation |
|---|---|
| Readers per partition | One active reader per partition within a consumer group (up to five in special scenarios) |
| Default group | Every event hub has a default consumer group ($Default) |
| Multiple applications | Create separate consumer groups for each application (analytics, archival, alerting) |
```
//<my namespace>.servicebus.windows.net/<event hub name>/<Consumer Group #1>
//<my namespace>.servicebus.windows.net/<event hub name>/<Consumer Group #2>
```

:::image type="content" source="./media/event-hubs-about/event_hubs_architecture.png" alt-text="Diagram showing multiple consumer groups reading from the same event hub.":::
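A minimal sketch of the pull model and per-group cursors, using a plain Python list as the partition log (the group names "analytics" and "archival" are illustrative, not Event Hubs identifiers):

```python
# Two consumer groups read the same partition log; each keeps its own
# cursor, so neither affects the other's progress.
partition_log = ["e0", "e1", "e2", "e3", "e4"]  # append-only event stream

class GroupReader:
    def __init__(self, name: str):
        self.name = name
        self.position = 0  # position is tracked per consumer group

    def pull(self, max_events: int) -> list:
        """Consumers pull events; the service never pushes them."""
        batch = partition_log[self.position : self.position + max_events]
        self.position += len(batch)
        return batch

analytics = GroupReader("analytics")
archival = GroupReader("archival")
analytics.pull(3)          # analytics advances to position 3
first = archival.pull(2)   # archival independently starts at 0
```

This is why separate applications get separate consumer groups: an archival job that falls behind never slows down or skews a real-time analytics reader.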
An offset is the position of an event within a partition—think of it as a cursor. Consumers use offsets to specify where to start reading. You can start from:
- A specific offset value
- A timestamp
- The beginning or end of the stream
:::image type="content" source="./media/event-hubs-features/partition_offset.png" alt-text="Diagram showing events in a partition with offset positions.":::
Checkpointing is the process by which a consumer saves its current offset within a partition. It enables:
- Resumption: If a consumer disconnects, it resumes from the last checkpoint
- Failover: A new consumer instance can take over from where another left off
- Replay: Process historical events by specifying an earlier offset
> [!IMPORTANT]
> In AMQP, checkpointing is the consumer's responsibility. The Event Hubs service provides offsets, but consumers must store checkpoints.
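The checkpoint-and-resume flow can be sketched as follows, assuming a dict stands in for a durable checkpoint store (in practice the Azure SDKs typically persist checkpoints to Azure Blob Storage):

```python
# Sketch of checkpoint-based resumption. Keys are (consumer group,
# partition) pairs; values are the offset of the last processed event.
checkpoint_store = {}

def process(group: str, partition: str, events: list) -> None:
    """Handle events and persist the offset of the last one processed."""
    for offset, payload in events:
        # ... application-specific handling of payload goes here ...
        checkpoint_store[(group, partition)] = offset  # checkpoint progress

def resume_offset(group: str, partition: str) -> int:
    """A restarted or failed-over consumer continues after the checkpoint."""
    return checkpoint_store.get((group, partition), -1) + 1

process("analytics", "0", [(0, "a"), (1, "b"), (2, "c")])
```

Checkpointing after every event gives the finest-grained resumption but the most writes; real consumers often checkpoint every N events or on a timer as a trade-off.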
[!INCLUDE storage-checkpoint-store-recommendations]
The Azure SDKs provide intelligent consumer clients that handle partition management, load balancing, and checkpointing automatically:
| Language | Client |
|---|---|
| .NET | EventProcessorClient |
| Java | EventProcessorClient |
| Python | EventHubConsumerClient |
| JavaScript | EventHubConsumerClient |
Each event contains:
- Body: The event payload
- Offset: Position in the partition
- Sequence number: Order within the partition
- User properties: Custom metadata
- System properties: Service-assigned metadata (enqueue time, etc.)
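The event shape described above can be modeled as a small dataclass. This is an illustrative structure whose field names mirror the concepts in the list, not any specific SDK class:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReceivedEvent:
    body: bytes              # the event payload
    offset: int              # position within the partition
    sequence_number: int     # order within the partition
    enqueued_time: datetime  # system-assigned when the service accepts the event
    user_properties: dict = field(default_factory=dict)  # custom metadata

event = ReceivedEvent(
    body=b'{"temp": 21.0}',
    offset=1024,
    sequence_number=7,
    enqueued_time=datetime(2026, 1, 12, tzinfo=timezone.utc),
    user_properties={"source": "sensor-1"},
)
```

Note that the body, user properties, and system properties are set at publish or enqueue time; the offset and sequence number are assigned by the partition that stores the event.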
Events are automatically removed based on a time-based retention policy.
| Tier | Default | Maximum |
|---|---|---|
| Standard | 1 hour | 7 days |
| Premium | 1 hour | 90 days |
| Dedicated | 1 hour | 90 days |
Key points:
- Events can't be explicitly deleted
- Retention changes apply to existing events
- Events become unavailable once the retention period expires; physical removal isn't guaranteed to happen at that exact moment
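Time-based retention can be sketched as a filter over enqueue timestamps. The 7-day window below models the Standard-tier maximum; the function is illustrative, not service behavior:

```python
from datetime import datetime, timedelta, timezone

# Events older than the retention window are no longer readable.
RETENTION = timedelta(days=7)

def readable(events: list, now: datetime) -> list:
    """Return payloads whose enqueue time is still inside the window."""
    return [payload for enqueued, payload in events if now - enqueued <= RETENTION]

now = datetime(2026, 1, 12, tzinfo=timezone.utc)
events = [
    (now - timedelta(days=10), "expired"),  # outside the 7-day window
    (now - timedelta(days=2), "fresh"),     # still readable
]
```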
> [!NOTE]
> Event Hubs is a real-time streaming engine, not a database. For long-term storage, use Event Hubs Capture to archive events to Azure Storage, Data Lake Storage, or Azure Synapse.
Capture automatically saves streaming data to Azure Blob Storage or Azure Data Lake Storage. Configure a minimum size and time window to control capture frequency.
:::image type="content" source="./media/event-hubs-features/capture.png" alt-text="Diagram showing Event Hubs Capture writing data to Azure Storage.":::
| Format | Description |
|---|---|
| Avro | Default format for captured data |
| Parquet | Available through the no-code editor in the Azure portal |
Log compaction retains only the latest event for each unique key, rather than using time-based retention. Useful for maintaining current state without storing full history.
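Compaction reduces a keyed event log to a current-state view. A minimal sketch of the keep-latest-per-key behavior:

```python
# Sketch of log compaction: the compacted result keeps only the latest
# event per key, representing current state rather than full history.
def compact(log: list) -> dict:
    state = {}
    for key, value in log:  # later events overwrite earlier ones per key
        state[key] = value
    return state

log = [("device-1", "off"), ("device-2", "on"), ("device-1", "on")]
```

Here the earlier `("device-1", "off")` event is discarded because a newer event exists for the same key, which is exactly why compaction suits state-style streams (device status, latest reading) rather than audit-style streams.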
Event Hubs supports multiple protocols for flexibility across different client types.
| Protocol | Send | Receive | Best for |
|---|---|---|---|
| AMQP 1.0 | Yes | Yes | High throughput, low latency, persistent connections |
| Apache Kafka | Yes | Yes | Existing Kafka applications (version 1.0+) |
| HTTPS | Yes | No | Lightweight clients, firewall-restricted environments |
- AMQP: Requires a persistent bidirectional socket, so connection setup costs more up front, but performance is better for frequent send and receive operations. Used by the Azure SDKs.
- Kafka: Native support means existing Kafka applications work without code changes. Just reconfigure the bootstrap server to point to your Event Hubs namespace.
- HTTPS: Simple HTTP POST for sending. No receiving support. Good for occasional, low-volume publishing.
For Kafka integration details, see Event Hubs for Apache Kafka.
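Pointing a Kafka client at Event Hubs typically comes down to a few properties. The sketch below shows a common `client.properties` shape with SASL PLAIN over TLS; the placeholders are yours to fill in, and you should confirm the exact settings against the Event Hubs for Apache Kafka documentation:

```properties
# Event Hubs exposes a Kafka endpoint on port 9093 of the namespace.
bootstrap.servers=<my namespace>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# The literal username "$ConnectionString" tells Event Hubs to treat the
# password as a namespace connection string.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<your Event Hubs connection string>";
```

No producer or consumer code changes are needed; only this connection configuration differs from a self-hosted Kafka cluster.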
Microsoft Entra ID provides OAuth 2.0 authentication with role-based access control (RBAC). Assign built-in roles to control access:
| Role | Permissions |
|---|---|
| Azure Event Hubs Data Owner | Full access to send and receive events |
| Azure Event Hubs Data Sender | Send events only |
| Azure Event Hubs Data Receiver | Receive events only |
For details, see Authorize access with Microsoft Entra ID.
SAS tokens provide scoped access at the namespace or event hub level. A SAS token is generated from a SAS key and typically grants only send or listen permissions.
For details, see Shared Access Signature authentication.
Application groups let you define resource access policies (like throttling) for collections of client applications that share a security context (SAS policy or Microsoft Entra application ID).