articles/event-hubs/event-hubs-about.md (+21 −67 lines)
@@ -56,21 +56,30 @@ Azure offers multiple messaging services. Use this guidance to select the right
 For detailed guidance, see [Choose between Azure messaging services](../service-bus-messaging/compare-messaging-services.md).
 
-## Common architecture patterns
+## How it works
+
+Event Hubs provides a unified streaming platform with time-based retention, decoupling event producers from consumers. Both can perform large-scale data ingestion and processing through multiple protocols.
+
+:::image type="content" source="./media/event-hubs-about/components.png" alt-text="Diagram that shows the main components of Event Hubs.":::
+
+### Core components
 
-Event Hubs supports these common streaming architectures:
+| Component | Description |
+|-----------|-------------|
+| **Producer applications** | Applications that send events to Event Hubs using [Event Hubs SDKs](sdks.md), Kafka producer clients, or HTTPS |
+| **Namespace** | Management container for one or more event hubs. Handles [streaming capacity](event-hubs-scalability.md), [network security](network-security.md), and [geo-disaster recovery](event-hubs-geo-dr.md) at the namespace level |
+| **Event hub / Kafka topic** | An append-only distributed log that organizes events. Contains one or more [partitions](event-hubs-features.md#partitions) for parallel processing |
+| **Partitions** | Ordered sequences of events used to scale throughput. Think of partitions as lanes on a freeway—more partitions enable higher throughput |
+| **Consumer applications** | Applications that read events by tracking their position (offset) in each partition. Can use [Event Hubs SDKs](sdks.md) or Kafka consumer clients |
+| **Consumer group** | A logical view of the event hub that enables multiple consumer applications to read the same stream independently, each maintaining its own position |
 
-**Fan-out processing**: Multiple consumer groups independently read the same event stream for different purposes (analytics, archival, alerting)
-
-**Lambda architecture**: Combine batch and real-time processing using [Event Hubs Capture](event-hubs-capture-overview.md) for batch and Stream Analytics for real-time
-
-**Event sourcing**: Store all state changes as immutable events, enabling replay and audit capabilities
-
-**CQRS (Command Query Responsibility Segregation)**: Separate read and write models using Event Hubs as the event store
+### Event flow
 
-Event Hubs is the preferred event ingestion layer for event streaming solutions built on Azure. It integrates with data and analytics services inside and outside Azure to build complete data streaming pipelines:
+1. **Ingest**: Producer applications send events to an event hub. Events are assigned to partitions based on partition key or round-robin distribution.
+2. **Store**: Events are durably stored with configurable retention (1-90 days depending on tier). The [Capture](event-hubs-capture-overview.md) feature can also write events to long-term storage.
+3. **Process**: Consumer applications read events from partitions using consumer groups. Each consumer tracks its offset using [checkpointing](event-hubs-features.md#checkpointing) for reliable processing.
 
-[Process data with Azure Stream Analytics](./process-data-azure-stream-analytics.md) to generate real-time insights
-
-[Analyze streaming data with Azure Data Explorer](/azure/data-explorer/ingest-data-event-hub-overview) for near real-time exploration
-
-Build cloud-native applications, functions, or microservices that process streaming data
-
-[Validate event schemas](schema-registry-overview.md) using the built-in Schema Registry
+
+For a detailed explanation, see [Event Hubs features](event-hubs-features.md).
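The core-components table and event-flow steps added in this hunk can be illustrated with a small toy model. This is not the Event Hubs implementation or its SDK; it is a hypothetical in-memory sketch, with invented names, of how partition-key routing, round-robin distribution, and independent per-consumer-group offsets behave:

```python
# Toy model of an event hub: partitions, partition-key routing, and
# per-consumer-group offsets. Illustration only -- not the real service.
from collections import defaultdict
from itertools import count

class ToyEventHub:
    def __init__(self, partition_count=4):
        self.partitions = [[] for _ in range(partition_count)]
        self._rr = count()                 # round-robin counter
        self.offsets = defaultdict(dict)   # consumer group -> {partition: offset}

    def send(self, event, partition_key=None):
        if partition_key is not None:
            # The same key always maps to the same partition,
            # which preserves ordering per key.
            p = hash(partition_key) % len(self.partitions)
        else:
            # No key: spread events round-robin across partitions.
            p = next(self._rr) % len(self.partitions)
        self.partitions[p].append(event)
        return p

    def receive(self, group, partition):
        """Pull every event after this group's last checkpoint, then checkpoint."""
        start = self.offsets[group].get(partition, 0)
        events = self.partitions[partition][start:]
        self.offsets[group][partition] = len(self.partitions[partition])
        return events

hub = ToyEventHub()
p1 = hub.send({"temp": 21}, partition_key="sensor-1")
p2 = hub.send({"temp": 22}, partition_key="sensor-1")
assert p1 == p2  # same key -> same partition

# Two consumer groups read the same stream independently.
analytics = hub.receive("analytics", p1)
archive = hub.receive("archive", p1)
assert analytics == archive == [{"temp": 21}, {"temp": 22}]
# "analytics" checkpointed, so a second read returns nothing new;
# "archive" was unaffected by that read.
assert hub.receive("analytics", p1) == []
```

Reads here are non-destructive (events stay in the partition log), which is what lets each consumer group keep its own position in the same stream.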
@@ … @@
 For current pricing and detailed feature comparison, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/) and [quotas and limits](event-hubs-quotas.md).
 
-## How it works
-
-Event Hubs provides a unified streaming platform with time-based retention, decoupling event producers from consumers. Both can perform large-scale data ingestion and processing through multiple protocols.
-
-:::image type="content" source="./media/event-hubs-about/components.png" alt-text="Diagram that shows the main components of Event Hubs.":::
-
-### Core components
-
-| Component | Description |
-|-----------|-------------|
-| **Producer applications** | Applications that send events to Event Hubs using [Event Hubs SDKs](sdks.md), Kafka producer clients, or HTTPS |
-| **Namespace** | Management container for one or more event hubs. Handles [streaming capacity](event-hubs-scalability.md), [network security](network-security.md), and [geo-disaster recovery](event-hubs-geo-dr.md) at the namespace level |
-| **Event hub / Kafka topic** | An append-only distributed log that organizes events. Contains one or more [partitions](event-hubs-features.md#partitions) for parallel processing |
-| **Partitions** | Ordered sequences of events used to scale throughput. Think of partitions as lanes on a freeway—more partitions enable higher throughput |
-| **Consumer applications** | Applications that read events by tracking their position (offset) in each partition. Can use [Event Hubs SDKs](sdks.md) or Kafka consumer clients |
-| **Consumer group** | A logical view of the event hub that enables multiple consumer applications to read the same stream independently, each maintaining its own position |
-
-### Event flow
-
-1. **Ingest**: Producer applications send events to an event hub. Events are assigned to partitions based on partition key or round-robin distribution.
-2. **Store**: Events are durably stored with configurable retention (1-90 days depending on tier). The [Capture](event-hubs-capture-overview.md) feature can also write events to long-term storage.
-3. **Process**: Consumer applications read events from partitions using consumer groups. Each consumer tracks its offset using [checkpointing](event-hubs-features.md#checkpointing) for reliable processing.
-
-For a detailed explanation, see [Event Hubs features](event-hubs-features.md).
-
 ## Related content
 
-### Get started
-
-Use these quickstarts to start streaming data with Event Hubs:
-
-**.NET**: [Send and receive events](event-hubs-dotnet-standard-getstarted-send.md)
-
-**Java**: [Send and receive events](event-hubs-java-get-started-send.md) | [Use with Kafka](event-hubs-quickstart-kafka-enabled-event-hubs.md)
-
-**Python**: [Send and receive events](event-hubs-python-get-started-send.md)
-
-**JavaScript**: [Send and receive events](event-hubs-node-get-started-send.md)
-
-**Go**: [Send and receive events](event-hubs-go-get-started-send.md)
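The "Process" step in the event flow above depends on checkpointing for reliable processing. The sketch below uses hypothetical names and an in-memory dict in place of the durable checkpoint store the SDKs use; it shows why checkpointing gives at-least-once rather than exactly-once delivery, because events processed after the last checkpoint are replayed when a consumer restarts:

```python
# Sketch of checkpoint-based recovery (at-least-once semantics).
# The dict stands in for a durable checkpoint store; names are invented.
checkpoints = {}  # (consumer_group, partition_id) -> next offset to read

def process_partition(group, partition_id, events, checkpoint_every=2, crash_at=None):
    """Process events from the last checkpoint onward, checkpointing periodically."""
    processed = []
    start = checkpoints.get((group, partition_id), 0)
    for i in range(start, len(events)):
        if crash_at is not None and i == crash_at:
            raise RuntimeError("simulated crash before the next checkpoint")
        processed.append(events[i])
        if (i + 1) % checkpoint_every == 0:
            checkpoints[(group, partition_id)] = i + 1  # durable checkpoint
    checkpoints[(group, partition_id)] = len(events)
    return processed

events = ["e0", "e1", "e2", "e3", "e4"]
try:
    process_partition("analytics", 0, events, crash_at=3)
except RuntimeError:
    pass  # crashed after checkpointing offset 2; e2 was processed but not checkpointed

# On restart the consumer resumes from the last checkpoint (offset 2),
# so e2 is processed a second time: at-least-once, not exactly-once.
assert process_partition("analytics", 0, events) == ["e2", "e3", "e4"]
```

Checkpointing less often means cheaper steady-state operation but more replay after a failure, which is why handlers reading from Event Hubs are usually written to be idempotent.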
articles/event-hubs/event-hubs-features.md (+2 −2 lines)
@@ -38,7 +38,7 @@ An Event Hubs **namespace** is a management container for event hubs (or topics,
 
 ---
 
-## Producing events
+## Event producers
 
 A **producer** (or publisher) is any application that sends events to an event hub.
 
@@ -75,7 +75,7 @@ The publisher name must match the SAS token used for authentication. When using
 
 <a name="event-consumers"></a>
 
-## Consuming events
+## Event consumers
 
 A **consumer** is any application that reads events from an event hub. Event Hubs uses a **pull model**—consumers request events rather than having events pushed to them.
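The pull model described in that context line (consumers request events instead of having them pushed) can be sketched as a polling loop. `receive_batch` here is a hypothetical stand-in for the batched receive calls the SDKs expose, not a real API:

```python
# Toy pull loop: the consumer asks for events with a batch size,
# advancing its own cursor. Names are invented for illustration.
def receive_batch(partition, cursor, max_batch_size=3):
    """Return up to max_batch_size events starting at cursor, plus the new cursor."""
    batch = partition[cursor:cursor + max_batch_size]
    return batch, cursor + len(batch)

partition = ["e0", "e1", "e2", "e3", "e4"]
cursor = 0
batches = []
while True:
    batch, cursor = receive_batch(partition, cursor)
    if not batch:
        break  # nothing new; a real consumer would wait and poll again
    batches.append(batch)

assert batches == [["e0", "e1", "e2"], ["e3", "e4"]]
```

Because the consumer controls the cursor and the request rate, a slow reader backs up only its own position rather than forcing the broker to buffer pushes for it.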