Event Hubs for Identity Events
Azure Event Hubs is the streaming backbone for Entra-adjacent systems that need to move large volumes of events reliably to multiple consumers. It is the right mental model when the system is dealing with event streams, telemetry, or audit-style feeds rather than step-by-step workflow commands.
Use Event Hubs when throughput, replay, and independent consumers matter more than per-message workflow semantics.
The Streaming Model
Event Hubs stores events in ordered streams that are split across partitions. Producers write events into the hub, and consumers read from partitions at their own pace.
The core concepts are:
- Partitions - parallel lanes through the stream; ordering is guaranteed only within a partition, and the partition count sets the scale boundary.
- Consumer groups - independent read views over the same stream, so multiple applications can process the same events differently.
- Retention - how long events remain available for consumers to read or re-read.
- Replay - the ability to reprocess a stream from an earlier position.
- Throughput - the provisioned capacity (throughput or processing units) that determines how much event volume the hub can handle.
This is why Event Hubs fits streaming workloads better than workflow coordination.
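The concepts above can be sketched with a toy in-memory model. This is not the Event Hubs SDK — `pick_partition` and the `partitions` dict are illustrative names — but it shows the key behavior: events are routed to a partition by a stable hash of a partition key, each partition is an append-only log, and ordering holds only within a partition.

```python
from collections import defaultdict

PARTITION_COUNT = 4

def pick_partition(partition_key: str, partition_count: int = PARTITION_COUNT) -> int:
    # Route by a stable hash of the key, so every event with the same
    # key lands in the same partition and stays ordered relative to
    # the other events for that key.
    return sum(partition_key.encode()) % partition_count

# Each partition is an append-only log; there is no global order
# across partitions, only order within each one.
partitions: dict[int, list[str]] = defaultdict(list)

for user, action in [("alice", "login"), ("bob", "login"), ("alice", "logout")]:
    partitions[pick_partition(user)].append(f"{user}:{action}")

# alice's two events share a partition, so their relative order is preserved.
alice_log = partitions[pick_partition("alice")]
assert alice_log.index("alice:login") < alice_log.index("alice:logout")
```

Keying by user (or device, tenant, etc.) is the usual way to get per-entity ordering while still spreading load across partitions.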
Where It Shows Up Around Entra
Event Hubs is a strong fit when Entra-backed systems generate or collect high-volume data such as:
- audit-style event feeds,
- operational telemetry from identity workers,
- large reconciliation result streams,
- near-real-time captures that several downstream processors need to inspect,
- events forwarded into analytics systems like Azure Data Explorer.
In these cases, the event itself is the product. Consumers may store, transform, enrich, alert, or analyze it later.
Why Replay And Consumer Groups Matter
Streaming architectures are rarely served by a single consumer forever. Teams often need:
- one consumer for real-time alerting,
- one for long-term analytics,
- one for enrichment or normalization,
- one temporary consumer for incident investigation.
Consumer groups let those readers coexist without stealing messages from each other. Retention and replay let operators or new processors go back and inspect earlier data. That is a core streaming capability, not something you should expect from a workflow queue.
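A minimal sketch of that coexistence, again with illustrative names rather than the real SDK: each consumer group is just an independent cursor over the same retained stream, so one group reading ahead never steals events from another, and replay is nothing more than moving a cursor backwards.

```python
# One shared, retained stream of events.
stream = ["evt-0", "evt-1", "evt-2", "evt-3"]

# Each consumer group tracks its own read position over the same events.
cursors = {"alerting": 0, "analytics": 0}

def read(group: str, count: int) -> list[str]:
    pos = cursors[group]
    batch = stream[pos:pos + count]
    cursors[group] = pos + len(batch)
    return batch

assert read("alerting", 4) == ["evt-0", "evt-1", "evt-2", "evt-3"]
assert read("analytics", 2) == ["evt-0", "evt-1"]  # unaffected by alerting

# Replay: an incident investigator rewinds a group's cursor
# to reprocess events that are still within retention.
cursors["alerting"] = 1
assert read("alerting", 2) == ["evt-1", "evt-2"]
```

Contrast this with a workflow queue, where a delivered message is typically gone: here nothing is consumed destructively, so a temporary investigation consumer can be added and removed without touching anyone else's position.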
Event Hubs vs Service Bus
Event Hubs and Service Bus are related only in the sense that both move messages, but they solve different problems.
- Use Event Hubs when you need a high-throughput stream, partition-based scale, replay, and multiple independent readers.
- Use Service Bus when you need workflow coordination, delivery control, retries, sessions, or dead-letter handling.
If a message represents “process this business step exactly as a coordinated unit,” start with Service Bus for Workflows. If it represents “here is another event in the stream,” Event Hubs is the better default.
Practical Trade-Offs
Event Hubs is not a free upgrade over a queue. It asks you to think in stream-processing terms:
- partition design affects ordering and scale,
- consumers must track progress,
- replay can be valuable but also operationally expensive,
- per-message workflow semantics are intentionally limited.
That trade is worth it when the system’s main problem is ingesting and processing lots of events, not coordinating a business workflow.
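The "consumers must track progress" point is worth making concrete. A toy checkpointing sketch (hypothetical names; the real SDK offers checkpoint stores for this): the consumer persists its offset after each batch, so a restart resumes from the checkpoint instead of reprocessing the entire retained stream.

```python
import json
import os
import tempfile

# The retained stream the consumer is working through.
stream = [f"evt-{i}" for i in range(10)]

def load_checkpoint(path: str) -> int:
    # A missing checkpoint means "start from the beginning of retention".
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["offset"]
    return 0

def process_batch(path: str, batch_size: int) -> list[str]:
    offset = load_checkpoint(path)
    batch = stream[offset:offset + batch_size]
    with open(path, "w") as f:  # persist progress only after the batch
        json.dump({"offset": offset + len(batch)}, f)
    return batch

ckpt = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
assert process_batch(ckpt, 4) == ["evt-0", "evt-1", "evt-2", "evt-3"]
# Simulated restart: the next run picks up from the stored offset.
assert process_batch(ckpt, 4) == ["evt-4", "evt-5", "evt-6", "evt-7"]
```

Note the trade-off baked in here: checkpointing after the batch means a crash mid-batch replays those events on restart, so consumers generally need to be idempotent.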
Typical Placement
In this topic’s stack, Event Hubs usually sits between producers and analytics or processing layers:
- Functions or other producers emit events,
- Event Hubs retains and partitions the stream,
- downstream consumers write to storage, trigger actions, or feed Azure Data Explorer and KQL.
That makes it a first-class streaming service, not a general-purpose broker for everything.
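The placement above can be summarized in one small simulation (illustrative only): producers append into the hub, the hub retains the stream, and two independent downstream consumers read the same events for different purposes — one archiving everything, one alerting on a subset.

```python
# The hub: an append-only, retained stream between producers and consumers.
hub: list[dict] = []

def produce(event: dict) -> None:
    hub.append(event)  # producers only emit; they never address consumers

produce({"kind": "audit", "user": "alice"})
produce({"kind": "audit", "user": "bob", "suspicious": True})

# Downstream consumer 1: archive the full stream to a storage sink.
storage = list(hub)

# Downstream consumer 2: alert on suspicious events from the same stream.
alerts = [e["user"] for e in hub if e.get("suspicious")]

assert len(storage) == 2
assert alerts == ["bob"]
```

The producers know nothing about either consumer, which is exactly what lets new processors (an analytics feed, a KQL pipeline) attach later without changing the emitting code.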