Real‑Time Intelligence (RTI) is the part of Fabric that treats events and logs as first‑class citizens: you connect live streams, shape them, persist them, query them with KQL or SQL, visualize them, and trigger actions—all without leaving the SaaS surface. Concretely, RTI centers on Eventstream (ingest/transform/route), Eventhouse (KQL databases), Real‑Time Dashboards / Map, and Activator (detect patterns and act). That tight loop—capture → analyze → visualize/act—now covers everything from IoT telemetry to operational logs and clickstream analytics.
Where it fits: two common patterns
1) Operational & observability analytics
When you need sub‑second to minutes‑level insight—fleet monitoring, e‑commerce funnel health, fraud signals—stream events land in Eventhouse and are queried with KQL for exploratory analysis, aggregation, and anomaly spotting. Activator can then trigger alerts or workflows the moment conditions are met. With the new anomaly detection (preview), you can even set up no‑code detection directly over Eventhouse tables.
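To make that concrete, here's a minimal sketch of an exploratory anomaly query run from Python with the azure-kusto-data client. The Eventhouse query URI, database, table, and column names are placeholders; the KQL uses the built-in series_decompose_anomalies function rather than the no-code experience.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder Eventhouse query URI -- copy yours from the Eventhouse details pane.
cluster = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
client = KustoClient(KustoConnectionStringBuilder.with_az_cli_authentication(cluster))

# Hypothetical Telemetry table with Timestamp, DeviceId, Temperature columns:
# build one-minute series per device and flag anomalous points.
query = """
Telemetry
| where Timestamp > ago(1h)
| make-series AvgTemp = avg(Temperature) on Timestamp step 1m by DeviceId
| extend Anomalies = series_decompose_anomalies(AvgTemp, 1.5)
"""
for row in client.execute("SensorDb", query).primary_results[0]:
    print(row["DeviceId"], row["Anomalies"])
```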
2) Real‑time data products feeding the lake
When the goal is medallion‑style curation, Eventstream → Lakehouse gives you a Delta Bronze layer immediately, while Fabric's Materialized Lake Views (MLVs, preview) provide declarative SQL transformations for Silver/Gold. The experience emphasizes "one logical copy": your Lakehouse tables feed Direct Lake semantic models and notebooks, and RTI can still run KQL against those same Delta tables through the new Eventhouse endpoint for Lakehouse, unifying streaming and batch semantics.
Connecting to event‑based streams (and what’s new)
Eventstream is the routing switchboard: it subscribes to upstreams (Azure Event Hubs, IoT Hub, Event Grid, Confluent Cloud for Apache Kafka, OneLake file‑system events, and more), offers low/no‑code transforms, and fans out to Eventhouse, Lakehouse, Activator, or custom endpoints. Recent updates add a SQL operator for code‑first transforms, derived‑stream → Eventhouse direct ingestion, and Managed Private Endpoints GA to connect privately to Azure services.
A few practical implications worth calling out:
- Kafka everywhere. Eventstream's custom endpoints expose a Kafka‑compatible endpoint, so you can produce and consume with the Kafka protocol over SASL/SSL, which is useful for integrating existing Kafka apps without running brokers of your own.
- Confluent & schemas. In preview, the Confluent Cloud connector can now decode data governed by Confluent Schema Registry, while Fabric’s own Schema Registry (preview) introduces governed “schema sets” you can bind to Eventstream sources (notably custom endpoints and Azure SQL CDC). This materially improves type safety in streaming pipelines.
- OneLake events. Eventstream can subscribe to OneLake file/folder change events, which is handy when your “event” is actually a file landing (for example, micro‑batch sensor CSVs).
CSV connectors: why they’re useful—and where they bite
Eventstream’s Event Hubs source supports JSON, Avro, and CSV (with header). CSV is pragmatic when a legacy emitter can’t serialize JSON/Avro; it’s also easy to inspect and replay. But understand the trade‑offs:
- Typing & drift. CSV carries no embedded schema. Downstream, KQL ingestion mappings must align ordinally, and for tabular formats you can’t “map a column twice” or change a column’s type in place. If producers evolve columns, you’ll manage mappings or stage the stream through a transform. Schema Registry (or Avro/JSON with in‑payload schema cues) reduces this operational friction.
- Lakehouse schema enforcement. When routing to Lakehouse, Fabric enforces table schema at write time; mismatched or extra columns are dropped or set null, and heavy schema drift (like CDC) is discouraged via this path. You also tune Rows per file and Duration to balance many small files vs. larger, fewer files—important for Delta/Direct Lake performance.
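For the ordinal-mapping point above, here's a minimal sketch of what a CSV ingestion mapping looks like on the Eventhouse side, issued as a management command via azure-kusto-data. The cluster URI, database, table, and columns are illustrative; when Eventstream owns the ingestion it configures this for you in the destination wizard, but the positional shape (and its fragility under column reordering) is the same.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder Eventhouse query URI, database, and table names.
client = KustoClient(KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"))

# Each target column is bound to a position in the CSV row; if the producer
# inserts or reorders columns, this mapping silently points at the wrong data.
create_mapping = (
    ".create-or-alter table SensorReadings ingestion csv mapping 'SensorCsvMapping' "
    "'["
    '{"column":"DeviceId","Properties":{"Ordinal":"0"}},'
    '{"column":"Timestamp","Properties":{"Ordinal":"1"}},'
    '{"column":"Temperature","Properties":{"Ordinal":"2"}}'
    "]'"
)
client.execute_mgmt("SensorDb", create_mapping)
```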
Net: CSV works, especially for “flat” payloads, but if you own the producer, prefer JSON/Avro + schema for durable evolution and simpler governance across RTI.
Custom APIs & Kafka connectors: the mechanics
Eventstream’s custom endpoint exposes three protocol tabs (Event Hub, AMQP, and Kafka). On the Kafka tab you’ll see the bootstrap server, SASL_SSL with the PLAIN mechanism, and a sasl.jaas.config string built from the provided connection string, which is exactly what typical Kafka SDKs and CLIs expect. The same symmetry applies in the other direction when you egress to a custom endpoint destination to feed downstream apps in real time.
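As a sketch of the producer side, the confluent-kafka settings below mirror what the Kafka tab hands you: SASL_SSL, the PLAIN mechanism, and the connection string as the password (with "$ConnectionString" as the username, the usual Event Hubs-compatible convention). The bootstrap server, topic, and connection string are placeholders to replace with the values from your endpoint.

```python
from confluent_kafka import Producer

# Placeholders -- copy the real values from the custom endpoint's Kafka tab.
producer = Producer({
    "bootstrap.servers": "<namespace>.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "$ConnectionString",
    "sasl.password": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>;EntityPath=<topic>",
})

producer.produce(
    "<topic>",  # topic name shown on the Kafka tab
    key="device-42",
    value='{"deviceId": "device-42", "temperature": 21.7}',
)
producer.flush()  # block until the broker acknowledges delivery
```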
For teams standardizing on Confluent Cloud, there’s a first‑party connector that ingests topics directly into Eventstream; the Schema Registry support (preview) means you can decode those topic payloads within Fabric rather than building a custom deserializer.
Lakehouse‑first design (with Eventhouse in the loop)
“Lakehouse‑first” resonates when the endgame is a governed medallion model that serves BI, data science, and AI.
- Land events into Bronze as Delta. Point Eventstream to your Lakehouse destination and select JSON/Avro/CSV. Fabric enforces schema at write and lets you tune Rows per file and Duration to optimize Delta layout for Direct Lake and notebooks. This gives you immediate Bronze in OneLake without bespoke loaders.
- Lift Bronze → Silver/Gold with MLVs. Materialized Lake Views (preview) let you express transformations (joins, filters, windowed aggregations) as declarative SQL, with Fabric handling persistence, scheduling, lineage, and monitoring (a sketch follows this list). Today MLV refreshes are full (no incremental yet), which actually makes “reflow”—a controlled recomputation after upstream corrections—predictable and auditable.
- Query Bronze/Silver with KQL via the Eventhouse endpoint. New this fall, Eventhouse endpoint for Lakehouse attaches a KQL “front door” to Lakehouse tables. Under the covers, Fabric tracks and caches the Delta data and applies query acceleration so you can write KQL over your Lakehouse without duplicating data or wiring separate shortcuts. This closes the loop between streaming analytics and lake‑native curation.
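As a sketch of the MLV step called out above, the PySpark cell below (run in a Fabric notebook attached to the Lakehouse, using the notebook's ambient spark session) declares a Silver view over a hypothetical Bronze events table. Table, schema, and column names are illustrative, and the CREATE MATERIALIZED LAKE VIEW statement reflects the preview syntax, so check the current docs before relying on it.

```python
# Run in a Fabric notebook attached to the Lakehouse; `spark` is the ambient session.
# bronze.raw_events and silver.device_readings are hypothetical names.
spark.sql("""
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.device_readings
AS
SELECT
    deviceId,
    CAST(eventTimestamp AS TIMESTAMP) AS event_ts,
    CAST(temperature AS DOUBLE)       AS temperature_c
FROM bronze.raw_events
WHERE temperature IS NOT NULL
""")
```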
Result: a clean division of labor—Eventstream writes Delta; MLVs define business logic; KQL/Maps/Activator sit on top for real‑time exploration and action—while keeping the one‑copy promise in OneLake.
Eventhouse‑first variant (and how it complements Lakehouse)
When the priority is low‑latency analytics and flexible KQL over hot data, stream into Eventhouse. Two FabCon‑era changes matter:
- Derived‑stream → Eventhouse (direct). You can now ingest derived streams straight to Eventhouse—great when you filter or shape events in the Eventstream graph and want those curated rows in their own KQL table.
- OneLake availability (already GA). Flip this on and your Eventhouse tables are exposed in OneLake as Delta, with no extra storage, so batch engines and Direct Lake models can read the same data. It’s the mirror image of the Eventhouse endpoint for Lakehouse noted above.
If you need always‑hot query/ingest capacity, Always‑On prevents auto‑suspend and lets you set a minimum consumption floor. This is useful for mission‑critical monitoring where startup latency is unacceptable.
SQL in Eventstream, anomaly detection, and actions
SQL operator (preview). Eventstream now includes a code editor for Stream Analytics–style SQL—handy when the no‑code canvas isn’t expressive enough. Because it aligns with the ASA query language, most streaming ops (temporal joins, windows, aggregates) are natural, and the output can route to any RTI destination (Eventhouse, Lakehouse, Activator, or another stream).
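If you've written Stream Analytics queries, the shape carries over directly. The snippet below holds an illustrative one-minute tumbling-window aggregate as a Python string; the stream aliases and columns are hypothetical, the SQL itself is what you'd paste into the operator's editor, and the exact input/output aliasing in the Eventstream editor may differ slightly from classic ASA.

```python
# Illustrative ASA-style query for the Eventstream SQL operator (preview).
# "input" / "curated" aliases and the columns are hypothetical; the query text,
# not this Python file, is what goes into the operator's editor.
TUMBLING_WINDOW_SQL = """
SELECT
    deviceId,
    System.Timestamp() AS windowEnd,
    AVG(temperature)   AS avgTemperature,
    COUNT(*)           AS readings
INTO [curated]
FROM [input] TIMESTAMP BY eventTimestamp
GROUP BY deviceId, TumblingWindow(minute, 1)
"""
```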
Anomaly detection (preview). Directly from an Eventhouse table, you can spin up no‑code anomaly models with recommendations, sensitivity tuning, and continuous monitoring—then wire alerts or workflows via Activator. It’s a fast path to first signal without building a custom ML pipeline.
Security and private networking
For regulated estates, Eventstream supports Managed Private Endpoints (MPE), now generally available, enabling private connectivity to Azure Event Hubs and IoT Hub without exposing public network paths. The setup is workspace‑scoped and approved from the Azure side of the target service.
Maps and graph‑centric analysis
Two additions deepen real‑time context:
- Map (new): a geospatial visualization surface that works with static or real‑time data and fits naturally with fleet, logistics, or site telemetry. Because it sits inside RTI, you can blend it with Eventhouse tables and live data.
- Graph analysis: Eventhouse (KQL) has graph operators and persistent graphs (preview), and Microsoft has also previewed the broader Graph in Fabric experience. The upshot: you can represent relationships (devices↔sites↔alerts) and pattern‑match them in near‑real time as events stream in.
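As a sketch of the graph angle, the KQL below (run from Python with azure-kusto-data, though a KQL queryset works just as well) builds a device-to-site graph from hypothetical Edges/Nodes tables and joins the matches against high-severity alerts. All names are illustrative; the operators shown are the core make-graph / graph-match pair.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder Eventhouse URI; Edges(SourceId, TargetId), Nodes(NodeId, NodeType)
# and Alerts(DeviceId, Severity) are hypothetical tables.
client = KustoClient(KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"))

graph_query = """
Edges
| make-graph SourceId --> TargetId with Nodes on NodeId
| graph-match (device)-[connectsTo]->(site)
    where device.NodeType == "Device" and site.NodeType == "Site"
    project DeviceId = device.NodeId, SiteId = site.NodeId
| join kind=inner (Alerts | where Severity == "High") on DeviceId
"""
for row in client.execute("OpsDb", graph_query).primary_results[0]:
    print(row["DeviceId"], row["SiteId"], row["Severity"])
```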
Guidance: bespoke analytics vs. medallion architecture
If your team needs bespoke, exploratory analytics over live telemetry—ad‑hoc KQL, rapid aggregations, anomaly hunting, real‑time dashboards—start with Eventhouse as your hot store and layer Activator and Map for action and geospatial context. When that analysis stabilizes into canonical facts, either enable OneLake availability or copy curated aggregates into Lakehouse to make them broadly consumable.
If you’re building a data product with clear Bronze/Silver/Gold guarantees for BI/AI, go Lakehouse‑first: stream to Delta Bronze, formalize business logic with MLVs, and use the Eventhouse endpoint to apply KQL when you need temporal or log‑style exploration—without creating another copy. This keeps governance tight and aligns performance with downstream consumers.
A note on “CSV vs. Avro/JSON” in the wild
CSV still has a place—especially for edge devices and legacy emitters—but as systems scale and evolve, schemas matter. With Fabric Schema Registry (preview) and Confluent Schema Registry decoding in Eventstream (preview), typed events become the safer default: they preserve compatibility, simplify downstream mappings, and reduce silent data loss when producers change. Use CSV when you must; prefer Avro/JSON + schema when you can.
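To make the compatibility point concrete, here's a small fastavro sketch (schema and field names are illustrative): adding a field with a default lets a reader on the old schema keep consuming records written with the new one, which is exactly the kind of drift that silently breaks ordinal CSV mappings.

```python
import io
from fastavro import schemaless_writer, schemaless_reader

# v2 adds "firmware" with a default; v1 readers can still decode v2 records,
# and v2 readers can fill the default when decoding v1 records.
schema_v1 = {
    "type": "record", "name": "SensorReading",
    "fields": [
        {"name": "deviceId", "type": "string"},
        {"name": "temperature", "type": "double"},
    ],
}
schema_v2 = {
    "type": "record", "name": "SensorReading",
    "fields": [
        {"name": "deviceId", "type": "string"},
        {"name": "temperature", "type": "double"},
        {"name": "firmware", "type": "string", "default": "unknown"},
    ],
}

buf = io.BytesIO()
schemaless_writer(buf, schema_v2,
                  {"deviceId": "device-42", "temperature": 21.7, "firmware": "1.4.2"})
buf.seek(0)
# Resolve the v2-written bytes against the v1 reader schema: the extra field is dropped.
record = schemaless_reader(buf, schema_v2, reader_schema=schema_v1)
print(record)  # {'deviceId': 'device-42', 'temperature': 21.7}
```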
What changed since FabCon Europe?
Post‑event updates materially expand what you can do without leaving RTI:
- Eventhouse endpoint for Lakehouse (KQL over Delta with query acceleration)
- Anomaly detection (no‑code, preview)
- Maps for geospatial in RTI
- Schema Registry and Confluent Schema Registry decoding
- SQL operator in Eventstream
- Derived‑stream direct ingestion to Eventhouse
- Managed Private Endpoints GA for private networking
Together these make RTI more composable: you can keep a lakehouse‑first medallion design while retaining real‑time KQL and actions, or you can lead with Eventhouse and materialize out to Delta—either way, it’s one estate, one governance plane.
Closing thought
RTI’s center of gravity has shifted from “a streaming corner of the platform” to how Fabric organizes and operationalizes data in motion. The practical consequence for architects is freedom of approach: keep the one logical copy promise while choosing whether your first landing is a Lakehouse (for medallion) or an Eventhouse (for hot analytics)—and know that with endpoints and availability features on both sides, you don’t have to pick only one.