Real‑Time Data Isn’t Free: The Complexity and Cost Tradeoffs (From Trickle to Internet‑Class)

The first time someone asks for “real‑time,” it sounds like a small tweak: refresh the dashboard faster, trigger an alert sooner, show a counter that feels alive. In a data platform, that single request quietly changes everything—how you ingest, how you process, how you serve, and how you operate.

This post keeps it practical. It frames real‑time as a freshness target (not a vibe), walks through the two taxes real‑time introduces—architectural complexity and cost—and shows how patterns evolve as you scale from modest #StreamingData to internet‑class velocity. It also folds in recent Microsoft Ignite announcements that matter for real‑time platforms, including SQL Server 2025’s “change event streaming” and near real‑time analytics via OneLake/Fabric mirroring, plus the continued maturation of Microsoft Fabric’s Real‑Time Intelligence building blocks.

Continue reading “Real‑Time Data Isn’t Free: The Complexity and Cost Tradeoffs (From Trickle to Internet‑Class)”

2025 Year in Review: When Microsoft Fabric and Microsoft Purview Turned “Data + AI” Into a Governed Operating Model

By the end of 2025, the conversation around analytics stopped being about dashboards and started sounding a lot more like operations. The rise of autonomous and semi-autonomous agents put a sharper edge on an old truth: AI only becomes an enterprise capability when the underlying data is trusted, discoverable, and defensible.

Microsoft Fabric and Microsoft Purview spent 2025 building toward that reality from opposite (but increasingly overlapping) sides of the house. Fabric pushed the platform forward—unifying workloads, expanding OneLake, and adding new intelligence and database capabilities designed for AI-era workloads. Purview tightened the governance and security loop—making data quality, cataloging, risk visibility, and policy enforcement feel less like a separate initiative and more like part of the daily flow.

This year-in-review walks through the story they told together: how Microsoft Fabric expanded the data platform into a more complete data estate, how Microsoft Purview reframed governance for the AI and agent era, and where the two converged into a more practical, enterprise-ready operating model.

Continue reading “2025 Year in Review: When Microsoft Fabric and Microsoft Purview Turned ‘Data + AI’ Into a Governed Operating Model”

Edit, Retarget, and Redeploy: A Practical TMDL Folder Workflow for Fabric Semantic Models

There’s a moment in every Fabric semantic model lifecycle where the “click it in the UI” approach stops scaling.

It usually happens when you need to rename dozens (or hundreds) of fields to match a business glossary, or when Dev is stable and you’re ready to point the same model at a new Lakehouse for Test/Prod. That’s when the model stops being a diagram and starts being an artifact—something you want to treat like code.

This guide walks the whole workflow end-to-end: opening the model via the Fabric service's Edit in Desktop experience, exporting it to a PBIP project stored as a TMDL folder, editing that folder externally (no scripting inside Power BI Desktop), and then getting those changes back into the service—including two key capabilities:

  • retargeting the entire model to a different Lakehouse/Warehouse, and
  • retargeting a single table to a different physical table (even in a new Lakehouse).

We’ll do it with the mindset of Power BI + Microsoft Fabric development: repeatable changes, visible diffs, and fewer “hand edits” you regret later.
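Because a TMDL folder is just plain text, retargeting can be a scripted, diff-able change rather than a hand edit. The sketch below is a minimal illustration, not the full workflow from the post: the endpoint and database names are hypothetical placeholders, and it assumes your Dev connection values appear verbatim in the exported `.tmdl` files.

```python
from pathlib import Path

# Hypothetical Dev and Test values -- substitute the connection details
# from your own exported TMDL folder.
OLD_ENDPOINT = "abc123.datawarehouse.fabric.microsoft.com"
NEW_ENDPOINT = "xyz789.datawarehouse.fabric.microsoft.com"
OLD_DATABASE = "Lakehouse_Dev"
NEW_DATABASE = "Lakehouse_Test"

def retarget(tmdl_root: str) -> list[str]:
    """Rewrite endpoint/database references in every .tmdl file under tmdl_root.

    Returns the list of files that changed, so the diff is visible
    before you commit and redeploy.
    """
    changed = []
    for path in sorted(Path(tmdl_root).rglob("*.tmdl")):
        text = path.read_text(encoding="utf-8")
        new_text = (text
                    .replace(OLD_ENDPOINT, NEW_ENDPOINT)
                    .replace(OLD_DATABASE, NEW_DATABASE))
        if new_text != text:
            path.write_text(new_text, encoding="utf-8")
            changed.append(str(path))
    return changed
```

Run it against the PBIP's semantic model folder, review the git diff, and only then push the project back to the service—that review step is the whole point of treating the model like code.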

Continue reading “Edit, Retarget, and Redeploy: A Practical TMDL Folder Workflow for Fabric Semantic Models”

A reference landing zone architecture for Microsoft Fabric

Think of this as two layers that work together:

Fabric layer (tenant + capacities + workspaces)
You set governance boundaries through tenant settings, capacity assignment, workspace structure, and workspace security configuration.

Azure layer (identity, networking, Key Vault, storage, monitoring)
You provide the enterprise foundations Fabric will integrate with: private endpoints, VNets, gateways, Key Vault keys/secrets, and ADLS archive storage.

Continue reading “A reference landing zone architecture for Microsoft Fabric”

Shortcuts Everywhere, But Serving Still Matters: Materialized Lake Views in Fabric

If your Microsoft Fabric estate is “shortcut‑first,” you’re not alone. OneLake shortcuts (and mirroring) make it genuinely easy to unify data that lives elsewhere—on‑prem, multicloud, SaaS—without immediately building a full ingestion factory. That architectural speed is real.

But there’s a predictable moment when the elegance turns into friction: the day consumption outgrows the source.

Dashboards refresh. Analysts explore. Notebooks iterate. And now—post‑Ignite—agents and Copilot scenarios multiply the number of reads in ways that are hard to forecast. Ignite’s messaging was clear: OneLake is the context layer, and Fabric IQ (plus Foundry IQ) is designed to reason across that unified data foundation.

This post refreshes the original argument with what’s changed since then: inserting Materialized Lake Views (MLVs) into your architecture benefits even heavily shortcutted external designs—because MLVs change who hits the source, when they hit it, and how often.

Continue reading “Shortcuts Everywhere, But Serving Still Matters: Materialized Lake Views in Fabric”

From Warehouses to Products: SAMR for Your Cloud Data Platform

Financial Services, Insurance, Wealth Management, and Professional Services have a gift—and a curse—when it comes to data.

The gift is that these industries know how to run critical systems with discipline. The curse is that we’re so good at controlling risk that we often rebuild the same constraints in every new platform we adopt.

That’s why so many “modern cloud data platforms” in these sectors end up feeling like the old data warehouse with a new hosting model: better infrastructure, familiar bottlenecks. Slow change. Long planning cycles. A platform measured by what it ingests, not what it enables.

In the previous post on AI and SAMR, the point wasn’t that AI is inherently transformative. The point was that without a disciplined framework, we use new tools to reproduce old workflows. The same is true for data.

Here’s what I’ll cover in this follow-up:

  • Why cloud platforms in regulated, risk-sensitive industries so often replicate warehouse-era behaviors.
  • How SAMR can be used as a simple truth-teller for your data platform strategy.
  • Why data products are the most practical “unit of redefinition,” and why you don’t need Data Mesh to benefit from them—though mesh and product thinking dramatically expands your toolset.
  • How ground-up design thinking keeps your platform anchored to real business goals, not elegant architectures.

If your cloud platform still takes six months to change a definition, you didn’t modernize. You relocated.

Continue reading “From Warehouses to Products: SAMR for Your Cloud Data Platform”

From “Should‑Do” to Done: Digital Workers for Wealth, Energy, and Financial Services

Every enterprise carries a shadow backlog—the should‑do work that never beats the urgent. It’s the reconciliation that almost closes, the control that’s “fine for now,” the evidence that exists but isn’t filed where audit will accept it. None of these items is existential in isolation; together they become trust debt: silent risk, rework, slower decisions, and reputational drag.

2025 amplified the problem. Compressed settlement cycles demand same‑day precision in wealth operations. Emissions and operational reporting remain high‑stakes in oil and gas, with timelines adjusted but scrutiny intact. Financial institutions face fluid sanctions and evolving data‑sharing rules, while beneficial ownership reporting shifted materially. In each case, the gap is less about intent and more about capacity. Digital workers exist to close that gap.

Digital workers are policy‑aware software agents that connect to the systems already in place, act on explicit rules (with narrow ML where safe), and hand back proof that the job was completed—consistently and auditably. What follows clarifies how they work and where they quietly create value in wealth management, oil and gas, and financial services.

Continue reading “From ‘Should‑Do’ to Done: Digital Workers for Wealth, Energy, and Financial Services”

From Chunks to Queries—Ignite 2025 Update: Fabric Data Agents, RAG, and the New IQ Layer

Monday, 9:02 a.m. The CFO pings: “What was Q3 gross margin by region—and did audit call out any risks?” Your RAG bot shines on PDFs and wiki pages, but it can’t compute a number you’d put on a KPI card. After Ignite 2025, the answer is cleaner than ever: let a Fabric Data Agent generate and run a governed query for the metric, and let your RAG retriever bring back the one‑sentence risk note. One conversation; two specialized tools; auditable answers. 

Continue reading “From Chunks to Queries—Ignite 2025 Update: Fabric Data Agents, RAG, and the New IQ Layer”

Fabric Is Medallion‑First, Not Medallion‑Only

If you work with Microsoft Fabric long enough, it’s easy to come away with the impression that “real” Fabric means “medallion everywhere.” The official docs walk through Bronze, Silver, and Gold patterns for lakehouses. The learning paths lean on medallion as the canonical example. Fabric clearly makes medallion a first‑class citizen. 

But that doesn’t mean your data platform – or your data products – must be medallion‑shaped.

In a world of managed, domain‑aligned data products and Data Mesh thinking, what matters most is the contract at the edges: the inputs you accept, the outputs you guarantee, and the behaviors you commit to over time. Inside the boundary of a data product, you have more architectural freedom than many teams allow themselves.

In this post, I’ll walk through three ideas:

  • Fabric is medallion‑forward, but not medallion‑only.
  • For data products, inputs and outputs matter far more than internal state.
  • Internal architecture should serve engineering excellence, not a single prescriptive pattern – illustrated with small examples from financial services, wealth management, and insurance.

By the end, the goal is simple: when you design a Fabric data product, you should feel comfortable treating medallion as one option in a toolbox, not as a mandatory religion.

Continue reading “Fabric Is Medallion‑First, Not Medallion‑Only”

Spec‑Driven Development: Make the Specification the First Commit

If your acceptance criteria live in a comment thread, they’re not requirements—they’re opinions. Spec‑driven development (SDD) turns those opinions into executable truth so code, tests, docs, and operations move in lockstep.

Building on our split between functional and nonfunctional requirements, this follow‑up introduces spec‑driven development: what it is, why it reduces drift, and how to run it inside agile without ceremony. We’ll connect behavior specs, API contracts, data schemas, and quality budgets to lightweight gates in CI/CD and SLOs in production. By the end, you’ll have a “small slice” pattern you can ship next sprint. This is where spec-driven development meets Agile, DevOps, and APIs.
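To make "executable truth" concrete: an acceptance criterion can land in the repo as a runnable check before any implementation exists. The sketch below is illustrative only—the discount rule and its thresholds are hypothetical stand-ins for whatever behavior your spec pins down—but the shape is the point: examples first, implementation second, CI gate last.

```python
# Acceptance criteria captured as executable examples -- the "first commit".
# The rule itself is hypothetical, standing in for your own domain behavior.
SPEC = [
    # (order_total, is_member, expected_discount)
    (49.99, False, 0.00),  # below threshold: no discount
    (50.00, False, 0.05),  # threshold reached: 5%
    (50.00, True,  0.10),  # members get 10% at threshold
]

def discount(order_total: float, is_member: bool) -> float:
    """Minimal implementation, written *after* SPEC was committed."""
    if order_total < 50.00:
        return 0.00
    return 0.10 if is_member else 0.05

def check_spec() -> bool:
    """CI gate: the implementation must satisfy every example in SPEC."""
    return all(discount(total, member) == expected
               for total, member, expected in SPEC)
```

The rule doesn't matter; what matters is that the examples live in version control and gate the build, so drift between the conversation and the code surfaces as a red check rather than a production surprise.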

Continue reading “Spec‑Driven Development: Make the Specification the First Commit”