Semantic Models Aren’t the Finish Line: 10 Underused SemPy Functions for Fabric

If you’ve ever watched a team pour months into a semantic model—only to treat it as “the thing Power BI reads”—you’ve seen a common (and costly) mental model at work.

Semantic models shouldn’t be the last layer before visualization. In Microsoft Fabric, they can be a first-class part of the extended analytics and data science stack: something you can query, validate, profile, and even productize from notebooks. That shift is exactly what SemPy (the Python library behind Semantic Link) makes practical.

In this post, I’m going to do three things:

  • Introduce SemPy for Fabric as the bridge between semantic models and the rest of your Python workflows.
  • Share four “generic” functions that help you discover and understand a model’s surface area.
  • Highlight three functions that make consuming semantic models straightforward, and three that unlock capabilities you’d otherwise spend real time (and compute) rebuilding yourself.

Along the way, I’ll frame these as patterns for turning semantic models into data product surface areas—usable well beyond dashboards. This is where Microsoft Fabric and SemPy start to feel less like “BI tooling” and more like part of your day-to-day analytics engineering and data science workflow.
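To make the “query, validate, profile” idea concrete, here is a minimal sketch of the consumption pattern, assuming a Fabric notebook where SemPy (the semantic-link package) is available. The dataset and table names (“Sales Model”, “FactSales”) are hypothetical placeholders.

```python
# Hedged sketch: assumes a Fabric notebook with SemPy installed.
# "Sales Model" and "FactSales" are hypothetical names.
def profile_model(dataset: str) -> dict:
    """Count a semantic model's tables and measures from a notebook."""
    import sempy.fabric as fabric  # imported lazily so the sketch reads standalone
    return {
        "tables": len(fabric.list_tables(dataset)),
        "measures": len(fabric.list_measures(dataset)),
    }

def pull_table(dataset: str, table: str):
    """Read a model table into pandas for validation or feature engineering."""
    import sempy.fabric as fabric
    return fabric.read_table(dataset, table)

# In a Fabric notebook you would then call:
#   profile_model("Sales Model")
#   df = pull_table("Sales Model", "FactSales")
```

The point of the shape: once a model table lands in pandas, it is ordinary data science input, not a BI artifact.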

Continue reading “Semantic Models Aren’t the Finish Line: 10 Underused SemPy Functions for Fabric”

Perfect AI Is the Wrong Standard: Automate the Happy Path and Take the Win

One of the most common complaints I hear about Artificial Intelligence—both from the public and from professionals—is some variation of: “It’s not 100% perfect.”

That reaction is understandable. But it’s also revealing.

In most areas of work, we don’t demand perfection. We demand progress. We accept that humans make mistakes, that processes have variance, and that edge cases exist. Yet the moment a workflow becomes automated—especially when it has “AI” stamped on it—many people quietly shift the standard to flawless execution.

Here’s what I want to do in this post: unpack why “100% perfect” is an unhelpful expectation for AI, and show why automating the happy path (the most common case) can deliver meaningful returns even if exceptions still require human attention.

Continue reading “Perfect AI Is the Wrong Standard: Automate the Happy Path and Take the Win”

From Telemetry to Trust: Using FUAM + Purview Lineage to Make Fabric Governance Pay Off

If you’re running Microsoft Fabric at any real scale, you’ve probably felt the tension: the platform makes it easy to build, share, and iterate—but it also makes it easy to spend, sprawl, and accidentally ship the wrong answer.

The good news is you already have most of the raw ingredients to fix that. What’s missing is an operating model that converts “platform signals” into business outcomes: predictable costs, cleaner estates, and faster response when data is wrong.

In this post I’ll walk through three practical patterns:

  • using FUAM as a telemetry backbone for FinOps that people will actually use
  • using the same signals for stale workspace detection (without manual audits)
  • combining Microsoft Purview lineage with usage signals to identify incorrect datasets that are actively being consumed—and contain the blast radius

Along the way, I’ll stay grounded in business value: what these ideas buy you in dollars, time, and trust.
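As a taste of the second pattern, here is a self-contained sketch of stale workspace detection. The field names (workspace, last_activity) are hypothetical stand-ins for whatever your FUAM activity table actually exposes; the logic is just “no recorded activity inside the idle window.”

```python
from datetime import datetime, timedelta

# Hedged sketch: row shape is a hypothetical stand-in for a FUAM activity table.
def stale_workspaces(activity_rows, as_of, max_idle_days=90):
    """Flag workspaces with no recorded activity in the idle window."""
    cutoff = as_of - timedelta(days=max_idle_days)
    return sorted(
        row["workspace"]
        for row in activity_rows
        if row["last_activity"] < cutoff
    )

rows = [
    {"workspace": "Finance-Dev", "last_activity": datetime(2024, 1, 5)},
    {"workspace": "Sales-Prod", "last_activity": datetime(2024, 6, 1)},
]
print(stale_workspaces(rows, as_of=datetime(2024, 6, 15)))  # → ['Finance-Dev']
```

Run on a schedule, the same query replaces the manual audit: the output is a candidate list for archival, not an automatic delete.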

Continue reading “From Telemetry to Trust: Using FUAM + Purview Lineage to Make Fabric Governance Pay Off”

The Chief Risk Officer’s Quiet Obsession: Data Platforms and Data Products

A Chief Risk Officer (CRO) at a financial services (FSS) corporation rarely wakes up thinking, “I can’t wait to talk about data architecture today.”

But they do wake up thinking about something that inevitably leads back to it:

Can I trust what we’re about to tell the Board, the regulator, and the market—especially when conditions get ugly?

That question is why the CRO cares deeply about your Data Platform and your Data Products. Not as “tech initiatives,” but as the machinery that turns risk from opinions and spreadsheets into repeatable, auditable decisions the business can stand behind.

In this post, I’ll connect the CRO’s mandate to the practical realities of platforms and products—and why getting this right is a risk control, not a nice-to-have. Along the way, you’ll see why risk management and operational resilience don’t live in policy binders—they live in data.

Continue reading “The Chief Risk Officer’s Quiet Obsession: Data Platforms and Data Products”

Fabric-CICD Is Official Now. That Changes the Conversation.

If you’ve been building in Microsoft Fabric long enough to feel the friction, you already know the moment: the work is “done,” the PR is merged, and then deployment becomes a mix of careful clicks, environment tweaks, and crossed fingers.

That’s exactly why fabric-cicd (often written as Fabric-CICD) getting official support matters. It’s not just another community accelerator to admire—it’s a signal that code-first deployment is now a first-class part of the Fabric lifecycle story.

In this post I’ll lay out what Fabric-CICD is, why “official” changes its value, and where it fits alongside Git integration and deployment pipelines—so you can decide if it belongs in your Microsoft Fabric delivery path.

Continue reading “Fabric-CICD Is Official Now. That Changes the Conversation.”

Syntax Was Never the Hard Part: What AI Coding Misses in Legacy Modernization

There’s a familiar storyline making the rounds right now: point an AI coding assistant at a legacy application, translate the COBOL (or FORTRAN, or PL/I, or SAS, or VB 6.0), and watch a modern system emerge on the other side.

It’s a comforting idea because it frames modernization as a language problem. And language problems are the kind of problems we’re used to solving with tools.

But most modernization programs don’t fail because the engineers can’t learn the syntax. They fail because the organization can’t recover the intent.

In this post, I want to make a simple case: AI-assisted coding can absolutely accelerate modernization, but it doesn’t remove the hard parts of modernization. Those hard parts live upstream and downstream from “write code”: the “why,” the evidence, the governance, and the operational reality of running real systems under real constraints.

Continue reading “Syntax Was Never the Hard Part: What AI Coding Misses in Legacy Modernization”

Stop Picking a “Winner”: Data Product Interoperability Between Databricks and Fabric in Financial Services

Financial services teams have a familiar argument: “Are we a Databricks shop or a Fabric shop?” It sounds like a strategic question, but it usually hides the real problem—different parts of the business need different ways to use the same data, under tight controls, with clear auditability.

When Databricks and Microsoft Fabric interoperate at the data product level, the conversation shifts from which platform to how the data must be used: BI and semantic models, heavy Spark engineering, real-time analytics, governed sharing across boundaries, or advanced ML. The platform becomes a means, not the decision.

In this post I’ll lay out what “data product level interoperability” looks like in practice, why it enables responsible best-of-breed choices in regulated environments, and how it plays out in both directions: Databricks → Fabric and Fabric → Databricks.

Continue reading “Stop Picking a “Winner”: Data Product Interoperability Between Databricks and Fabric in Financial Services”

Beyond the Medallion: Building Fabric Data Products with Schemas, Materialized Lake Views, and a “Surface Area” Contract

If you’ve been around modern analytics platforms for more than five minutes, you’ve probably built (or inherited) a medallion architecture: bronze → silver → gold. It’s familiar, it’s easy to draw on a whiteboard, and it’s often the first stable pattern teams reach for.

But there’s a quiet problem hiding in that simplicity: the number of sublayers tends to grow, and the complexity of each layer tends to balloon. Before long, you’re not designing a data product—you’re running an assembly line of multi-step transforms, hand-managed orchestration, and fragile dependencies.

Microsoft Fabric is starting to give us a different move: instead of treating transformation as a few “big” layers, you can treat it as a series of small, composable steps—and let the platform manage the dependency graph.

In this article, I’m going to connect three ideas:

  • Lakehouse schemas as your unit of organization (and the boundary between “internal plumbing” and “published contract”)
  • Materialized Lake Views as the declarative engine that builds (and refreshes) a dependency graph for you
  • a “surface area” schema designed to be shortcutted into other workspaces—so each workspace becomes an “analytical microservice” with its own interface, security boundary, and versioning story

Along the way, we’ll introduce a pragmatic versioning approach: create a new schema for major versions so breaking changes get semantic versioning “for free.”
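To sketch the declarative step, here is a small helper that builds a Materialized Lake View definition. The DDL shape follows Fabric’s Materialized Lake View syntax as I understand it; the schema and table names (gold_v1, silver.orders) are hypothetical, and the versioned schema name illustrates the “new schema per major version” idea.

```python
# Hedged sketch: assumes a Fabric Spark notebook; names are hypothetical.
def mlv_ddl(schema: str, view: str, select_sql: str) -> str:
    """Build a declarative MLV definition; Fabric derives the refresh
    dependency graph from these statements, not from hand-built orchestration."""
    return (
        f"CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS {schema}.{view}\n"
        f"AS\n{select_sql}"
    )

ddl = mlv_ddl(
    "gold_v1",           # major version lives in the schema name
    "daily_sales",
    "SELECT order_date, SUM(amount) AS total_amount\n"
    "FROM silver.orders GROUP BY order_date",
)
# In a notebook you would then run: spark.sql(ddl)
```

Each definition is one small, composable step; a breaking change ships as gold_v2 while gold_v1 keeps serving existing shortcuts.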

Continue reading “Beyond the Medallion: Building Fabric Data Products with Schemas, Materialized Lake Views, and a “Surface Area” Contract”

The Ideal Microsoft Fabric CI/CD Approach: Git for Change, Deployment Pipelines for Promotion, and a Code-First Escape Hatch

Microsoft Fabric CI/CD has a reputation for being confusing—usually because people look at Git integration and Deployment Pipelines as competing ideas rather than two halves of a single delivery story.

The good news is that the “ideal” approach is not exotic. It’s a handoff:

  • Use Git integration to support real developer workflows (including branching that maps cleanly to isolated workspaces).
  • Use Deployment Pipelines to promote approved changes across environments.
  • When you need richer approvals, tests, and release controls, let traditional tooling—especially GitHub Actions or Azure DevOps Pipelines—orchestrate promotions via Fabric APIs.

In this post, I’ll lay out that end-to-end pattern step-by-step, show where the seams belong, and call out the cost you can’t ignore: workspace sprawl—and the operational discipline required to manage aged workspaces intentionally.
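To ground the “code-first escape hatch,” here is a sketch of what a CI job might construct to promote a Deployment Pipeline stage. The endpoint path and body fields follow my reading of the Fabric REST API’s deployment-pipeline deploy operation; treat the exact shape as an assumption and verify against the current docs. The sketch builds the request without sending it.

```python
import json
from urllib import request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

# Hedged sketch: endpoint and payload fields are my reading of the Fabric
# REST API's deployment-pipeline deploy operation; verify before use.
def build_deploy_request(pipeline_id: str, source_stage_id: str,
                         target_stage_id: str, token: str) -> request.Request:
    """Construct (but do not send) the stage-promotion call."""
    body = json.dumps({
        "sourceStageId": source_stage_id,
        "targetStageId": target_stage_id,
        "note": "Promoted by CI after checks passed",
    }).encode()
    return request.Request(
        url=f"{FABRIC_API}/deploymentPipelines/{pipeline_id}/deploy",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# In CI you would send it with: request.urlopen(build_deploy_request(...))
```

The value of the pattern is that approvals and tests live in GitHub Actions or Azure DevOps, while the promotion itself stays a single, auditable API call.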

Continue reading “The Ideal Microsoft Fabric CI/CD Approach: Git for Change, Deployment Pipelines for Promotion, and a Code-First Escape Hatch”

The NotebookUtils Gems I Wish More Fabric Notebooks Used

Most Fabric notebook code I review has the same telltale shape: a little Spark, a hardcoded path (or three), and just enough glue logic to “get it to run.” And then, a month later, someone copies it into another workspace and everything breaks.

NotebookUtils is one of the easiest ways to avoid that fate. It’s built into Fabric notebooks, it’s designed for the common “day two” problems (orchestration, configuration, identities, file movement), and it’s still surprisingly underused. NotebookUtils is also the successor to mssparkutils—backward compatible today, but clearly where Microsoft is investing going forward.

In this post, I’m going to do two things:

  • Give you a quick, practical orientation to NotebookUtils in Fabric.
  • Walk through the functions I reach for most often—especially the ones I don’t see enough in real projects: runtime.context, runMultiple()/validateDAG(), variableLibrary.getLibrary(), fs.fastcp(), fs.getMountPath(), credentials.getToken(), and lakehouse.loadTable().

Along the way, I’ll call out a few patterns that make notebooks feel less like “scripts you run” and more like reusable components in Microsoft Fabric data engineering work.
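As a preview of the orchestration pair, here is a sketch of the DAG shape that notebookutils.notebook.runMultiple() consumes, based on the documented activities/dependencies format. The notebook names and args are hypothetical.

```python
# Hedged sketch: DAG format per NotebookUtils' runMultiple documentation;
# notebook names and args are hypothetical.
def build_dag(timeout_seconds: int = 3600) -> dict:
    """Describe two notebooks where 'transform' waits on 'ingest'."""
    return {
        "activities": [
            {"name": "ingest", "path": "nb_ingest",
             "timeoutPerCellInSeconds": 600, "args": {"env": "dev"}},
            {"name": "transform", "path": "nb_transform",
             "timeoutPerCellInSeconds": 600, "dependencies": ["ingest"]},
        ],
        "timeoutInSeconds": timeout_seconds,
        "concurrency": 2,
    }

# Inside a Fabric notebook you would then run:
#   notebookutils.notebook.validateDAG(build_dag())  # catch wiring mistakes early
#   notebookutils.notebook.runMultiple(build_dag())
```

Validating the DAG before running it is the cheap habit that turns a notebook from “script you run” into a component other pipelines can trust.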

Continue reading “The NotebookUtils Gems I Wish More Fabric Notebooks Used”