Releases and CI/CD in Microsoft Fabric — with Variable Libraries That Keep Meaning Stable

I keep saying the quiet part out loud: a modern warehouse ships meaning and trust, not just tables. If meaning changes invisibly, trust evaporates. Releases, Release Flow, and CI/CD in Microsoft Fabric are how you move quickly and keep confidence—by making change observable, reversible, and governed. Fabric’s Variable Library and a deliberate, database‑level metadata library are the glue that makes this work day to day.


A release in data: shipping meaning deliberately

A release in data engineering is a versioned bundle—models, DDL, pipelines, notebooks, semantic definitions, and the permissions posture—promoted through environments with intent and traceability. In Fabric, Deployment Pipelines formalize that path (Dev → Test → Prod), including stage‑specific rules that swap connections and parameters so the same artifact behaves correctly in each stage. This keeps tests real but safe and turns promotion into a controlled, reversible act.

Staging should mirror production closely enough that behavior is predictable. Use OneLake Shortcuts to expose prod‑shaped data without copying petabytes, so performance and edge cases surface before users do.
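
To make that concrete, here is a minimal sketch of a Test‑stage notebook reading prod‑shaped data through a shortcut. The table name bronze_sales, the column order_date, and the three‑day window are illustrative assumptions, not anything Fabric provides out of the box:

    from pyspark.sql import functions as F

    # A shortcut surfaces like any other Lakehouse table; the read happens
    # in place, so no production bytes are copied into this workspace.
    window_days = 3  # in practice, bind this per stage via a Variable Library value

    ci_slice = (
        spark.read.table("bronze_sales")
             .filter(F.col("order_date") >= F.date_sub(F.current_date(), window_days))
    )
    ci_slice.createOrReplaceTempView("ci_sales_slice")  # downstream tests query this slice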


CI in Fabric: prevent “looks fine locally” from reaching people

CI earns its keep the moment it blocks a bad deploy. In Fabric, keep the spine simple:

  • Git integration ties workspaces to branches, making every change reviewable and reproducible. (Mind the “supported items” list as it evolves.)
  • Validate invariants early: compile, lint, and assert keys, referential links, distribution bounds, and metric semantics in your pipelines/notebooks. When CI fails, the business doesn’t. (A minimal sketch follows this list.)
  • Keep shape realistic: test with shortcuts and stage‑correct connections so volume, permissions, and latency aren’t surprises later.
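
Here is one shape such an invariant gate can take in a Fabric notebook. The table and column names (silver_orders, silver_customers, order_id, customer_id) are assumptions for illustration; wire the check into whichever notebook or pipeline step runs before promotion:

    from pyspark.sql import functions as F

    orders = spark.read.table("silver_orders")
    customers = spark.read.table("silver_customers")

    # Key invariants: the business key must be present and unique.
    null_keys = orders.filter(F.col("order_id").isNull()).count()
    dup_keys = orders.groupBy("order_id").count().filter("count > 1").count()

    # Referential invariant: every order must resolve to a known customer.
    orphans = orders.join(customers, "customer_id", "left_anti").count()

    violations = {k: v for k, v in
                  {"null_keys": null_keys, "dup_keys": dup_keys, "orphans": orphans}.items()
                  if v > 0}
    assert not violations, f"CI invariant check failed: {violations}"  # failing here blocks the deploy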

CD in Fabric: promote with intent, cut over without drama

Continuous Delivery is less about auto‑pushing and more about predictable promotion:

  • Promote via Deployment Pipelines and stage rules; treat backfills as first‑class release artifacts you observe in the Monitoring hub.
  • Use Power BI App audiences to canary new semantic models and reports to a small internal group; widen only when drift and performance are acceptable.
  • When you outgrow clicking, automate promotion with the fabric‑cicd library in GitHub Actions or Azure DevOps, using service principals for least privilege; a minimal sketch follows below.
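
A promotion step with fabric‑cicd can look roughly like this. The workspace ID, repository layout, and item types are placeholders, and the service‑principal wiring depends on your setup, so treat it as a starting point and check the fabric‑cicd documentation for the currently supported item types and authentication options:

    from azure.identity import ClientSecretCredential
    from fabric_cicd import FabricWorkspace, publish_all_items, unpublish_all_orphan_items

    # Service principal secrets are assumed to come from pipeline variables/secrets.
    credential = ClientSecretCredential(
        tenant_id="<tenant-id>", client_id="<app-id>", client_secret="<secret>"
    )

    target = FabricWorkspace(
        workspace_id="<test-or-prod-workspace-id>",
        repository_directory="./workspace",          # the Git-synced item definitions
        item_type_in_scope=["Notebook", "DataPipeline", "SemanticModel"],
        token_credential=credential,
    )

    publish_all_items(target)             # create/update items from source control
    unpublish_all_orphan_items(target)    # remove items that no longer exist in the repo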

Where Release Flow fits (and why it works for data)

When we say “reflow,” we mean Release Flow—Microsoft’s trunk‑based model with sprint‑scoped release branches and cherry‑picked hotfixes. Keep main moving; cut a release branch to stabilize; merge fixes to main first, then cherry‑pick to the release. Map Dev to main, Test/Prod to the release branch, and promote through your pipeline. It’s fast, auditable, and avoids “fixed in prod, broken next release.”


Variable Library: stage‑aware configuration without hard‑coding

Fabric’s Variable Library is a workspace item that holds named variables and their values per pipeline stage. Items like Data Pipelines and Dataflow Gen2 can consume these variables directly, so the same artifact resolves the right connection, path, or toggle in Dev/Test/Prod—no string‑surgery, no accidental “Test reading Prod.” This is application lifecycle management (ALM) for configuration, not a bag of ad‑hoc parameters.
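
Pipelines and dataflows pick these values up through their own configuration surfaces (dynamic content, parameters); notebooks can also read them in recent runtimes via notebookutils. The exact call and the names below (a library called ReleaseConfig with bronze_lake_path and ci_window_days) are assumptions to show the shape—verify the syntax against the notebookutils documentation for your runtime:

    # notebookutils is built into the Fabric notebook runtime.
    # The "$(/**/<library>/<variable>)" reference form is an assumption based on
    # current docs; confirm it before relying on it.
    bronze_path = notebookutils.variableLibrary.get("$(/**/ReleaseConfig/bronze_lake_path)")
    window_days = notebookutils.variableLibrary.get("$(/**/ReleaseConfig/ci_window_days)")

    # Same code in Dev/Test/Prod; only the stage-bound values differ.
    df = spark.read.format("delta").load(bronze_path)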

In practice, Variable Library becomes your single source for things like:

  • connection aliases and lake paths (e.g., sales_wh_conn, bronze_lake_path),
  • time windows and data slices for CI runs (e.g., “last 3 days”),
  • feature toggles (e.g., enable a new scoring routine only in Test),
  • stage‑specific destinations (schemas, lake folders) used by pipelines and dataflows.

Because values are bound by stage, a promotion flips behavior without editing code—exactly what you want when reliability and auditability matter.


Safe development and effective testing, Fabric‑style

Develop in isolated workspaces tied to branches. Use Variable Library values to bind stage‑correct connections and “slice” windows; validate contracts from your metadata schema before any model rebuild or backfill runs. Promote with Deployment Pipelines; canary via App audiences; observe in Monitoring; and roll back quickly because promotion was a metadata change, not a long‑running fix‑by‑hand.


Reliability and governance as properties of the system

Define freshness, completeness, and correctness SLOs; then let your CD gates enforce them. Sensitivity labels and Purview’s Unified Catalog close the loop on governance and lineage so your release record isn’t just technical—it’s compliant. When auditors ask, you don’t reconstruct history; you point to it.
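
As one narrow example of such a gate, a freshness check before widening an audience might look like the sketch below. The table, column, and 24‑hour threshold are illustrative assumptions, and it presumes load_timestamp is stored in UTC:

    from datetime import datetime, timedelta
    from pyspark.sql import functions as F

    FRESHNESS_SLO = timedelta(hours=24)  # illustrative SLO

    last_load = (
        spark.read.table("gold_daily_sales")                  # hypothetical gold table
             .agg(F.max("load_timestamp").alias("last_load"))
             .collect()[0]["last_load"]
    )

    assert last_load is not None, "Freshness gate failed: gold_daily_sales has no loads"
    lag = datetime.utcnow() - last_load                       # assumes UTC timestamps
    assert lag <= FRESHNESS_SLO, f"Freshness SLO breached: last load was {lag} ago"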


The payoff

With Release Flow, CI/CD, Variable Libraries, and a database‑level metadata library, your warehouse stops being fragile plumbing and becomes a platform. Teams ship more often with less drama. Stakeholders trust numbers because the path to those numbers is visible, repeatable, and reversible.

That’s the bar we set: move fast, keep meaning stable, and let your pipeline tell the story of how you did it.

Bronze Is Live Now: what Mirroring + Shortcuts really change about cost, archives, and getting to Silver

For years, “Bronze” quietly became a parking lot for periodic snapshots: copy a slice from the source every hour/day, write new files, repeat. It worked, but it was noisy and expensive—lots of hot storage, lots of ingest compute, and a tendency to let “temporary” landing data turn into de‑facto history.

Fabric upends that with two primitives that encourage Zero Unmanaged Copies:

  • Mirroring: a service‑managed, near–real‑time replica of your database/tables into OneLake, with replication compute included and a capacity‑based allowance of free mirrored storage (1 TB per CU; e.g., an F64 includes 64 TB just for mirrored replicas). You still pay for downstream query/transform compute, but not for the continuous ingest job itself. Retention for mirrored data is explicitly managed and—by default for new mirrors since mid‑June 2025—kept lean (1 day) unless you raise it.
  • Shortcuts: pointers that let Fabric read in place from ADLS/S3/other OneLake locations (and even across tenants via External Data Sharing, which creates a shortcut in the consumer’s tenant rather than duplicating data). That means zero OneLake bytes for the data itself; you pay storage where the data already lives, and Fabric charges only for the compute you use to read/transform it.

Add Real‑Time Intelligence/Eventhouse or Eventstreams, and “Bronze” becomes the live edge: the freshest, governed view of your sources—either replicated (Mirroring) or virtualized (Shortcuts)—instead of a pile of periodic copies.

Continue reading “Bronze Is Live Now: what Mirroring + Shortcuts really change about cost, archives, and getting to Silver”

The Microsoft Fabric Delta Change Data Feed (CDF)

In Microsoft Fabric you’re sitting on top of Delta Lake tables in OneLake. If you flip on Delta Change Data Feed (CDF) for those tables, Delta records row‑level inserts, deletes, and updates (including pre‑/post‑images for updates) and lets you read just the changes between versions. That makes incremental processing for SCDs (Type 1/2) and Data Vault satellites dramatically simpler and cheaper, because you aren’t rescanning entire tables—just consuming the “diff.” Fabric’s Lakehouse supports this fully because it’s natively Delta. Mirrored databases land in OneLake as Delta too, but (as of September 2025) Microsoft hasn’t documented a supported way to enable CDF on the mirrored tables themselves. You can still analyze mirrored data with Spark via Lakehouse shortcuts, or source CDC upstream (Real‑Time hub) and write to your own Delta tables with CDF enabled.
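
A minimal sketch of the mechanics on a native Lakehouse table (the table name and version bounds are illustrative):

    # Enable the change feed, then read only what changed between two versions.
    spark.sql("""
        ALTER TABLE silver_customers
        SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
    """)

    changes = (
        spark.read.format("delta")
             .option("readChangeFeed", "true")
             .option("startingVersion", 15)   # illustrative version bounds
             .option("endingVersion", 20)
             .table("silver_customers")
    )

    # _change_type is insert, delete, update_preimage, or update_postimage;
    # drop the pre-images when merging changes into an SCD target.
    changes.filter("_change_type != 'update_preimage'").show()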

This feature is underutilized today, but once Mirrored Databases support CDF, it’s going to be a must‑have in every data engineer’s toolkit.

Continue reading “The Microsoft Fabric Delta Change Data Feed (CDF)”

FabCon Feature: Fabric Real‑Time Intelligence

Real‑Time Intelligence (RTI) is the part of Fabric that treats events and logs as first‑class citizens: you connect live streams, shape them, persist them, query them with KQL or SQL, visualize them, and trigger actions—all without leaving the SaaS surface. Concretely, RTI centers on Eventstream (ingest/transform/route), Eventhouse (KQL databases), Real‑Time Dashboards / Map, and Activator (detect patterns and act). That tight loop—capture → analyze → visualize/act—now covers everything from IoT telemetry to operational logs and clickstream analytics.

Continue reading “FabCon Feature: Fabric Real‑Time Intelligence”

FabCon Feature: OneLake Security

Fabric’s second European conference didn’t just showcase new toys; it tightened the platform’s governance spine. Microsoft moved OneLake Security into full preview and added a Secure tab to the OneLake catalog—a single place to see and manage data‑level permissions across items. That elevates lake‑native RBAC from a feature to a first‑class control surface, so product teams can set access once, at the path where the bytes live, and have it enforced consistently.

Continue reading “FabCon Feature: OneLake Security”

Implementing Stars and Galaxies in Power BI

Power BI rewards clean dimensional models—but it also punishes sloppy ones. This post walks through how to implement star and galaxy schemas in Power BI semantic models, why ambiguous (multiple) filter paths cause headaches, why implicit measures don’t scale beyond the simplest star, and how tightly defined data products keep your BI ecosystem fast, correct, and governable. Because this is such an important topic, I’ve included links to references with each point.

Continue reading “Implementing Stars and Galaxies in Power BI”