If you work with Microsoft Fabric long enough, it’s easy to come away with the impression that “real” Fabric means “medallion everywhere.” The official docs walk through Bronze, Silver, and Gold patterns for lakehouses. The learning paths lean on medallion as the canonical example. Fabric clearly makes medallion a first‑class citizen.
But that doesn’t mean your data platform – or your data products – must be medallion‑shaped.
In a world of managed, domain‑aligned data products and Data Mesh thinking, what matters most is the contract at the edges: the inputs you accept, the outputs you guarantee, and the behaviors you commit to over time. Inside the boundary of a data product, you have more architectural freedom than many teams allow themselves.
In this post, I’ll walk through three ideas:
- Fabric is medallion‑forward, but not medallion‑only.
- For data products, inputs and outputs matter far more than internal state.
- Internal architecture should serve engineering excellence, not a single prescriptive pattern – illustrated with small examples from financial services, wealth management, and insurance.
By the end, the goal is simple: when you design a Fabric data product, you should feel comfortable treating medallion as one option in a toolbox, not as a mandatory religion.
Fabric’s stance: medallion as a first‑class pattern
Microsoft has been very clear that Fabric is a natural home for medallion‑style lakehouse architectures. Official guidance describes how to organize data into Bronze, Silver, and Gold layers in OneLake, with concrete patterns for lakehouses and warehouses.
Training content and community examples reinforce this view: ingest raw data into Bronze, clean and conform in Silver, and curate optimized structures for analytics in Gold.
Fabric then layers on domains and workspaces as the organizing surface for data mesh–style, business‑oriented grouping.
Taken together, it’s not surprising that many teams assume:
“If I’m doing Fabric correctly, my data products must be medallion internally.”
That’s an understandable starting point. It’s also unnecessarily restrictive.
Data products: contracts at the edges, freedom in the middle
When we talk about data products – especially in a Fabric + domains context – we’re talking about assets that behave like products:
- They have clear, documented inputs.
- They expose well‑defined outputs (schemas, semantics, SLOs, access patterns).
- They are owned by a domain‑aligned team.
- They change under versioned, intentional control.
Within that framing, it’s useful to distinguish:
- Foundational data products – domain‑canonical truths (e.g., “Customer,” “Position,” “Policy”).
- Derived data products – analytic, feature, or decision‑oriented views built on top (e.g., churn risk features, claims severity segments, asset allocation diagnostics).
Across both types, consumers mostly care about three things:
- Can I rely on the semantics of this output?
- Can I rely on its timeliness and quality?
- Can I access it using the tools and methods we’ve agreed on?
They generally do not care whether you used a pure medallion stack, a data vault pattern in the middle, an aggressively flattened Gold table, or a carefully normalized warehouse.
That internal structure is an implementation detail of the product.
Once you adopt that stance, the job of architecture changes. Instead of asking, “Can I force everything into medallion?” you can start asking, “Given the input/output contract of this product, what internal shape will best support reliability, evolvability, and cost?”
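Those edge guarantees can even be made machine‑checkable. Here is a minimal, hypothetical sketch in Python – the OutputContract class, column names, and SLO values are all illustrative assumptions, not a Fabric API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical, minimal "output contract" for a data product: the
# consumer-facing schema plus a freshness SLO. All names are illustrative.
@dataclass(frozen=True)
class OutputContract:
    name: str
    required_columns: frozenset
    max_staleness: timedelta

def check_output(contract, rows, last_refresh_utc, now_utc):
    """Return a list of contract violations (empty list means the output is OK)."""
    violations = []
    for row in rows:
        missing = contract.required_columns - row.keys()
        if missing:
            violations.append(f"missing columns: {sorted(missing)}")
            break  # one schema violation is enough to flag the batch
    if now_utc - last_refresh_utc > contract.max_staleness:
        violations.append(f"stale: last refresh {last_refresh_utc.isoformat()}")
    return violations

contract = OutputContract(
    name="fact_card_transaction",
    required_columns=frozenset({"transaction_id", "customer_id", "amount_home_currency"}),
    max_staleness=timedelta(minutes=5),
)
```

The point is not this particular code, but that the contract lives at the boundary: a check like this inspects only what consumers see, never the internal layers.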
Internal architecture as an engineering choice, not a belief system
Inside a Fabric data product boundary, you have a lot of building blocks:
- Lakehouses and warehouses
- Notebooks, Dataflows (Gen1 and Gen2), pipelines
- Shortcuts into other domains and external storage
- KQL databases and event streams for real‑time work
Medallion is just one way to coordinate those assets.
Internal architectural choices should primarily serve engineering concerns such as:
- Data shape and volatility: are you processing fast‑moving events, slow‑changing reference data, or both?
- Access patterns: do consumers mostly scan, slice by time, join on keys, or query events?
- Governance and lineage: how traceable must transformations be, and for how long?
- Regulatory and audit needs: do you have to preserve raw data for a defined retention period?
- Team skills and operational maturity: what can your team realistically support?
For some data products, a textbook Bronze–Silver–Gold flow in Fabric is ideal. For others, the right answer may be:
- A single curated warehouse with dimensional models, no explicit Bronze/Silver layers.
- A data vault core feeding multiple “Gold‑ish” marts.
- A streaming‑heavy structure using event hubs and KQL databases as an internal state store.
- A thin wrapper that mostly virtualizes and joins other products via shortcuts.
As long as the product honors its public contract, all of those are valid. The internal choice is about engineering excellence, not conformity to a pattern.
Fabric data products: medallion and beyond
To make this concrete, it helps to look at small, deliberately simplified examples from a few industries. In all of them, we’ll keep the outer contract simple and show that multiple internal models are reasonable choices inside the same managed data product.
Financial services: “Card Transactions” as a foundational product
Imagine a retail bank implementing a Card Transactions foundational data product within a “Payments” domain.
External contract
Inputs (from outside the product):
- Batch files from the card processor
- Streaming authorization decisions from an event hub
- Reference tables from a “Customer” product via shortcuts
Outputs (to consumers):
- A table or view fact_card_transaction in a Fabric warehouse, with columns like: transaction_id, card_id, customer_id, transaction_ts_utc, merchant_category_code, amount_original_currency, amount_home_currency, auth_result.
- A simple semantic model on top for BI and risk analytics.
- An SLA: “New transactions available within 5 minutes of authorization.”
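As a purely illustrative sketch of that output table, here is the fact table shape expressed as DDL, using SQLite as a stand‑in for a Fabric warehouse – the types and constraints are assumptions, not the product’s actual definition:

```python
import sqlite3

# Sketch of the consumer-facing fact table from the contract above, using
# SQLite as a stand-in for a Fabric warehouse; types and names are illustrative.
DDL = """
CREATE TABLE fact_card_transaction (
    transaction_id            TEXT PRIMARY KEY,
    card_id                   TEXT NOT NULL,
    customer_id               TEXT NOT NULL,
    transaction_ts_utc        TEXT NOT NULL,   -- ISO-8601 timestamp
    merchant_category_code    TEXT,
    amount_original_currency  REAL NOT NULL,
    amount_home_currency      REAL NOT NULL,
    auth_result               TEXT NOT NULL    -- e.g. 'APPROVED' / 'DECLINED'
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO fact_card_transaction VALUES (?,?,?,?,?,?,?,?)",
    ("t-1", "c-9", "cust-42", "2024-05-01T10:00:00Z", "5411", 12.50, 11.80, "APPROVED"),
)
```

Whatever sits behind this table – medallion, warehouse‑first, or event‑centric – consumers only depend on this shape and the SLA.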
Nothing in that contract requires medallion. Inside the product, you might see at least three viable structures.
Option A: Classic medallion inside the product
- Bronze lakehouse tables: raw ingest from the processor, schema‑on‑read but stored as Delta.
- Silver tables: cleaned, conformed transactional records; customer and card keyed consistently.
- Gold tables: warehouse fact table plus a handful of dimensions (merchant, card, customer) optimized for BI.
This can work well if:
- You need a preserved raw view for audit.
- You expect substantial data quality work in Silver.
- Your team is comfortable with medallion patterns.
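To make Option A tangible, here is a toy sketch of the Bronze → Silver → Gold flow, with plain Python lists standing in for Delta tables; all field names and cleaning rules are invented for illustration:

```python
# Minimal sketch of the Bronze -> Silver -> Gold flow in Option A, with plain
# Python lists standing in for Delta tables. Field names are illustrative.

bronze = [  # raw ingest, as delivered by the processor
    {"txn": "T1", "cust": " 42 ", "amt": "12.50", "result": "A"},
    {"txn": "T2", "cust": "42",   "amt": "bad",   "result": "A"},  # dirty row
]

def to_silver(rows):
    """Clean and conform: trim keys, cast amounts, drop unparseable rows."""
    out = []
    for r in rows:
        try:
            out.append({
                "transaction_id": r["txn"],
                "customer_id": r["cust"].strip(),
                "amount": float(r["amt"]),
                "auth_result": {"A": "APPROVED", "D": "DECLINED"}[r["result"]],
            })
        except (ValueError, KeyError):
            pass  # in practice: route to a quarantine table instead
    return out

def to_gold(rows):
    """Curate for BI: per-customer approved spend, the shape consumers query."""
    totals = {}
    for r in rows:
        if r["auth_result"] == "APPROVED":
            totals[r["customer_id"]] = totals.get(r["customer_id"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

Each layer exists to absorb a specific class of work: Bronze preserves, Silver cleans, Gold serves.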
Option B: Direct curated warehouse with minimal Bronze
- A thin Bronze layer for raw file retention only (no heavy querying).
- Transformations land directly into a well‑designed warehouse fact table.
- Additional “gold‑like” views are just semantic model artifacts, not separate storage.
This can work well if:
- The source feeds are already reasonably clean.
- You care more about warehouse semantics than lakehouse flexibility.
- You want to minimize layers and latency.
Option C: Event‑centric internal model
- KQL database or event store for authorization events.
- Micro‑batch process that aggregates and materializes events into the fact_card_transaction table.
- Optional Bronze retention of original messages for compliance.
This can work well if:
- Real‑time fraud analytics or operational monitoring are first‑class requirements.
- Downstream consumers care about events as much as static facts.
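A minimal sketch of Option C’s micro‑batch step might look like the following – the event shape and type names are assumptions, not a real processor’s schema:

```python
# Sketch of Option C's micro-batch step: fold a stream of raw authorization
# events into one row per transaction for fact_card_transaction.
# Event shape and type names are illustrative.

events = [
    {"transaction_id": "T1", "type": "AUTH_REQUESTED", "amount": 20.0},
    {"transaction_id": "T1", "type": "AUTH_APPROVED"},
    {"transaction_id": "T2", "type": "AUTH_REQUESTED", "amount": 5.0},
    {"transaction_id": "T2", "type": "AUTH_DECLINED"},
]

def materialize(events):
    """Collapse an event stream into one fact row per transaction."""
    facts = {}
    for e in events:
        row = facts.setdefault(e["transaction_id"], {"auth_result": None, "amount": None})
        if e["type"] == "AUTH_REQUESTED":
            row["amount"] = e["amount"]
        elif e["type"] == "AUTH_APPROVED":
            row["auth_result"] = "APPROVED"
        elif e["type"] == "AUTH_DECLINED":
            row["auth_result"] = "DECLINED"
    return facts

facts = materialize(events)
```

The event store remains the internal source of truth; the materialized fact table is just the contract surface consumers query.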
From the perspective of downstream consumers – fraud models, finance reporting, marketing analytics – all three options can expose the same fact_card_transaction and honor the same SLA. The internal choice is about latency, complexity, and operational fit, not allegiance to medallion.
Wealth management: “Portfolio Holdings” as a foundational product
In a wealth management firm, imagine a Portfolio Holdings foundational data product within an “Investments” domain.
External contract
Inputs:
- Daily positions from multiple custodians
- Security master data from a “Reference Data” product
- FX rates from a “Market Data” domain
Outputs:
- A fact_holding table with: as_of_date, portfolio_id, security_id, quantity, market_value_base_ccy, accrued_interest.
- A dim_portfolio and dim_security for reporting.
- A guarantee: “Positions are complete and reconciled by 07:00 local time.”
Here are two deliberately simplified, but realistic, internal models.
Model 1: Medallion with conformance in Silver
- Bronze: custodian files land “as‑is” by source and as‑of date.
- Silver: conformance logic maps each custodian’s idea of a portfolio and security into canonical IDs; reconciliation checks run here.
- Gold: fact_holding and dimensional tables in a Fabric warehouse.
This matches the mental model many teams already hold. It’s especially attractive when:
- New custodians may be onboarded over time.
- You need to replay and debug conformance logic.
Model 2: Data vault core + Gold marts
- Raw landing still exists but is not treated as a formal Bronze layer.
- A vault‑style core handles:
  - Hubs: hub_portfolio, hub_security
  - Links: link_portfolio_security
  - Satellites: attributes and position history with full change tracking
- “Gold” becomes one or more dimensional marts derived from the vault core.
This is overkill for very small implementations, but becomes attractive when:
- Regulatory expectations require detailed historical lineage of changes.
- Multiple, slightly different “holding views” must be derived for different regions or distribution channels.
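To illustrate Model 2’s moving parts, here is a deliberately toy vault core in Python – hashed business keys feeding hubs, a link, and an append‑only satellite. The naming follows the hub/link/satellite idea loosely; it is a sketch, not a full Data Vault implementation:

```python
import hashlib

# Toy sketch of a vault-style core: hubs keyed by hashed business keys,
# a link joining them, and an append-only satellite carrying position history.
# Structure and names are illustrative, not a complete Data Vault spec.

def hash_key(*business_keys):
    """Deterministic surrogate key derived from the business key(s)."""
    return hashlib.sha256("|".join(business_keys).encode()).hexdigest()[:16]

hub_portfolio = {}               # hash key -> business key
hub_security = {}
link_portfolio_security = set()  # (portfolio_hk, security_hk) pairs
sat_position = []                # append-only history of position attributes

def record_position(portfolio_id, security_id, as_of_date, quantity):
    p_hk = hash_key(portfolio_id)
    s_hk = hash_key(security_id)
    hub_portfolio[p_hk] = portfolio_id
    hub_security[s_hk] = security_id
    link_portfolio_security.add((p_hk, s_hk))
    sat_position.append({
        "portfolio_hk": p_hk, "security_hk": s_hk,
        "as_of_date": as_of_date, "quantity": quantity,
    })

record_position("PF-1", "SEC-ABC", "2024-05-01", 100)
record_position("PF-1", "SEC-ABC", "2024-05-02", 150)  # change appended, never overwritten
```

The appeal for regulated contexts is visible even in this toy: the satellite keeps every historical state, while the “Gold” marts are just views derived from it.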
Again, from the perspective of consumers – performance teams, advisors, risk – both models can produce the same fact_holding structure and SLA. Fabric does not force you to choose one over the other; the choice should reflect governance and maintenance needs.
Insurance: “Claims 360” as a derived product
For an insurer, consider a Claims 360 derived data product in a “Claims” domain.
External contract
Inputs:
- A foundational “Policy” product exposing policy_header and policy_coverage tables.
- A foundational “Claims” product exposing raw claim_header and claim_transaction tables.
- An “External Data” product providing weather events and geospatial enrichments.
Outputs:
- A wide claims_360 table keyed by claim_id that includes:
  - Policy attributes (line of business, deductible, limits).
  - Claim attributes (loss date, reported date, status, reserve amounts).
  - External signals (hail event indicators, flood risk score).
- A small feature store view claims_severity_features for modeling.
- An SLA: “Updated hourly for open claims; daily for closed claims.”
You might build this derived product in at least three different internal shapes.
Shape A: Simple “Gold‑only” wide table
- Shortcuts or queries read the foundational products as if they were your Bronze/Silver.
- A transformation step materializes a single wide claims_360 Delta table in a lakehouse.
- Feature views are simply projections over that table.
This is deliberately simple and can be the right choice when:
- The calculations are straightforward joins and enrichments.
- You want minimal operational overhead.
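A sketch of Shape A’s single transformation step – the upstream shapes and column names here are invented for illustration, not the actual product schemas:

```python
# Sketch of Shape A: one pass that joins upstream product tables into a wide
# claims_360 record. Input shapes and names are illustrative.

policies = {"P-1": {"line_of_business": "property", "deductible": 500}}
claims = [{"claim_id": "C-9", "policy_id": "P-1", "status": "OPEN", "reserve": 2500.0}]
external = {"C-9": {"hail_event": True, "flood_risk_score": 0.2}}

def build_claims_360(claims, policies, external):
    """Enrich each claim with policy attributes and external signals."""
    rows = []
    for c in claims:
        row = dict(c)                                  # claim attributes
        row.update(policies.get(c["policy_id"], {}))   # policy attributes
        row.update(external.get(c["claim_id"], {}))    # external signals
        rows.append(row)
    return rows

claims_360 = build_claims_360(claims, policies, external)
```

There is no internal layering at all here: the upstream products act as the “Bronze/Silver,” and the product materializes only its Gold surface.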
Shape B: Medallion‑within‑derived, for complex logic
- Bronze: snapshots of the upstream product tables at the time of processing (for isolation and repeatability).
- Silver: normalized, intermediate structures to simplify complex business rules (e.g., claim event timelines).
- Gold: the claims_360 table and feature projections.
This becomes helpful when:
- Claims logic is complex and contested, and you want explicit, testable intermediate stages.
- You expect frequent changes to business rules.
Shape C: Hybrid with KQL for evented timelines
- KQL database stores claim events (FNOL, adjuster updates, reserve changes) as an event stream.
- A periodic job materializes a snapshot claims_360 table for BI and downstream features.
- Fast‑moving operational dashboards query KQL directly.
This favors operational observability and real‑time response, at the cost of adding another component.
In all three shapes, the external view of the Claims 360 product is the same: a managed, trustworthy table and a set of features. Internally, you optimize for understandability, latency, and the kinds of questions the business actually asks.
The through‑line: contracts, not purity
Across these examples, the pattern repeats:
- Fabric provides strong support for medallion, and you should absolutely use it where it fits.
- Domains, workspaces, and data product thinking encourage you to treat each product as a boundary with clear inputs and outputs.
- Within that boundary, you get to choose the internal structure that best supports engineering excellence.
A few practical implications for teams working in data products and data mesh on Fabric:
- Design the product contract first. Schemas, semantics, SLAs, and access patterns should be front‑row.
- Be explicit about what is inside the product and what is upstream or downstream.
- Choose internal architectural patterns based on latency, lineage, and maintainability – not fashion.
- Allow different products, even within the same domain, to choose different internal shapes when their needs differ.
- Document internals for maintainers, but keep consumers focused on the contract.
In other words: Fabric is medallion‑forward, not medallion‑only. Treat medallion as a powerful default, a pattern you reach for often – but feel free to deviate when your data product’s contract and constraints point elsewhere.
Pulling it all together
We started with the observation that Microsoft Fabric strongly foregrounds medallion architectures, enough that many teams implicitly assume “Fabric = medallion everywhere.” In the context of managed data products, that assumption is too limiting.
We walked through a different framing:
- Data products live and die by their input/output contracts and behaviors.
- Inside the product boundary, internal architecture is an implementation detail.
- Medallion is a first‑class option in Fabric, but not a mandatory one.
- In financial services, wealth management, and insurance, we can implement the same product contract with multiple internal shapes – medallion, vault‑style, warehouse‑first, or event‑centric.
The call to action is straightforward: when you design your next Fabric data product, write down the contract first. Then, choose the internal structure that best serves your domain’s needs and your team’s capabilities – even if that means stepping outside a strict medallion pattern.