Financial services teams have a familiar argument: “Are we a Databricks shop or a Fabric shop?” It sounds like a strategic question, but it usually hides the real problem—different parts of the business need different ways to use the same data, under tight controls, with clear auditability.
When Databricks and Microsoft Fabric interoperate at the data product level, the conversation shifts from which platform to adopt to how the data must be used: BI and semantic models, heavy Spark engineering, real-time analytics, governed sharing across boundaries, or advanced ML. The platform becomes a means, not the decision.
In this post I’ll lay out what “data product level interoperability” looks like in practice, why it enables responsible best-of-breed choices in regulated environments, and how it plays out in both directions: Databricks → Fabric and Fabric → Databricks.
The data product lens: stable contracts, flexible compute
A data product is more than a table. It’s a well-defined asset that:
- has an owner and a purpose (risk, finance close, fraud, liquidity, client 360)
- has a contract (schema, semantics, quality expectations, refresh cadence)
- has policy and governance (who can access what, under what conditions)
- has an interface for consumption (SQL, BI, notebooks, APIs, sharing protocol)
- can be composed with other data products
- is managed like a product, with a lifecycle: versioned, supported, and eventually retired
Financial services benefits from this framing because it naturally aligns with the controls auditors and regulators care about: minimizing uncontrolled copies, proving lineage, managing entitlements, and separating duties across producers and consumers.
Interoperability then becomes the ability to keep the product contract stable while offering multiple compute and service “faces”—without forcing every team onto one execution engine.
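To make the contract idea concrete, here is a minimal sketch of what a product contract might look like if a team chose to encode it in code. Everything in it (the class, the field names, the example product) is illustrative rather than any standard; the point is that the contract is explicit, reviewable, and versionable while the compute behind it changes.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataProductContract:
    """Illustrative contract shape; field names are hypothetical, not a standard."""
    name: str                          # e.g. "liquidity_positions_gold"
    owner: str                         # accountable team
    purpose: str                       # business purpose (risk, finance close, ...)
    schema: dict                       # column name -> type / semantics
    refresh_cadence: str               # e.g. "hourly"
    quality_expectations: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)  # SQL endpoint, BI model, share, ...

liquidity_positions = DataProductContract(
    name="liquidity_positions_gold",
    owner="treasury-data-products",
    purpose="Intraday liquidity reporting",
    schema={"as_of_ts": "timestamp", "entity_id": "string", "position_usd": "decimal(38,2)"},
    refresh_cadence="hourly",
    quality_expectations=["position_usd is not null", "as_of_ts within the last 2 hours"],
    interfaces=["Fabric SQL analytics endpoint", "Delta Sharing", "Databricks SQL"],
)
```

Whether the contract lives in code, YAML, or a catalog tool matters less than the property it gives you: consumers can rely on it regardless of which engine serves it.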
The practical foundation: open table formats, shortcuts, mirroring, and sharing
This interoperability story works because the “handshake” is not a proprietary pipeline. It’s increasingly based on a few concrete mechanisms that are designed for cross-engine use.
Delta as a common table language
Fabric Lakehouse uses Delta Lake as the standard format for its tables, with Spark under the hood and storage optimizations applied by default.
That matters because Delta tables are broadly readable across engines (and at the file level are parquet-compatible), which reduces the friction of moving compute to where it fits best. Fabric’s documentation even emphasizes parquet-engine compatibility in the context of its optimization approach.
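As a quick illustration of that cross-engine readability, the same Delta table can be read by Spark and by a lightweight Python process with no Spark at all, via the delta-rs bindings (`pip install deltalake`). The path and credential values below are placeholders.

```python
# Engine 1: Spark (a Databricks or Fabric notebook, where `spark` is predefined)
# reads the Delta table directly from ADLS Gen2 (placeholder path).
df = spark.read.format("delta").load(
    "abfss://products@bankdatalake.dfs.core.windows.net/gold/positions"
)

# Engine 2: a lightweight Python process using delta-rs, no Spark required.
from deltalake import DeltaTable

dt = DeltaTable(
    "abfss://products@bankdatalake.dfs.core.windows.net/gold/positions",
    storage_options={  # credential key names per the deltalake/object_store docs
        "azure_client_id": "<sp-client-id>",
        "azure_client_secret": "<sp-client-secret>",
        "azure_tenant_id": "<tenant-id>",
    },
)
pdf = dt.to_pandas()  # the same committed data, no extra copy made
```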
OneLake shortcuts: “use the data where it lives”
OneLake shortcuts act like symbolic links: they let Fabric experiences work with data in-place, avoiding extra edge copies and reducing latency from staging.
They’re also multi-engine by design: Spark, SQL, real-time analytics experiences, and semantic models can all use the same shortcut surface inside OneLake.
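For teams that automate product onboarding, Fabric also exposes a REST API for creating shortcuts. The sketch below shows the general shape of that call; the workspace, item, and connection IDs are placeholders, and the exact payload should be verified against the current API reference before use.

```python
import requests

# Placeholders: real values come from your Fabric workspace and a configured connection.
workspace_id = "<workspace-guid>"
lakehouse_id = "<lakehouse-item-guid>"
token = "<aad-bearer-token>"  # acquired via MSAL / azure-identity

payload = {
    "path": "Tables",          # where the shortcut appears inside the Lakehouse
    "name": "positions_gold",
    "target": {
        "adlsGen2": {
            "connectionId": "<connection-guid>",
            "location": "https://bankdatalake.dfs.core.windows.net",
            "subpath": "/products/gold/positions",
        }
    },
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
```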
Mirroring Azure Databricks Unity Catalog into Fabric
Fabric’s mirroring capability can target Azure Databricks Unity Catalog, and it is explicitly designed to let Fabric workloads read Unity Catalog–registered data without data movement or replication. Only the catalog structure is mirrored; the underlying data is accessed through shortcuts.
It also comes with ready-to-use analytical entry points (including a SQL analytics endpoint) and supports Power BI access patterns like Direct Lake for reporting.
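From a consumer’s perspective, the mirrored catalog then looks like any other SQL endpoint. A hedged sketch, assuming pyodbc with ODBC Driver 18 installed; the server and database names are placeholders copied from the endpoint’s connection settings in Fabric.

```python
import pyodbc

# Connection values are placeholders; copy the real server name from the
# SQL analytics endpoint's settings in Fabric.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<mirrored-catalog>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# Query a mirrored Unity Catalog table with plain T-SQL (illustrative names).
rows = conn.execute(
    "SELECT TOP 10 entity_id, position_usd FROM finance_gold.liquidity_positions"
).fetchall()
```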
Delta Sharing: governed sharing across platform boundaries
Delta Sharing is an open protocol for secure data sharing across organizations and platforms. It’s positioned around sharing live data without copying and supporting a wide range of clients (Spark, pandas, and BI tools), with governance and auditing as first-class concerns.
This is especially relevant in financial services where multi-party data collaboration is common (vendors, market data providers, consortiums, internal subsidiaries, and regulators).
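To ground the protocol, here is roughly what publishing looks like on the producer (Databricks) side; the share, recipient, and table names are illustrative.

```python
# Producer side, run on Databricks; all names are illustrative.
spark.sql("CREATE SHARE IF NOT EXISTS liquidity_products "
          "COMMENT 'Curated liquidity data products'")
spark.sql("ALTER SHARE liquidity_products "
          "ADD TABLE prod_catalog.finance_gold.liquidity_positions")

# An open (non-Databricks) recipient receives a credential file usable from any client.
spark.sql("CREATE RECIPIENT IF NOT EXISTS subsidiary_emea "
          "COMMENT 'EMEA subsidiary reporting team'")
spark.sql("GRANT SELECT ON SHARE liquidity_products TO RECIPIENT subsidiary_emea")
```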
Databricks → Fabric: make engineered products broadly consumable (without replatforming)
The Databricks-to-Fabric direction is often about taking data products engineered under Unity Catalog governance and making them easy to consume across enterprise analytics and BI surfaces.
Unity Catalog–registered gold tables become Fabric-first analytical products
A common pattern is:
- Databricks engineers build and certify a data product surface (Delta tables registered in Unity Catalog).
- Fabric mirrors the Unity Catalog structures so analysts and BI teams can query via familiar endpoints, with no data replication.
This fits regulated reporting workflows (finance close, liquidity reporting, risk aggregation) because it reduces the uncontrolled proliferation of extracts while still expanding access for governed consumers.
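On the Databricks side, the certified product surface is often simply a governed gold table with explicit documentation and tags. A minimal sketch, with illustrative catalog, schema, and tag names:

```python
# Upstream curated input; names are illustrative.
gold_df = spark.table("prod_catalog.finance_silver.positions")

# Register the product surface as a Unity Catalog-governed Delta table.
gold_df.write.format("delta").mode("overwrite").saveAsTable(
    "prod_catalog.finance_gold.liquidity_positions"
)

spark.sql("""
    COMMENT ON TABLE prod_catalog.finance_gold.liquidity_positions IS
    'Certified liquidity positions product. Owner: treasury-data-products. Refreshed hourly.'
""")

# Unity Catalog tags can carry certification metadata for governance tooling to key off.
spark.sql("""
    ALTER TABLE prod_catalog.finance_gold.liquidity_positions
    SET TAGS ('certification' = 'gold', 'domain' = 'treasury')
""")
```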
A key nuance for responsible adoption: Unity Catalog permissions and policies do not automatically carry over into Fabric; permissions must be re-established using Fabric’s model. That’s not necessarily a weakness, but it is a design reality. In financial services, it reinforces a best practice: treat each consumption surface as a controlled interface with explicit entitlements.
Databricks writes Delta to ADLS; Fabric consumes via shortcuts for BI and exploration
When teams want a simpler “table-level handshake,” Databricks can write Delta tables to ADLS Gen2, and Fabric can create OneLake shortcuts to those Delta tables and analyze them in Power BI.
This is an effective way to operationalize a data product where Databricks owns the engineering pipeline, but Fabric provides broad consumption experiences across the business.
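The producer half of that handshake is an ordinary Delta write to an agreed ADLS Gen2 location; the Fabric half is a shortcut pointed at the same path. The path and upstream table below are placeholders.

```python
# Databricks publishes the product to the agreed ADLS Gen2 location
# (placeholder path and upstream table); Fabric will shortcut to this same path.
gold_df = spark.table("prod_catalog.finance_gold.liquidity_positions")

(
    gold_df.write.format("delta")
    .mode("overwrite")
    .save("abfss://products@bankdatalake.dfs.core.windows.net/gold/positions")
)
```

On the Fabric side, a shortcut created under the Lakehouse’s Tables area makes the same files appear as a table, analyzable in Power BI as described above, with no second copy of the data.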
Partner and cross-tenant sharing: Delta Sharing as the product interface
For data products that must cross organizational boundaries, Delta Sharing gives a governed, protocol-based interface: producers share live data without copying it, and consumers connect with their own tooling.
In financial services, this can support scenarios like vendor data onboarding, subsidiary-to-parent reporting, or controlled data exchange for model validation—without forcing the recipient to adopt the producer’s platform.
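The consumer half is equally lightweight: a recipient needs only the open delta-sharing client (`pip install delta-sharing`) and the credential file issued by the producer, with no Databricks or Fabric footprint required. The profile path and table coordinates below are placeholders.

```python
import delta_sharing

# The producer issues a profile file (endpoint + credentials) out of band.
profile = "/secure/config/liquidity_products.share"

# Discover what has been shared with this recipient.
client = delta_sharing.SharingClient(profile)
print(client.list_all_tables())

# Load one shared table straight into pandas:
# the coordinate format is "<profile>#<share>.<schema>.<table>".
pdf = delta_sharing.load_as_pandas(
    f"{profile}#liquidity_products.finance_gold.liquidity_positions"
)
```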
Fabric → Databricks: keep OneLake as the product surface, bring Databricks compute to the data
The reverse direction is where Fabric becomes the place data products are organized and served, and Databricks becomes the compute choice for specific workloads that benefit from its runtime, libraries, or operational patterns.
Databricks reads (and writes) OneLake-hosted data products for advanced engineering and ML
Azure Databricks serverless compute can connect to OneLake, read Delta tables, and write back, using a service principal with appropriate permissions.
This supports a clean product lifecycle (sketched in code after the list):
- Fabric hosts a curated product in OneLake (Lakehouse tables in Delta format).
- Databricks runs feature engineering, model training, stress testing simulations, or large-scale transformations against that product.
- Outputs (predictions, features, model diagnostics) are written back to OneLake as new governed products or product versions.
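A hedged sketch of the Databricks half of this lifecycle, using Spark’s standard ABFS OAuth configuration with a service principal. This cluster-level configuration style is typical of classic compute; serverless setups generally route credentials through governed connections instead. All workspace, lakehouse, and secret names are placeholders.

```python
# Authenticate to OneLake with a service principal (classic-cluster style config;
# secret scope and key names are placeholders).
tenant_id = "<tenant-guid>"
sp_client_id = dbutils.secrets.get("onelake", "client-id")
sp_client_secret = dbutils.secrets.get("onelake", "client-secret")

host = "onelake.dfs.fabric.microsoft.com"
spark.conf.set(f"fs.azure.account.auth.type.{host}", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{host}",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{host}", sp_client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{host}", sp_client_secret)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{host}",
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
)

# Read the curated product from a Fabric Lakehouse, engineer features, write back.
base = f"abfss://<workspace>@{host}/<lakehouse>.Lakehouse"
positions = spark.read.format("delta").load(f"{base}/Tables/positions")
features = positions.groupBy("entity_id").count()  # stand-in for real feature logic
features.write.format("delta").mode("overwrite").save(f"{base}/Tables/position_features")
```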
This is exactly the “best of breed, but responsible” story: you’re not duplicating data just to use a different engine—you’re choosing compute based on the workload.
More recently, a capability that lets Databricks Unity Catalog query OneLake directly has been released in beta, which should strengthen this interoperability further.
OneLake shortcuts unify domains; Databricks leverages the unified namespace
OneLake shortcuts are explicitly meant to unify data across domains, clouds, and accounts, and they can be accessed by non-Fabric services through OneLake APIs (which support a subset of ADLS Gen2 / Blob APIs).
For financial services organizations with domain-based operating models (retail banking vs. wealth vs. treasury), this supports a practical compromise (see the sketch after this list):
- Fabric provides a coherent data product namespace.
- Databricks teams can still bring specialized compute to the products they’re authorized to use.
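Because OneLake speaks a subset of the ADLS Gen2 API, non-Fabric services can browse that unified namespace with the standard Azure SDKs. A sketch assuming azure-identity and azure-storage-file-datalake; the workspace and item names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Point the standard ADLS Gen2 SDK at the OneLake endpoint.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the Fabric workspace; items live beneath it.
fs = service.get_file_system_client("<workspace-name>")
for path in fs.get_paths(path="<lakehouse>.Lakehouse/Tables", recursive=False):
    print(path.name)  # each entry is a table the caller is entitled to see
```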
Responsible best-of-breed means acknowledging (and designing for) the control plane seams
Interoperability does not eliminate governance. It forces you to be explicit about it—which is a feature in regulated industries.
A few “seams” are worth calling out:
- Mirroring is not a policy teleport. Unity Catalog policies and permissions aren’t mirrored into Fabric; you must configure access controls in Fabric, and the credential used to create the Unity Catalog connection is used for queries.
- Not all table types mirror cleanly. Fabric lists limitations for mirrored Azure Databricks items, including unsupported table types (for example, tables with row-level security or column-level masking policies, streaming tables, and views/materialized views).
- Shortcuts reduce copies, but don’t reduce accountability. They’re a powerful way to eliminate data duplication and latency, but your data product still needs a contract and a clear access model.
The responsible posture is to treat each interoperability mechanism as a published interface for a product, with documented expectations: freshness, propagation delay, entitlement mapping, and audit strategy.
The real decision: how do you need to use the data?
Once you accept “data product first,” the platform choice stops being ideological and becomes contextual:
- If the priority is enterprise BI consumption and semantic consistency, Fabric’s integrated experiences and shortcut-based access patterns become a natural product surface.
- If the priority is specialized engineering, ML, and scalable runtime flexibility, Databricks becomes the compute that executes against the same product surface.
- If the priority is cross-boundary sharing with governance, Delta Sharing is the product interface that travels.
This is how you keep it less about “which platform do I choose?” and more about “what does this data product need to enable—safely?”
Conclusion: interoperability is a governance strategy, not just a connectivity feature
Databricks–Fabric interoperability at the data product level changes the shape of decision-making in financial services. Delta tables, OneLake shortcuts, mirroring of Azure Databricks Unity Catalog, and open sharing protocols like Delta Sharing make it realistic to support a broad range of services and compute capabilities—without multiplying copies or compromising control.
If your architecture discussions are still framed as a platform “either/or,” it may be time to reframe around the data products you need—and then select the compute that serves each product responsibly.