Think of this as two layers that work together:
Fabric layer (tenant + capacities + workspaces)
You set governance boundaries through tenant settings, capacity assignment, workspace structure, and workspace security configuration.
Azure layer (identity, networking, Key Vault, storage, monitoring)
You provide the enterprise foundations Fabric will integrate with: private endpoints, VNets, gateways, Key Vault keys/secrets, and ADLS archive storage.
Fabric layer: structure it like a platform, not a project
A pragmatic pattern is:
- Separate capacities by environment (at minimum, prod vs. non-prod); ideally, give each failure domain its own capacity.
- Use workspaces per domain (Finance, Sales, Operations) or, ideally, per project and per SDLC stage (DEV/TEST/PROD).
- Treat workspace roles as the coarse boundary and use OneLake security roles for fine-grained access control when needed. OneLake security can apply RBAC to folders and can include row/column-level controls (preview), which helps avoid over-granting workspace roles.
Azure layer: the minimum set of services that matter
At a minimum, the landing zone includes:
- Azure Key Vault
  - for customer-managed keys (CMK) used to encrypt Fabric workspaces at rest
  - and optionally for Key Vault references used by Fabric connections (with important caveats)
- ADLS Gen2 storage for archives
  - a dedicated storage account (or accounts) designed for long-term retention, cost controls, and separation from Fabric/OneLake.
- Networking (hub-and-spoke or equivalent)
  - with private DNS and private endpoint patterns to support Fabric private links and private access to storage.
  - plus a strategy for data movement into/out of restricted workspaces (more on this below).
Networking: the piece many Fabric “landing zones” forget
If your design doesn’t include networking, it isn’t really a landing zone—it’s just a workspace naming convention.
Fabric networking comes down to inbound vs outbound:
Inbound: controlling who can reach Fabric
Fabric supports private links at both tenant and workspace scope:
- Tenant-level private links apply network restrictions across the whole tenant.
- Workspace-level private links map a specific workspace to a specific virtual network using Private Link, enabling “this workspace is only reachable privately.”
The nuance: private endpoints help ensure traffic into Fabric follows your private route, but they don’t automatically secure traffic from Fabric out to your data sources. You still need to protect your storage, databases, and services using their own firewall/private endpoint patterns.
Also plan for feature implications: for example, certain Power BI scenarios and some product features have limitations in closed/private-link environments (Copilot support is explicitly called out as not currently supported in Private Link/closed network scenarios).
Outbound: preventing exfiltration (and accidental data leaks)
Private links are the “who can reach Fabric” story. Outbound Access Protection (OAP) is the “what can Fabric reach” story.
When OAP is enabled, outbound connections from the workspace are blocked by default, and admins can allow specific destinations via managed private endpoints.
One critical design note: Data Warehouse outbound access protection currently blocks outbound connections with no exceptions—that can impact patterns where a warehouse is expected to read/write to external locations.
If your architecture assumes “warehouse can just reach that storage account,” OAP forces you to be more deliberate—often shifting integration to pipelines, lakehouse patterns, or staged movement into OneLake first.
Bridging open and restricted workspaces
A common pattern is to lock down data workspaces (private access only) but keep a reporting workspace more open for business users. Microsoft documents a pattern using a VNet gateway to let Power BI semantic models in an open workspace securely access a lakehouse in an inbound-restricted workspace.
Important limitation to account for: Direct Lake semantic models aren’t yet supported against data sources in inbound restricted workspaces.
Key Vault in the landing zone: two different jobs, two different constraints
Key Vault shows up in Fabric architectures in two distinct ways, and you shouldn’t mix them up.
Key Vault for encryption: customer-managed keys for Fabric workspaces
Fabric workspaces can be encrypted at rest using CMK stored in Azure Key Vault. This uses envelope encryption (KEK/DEK) and introduces real operational control—rotation, audit, and revocation.
A few operational details that matter for landing zone design:
- Fabric requires soft delete and purge protection on the Key Vault, and uses versionless keys (Fabric checks for new versions daily).
- The Key Vault firewall can be enabled; when you disable public access, use "Allow Trusted Microsoft Services to bypass this firewall."
- Key revocation is meaningful: within about an hour of revocation, read/write calls to the workspace fail. That’s both a control and a risk—treat it like a production-grade dependency.
This is a landing zone concern because it impacts key management, break-glass procedures, and operational runbooks.
Azure Key Vault references (connection-level): a feature I’m not recommending right now
Azure Key Vault references in Fabric look like the cleanest possible answer to a hard problem: keep credentials out of pipelines and notebooks, and let Fabric resolve secrets at runtime.
The catch is in the network boundary.
As of the current Fabric implementation, the prerequisites explicitly require that the Azure Key Vault be accessible from the public network, and virtual network data gateways aren’t supported for Key Vault references yet.
In other words: if your security posture is “Key Vault private only” (CIS/NIST-style hardline, private endpoints + firewall, public network access disabled), Key Vault references push you into a compromise that many orgs simply won’t accept. And the community threads echo what practitioners are seeing: the Key Vault reference flow doesn’t currently route through the workspace’s managed private endpoint, and instead attempts to connect via the public endpoint.
So my guidance is straightforward: treat Azure Key Vault references as a convenience feature for lower-risk scenarios, not as the default for production secret management—until the network story supports private-only vaults without widening your attack surface.
The alternate approach: keep Key Vault private, pull secrets in a notebook, and use Semantic Link Labs (SLL) for the work
If you still want the benefits of Key Vault—central storage, access control, rotation—without opening up the vault, the more robust pattern in Fabric today is:
- use a Fabric notebook as the controlled “edge” that talks to Key Vault
- protect that path with Managed Private Endpoints (MPE) and (optionally) Workspace Outbound Access Protection
- then use Semantic Link Labs (SLL) inside the notebook to run the operational work (refresh jobs, metadata syncs, automation), without embedding secrets into every downstream object
This aligns with Microsoft’s own Spark security guidance, which recommends managed VNets + MPE for network isolation and explicitly calls out accessing Azure Key Vault from notebooks, including creating a managed private endpoint to Key Vault.
What this buys you in practice
You get the same “only one place touches the secret” benefit you were aiming for with Key Vault references—but now the “one place” is your notebook/job identity and network boundary, rather than every individual connection/UI surface in the workspace.
And that matters when you're building Microsoft Fabric architectures that already lean into caching layers like materialized lake views (MLVs): you want fewer components holding or handling credentials, not more.
Step-by-step: Notebook + Key Vault + SLL (Semantic Link Labs)
1) Wire Key Vault to the workspace via Managed Private Endpoint
In the Fabric workspace settings, create a Managed Private Endpoint pointing at your Key Vault resource ID, then have the Key Vault owner approve the private endpoint request.
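The same step can be automated. Here is a minimal sketch of the create request, assuming the Fabric REST API's managed private endpoints operation; the exact path and field names should be verified against current docs, and every ID below is a placeholder:

```python
import json

workspace_id = "<fabric-workspace-id>"                 # placeholder
key_vault_resource_id = (                              # full ARM resource ID of the vault
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)

# Request body for creating the managed private endpoint;
# "vault" is the Key Vault private-link subresource
payload = {
    "name": "mpe-keyvault",
    "targetPrivateLinkResourceId": key_vault_resource_id,
    "targetSubresourceType": "vault",
    "requestMessage": "MPE for notebook secret retrieval",
}

url = (
    "https://api.fabric.microsoft.com/v1/workspaces/"
    f"{workspace_id}/managedPrivateEndpoints"
)
# To submit, POST `payload` to `url` with a bearer token, e.g.:
# requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=payload)
print(json.dumps(payload, indent=2))
```

After the POST, the Key Vault owner still has to approve the pending private endpoint connection on the vault side.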
This is the key distinction versus Key Vault references: MPE is a workspace-level, Spark workload-friendly path that supports the “keep it private” security model. Microsoft’s Spark security guidance explicitly recommends creating an MPE to Key Vault for securely connecting from Spark notebooks.
If you enable Workspace Outbound Access Protection, be aware it can also affect how you install libraries (for example, blocking public PyPI installs), so plan for publishing required packages into a controlled Fabric environment or approved repository.
2) Retrieve secrets from Key Vault in the notebook (not in every query)
Fabric’s NotebookUtils credentials utilities include getSecret, which retrieves a secret from Azure Key Vault using user credentials.
Example (the vault name and secret name are placeholders):

from notebookutils import credentials

# getSecret resolves the secret at runtime, so no credential value
# is ever stored in the notebook source
kv_uri = "https://<name>.vault.azure.net/"
sql_password = credentials.getSecret(kv_uri, "sql-password-secret-name")
A couple of important notes worth making explicit:
- The notebook execution context matters: Spark notebooks and Spark Job Definitions execute in the context of the submitting user, so the submitting identity needs permissions to retrieve secrets (the Spark security doc calls out “Key Vault Secrets Officer” as the needed access level in their example guidance).
- Don’t print secrets. Treat notebook output, logs, and screenshots as exfiltration risks.
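One lightweight guard against accidental exposure: wrap the retrieved value so that reprs, logs, and notebook output show a mask instead of the secret. A sketch (not a substitute for proper secret handling):

```python
class Secret:
    """Minimal wrapper that keeps a secret out of reprs, logs, and notebook output."""

    def __init__(self, value: str):
        self._value = value

    def reveal(self) -> str:
        # Only call this at the point of use (e.g. when building a connection)
        return self._value

    def __repr__(self) -> str:
        return "Secret('****')"

    __str__ = __repr__


# In practice: sql_password = Secret(credentials.getSecret(kv_uri, "sql-password-secret-name"))
sql_password = Secret("s3cr3t")
print(sql_password)  # prints Secret('****'), not the value
```

The value still exists in memory, so this only protects against the casual leak paths (cell output, logging, screenshots), which are exactly the ones called out above.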
3) Use SLL tools to do the operational work—without embedding secrets everywhere
Semantic Link Labs (SLL) is designed for Fabric notebooks and extends Semantic Link with automation-friendly capabilities.
Two practical ways to combine SLL + Key Vault in this pattern:
Option A: Use SLL’s Key Vault-backed service principal authentication context manager
SLL includes a service_principal_authentication context manager designed specifically to establish service principal auth using secrets stored in Key Vault.
Notably, the function signature takes:
- the Key Vault URI
- the names of the secrets in Key Vault that hold tenant ID, client ID, and client secret
That means your notebook code doesn’t carry raw credential values—only pointers (secret names) and the vault URI.
Example (the secret names and item names are placeholders):

import sempy_labs as labs
import sempy_labs.lakehouse as lake

kv_uri = "https://<name>.vault.azure.net/"

# The three string arguments below are the *names* of secrets in Key Vault,
# not the credential values themselves
with labs.service_principal_authentication(
    key_vault_uri=kv_uri,
    key_vault_tenant_id="fabric-tenant-id",
    key_vault_client_id="automation-spn-client-id",
    key_vault_client_secret="automation-spn-client-secret",
):
    # Example operational task: refresh all MLVs in a lakehouse
    df_job = lake.refresh_materialized_lake_views(
        lakehouse="<lakehouse-name>",
        workspace="<workspace-name>",
    )
    display(df_job)
The refresh_materialized_lake_views function runs an on-demand refresh job instance and returns job details as a DataFrame.
This is a clean fit for “MLV-first” architectures: your MLV refresh can run under a controlled identity, and your Key Vault stays private.
Option B: Use SLL for downstream refresh orchestration after your secure notebook work completes
SLL also supports common “finish the pipeline” operations, like refreshing semantic models. The Fabric community notebook example shows labs.refresh_semantic_model(...) directly.
That lets you keep your secrets isolated to the notebook step (retrieval + connection), then let SLL handle the operational API calls as part of a single, auditable execution path.
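The shape of that single execution path can be sketched with local stand-ins; the real calls (noted in the comments) would be credentials.getSecret and labs.refresh_semantic_model, and the function below is illustrative, not a library API:

```python
# One notebook run = one auditable path: retrieve secret -> do secure work ->
# trigger the downstream refresh. The secret never leaves this scope.
def run_pipeline(get_secret, do_ingest, refresh_model, log):
    secret = get_secret("sql-password-secret-name")  # step 1: Key Vault via MPE
    log("secret retrieved (not logged)")
    do_ingest(secret)                                # step 2: secure data work
    log("ingest complete")
    refresh_model()                                  # step 3: SLL refresh, no secret needed
    log("semantic model refresh triggered")


events = []
run_pipeline(
    get_secret=lambda name: "s3cr3t",   # stand-in for credentials.getSecret(kv_uri, name)
    do_ingest=lambda pw: None,          # stand-in for the real load/connection work
    refresh_model=lambda: None,         # stand-in for labs.refresh_semantic_model(...)
    log=events.append,
)
print(events)
```

The point of the shape: the refresh step takes no secret argument at all, so only the retrieval and ingest steps ever touch the credential.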
ADLS Gen2 for archives: a clean pattern that works with Fabric (not against it)
If you want an archive that is explicitly outside Fabric/OneLake—for retention, legal hold, or cost—you can still integrate it cleanly with Fabric.
Here’s the pattern I recommend:
- Keep archives in a dedicated ADLS Gen2 account (or set of accounts) designed for immutability/retention and lifecycle management.
- Restrict network access using storage firewall + private endpoints (standard Azure practice).
- Use trusted workspace access so specific Fabric workspaces can reach the firewall-enabled storage account via resource instance rules, instead of opening the storage broadly. Trusted workspace access is GA and requires workspaces on Fabric capacity (F SKU).
- Prefer workspace identity as the authentication method to avoid secret sprawl; workspace identity can be combined with trusted access for storage accounts.
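On the storage side, trusted workspace access is expressed as a resource instance rule in the account's network ACLs. A sketch of that shape, assuming the documented Microsoft.Fabric resource ID format (the tenant and workspace IDs are placeholders; verify against the current trusted workspace access docs):

```python
import json

# Resource instance rule that lets one specific Fabric workspace through the
# storage firewall while defaultAction stays Deny. The resourceId format
# follows the Microsoft.Fabric provider path used for trusted workspace access.
network_acls = {
    "defaultAction": "Deny",
    "resourceAccessRules": [
        {
            "tenantId": "<entra-tenant-id>",
            "resourceId": (
                "/subscriptions/00000000-0000-0000-0000-000000000000"
                "/resourcegroups/*/providers/Microsoft.Fabric"
                "/workspaces/<fabric-workspace-id>"
            ),
        }
    ],
}
print(json.dumps(network_acls, indent=2))
```

This is the "specific workspaces, not the whole internet" posture: the rule scopes access to a workspace identity rather than an IP range.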
Once that’s in place, you have options:
- Expose archived data to Fabric via OneLake shortcuts (so teams can analyze without copying everything into OneLake).
- Ingest selected data into OneLake for performance or downstream product needs, then push “cold snapshots” or archival raw data back to ADLS as part of your lifecycle.
A subtle but important “since Ignite” context point: OneLake continues to lean into openness. Microsoft documents that you can access OneLake items via existing ADLS and Blob APIs using OneLake URIs, while noting that some management operations remain Fabric-native.
That openness is good—yet it increases the importance of consistent governance and logging.
Governance and operations: don’t skip the “prove it” layer
Two capabilities stand out for a landing zone that needs to survive compliance review:
OneLake diagnostics
Enabled at the workspace level, OneLake diagnostics streams access events (JSON logs) into a lakehouse in the same capacity and captures activity across UI, APIs, pipelines, analytics engines, and even cross-workspace shortcuts.
This is the foundation for “who accessed what, when, and how” dashboards—and a practical input to your audit process.
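Once the JSON events land in the diagnostics lakehouse, turning them into that view is ordinary log analysis. A toy sketch with illustrative field names (not the documented event schema; in Fabric you would read the real logs, e.g. spark.read.json over the lakehouse Files path):

```python
import json
from collections import Counter

# Hypothetical OneLake diagnostic events, one JSON object per line.
# Field names here are illustrative placeholders.
raw_events = [
    '{"user": "alice@contoso.com", "operation": "ReadFile",  "item": "Sales.Lakehouse"}',
    '{"user": "bob@contoso.com",   "operation": "WriteFile", "item": "Sales.Lakehouse"}',
    '{"user": "alice@contoso.com", "operation": "ReadFile",  "item": "Finance.Lakehouse"}',
]

events = [json.loads(line) for line in raw_events]

# "Who read what, how often" -- the seed of an access-review dashboard
reads_per_user = Counter(e["user"] for e in events if e["operation"] == "ReadFile")
print(reads_per_user.most_common())
```

The same aggregation, run over the real event stream on a schedule, is what turns "we have logs" into "we can answer the auditor's question."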
Bringing it together: the landing zone as a product
A Fabric landing zone is not a one-time setup. It’s a product you operate: version it, document it, automate the boring parts, and evolve it as Fabric evolves.
If you take nothing else from this: treat networking, Key Vault, and your ADLS archive strategy as first-class architecture, not "phase two." That's the difference between a Fabric pilot and a Fabric platform that you can confidently scale.