There’s a moment in most Fabric projects when you realize the hard part isn’t the data model, the lakehouse design, or even the DAX.
It’s the manual work: clicking around to confirm what’s deployed, what depends on what, what’s actually used, and what’s quietly broken.
That’s where Semantic Link Labs (the sempy_labs package) starts to feel like a superpower. Not because it does “magic”—but because it turns Fabric into something you can interrogate and automate with the same discipline you bring to code.
In this post, I’m going to do two things:
- Give you a quick orientation to SemPy_labs in Fabric and the patterns that make it useful.
- Walk through a handful of my favorite underused functions—ones that help you extract, validate, and move assets with less guesswork.
If you’re building in Microsoft Fabric and your work touches #PowerBI, this is one of those toolkits that quietly changes what “fast” looks like.
What SemPy_labs is (and why it’s different)
Semantic link is the Fabric feature that connects Power BI semantic models with the Fabric notebook experience—bridging the gap between Power BI and Synapse Data Science in Fabric.
SemPy_labs (Semantic Link Labs) builds on that foundation and focuses on practical “day two” tasks: inspecting item definitions, working with report metadata, checking semantic model health, and wrapping Fabric REST endpoints into functions that feel notebook-native.
It also pairs naturally with sempy.fabric, which provides a REST client for Fabric endpoints (useful when you want to list or enumerate across workspaces/items).
The two patterns that make SemPy_labs click
SemPy_labs is at its best when you lean into two simple patterns:
A. Pull metadata into a DataFrame, then filter like an analyst.
Many functions return pandas DataFrames directly (or can). That means you can slice, group, and join results before you act.
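To make pattern A concrete, here is a minimal sketch. The DataFrame below is a hand-built stand-in for a sempy_labs result (the column names are illustrative, not the library's actual schema); the point is that once metadata lands in pandas, filtering is one line:

```python
import pandas as pd

# Illustrative stand-in for a sempy_labs metadata result;
# real functions return DataFrames with their own column names.
items = pd.DataFrame({
    "Item Name": ["Sales Lakehouse", "Executive Sales", "Ops Report"],
    "Type": ["Lakehouse", "Report", "Report"],
    "Workspace": ["Contoso Analytics"] * 3,
})

# Slice like an analyst: all reports in the workspace
reports = items[items["Type"] == "Report"]
print(reports["Item Name"].tolist())
```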
B. Treat Fabric artifacts like “definitions,” not UI objects.
Instead of “the report I click,” it becomes “the report definition I can export, diff, and reason about.” That one mental shift is a gateway to DataOps thinking in Fabric.
Quick setup in a Fabric notebook
If you don’t already have the package available in your environment, install it and import what you need:
%pip install semantic-link-labs

import sempy_labs as labs
import sempy.fabric as fabric
from sempy_labs.report import ReportWrapper
This is the basic import pattern you’ll see in most examples and walkthroughs.
My favorite underused SemPy_labs functions
Rather than a “top 10 list,” I think about these as verbs—small moves that unlock bigger workflows.
Extract: get_item_definition
If you only adopt one habit from this post, make it this one: export item definitions early and often.
get_item_definition() retrieves a Fabric item’s definition and can decode the payload for you. It can return a dictionary or a DataFrame.
Why it’s underused: teams still treat definitions as something “inside Fabric,” not something they can inspect and version.
workspace = "Contoso Analytics"
item_name = "Sales Lakehouse"

definition = labs.get_item_definition(
    item=item_name,
    type="Lakehouse",
    workspace=workspace,
    decode=True,
    return_dataframe=False,
)
# definition is a dict containing the definition payload/files
Where this pays off:
- Building repeatable “backup/export” notebooks
- Creating lightweight diff checks between dev/test/prod
- Debugging “what changed?” without relying on memory or screenshots
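The "lightweight diff check" idea can be sketched with nothing but the standard library. This assumes the exported definition has been normalized into a dict of file path to text content (an assumption about shape — adapt it to whatever get_item_definition actually returns in your environment):

```python
import difflib

def diff_definitions(dev: dict, prod: dict) -> list:
    """Return unified-diff lines for files that differ between two
    exported item definitions (assumed here to map path -> text)."""
    lines = []
    for path in sorted(set(dev) | set(prod)):
        a = dev.get(path, "").splitlines()
        b = prod.get(path, "").splitlines()
        lines += difflib.unified_diff(
            a, b, fromfile=f"dev/{path}", tofile=f"prod/{path}", lineterm=""
        )
    return lines

# Toy example: one file, one changed value
changes = diff_definitions(
    {"model.bim": '{"compat": 1604}'},
    {"model.bim": '{"compat": 1601}'},
)
```

Run this on a schedule and an empty `changes` list becomes your "nothing drifted" signal.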
Interrogate: ReportWrapper(...).list_semantic_model_objects()
The ReportWrapper class connects to a Power BI report, retrieves its definition, and exposes a set of report-inspection functions. The key detail: ReportWrapper requires the report to be in PBIR format.
The underused function inside this wrapper is list_semantic_model_objects(), especially with extended=True, because it lets you see which fields are used and whether they still exist in the underlying model.
rpt = ReportWrapper(report="Executive Sales", workspace=workspace)

field_usage = rpt.list_semantic_model_objects(extended=True)
visuals = rpt.list_visuals()
pages = rpt.list_pages()
Why this is suddenly more important: Microsoft has signaled that PBIR is becoming the default report format for new reports in the Power BI service starting in January 2026 (rollout through Feb 2026). That means these PBIR-based inspection workflows become more broadly applicable over time.
If you’ve ever tried to answer “where is this measure used?” by hand… this is the notebook-native alternative.
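As a sketch of what that lookup feels like once the usage data is a DataFrame (the columns below are illustrative stand-ins, not the exact schema list_semantic_model_objects returns — check the real output and adjust the names):

```python
import pandas as pd

# Illustrative stand-in for rpt.list_semantic_model_objects(extended=True);
# actual column names may differ.
field_usage = pd.DataFrame({
    "Object Name": ["Total Sales", "Total Sales", "Margin %"],
    "Object Type": ["Measure", "Measure", "Measure"],
    "Page": ["Overview", "Detail", "Overview"],
    "Valid": [True, True, False],
})

# "Where is this measure used?" becomes a one-line filter
where_used = field_usage[field_usage["Object Name"] == "Total Sales"]["Page"].tolist()

# Fields the report references that no longer exist in the model
broken = field_usage[~field_usage["Valid"]]["Object Name"].tolist()
```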
Count: list_semantic_model_object_report_usage
This one is a governance and refactoring cheat code.
list_semantic_model_object_report_usage() shows semantic model objects and how many times they’re referenced across all reports that rely on the model. It also notes the requirement: reports must be in PBIR format.
usage = labs.list_semantic_model_object_report_usage(
    dataset="Sales Semantic Model",
    workspace=workspace,
    include_dependencies=True,
    extended=True,
)
# Now you can filter down to "never used" or "rarely used"
Why it’s underused: teams often do model cleanup by intuition (“I don’t think anyone uses that measure”). This function gives you evidence.
A practical use: before you rename or remove anything, generate a usage snapshot and save it to your lakehouse. That’s cheap insurance.
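A minimal version of that snapshot habit, sketched with pandas. The helper and the output path convention are mine, not the library's; in a Fabric notebook the directory might be a Lakehouse Files path such as "/lakehouse/default/Files/usage_snapshots" (illustrative):

```python
from datetime import date
import tempfile
import pandas as pd

def snapshot_usage(usage: pd.DataFrame, out_dir: str) -> str:
    """Stamp a usage DataFrame with today's date and persist it as CSV.
    out_dir is any writable directory; in Fabric, a Lakehouse Files
    path is the natural choice (path shown in the lead-in is illustrative)."""
    stamped = usage.assign(snapshot_date=date.today().isoformat())
    path = f"{out_dir}/usage_{date.today().isoformat()}.csv"
    stamped.to_csv(path, index=False)
    return path

# Toy example writing to a temporary directory
usage = pd.DataFrame({"Object Name": ["Total Sales"], "Report Count": [3]})
saved = snapshot_usage(usage, tempfile.mkdtemp())
```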
Map: list_item_connections
Connections are one of those areas where “everything is fine” until it really isn’t.
list_item_connections() returns the connections a specified item uses.
connections = labs.list_item_connections(
    item="Executive Sales",
    type="Report",
    workspace=workspace,
)
Why it’s underused: connection sprawl happens slowly, and the UI encourages a per-item mindset. In a notebook, you can inventory connections across a workspace, detect drift, and flag surprises (like a prod report quietly pointed at a dev connection).
This is one of those functions that supports better hygiene without requiring tenant-admin superpowers.
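One way to sketch that workspace-wide inventory is a small loop that stitches per-item results into one DataFrame. The function below takes the connection-listing call as a parameter so it can be demonstrated with a stub here; in a real notebook you would pass labs.list_item_connections and a fabric.list_items result (the "Display Name"/"Type" column names are assumptions — match them to your actual listing output):

```python
import pandas as pd

def inventory_connections(items: pd.DataFrame, list_connections) -> pd.DataFrame:
    """Collect connections for every item in a workspace listing.
    items is expected to carry "Display Name" and "Type" columns
    (illustrative names); list_connections is a callable shaped like
    labs.list_item_connections, returning a DataFrame per item."""
    frames = []
    for _, row in items.iterrows():
        conns = list_connections(item=row["Display Name"], type=row["Type"])
        frames.append(conns.assign(Item=row["Display Name"]))
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

# Stubbed demonstration (a real run would pass the sempy_labs function)
items = pd.DataFrame({"Display Name": ["Executive Sales"], "Type": ["Report"]})
stub = lambda item, type: pd.DataFrame({"Connection": ["sql-prod-01"]})
inv = inventory_connections(items, stub)
```

From there, flagging a prod report pointed at a dev connection is just a filter on the combined table.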
Move: copy_item
There are a lot of ways to promote artifacts in Fabric. copy_item() is my favorite “clean and direct” option when I want to copy an item with its definition from one location to another.
Two parameters that don’t get enough attention:
- overwrite (obvious, but easy to forget)
- keep_existing_bindings (quietly important for reports)
labs.copy_item(
    item="Executive Sales",
    type="Report",
    source_workspace="Contoso - Dev",
    target_workspace="Contoso - Test",
    overwrite=True,
    keep_existing_bindings=True,
)
Why it’s underused: many teams default to manual copy steps or heavyweight pipelines even when a simple controlled copy is what they need.
Use it for:
- “Clone to troubleshooting workspace”
- “Promote a known-good report definition”
- “Rebuild an environment quickly after re-orgs”
Score: run_model_bpa
Model Best Practice Analyzer (BPA) tends to get framed as “nice to have.” In reality, it’s one of the easiest ways to systematically improve semantic model maintainability.
run_model_bpa() can display an HTML visualization of BPA results. It can also return a DataFrame, export results to a delta table, and run an extended mode that gathers Vertipaq Analyzer statistics for deeper analysis.
bpa_results = labs.run_model_bpa(
    dataset="Sales Semantic Model",
    workspace=workspace,
    extended=True,
    return_dataframe=True,
)
# Filter down to high-impact rule violations
Why it’s underused: teams run it once during a crisis, then forget it. The better move is to treat BPA results like test output—something you can trend over time.
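"Treat BPA results like test output" can be sketched in a few lines of pandas: accumulate dated runs, then count violations per run and watch the number move. The run_date and Rule Name columns below are illustrative stand-ins for however you choose to stamp and store the real run_model_bpa output:

```python
import pandas as pd

def trend_bpa(history: pd.DataFrame) -> pd.DataFrame:
    """Given accumulated BPA results with a run_date column
    (column names illustrative), count violations per run so the
    trend reads like a test-suite pass/fail count over time."""
    return (history.groupby("run_date").size()
                   .rename("violations")
                   .reset_index())

# Toy history: two runs, the second one cleaner
history = pd.DataFrame({
    "run_date": ["2025-01-01", "2025-01-01", "2025-02-01"],
    "Rule Name": ["Avoid bi-directional filters", "Hide foreign keys", "Hide foreign keys"],
})
trend = trend_bpa(history)
```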
This is one of the cleanest “make it measurable” moves you can make in #SemanticLink workflows.
Two honorable mentions (because they unlock automation)
I won’t go deep on these, but they’re worth calling out because they make the rest of the toolkit easier to operationalize.
get_connection_string
get_connection_string() returns the SQL connection string for a Lakehouse, Warehouse, or SQL endpoint.
This is useful when your notebook needs to hand off connection details to downstream libraries or scripts (carefully, and with the right security posture).
service_principal_authentication
If you’re building repeatable admin or governance notebooks, service principal patterns matter.
service_principal_authentication() establishes authentication via a service principal using secrets stored in Azure Key Vault.
That’s the kind of building block that turns “I ran this once” into “we can schedule this.”
Closing thought
The point of SemPy_labs isn’t to replace the Fabric UI. The point is to give you a second interface—one that’s composable, testable, and easy to repeat.
In this post, I introduced SemPy_labs in the broader Semantic Link ecosystem, then walked through a handful of underused functions that cover the full lifecycle: exporting definitions, interrogating PBIR reports, measuring real usage, mapping dependencies, copying artifacts safely, and scoring model health.
If you’re building in Fabric, pick one of these functions and add it to your default notebook template. The compounding value comes from repetition.