OneLake Transformations Are GA—and That Makes Zero‑Unmanaged‑Copy Much More Practical

If you were already bullish on OneLake transformations in preview, the structured-file general availability milestone is easy to treat as a nice product update: useful, welcome, and mostly about convenience. I think that undersells it. What Microsoft officially calls shortcut transformations for structured files is now generally available, and the docs position it very plainly: take CSV, Parquet, or JSON files referenced through a OneLake shortcut, convert them into queryable Delta tables, keep them synchronized, and do it without hand-built ETL pipelines. That is not just easier ingestion. It is a stronger architectural bridge between raw file-based inputs and governed analytical assets inside OneLake.

What I want to do in this post is straightforward. First, I want to explain why this GA moment matters to the zero‑unmanaged‑copy model, not just to file onboarding. Second, I want to connect it to the ingest‑transform‑surface framing we have been using here. Third, I want to argue that shortcut transformations are another example—alongside Materialized Lake Views—of how multistep transform pipelines smooth the path between layers and produce something cleaner than a rigid, box-drawing version of bronze‑silver‑gold. Fabric still clearly supports medallion as a first-class pattern, but that does not mean your internal architecture has to stop at three oversized steps.

GA matters because ingest just got more product-shaped

The most important thing about shortcut transformations going GA is that ingestion becomes more like a platform primitive and less like a custom engineering tax. Microsoft’s documentation now describes shortcut transformations as managed conversion from shortcut-backed files into Delta tables with automatic schema handling, deep flattening, recursive folder discovery, frequent synchronization, and inherited governance including OneLake lineage, permissions, and Purview policies. Microsoft’s “What’s New” log also calls out the feature as generally available in April 2026. In other words, the platform now has a stable, service-managed answer for a very common problem: “I have structured files over there; I want governed Delta tables over here; and I do not want to build and babysit another pile of pipelines to make that happen.”

That is a bigger deal in financial services than it might sound at first. Wealth management teams still receive custodian position files. Lending teams still deal with servicer extracts and partner feeds. Reinsurers still work with bordereaux. Property and casualty organizations still inherit operational file drops from claims, finance, and third-party data providers. Credit card processing estates are full of settlement files, exception files, dispute files, and fee reports. Those are not edge cases. They are the day-to-day reality of how important data actually arrives. Historically, that reality has led to a familiar pattern: copy the files into a landing zone, copy them again into a parsed zone, run a notebook to shape them, land them again into a managed table, and only then begin the “real” transformation work. Shortcut transformations do not eliminate every later materialization, but they collapse a large chunk of that low-value plumbing into a governed ingest step that the platform now owns.

And notice how this changes the conversation with delivery teams. The question becomes less “what custom ingest framework are we going to build for this source?” and more “what is the cleanest input boundary we want to declare?” That is a healthier architectural question. It pushes teams to think intentionally about what they are consuming, where the source of authority lives, and what should be represented as managed Delta inside OneLake. That is already a more product-shaped starting point than “let’s dump it somewhere and figure it out later.”

Zero‑unmanaged‑copy gets stronger, not weaker

This is exactly why the feature strengthens the zero‑unmanaged‑copy model rather than contradicting it. OneLake is explicitly described by Microsoft as a single, unified logical lake with one copy of data for use with multiple analytical engines, and shortcuts are explicitly designed to eliminate edge copies and the latency introduced by staging. In the framing we have been using here, the important idea has never been “never materialize anything.” It has been “don’t proliferate unmanaged copies that escape governance, clarity, and intent.” When you do materialize, do it in a place and format the platform can govern. Shortcut transformations fit that definition almost perfectly: the upstream files remain where they are, referenced through the shortcut, while Fabric produces a managed Delta table inside the OneLake estate.

That may sound like a subtle distinction, but operationally it is not subtle at all. Consider a lender receiving monthly or daily boarding files from an external servicer. The old pattern tends to produce “temporary” copies in multiple storage locations, each with its own lifecycle, permissions, and chances to drift from the intended source. Or consider a card-processing analytics team pulling settlement and chargeback files from an external store. The common workaround is a ladder of copies and partial transforms, often implemented in a way that nobody wants to fully document because the whole thing is “just staging.” Shortcut transformations move that copy into the open. The resulting Delta table is not an accidental byproduct of bespoke ETL. It is a declared, synchronized, monitored platform asset. That is a much better expression of zero‑unmanaged‑copy than a philosophy that refuses any materialization and then quietly tolerates three layers of shadow duplication anyway.

There is also a governance payoff here that matters in regulated industries. Microsoft’s documentation explicitly notes that the transformed shortcut flow carries inherited governance signals, including lineage, permissions, and Purview policies. That is precisely what you want when the data is headed into lending risk analytics, advisor reporting, reserving support, or operational reconciliation. The copy that exists is not just “inside the platform.” It is inside the platform’s governance envelope. That is what makes it managed.

The ingest‑transform‑surface model just got sharper

This is where the ingest‑transform‑surface framing becomes especially useful. On edudatasci.net, the advanced lakehouse pattern was framed as explicit inputs through shortcuts or schema shortcuts, a small-step transformation layer implemented as a DAG, and a versioned schema-based surface exposed as the product contract. Shortcut transformations make the ingest part of that model stronger because they give file-based sources a cleaner first-class boundary. Before, the model was already strong when the input was Delta-native, mirrored, or otherwise easy to consume. Now the file-heavy edge of the estate gets a much more elegant path into the same pattern. The ingest boundary stops being “a folder our notebook happens to read” and becomes “a managed Delta representation of the files we intentionally consume.”

The supporting mechanics line up nicely too. Microsoft documents lakehouse schemas as named collections of tables, supports schema shortcuts that map external Delta folders or other lakehouse schemas into your local lakehouse, and supports four-part cross-workspace Spark SQL names. That matters because it lets you keep both the input layer and the product surface explicit. Inputs can be isolated into a clearly named schema. Outputs can be versioned into clearly named product schemas. And the path between the two can remain a deliberate internal implementation rather than an accidental tangle of notebooks and one-off scripts.
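To make the four-part naming concrete, here is a minimal sketch of composing a cross-workspace Spark SQL identifier. The workspace, lakehouse, schema, and table names are hypothetical, and the backtick-quoting helper is my own illustration, not a library function:

```python
# Hypothetical names for illustration; in a Fabric notebook you would pass
# the resulting string to spark.sql(...).
def four_part_name(workspace: str, lakehouse: str, schema: str, table: str) -> str:
    """Compose a four-part Spark SQL identifier, backtick-quoting any part
    that is not a plain identifier (e.g., contains spaces)."""
    def quote(part: str) -> str:
        return part if part.isidentifier() else f"`{part}`"
    return ".".join(quote(p) for p in (workspace, lakehouse, schema, table))

name = four_part_name("Contoso Analytics", "Sales Lakehouse", "inputs_v1", "custodian_positions")
query = f"SELECT * FROM {name}"
print(query)
# → SELECT * FROM `Contoso Analytics`.`Sales Lakehouse`.inputs_v1.custodian_positions
```

The point of the sketch is the boundary discipline: inputs live in a clearly named schema (`inputs_v1` here) that any workspace can address explicitly, rather than in a folder a notebook happens to read.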

The surface side of the model is just as important. Microsoft’s Fabric lifecycle docs say that once data is in OneLake, you can transform it within Fabric without moving it between engines. From there, you can expose it through schemas, through the automatically provisioned SQL analytics endpoint, or through Direct Lake semantic models over Delta tables in OneLake. The SQL analytics endpoint gives you a read-only T-SQL surface over lakehouse Delta tables. Direct Lake is explicitly described as ideal for the gold analytics layer because it reads OneLake Delta tables directly and refreshes by copying metadata rather than replicating the full dataset. That is exactly what a surface should be: a published interface over governed assets, not yet another extraction exercise.

Think about a wealth management product for reconciled holdings and exposures. The ingest boundary might be multiple shortcut transformations over custodian and reference-data files. The transform layer might canonicalize security identifiers, standardize portfolio keys, and compute look-through exposures. The surface might be a versioned schema consumed by quants through SQL and by executives through a Direct Lake semantic model. Same OneLake foundation, explicit boundaries, and far fewer excuses to create side copies “just for reporting.” The model becomes cleaner because each step knows what it is for.

This is another multistep transform pipeline story

The real architectural lesson, though, is not just about ingest. It is about small-step composition. Microsoft’s medallion guidance now explicitly says Materialized Lake Views can be used to implement medallion architecture without building complex pipelines between bronze, silver, and gold. The MLV docs describe declarative transformations, automatic dependency management, built-in data quality rules, optimal refresh, and monitoring. They also note that an MLV can be defined from a table or from another MLV, and that lineage is processed in dependency order. That is exactly what a multistep transform pipeline is supposed to look like: not one giant transformation job, but a graph of smaller, observable steps the platform can understand and operate.

Now read that next to the shortcut transformation documentation and the pattern becomes even more explicit. The shortcut transformations doc not only describes the managed file-to-Delta ingest step; it also says, in plain language, that for further transformations—especially where you need more shaping—you should use Materialized Lake Views for the silver layer. That is a remarkably direct articulation of the chained pattern: start with shortcut transformations to get from file-shaped raw data to governed Delta, then continue with MLVs for the internal transformation graph. Sources stay explicit. Steps stay small. Lineage stays visible. Outputs become cleaner.

This is why I keep coming back to the distinction between medallion as vocabulary and medallion as rigid execution template. Bronze, silver, and gold are useful labels. They are useful teaching tools. They are often a sensible way to describe maturity and intent. But when teams turn those labels into three huge engineering buckets, they often hide too much complexity inside each one. Type normalization, deduplication, survivorship, conformance, rule application, exception handling, and regulatory quality checks get shoved into a few oversized jobs, and then everyone pretends the architecture is clean because the folders are named nicely. The actual engineering is still messy. Multistep transform pipelines smooth out the terrain between those layers by making the in-between work first-class and observable. That is the cleaner option.

Shortcut transformations are now part of that same pattern. They are not merely “how files become bronze.” They are a small transformation step at the ingest edge. MLVs are then small steps in the internal transform graph. Schemas, SQL endpoints, and Direct Lake models become the product surface. Once you see the architecture that way, the old bronze‑silver‑gold staircase starts to look less like a design and more like a loose shorthand for where things broadly sit. The real architecture is the chain of explicit steps between ingest and surface.

That matters a lot in financial services because the hardest work is often not the initial landing and not the final dashboard. It is the middle. In lending, the middle is where delinquency logic, payment reversals, and exposure calculations get reconciled. In property and casualty insurance, it is where claim events are ordered, reserves are interpreted, and policy context is attached. In reinsurance, it is where bordereaux are standardized, treaty mappings are applied, and quality exceptions are isolated. In wealth management, it is where multiple custodians’ versions of the same reality are made coherent enough to publish. Those are exactly the places where smaller, composable transform steps beat giant middle layers every time.

Why financial services teams should care now

Financial services teams should care about this GA moment because so much of the estate is still file-shaped at the edges. Not everything arrives as CDC. Not everything is mirrored. Not everything is already Delta. A large amount of economically important data still shows up as structured files in cloud storage, partner locations, or other OneLake-connected sources. Shortcut transformations being generally available means that edge is now less custom, less brittle, and more governable. The platform can take more responsibility for the boring but essential work of turning structured files into governed Delta tables that stay synchronized.

And that changes where your engineering attention can go. Instead of spending cycles on repetitive file-to-table plumbing, teams can concentrate on the parts of the pipeline that actually differentiate the data product: the business rules, the conformance logic, the quality controls, the contract design, and the surface that consumers trust. That is a much healthier allocation of effort. Your best engineers should be spending more time on exposure logic, reserve interpretation, advisor segmentation, fraud features, or liquidity reporting—not on rebuilding another ingestion ladder for CSV files.

So yes, the obvious story is that OneLake transformations are GA. But the more important story is architectural. The feature makes zero‑unmanaged‑copy more practical because it gives file-heavy estates a managed bridge into Delta. It makes ingest‑transform‑surface more complete because the ingest boundary gets sharper. And it reinforces the same lesson Materialized Lake Views have been teaching: the cleanest modern Fabric pipelines are multistep, explicit, and contract-oriented. Bronze‑silver‑gold still has value. It just works better when it describes the landscape rather than dictating three oversized jumps across it.

Closing thoughts

If you have a backlog full of nightly file copy jobs, fragile parsing notebooks, and “temporary” landing zones that somehow became permanent, this is a good moment to redraw the picture. Start with the input boundary. Let OneLake own more of the copy and synchronization work. Use small-step transformations where the business logic actually lives. And treat the surface as the product contract your consumers are meant to rely on. That is a cleaner architecture than a rigid bronze‑silver‑gold staircase—and now that structured shortcut transformations are GA, it is a much more practical one too.

FinOps for the Data + AI Era: Strong Structures Beat Strong Opinions

The fastest way to turn cloud enthusiasm into executive skepticism is simple: ship something impressive in Data or AI…and then hand Finance a bill no one can explain.

That’s not a tooling problem. It’s a structure problem.

In this post, I’m going to make the case for strong FinOps structures that actively engage Data, AI/ML, and the broader cloud stack—not as a “cost police” function, but as an operating model for technology value. We’ll look at why the scope of FinOps has expanded, what makes Data and AI spend uniquely tricky, and what “strong” actually looks like when it’s working.

Continue reading “FinOps for the Data + AI Era: Strong Structures Beat Strong Opinions”

Six SemPy_labs Functions I Wish More Fabric Teams Used

There’s a moment in most Fabric projects when you realize the hard part isn’t the data model, the lakehouse design, or even the DAX.

It’s the manual work: clicking around to confirm what’s deployed, what depends on what, what’s actually used, and what’s quietly broken.

That’s where Semantic Link Labs (the sempy_labs package) starts to feel like a superpower. Not because it does “magic”—but because it turns Fabric into something you can interrogate and automate with the same discipline you bring to code.

In this post, I’m going to do two things:

  • Give you a quick orientation to SemPy_labs in Fabric and the patterns that make it useful.
  • Walk through a handful of my favorite underused functions—ones that help you extract, validate, and move assets with less guesswork.

If you’re building in Microsoft Fabric and your work touches #PowerBI, this is one of those toolkits that quietly changes what “fast” looks like.

What SemPy_labs is (and why it’s different)

Semantic link is the Fabric feature that connects Power BI semantic models with the Fabric notebook experience—bridging the gap between Power BI and Synapse Data Science in Fabric.

SemPy_labs (Semantic Link Labs) builds on that foundation and focuses on practical “day two” tasks: inspecting item definitions, working with report metadata, checking semantic model health, and wrapping Fabric REST endpoints into functions that feel notebook-native.

It also pairs naturally with sempy.fabric, which provides a REST client for Fabric endpoints (useful when you want to list or enumerate across workspaces/items).

The two patterns that make SemPy_labs click

SemPy_labs is at its best when you lean into two simple patterns:

A. Pull metadata into a DataFrame, then filter like an analyst.
Many functions return pandas DataFrames directly (or can). That means you can slice, group, and join results before you act.

B. Treat Fabric artifacts like “definitions,” not UI objects.
Instead of “the report I click,” it becomes “the report definition I can export, diff, and reason about.” That one mental shift is a gateway to DataOps thinking in Fabric.
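Pattern A can be sketched with a synthetic stand-in for the kind of DataFrame a sempy_labs "list" function returns. The column names below are illustrative placeholders, not the library's exact schema:

```python
import pandas as pd

# Synthetic metadata standing in for a sempy_labs list_* result;
# column names here are illustrative, not the library's exact output.
items = pd.DataFrame({
    "Item Name": ["Executive Sales", "Sales Lakehouse", "Churn Model"],
    "Item Type": ["Report", "Lakehouse", "Notebook"],
    "Workspace": ["Contoso - Prod", "Contoso - Prod", "Contoso - Dev"],
})

# Filter like an analyst: slice, group, and join before you act.
prod_reports = items[
    (items["Item Type"] == "Report") & (items["Workspace"].str.endswith("Prod"))
]
print(prod_reports["Item Name"].tolist())
# → ['Executive Sales']
```

Once results are ordinary DataFrames, everything you already know about pandas becomes a governance tool.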

Quick setup in a Fabric notebook

If you don’t already have the package available in your environment, install it and import what you need:

%pip install semantic-link-labs
import sempy_labs as labs
import sempy.fabric as fabric
from sempy_labs.report import ReportWrapper

This is the basic import pattern you’ll see in most examples and walkthroughs.

My favorite underused SemPy_labs functions

Rather than a “top 10 list,” I think about these as verbs—small moves that unlock bigger workflows.

Extract: get_item_definition

If you only adopt one habit from this post, make it this one: export item definitions early and often.

get_item_definition() retrieves a Fabric item’s definition and can decode the payload for you. It can return a dictionary or a DataFrame.

Why it’s underused: teams still treat definitions as something “inside Fabric,” not something they can inspect and version.

workspace = "Contoso Analytics"
item_name = "Sales Lakehouse"
definition = labs.get_item_definition(
    item=item_name,
    type="Lakehouse",
    workspace=workspace,
    decode=True,
    return_dataframe=False
)
# definition is a dict containing the definition payload/files

Where this pays off:

  • Building repeatable “backup/export” notebooks
  • Creating lightweight diff checks between dev/test/prod
  • Debugging “what changed?” without relying on memory or screenshots
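The diff-check idea can be sketched in a few lines of plain Python. The payload shape below is deliberately simplified and hypothetical; a real definition returned by get_item_definition is a richer structure of parts and files:

```python
# Minimal diff sketch between two exported definitions (e.g., dev vs. test).
# Keys and values here are simplified placeholders for definition parts.
def diff_definitions(dev: dict, test: dict) -> dict:
    keys = set(dev) | set(test)
    return {
        k: {"dev": dev.get(k), "test": test.get(k)}
        for k in sorted(keys)
        if dev.get(k) != test.get(k)
    }

dev_def = {"platform.json": "v1", "lakehouse.metadata.json": "schema-a"}
test_def = {"platform.json": "v1", "lakehouse.metadata.json": "schema-b"}
print(diff_definitions(dev_def, test_def))
# → {'lakehouse.metadata.json': {'dev': 'schema-a', 'test': 'schema-b'}}
```

Run that on exported definitions before and after a deployment and "what changed?" stops being a memory exercise.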

Interrogate: ReportWrapper(...).list_semantic_model_objects()

The ReportWrapper class connects to a Power BI report, retrieves its definition, and exposes a set of report-inspection functions. The key detail: ReportWrapper requires the report to be in PBIR format.

The underused function inside this wrapper is list_semantic_model_objects(), especially with extended=True, because it lets you see which fields are used and whether they still exist in the underlying model.

rpt = ReportWrapper(report="Executive Sales", workspace=workspace)
field_usage = rpt.list_semantic_model_objects(extended=True)
visuals = rpt.list_visuals()
pages = rpt.list_pages()

Why this is suddenly more important: Microsoft has signaled that PBIR is becoming the default report format for new reports in the Power BI service starting in January 2026 (rollout through Feb 2026). That means these PBIR-based inspection workflows become more broadly applicable over time.

If you’ve ever tried to answer “where is this measure used?” by hand… this is the notebook-native alternative.

Count: list_semantic_model_object_report_usage

This one is a governance and refactoring cheat code.

list_semantic_model_object_report_usage() shows semantic model objects and how many times they’re referenced across all reports that rely on the model. It also notes the requirement: reports must be in PBIR format.

usage = labs.list_semantic_model_object_report_usage(
    dataset="Sales Semantic Model",
    workspace=workspace,
    include_dependencies=True,
    extended=True
)
# Now you can filter down to "never used" or "rarely used"

Why it’s underused: teams often do model cleanup by intuition (“I don’t think anyone uses that measure”). This function gives you evidence.

A practical use: before you rename or remove anything, generate a usage snapshot and save it to your lakehouse. That’s cheap insurance.
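Here is what that filtering step might look like on a synthetic snapshot. The column names are placeholders; the real function's output has its own schema:

```python
import pandas as pd

# Synthetic usage snapshot; treat these column names as placeholders for
# whatever list_semantic_model_object_report_usage actually returns.
usage = pd.DataFrame({
    "Object Name": ["Total Sales", "Sales LY", "Legacy Margin"],
    "Object Type": ["Measure", "Measure", "Measure"],
    "Report Usage Count": [14, 3, 0],
})

# Evidence, not intuition: objects no report references today.
never_used = usage[usage["Report Usage Count"] == 0]["Object Name"].tolist()
print(never_used)
# → ['Legacy Margin']
```

Save the full snapshot to your lakehouse before acting on it, and the cleanup decision becomes auditable.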

Map: list_item_connections

Connections are one of those areas where “everything is fine” until it really isn’t.

list_item_connections() returns the list of connections that a specified item is connected to.

connections = labs.list_item_connections(
    item="Executive Sales",
    type="Report",
    workspace=workspace
)

Why it’s underused: connection sprawl happens slowly, and the UI encourages a per-item mindset. In a notebook, you can inventory connections across a workspace, detect drift, and flag surprises (like a prod report quietly pointed at a dev connection).

This is one of those functions that supports better hygiene without requiring tenant-admin superpowers.
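The drift check reduces to a simple filter once you have an inventory. Below, the inventory rows are synthetic (imagine them concatenated from per-item list_item_connections calls), and the "dev in the connection name" heuristic is just an illustration of the idea:

```python
import pandas as pd

# Synthetic connection inventory; in practice you would build this by
# calling list_item_connections per item and concatenating the results.
inventory = pd.DataFrame({
    "Item Name": ["Executive Sales", "Ops Dashboard"],
    "Workspace": ["Contoso - Prod", "Contoso - Prod"],
    "Connection Name": ["sql-prod-01", "sql-dev-03"],
})

# Flag prod items that appear to point at a dev connection.
drift = inventory[
    inventory["Workspace"].str.contains("Prod")
    & inventory["Connection Name"].str.contains("dev")
]
print(drift["Item Name"].tolist())
# → ['Ops Dashboard']
```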

Move: copy_item

There are a lot of ways to promote artifacts in Fabric. copy_item() is my favorite “clean and direct” option when I want to copy an item with its definition from one location to another.

Two parameters that don’t get enough attention:

  • overwrite (obvious, but easy to forget)
  • keep_existing_bindings (quietly important for reports)

labs.copy_item(
    item="Executive Sales",
    type="Report",
    source_workspace="Contoso - Dev",
    target_workspace="Contoso - Test",
    overwrite=True,
    keep_existing_bindings=True
)

Why it’s underused: many teams default to manual copy steps or heavyweight pipelines even when a simple controlled copy is what they need.

Use it for:

  • “Clone to troubleshooting workspace”
  • “Promote a known-good report definition”
  • “Rebuild an environment quickly after re-orgs”

Score: run_model_bpa

Model Best Practice Analyzer (BPA) tends to get framed as “nice to have.” In reality, it’s one of the easiest ways to systematically improve semantic model maintainability.

run_model_bpa() can display an HTML visualization of BPA results. It can also return a DataFrame, export results to a delta table, and run an extended mode that gathers Vertipaq Analyzer statistics for deeper analysis.

bpa_results = labs.run_model_bpa(
    dataset="Sales Semantic Model",
    workspace=workspace,
    extended=True,
    return_dataframe=True
)
# Filter down to high-impact rule violations

Why it’s underused: teams run it once during a crisis, then forget it. The better move is to treat BPA results like test output—something you can trend over time.

This is one of the cleanest “make it measurable” moves you can make in #SemanticLink workflows.
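Trending BPA results can be as simple as snapshotting violation counts per run and comparing them. The rows below are synthetic; real run_model_bpa output has its own schema, so treat the columns as placeholders:

```python
import pandas as pd

# Synthetic BPA snapshots: one row per run, stamped with a run date.
snapshots = pd.DataFrame({
    "run_date": ["2026-01-01", "2026-02-01"],
    "severity": ["Error", "Error"],
    "violations": [12, 7],
})

# Treat BPA like test output: is the violation count trending down?
trend = snapshots.sort_values("run_date")["violations"].diff().iloc[-1]
print(trend)
# → -5.0 (negative means the model is getting cleaner)
```

Append each run's snapshot to a Delta table and the "are we improving?" question gets a chart instead of an argument.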

Two honorable mentions (because they unlock automation)

I won’t go deep on these, but they’re worth calling out because they make the rest of the toolkit easier to operationalize.

get_connection_string

get_connection_string() returns the SQL connection string for a Lakehouse, Warehouse, or SQL endpoint.

This is useful when your notebook needs to hand off connection details to downstream libraries or scripts (carefully, and with the right security posture).

service_principal_authentication

If you’re building repeatable admin or governance notebooks, service principal patterns matter.

service_principal_authentication() establishes authentication via a service principal using secrets stored in Azure Key Vault.

That’s the kind of building block that turns “I ran this once” into “we can schedule this.”

Closing thought

The point of SemPy_labs isn’t to replace the Fabric UI. The point is to give you a second interface—one that’s composable, testable, and easy to repeat.

In this post, I introduced SemPy_labs in the broader Semantic Link ecosystem, then walked through a handful of underused functions that cover the full lifecycle: exporting definitions, interrogating PBIR reports, measuring real usage, mapping dependencies, copying artifacts safely, and scoring model health.

If you’re building in Fabric, pick one of these functions and add it to your default notebook template. The compounding value comes from repetition.

The AI Knowledge Layer Banks Can Actually Govern: Why Azure Foundry IQ Matters to the CTO, CRO, and CISO

Financial services didn’t get cautious by accident. Banks, insurers, and capital markets firms have learned—repeatedly—that the fastest way to turn a promising innovation into a headline is to let it outrun governance.

GenAI is no different. In most institutions, the gap isn’t “we don’t have models.” The gap is that we don’t have a trusted, permission-aware way to ground agents on enterprise knowledge—without quietly bypassing entitlements, data classifications, and audit expectations.

In this post, I’ll lay out what Azure Foundry IQ (Microsoft’s current naming is Foundry IQ inside Microsoft Foundry, formerly Azure AI Foundry) actually is, why it should matter to CTOs, CROs, and CISOs, and how its value shows up in business outcomes, business risk avoided, and compliance risk—specifically in Financial Services.

Continue reading “The AI Knowledge Layer Banks Can Actually Govern: Why Azure Foundry IQ Matters to the CTO, CRO, and CISO”

When the Thing You Care About Almost Never Happens: Rare-Event Modeling as a Fabric Data Product

Rare events are where the money is.

In financial services, the outcomes that barely show up in your data—fraud, default, AML hits, account takeover, operational losses—are the same outcomes that drive outsized loss, regulatory exposure, and customer harm. They’re also the outcomes most likely to embarrass a team that treats model building like a Kaggle exercise: train/test split, maximize accuracy, ship the AUC, call it done.

In this post, I’ll walk through practical techniques for analyzing rare-event problems, why they’re disproportionately valuable in #FinancialServices, how to build them in #MicrosoftFabric’s Data Science and ML capabilities, and then how to pivot from “a model” to “a data product” in the sense we use here: reusable, trustworthy, owned, composable, and contract-driven.

Continue reading “When the Thing You Care About Almost Never Happens: Rare-Event Modeling as a Fabric Data Product”

Beyond Automation: Using SAMR to Explain AI Value in Property & Casualty Insurance Services

Most conversations about AI in property and casualty insurance start with the same promise: “faster, cheaper, smarter.” But in practice, the real question is where AI is being used.

Is it just doing the same work a little quicker… or is it changing the way underwriting, claims, and loss control actually run?

One of the cleanest ways to explain that difference is to borrow a framework from education technology: the SAMR model—Substitution, Augmentation, Modification, Redefinition—originally articulated by Ruben Puentedura. In SAMR, the first two levels are typically “enhancement” and the latter two are “transformation,” because they represent meaningful redesign (or reinvention) of the work itself.

In this post, I’ll map SAMR to the kinds of operational and strategic value AI can create across P&C insurance services (intake, underwriting, claims, fraud, and risk/loss services), staying away from customer chatbots and focusing instead on business process change that actually moves KPIs. Along the way, I’ll flag where AI and Insurance leaders tend to underestimate the “operating model” work required to reach the top of the SAMR ladder.

Continue reading “Beyond Automation: Using SAMR to Explain AI Value in Property & Casualty Insurance Services”

Knowledge Graphs: The Quiet Superpower Behind Trustworthy AI

If you’ve spent any time building with large language models, you’ve felt the tension: they’re brilliant at language, and occasionally too confident about facts. The more “enterprise” your use case becomes—policies, procedures, product catalogs, research, student records, regulated workflows—the more that gap matters.

This post is about the missing layer that closes it. Knowledge graphs give AI something it often lacks: a durable, explicit model of meaning and relationships. We’ll walk through what knowledge graphs really are, why they matter more now than ever, and how graph-based retrieval (GraphRAG) is changing what “good” looks like in modern AI.

Continue reading “Knowledge Graphs: The Quiet Superpower Behind Trustworthy AI”

Straight Through Processing for Documents: When “Touchless” Becomes the Cost-Saving Feature

Most organizations don’t drown in documents because they lack OCR.

They drown because every document creates work-in-the-middle: a person opens an email, downloads an attachment, checks a value, rekeys it into a system, compares it to a second system, and routes it to a third. Multiply that by thousands of invoices, claims, onboarding packets, and compliance forms, and your “document workflow” turns into a labor model.

That’s where Straight Through Processing (STP) comes in.

In this post, I’ll lay out what STP actually means, why it’s the most practical way to think about cost reduction in document-heavy operations, and what “STP-ready” AI document automation requires beyond basic extraction—without anchoring the conversation to any single vendor.

Continue reading “Straight Through Processing for Documents: When “Touchless” Becomes the Cost-Saving Feature”

Semantic Models Aren’t the Finish Line: 10 Underused SemPy Functions for Fabric

If you’ve ever watched a team pour months into a semantic model—only to treat it as “the thing Power BI reads”—you’ve seen a common (and costly) mental model at work.

Semantic models shouldn’t be the last layer before visualization. In Microsoft Fabric, they can be a first-class part of the extended analytics and data science stack: something you can query, validate, profile, and even productize from notebooks. That shift is exactly what SemPy (the Python library behind Semantic Link) makes practical.

In this post, I’m going to do three things:

  • Introduce SemPy for Fabric as the bridge between semantic models and the rest of your Python workflows.
  • Share four “generic” functions that help you discover and understand a model’s surface area.
  • Highlight three functions that make consuming semantic models straightforward, and three that unlock capabilities you’d otherwise spend real time (and compute) rebuilding yourself.

Along the way, I’ll frame these as patterns for turning semantic models into data product surface areas—usable well beyond dashboards. This is where Microsoft Fabric and #SemPy start to feel less like “BI tooling” and more like part of your day-to-day analytics engineering and Data Science workflow.

Continue reading “Semantic Models Aren’t the Finish Line: 10 Underused SemPy Functions for Fabric”

Perfect AI Is the Wrong Standard: Automate the Happy Path and Take the Win

One of the most common complaints I hear about Artificial Intelligence—both from the public and from professionals—is some variation of: “It’s not 100% perfect.”

That reaction is understandable. But it’s also revealing.

In most areas of work, we don’t demand perfection. We demand progress. We accept that humans make mistakes, that processes have variance, and that edge cases exist. Yet the moment a workflow becomes automated—especially when it has “AI” stamped on it—many people quietly shift the standard to flawless execution.

Here’s what I want to do in this post: unpack why “100% perfect” is an unhelpful expectation for AI, and show why automating the happy path (the most common case) can deliver meaningful returns even if exceptions still require human attention.

Continue reading “Perfect AI Is the Wrong Standard: Automate the Happy Path and Take the Win”