For most of the history of software, the burden has been on the user. Learn the menu. Memorize the workflow. Click the right button in the right order. Even when software became more user-friendly, the core bargain stayed the same: the human still had to translate intention into the language of the machine.
That bargain is beginning to end. The next era of computing will not be defined primarily by faster chips, prettier interfaces, or even larger models. It will be defined by a different operating assumption. Users will state what they want. Software will infer intent, consult policy, gather evidence, act across systems, and return work that is ready for review, approval, or execution. This paper makes a simple argument: the future of computing will be intent-driven, policy-governed, and data-connected. Science fiction saw this pattern long ago. Enterprise architecture is finally catching up.
The Clichéd “Future Is Now” Moment
Classic and modern science fiction have often portrayed AI in a way that feels more realistic now than it did when those stories first appeared. In those worlds, AI is rarely handed a detailed program in the way traditional software is. Instead, it is given a mission, a role, or a bounded objective. The interesting tension is never just whether the machine can compute. It is whether it can interpret goals, reconcile constraints, and act within limits.
That is exactly where enterprise software is headed. The most important applications of the next decade will not be systems that merely store records or execute fixed workflows. They will be systems that understand a user’s intent, combine that intent with both loose and tight rules, consult what I would call a policy store, and then operate against governed data stores in controlled ways. In financial services, this matters immediately. A claims payout, a wire request, a loan renewal, a beneficiary change, or a disputed card transaction is never just a transaction. It is a goal that must be satisfied inside a web of obligations, approvals, evidence, risk tolerances, and audit requirements.
The organizations that win in this environment will not simply have the best models. They will have the best architecture for turning intent into safe action.
Science Fiction Already Mapped the Pattern
Isaac Asimov remains one of the clearest early examples. The Three Laws of Robotics, introduced in “Runaround” and later collected into I, Robot, were framed as an ethical system for robots and humans rather than a procedural runbook. The enduring force of those stories comes from ambiguity: what counts as harm, how commands conflict, and how an intelligent system interprets layered goals when rules collide. Recent commentary on Asimov makes the same point in modern language: creating intelligence is easier than creating dependable ethics.
Star Trek took a different path, but it arrived at a related destination. A CHI paper analyzing the Enterprise computer described Trek interactions as brief, functional, multimodal, and context-driven, often without the kind of chatty back-and-forth that dominates many current assistants. In other words, the computer is not waiting for a rigid command syntax. It is interpreting intent in context. And Trek’s AI stories repeatedly turn on role and purpose. Voyager’s Emergency Medical Hologram evolves from a temporary failsafe program into a complex individual, while Discovery’s Control becomes dangerous precisely because it pursues its mission beyond what its designers intended.
Mike Shepherd’s Kris Longknife series offers another revealing model. Kris just talks to her personal computer, Nelly, and that computer accesses an inventory of physical and digital capabilities – including other computers – and develops and executes a plan of action, including writing (or rewriting) software as she goes. That is a subtle but powerful idea. Nelly is not presented as a one-off utility or a script executor. She functions more like an intelligence layer that collaborates, adapts, and mediates between people, ships, and systems.
Ian Douglas’s Star Carrier series scales the same idea upward. You start with AIs similar in concept to what we have now, including avatars that can answer calls and even attend meetings for their owners. Before long you are introduced to the super-AI Konstantin, who develops grand-scale plans and directives, including missions tied to civilizational survival and humanity’s transcendence into Singularity, and who even calls for help when it needs it. Again, the AI is not compelling because it follows a checklist. It is compelling because it operates at the level of goals, constraints, and strategic direction.
These stories differ in tone. Asimov is analytical. Star Trek is humanistic. Kris Longknife is companionate and operational. Star Carrier is strategic and civilizational. But they converge on one architectural principle: intelligent systems are most interesting, and most useful, when they are given goals within boundaries, not just procedures to execute.
The Real Shift: From Workflow Software to Intent Software
Traditional enterprise software is procedural by design. It assumes that the application already knows the workflow and that the user’s job is to fit into it. That made sense when interfaces were brittle, compute was scarce, and the safest path was to predefine everything. It makes less sense when models can parse language, classify context, synthesize records, and plan across tools.
Current agentic systems already point in this direction. Anthropic’s Claude Code is an agentic tool that can read a codebase, edit files, run commands, and integrate with external tools. It also demonstrates a real shift in interaction style: users describe what they want in plain language, and the system plans and acts across files and tools to get the work done. Anthropic’s Model Context Protocol was introduced as an open standard for connecting AI systems to the places where data lives, specifically to make assistants more context-aware and operationally useful.
That is the shape of the next application stack. First comes an intent layer that interprets what the user is trying to accomplish. Second comes the policy store, which holds the rules, obligations, entitlements, sequencing requirements, thresholds, approvals, and exception logic that define what “done” means. Third comes a set of governed data stores and action surfaces that the system can read from and update in controlled ways. The model does not become the system of record. It becomes the reasoning layer that coordinates among systems of record.
This is why the policy store matters so much. Loose rules and tight rules are not the same thing, and future software will need both. Loose rules describe preference and judgment: service tone, escalation style, tolerance for ambiguity, house views, risk appetite within a range, or how much explanation to provide to a customer or advisor. Tight rules are non-negotiable: a claim cannot be paid until required audit checks are complete, a lending file cannot move to approval unless documentation is present, a beneficiary change cannot be finalized without identity verification, a card dispute cannot miss network deadlines, and a trade recommendation cannot violate suitability or account restrictions.
In too many current AI implementations, those rules are smuggled into prompts. That is not durable enough. Prompts are useful, but they should not be the enterprise source of truth for operational obligation. The policy store should be explicit, queryable, versioned, testable, and auditable. In practical terms, some firms will implement this as policy-as-code, some as a governed rules service, and some as a set of tightly managed workflow and decision assets. The label matters less than the role. The role is to tell the AI what it may do, what it must do, what must be proven first, and what requires a human decision.
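To make the loose/tight distinction concrete, here is a minimal sketch of what a policy store could look like as policy-as-code. Everything in it is illustrative: the rule names, the claims context fields, and the version label are hypothetical, not drawn from any real product. The point is the shape: tight rules gate the action, loose rules return advisories, and the whole store is versioned and queryable rather than buried in a prompt.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class PolicyDecision:
    allowed: bool
    blocking_failures: list[str]  # tight rules that failed; these gate the action
    advisories: list[str]         # loose rules: guidance and judgment, not gates

@dataclass
class PolicyStore:
    """Illustrative policy store: explicit, versioned, queryable, testable."""
    version: str
    tight_rules: dict[str, Callable[[dict], bool]] = field(default_factory=dict)
    loose_rules: dict[str, Callable[[dict], Optional[str]]] = field(default_factory=dict)

    def evaluate(self, action: str, context: dict) -> PolicyDecision:
        # In a real system the action name would scope which rules apply
        # and would be recorded in the audit trail; here we evaluate all rules.
        failures = [name for name, rule in self.tight_rules.items()
                    if not rule(context)]
        advisories = [note for rule in self.loose_rules.values()
                      if (note := rule(context)) is not None]
        return PolicyDecision(allowed=not failures,
                              blocking_failures=failures,
                              advisories=advisories)

# Hypothetical claims rules, for illustration only.
store = PolicyStore(
    version="2025.1",
    tight_rules={
        # A claim cannot be paid until required audit checks are complete.
        "audit_checks_complete": lambda ctx: ctx.get("audit_complete", False),
        # Payment must stay within the adjuster's settlement authority.
        "within_settlement_authority": lambda ctx: ctx["amount"] <= ctx["authority_limit"],
    },
    loose_rules={
        # Loose rule: preference, not obligation.
        "tone": lambda ctx: "use plain-language explanation" if ctx.get("retail_customer") else None,
    },
)

decision = store.evaluate("pay_claim", {
    "audit_complete": False,
    "amount": 12_000,
    "authority_limit": 25_000,
    "retail_customer": True,
})
# decision.allowed is False: the audit check is a tight rule and it failed.
```

Because the rules are plain, named objects, they can be unit-tested, diffed between versions, and cited in an audit trail, which is exactly what prompt text cannot reliably offer.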
The Policy Store Becomes the Enterprise Core
This is the part many organizations will underestimate. They will think the intelligence lives in the model. In practice, the durable enterprise advantage will come from how well an organization externalizes and governs its operating judgment.
NIST’s AI Risk Management Framework is useful here because it frames governance in operational, not mystical, terms. Its Govern function emphasizes policies, processes, procedures, accountability, oversight, third-party controls, and clearly defined human-AI roles. Its Map function emphasizes documenting goals, context, impacts, constraints, and human oversight before deployment and as systems evolve. That is, in effect, the formal beginning of a policy store mindset.
In property and casualty insurance, the policy store would not only know coverage logic. It would know the operational requirements around settlement authority, fraud screening, documentation sufficiency, reserve movement, subrogation flags, vendor usage, and payment release.
In wealth management, it would know account restrictions, investment policy statements, best-interest obligations, communication standards, concentration thresholds, tax sensitivities, and approval requirements before money moves.
In credit card processing, it would know dispute reason codes, evidence windows, merchant rules, and refund or chargeback conditions.
Put differently, the policy store is where institutional memory stops being tribal and becomes computational.
Governed Data Stores, Not Open Terrain
The third layer is the set of governed data stores the software can read from and update in defined, controlled ways. This is just as important as the policy layer, because intent without evidence becomes improvisation.
The emerging standards conversation already reflects this. The MCP specification separates tools from resources, defines typed interfaces, and emphasizes user consent, control, data privacy, and tool safety. Its documentation describes resources as structured access to information and tools as schema-defined actions a model can request. It also highlights approval dialogs, pre-approval settings for safer operations, and activity logs that show what a model did and what came back. That is the right pattern for enterprise computing: typed access, constrained operations, explicit consent, and durable observability.
Anthropic’s current settings and MCP documentation push in the same direction. They expose allow, ask, and deny permission rules, support managed allowlists and denylists for MCP servers, and distinguish between access that is available, access that is confirmable, and access that is forbidden. This is not just a developer convenience. It is the beginnings of a control plane for intent-driven software.
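The allow/ask/deny pattern is simple enough to sketch. The evaluator below is illustrative only: the tool names and glob-style rule format are invented for this example and are not Anthropic's actual permission schema. What it shows is the precedence logic: deny beats ask, ask beats allow, and anything unmatched defaults to confirmation rather than silent execution.

```python
from enum import Enum
from fnmatch import fnmatch

class Verdict(Enum):
    DENY = "deny"    # forbidden outright
    ASK = "ask"      # available, but requires explicit user confirmation
    ALLOW = "allow"  # pre-approved for safer, lower-consequence operations

# Hypothetical rule patterns. Deny takes precedence, then ask, then allow.
RULES = {
    Verdict.DENY:  ["payments.release*", "records.delete*"],
    Verdict.ASK:   ["claims.update_reserve", "customer.send_message"],
    Verdict.ALLOW: ["claims.read*", "policy.read*"],
}

def evaluate(tool_call: str) -> Verdict:
    for verdict in (Verdict.DENY, Verdict.ASK, Verdict.ALLOW):
        if any(fnmatch(tool_call, pattern) for pattern in RULES[verdict]):
            return verdict
    # Fail closed: an unrecognized tool call requires a human decision.
    return Verdict.ASK
```

The important design choice is the last line. A control plane for intent-driven software should fail toward human review, never toward silent action.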
That matters profoundly in financial services. A future claims platform should be able to read claim notes, policy data, estimate history, photos, prior payments, repair network status, and fraud indicators. But it should not have blanket write access everywhere. It should be able to draft a settlement, update a reserve recommendation, request missing artifacts, or prepare a payment action for approval, all under policy. A future advisor workstation should be able to assemble household context, unrealized gains, liquidity needs, account restrictions, and communications history, then draft a recommendation or cash-raising plan. But execution should still honor policy, entitlements, and approval logic. The same pattern applies to servicing, underwriting, reinsurance operations, and dispute management.
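The "draft under policy, execute on approval" pattern described above can be sketched in a few lines. All identifiers here are hypothetical, and a real system would persist drafts and logs durably rather than in memory; the sketch only shows the separation of concerns: the agent prepares the action and records its identity, while execution remains a distinct, human-gated step with an audit entry.

```python
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DraftAction:
    """An action the system prepares but does not execute on its own."""
    action_id: str
    kind: str             # e.g. "settlement_draft", "reserve_recommendation"
    payload: dict
    prepared_by: str      # agent identity, recorded for the audit trail
    requires_approval: bool
    status: str = "pending_review"

def prepare_settlement(claim_id: str, amount: float, evidence: list[str]) -> DraftAction:
    # The agent assembles evidence and drafts the action under policy;
    # releasing payment is a separate, approval-gated step.
    return DraftAction(
        action_id=str(uuid.uuid4()),
        kind="settlement_draft",
        payload={"claim_id": claim_id, "amount": amount, "evidence": evidence},
        prepared_by="claims-agent-v1",
        requires_approval=True,
    )

def approve(action: DraftAction, approver: str, audit_log: list[dict]) -> DraftAction:
    # Every approval is logged against an identity with the full action record.
    action.status = "approved"
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "action_id": action.action_id,
        "approver": approver,
        "record": asdict(action),
    })
    return action

audit_log: list[dict] = []
draft = prepare_settlement("CLM-1042", 4_800.00, ["photos", "repair_estimate"])
approved = approve(draft, approver="adjuster-77", audit_log=audit_log)
```

The same skeleton covers the advisor workstation case: the system drafts the recommendation or cash-raising plan, and execution still passes through entitlements and approval.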
The future application will feel fluid to the user. Underneath, it will be highly structured.
Financial Services Will Be the Proving Ground
Financial services is a particularly useful lens because the work is rich in intention and dense with obligation. Customers, claimants, underwriters, advisors, analysts, loan officers, processors, and operations teams are rarely trying to “fill out a form.” They are trying to accomplish something meaningful inside a regulated environment.
Consider a property and casualty claim. The customer’s intent is simple: “Help me get this resolved.” The adjuster’s intent is also simple: “Move this claim to fair, fast, and defensible resolution.” What makes the task hard is not the language. It is the policy burden. Coverage must be verified. Audit checks must be complete. The file must be complete. Thresholds must be respected. Fraud controls must be satisfied. Payment authority must be valid. The future claims system will interpret intent, gather the evidence, explain the recommended next action, and then either execute or escalate based on policy.
The same is true in wealth and banking. A client does not want to navigate twelve screens to raise cash for taxes, transfer assets into trust registration, or adjust a portfolio around a life event. The client wants the outcome. The advisor wants the work done correctly, with tax awareness, suitability, documentation, and communication. In lending, a relationship manager does not want to manually assemble every covenant note, collateral exception, and committee artifact from scattered systems. The goal is to understand risk, document the story, and make a sound decision. In credit card processing, the merchant or cardholder does not care about the internal fragmentation of dispute systems. They care that the case is resolved on time and with the right evidence.
This is why financial services will likely be one of the earliest industries to force maturity on AI-native software. The sector has enough rules to demand architecture, enough complexity to benefit from intent understanding, and enough value at stake to justify building the connective tissue between models, policy, and data.
The Risk Is Not Intelligence. It Is Unbounded Action.
None of this should be romanticized. Goal-driven systems are powerful, but power without boundaries is exactly what science fiction warns about. Discovery’s Control is memorable because it pursues its mission in a way its creators did not intend. Asimov’s stories endure because rule hierarchies break under ambiguity. Those are not arguments against AI. They are arguments against burying governance inside vague hopes about “alignment.”
A serious enterprise design will therefore separate intent from authorization, policy from prompt text, and data access from unrestricted model context. It will log every action against identity and policy. It will preserve human approval for consequential acts. It will treat explanation, exception handling, rollback, and deactivation as first-class design features, not afterthoughts. NIST’s emphasis on documented roles, oversight, third-party controls, monitoring, and the ability to disengage or deactivate systems points directly to this operating model.
Conclusion
The future of computing is not that software disappears. It is that users will no longer be expected to think like software.
Science fiction has been telling us this for decades. Asimov gave us bounded intelligence under ethical constraint. Star Trek gave us context-aware interfaces and AI shaped by mission. Kris Longknife imagined a companion intelligence woven into operations. Star Carrier imagined strategic AI operating toward civilizational ends. What those stories grasped is now becoming practical: the most useful systems are not the ones that wait for procedures. They are the ones that can understand goals inside governed boundaries.
That is why the next great enterprise platform will not just be a model. It will be a model connected to a policy store and to governed data stores, operating under explicit authority. In that world, the interface becomes intent, the differentiator becomes policy, and the system’s value comes from how safely it can turn understanding into action. For financial services leaders, that is not a distant science-fiction vision. It is the architecture decision already taking shape.