SPH‑5: Post‑AI Enterprise Software Architecture

Version 1.0.0

Executive summary

Enterprise software should converge on a layered architecture where agent, human, and team experiences are governed by explicit business rules, powered by a coherent data management layer, and orchestrated through an enterprise‑standard, enterprise‑secure AI fabric that interoperates with AI embedded in and exposed by all systems in the portfolio.


Strategic Principle Hypothesis (structured)

Claim
Enterprises should evolve their software architecture so that agent, human, and team (human‑plus‑agent) experiences are (1) governed by explicit enterprise business rules, (2) powered by a coherent data management layer, and (3) orchestrated through an enterprise‑standard, enterprise‑secure AI fabric that interoperates with AI embedded in and exposed by all systems in the portfolio.

Qualifier
Most relevant for enterprises with mixed estates (on‑premise, private cloud, SaaS), growing agent/copilot usage, and significant expected system churn over 3–10 years.

Grounds

  • Business rules are scattered across documents, configuration, and institutional memory, making it hard to guarantee consistent behavior or to change it quickly; agents executing those rules at scale demand that they be explicit.
  • Data architectures (warehouses, lakehouses, fabrics, domain data products) are already evolving to support analytics and AI; they are the natural backbone for governed state and data use.
  • Experiences for agents, humans, and teams must align with those rules and data, optimizing dashboards, drill‑downs, decisions, and execution flows for each audience.
  • An AI fabric is required to connect rules, data, and experiences, providing identity‑, policy‑, and observability‑aware access to AI capabilities across the estate.
  • Inboard AI can only be treated as “inside” the fabric when it runs on tenant‑bounded models and exposes telemetry/hooks; otherwise it should be disabled or treated as outboard and routed through the fabric.
  • Outboard AI interfaces that route through the enterprise fabric centralize control, observability, and spend; legacy systems can be “wrapped” via the fabric to extend their useful life.

Warrant
When many systems must contribute to coherent experiences for humans and agents, architecture must standardize rules, data, and AI mediation; otherwise, behavior fragments and governance becomes intractable.

Assumptions

  • Vendors will be willing, or can be pressured, to expose AI‑ready interfaces and support enterprise fabrics.
  • Enterprises will enforce turn‑off / exit disciplines where vendors cannot meet minimum inboard/outboard requirements.

Narrative essay

From application‑centric to rule‑ and experience‑centric

For decades, enterprise architecture diagrams have revolved around applications: CRM, ERP, HR, finance, custom systems, integration buses. AI is pushing us toward a different center of gravity.

In the emerging architecture, what matters first are business rules: how you onboard a customer, price risk, approve an exception, close a case. These rules live today in policy documents and scattered configuration. Tomorrow, they must be explicit, executable, and governable, because agents will be carrying them out at scale.
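What "explicit, executable, and governable" might look like in practice can be sketched as rules-as-data: each rule carries an identifier, a version, and a machine-checkable predicate, so both humans and agents execute the same logic. All names here (the rule, its fields, the threshold) are illustrative assumptions, not a prescribed rule-engine design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class BusinessRule:
    """An explicit, versioned rule that humans and agents execute identically."""
    rule_id: str
    version: str
    description: str
    applies_to: Callable[[dict], bool]   # predicate over a case/context dict
    decide: Callable[[dict], str]        # returns a decision code

# Illustrative rule: large discounts require an approval exception.
discount_exception = BusinessRule(
    rule_id="pricing.discount.exception",
    version="2024-06",
    description="Discounts above 20% require manager approval.",
    applies_to=lambda ctx: ctx.get("discount_pct", 0) > 20,
    decide=lambda ctx: "ESCALATE_TO_MANAGER",
)

def evaluate(rules: list[BusinessRule], ctx: dict) -> list[tuple[str, str]]:
    """Return (rule_id, decision) for every rule that applies to this context."""
    return [(r.rule_id, r.decide(ctx)) for r in rules if r.applies_to(ctx)]
```

Because the rule is data with a version, it can be reviewed, diffed, and audited like any other governed artifact, which is what makes agent execution at scale tractable.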

Second comes the data layer. Many enterprises are already investing in warehouses, lakehouses, fabrics, and domain data products. This is where you say: here is the state of the business, here is how it may be joined and shared, here is how long we keep it, here is how we forget it.
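The "how it may be joined and shared, how long we keep it, how we forget it" stance can be expressed as a declarative contract attached to each domain data product. The field names and the crypto-shredding erasure method below are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductContract:
    """Declarative contract for a domain data product: what it holds,
    which keys it may be joined on, how long it is kept, how it is forgotten."""
    name: str
    owner_domain: str
    retention_days: int
    joinable_keys: list[str] = field(default_factory=list)  # approved join columns
    erasure_method: str = "hard_delete"                     # how "forgetting" happens

customer_profile = DataProductContract(
    name="customer_profile",
    owner_domain="crm",
    retention_days=730,
    joinable_keys=["customer_id"],
    erasure_method="crypto_shred",
)

claims_history = DataProductContract(
    name="claims_history",
    owner_domain="claims",
    retention_days=3650,
    joinable_keys=["customer_id", "claim_id"],
)

def may_join(a: DataProductContract, b: DataProductContract, key: str) -> bool:
    """A cross-domain join is allowed only on keys both contracts approve."""
    return key in a.joinable_keys and key in b.joinable_keys
```

Encoding sharing and retention as data lets the fabric and the experiences enforce them mechanically rather than by convention.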

Third are the experiences: for agents, for humans in roles, and for mixed teams. Dashboards, co‑pilot panes, agent workbenches, automated playbooks. These are where rules and data meet reality.

Finally—crucially, but not first—you have the AI fabric. This is the layer that retrieves context, calls tools, orchestrates workflows, and mediates between experiences and data under identity‑aware, policy‑aware, observable control.
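The mediation role of the fabric can be sketched as a single choke point through which every model call passes: it checks who is asking (identity), whether they may (policy), and records what happened (observability). The class, policy signature, and stub model below are assumptions, not a reference implementation.

```python
import logging
from typing import Callable

log = logging.getLogger("ai_fabric")

class PolicyDenied(Exception):
    """Raised when a principal is not entitled to a capability."""

class AIFabric:
    """Mediates every model call: identity-aware, policy-aware, observable."""
    def __init__(self, policy: Callable[[str, str], bool], model: Callable[[str], str]):
        self._policy = policy   # (principal, capability) -> allowed?
        self._model = model     # underlying model endpoint (stubbed here)

    def complete(self, principal: str, capability: str, prompt: str) -> str:
        if not self._policy(principal, capability):
            log.warning("denied principal=%s capability=%s", principal, capability)
            raise PolicyDenied(f"{principal} may not use {capability}")
        log.info("allowed principal=%s capability=%s chars=%d",
                 principal, capability, len(prompt))
        return self._model(prompt)

# Illustrative wiring: one agent identity entitled to one capability.
fabric = AIFabric(
    policy=lambda p, cap: (p, cap) in {("claims-agent", "summarize")},
    model=lambda prompt: f"[summary of {len(prompt)} chars]",
)
```

Because every call crosses this layer, entitlement changes, audit trails, and spend controls live in one place instead of being re-implemented per application.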

Inboard and outboard AI in this architecture

Vendor‑embedded AI (“inboard”) can live inside this architecture, but only if it:

  • Runs on tenant‑isolated or contractually bounded models where enterprise data is not used to train shared foundation models and does not leave agreed boundaries.
  • Exposes configuration hooks and telemetry so identity‑aware AI security and AI‑aware SOC operations can see which data it accesses, which actions it takes, and which prompts or contexts drive those actions.

If providers cannot meet those requirements, enterprises should:

  • Turn embedded AI features off by default.
  • Re‑evaluate whether and how long to keep those systems in the portfolio.
  • Prefer patterns where the system exposes APIs or events and AI is provided through the enterprise AI fabric instead.
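One plausible reading of the decision logic above is a three-way classification of each vendor's embedded AI: inside the fabric when both conditions hold, routed as outboard when data boundaries hold but telemetry is missing, and disabled by default otherwise. The profile fields and decision labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InboardAIProfile:
    """Minimum facts needed to classify a vendor's embedded AI."""
    tenant_bounded_models: bool     # data stays within agreed boundaries
    exposes_telemetry_hooks: bool   # SOC can see data access, actions, prompts

def classify_inboard(profile: InboardAIProfile) -> str:
    """Map an embedded-AI posture to a portfolio decision."""
    if profile.tenant_bounded_models and profile.exposes_telemetry_hooks:
        return "inside_fabric"       # may run as part of the governed estate
    if profile.tenant_bounded_models:
        return "route_via_fabric"    # treat as outboard; mediate externally
    return "disable_by_default"      # fails the minimum bar; re-evaluate vendor
```

Such a function is less important as code than as a forcing device: it makes the enterprise's minimum bar explicit enough to apply uniformly across the portfolio.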

For “outboard” AI, the technically easy pattern is for every app to call models directly. The preferred pattern is different: suppliers expose AI‑ready interfaces that route through the enterprise AI fabric, so identity, data‑handling, and observability controls are consistent and AI spend can be governed centrally.
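The preferred pattern can be sketched as dependency inversion: the supplier's application depends only on an abstract fabric interface, and the enterprise injects the concrete endpoint at deployment time, so no app embeds a vendor model SDK directly. The protocol name, app, and capability string are illustrative assumptions.

```python
from typing import Protocol

class FabricClient(Protocol):
    """The only AI interface a supplier app sees; the enterprise decides
    which fabric endpoint stands behind it."""
    def complete(self, capability: str, prompt: str) -> str: ...

class TicketTriageApp:
    """Outboard-AI pattern: the app never calls a model vendor directly;
    it is handed a fabric client when deployed."""
    def __init__(self, ai: FabricClient):
        self._ai = ai

    def triage(self, ticket_text: str) -> str:
        return self._ai.complete("triage", f"Classify urgency: {ticket_text}")

class StubFabricClient:
    """Stand-in for the enterprise fabric endpoint."""
    def complete(self, capability: str, prompt: str) -> str:
        return "P1" if "outage" in prompt else "P3"
```

The design choice is that swapping models, adding policy checks, or metering spend happens behind `FabricClient` without touching any supplier application.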

Wrapping the past, shaping the future

As this rules–data–experiences–fabric architecture matures, enterprises can “wrap” legacy systems:

  • Exposing just enough data and actions via governed interfaces to support agent and human experiences.
  • Extending the useful life of systems whose vendors are not innovating quickly enough, while planning eventual replacement.
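The wrapping pattern above can be sketched as an allow-listed adapter: the legacy system keeps its wide internal API, but the fabric and the experiences only ever reach the actions the enterprise has chosen to expose. The legacy system, its methods, and the allow-list are hypothetical.

```python
class LegacyPolicySystem:
    """Stand-in for a legacy app with a wide, ungoverned internal API."""
    def read_policy(self, policy_id: str) -> dict:
        return {"id": policy_id, "status": "active"}

    def delete_policy(self, policy_id: str) -> None:
        raise RuntimeError("destructive action")  # never reachable via wrapper

class GovernedWrapper:
    """Exposes 'just enough' of the legacy system: an allow-listed set of
    operations that agent and human experiences may call."""
    ALLOWED = {"read_policy"}

    def __init__(self, legacy: LegacyPolicySystem):
        self._legacy = legacy

    def call(self, action: str, **kwargs):
        if action not in self.ALLOWED:
            raise PermissionError(f"action {action!r} is not exposed")
        return getattr(self._legacy, action)(**kwargs)
```

Growing or shrinking `ALLOWED` is then a governance decision rather than an integration project, which is what lets the wrapper extend a legacy system's useful life on the enterprise's terms.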

Over time, systems that cannot plug into this structure will be replaced by systems that can. As AI lowers the friction of building features, vendors will need new ways to earn durable margins—likely through high‑quality, pseudonymized benchmarking and analytics that respect enterprise data ownership.