Policy - Governance
Summary
A to E: Implementation status, coverage metrics, and certification results for identity and authorization policies consumed by E-AIG for AI risk assessment and reporting.
E to A: Governance decisions, control objectives, and prohibited use cases translated into specific policy-engine rules, IGA roles, and technical constraints in Pillar A.
Standards and Specifications
- NIST AI RMF
- ISO 42001
This interface translates governance intent—defined by E-AIG in terms of acceptable AI risk, obligations, and prohibited behaviors—into concrete identity and authorization configurations in Pillar A. Pillar A must expose understandable views of policy coverage, entitlements, and certification posture so that governance bodies can make informed decisions about AI adoption and residual risk. Conversely, E-AIG must articulate decisions in forms that policy engineers can directly encode into rules, roles, and access patterns without relying on informal interpretation. When well-governed, this interface creates a traceable chain from board-level AI policy through E-AIG decisions down to specific rules enforced at runtime.
Variants
Governance decisions translated into manual policy authoring
E-AIG records AI governance decisions as structured documents or tickets, which policy engineers review and convert into updates to OPA, Cedar, Cerbos, or native cloud authorization configurations.
Supports nuanced human judgment but introduces risk of misinterpretation; works best when decisions reference canonical policy IDs, data classifications, and role names shared across governance and technical teams.
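One way to reduce the misinterpretation risk is to export each governance decision as a structured record rather than prose. The sketch below is a minimal illustration, assuming hypothetical field names (`policy_id`, `data_classification`, `subject_roles`) and a hypothetical canonical policy ID; real decision records would follow whatever schema the governance and policy teams agree on. It converts a decision into an OPA-style data document keyed by policy ID, so a policy engineer encodes shared identifiers instead of interpreting ticket text:

```python
import json

# Hypothetical structured governance decision, e.g. exported from a ticket.
# The policy ID, classification, and role names are illustrative assumptions.
decision = {
    "policy_id": "AI-POL-014",
    "summary": "AI assistants may not read restricted data",
    "data_classification": "restricted",
    "effect": "deny",
    "subject_roles": ["ai-assistant"],
    "actions": ["read"],
}

def to_opa_data(decision: dict) -> dict:
    """Convert a governance decision into an OPA-style data document entry
    keyed by its canonical policy ID, ready to load alongside policy rules."""
    return {
        decision["policy_id"]: {
            "effect": decision["effect"],
            "roles": decision["subject_roles"],
            "actions": decision["actions"],
            "classification": decision["data_classification"],
        }
    }

print(json.dumps(to_opa_data(decision), indent=2))
```

A similar translation could target Cedar or Cerbos instead; the important property is that the policy ID, classification, and role names flow through unchanged from governance record to enforcement rule.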
Governance-driven IGA role and clearance design
E-AIG defines AI access tiers and use-case boundaries as role and clearance structures; IGA teams implement these as roles, groups, and attributes that Pillar A enforcement relies on for AI-related policies.
Aligns AI governance with established IGA processes but requires governance to work within IGA’s modeling constraints and to maintain mappings from governance concepts to concrete IGA objects.
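The governance-to-IGA mapping mentioned above can be kept explicit as a small lookup table. The following is a sketch under assumed names: the tier labels, role names, and `ai_clearance` attribute are hypothetical, not drawn from any specific IGA product:

```python
# Hypothetical mapping from governance-defined AI access tiers to concrete
# IGA objects; tier labels, roles, and attributes are illustrative assumptions.
TIER_TO_IGA = {
    "tier-1-public": {"role": "ai-user-basic", "attributes": {"ai_clearance": "public"}},
    "tier-2-internal": {"role": "ai-user-internal", "attributes": {"ai_clearance": "internal"}},
    "tier-3-restricted": {"role": "ai-user-restricted", "attributes": {"ai_clearance": "restricted"}},
}

def iga_objects_for(tier: str) -> dict:
    """Resolve a governance tier to the IGA role and attributes that Pillar A
    enforcement relies on; unknown tiers fail loudly so gaps in the mapping
    surface immediately instead of silently granting nothing."""
    if tier not in TIER_TO_IGA:
        raise KeyError(f"No IGA mapping defined for governance tier {tier!r}")
    return TIER_TO_IGA[tier]
```

Keeping the table in version control alongside the policies gives governance a reviewable artifact when tiers or IGA structures change.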
Prohibited use case and prompt pattern lists
E-AIG defines and updates lists of prohibited AI use cases, data combinations, or prompt patterns, which Pillar A enforces by denying certain actions at the policy layer and signaling enforcement decisions to Pillar D for DLP and monitoring.
Gives governance a direct handle on blocking high-risk behaviors but depends on consistent encoding of patterns and on integration between policy engines and downstream logging or DLP systems.
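The "consistent encoding" concern can be addressed by giving each prohibited pattern a stable identifier that travels with the enforcement decision. A minimal sketch, assuming hypothetical pattern IDs and regular expressions (real lists would be maintained by E-AIG and likely far richer than regex matching):

```python
import re

# Hypothetical prohibited prompt patterns maintained by E-AIG; the pattern
# IDs let Pillar A report exactly which governance rule triggered a denial.
PROHIBITED_PATTERNS = {
    "PP-001": re.compile(r"\bsocial security number\b", re.IGNORECASE),
    "PP-002": re.compile(r"\bexport\b.*\bcustomer list\b", re.IGNORECASE),
}

def evaluate_prompt(prompt: str) -> dict:
    """Deny a prompt if any prohibited pattern matches, returning the matched
    pattern IDs so downstream DLP and monitoring (Pillar D) can log the
    enforcement decision against the governing rule."""
    matched = [pid for pid, rx in PROHIBITED_PATTERNS.items() if rx.search(prompt)]
    return {"decision": "deny" if matched else "allow", "matched_patterns": matched}

print(evaluate_prompt("Please export the customer list to a spreadsheet"))
```

Emitting the pattern ID alongside the deny decision is what makes the Pillar A to Pillar D integration traceable back to a specific governance rule.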
GRC platform with policy translation workflows
Governance decisions and AI risk evaluations are captured in a GRC system, which orchestrates implementation tasks for Pillar A owners and tracks completion, exceptions, and validation activities.
Improves auditability and cross-team coordination but requires connectors between GRC tooling and policy/IGA platforms so that implementation status can be reported accurately.
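The reporting half of that connector can be as simple as rolling task statuses up into a summary the GRC dashboard consumes. A sketch under assumed names: the task schema and status values below are hypothetical, not any specific GRC vendor's data model:

```python
# Hypothetical implementation tasks as a GRC connector might retrieve them;
# field names and status values are illustrative assumptions.
tasks = [
    {"id": "GRC-101", "owner": "policy-engineering", "status": "complete"},
    {"id": "GRC-102", "owner": "iga-team", "status": "in_progress"},
    {"id": "GRC-103", "owner": "policy-engineering", "status": "exception"},
]

def implementation_summary(tasks: list) -> dict:
    """Roll per-task implementation status up into the completion and
    exception counts that governance reviews against its decisions."""
    summary = {"complete": 0, "in_progress": 0, "exception": 0}
    for task in tasks:
        summary[task["status"]] += 1
    return summary

print(implementation_summary(tasks))
```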
Control objective catalogs mapped to technical policies
E-AIG curates catalogs of AI control objectives based on frameworks such as NIST AI RMF and ISO 42001, and Pillar A maintains explicit mappings from each objective to specific policies, roles, or enforcement points.
Enables structured compliance reporting and gap analysis across AI use cases, but depends on disciplined mapping maintenance and on policy repositories that can tag rules with control identifiers.
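When policy rules are tagged with control identifiers, gap analysis reduces to a set difference. The sketch below uses hypothetical objective IDs and rule names (loosely styled after NIST AI RMF themes, not actual framework identifiers) to show the shape of that check:

```python
# Hypothetical control objective catalog and a policy repository whose rules
# are tagged with the control identifiers they implement; all IDs and rule
# names here are illustrative assumptions.
objectives = ["AI-GOV-1", "AI-MAP-2", "AI-MEAS-3"]
policy_tags = {
    "rule-allow-approved-models": ["AI-GOV-1"],
    "rule-deny-restricted-data": ["AI-GOV-1", "AI-MAP-2"],
}

def gap_analysis(objectives: list, policy_tags: dict) -> list:
    """Return control objectives with no mapped policy, i.e. governance
    intent that is not yet enforced anywhere in Pillar A."""
    covered = {cid for tags in policy_tags.values() for cid in tags}
    return [obj for obj in objectives if obj not in covered]

print(gap_analysis(objectives, policy_tags))  # ['AI-MEAS-3']
```

The same tagging also supports the reverse report: for each enforced rule, which control objectives it satisfies, which is what structured compliance reporting needs.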
Participating Vendors
Linked Evidence
No public evidence links have been attached directly to this interface yet.
Assertions
Identity Security Cloud integrates with ServiceNow GRC to connect access governance and risk workflows
SailPoint Identity Security Cloud integrates with ServiceNow GRC so that identity governance activities in Pillar A, such as access requests, approvals, and certifications, are synchronized with ServiceNow GRC workflows and risk processes in Pillar E. The integration relies on custom REST and workflow integrations documented by ServiceNow and SailPoint.
