Making sense of risk in a world of change.

Governing the Future: Reimagining Risk and Control Taxonomy for the Generative AI Era

1. Introduction

The advent of Generative AI offers unprecedented opportunities for innovation, efficiency, and value creation across industries. However, this transformative potential is accompanied by a new and complex landscape of risks that traditional governance, risk, and compliance (GRC) frameworks are ill-prepared to manage.

The core challenge lies in the fundamental nature of these systems: they are probabilistic rather than deterministic, opaque rather than transparent, and data-dependent in ways that make traditional IT risk management approaches inadequate.

Simply adding “AI Risk” to existing risk registers will not be enough; we need a complete rethinking of how we approach AI governance. The solution lies in a process-centric risk and control taxonomy specifically designed around the Gen AI lifecycle, moving beyond high-level principles to offer specific, actionable control activities tailored to the novel threats posed by this technology.

2. The Paradigm Shift

For decades, organizations have successfully relied on established enterprise risk management frameworks like COSO and ISO 31000. These frameworks were built on fundamental assumptions about how business processes and technological systems operate: they are knowable, predictable, and auditable.

Generative AI violates every one of these core assumptions. It operates on entirely different principles:

  • Probabilistic, Not Deterministic: Traditional software operates deterministically – given the same input, it will always produce the same output. This predictability forms the foundation of conventional testing and validation controls. Gen AI models, however, are inherently probabilistic systems that generate outputs based on statistical patterns learned from vast datasets. The same prompt can yield different responses, fundamentally breaking traditional “expected outcome” testing methodologies.
  • Opaque by Design: The “black box” nature of deep learning models presents perhaps the greatest challenge to traditional audit and compliance approaches. With billions of parameters, it’s practically impossible to trace the path from specific inputs to specific outputs. When a model produces biased or harmful content, identifying the root cause within the model’s internal workings becomes an infeasible task, challenging fundamental principles of accountability and auditability.
  • Logic Forged from Data: Unlike traditional software where data is processed by explicit code, a Gen AI model’s behavior, knowledge, and biases are direct reflections of its training data. This elevates data integrity, provenance, and quality from supporting concerns to primary sources of critical enterprise risk. A flaw in training data is equivalent to a critical bug in source code, but far more difficult to detect and remediate.
  • Emergent and Unintended Capabilities: Large Language Models (LLMs) can exhibit abilities that were never explicitly programmed or anticipated during training. While sometimes beneficial, these emergent properties create vast landscapes of “unknown unknowns” that static risk assessments cannot capture. This includes “hallucinations” – where a model generates plausible but entirely fabricated information.

3. A New Taxonomy: Structuring Risk Around the Gen AI Lifecycle

Traditional risk taxonomies classify threats into categories like Strategic, Operational, Financial, and Compliance risks, with IT risks traditionally managed as part of Operational Risk through frameworks such as COBIT and NIST RMF. These frameworks have been effective for deterministic, rule-based systems, offering structured methods for control implementation and monitoring.

To effectively manage Gen AI risks, we must structure our approach around the technology’s unique end-to-end lifecycle. This process-centric model allows risk managers to map specific risks to the precise activities where they originate, enabling proactive prevention rather than reactive management.

The Gen AI lifecycle encompasses six distinct phases, each presenting unique risk profiles and control requirements:

Phase 1: Scoping & Use Case Definition

This foundational phase establishes strategic intent and moves from high-level business problems to well-defined AI initiatives. Key activities include problem definition, feasibility analysis, objective setting, stakeholder alignment, and initial risk assessment.

Critical risks include:

  • Strategic Misalignment Risk: Selecting use cases where Gen AI provides marginal value or creates unacceptable risk levels, leading to wasted resources and negative ROI.
  • Undefined Error Tolerance: Launching applications without formally defined tolerance for inherent issues like hallucinations or bias. An acceptable error rate for an internal marketing draft tool is vastly different from a customer-facing financial advice bot.

Essential controls include establishing AI Governance/Ethics Committees with cross-functional representation (business, IT, legal, risk, ethics) requiring formal business cases with risk assessments, and defining Board-approved risk appetite statements specifically addressing AI-related risks.

Phase 2: Data Management & Preparation

In Gen AI systems, data is not merely input – it becomes the raw material from which the model’s logic and capabilities are formed. This phase encompasses data acquisition, governance, cleaning, and pre-processing.

Critical risks include:

  • Copyright Infringement Risk: Training on copyrighted content without proper licensing, exposing organizations to significant legal liability.
  • Data Privacy Violation Risk: Ingesting datasets containing PII, PHI, or other sensitive data in violation of regulations like GDPR or HIPAA.
  • Inherent Bias Risk: Using training data that reflects historical or societal biases, leading to discriminatory outputs leading to unfair, unethical, and illegal outcomes in areas like hiring and lending.

Essential controls include maintaining comprehensive data source/origin logs, conducting formal bias assessments on training datasets, employing data anonymization or pseudonymization techniques and establishing secure data lineage to fulfill user rights requests, such as those for data erasure and portability, as mandated by privacy regulations.

Phase 3: Model Selection & Acquisition

This phase involves the critical build vs. buy decision, with significant implications for cost, time-to-market, and risk profile.

Critical risks include:

  • Supply Chain/Third Party Risk: Procuring a pre-trained model from a third-party or open-source repository that has been maliciously tampered with to include hidden backdoors, vulnerabilities, or trained-in malicious behaviors.
  • Shadow AI Risk: The proliferation of unvetted public Gen AI tools used by employees for business purposes. This bypasses all corporate security and privacy controls and is a primary source for data leakage and IP (Intellectual Property) loss.

Essential controls include establishing formal vendor risk management programs for AI providers, maintaining centralized catalogs of approved external models, implementing technical controls such as DLP (Data Loss Prevention) systems to detect data transmission to external AI platforms, and providing employees with approved enterprise-grade alternatives.

Phase 4: Model Customization & Alignment

This core development stage modifies general-purpose models for specific tasks through techniques like prompt engineering, fine-tuning, and Reinforcement Learning from Human Feedback (RLHF). This phase provides a prime opportunity for attackers to corrupt Model’s behavior.

Critical risk include:

  • Data Poisoning Risk: An attacker subtly injects malicious examples into the fine-tuning data or Retrieval-Augmented Generation (RAG) knowledge base. This can manipulate the model’s behavior, create hidden triggers, or degrade its performance over time.

Essential controls include implementing strict access controls and version management for training datasets, requiring a “four-eyes” change review process, employing anomaly detection during training, and conducting comprehensive checks to probe for harmful behaviors.

Phase 5: Application Integration & Deployment

This phase transitions trained models from development to production environments where they interact with users and integrate into business processes.

Critical risks include:

  • Prompt Injection Risk: A malicious user crafts an input that tricks the model into ignoring its original instructions, allowing them to bypass safety filters, extract confidential system prompts, or execute harmful commands.
  • Sensitive Data Disclosure Risk: Models inadvertently leaking sensitive or confidential information present in training data in response to a seemingly harmless query.

Essential controls include implementing robust input validation and sanitization filters to detect malicious prompt patterns, deploying real-time output filtering systems, and establishing monitoring for suspicious input patterns.

Phase 6: Monitoring & Continuous Improvement

Once deployed, Gen AI models require ongoing oversight as their performance can degrade over time and new risks can emerge.

Critical risks include:

  • Hallucination Risk: Generation of plausible but factually incorrect information, leading to poor decisions and reputational damage.
  • Model Collapse Risk: Progressive degradation from repeated training on AI-generated (self) content. Lack of steady supply of diverse, high-quality, human-generated training data can cause progressive loss of diversity, nuance, and accuracy, eventually rendering the model useless.

Essential controls include implementing Retrieval-Augmented Generation (RAG) to ground outputs in verifiable sources, establishing real-time quality monitoring, maintaining human-in-the-loop (HITL) review processes for critical outputs, tracking training data source to ensure diversity

4. Translating Theory into Practice

To unlock its true value, a comprehensive taxonomy cannot remain a theoretical document. It must be operationalized as a practical tool that integrates with existing risk management programs to strengthen governance and enable effective oversight. The following steps are key to translating this framework into practice:

A. Adopt Gen AI Lifecycle-Based Taxonomies

Move beyond superficial risk register updates and formally adopt process-centric taxonomies aligned with the Gen AI lifecycle, enabling risk identification at source points and targeted, effective control implementation. Embed trustworthiness by design making fairness, transparency, security, and accountability non-negotiable principles from project inception to deployment to ensure responsible and resilient AI adoption.

B. Embed Gen AI into ERM

Integrate the Gen AI risk taxonomy into existing ERM frameworks (e.g. COSO) rather than treating it as a separate stream. Align governance, strategy, and performance monitoring with the AI lifecycle to ensure risks are managed at source.

C. Engage the Three Lines of Defense

Adapt the traditional three lines model to reflect AI’s distributed and dynamic nature. The first line (business and technology teams) should own lifecycle controls and actively monitor risks. The second line (risk, compliance, and security) must provide tools, frameworks, and training while maintaining oversight. The third line (internal audit) should focus on providing independent assurance by assessing trustworthiness factors such as fairness, transparency, safety, and reliability across AI lifecycle phases.

D. Establish Cross-Functional AI Governance

Create a cross-functional AI Governance/Ethics Committee to oversee high-risk AI initiatives, approve policies, and ensure ethical alignment.

E. Align with Global Regulations and frameworks

Leverage lifecycle-based controls to meet the requirements of emerging regulatory frameworks such as the EU AI Act, NIST AI RMF, and other local regulations. Build globally consistent governance programs that simplify compliance across jurisdictions.

F. Invest in New Talent and Skills

Address expertise gaps through upskilling existing GRC professionals in data science fundamentals and AI ethics while recruiting technical subject matter experts who can bridge the gap between data scientists and business executives.

5. Conclusion:

Generative AI presents both the most significant emerging risk on the enterprise horizon and powerful new capabilities for enhancing risk management processes. The paradigm shift is undeniable: traditional frameworks built for deterministic systems cannot adequately govern probabilistic AI technologies. Organizations must think and act to implement comprehensive, lifecycle-based approaches to Gen AI governance.

The future belongs to organizations that can responsibly navigate this new frontier, securely deploying AI in alignment with strategic objectives and stakeholder expectations.

Leave a comment