Reimagining RISK: Considering Generative AI in ERM
Years ago, when I was working as a hardware design verification engineer, one of the biggest challenges I faced was a classic test engineering dilemma: How do you test for something you can't even imagine?
In chip verification, every functional specification has to be checked for potential errors that might occur within a complex design. Writing test cases is straightforward until you realize you're limited to testing only the problem scenarios you've thought of. The real challenge is verifying functionality in the scenarios no one anticipated: in other words, discovering the unknown unknowns.
There's a similar dilemma in Enterprise Risk Management (ERM) because managing risk is about preparing for threats you can't always see coming. In traditional ERM systems, identifying potential risk scenarios relies heavily on domain expertise and structured brainstorming sessions. But human cognition has its limits. We can only conceive of risks based on our experience, and in a rapidly evolving world, relying solely on what's been seen before can leave organizations dangerously exposed to novel threats.
This is where generative AI enters the picture.
Generative AI, and Large Language Models (LLMs) in particular, can synthesize complex scenarios from unstructured data, helping ERM teams break free of these cognitive constraints. Instead of capturing only the risks that have been anticipated, AI models can generate nuanced, multi-factor scenarios that blend external trends, historical data, and creative "what-if" prompts. This capability lets us explore risk possibilities beyond what a single person, or even a group of experts, might foresee.
The core power of generative AI lies in its use of natural language to express these scenarios. Instead of manually constructing risk events and mitigation strategies with templates or predefined forms, users can interact with the AI in everyday language, exploring questions like, "What if a major supplier suddenly halts operations?" or "What happens if a competitor disrupts the market with a new technology next quarter?" The AI can then take these inputs, elaborate on them dynamically, and present back complex scenarios that account for a broader set of variables, such as supply chain interdependencies, regulatory impacts, and financial exposures.
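One way this interaction might be wired up is to translate the user's plain-language "what-if" into a structured elaboration prompt before sending it to an LLM. The sketch below is illustrative only: the `build_scenario_prompt` helper and the dimension list are assumptions for this article, not part of any specific ERM product or LLM API.

```python
# Illustrative sketch: composing a structured elaboration prompt from a
# plain-language "what-if". The dimensions are example variables only.

DIMENSIONS = [
    "supply chain interdependencies",
    "regulatory impacts",
    "financial exposures",
]

def build_scenario_prompt(what_if: str, dimensions=DIMENSIONS) -> str:
    """Compose a prompt asking the model to elaborate a risk scenario."""
    bullet_list = "\n".join(f"- {d}" for d in dimensions)
    return (
        "You are assisting an enterprise risk management team.\n"
        "Elaborate the following what-if into a detailed risk scenario:\n"
        f'"{what_if}"\n'
        "Address each dimension explicitly:\n"
        f"{bullet_list}\n"
        "Conclude with two candidate mitigation strategies."
    )

prompt = build_scenario_prompt(
    "What if a major supplier suddenly halts operations?"
)
```

The resulting string would then be passed to whichever model the organization uses; the point is that the user only ever supplies the everyday-language question.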
By aligning this capability with established ERM frameworks like COSO or ISO 31000, generative AI has the potential to transform the risk identification phase by surfacing unseen or low-probability scenarios that might otherwise be overlooked. This is a crucial enhancement, as traditional ERM relies heavily on predefined risk categories that are often rigid and backward-looking.
While generative AI is a powerful complement, it's important to note that these scenarios should be seen as exploratory tools that require validation and interpretation by human experts, particularly in complex or highly regulated environments.
To make things even more powerful, modern generative AI systems can utilize retrieval-augmented generation (RAG) to seamlessly blend organizational knowledge into scenario generation. In practice, this means that when a user asks the system to consider a risk scenario, the AI can simultaneously draw on relevant compliance documents, internal policies, and past incident reports to inform its outputs.
Unlike conventional compliance models, which are rule-based and static, RAG can dynamically retrieve and integrate compliance text, making AI-generated risk scenarios more contextually aligned. However, the value of RAG's insights depends on the quality and structure of the organization's data. If compliance data is incomplete, outdated, or fragmented, RAG's effectiveness will be limited. Additionally, because compliance requirements can be complex and sometimes ambiguous, RAG must be used in a transparent manner, where its outputs are auditable and easily traced back to specific source documents.
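The traceability requirement above can be sketched in a few lines: keep document IDs alongside the text so every retrieved passage carries its provenance. The toy corpus and keyword-overlap retriever below are stand-ins; a production RAG system would use embedding models and a vector store, but the retrieval-with-provenance pattern is the same.

```python
from collections import Counter
import math

# Toy corpus standing in for an organization's compliance documents.
# Keeping document IDs with the text lets every retrieved passage be
# traced back to its source, supporting auditability.
DOCS = {
    "policy-017": "Suppliers must be vetted annually and single-source "
                  "supply arrangements require board approval.",
    "incident-2023-04": "A key supplier outage halted shipments and supply "
                        "for nine days.",
    "reg-guide-supply": "Regulators require disclosure of material supply "
                        "chain disruptions within 30 days.",
}

def _vector(text: str) -> Counter:
    """Bag-of-words term counts (a crude stand-in for embeddings)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2):
    """Return the top-k (doc_id, score) pairs most similar to the query."""
    q = _vector(query)
    scored = [(doc_id, _cosine(q, _vector(text)))
              for doc_id, text in DOCS.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

hits = retrieve("supplier halts supply shipments")
```

For the supplier-disruption query, the past incident report scores highest, and its document ID travels with the result, so the AI's output can cite exactly which source informed it.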
Imagine an AI that, when you brainstorm a new risk scenario, instantly pulls in guidance from specific compliance mandates, aligning the risk scenario with real-world requirements. If a proposed mitigation strategy might conflict with a compliance regulation, the system could flag this in real time, preventing the need for post-brainstorming reviews. This alignment makes the process of risk scenario ideation both more comprehensive and more grounded in reality, reducing the chances of costly oversights.
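A minimal version of that real-time flag can be expressed as a rule check over the proposed mitigation text. The rule IDs and keyword lists below are invented for illustration; a real system would use semantic matching (or an LLM) rather than literal substrings, but the flag-during-ideation flow is the same.

```python
# Illustrative sketch of real-time compliance flagging during ideation.
# Rule IDs, summaries, and conflicting terms are invented examples.
COMPLIANCE_RULES = [
    {"id": "DATA-RET-01",
     "summary": "Customer records must be retained for seven years.",
     "conflicting_terms": ["delete customer records", "purge customer data"]},
    {"id": "VENDOR-OUT-02",
     "summary": "Critical functions may not be outsourced without notice.",
     "conflicting_terms": ["outsource critical"]},
]

def flag_conflicts(mitigation: str, rules=COMPLIANCE_RULES):
    """Return IDs of rules whose terms appear in the proposed mitigation."""
    text = mitigation.lower()
    return [r["id"] for r in rules
            if any(term in text for term in r["conflicting_terms"])]

flags = flag_conflicts(
    "Purge customer data older than one year to cut storage costs"
)
```

Surfacing the matching rule IDs while the scenario is still being drafted is what lets the review happen inside the brainstorm instead of after it.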
The integration of generative AI into risk management doesn't just transform the backend; it fundamentally reshapes how users experience ERM solutions. Traditional ERM tools often feel rigid and disconnected, requiring users to manually enter risk data and navigate complex forms. With generative AI, the interface can become a dynamic space for creative, strategic thinking.
It's important to note that generative AI complements, rather than replaces, human expertise. While human input will always be central to ERM, generative AI serves as an augmentation tool, expanding the pool of conceivable risks and supporting deeper exploration.
In today's ERM, once risks are identified, mitigation strategies are usually assessed through scenario analysis and cost-benefit evaluation. Generative AI can suggest novel strategies as a starting point, but final selection and implementation must be guided by human judgment, considering organizational context, culture, and risk appetite.
By integrating generative AI in this way, organizations can strengthen the risk response process without compromising the core tenets of effective risk governance.
Incorporating generative AI into ERM should be viewed as an enhancement rather than a replacement. While AI can provide valuable insights and improve efficiency, the foundational principles of risk management established by traditional frameworks should remain in place. A balanced approach ensures organizations are well-equipped to navigate both current and emerging risks.
The value of introducing generative AI into the sphere of ERM is in helping organizations address what they can't yet conceive. Just as in chip verification, where the greatest challenge was not testing known failure points but identifying the unimagined edge cases, the challenge in risk management is to surface those unknown unknowns.
By introducing new capabilities that enhance how risks are conceived, documented, and aligned with compliance, generative AI can take ERM solutions further as dynamic, forward-thinking platforms. With the right UX, these AI-enhanced systems won't just capture risks; they'll redefine how we think about risk itself, offering new levels of insight, agility, and strategic value.
Matt leads Planorama Design, a product acceleration firm for enterprise software teams. With nearly 30 years of engineering experience, he helps CTOs and VPs of Engineering structure requirements, validate AI feasibility, and ship better software faster.