Managing Risk for AI Systems and Applications

by Dustin Mooney | May 2024


Artificial intelligence systems like GenAI present a massive opportunity to improve the overall quality of human life, but not without risk. While AI is still in its infancy, we're starting to realize how important it is to nurture the technology so it evolves safely. The National Institute of Standards and Technology (NIST) has long been a reliable voice on managing information systems and on identifying and reducing the risks of their development and use. Recently, I dug into NIST documentation on AI risk management to understand what standards are being established for AI systems. Already, some AI risk management failures are appearing in the news, and it's not a good look for the sustainable growth of AI.

Let's start with the good news: we don't have to completely rethink our approach to AI. We can apply foundational cybersecurity concepts to identify and manage risk in this new AI context. The NIST AI 100-1 Risk Management Framework (AI RMF) discusses at length the concepts of trust, accountability, verification, and the CIA triad. While these concepts are familiar, we must apply them through a unique lens across the AI development lifecycle (AIDLC), including testing, evaluation, verification, and validation (TEVV). Ultimately, the foundation of low-risk AI systems starts with humans: a culture of ethics, responsibility, incentive, and accountability is essential to establishing trust. So, how are we doing?

The Biggest Players are Struggling 

A news article recently published in Business Insider discusses tough times for OpenAI, one of the leading AI companies. OpenAI is accused of illegally using copyrighted material to train its proprietary systems and later deleting the data sets, leaving them unavailable for review. Reading between the lines of the article, we see a clear breakdown of some of the most basic AI risk management concepts laid out in the AI RMF, covered below. In summary, the following risk management practices broke down for the company.

  • Maintaining the integrity of training data 
  • Implementing risk management throughout the AIDLC 
  • Retaining reproducible technical knowledge 
  • Using verifiable training data (theirs was closed-source, protected, and secret) 
  • Maintaining accountability 

In addition to the claims above, the company no longer employs the authors of the training data. OpenAI claims the data sets were removed from its most recent versions of ChatGPT, were deleted for non-use in 2022, and that it is willing to share the remaining training data with the necessary parties.

I've mapped this incident to the following control recommendations from the NIST AI RMF Generative AI Profile (NIST AI 600-1); a sketch of a provenance manifest follows the list.

  • GV-1.2-007 | Establish transparency policies and processes for documenting the origin of training data and generated data for GAI applications, including copyrights, licenses, and data privacy, to advance content provenance. (Risk areas: Data Privacy, Information Integrity, Intellectual Property.) 
  • GV-1.7-002 | Communicate decommissioning and support plans for GAI systems to AI actors and users through various channels and maintain communication and associated training protocols. 
  • MP-4.1-011 | Implement policies and practices defining how third-party intellectual property and training data will be used, stored, and protected. 
  • MP-4.1-017 | Use trusted sources for training data that are licensed or open source and ensure that the entity has the legal right for the use of proprietary training data. 
  • MG-2.2-002 | Document training data sources to trace the origin and provenance of AI-generated content. 
  • MG-3.1-007 | Review GAI training data for CBRN information and intellectual property; scan output for plagiarized, trademarked, patented, licensed, or trade secret material. 
  • MG-3.1-009 | Use, review, update, and share various transparency artifacts (e.g., system cards and model cards) for third-party models. Document or retain documentation for: training data content and provenance, methodology, testing, validation, and clear instructions for use from GAI vendors and suppliers, information related to third-party information security policies, procedures, and processes.  
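
To make controls like GV-1.2-007 and MG-2.2-002 concrete, here is a minimal sketch of how a team might record training-data provenance in a machine-readable manifest. The `DatasetRecord` fields and file layout are my own illustration, not a NIST-defined schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """One training dataset entry in a provenance manifest.

    Fields are illustrative; align them with your organization's
    documentation policy (see GV-1.2-007 and MG-2.2-002).
    """
    name: str
    source_url: str
    license: str                 # e.g., "CC-BY-4.0" or "proprietary-licensed"
    copyright_holder: str
    sha256: str                  # content hash so deletions/changes are detectable
    collected_at: str            # ISO 8601 timestamp of collection
    pii_reviewed: bool = False   # data-privacy review completed?
    retained: bool = True        # still held, or decommissioned?

def hash_file(path: str) -> str:
    """Hash dataset contents so later audits can verify integrity."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(records: list[DatasetRecord], path: str) -> None:
    """Emit a timestamped JSON manifest documenting every dataset."""
    manifest = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "datasets": [asdict(r) for r in records],
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
```

Had a manifest like this existed, the "deleted data sets" question above would be answerable: auditors could verify what data existed, under which license, and when it was removed.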

AI Risk Management Priorities 

With any new technology, there will be growing pains. In hindsight, the incident above is a clear breakdown of AI risk management. While we may not have risk management for AI completely figured out, what if we applied our best understanding of it and built a better world anyway?

I expect the AI RMF and NIST AI 100-1 to evolve. As they do, we'll learn from our mistakes and improve the framework with an established, clear set of protections for reducing AI risk. Through my reading of the AI RMF, I've identified the following top priorities for managing risk in the AIDLC and in the use of AI-based applications like GenAI.

Establishing and Maintaining Trust 

Trust is the result of an entity behaving as expected. If we expect AI systems to be trustworthy, they require a unique set of parameters implemented during their creation and maintenance. The following are key foundations of trust that must be incorporated across the entire AIDLC.

  • Valid and reliable results 
  • Safe, secure, and resilient 
  • Accountable and transparent 
  • Explainable and interpretable 
  • Fair with harmful bias managed 
  • Privacy enhanced 

These features don't appear out of thin air. They require the developing company to intentionally adopt a culture of integrity and due diligence. Humans are the baseline for the creation of AI systems and applications; when a company continually operates from these values, trust can be established in the system from inception to operation.
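
As one way a team might operationalize the list above, here is a minimal sketch of a release gate over the trustworthiness characteristics. The class, field names, and pass/fail logic are my own illustration, not an AI RMF artifact:

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessReview:
    """Per-characteristic attestation mirroring the list above.
    How each box gets checked (metrics, audits, red-teaming) is
    left to the reviewing organization."""
    valid_and_reliable: bool
    safe: bool
    secure_and_resilient: bool
    accountable_and_transparent: bool
    explainable_and_interpretable: bool
    harmful_bias_managed: bool
    privacy_enhanced: bool

    def unmet(self) -> list[str]:
        """Characteristics still unmet before release."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = TrustworthinessReview(
    valid_and_reliable=True, safe=True, secure_and_resilient=True,
    accountable_and_transparent=False,  # e.g., no public model card yet
    explainable_and_interpretable=True,
    harmful_bias_managed=True, privacy_enhanced=True,
)
if review.unmet():
    raise SystemExit(f"Release blocked; unmet characteristics: {review.unmet()}")
```

The point of a structure like this is that trust becomes an explicit, recorded decision rather than an assumption.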

AI Development Lifecycle 

The NIST AI RMF discusses at length the importance of baking risk management into the AI development lifecycle. At each stage of the cycle, builders must weigh their decisions from a risk-based approach; if they don't, risks trickle down as the product evolves into a publicly consumable application. In the lessons learned above, the protection, retention, and tracking of training data is a significant risk management requirement. The following graphic shows the lifecycle phases, the activities in each, and the responsible parties.

[Figure: NIST AI development lifecycle table showing phases, activities, and responsible parties]
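
To sketch what "baking in" risk management could look like in practice, here is a hypothetical phase-gate check. The phase names and activities paraphrase the lifecycle discussion above; they are not an official NIST schema:

```python
# Illustrative mapping of AIDLC phases to the risk activities that must
# be signed off before the next phase begins.
LIFECYCLE_GATES: dict[str, list[str]] = {
    "plan_and_design": ["define intended use", "set risk tolerance"],
    "collect_and_process_data": ["document provenance", "license review"],
    "build_and_use_model": ["record methodology", "bias evaluation"],
    "verify_and_validate": ["TEVV testing", "independent review"],
    "deploy_and_use": ["monitoring plan", "incident response plan"],
    "operate_and_monitor": ["drift monitoring", "decommissioning plan"],
}

def gate_check(phase: str, completed: set[str]) -> None:
    """Raise if any required risk activity for the phase is missing,
    so risk gaps cannot trickle down into later phases."""
    missing = [a for a in LIFECYCLE_GATES[phase] if a not in completed]
    if missing:
        raise RuntimeError(f"{phase}: unfinished risk activities: {missing}")

gate_check("collect_and_process_data", {"document provenance", "license review"})
```

A gate like this fails loudly in the phase where the risk was introduced, rather than letting it surface after release.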

Human Influence in Development and Use 

NIST specifically calls out challenges with human-based creation, influence, and training of AI systems. Training data and baseline models for artificially intelligent systems are built on human notions and influence, yet their non-human application creates a dissonance that makes synthesizing risk difficult. Additionally, humans are biased by nature due to social norms and conditions, beliefs, convictions, experiences, and influences. Almost unavoidably, these risk-increasing ingredients are found throughout current AI systems. Risk also increases when AI applications are used by humans, and a unique set of risks emerges when AI-to-AI interactions are present.

Most concerning, the trustworthiness and reliability of AI systems begin with human integrity, which currently goes unverified. The NIST AI RMF calls on AI system builders to start the design and creation of AI systems from a moral decision to do no harm and to protect humankind. As we all know, not everyone holds these values closely, which increases the potential for risky AI systems.

Risk Tolerance 

Risk tolerance is the amount of residual risk an organization is willing to accept after applying risk treatment. With AI still in its infancy, tolerance for AI risk is only now being established and needs room to grow; we are already learning this lesson the hard way (see the article above). Often, tolerance is a decision an organization makes about what best serves its continued operations. Risk tolerance can also come from laws and regulations issued by governing bodies, but until those are published, AI risk tolerance decisions are left to business leaders. Let's hope they do the right thing.
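
As a toy illustration of that definition, residual risk can be scored and compared against a stated tolerance. The scales, numbers, and `residual_risk` function here are made up for the example, not drawn from NIST:

```python
def residual_risk(likelihood: float, impact: float, control_effectiveness: float) -> float:
    """Score risk on 0-1 likelihood/impact scales, then reduce the
    inherent risk by the fraction mitigated through risk treatment."""
    inherent = likelihood * impact
    return inherent * (1.0 - control_effectiveness)

TOLERANCE = 0.10  # maximum residual risk the organization will accept

# Example: likely (0.6), high-impact (0.8) copyright exposure, with
# controls (license review, provenance manifest) judged 70% effective.
risk = residual_risk(likelihood=0.6, impact=0.8, control_effectiveness=0.7)
print(f"residual risk = {risk:.2f}, tolerance = {TOLERANCE}")
if risk > TOLERANCE:
    print("Above tolerance: apply further treatment or escalate for acceptance.")
```

Real programs use richer scoring than a two-factor multiply, but the comparison against a documented tolerance is the step that makes the decision accountable.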

Where is AI Risk Management Headed?

The NIST AI 100-1 roadmap paints a picture of growing and evolving the already established concepts discussed above. Moving forward, expect additional guidance on risk tolerance, human factors, explainability and interpretability, expanded TEVV (test, evaluation, verification, and validation) efforts, and much more. See the chart below for specific details.

[Figure: NIST AI RMF roadmap]

AI Leadership by Example 

We need industry leaders to step up and demonstrate due diligence by committing to a risk-based approach for developing, using, and managing AI and GenAI systems. Adopting these standards as leaders will set the bar for emerging companies to meet, ultimately protecting the greater good of information, systems, and humanity.

Sources:  

  1. NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models 
  2. NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile 
  3. NIST AI 100-1, Artificial Intelligence Risk Management Framework 
  4. NIST AI Risk Management Framework Playbook 
  5. Business Insider article on OpenAI 
