Artificial intelligence systems like GenAI have a massive opportunity to improve the overall quality of human life, but not without risk. While AI is still in its infancy, we're starting to realize how important it is to nurture the technology so it evolves safely. The National Institute of Standards and Technology (NIST) has long been a reliable voice on managing information systems and on identifying and reducing the risks of their development and use. Recently, I dug into NIST's documentation on AI risk management to understand what standards are being established for AI systems. Already, some AI risk management failures are appearing in the news, and it's not a good look for the sustainable growth of AI.
Let's start with the good news: we don't have to completely rethink our approach to AI. We can apply foundational cybersecurity concepts to identify and manage risk in this new AI context. The NIST AI 100-1 Risk Management Framework (RMF) discusses at length the concepts of trust, accountability, verification, and the CIA triad. While familiar, these concepts require a unique lens when applied to the AI Development Lifecycle (AIDLC), including testing, evaluation, verification, and validation (TEVV). Ultimately, the foundation of low-risk AI systems starts with humans, meaning a culture of ethics, responsibility, incentive, and accountability is essential to establishing trust. So, how are we doing?
A news article recently published in Business Insider discusses some tough times for OpenAI, one of the leading AI companies. OpenAI is accused of illegally using copyrighted material to train its proprietary systems and later deleting the data sets, leaving them unavailable. Reading between the lines of the article, we see a clear breakdown of some of the most basic AI risk management concepts laid out in the AI RMF, covered below. In summary, the following risk management practices have broken down at the company.
In addition to the claims above, the company no longer employs the authors of the training data sets. OpenAI claims these data sets are no longer part of the most recent versions of ChatGPT, that they were deleted for non-use in 2022, and that it is willing to share the remaining training data with the necessary parties.
I've mapped this incident to the following control recommendations from the NIST Generative AI Profile (NIST AI 600-1).
With any new technology, there will be growing pains. In hindsight, the above is a clear breakdown of AI risk management. While we may not have risk management for AI completely figured out, what if we applied our best understanding of it and built a better world anyway?
I expect the AI RMF (NIST AI 100-1) to evolve. With that evolution, we'll learn lessons from our mistakes and improve the framework with an established, clear set of protections for reducing AI risk. Through my reading of the AI RMF, I've identified the following top priorities for managing risk across the AIDLC and when using AI-based applications like GenAI.
Trust is the result of an entity behaving as expected. If we expect AI systems to be trustworthy, they require a unique set of parameters implemented during their creation and maintenance. The following are key foundations of trust that must be incorporated throughout the entire AIDLC.
These features don't appear out of thin air. They require the developing company to intentionally adopt a culture of integrity and due diligence. Humans are the baseline for the creation of AI systems and applications. When a company continually operates from these principles, trust can be established in the system from inception to operation.
NIST's AI RMF discusses at length the importance of baking risk management into the AI development lifecycle. At each stage of the cycle, the builders must weigh their decisions with a risk-based approach; if they don't, risks trickle down as the product evolves into a publicly consumable application. In the lessons learned above, the protection, retention, and tracking of training data is a significant risk management requirement. The following graphic shows the lifecycle phases, the activities in each, and the responsible parties.
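As a rough illustration only (my own sketch, not NIST's lifecycle or terminology), here is what tracking risk activities per phase might look like as a minimal checklist. The phase names, activities, and owners below are hypothetical examples.

```python
# Illustrative sketch only: a minimal per-phase risk checklist for an AI development lifecycle.
# Phase names, activities, and owners are hypothetical examples, not the official NIST phases.
from dataclasses import dataclass, field

@dataclass
class PhaseRisks:
    phase: str
    owner: str
    activities: list[str] = field(default_factory=list)

lifecycle = [
    PhaseRisks("Plan & Design", "product and risk leads",
               ["define intended use and misuse cases", "set initial risk tolerance"]),
    PhaseRisks("Collect & Process Data", "data engineering",
               ["verify copyright and licensing of training data",
                "document data provenance and retention requirements"]),
    PhaseRisks("Build & Validate Model", "ML engineering / TEVV",
               ["run bias and robustness tests", "record evaluation results"]),
    PhaseRisks("Deploy & Operate", "operations",
               ["monitor for drift and incidents", "preserve audit trails"]),
]

for p in lifecycle:
    print(f"{p.phase} ({p.owner}):")
    for activity in p.activities:
        print(f"  - {activity}")
```

Even a lightweight artifact like this forces each phase to name an owner, which is where accountability starts.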
NIST specifically calls out challenges regarding human-based creation, influence, and training of AI systems. Training data and baseline models for artificially intelligent systems are based on human notions and influence. However, their non-human application creates a dissonance that makes synthesizing risk difficult. Additionally, humans are biased by nature, shaped by social norms and conditions, beliefs, convictions, experiences, and influence. Almost unavoidably, these risk-increasing ingredients are found throughout current AI systems. Risk also increases when AI applications are used by humans, and a unique set of risks is exposed when AI-to-AI interactions are present.
Most concerning, the trustworthiness and reliability of AI systems begin with human integrity, which currently goes unverified. NIST's AI RMF calls out the need for AI system builders to begin the design and ultimate creation of AI systems from a moral decision to do no harm and protect humankind. As we all know, not everyone holds these values closely, which increases the potential for risky AI systems.
Risk tolerance is the amount of residual risk an organization is willing to accept after applying risk treatment. AI risk tolerance is still in its infancy; it is being established now and needs room to grow, and we are already learning that lesson the hard way (see the article above). Often, tolerance is a decision an organization makes about what best serves its continued operations. Risk tolerance can also come from laws and regulations issued by governing bodies, but until those are published, AI risk tolerance decisions are left to business leaders. Let's hope they do the right thing.
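To make the definition concrete, here is a minimal sketch of scoring residual risk against a tolerance ceiling. This is my own illustration; the 1-5 scales, control effectiveness values, and threshold are hypothetical, not prescribed by NIST.

```python
# Illustrative only: toy residual-risk scoring against a tolerance threshold.
# Scales, control effectiveness values, and the threshold are hypothetical.

def residual_risk(likelihood: int, impact: int, control_effectiveness: float) -> float:
    """Inherent risk (likelihood x impact) reduced by the portion mitigated by controls."""
    inherent = likelihood * impact  # e.g., 1-5 scales give an inherent score of 1-25
    return inherent * (1.0 - control_effectiveness)

RISK_TOLERANCE = 6.0  # hypothetical ceiling set by business leadership

risks = {
    "training-data provenance not tracked": residual_risk(4, 5, 0.30),
    "training data deleted without a retention policy": residual_risk(3, 5, 0.50),
}

for name, score in risks.items():
    verdict = "within tolerance" if score <= RISK_TOLERANCE else "exceeds tolerance; treat further"
    print(f"{name}: residual risk {score:.1f} ({verdict})")
```

Whatever the scoring scheme, the point is the same: someone has to choose the ceiling deliberately rather than discover it after an incident.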
The NIST AI 100-1 risk roadmap paints a picture of growing and evolving the concepts discussed above. Moving forward, expect additional guidance on risk tolerance, human factors, explainability and interpretability, expanded TEVV (test, evaluation, verification, and validation) efforts, and much more. See the chart below for specific details.
We need industry leaders to step in and demonstrate due diligence by committing to a risk-based approach for developing, using, and managing AI and GenAI systems. When leaders adopt these standards, they set the bar for emerging companies to meet, ultimately protecting the greater good of information, systems, and humanity.
Sources: