Managing AI Expectations Through the Lens of Cybersecurity Trust Modeling
How much did you trust Clippy? Remember the generative, LLM-based AI support-assistant chatbot for Microsoft Office? That was a joke, by the way; Clippy was nothing of the sort. Nobody quite cared for the virtual assistant, nor did we trust it very much. If we managed our expectations and viewed today's AI the way we once viewed that paper clip, would we trust it as much as we do?
Trust is a core concept of cybersecurity risk management and is discussed heavily in NIST Special Publication 800-39, Managing Information Security Risk. Trust can be assessed from two primary angles: trust in security functionality and trust in security assurance. We establish trust through adequate security functionality baked into the application, and we establish assurance through confidence that the security functionality is effective.
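To make those two angles concrete, here is a minimal sketch of how they might combine into a single trust decision. The 0-1 scores, the names, and the multiplication rule are my own illustrative assumptions; NIST SP 800-39 prescribes no such formula.

```python
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    """Toy model of the two trust angles described above.

    The scores and the combining rule are illustrative
    assumptions, not anything prescribed by NIST SP 800-39.
    """
    functionality: float  # how complete are the built-in security features? (0-1)
    assurance: float      # how confident are we that those features work? (0-1)

    def trust_score(self) -> float:
        # Strong features with no evidence they work (or vice versa)
        # should not yield high trust, so the angles multiply
        # rather than average.
        return self.functionality * self.assurance

# Hypothetical scores for two applications
legacy_assistant = TrustAssessment(functionality=0.2, assurance=0.9)
modern_chatbot = TrustAssessment(functionality=0.8, assurance=0.3)
print(f"Legacy assistant: {legacy_assistant.trust_score():.2f}")
print(f"Modern chatbot:   {modern_chatbot.trust_score():.2f}")
```

The point of the multiplication is that neither angle can compensate for the other: impressive functionality with low assurance still yields low trust.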
Does Generative AI Understand Cyber?
If we take these concepts of establishing trust to assess risk and apply them to today's AI, where do we land? Several months ago, a local pack leader asked me to give a cybersecurity talk to a group of Cub Scouts and Boy Scouts. I agreed, and I decided to ask ChatGPT to write a quick kids' story about cybersecurity that I could read aloud to teach some cyber lessons. ChatGPT covered staying safe online, downloading files carefully, and protecting passwords.
In the story, the main character bragged to the other characters about the strength of their password, but in the end, they revealed the actual password to their friends.
In other words, ChatGPT preached the importance of password confidentiality and then had its own character ignore that advice. One of the kids noticed. After raising his hand, he asked, "Why did the character share their password with all their friends?"
Well, little buddy, the answer is that today's AI is more like Clippy than the grand expectations we as a community have set for it.
Generative AI Self-Reflections
It is clear that Generative AI has limitations and can sometimes be completely incorrect or contradict itself. I decided to ask ChatGPT what AI's limitations are. Here's the response.
- Common sense reasoning: AI struggles with understanding implicit knowledge and everyday situations.
- Contextual understanding: AI may have difficulty grasping nuanced context in complex situations.
- Ethical decision-making: AI lacks human values and empathy, leading to potential biases and unfair outcomes.
- Creativity and innovation: AI struggles to replicate human-level creativity and produce original works.
- Adaptability and generalization: AI may have difficulty applying knowledge to new or unseen situations.
- Explainability and transparency: AI systems can operate as "black boxes," making it challenging to understand their decision-making process.
- Robustness and security: AI systems are vulnerable to adversarial attacks and may produce unreliable results.
- Data quality and bias: AI performance is heavily influenced by the quality and representativeness of training data, leading to biased outcomes.
I'll add these to the list:
- Confident tone: AI speaks confidently even when its results are inaccurate, leaving humans little room to sense or detect doubt.
- Assumption of accuracy: AI presents its answers as truth but provides no sources to back them up.
Returning to a previous thought: we establish trust when an application has built-in security features and we are confident those features are effective. By that standard, and straight from the source, we may want to be careful with the trust we place in AI.
Use Generative AI with a Reasonable Level of Skepticism
Instead of establishing trust, we have freely given it away because we believe AI systems are larger than life, capable, and accurate by default. In cybersecurity, blindly giving away trust can significantly harm an organization.
When ChatGPT was new and all the rage, nobody talked about the warnings. OpenAI's warnings alert the user that the system may provide inaccurate information, but they are unclear about how much. So, I decided to dig a bit deeper.
Here's my conversation with ChatGPT:
Me: What percentage of trust can I place in the accuracy of chatgpt?
ChatGPT: In general, you can place a moderate level of trust in the accuracy of ChatGPT's responses, particularly for general knowledge questions, explanations of concepts, or assistance with common tasks. However, for critical or specialized topics, it's advisable to verify the information from additional sources.
Me: Use a percentage to represent your words "you can place a moderate level of trust".
ChatGPT: I'd say you can place approximately 70-80% trust in the accuracy of ChatGPT's responses for general knowledge questions, explanations of concepts, or assistance with common tasks. However, for critical or specialized topics, it's advisable to verify the information from additional sources.
Me: Provide a level of trust expressed as a percentage when asking about critical or specialized topics.
ChatGPT: For critical or specialized topics, you may want to place around 50-60% trust in the accuracy of ChatGPT's responses. For greater confidence, verify such information from multiple reliable sources or consult with subject matter experts.
My takeaways:
- ChatGPT seems to inherently avoid directly answering some questions unless "forced" to.
- ChatGPT may leave out information that reflects poorly on the system.
- By its own estimate, ChatGPT will be incorrect about half the time when queried on advanced subject matter and concepts.
Where else in our lives do we accept 50% accuracy rates? Getting on a plane? Driving a car? Performing a Google search? Moving forward, users of generative AI systems should approach them with reasonable skepticism.
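To put that figure in perspective, here is a quick back-of-the-envelope calculation. The independence assumption is mine, and real conversations are not independent trials, but it shows how per-answer unreliability compounds over a multi-question session.

```python
# Chance that every answer in a session is correct, assuming each
# answer is an independent trial at a fixed per-answer accuracy.
per_answer_accuracy = 0.55  # midpoint of ChatGPT's own 50-60% estimate

for n in range(1, 6):
    print(f"{n} question(s): {per_answer_accuracy ** n:.0%} chance all answers are correct")
```

At 55% per answer, a five-question session has only about a 5% chance of being entirely correct, which is why skepticism should grow with the length of the conversation.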
Recommendations to Better Establish Trust with Generative AI
Here are some recommendations for establishing better trust with generative AI systems.
- Do not use AI, guide AI: Arrive at the answers you need through a series of guiding questions; do not assume a single question will produce the best response (see the sketch after this list).
- Doubt AI: Validate AI responses through research and alternative sources of knowledge.
- Protect your prompts: Prompts containing confidential information may be retained and used by the AI provider. Are you giving away secrets?
- Manage expectations: Understand and acknowledge that AI can be wrong, even while sounding confident and presenting information as correct.
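As a concrete example of guiding rather than simply using AI, here is a minimal sketch of a guided, multi-turn exchange using the OpenAI Python client. The model name, the specific questions, and the idea of ending with an explicit self-check are my own illustrative choices, not a prescribed workflow.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Guide the model toward an answer with a series of questions,
# then doubt it by asking for an explicit confidence estimate.
guiding_questions = [
    "What are the most common password-storage mistakes in web apps?",
    "Of those, which one do attackers exploit most often?",
    "Walk through how that exploit works, step by step.",
    "Rate your confidence in the previous answer as a percentage, "
    "and list any claims I should verify against outside sources.",
]

messages = []  # keep the full history so each question builds on the last
for question in guiding_questions:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

The last question bakes the "doubt AI" recommendation into the conversation itself: the model's self-reported confidence is not authoritative, but it flags claims worth verifying elsewhere.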
As AI becomes more available to the masses, we should learn critical lessons from this AI epoch. Many have blindly given away their trust due to AI's novelty and wow factor, yet we continue to hear about real-world AI use going awry. Collectively, we must establish trust with AI by using these systems with a more careful and calculated approach.