Security Risks with RAG Architectures
RAG is all the rage, but how do you make it secure?
RAG Defined
What is Retrieval-Augmented Generation (RAG)?
Infographic eBook on Securing Gen-AI
The ultimate guide to AI security: key AI security risks, vulnerabilities and strategies for protection. 61% of companies use AI, but few secure it. This white paper covers the key AI risks being overlooked from LLMs to RAG.
Why use RAG?
RAG solves three big problems with LLMs
Retrieval-augmented generation (RAG) is an architecture that solves several problems when using Large Language Models (LLMs):
- “Hallucinated” answers
- LLMs are amazing at answering questions with clear and human-sounding responses that are authoritative and confident in tone. But in many cases, these answers are plausible sounding, but wholly or partially untrue.
- RAG architectures allow a prompt to tell an LLM to use provided source material as the basis for answering a question, which means the LLM can cite its sources and is less likely to imagine answers without any factual basis.
- Stale training data
- LLMs are generally trained on large repositories of text data that were processed at a specific point in time and are often sourced from the Internet. In practice, these training sets are often two or more years old. More recent news, events, scientific discoveries, public projects, etc., will not be included in that training data, which means the LLM can’t answer questions on recent information.
- RAG architectures allow for more recent data to be fed to an LLM, when relevant, so that it can answer questions based on the most up-to-date facts and events.
- Lack of private data
- LLMs are trained on public data from across social media, news sites, Wikipedia, and so forth, which helps them to understand language and to answer questions (sometimes accurately) based on public domain knowledge and opinions. But this limits their knowledge and utility. For an LLM to give personalized answers to individuals or businesses, it needs knowledge that is often private.
- RAG architectures allow non-public data to be leveraged in LLM workflows so organizations and individuals can benefit from AI that is specific to them.
Am I hallucinating?
In Practice
Example RAG augmented prompt
Security Disaster
Top 5 security risks with RAG
1 Proliferation of private data
One of the primary concerns with RAG is the introduction of a new data store called a vector database. This involves new infrastructure and new types of data, which are often copies of private data that’s well protected elsewhere. Vector databases store embeddings, which are derived from the private data, but which can be reversed back to near perfect approximations of the original data via inversion attacks.
Being relatively new, the security offered by vector databases is immature. These systems are changing fast, and bugs and vulnerabilities are near certainties (which is true of all software, but more true with less mature and more quickly evolving projects). Many vector database companies don’t even have controls in place to stop their employees and engineering teams from browsing customer data. And they’ve made the case that vectors aren’t important since they aren’t the same as the source data, but of course, inversion attacks show clearly how wrong that thinking is.
Vector embeddings and vector databases are an underprotected gold mine for attackers.
2 Oversharing and access mismatches
RAG workflows are unintentionally exposing overshared documents in repositories like SharePoint. Worse still, data from domain-specific applications, like CRMs, ERPs, HR systems, and more, are all finding their way into vector databases. These databases don’t have the domain-specific business logic required to control who can see what, which leads to massive oversharing.
3 Simple data discovery
AI systems are great for surfacing information to the people who need it, but they’re also great at surfacing that information to attackers. Previously, an attacker might have had to reverse engineer SQL tables and joins, then spend a lot of time crafting queries to find information of interest, but now they can ask a helpful chat bot for the information they want. And it will be nicely summarized as well. This essentially decreases the time required to effectively respond to an incident and will make incidents more severe, even when the perpetrator is unsophisticated.
4 LLM log leaks
Prompts from users, and especially those with augmented data included, can be incredibly revealing. This sensitive data flows through systems that can be compromised or that may have bugs. These systems may by default keep logs of prompts and responses. Looking at OpenAI specifically, we’ve seen account takeovers, stolen credentials, clever hidden injections, and OpenAI bugs that leaked chat histories across users. Some customers now have control over retention times for these records, but using private data in RAG workflows that utilize third-party LLMs still presents risks. Even if you are running LLMs on systems under your direct control, there is still an increased threat surface.
5 RAG poisoning
Many people today are aware of model poisoning, where intentionally crafted, malicious data used to train an LLM results in the LLM not performing correctly. Few realize that similar attacks can focus on data added to the query process via RAG. Any sources that might get pushed into a prompt as part of a RAG flow can contain poisoned data, prompt injections, and more. A devious employee might add or update documents crafted to give executives who use chat bots bad information. And when RAG workflows pull from the Internet at large, such as when an LLM is being asked to summarize a web page, the prompt injection problem grows worse.
Mitigating RAG risks
Six steps to protect sensitive data in RAG architectures
RAG brings significant enhancements to response relevance, reduces the occurrence of hallucinations, and allows LLMs to provide customized responses based on private data. However, it is crucial to acknowledge that the integration of RAG into applications introduces new risks for organizations, particularly in relation to sensitive data.
AI systems in general operate better with access to more data – both in model training and as sources for RAG. These systems have strong gravity for data, but poor protections for that data, which make them both high value and high risk.
To effectively combat these security risks and ensure the responsible implementation of RAG, organizations should adopt the following measures:
- Robust data protection with encryption
- Specifically, with application-layer encryption that allows the vectors to be utilized, even while encrypted, but only for someone with the right key.
- IronCore Labs’ Cloaked AI is inexpensive and dead simple to integrate, with a growing number of integration examples with various vector databases.
- Zero-retention LLM chat histories
- Minimize the number of places where sensitive data can be found by turning off any retention of logs for prompts and responses unless in a dedicated and secure system.
- Confidential models
- A number of startups are running LLMs – generally open source ones – in confidential computing environments, which will further minimize the risk of leakage from prompts. Running your own models is also an option if you have the expertise and security attention to truly secure those systems.
- Minimize data
- Many systems have custom logic for access controls. For example, a manager should only be able to see the salaries of people in her organization, but not peers or higher-level managers. But access controls in AI systems can’t mirror this logic, which means extra care must be taken with what data goes into which systems and how the exposure of that data – through the chat workflow or presuming any bypasses – would impact an organization. Broad access controls, such as specifying who can view employee information or financial information, can be better managed in these systems.
- Reduce agency
- Many startups and big companies that are quickly adding AI are aggressively giving more agency to these systems. For example, they are using LLMs to produce code or SQL queries or REST API calls and then immediately executing them using the responses. These are stochastic systems, meaning there’s an element of randomness to their results, and they’re also subject to all kinds of clever manipulations that can corrupt these processes. Consider allow lists and other mechanisms to add layers of security to any AI agents and consider any agent-based AI system to be high risk if it touches systems with private data.
- Security best practices
- These are still software systems and all of the best practices for mitigating risks in software systems, from security by design to defense-in-depth and all of the usual processes and controls for dealing with complex systems still apply and are more important than ever.
Watch the Cloaked AI Demo Embedding security Cloaked AI Ask us questions