AI Security Risks Are Real -- Here's 12 Questions to Ask Your Software Vendor
As businesses integrate AI-powered features into their software, security remains a top concern. Vendors are promising cutting-edge AI capabilities, but how well do they protect your data? Before trusting any AI-driven solution, it’s essential to ask the right security questions.
Here are 12 critical questions to help you evaluate your software vendor’s AI security practices and protect your business from AI-related risks.
1. What AI capabilities does your software use?
Start with the basics — understand whether the software vendor uses large language models (LLMs), trains their own models, or uses any AI-powered automation, predictive analytics, or Retrieval-Augmented Generation (RAG) workflows. Knowing the scope of AI involvement helps determine potential security risks.
2. Who hosts the AI models you use?
Does the vendor rely on third-party AI providers like OpenAI, Google, or Anthropic? Or do they use self-hosted models? Hosting impacts data privacy, compliance, and how much control they have over security measures.
3. What sensitive data is handled within the AI system and how is it protected?
Does the vendor send personally identifiable information (PII), private messages, private documents, financial or health information to LLMs? If so, is that data first redacted, anonymized, and/or encrypted before being processed?
4. What is your log retention policy?
Prompts to AI systems and their responses are typically logged. With RAG workflows, these logs can contain full copies of sensitive documents and other data. It’s extremely important to understand how these are handled and who has access. Ask these questions:
- How long are logs retained?
- Where are they stored?
- Who has access to them?
Vendors should implement strict access controls and short or even zero-duration log retention policies to reduce the risk of data leaks.
Additionally, it isn’t just the LLM systems that log prompts and responses. Various prompt firewalls, monitoring, and QA tools are also logging prompts and responses. Be sure the vendor identifies these and the log retention and access control stories for each.
5. Is my data used for AI model training?
Free-tier AI models often use customer data for training. Ensure the vendor explicitly states:
- Whether they allow AI providers to use your data for training
- Whether they use your data to train shared models used by their other customers
- If there’s an option to opt out of training data usage
6. Do you use vector embeddings, and are they encrypted?
Many AI-powered apps store embeddings (numerical representations of data) in vector databases or other vector-enabled storage. These embeddings can be reversed, which means they can be restored to something close to their original input, leading to data leakage through inversion attacks. Ask the vendor the following:
- Do you use vector embeddings?
- What data is being converted to vector embeddings?
- Are you encrypting the vector embeddings before they’re stored?
7. How do you prevent prompt injection attacks?
Prompt injection is one of the biggest security threats to AI applications, since these injections can undermine prompt instructions and change expected behavior of AI systems. No software completely stops this sort of attack, but prompt injections can be made more difficult by prompt firewall software that detects and blocks injections. Responsible software vendors should have one in place and should let you know which one they’re using.
8. What protections are in place against indirect prompt injections?
Attackers can manipulate documents, emails, or database records to sneak malicious instructions into RAG workflows. Ask whether the vendor has protections against instructions showing up in stored user-generated content using a tool that scans such content for attacks before storing it or making it searchable by AI systems.
9. How do you handle AI-generated code execution risks?
Some AI features generate SQL queries, scripts, or API calls. If these are automatically executed, they could delete databases or expose private data. Ensure vendors have:
- Read-only sandboxes without access to the Internet
- Strict permission controls to prevent high-risk actions
10. What AI security tools have you employed?
AI security is still evolving, and no single tool covers all risks. Vendors should leverage solutions like:
- Prompt firewalls (e.g., Lakera, Protect AI)
- AI governance platforms (e.g., OneTrust AI, Credo AI)
- Vector encryption tools (e.g., IronCore Labs’ Cloaked AI)
- Model testing tools (e.g., HiddenLayer, Mindgard)
11. How do you test AI security vulnerabilities?
Vendors should actively test their AI models against adversarial attacks. Ask if they conduct:
- Red team testing for AI models
- Security audits on AI behavior
- Hallucination and misinformation detection
- Prompt and response quality measurements
12. How do you ensure AI governance and compliance?
Different AI applications fall under various risk categories in regulations like the EU AI Act. Vendors should:
- Communicate the EU AI Act risk classification for each AI-powered feature
- Provide governance reports
- Implement compliance tracking systems
Conclusion
Before enabling AI-powered features in your business software, make sure your vendor has strong security measures in place. By asking these 12 critical questions, you can evaluate whether your vendor is taking these new threats seriously and is doing enough to protect the data flowing through these AI systems.
We discuss all of these things in more detail in our white paper, which has illustrations of attacks, types of defenses, and recommended solutions for AI security. The most recent OWASP LLM Top 10 v2 is also out and is a good starting point for understanding the top threats to AI systems.

The ultimate guide to AI security
Infographics and guidance on handling key AI security risks and understanding vulnerabilities. 61% of companies use AI, but few secure it. This white paper covers the key AI risks being overlooked from LLMs to RAG.