Patrick Walsh

Talk to the business about AI security risks

If you’re on a security or engineering team and you’ve read our Securing Gen-AI White Paper, you probably understand that the addition of Gen-AI into applications introduces whole new surface areas and attack vectors that greatly increase the risk of a breach. But how do you help others in your organization understand these problems before hackers are actively exploiting them?

Everyone wants to use new AI tech, but even people in the industry aren’t broadly aware of the issues that ride along. It may not be understood in your organization that addressing these problems requires attention, budget, and prioritization.

Here are three key things stakeholders in your business need to understand:

1. Increase in Avenues of Attack

AI takes instructions in natural language, text, images, and even via video. Malicious instructions to the AI can be hidden in any email, Word document, website, or other data repository.

Virus scanners only look at executable files or embedded macros in Office documents, but now all data from photos to spreadsheets are potentially “executable” from an AI perspective. The number of avenues for an attack has just grown significantly.

2. Increase in Types of Attacks

At the same time, these new Gen-AI systems are vulnerable to whole new classes of vulnerabilities that traditional perimeter and application security solutions don’t cover. For years we’ve understood what types of attacks we face, but now we face many more with little experience seeing and responding to these new types of attacks.

Examples of these new attacks have names like “prompt injection” and “vector embedding inversion,” but the rush to adoption of Gen-AI tech has far outpaced the adoption of tools that mitigate these problems.

3. Need to Address and Prioritize Early

These risks can be mitigated given attention, time, budget, and a willingness to adopt tools that are fairly new (and usually from startups).

A “strike force” should be assembled across security, engineering, and operations groups to understand where you’re vulnerable and how to address it. This review needs to look at both internal projects and at 3rd-party software that your company uses that hold confidential data and has (or plans to add) AI capabilities.

Example: new shadow data

Most infrastructures using Gen-AI have recently added shadow copies of all their data into indices used for natural language search. These are held in vector-capable databases in the form of text embeddings that can be inverted (hacked) back into their original form, but which are required in order to chat with an LLM about your private data.

With new avenues of attack AND new types of attacks, the risk surface in software that uses AI has increased exponentially. This represents an entirely new and under-protected class of data that holds tremendously valuable information. And in all likelihood, any data tracking and tagging software is blind to it.

Conclusion

Step one is getting educated. Step two is starting the conversation with the technical and non-technical stakeholders so everyone understands the new vulnerabilities in software. Some companies will wait until hackers are repeatedly making headlines with hacks via AI channels. But those who watch the ethical proof-of-concepts and academic papers demonstrating these attacks already realize how incredibly powerful these new tools are in the hands of hackers.

And it’s always harder and more expensive to fix security down the road than it is to build it in from the start. Get your organization’s footing right when it comes to AI. It can be used to great effect and it can be used in a high-risk mode or in a manageable risk mode. Convincing the business stakeholders that it’s important now is critical.

P.S. We can help with that shadow AI data

IronCore Labs is the only company that can protect that shadow AI data from being a liability by encrypting it, but still allowing it to be used for search. Without the key, the data is useless. With it, the data is fully usable similar to if there was no protection at all. It’s fast, reliable, inexpensive, and easy to use. Take a look at Cloaked AI to learn more.