Patrick Walsh

Using Application-Layer Encryption to Restrict Insider Access

Application-layer or application-level encryption (ALE) is a pattern for encrypting data before sending it to a data store in order to better protect it. This is very different from the typical protections most companies mean when they claim your data is “encrypted at rest and in transit,” which usually amounts to low-level transparent encryption (at the disk or database level) that does nothing to stop someone with access to a running service from reading the unencrypted data.

I’ve been evangelizing ALE for years and it’s finally having its moment. The new PCIv4 standards require ALE and many companies are recognizing that transparent encryption isn’t actually protecting their data in systems that are always running, so they’re demanding that their SaaS vendors do better. More and more, people are coming to us saying that they need to add ALE, which is what we’re all about here at IronCore Labs.

One of the major drivers for those asking for ALE is a worry about how many people within an organization can “peek” at their data.

When I worked at Oracle, we were often asked how many people had access to customer data. In the product line where I worked, we had DBAs, Ops people, support people, and some select folks in Engineering who could look at customer data. The support folks had to go through approval workflows and use an app that gated access and logged it, but the DBAs and some others could bypass that.

To be clear, I have no idea how many people had access, and I don’t think that access was abused. Those with access were trusted to use it as needed to debug issues – however, the unknown number and identities of the employees with access to data created friction with customers.

And it isn’t just customers who have these concerns. Governments also have these concerns. They worry about who can access their citizens’ data if that data is held by companies from other countries. That’s led to a slew of “data sovereignty” laws worldwide that constrain which country’s courts must approve access before a law enforcement agency can get at private data.

The ALE patterns for handling data sovereignty and preventing unauthorized employee access are largely the same, but the tests and criteria are different.

If you’re curious about data sovereignty and encryption, we cover that topic, along with the U.S./EU on-again, off-again data transfer saga, elsewhere.

Approaches

The most obvious way to handle these requirements is to use end-to-end encryption (E2EE). This is what we call a zero-trust data model (with regard to confidentiality). Using this approach, which you can build with our Data Control Platform, the hosting company can’t see any encrypted data unless that data is expressly shared with the company. That makes for the best security, but it makes it hard for the service to do anything with the data it holds other than hand it back to the client to process.

When a server-side application needs to be able to see the data to function and provide value, we get into a world of server-side encryption and building systems where we limit access to data even when administrators otherwise have full access to disks and databases. In the course of adding this level of security, the data becomes significantly harder for hackers to steal, too. Our SaaS Shield Platform helps companies add and manage this functionality.

The most common causes of breaches are misconfigurations and stolen credentials, but with the patterns we detail below, even if a data store is accidentally made public or if an admin’s browser cookies are stolen, the attacker can’t see the protected data in a meaningful form.

The same is true for many network breaches and application vulnerabilities, like SQL injections. It isn’t impossible for an attacker to get at the data, but they generally have to compromise a company much more thoroughly, breaching multiple systems instead of just one.

So how does it work?

1. The application uses ALE

The application encrypts the data that needs extra protection (not necessarily all data and fields) before sending it to a database, object store, search service, or other storage.
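To make that concrete, here’s a rough sketch in Python of encrypting a single sensitive field before it goes to the data store. This is illustrative only: the `cryptography` package, the `db.save` call, and the field names are assumptions, not any particular product’s API, and key handling is deliberately glossed over here (it’s the subject of the next steps).

```python
# Minimal sketch of application-layer encryption for one field.
# Assumes the `cryptography` package and a `db` handle you supply;
# where the key comes from is covered in the following steps.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_field(key: bytes, plaintext: str, aad: bytes) -> bytes:
    """Encrypt one sensitive field with AES-256-GCM before storing it."""
    nonce = os.urandom(12)                        # unique nonce per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), aad)
    return nonce + ciphertext                     # keep the nonce with the ciphertext

def store_user(db, key: bytes, user_id: str, ssn: str, city: str):
    record = {
        "id": user_id,
        "ssn": encrypt_field(key, ssn, aad=user_id.encode()),  # protected field
        "city": city,                                           # left in the clear
    }
    db.save(record)   # the data store only ever sees ciphertext for `ssn`
```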

2. The encryption keys are unexportable and inaccessible even to admins

It all hinges on encryption keys that administrators can’t see or save (export). This is the default mode for AWS’ Key Management Service when backed by a Hardware Security Module (HSM). The key is generated in a hardware-protected environment and is only ever used there. Administrators can see information about the key but not the key itself. It can’t be exported.
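A hedged sketch of that pattern using AWS KMS via boto3 is below. The key alias is a placeholder: the root key stays inside KMS (backed by the HSM) and is never exported; the application only ever handles short-lived per-object data keys, and only the wrapped copy of each data key is stored.

```python
# Sketch of envelope encryption against an HSM-backed AWS KMS key (boto3).
# The root key never leaves KMS; the plaintext data key lives only in memory.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ID = "alias/ale-root-key"   # placeholder alias for an HSM-backed key

def encrypt_with_envelope(plaintext: bytes) -> dict:
    dk = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    nonce = os.urandom(12)
    ct = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    # Persist only the *encrypted* data key; the plaintext key is discarded.
    return {"edek": dk["CiphertextBlob"], "nonce": nonce, "ct": ct}

def decrypt_with_envelope(blob: dict) -> bytes:
    # Only principals allowed by the key policy can ask KMS to unwrap the data key.
    dk = kms.decrypt(CiphertextBlob=blob["edek"])["Plaintext"]
    return AESGCM(dk).decrypt(blob["nonce"], blob["ct"], None)
```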

Alternatively, and similarly, data can be protected using a key held by a third party. This can be the case, for example, when a SaaS vendor allows a customer to hold their own encryption keys (HYOK) provided the keys are never shared with the vendor and are required to encrypt and decrypt the customer’s data. This is necessary for the data sovereignty use case where the third party is either the customer or a company that is solely subject to the laws of a particular nation.

When the vendor holds the keys, the KMS admin could add themselves as a permitted user for the key. Consequently, there should be guards to detect if this happens and other layers to prevent it in the first place. For example, make the KMS admin different from the data store admin so a hacker or malicious insider would need to compromise two people or services instead of one to get to the unencrypted data.
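One form such a guard might take is a periodic check of the key policy that alerts if anyone other than the expected application role can use the key. Here’s a hedged sketch with boto3; the key ID, the expected role ARN, and the alerting hook are all placeholders for illustration.

```python
# Sketch of a guard that watches a KMS key policy for unexpected decrypt access.
# The key ID and the expected principal ARN are placeholders.
import json
import boto3

kms = boto3.client("kms")
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"
EXPECTED_PRINCIPALS = {"arn:aws:iam::123456789012:role/ale-encryption-service"}

def unexpected_key_users() -> set:
    policy = json.loads(
        kms.get_key_policy(KeyId=KEY_ID, PolicyName="default")["Policy"]
    )
    suspicious = set()
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if stmt.get("Effect") == "Allow" and any(
            a in ("kms:Decrypt", "kms:*") for a in actions
        ):
            principals = stmt.get("Principal", {}).get("AWS", [])
            principals = [principals] if isinstance(principals, str) else principals
            suspicious |= set(principals) - EXPECTED_PRINCIPALS
    return suspicious   # a non-empty set should trigger an alert or an incident
```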

3. Access to APIs that can use the key(s) is strictly limited to only the application

Ideally, there is a single service that is able to access the information required to make use of keys. The main application calls this service, and there should be several layers of access control on that communication, starting with authentication credentials available to the calling application, but not to humans. The encryption service can be further restricted to only communicating with the application so that admins, for example, can’t even reach out to it (consider a service mesh to help with this).
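As a toy illustration of that first layer, here’s a sketch of an encryption-service endpoint that refuses any caller that can’t present the application’s service credential. The header name, the environment variable, and the reuse of the `decrypt_with_envelope` helper from the earlier sketch are all assumptions; in practice you’d layer mTLS and/or a service mesh policy on top of this rather than rely on a shared token alone.

```python
# Toy sketch of an encryption service that only the application can call.
# The service credential is provisioned to the app's environment, not to humans;
# network policy or a service mesh should further restrict who can even reach it.
import base64
import hmac
import os
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
SERVICE_TOKEN = os.environ["ENC_SERVICE_TOKEN"]   # available only to the app

@app.post("/decrypt")
def decrypt():
    presented = request.headers.get("X-Service-Token", "")
    if not hmac.compare_digest(presented, SERVICE_TOKEN):
        abort(403)                                 # humans and other services are refused
    blob = request.get_json()
    # fields arrive base64-encoded over JSON in this sketch
    parts = {k: base64.b64decode(blob[k]) for k in ("edek", "nonce", "ct")}
    plaintext = decrypt_with_envelope(parts)       # the KMS helper sketched earlier
    return jsonify({"plaintext": plaintext.decode("utf-8")})
```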

4. Make sure the application’s business logic protects access

With the above, you set up a situation where humans don’t have access to keys or services that are able to decrypt data. Now access to that data must go through one or more permissioned applications, which know how to use the decryption keys. These applications can then use domain-relevant business logic to determine who can see what.

In the case of a SaaS company that may sometimes need access to customer data to help troubleshoot problems, there should be business logic where the customer must first expressly allow that access. Any access granted this way should have an expiration so it turns off automatically after X days. The business logic may be more sophisticated, for example, taking into account the geographic region of a support rep relative to a customer.
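A sketch of what that gate might look like is below. The grant record, its field names, and the seven-day window are placeholders; the point is simply that decryption only happens after an explicit, time-boxed customer approval, optionally narrowed by geography.

```python
# Sketch of a business-logic gate for support access to customer data.
# The grant record and its fields are placeholders for illustration.
from datetime import datetime, timedelta, timezone

GRANT_WINDOW_DAYS = 7   # the "X days" from the text; use whatever the contract says

def support_access_allowed(grant: dict | None, support_rep_region: str) -> bool:
    if grant is None:
        return False                                  # customer never approved access
    granted_at = grant["granted_at"]                  # tz-aware datetime saved with the grant
    if datetime.now(timezone.utc) - granted_at > timedelta(days=GRANT_WINDOW_DAYS):
        return False                                  # approval expired automatically
    if grant.get("required_region") and grant["required_region"] != support_rep_region:
        return False                                  # optional geographic restriction
    return True
```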

5. Make sure access to KMS configs is carefully controlled

The details for how to connect to a KMS and access a key are as sensitive as the keys themselves. That means these configuration details can’t simply be tracked in a database, but must instead be handled in a way that prevents humans from seeing them.

In IronCore’s SaaS Shield solution, for example, we use end-to-end encryption to encrypt all configurations in the browser before sending them to a server. Only the encryption service is empowered to decrypt these configurations.

6. Add contractual promises about code

There is a degree of trust that must be placed in a vendor no matter what security is used. Even with end-to-end encrypted apps, you have to trust that the vendor isn’t subverting the encryption routines or modifying the client-side code to secretly exfiltrate data. The code is the biggest point of trust in these systems. Therefore, it’s essential to only work with trustworthy vendors and then to get them to contractually commit to terms that forbid backdoors, among other things.

We’re not lawyers, but here are some ideas for terms to negotiate with your software provider:

  • The following fields will be considered sensitive and will therefore be protected by ALE: …
    • Henceforth known as Protected Data
  • Access to Protected Data will only be possible through the application.
    • That means there will not be any copies of the unencrypted data anywhere on disk, including in databases, backups, queues, logs, search services, and so on.
    • Transparent encryption, such as disk encryption, is not sufficient to meet this requirement. The data must be encrypted before going to a data store.
  • Employees in the organization will never be able to access plaintext data.
    • OPTIONAL: Unless the customer expressly grants access to the vendor for an X-day window by explicitly granting that access through the app.
  • Even in the case of a compelled access request, protected data can only be produced in encrypted form and the vendor has no ability to produce a key and no functionality that allows them to export unencrypted data.

Compelled access scenario

If the vendor is subject to the laws of a country that can compel changes to code, there is no way to stop a government from forcing a company to write a backdoor that exports data.

In the U.S., whether or not the government has this ability is a gray area. The only case we know of where the FBI tried to force a company to add a backdoor was against Apple; the FBI demanded that Apple build one, and Apple objected to the request on several grounds (including the First Amendment). The case went to court briefly before the FBI dropped it. In a separate case in Brooklyn, a magistrate judge ruled that the government could not compel Apple to make the changes. The government initially appealed but then dropped the case.

For now, it doesn’t seem that the FBI can compel a company to undermine its own security and betray its customers’ trust, possibly destroying its business. This is something the U.S. government has not pressed, as it likely prefers the current murkiness to the risk of the courts clarifying that it doesn’t have these powers.

Today, anyone facing such a request from the U.S. government should absolutely fight it and is likely to win. Free help would be available from organizations like the ACLU and EFF if needed.

Each country should be evaluated individually, though. For example, it would not surprise me to learn that China compels Chinese companies to add backdoors to their products – certainly, that is what Western governments have alleged about Huawei and TikTok.

Summary

It’s difficult to engineer systems that leave the admins and ops people unable to access some data in unencrypted form, but it is absolutely possible provided the builder of the system is writing code in good faith. This challenge can be made far easier using products that specialize in this use case, such as the product suite from IronCore Labs, which is used by leading SaaS companies to provide these sorts of guarantees to their customers.

IronCore Labs has a comprehensive platform for keeping data application-layer encrypted while also keeping it usable and searchable (full text search and AI vector search are both supported over encrypted data). All of the hard bits around key management, hold your own key (HYOK) patterns, and protecting access to keys are handled by the IronCore system.

And all sensitive data stays local to the vendor’s infrastructure, meaning that IronCore isn’t added to the list of potential parties that could see data.

With care, the right tools, and contractual obligations, companies can in fact shield slices of customer data from their own employees – not because those employees are untrustworthy, but because every person with access is another potential route to a breach, and minimizing where access can occur is a core part of security by design.