Rethinking the broken complexity of cloud security
The security industry wants you to believe you need 1,001 different products to secure your cloud applications. But that pitch ignores how fundamentally anti-security complexity itself is. At the end of the day, complexity is security’s greatest enemy.
So the next time some security vendor wants you to add a Complexity Analysis product to manage and understand the complexity of your software setup, and then wants you to add a Complexity Analysis Monitoring product to make sure the first one is doing its job, think twice.
A recent research note from Gartner on the first things a company should do for security when moving to the cloud referenced more than 50 acronyms in its glossary, telling you to get CIEM, CNAPP, SASE, ZTNA, and on and on.
But the solution to your security problem is not adding more complexity or monitoring or management of the monitoring.
Start with the simple question: what are you protecting? For most, it’s the data.
Then ask the question: how are you protecting the data? If the answer is that you have 1,001 alphabet soup products that watch and monitor and react, then you’re doing it wrong.
Reframe the question to ask how you are proactively protecting your data. How do you make it so someone gaining access to your database is a relative non-event? Answer: if the data is meaningfully encrypted so that the attacker can only see garbage bits, then there’s no data breach. No data breach means no breach disclosure obligation; you fix the problem and move on.
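The “garbage bits” idea can be shown in a few lines. This is a toy sketch using a one-time pad built from Python’s standard library, purely to illustrate that a leaked record is useless without the key; a real deployment would use an authenticated cipher such as AES-GCM from a vetted library, with keys held in a KMS. All names here (`encrypt`, `record`) are illustrative, not from any product.

```python
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    # XOR one-time pad: the key is random, the same length as the data,
    # and must never be reused. XOR is its own inverse, so this function
    # both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

record = b"ssn=123-45-6789"
key = secrets.token_bytes(len(record))  # held by the app or a KMS, never the DB
stored = encrypt(record, key)           # what actually lands in the database

# A stolen database dump exposes only ciphertext...
assert stored != record
# ...while the application holding the key can still read the data.
assert encrypt(stored, key) == record
```

The design point is the separation: the database holds ciphertext, the application (or key service) holds the key, so compromising the database alone yields nothing.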
Consider the two most common causes of data breaches in the cloud:
- Stolen credentials – someone gets the AWS key or an admin login and gets into your infrastructure. They can see what your admins can see. That’s game over in one shot unless the data is encrypted, which also means protecting it from your own insiders. Bonus points: that’s a win for privacy regulations, too.
- Misconfigurations – according to the Ponemon Institute, cloud misconfigurations are tied for the number one cause of data breaches. That’s a stunning statistic, and hard to understand until you witness the sheer complexity of most infrastructures today. With complexity come mistakes that are hard to spot, and they can strike at any time, with any change that brings unintended consequences. Sure, you can add layers of complexity that monitor for such things, but by then it’s already too late. If the data is encrypted, though, you’re safe. It’s another non-event. Fix it and move on.
In case you think these problems only apply to unsophisticated companies, Microsoft recently misconfigured an internal customer support database, five Elasticsearch clusters that were left accessible to the outside world, compromising the data of 250 million customers. The Microsoft Security Response Team said, “Misconfigurations are unfortunately a common error across the industry. We have solutions to help prevent this kind of mistake, but unfortunately, they were not enabled for this database.” They should have just encrypted the data.
If you have application bugs like SQL injection vulnerabilities, the outcome is effectively the same. The data is gone unless you have meaningful data protection like application-layer encryption in place.
In the on-prem days, the mentality was that of a digital fortress: start with the walls (firewalls, in this case) and worry about the inside later.
Modern infrastructure is a very different space. The “walls” are riddled with holes, both by design and through misconfigurations like accidentally public S3 buckets or files. The guards patrolling those walls (the intrusion prevention systems and web application firewalls and such) are themselves subject to vulnerabilities that open up holes in the wall. A single user’s stolen credentials can render the entire perimeter moot.