Nobody’s perfect.
I’m probably one of the most imperfect people out there. I boarded the subway to go to the mall to do some shopping last week. I looked in my purse while I was sitting in the train car, and I realized I didn’t have my wallet! Thankfully I had the key to my apartment, and I found my wallet sitting on my kitchen counter when I got back home. Whew! No harm done, just some wasted time. I got back on the subway, did my shopping, and carried on with my day.
I’m a fallible human being, and so are the engineers who work for AWS. Human beings working in networks, from a small business’ LAN to a cloud platform that drives your applications, will inevitably make silly mistakes. One such silly, innocent mistake was discovered by UpGuard on January 13th. As written on UpGuard’s blog:
What was found in the AWS credential leak
“On 13 January at approximately 11am, the UpGuard Data Leaks detection engine identified a GitHub repository with potentially sensitive information that had been uploaded half an hour earlier. Shortly after noon an analyst began reviewing the contents of the repository. After assessing the contents to establish the scope of the data, its degree of sensitivity, and the identity of the owner, the analyst notified AWS Security at 1:18pm. By 4pm, the repository was no longer publicly accessible, and at 4:45pm AWS Security replied to the initial notification email saying that they had taken action.”
Here’s what the GitHub repository contained in its 954MB of data: AWS resource templates (used to create cloud instances), log files from the latter half of 2019, AWS key pairs, private encryption keys, API keys and other machine identities, bank statements, and AWS customer correspondence. Many of the documents were labelled “Amazon Confidential,” and one of the files was named “rootkey.csv.” Scary stuff, eh? It’s a good thing that UpGuard was monitoring GitHub and caught the leak in time to report it to AWS. Unfortunately, cyber attackers could have downloaded the repository before it was removed! I wouldn’t be surprised to find some or all of that data being sold on illicit dark web markets!
The UpGuard blog explains further:
“Timestamps in the logs indicate they were generated throughout the second half of 2019. Of greater concern, however, were the many credentials found in the repository. Several documents contained access keys for various cloud services. There were multiple AWS key pairs including one named ‘rootkey.csv,’ suggesting it provided root access to the user’s AWS account. Other files contained collections of auth tokens and API keys for third party providers. One such file for an insurance company included keys for messaging and email providers.”
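Part of why detection engines can catch leaks like this quickly is that much credential material follows recognizable formats. As a rough illustration only (this is not UpGuard’s actual method, and the patterns below are simplified heuristics), AWS access key IDs typically begin with “AKIA” followed by 16 uppercase alphanumeric characters, and PEM-encoded private keys announce themselves in their header line. A minimal sketch in Python:

```python
import re

# Illustrative heuristics for credential material often found in leaks.
# Not an exhaustive or official list of secret formats.
PATTERNS = {
    # AWS access key IDs commonly look like "AKIA" + 16 uppercase
    # alphanumeric characters.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # PEM-encoded private keys start with a distinctive header.
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return a list of (label, match) pairs for suspicious strings."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

# A file like "rootkey.csv" would trip the access-key pattern.
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID.)
sample = "AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE\n"
print(scan_text(sample))
```

Real scanners combine many more patterns with entropy checks and context, but the core idea is the same: leaked keys are machine-readable, so machines can find them, often faster than attackers do.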
Not a time for human error
If cyber attackers are selling the private data and machine identities that were in the GitHub repository, that’s a malicious act. But the breach itself wasn’t a malicious act at all. Some of the breached data included government ID and a driver’s license that matched the identity of an AWS engineer’s LinkedIn profile. Oops! I’m glad that the engineer’s name isn’t being reported in the media. But that must have been a very embarrassing mistake for them.
With the introduction of human beings into any system comes the risk of human error. It’s like the time I confused my salt shaker with my sugar dish and made myself an unpleasant-tasting cup of tea, except the consequences for your organization can be much, much worse. The AWS engineer put sensitive data in the wrong place, a place that cyber attackers could access. And because very sensitive machine identities and encryption keys were included, AWS-driven applications were exposed to potential cyber attacks.
Many still haven't automated their DevOps workflow
Machine identities are all over your DevOps workflow, in many different places, with new machine identities having to be generated each time you create a new cloud instance, virtual machine, container, or server. Old certificates and other identities need to expire and be revoked, and new identities must be generated and deployed in their place. The sheer number of machine identities that your DevOps applications use at any given time can be overwhelming. And human beings will make mistakes and put things in the wrong place. It’s inevitable.
Your DevOps system must be prepared for fallible human beings to make terrible mistakes! The more machine identities your DevOps workflow uses, the more complicated it becomes for people to keep track of everything. That’s why organizations can improve the security of their applications by properly implementing automated processes.
If an automated process can replace manual labour, it can reduce the risk of human error. If your machine identity management is automated, your DevOps teams gain greater visibility into all of your machine identities as they expire, are revoked, and are regenerated. Data loss prevention systems can often spot a data breach long before a network administrator could on their own.
This AWS leak obviously affected one specific cloud platform, but these sorts of leaks can happen to any cloud platform, from any vendor, through any provider. So the story of this AWS engineer’s embarrassing mistake is a lesson for all organizations with cloud-driven DevOps systems: reduce the risk of human error as much as you can, and implement machine identity management and data loss prevention technologies to catch mistakes made by people and computers alike.
Now, don’t mind me while I put an RFID tag on my wallet!
Telling DevOps to automate security is one thing, but a question we always get is: will it slow me down? It doesn’t have to. Venafi is security designed with developers in mind.