Sean is a lead security software engineer at Secureworks. He blogs about application security here and is a Beer Farmer. I recently spoke with Sean about DevSecOps and why certificates might still be challenging for developers to implement properly.
What is a Beer Farmer?
Sean: There were a bunch of us on Twitter chatting about InfoSec things, and we felt that the emphasis in InfoSec can sometimes be a bit too serious. We wanted to introduce some light humor, some satire, to help break down the barrier, because from the outside InfoSec can be intimidating, with all these people saying big things and the fear of being attacked and experiencing a breach. We wanted to make it feel more accessible, so we decided to form a group, and we all love beer. That's how the name came about: ‘The Beer Farmers’.
How do you define DevSecOps?
Sean: I define it as integrating security within a DevOps model; it’s automating where possible and building security into the development and deployment of a product. In the past, this was a very manual process: you'd have someone running a tool, manually verifying the results, and then reporting to a team that they couldn’t release the code because of specific vulnerabilities. DevSecOps automates that, so the tooling is built into the process that releases the software, with automated checks and mechanisms to prevent any high-risk releases from happening. So you might set a bar that says any vulnerability found with a CVSS rating of nine will block the release until it has been addressed.
It also requires tighter collaboration between development and security personnel. Some developers need to gain some security knowledge, and some security people have to gain some development knowledge, especially if you want your security team to integrate into the development pipeline. They might even need to do some scripting, in Python for example.
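As a rough illustration of the kind of release gate Sean describes, the Python sketch below fails a pipeline stage when any scan finding meets a CVSS threshold. The file name scan_results.json and its findings/cvss structure are assumptions standing in for whatever your scanner actually emits, not the output of any particular product.

```python
# Hypothetical release gate: block the pipeline when a scan finding meets the bar.
# scan_results.json and its structure ({"findings": [{"id": "CVE-...", "cvss": 9.8}, ...]})
# are assumptions; adapt the parsing to your scanner's real output format.
import json
import sys

CVSS_BLOCK_THRESHOLD = 9.0

def main(path: str = "scan_results.json") -> int:
    with open(path) as fh:
        findings = json.load(fh).get("findings", [])

    blockers = [f for f in findings if f.get("cvss", 0.0) >= CVSS_BLOCK_THRESHOLD]
    for finding in blockers:
        print(f"BLOCKING: {finding['id']} (CVSS {finding['cvss']})")

    # A non-zero exit code is what CI systems use to fail the release stage.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as a step in the release job; the build proceeds only when the script exits zero.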
Why do people believe that open source is more secure, and why is this a fallacy?
Sean: I love open source; this is not a dig at open source, but I see many people assume that just because you can see the code, it is more secure. Technically, yes. In the real world, no. It's a problem of volume and resources: many open source projects are run by individuals, as opposed to an organization that has money to put things right. So you have this open source code that is viewable by everyone, but maybe nobody is actually looking at it, and the person developing it is developing it in a vacuum. It could be riddled with security vulnerabilities that nobody knows about. A company, on the other hand (and obviously not all companies are the same, and they suffer from security vulnerabilities too), has a reputation to uphold and money to throw at these things, for example to buy scanning tools, which can be very expensive. I’m not saying all companies do this, but many will. I don't view a product as secure simply because it's open source; there are a lot of factors to consider.
And you just can't rely on humans to mitigate the risk around open source; that's impossible. Sonatype produces the annual State of the Software Supply Chain Report, which shows that the number of people using open source components, and the volume of those components in production software, are staggeringly huge and continuing to grow. So manual intervention is just not going to work; you're going to have to use tools. There are a number of vendors that provide products that scan open source components for known vulnerabilities. But there's another problem with open source: it's all very well scanning at the point of entry, but sometimes vulnerabilities are found after a component has been included in the software, so you need continuous scanning.
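For illustration only, here is one way such a check could be scripted against the public OSV vulnerability database (osv.dev); the package name and version are placeholders, and a real pipeline would re-run this on a schedule across the full dependency manifest rather than for a single component.

```python
# A sketch of checking one open source component against the public OSV database.
# The package name and version below are illustrative; loop over your real
# dependency list (and repeat on a schedule) for continuous scanning.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the vulnerabilities OSV currently lists for one component version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    for vuln in known_vulns("requests", "2.19.1"):
        print(vuln["id"], vuln.get("summary", ""))
```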
Why are (TLS/SSL) certificates still a challenge for DevOps?
Sean: People see certificates and freak out; they don't know what to do. I think it's intimidating to many. When I was starting out, I often wasn’t sure what to do; I’d look online and try to work it out from there, and it was difficult. You don't know what a certificate is, what it does, or why it's there. But when you start breaking it down to its simplest forms, it starts making sense; it becomes easier for people to understand and easier for them to work with certificates.
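In that spirit, a short Python sketch can break a certificate down to its simplest parts: a subject, an issuer, and a validity window. It assumes the third-party cryptography package and a placeholder server.pem file on disk.

```python
# A demystifying sketch: load a PEM-encoded certificate and print the handful of
# fields that matter most. Uses the third-party "cryptography" package;
# "server.pem" is a placeholder path.
from cryptography import x509

with open("server.pem", "rb") as fh:
    cert = x509.load_pem_x509_certificate(fh.read())

print("Subject:   ", cert.subject.rfc4514_string())  # who the certificate identifies
print("Issuer:    ", cert.issuer.rfc4514_string())   # which CA vouches for that identity
print("Not before:", cert.not_valid_before)          # start of the validity window
print("Not after: ", cert.not_valid_after)           # expiry date
```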
What are your recommendations to teams who find managing machine identity certificates a challenge and know that they are introducing risk into their products through poor protection?
Sean: Certificate management is really important. The first thing is to know what certificates you have and where they are. If you don’t have that, you're going to have certificates expire and you're going to have issues. You might also find that teams are reusing certificates simply because they don't understand how they work. They might turn off things like hostname validation because they are using a certificate and it's not working; they flip that switch and it suddenly works. As the number of certificates grows, there comes a point where manually managing them is just not going to happen for many organizations; the sheer volume makes it infeasible. So automation and tooling are really important.
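A minimal sketch of that kind of automation, using only the Python standard library and a hypothetical host inventory, might simply report how many days each certificate has left; note that hostname validation stays on, as it is by default.

```python
# A sketch of automated certificate expiry checking. The INVENTORY list is
# hypothetical; real tooling would pull hosts from a CMDB or discovery scan.
import socket
import ssl
import time

INVENTORY = ["example.com", "example.org"]  # placeholder hosts
WARN_DAYS = 30

def days_until_expiry(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()  # hostname validation stays enabled
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

if __name__ == "__main__":
    for host in INVENTORY:
        remaining = days_until_expiry(host)
        status = "RENEW SOON" if remaining < WARN_DAYS else "ok"
        print(f"{host}: {remaining:.0f} days remaining ({status})")
```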
What’s the most common myth about certificates and how should security professionals address it?
Sean: I have seen many vendors online say: “Use our certificates, because they'll provide better encryption.” And that's not true. Certificates provide identity. The actual encryption is set up during the protocol handshake, where the server and client negotiate the protocol version and cipher suite to use. That's where the encryption comes into play, and it happens after the certificates have been exchanged and validated.
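A small Python sketch makes that distinction visible: the certificate supplies the identity, while the handshake yields the protocol version and cipher suite that actually encrypt the traffic. The hostname below is just an example.

```python
# The certificate carries identity; the negotiated protocol and cipher suite,
# agreed during the handshake, carry the encryption. Hostname is illustrative.
import socket
import ssl

HOST = "example.com"

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Identity (certificate subject):", tls.getpeercert()["subject"])
        print("Protocol version:", tls.version())  # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher()[0])    # what encrypts the traffic
```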
Another one is that Let's Encrypt is not secure. That's incorrect. Let’s Encrypt follows the same compliance and audit requirements as any other CA, so there's no reason to believe that Let’s Encrypt is less secure than any other CA out there.
What’s your advice to a team or organization that experiences a breach? What should they do? What should they not do?
Sean: Firstly: be prepared. I've seen organizations in the middle of a breach with no idea of who's doing what; it's wasteful and it doesn't solve the issue. Do things like tabletop exercises. Make sure that you have a clear chain of command during the breach: who's responsible for running the whole response and who you call in for each type of breach. This is probably the biggest thing: be open and transparent. Don't try to hide it; that's where an organization comes off a lot worse. I've seen organizations do this, and they're still in the media. And I've seen organizations that have been open and transparent, and they're no longer in the media; it was a blip on the radar for them. Breaches are not great, but they happen; it doesn't mean you as an organization have failed. Take it as a learning experience. Don't assume everyone now thinks you're terrible. Your reaction to a breach and its outcomes are far more important than the breach itself.
Will DevSecOps live forever?
Sean: For the foreseeable future, I think so, but I wouldn’t be surprised if something else comes along in a few years. That's the way we've seen things; they constantly evolve, especially in this industry.
See what Venafi is doing to integrate security into DevOps seamlessly with HashiCorp. HashiCorp CTO Armon Dadgar explains.
Related posts
- DevSecOps: Minimizing New Attack Surfaces for DevOps [Interview with Mitchell Ashley]
- What Is Your DevSecOps Manifesto? [Interview with Larry Maccherone]
- US DoD Reference Design for DevSecOps [Interview with Nicolas Chaillan]
- DevSecOps, SecDevOps, or RainbowMonkeyUnicornPony? [Interview with DJ Schleen]