Security is an important part of every application, but it can often get shuffled to the back of a project to-do list. To developers, security can sometimes seem like a hassle, or worse: a roadblock.
It’s frustrating when a project makes it almost all the way through the development process, only to snag on a security review that requires the development team to redesign and rewrite. And so often, it can feel like the security team is issuing strict and unnecessary mandates that don’t take into account the value developers are trying to bring to customers.
What can be done to bridge this gap in understanding?
We spoke with three cybersecurity experts about ways to help security teams and developers work better together. Common themes that emerged were the benefits of developers and security team members learning about each other’s areas of expertise, and the increased adoption of automation to integrate security checks throughout the development life cycle.
Victoria Geronimo is a product manager for security and compliance at 2nd Watch, a cloud consulting company based in Seattle. She helps client companies assess their DevSecOps strategies and works with security and development teams to find ways to integrate better.
You have seen a lot of interactions between security teams and developers. Would you say that it’s a difficult relationship to manage sometimes?
Yes, it is. They both sort of understand each other, but they don’t speak the same language, and they don’t always have the exact same goals for success. That’s where a lot of the friction comes in.
Developers are primarily concerned with automation, reliability and speed, continuous improvement, having a collaborative culture, and really getting the best code out. Security, however, is not as concerned with, “How quickly can we get this code out?” They’re concerned with, “How do we make sure this code is as secure as possible?”
Developers also want to have secure code, but what comes in conflict is [the workflow]. When they’re going through the test stage, for instance, they’ll send the code over to security, and security will run all the different tests to check the code. By the time they get results back from security, the developer team has already moved on to a new product or a new set of code. Their heads are no longer in the same space they were in when they were developing the code, and they don’t always understand what the vulnerabilities mean and how to fix them.
“By the time they get results back from security, the developer team has already moved on.”
And conversely, security doesn’t always give recommendations in developer speak. Security can see that there’s a vulnerability because of the test they ran, but they don’t fully understand what that means in coding speak, and they don’t understand how to tell the developer team how to remediate it. They just kind of say, “You have to remediate this because our checker found it.” So that throws a wrench in the DevOps pipeline, which is meant to be quick and agile.
In your work with clients, is a lot of it bridging this gap?
Yes. What we’re really doing is bringing together the teams, identifying how they work now, and asking them what their friction and pain points are. Nine times out of 10, it’s going to be “Security doesn’t understand DevOps, DevOps doesn’t understand security, how do we make things go smoother?” And then we give them a roadmap for success, as well as training.
We see the DevSecOps transformation as being a people, process and technology issue. You have to train the people and get them to talk to each other, you have to improve some of the processes that they go through, and you have to also add new tools.
“You have to train the people and get them to talk to each other, you have to improve some of the processes that they go through, and you have to also add new tools.”
[In terms of people], it’s about training each other in each other’s expertise. I know a lot of developers who have never heard of the OWASP Top Ten, which to a security person is very basic: it’s a list of the most common vulnerabilities found in code, the most common attack vectors. But it’s also training security people in DevOps: What does the DevOps life cycle look like, and how can you work with it so that you’re speaking developers’ language and giving them security results in a timely manner that fits into their cycle? It’s also about getting the C-level involved, so they give the directive that these teams have to be meeting and talking.
The process part is, “How are we putting things in the DevOps life cycle?” For instance, security can advise on certain guardrails. If somebody’s spinning up cloud infrastructure, there can be templates — like CloudFormation or Terraform templates — that define certain guardrails for developers to work within. And it’s also having a process to make sure that both DevOps and security are talking to each other, and that they’re going through the proper checks and channels.
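To make the guardrail idea concrete, here is a minimal sketch of the kind of automated policy check Geronimo describes, written in Python against the JSON output of `terraform show -json`. The two rules are hypothetical examples for illustration, not 2nd Watch’s actual policy:

```python
import json
import sys

def check_plan(plan: dict) -> list[str]:
    """Flag planned resources that fall outside the security guardrails."""
    violations = []
    for resource in plan.get("resource_changes", []):
        after = (resource.get("change") or {}).get("after") or {}
        # Hypothetical guardrail 1: no publicly readable S3 buckets.
        if resource.get("type") == "aws_s3_bucket_acl" and after.get("acl") == "public-read":
            violations.append(f"{resource['address']}: public-read ACL is not allowed")
        # Hypothetical guardrail 2: never open SSH to the whole internet.
        if resource.get("type") == "aws_security_group_rule":
            if after.get("from_port") == 22 and "0.0.0.0/0" in (after.get("cidr_blocks") or []):
                violations.append(f"{resource['address']}: SSH open to 0.0.0.0/0")
    return violations

if __name__ == "__main__":
    # Usage: terraform show -json plan.tfplan | python check_plan.py
    violations = check_plan(json.load(sys.stdin))
    for violation in violations:
        print(violation)
    sys.exit(1 if violations else 0)
```

Because the check runs against the plan, developers get feedback before anything is provisioned — they work within the guardrails rather than waiting for a review afterward.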
[In terms of tools], it’s not just about throwing in a bunch of tools — it’s about knowing which tools are essential. No company is going to be able to adopt 20 different tools at once; it’s more likely that they’ll implement a little at a time. What tools do we need right now? What tools can we add in the future? And where in the DevOps life cycle do we actually put all of these things? The “where in the DevOps cycle” is a big part of DevSecOps, because it moves security away from sitting entirely in the testing stage and sprinkles it throughout the cycle, so that the whole cycle moves more smoothly.
Should the technology a company uses be picked by the security team or by the development team?
The security team should be saying which tools they absolutely need to have right now. They should have a comprehensive list of all the security tools, and they should know which ones they’ll need from day one and which ones they can apply later down the line. And then DevOps would be there to help them implement those tools and figure out the best place for them in the life cycle.
Lora Vaughn is the chief information security officer at Simmons Bank, a mid-sized bank headquartered in Pine Bluff, Arkansas. She has worked in the cybersecurity space for the past 10 years, most recently running application security testing for another bank prior to joining Simmons.
What are some of the most common roadblocks to good security practices when developing software?
When you learn to code, most of the time it’s about how you code: what data structures are, what the different data types are. It’s not necessarily about writing the best code or the most secure code. That’s one of the struggles.
I think a lot of universities are starting to incorporate some of that into their curricula. But beyond that, most people don’t think about the ways something can be used that it’s not intended for. If I’m a developer, I’m most likely going to be focused on, “How do I make an application do what I want it to do?”
“Most people don’t think about the ways something can be used that it’s not intended for.”
It’s got a specific purpose, and sometimes those blinders are so intense that you don’t think, “Well, you know, someone might try to enter text into that number field, so I should check for that.” And that’s really the reason a lot of security flaws come about: developers are so focused on the task at hand.
Security is a practice in itself and has its own mindset. The people who are good at security tend to look at something and go, “Oh, you know what, I bet I could twist that doorknob the other way and get it to open, not just the way the person who designed it intended.” The mentality you’re in when you’re building something doesn’t always include thinking about the edge cases.
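Vaughn’s number-field example is easy to sketch in code. In this hypothetical Python handler, the naive version trusts its input, raising an unhandled error on text and accepting nonsense numbers, while the defensive version checks type and range first:

```python
def set_quantity_naive(form_value):
    # Assumes the field always holds a number: raises an unhandled
    # ValueError on "abc", and happily accepts values like -5 or 10**9.
    return int(form_value)

def set_quantity_defensive(form_value):
    # Validate both the type and the range before trusting the value.
    try:
        quantity = int(form_value)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= quantity <= 100:
        raise ValueError("quantity must be between 1 and 100")
    return quantity
```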
What are some ways you’ve found effective for getting programmers to think differently?
When I was doing application security, we would get involved with the agile groups during some of their regular program increment meetings and talk through some of the things that can go wrong with their code. Because they may not even know what cross-site scripting is, or what SQL injection is.
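For readers who haven’t met those terms either, SQL injection is the easiest to show. In this Python sketch (the table and data are made up), concatenating user input into a query lets that input rewrite the query, while a parameterized query treats it as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL string itself,
# so the injected OR clause matches every row in the table.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()
print(rows)  # [('alice',), ('bob',)]

# Safe: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```

This is also the class of flaw that the static analysis tools Vaughn mentions below are built to catch early.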
“If I find a major security flaw right before you’re about to go to production, then it may take months to get that thing fixed without introducing unintended consequences.”
But you have to do more than just educate. It’s about putting tools into developers’ hands earlier in the process, while they’re developing. In the old waterfall methodology, the way we tested software was that a developer would go and spend a year writing this thing, and at the end, when we were doing QA testing and user acceptance testing, we’d throw security testing in as well. The problem is, if I find a major security flaw right before you’re about to go to production, then it may take months to get that thing fixed without introducing unintended consequences.
That’s how static application security testing tools came about. They let you catch a lot of those major flaws early on, so that you’re not fixing things after the fact.
Could you talk a bit about teams?
With agile teams, the way to be most successful is to have a security champion on each team. Usually, an agile team will have people who are coding, and you may have someone doing the QA testing. A lot of times, that QA person can serve as the security champion as well and be responsible for those security assessments.
Joren McReynolds leads a product team for Red Canary, a Denver-based company that provides intrusion detection and incident response services to clients. He led intrusion detection, incident response and engineering teams at other companies prior to joining Red Canary.
What are some good security practices that Red Canary follows?
When we are writing new features or new services internally, we have processes in place that say we must use the existing architecture and technology stacks that we already have, because we’ve invested a lot of security in it already. If you feel like you need to do something in a completely different way, then more scrutiny gets applied. We try to avoid technology sprawl: instead of having seven different programming languages, we try to home in on two. That way, we don’t have to make the investment seven times, once for each programming language.
The other principle we follow is secure by default. A simple example is, instead of each engineer artisanally writing their own way of sending traffic somewhere and having to remember to opt in to use SSL or TLS, we have a nice little abstraction or library that does that for them. And so, by default, data will be encrypted in transit. By default, identities will be verified.
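As an illustration of that pattern, here is a hypothetical “secure by default” wrapper in Python, built on the `requests` library. It is a sketch of the idea, not Red Canary’s actual abstraction:

```python
import requests

class SecureHttpClient:
    """Hypothetical wrapper: callers get TLS and certificate
    verification automatically, with no way to forget to opt in."""

    def __init__(self, timeout: float = 10.0):
        self._session = requests.Session()
        self._session.verify = True  # always verify server certificates
        self._timeout = timeout

    def get(self, url: str, **kwargs):
        # Refuse plaintext transport instead of silently allowing it.
        if not url.lower().startswith("https://"):
            raise ValueError(f"refusing non-HTTPS URL: {url}")
        kwargs.setdefault("timeout", self._timeout)
        return self._session.get(url, **kwargs)

# Engineers call the wrapper instead of hand-rolling transport,
# so encryption in transit is the default rather than a choice.
client = SecureHttpClient()
response = client.get("https://example.com/api/health")
```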
“We have processes in place that say we must use the existing architecture and technology stacks that we already have, because we’ve invested a lot of security in it already.”
And then there’s automation. With continuous integration, anytime someone puts code up for review, it should automatically get checked for vulnerabilities and for whether it adheres to your internal processes. If we know of a bad practice, we can codify it and then get alerted if someone attempts to use it. So through both humans and automation, the goal is to identify bad decision-making — and that could be a vulnerability or just unnecessary complexity that introduces potential security vulnerabilities.
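Here is a minimal sketch of what “codify it and get alerted” can look like in practice: a Python script, run by CI on every code review, that scans the changed files for patterns the security team has banned. The specific patterns and the branch name are hypothetical:

```python
import re
import subprocess
import sys

# Each entry codifies a known bad practice, with a message that
# points the developer at the approved alternative.
BANNED_PATTERNS = [
    (re.compile(r"verify\s*=\s*False"),
     "TLS verification disabled; use the secure-by-default client"),
    (re.compile(r"\bpickle\.loads?\("),
     "pickle on untrusted data; use json instead"),
    (re.compile(r"shell\s*=\s*True"),
     "shell=True invites command injection; pass an argument list"),
]

def changed_python_files() -> list[str]:
    # Files modified in this change, relative to the main branch.
    diff = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in diff.splitlines() if f.endswith(".py")]

def main() -> int:
    findings = 0
    for path in changed_python_files():
        with open(path, encoding="utf-8") as source:
            for lineno, line in enumerate(source, start=1):
                for pattern, message in BANNED_PATTERNS:
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {message}")
                        findings += 1
    return 1 if findings else 0  # nonzero exit fails the review check

if __name__ == "__main__":
    sys.exit(main())
```

A real pipeline would pair a check like this with an off-the-shelf static analysis tool, but the point is the same: once a bad practice is codified, no human has to remember to look for it.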
How often do the security team and the developers work together on security?
While it is fun to have engineers hang out with the security team, it’s best if we can find a way to extract the knowledge from the security team and distill that to the engineers in an automated fashion. If the developer can write code in a way that’s secure by default, the engineer doesn’t need to understand what cross-site scripting is, or SQL injection, or the 185 different ways of achieving it. So we try to avoid the older methodology, which is to have a security engineer on every large project and they must review it every two weeks.
“If the developer can write code in a way that’s secure by default, the engineer doesn’t need to understand what cross-site scripting is, or SQL injection, or the 185 different ways of achieving it.”
We try to bake that DNA into the entire life cycle. Rather than waiting two weeks for a meeting, having a 60-minute meeting and then being told the three things they need to fix, it’s much better if those three things can be highlighted automatically.
But the challenge is always scaling your security team. There are always so many projects and so many code changes that you can’t hire enough people to review everything manually. So automation is really key. And then you can leverage your security personnel for the very complicated things they’re best suited to solve.
How technical do you think security team members need to be?
In my career, I’ve definitely seen the need for security team members to have more of an engineering background. Before, security lived in a world of compliance and firewalls, and the need to know and understand how code works wasn’t as pressing as it is today. With engineering teams moving very quickly, with the advent of cloud, and with changes being made every day — maybe even hundreds of code changes a day — having a security person manually review everything doesn’t scale anymore.
“Engineering will move without you unless you partner with them.”
Security teams need to know more about development: what developers care about, what the process looks like. They need to be more embedded in those teams. They can’t just hand down opinions on how things should get done and expect people to abide by them. You have to meet in the middle. Engineering will move without you unless you partner with them.
So team members need to understand how the business works and how engineering works, and find ways to make developers more effective. The bar for a security person’s skill set has definitely gotten higher. They have to have that security expertise, but they also need some understanding of engineering, and they have to understand how their work aligns with the business, including where they should accept risk and where they shouldn’t.
When do security team members actually have to step in?
Definitely in design. It’s very easy to see if accepted patterns are being used, or if someone’s deciding to introduce a new programming language or wants to use a new AWS service that we’ve never used before. So design is where we have security, from a human perspective.
And then from a code review perspective, anything that smells weird alerts the security team — or sometimes someone tags the security team manually. They’ll say, “I tried to implement this thing in what seems like a sane way, but I’m not an expert. Can you help me out?” So again, security is seen as an ally helping the developer get to the finish line.
And there are other streams of work always happening, like ensuring that we’re meeting compliance standards. If a feature is complex enough, or new enough, or we deem the attack surface to be higher, we’ll engage a penetration testing company to have an expert set of eyes look at it for a defined period of time. So it’s all about risk management.
Is there a best way to structure development teams for thinking about security?
Usually, I don’t see the organization of an engineering team impacting security outcomes that much, because there’s always a business owner or a product manager with a defined set of business outcomes, and the engineers are working toward achieving that. So whether it’s an extremely flat structure or very hierarchical, that doesn’t change a whole lot.
“It’s crucial to have security leadership that is in tune with the business and in tune with the engineering leaders.”
What I’ve found to be really important is where security fits within an organization. Sometimes, security reports to compliance, and it’s a completely different organization than engineering. At other companies, security is within the engineering organization and in fact reports to the same director or VP. I think that has a huge influence, because if there are no shared outcomes, then the security team is operating from a place of influence — and that’s a hard place to be in. If you’re seen as outsiders and you’re trying to influence decisions or inject yourself into a process in an organization that you’re not even a part of, you have to have some really great leadership to build those inroads.
At the end of the day, it’s crucial to have security leadership that is in tune with the business and in tune with the engineering leaders. That way, they can mesh their priorities and see how they can enable each other — versus what can sometimes happen, which is this adversarial nature.
How do companies stay on top of changing security landscapes?
That’s really the benefit of having a security team. That team is afforded the time, money and resources to become experts in the field and see where things are changing so the right investments can get made. It’s really hard to keep up to date, and I think that’s where, as a company, you have to make a decision of, “What level of investment am I going to make internally, and where do I need to leverage outside expertise?”
These interviews have been edited for length and clarity.