Synopsys Software Integrity Group Senior Security Strategist Jonathan Knudsen discusses why failure to address security early in the software development life cycle can increase business risks.
By Jonathan Knudsen
Functional testing usually focuses on the happy path — a place where users act rationally, systems behave well, and nobody is attacking your application. Too often, the testing team's job is defined narrowly as ensuring that the software has the required functionality.
However, the real world is a messy and chaotic place. As soon as your software is deployed, it will encounter a wide range of unexpected and badly formed input coming from systems and humans that are accident-prone, like Mr. Magoo — or actively malicious, like Lex Luthor.
To make this absolutely clear, let’s take a look at an extremely simple piece of software. It passes functional tests but is alarmingly easy to break.
Requirements and design
Somebody in your company has a great idea for a software product. “Give it your name,” gushes your product manager, “and it says ‘hello’ back to you. Everybody’s gonna love this thing!” They decide to call it faceplant. The product has a single functional requirement:
Given an input string x, the product will produce the output “Hello, x.”
Without any security review of the design, the requirements are done and get handed off to the development team.
In the absence of any consideration for security, the development team has latitude to choose programming languages, frameworks, and open source software components to meet the functional requirements.
For whatever reason, your developers use C to implement this software. The developers build the software and do a little local testing to make sure it works. Everything looks good, so they tell the testing group that it’s ready for testing.
The test group designs tests to make sure the functional requirements are met. They might do these manually, or they could automate them. Regardless, they work up a series of test cases that will ensure the software works as intended.
They might start with the same test cases the developers used, and then expand them.
If they’re really going the extra mile, they might throw in some test cases to support non-English-speaking customers, even though there are no functional requirements for this.
The software passes all the tests and gets handed off to the deployment and release team.
Wait … what?
The product team is just about finished with champagne toasts when things start falling apart.
Bug reports start flooding in. When faceplant is deployed as a network service, servers start suffering from denial-of-service attacks and possibly being compromised.
What happened? All the tests passed, which means the software is perfect, right? Right?
Meanwhile, in a secret hacker lair
The fundamental problem here was trusting user input. Both the developers and the testers focused on functionality, assuming that the software would be handed input that makes sense.
An attacker looking to exploit faceplant would utterly disregard any explicit or implied rules about input. The first thing a hacker might try is supplying no input at all.
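Assuming a naive C implementation compiled to a binary named `faceplant`, that first probe might look like this (hypothetical transcript):

```
$ ./faceplant
Segmentation fault (core dumped)
```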
Whoops, that produced a segmentation fault. Next, an attacker might try supplying some long input.
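A long argument is easy to generate on the command line — for example (again a hypothetical transcript, assuming a binary named `faceplant`):

```
$ ./faceplant $(python3 -c 'print("A" * 10000)')
Segmentation fault (core dumped)
```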
That gave another segmentation fault. This one seems promising for the attacker — maybe it could lead to a remote code execution exploit.
The attacker might also try supplying format strings.
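A format-string probe passes `printf` conversion specifiers as the name (hypothetical invocation, same assumed binary):

```
$ ./faceplant "%p %p %p %p"
```

If the program passes the name straight to `printf` as the format string, the reply echoes raw values from the stack.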
Now the attacker is able to see the contents of memory, which could be useful in crafting an exploit or could reveal sensitive data, just like the Heartbleed vulnerability.
Yes, it’s a toy
Of course, faceplant is not a real piece of software. But it highlights how easy it is to introduce bugs in even a trivial implementation.
Admittedly, our development team did a truly awful job with this one. But that’s not really the point. Developers will make mistakes — they’re human, after all — so the process that surrounds them must help drive down risk in the software they’re creating.
If this happened in 10 lines of code, what about an application with 10,000 lines of code? 100,000? A million?
Make sure it doesn’t fail
Functional testing is important. You use it to make sure that the software works when used as intended.
But it’s equally important to make sure that the software doesn’t fail when weird stuff happens. If somebody runs the software without specifying a name, it shouldn’t produce a segmentation fault. Instead, it should exit quietly, respond with a polite error message, or write information to a log file somewhere. If somebody calls the software with a very long name, the software should handle it gracefully.
Battle-hardened developers know these things and write more robust, more secure code. Battle-hardened test teams know them as well, and write test cases that supply unexpected or unusual inputs to software to see what happens.
Relying on battle-hardened developers and battle-hardened testers isn’t a viable strategy, though. For one thing, that kind of talent is hard to find. More importantly, even if you have the best engineers, they still miss things. Baking security into every phase of software development, and taking advantage of automation, is the best way to achieve more secure software and drive down business risk.
Design. When an application or feature is envisioned, designers and architects need to adopt an attacker mindset to see how the application could be compromised. Threat modeling and architectural risk assessments are useful at this phase and help improve the overall structure and resiliency of the application.
Implementation. Tools such as the Code Sight IDE plugin help developers identify and remediate security issues as they’re writing code. Code Sight is like a senior engineer peeking over your shoulder as you write code, pointing out problems, suggesting solutions, and explaining concepts you don’t understand.
Testing. Security testing should be incorporated into the test cycle just like functional testing. Automated security testing (static application security testing [SAST], software composition analysis [SCA], fuzzing, interactive application security testing [IAST], etc.) should be run automatically, with results feeding into the issue tracker and other systems that the development team is already using.
Deployment. Policies and processes should mandate safe and secure deployments. SCA can also be advantageous here to examine open source components present in a deployment container or system.
Aside from the individual steps of the development cycle, continuous efforts are also helpful.
Education. Ongoing education about software security topics helps the entire organization make better choices and reduce risk. Topics can range from general software security to specific coverage of programming languages and frameworks.
A proactive, holistic approach to security is the best way to reduce business risk.
Automate and integrate your security tools
A holistic approach embeds security in every part of software development. Security tools, such as static analysis, SCA, fuzzing, and more, help locate weaknesses. Tools should be automated so that they run at appropriate times during development, and results should be integrated into issue tracking and other systems that the development team is already using.
When security is done right, it doesn’t feel like something extra. By automating and integrating security testing, development teams can address security issues just like they address any other issues. The end result is a stronger, more secure application that minimizes business risk for the vendor and for customers.