
How Hackers Can Poison Your Code

Hackers are always looking for new ways to compromise applications. As languages, tools and architectures evolve, so do application exploits. And the latest target is developers.

Traditionally, software supply chain exploits, such as the Struts incident at Equifax, depended on an organization's failure to patch a known vulnerability. More recently, supply chain attacks have taken a more sinister turn because bad actors are no longer waiting for public vulnerability disclosures. Instead, they're injecting malicious code into open-source projects, or building malicious components that feed the global supply chain.

No one in the enterprise, including developers, knows all of the components that an application comprises, nor do they understand all the dependencies associated with those components. It's a potential liability issue that, combined with a demand for greater transparency, has fueled the adoption of software composition analysis (SCA) and software bill-of-materials (SBOM) tools.

“We've created package managers that make it easy and fast for developers to reuse binary components, which arguably makes them more productive, but those tools also introduce transitive dependencies,” said Brian Fox, CTO of Sonatype. “If I pull one thing, that thing pulls in its dependencies, and in Java it's not uncommon to see a 10x or even 100x explosion. In JavaScript it's even worse, 100x to 1,000x.”
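Fox's point about the explosion is easy to see with a toy dependency graph. The sketch below is illustrative only: the package names and graph are invented, and a real resolver also handles versions and conflicts that this ignores.

```python
# Illustrative only: a toy dependency graph showing how one direct
# dependency can pull in a much larger transitive set. The package
# names are hypothetical, not real Maven or npm artifacts.
from collections import deque

DEPENDENCIES = {
    "my-app": ["web-framework"],  # the one thing "I pull"
    "web-framework": ["http-core", "json-mapper", "logging-api"],
    "http-core": ["io-utils", "tls-utils"],
    "json-mapper": ["reflection-utils"],
    "logging-api": ["logging-impl"],
    "io-utils": [], "tls-utils": [], "reflection-utils": [], "logging-impl": [],
}

def transitive_closure(root: str) -> set[str]:
    """Breadth-first walk collecting every component `root` depends on."""
    seen, queue = set(), deque(DEPENDENCIES.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDENCIES.get(dep, []))
    return seen

direct = DEPENDENCIES["my-app"]
full = transitive_closure("my-app")
print(f"direct dependencies:     {len(direct)}")   # 1
print(f"transitive dependencies: {len(full)}")     # 8 -- the "explosion"
```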

‘How can you be sure that what was okay a year ago is still okay?’

— Brian Fox, CTO, Sonatype

Next-Gen Supply Chain Attacks Growing

According to Sonatype's 2020 State of the Software Supply Chain report, the number of next-generation cyberattacks actively targeting open-source projects has been rising rapidly. From February 2015 to June 2019, 216 such attacks were recorded. Then, from July 2019 to May 2020, an additional 929 attacks occurred. These next-generation supply chain attacks are increasing for three reasons.

First, open-source projects rely on contributions from thousands of volunteer developers, and it's difficult or impossible to distinguish members with good intent from those with malicious intent.

Second, when malicious code is secretly injected “upstream” to the developer via open source, it's highly likely that no one realizes the malware exists, except for the person who planted it. This approach allows adversaries to surreptitiously set traps upstream and carry out attacks downstream once the vulnerability has moved through the supply chain into the wild.

Finally, open-source projects typically incorporate hundreds or thousands of dependencies from other open-source projects, many of which contain known vulnerabilities. While some open-source projects demonstrate exemplary hygiene as measured by mean time to remediate (MTTR) and mean time to update (MTTU), many others do not.
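MTTR and MTTU are simple averages in principle. Below is a minimal sketch of computing MTTR for a project, assuming you have disclosure and fix dates for each vulnerability; the dates here are invented for illustration.

```python
# A minimal sketch of mean time to remediate (MTTR): the average number
# of days between a vulnerability's disclosure and the release that
# fixes it. The (disclosed, fixed) pairs below are invented.
from datetime import date

remediations = [
    (date(2020, 1, 10), date(2020, 1, 14)),
    (date(2020, 3, 2),  date(2020, 4, 20)),
    (date(2020, 6, 5),  date(2020, 6, 6)),
]

mttr_days = sum((fixed - disclosed).days
                for disclosed, fixed in remediations) / len(remediations)
print(f"MTTR: {mttr_days:.1f} days")  # lower indicates better hygiene
```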

Why Approved Component Lists Don’t Help

The dynamic nature of software development is at odds with approved component lists because the lists are not updated as often as they should be. The task is too complex and time-consuming for humans.

“There are millions of components if you include the multiple ecosystems that are out there, and they're changing four, 10, 100 times a year. How can you be sure that what was okay a year ago is still okay?” said Fox. “People are still using Struts because it's on their approved list, even though it's been a level 10 vulnerability for about 15 years now.”

Modern enterprises need the ability to define policies that can be applied to individual components, whether the rule is based on licensing, the age of the component, the popularity of the component or other criteria. Once the policy has been defined, it can be executed automatically.
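As a rough illustration of what such codified policies can look like, here is a minimal sketch of an automated component policy check. The fields, thresholds and license list are assumptions made for the example, not any particular vendor's schema.

```python
# A minimal sketch of codified component policy: rules on licensing,
# age and popularity evaluated automatically instead of asking legal,
# architecture or security by hand. All thresholds are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Component:
    name: str
    version: str
    license: str
    released: date
    monthly_downloads: int

ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}
MAX_AGE_YEARS = 3
MIN_POPULARITY = 10_000

def violations(c: Component) -> list[str]:
    """Return the policy rules this component violates, if any."""
    problems = []
    if c.license not in ALLOWED_LICENSES:
        problems.append(f"license {c.license} is not approved")
    if (date.today() - c.released).days > MAX_AGE_YEARS * 365:
        problems.append(f"released {c.released}, older than {MAX_AGE_YEARS} years")
    if c.monthly_downloads < MIN_POPULARITY:
        problems.append("popularity below threshold")
    return problems

legacy = Component("struts", "1.2.9", "Apache-2.0", date(2006, 3, 1), 2_000)
for problem in violations(legacy):
    print(f"POLICY VIOLATION [{legacy.name}]: {problem}")
```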

“With tooling, you can inspect the software, run those policies, understand why a certain component wasn't used in this application and recommend a better one. By codifying all that, you can avoid walking over to legal, architecture or security to ask permission,” said Fox.

While static and dynamic analysis tools help identify problems in code, their capabilities may not extend to third-party code because there are too many code paths to evaluate. As a result, the vast majority of an application's code may go unscanned.

In addition, when a developer downloads and runs a malicious component, that component could install a back door on their system. Similarly, with continuous integration, the poisoned code can seep even further into the pipeline.
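One reason a downloaded component can plant a back door is that most package managers execute code at install time. The benign Python sketch below shows the hook: running `pip install .` on this package executes whatever the install command contains, before the application itself ever runs. (npm's install scripts offer a similar opening.)

```python
# A benign sketch of why installing a component is itself a risk:
# setuptools lets a package run arbitrary code during `pip install`.
# A real attack would hide something far nastier in run().
from setuptools import setup
from setuptools.command.install import install

class InstallHook(install):
    def run(self):
        # Arbitrary code executes on the developer's machine here,
        # with the developer's privileges, at install time.
        print("install-time code is running on your machine")
        super().run()

setup(
    name="example-component",   # hypothetical package name
    version="0.1.0",
    cmdclass={"install": InstallHook},
)
```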

“Attackers are now focused on the developers and the development infrastructure as the way into the organization,” said Fox. “That way, they can bypass all the enterprise security stuff like firewalls. By abstracting the sheer complexity of applications' components and their dependencies into policies, you can provide developers with guardrails that help improve application security, and those developers don't have to ask others in the organization for permission every time.”

Learn more at www.sonatype.com.

Because software supply chain security should feel like a no-brainer.

Continuously monitor open source risk at every stage of the development life cycle within the pipeline and development tools you’re already using.

Lifecycle is made for developers.

You make hundreds of decisions every day to harden your supply chain. You expect interruptions. They're part of your work. The problem is when they get in the way of your work. We tell you what you need to know to build safely and efficiently — and we tell you when you need to know it. Then we quietly continue our work, and allow you to do the same.

With Nexus Lifecycle, devs can:

• Control open source risk without switching tools.
• Inform your decisions with the best intelligence database out there.
• Get instant feedback in Source Code Management.
• Automatically generate a Software Bill of Materials (see the sketch after this list).
• Enforce open source policies without sacrificing speed.
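As a rough illustration of the SBOM item above, here is a minimal sketch loosely modeled on the CycloneDX JSON format. In practice a tool emits this from the resolved dependency graph; the component shown is invented.

```python
# A minimal, illustrative SBOM document loosely modeled on the
# CycloneDX JSON format; the component listed is hypothetical.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-lib",
            "version": "2.3.1",
            "purl": "pkg:maven/com.example/example-lib@2.3.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
}
print(json.dumps(sbom, indent=2))
```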

Two Sides of Testing

Software test automation for the survival of business

BY DON JACKSON

In today’s business environment, stakeholders rely on their enterprise applications to work quickly and efficiently, with absolutely no downtime. Anything short of that could result in a slew of business performance issues and ultimately lost revenue. Take the recent incident in which CDN provider Fastly failed to detect a software bug, which resulted in massive global outages for government agencies, news outlets and other vital institutions.

Effective and thorough testing is mission-critical for software development across categories including business software, consumer applications and IoT solutions. But as continuous deployment demands ramp up and companies face an ongoing tech talent shortage, inefficient software testing has become a serious pain point for enterprise developers, and they’ve needed to rely on new technologies to improve the process.

The benefits of test automation

As with many other disciplines, the key to quickly implementing continuous software development and deployment is robust automation. Converting manual tests to automated tests not only reduces the amount of time it takes to test, but also reduces the chance of human error, letting fewer defects escape into production. Converting manual testing to automated testing alone can compress three to four days of manual testing into a single eight-hour overnight session, so testing does not even have to happen during peak usage hours.
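As a sketch of what such a conversion can look like, the pytest example below encodes a manual test step as an automated check. The function under test is hypothetical; the point is that an expectation a tester once verified by hand now runs unattended in an overnight suite.

```python
# A minimal sketch of converting a manual check into an automated one
# with pytest. The production function is hypothetical.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code a manual tester used to exercise."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The manual script said: "enter 100.00 and 15%, expect 85.00".
# The automated version encodes that expectation once, forever:
@pytest.mark.parametrize("price,percent,expected", [
    (100.00, 15, 85.00),
    (19.99, 0, 19.99),
    (50.00, 100, 0.00),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.00, 150)
```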

Automation solutions also allow organizations to test more per cycle in less time by running tests across distributed functional testing infrastructures and in parallel with cross-browser and cross-device mobile testing.
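A minimal sketch of that idea, assuming Selenium WebDriver with browser drivers installed locally: parametrizing one test across browsers lets a runner such as pytest, with the pytest-xdist plugin's `-n auto` flag, execute the cases in parallel.

```python
# Sketch: the same functional test fanned out across browsers.
# Requires Selenium plus local Chrome/Firefox drivers to actually run;
# `pytest -n auto` (pytest-xdist) distributes the cases in parallel.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=BROWSERS)
def driver(request):
    drv = BROWSERS[request.param]()  # launch the parametrized browser
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")  # placeholder URL
    assert "Example" in driver.title
```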

Don Jackson is chief technologist of Application Delivery Management, Micro Focus.

Challenges in test automation

Despite all the benefits of automated software testing, many companies are still facing challenges that prevent them from reaping the full benefits of automation. One of those key challenges is managing the complexities of today's software testing environment, with an increasing pace of releases and a proliferation of platforms on which applications need to run (native Android, native iOS, mobile browsers, desktop browsers, etc.). With so many conflicting specifications and platform-specific features, there are many more requirements for automated testing, meaning there are just as many potential pitfalls.

Software releases and application upgrades are also happening at a much quicker pace in recent years. The faster rollout of software releases, while necessary, can break test automation scripts due to fragile, properties-based object identification or, even worse, bitmap-based identification. Due to the varying properties across platforms, tests must be properly replicated and administered on each platform, which can take immense time and effort.
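The contrast between fragile and resilient object identification can be sketched with Selenium locators; the selectors below are hypothetical. The positional XPath breaks as soon as the page layout shifts, while a dedicated test ID survives redesigns because the team controls it across platforms.

```python
# Brittle vs. resilient object identification, sketched with Selenium.
# Both locators are hypothetical examples.
from selenium.webdriver.common.by import By

# Fragile, position-based: any new sibling element breaks it.
FRAGILE_LOCATOR = (By.XPATH, "/html/body/div[3]/div[2]/form/input[1]")

# Resilient: a stable, team-owned test ID independent of layout.
RESILIENT_LOCATOR = (By.CSS_SELECTOR, "[data-testid='checkout-submit']")

def submit_checkout(driver):
    """Click the checkout button using the stable locator."""
    driver.find_element(*RESILIENT_LOCATOR).click()
```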

Robust and effective test automation therefore requires an elevated skill set, especially in today's complex, multi-ecosystem application environment. Record-and-playback testing, an approach that records a tester's interactions and executes them many times over, is no longer sufficient.

Ensuring robust automation with AI

To meet the high demands of software testing, automation must be coupled with artificial intelligence (AI). Truly robust automation must be resilient, and must not depend on the product code being complete before tests can be created. It must be well-integrated into an organization's product pipelines, adequately data-driven and in full alignment with the business logic.

Organizations can allow quality assurance teams to begin testing earlier – even in the mock-up phase – through the use of AI-enabled capabilities for the creation of a single script that will automatically execute on multiple platforms, devices and browsers. With AI alone, companies can experience major increases in test design speed as well as significant decreases in maintenance costs.

Furthermore, with the proliferation of low-code/no-code solutions, AI-infused test automation is even more critical for ensuring product quality. Solutions that infuse AI object recognition can enable test automation to be created from mockups, facilitating test automation in the pipeline even before product code has been generated or configured. These systems can provide immediate feedback once products are initially released into their first environments, making for more resilient, successful software releases.

Cumbersome manual testing is no longer sufficient, and enterprises that continue to rely on it will be caught flat-footed, outperformed and out-innovated. Investing in automation and AI-powered development tools will give enterprises the edge they need to stay ahead of the competition.


Software is designed for humans: it should be tested by humans

BY JONATHAN ZALESKI

In the sprint to keep a competitive edge during digital transformation, organizations are optimizing and updating how they build and deploy software, trying to create a seamless continuous integration/continuous delivery (CI/CD) structure. Leveraging tech like AI, machine learning and automation is certainly helping to make this process as efficient as possible. But optimizing speed must be carefully balanced with maintaining — and improving — quality.

Where and how does testing fit into accelerating software development pipelines?

Shift-left testing has gone from new concept, to recognized buzzword, to reality for many digitally evolving organizations. Instead of running QA after code is developed, this testing is now taking place earlier and earlier in the software development life cycle (SDLC). This is done in part with the help of the developers who are actually responsible for building the code.

Testing earlier in the SDLC has the potential to slow down development, which runs against developers' priority of building and shipping code as quickly as they can. But this slowdown has been worth it for many brands, leading to fewer bugs released to end users and savings on the cost of fixing bugs later in development or once deployed. Essentially, many organizations are on board with compromising a bit of speed for an overall better user experience.

But should they have to?

Jonathan Zaleski is senior director of Engineering and Head of Applause Labs, at test solutions provider Applause.

Collaboration and real-time reviews

At the core of shift-left testing is the notion that every member of a team is working together in the name of improved quality, but that shouldn’t mean that release velocity is sacrificed to a great degree in the process.

Pair programming — where two developers work together to create code — is a great example of how collaboration and real-time review can be used to improve code quality at the outset. With pair programming, one developer writes the code and one reviews it in real time, so as to make the process as efficient, and the code as clean, as possible early on.

This real-time review process goes against the grain of traditional automation, but is nonetheless an important tool in shifting testing and quality processes left. Real-time review and in-sprint testing methods like pair programming are useful steps to take while test automation matures.

They also offer benefits that test automation cannot: only human testers can provide the dynamic, unbiased validation and verification of software that machines are simply not yet capable of. Automation can tell you if the intended result was achieved, for example, but cannot tell you if the experience was intuitive, easy to use or inclusive of all potential end users.

The human element

Automated software testing does all it needs to do to tell developers and QA teams whether software is working. But in the wild, where that software is actually used and its value is realized, it isn't so simple.

When software is only tested in a lab environment, it doesn't encounter the variables of real-world use. Automated testing simply does not cover the diversity of real user experience across the billions of humans accessing applications every day, around the world.

For this reason, organizations committed to providing the highest quality of user experience and accessibility for their users and customers will keep humans involved in software testing.

Offsetting with automated testing

Developers are an invaluable resource to organizations. IT leaders naturally want the majority of developer time to be spent focused on developing applications. Yes, at some organizations with less mature QA setups, developers do need to spend some time on quality and testing, but ideally as little time as possible should be spent away from their main priority of developing exceptional software.

Shifting testing left has pulled developers further into the mix of testing responsibility, however. This can reduce developer productivity and, as we know, reduce release cycle speed. But automated testing capabilities can actively offset these areas of compromise.

All in the name of user experience

The benefits of automated testing practices can't be overstated. Automated tools pick up on issues that humans sometimes miss, and they do it in a way that is agile and efficient. But as long as the end product is being used by people, people also need to be involved in some aspect of the testing.

Adding this human element into the mix alongside the efficiency of automated testing is the best way to make sure an application is ready and accessible for any prospective user.
