SD Times


SEPTEMBER 2020 • VOL. 2, ISSUE 39 • $9.95 • www.sdtimes.com


EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza, ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS: Jenna Sargent, jsargent@d2emerge.com; Jakub Lewkowicz, jlewkowicz@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS: Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, mleonardi@d2emerge.com
LIST SERVICES: Jessica Carroll, jcarroll@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com
SALES MANAGER: Jon Sawyer, 603-547-7695, jsawyer@d2emerge.com

PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com



Contents

VOLUME 2, ISSUE 39 • SEPTEMBER 2020

FEATURES
8    GDPR, CCPA, and CPRA — Oh my!
12   Mainframe for DevOps puts an end to silos
18   There’s more to testing than simply testing (Software Testing: The last of three parts)
24   Closing the (back) door on supply chain attacks

NEWS
4    News Watch
6    Enterprises require SREs to supplement Ops teams, rather than to replace them
16   Harness acquires CI company Drone.io
16   Jenkins graduates from the CD Foundation

COLUMNS
28   GUEST VIEW by Stephen Gates: 3 Reasons to get going with Go Lang
29   ANALYST VIEW by Arnal Dayaratna: The birth of the digital librarian
30   INDUSTRY WATCH by David Rubinstein: Checking my notes

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Adobe discontinues PhoneGap
Adobe announced it is ending development of PhoneGap and PhoneGap Build. PhoneGap was created in 2008 to give mobile app developers a way to easily create web and mobile applications with a single codebase. Since then, the industry and market have evolved and PhoneGap usage has declined. “In the context of these developments and declining PhoneGap usage, Adobe is focusing on providing a platform that enables developers to build, extend, customize and integrate with Adobe products,” the company wrote in a post. PhoneGap Build will be discontinued on October 1. In addition, Adobe will be ending its investment in Apache Cordova.

JetBrains Space reaches beta
JetBrains Space is a new platform that acts as an integrated team environment. According to the company, Space allows development teams to better organize, communicate, plan, and build and deliver products. For the past eight months the platform has been in an early access program (EAP). Since announcing the EAP, JetBrains received over 30,000 invitation requests, processed 800 issues, and resolved 500 issues. The company plans to officially launch Space in Fall 2020. According to JetBrains, the top requested features have been around automation, enhanced issue tracking and project management, and a standalone version. The company has implemented many of these requested features and is in the final testing stages for them.

People on the move
• The open-source management company FOSSA announced Scott Andress as its vice president of alliances. Andress has more than 20 years of enterprise channel leadership experience working with technology companies Cloudera, Hortonworks, CSC, and BEA Systems. The appointment of Andress follows the launch of the FOSSA Partner program, a new channel program designed to help expand partners’ open-source compliance and security management offerings.
• MongoDB has appointed Harsha Jalihal as its new chief people officer, where she will oversee the company’s human resources operations, workforce strategy, talent acquisition and development, and employee engagement. Additionally, the company announced Rishi Dave as its new chief marketing officer. He will be in charge of the marketing organization, including marketing operations, corporate communications, demand generation and field marketing, growth marketing and content marketing.
• Jason Schmitt is now the general manager of Synopsys’ Software Integrity Group. He has over 20 years of experience in security and enterprise product development and management. Previously, he was CEO of Aporeto and vice president and general manager of Enterprise Security Products at Hewlett Packard.

Datadog’s security and performance monitoring release
Datadog revealed its vision for bringing security and performance monitoring into a single platform in the form of updates and new product features for its cloud infrastructure monitoring platform. At its virtual DASH conference, the company announced Error Tracking, Incident Management, Compliance Monitoring and Continuous Profiler, rounding out its platform to make it easier for developers to find deep performance issues with their applications. For operations teams, the new Incident Management product enables debugging and issue resolution, and for security and compliance teams, full visibility into cloud environments gives them a means to ensure misconfigurations don’t create problems.

AWS Braket tackles quantum computing
Amazon Web Services has announced the general availability of Amazon Braket, which was designed to help developers and researchers get started with quantum computing, providing development tools, simulators, and access to a diverse set of quantum hardware. According to the company, Amazon Braket can be used to test and troubleshoot quantum algorithms on simulated quantum computers running on computing resources in AWS, helping developers verify their implementations.
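To make that simulator workflow concrete, here is a minimal, hedged sketch of testing a small circuit locally with the Braket Python SDK (the amazon-braket-sdk package); the circuit and shot count are illustrative assumptions, not part of AWS’s announcement.

```python
# Minimal sketch: run a two-qubit Bell circuit on Braket's local simulator.
# Assumes the amazon-braket-sdk package is installed; values are illustrative.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a Bell-pair circuit: Hadamard on qubit 0, then CNOT from 0 to 1.
bell = Circuit().h(0).cnot(0, 1)

# Run it on the local simulator before paying for managed simulators or QPUs.
device = LocalSimulator()
result = device.run(bell, shots=1000).result()

# Expect roughly equal counts of '00' and '11' if the circuit is correct.
print(result.measurement_counts)
```

The same circuit can later be submitted to managed simulators or quantum hardware by swapping the device object, which is where the diverse set of quantum hardware mentioned above comes in.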

Microsoft previews Visual Studio 2019 v16.7 and v16.8
Microsoft announced Preview 1 of its upcoming Visual Studio 2019 version 16.7 and version 16.8 releases. New features in Visual Studio 2019 v16.7 include Git integration with a new merge editor and easy conflict resolution, WPF design-time data, C++ support for 64-bit projects and debug builds, and additional IntelliSense functionality. Microsoft also stated that Visual Studio 2019 v16.7 is the next long-term servicing release. Meanwhile, Visual Studio 2019 v16.8 Preview 1 contains a new progress dialog, compiler support for lambdas in unevaluated contexts, which allows users to use lambdas in decltype specifiers, and new pattern matching features.

The Open Source Security Foundation launched
The Linux Foundation has announced a new collaboration effort to improve open-source security. The Open Source Security Foundation (OpenSSF) aims to consolidate industry efforts with targeted initiatives and best practices. According to the Linux Foundation, OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open-source security for all, as open-source software has become more pervasive in data centers, consumer devices, and services. In addition, projects such as The Linux Foundation’s Core Infrastructure Initiative (CII), which was created in response to the 2014 Heartbleed bug, and the Open Source Security Coalition, founded by the GitHub Security Lab, will be brought together under the new OpenSSF.

Veracode offers free editions of Security Labs
Security company Veracode has announced it will be offering a Security Labs Community Edition as a free-to-use alternative to its Enterprise Edition. This new edition will allow developers to hack and patch real applications, letting them learn new tactics and best practices in a controlled, safe environment. The company recently partnered with Enterprise Strategy Group to survey developers and security professionals. They found that 53% of organizations provide security training less than once per year, and 41% believed it was the responsibility of security analysts to educate developers on security.

New Relic to focus on full-stack observability
Longtime application performance monitoring provider New Relic is shifting gears, announcing that its product focus has shifted to observability with new updates to its New Relic One platform. According to the company, New Relic One has become an expanded observability platform comprised of three products: the Telemetry Data Platform, Full-Stack Observability and Applied Intelligence. APM is now “only one piece of the puzzle,” Bill Staples, chief product officer at New Relic, told SD Times about the release. “We’re radically reimagining our business, everything from our product packaging, our pricing, our user experience, and the way we talk about New Relic is all changing.”

Angular makes roadmap public
Angular has announced a new roadmap in order to update users on what the team is working on and projects it may be considering in the future. According to the team, this is the first formal roadmap it has published, and it will maintain it quarterly. “We see the roadmap release as a footprint for increasing the visibility of our engineering processes. This is the foundation for improving our collaboration with the community to grow Angular and move the Web forward together,” Jules Kremer, engineering manager at Google, wrote in a post. Currently, the roadmap includes projects from the backlog that are in progress or will be worked on soon. Additionally, the team will include work that “affects Angular’s own developer and projects that apply only to the internal development.”

Cloudflare’s new developer serverless solution
Cloudflare has unveiled a new serverless solution to compete with AWS Lambda. The release of Cloudflare Workers Unbound offers a serverless platform for developers to run complicated computing workloads across the Cloudflare network and pay only for what they use. According to the company, the new solution can save users up to 75% for the same workloads running on centralized serverless platforms such as Lambda.

Facebook Transcoder migrates legacy codebases
Facebook has developed a new neural transcompiler system, Transcoder, to make it easier to migrate codebases to other languages. Transcoder uses self-supervised training, which Facebook explained is important for translating between programming languages. According to the company, traditional supervised-learning approaches are dependent on large-scale parallel data sets for the languages, but these don’t exist for all languages. For example, there aren’t any parallel data sets from COBOL to C++ or C++ to Python. Transcoder’s approach only requires source code for one of the languages. It also doesn’t require knowledge of the languages.

MISIM to democratize software development
Researchers from Intel, the Massachusetts Institute of Technology and the Georgia Institute of Technology have announced a new machine programming system designed to detect code similarity. The Machine Inferred Code Similarity (MISIM) system is an automated engine capable of determining when two pieces of code, data structures or algorithms perform the same or similar tasks. According to the researchers, hardware and software systems are becoming increasingly complex. That, coupled with the shortage of programmers necessary to develop those hardware and software systems, has highlighted the need for a new development approach. The idea of machine programming, a term coined by Intel Labs and MIT, is to improve development productivity through the use of automated tools.

Apache Wicket 9
The ninth major release of the open-source Java web framework Apache Wicket is now available. Wicket 9 is built on top of Java 11 and designed to help web developers keep up to date with Java’s evolution. “The release of Java 9 has been a turning point in Java history which laid the foundation for the modern Java era. However, the magnitude of this change has discouraged many developers from leaving the safe harbor represented by Java 8. With Wicket 9 we finally have a fundamental tool to move on and bring our web applications into the new Java world,” the team wrote on the project’s website.


Enterprises require SREs to supplement Ops teams, rather than to replace them

BY JAKUB LEWKOWICZ

Since Google released its Site Reliability Engineering (SRE) book in 2016, the field has gained widespread attention. However, adopting SRE as defined by Google is not as applicable to most organizations as it may seem, according to Sanjeev Sharma, a principal analyst at Accelerated Strategies, who spoke at Catchpoint’s “SRE from Home” virtual event last week.

When it first created SRE, Google had a team of software developers work in operations with the goal of developing software to handle the vast majority of tasks that were assigned to the system administration and incident response teams. Sharma explained that instead of trying to replace Ops with site reliability engineers, organizations should be supplementing their Ops teams.

At Google, the operations teams have to handle incidents and outages on a constant basis. Because its data centers are fairly homogeneous, Google can automate a lot of the responses to incidents because they keep happening over and over again. On the other hand, most enterprises shouldn’t replace their current Ops teams and sys admins with software engineers, because enterprises run their data on custom hardware, which Ops teams have expertise in, Sharma explained.

“What [Google] needs to do is have the ability to dynamically shift workloads around so that when one hardware component is being serviced, they can easily pull the workload which is being run without interruption or loss of quality of service to another part of the data center as desired,” Sharma said. “This by itself would disqualify most organizations because most companies are made up of hardware that is not commodity hardware and in most cases is custom hardware or generic hardware that has been optimized for the tasks being run.”

The idea of SRE is to have software developers working in the Ops team to identify repetitive tasks and automate them so the actual Ops teams can focus on the outliers. Sharma added that replacing Ops teams with SREs is the first antipattern, because it gets rid of all of their existing data center expertise, which would be detrimental to the enterprise. Some of the repetitive tasks that are frequently automated include detection and remediation of outages, degradation, and quality-of-service efforts, according to Sharma.

“You still need to make your services and systems more reliable, so maybe you change your definition from site reliability engineering to service reliability engineering, but you don’t need to do it the way Google does,” Sharma said.

A second antipattern that is even more common is that organizations are taking their DevOps team and renaming it the SRE team. “First and foremost, you shouldn’t have a DevOps team. There’s no such thing as a DevOps team! DevOps is something everyone does. They have a different role in how DevOps is adopted and the way the tasks are performed but there shouldn’t be a new silo called DevOps team who is the intermediary between all of the stakeholders in your application delivery pipeline,” Sharma said, adding that if an organization has a DevOps team, it should be called DevOps coaching instead.

The third antipattern Sharma found is trying to adopt SRE principles by the book without first changing the culture. To handle this, organizations must first establish a reliability culture, which includes establishing the right service-level objectives, error budgets, using incident postmortems to understand why something went down, and hiring software engineers in Ops, according to Sharma.

“Adapt the SRE practices for your needs and make an enterprise-wide effort to change the culture and become a culture which focuses on systems and application reliability as everybody’s responsibility,” Sharma said. “A modern system is a constantly changing melange of hardware and software in a variable world. That’s why chaos engineering is very important because you can’t understand how a system behaves without interacting with it and without testing its boundaries and getting ready for outliers. And that is what at the end of the day reliability engineering is all about.”
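Sharma’s point about service-level objectives and error budgets lends itself to a quick back-of-the-envelope calculation. The sketch below is a hypothetical illustration, not something from the talk: it shows how an SLO target translates into an allowable amount of downtime over a rolling window, and the 99.9% target and 30-day window are assumptions for the example.

```python
# Hypothetical sketch: turn an SLO target into an error budget.
# The SLO value and window below are illustrative assumptions.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

if __name__ == "__main__":
    slo = 0.999  # a 99.9% availability objective
    budget = error_budget_minutes(slo, window_days=30)
    # 30 days = 43,200 minutes, so a 99.9% SLO leaves about 43.2 minutes
    # of error budget to "spend" on incidents, risky deploys or chaos tests.
    print(f"Error budget for {slo:.3%} over 30 days: {budget:.1f} minutes")
```

Once the window’s budget is spent, the common practice is to slow feature releases and put the remaining cycles into reliability work.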



GDPR, CCPA, and CPRA – Oh my!

BY JENNA SARGENT

Over the past few years, the world has seen the introduction of two major data protection regulations: the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The GDPR, which affects the European Union, has been in effect since May 2018, nearly two and a half years. The CCPA went into effect on Jan. 1, 2020. But what impacts have these major privacy regulations had on the industry?

One year after the GDPR had gone into effect, we took a look at what had happened in that first year. It turned out that enforcement had been slow to start, with compliance and enforcement of the law being a bit lax. As of February 2019, nine months after the law took effect, only 91 fines had been issued, and most of them were small. One of the major fines at the time had been for Google, which was fined €50 million (US$56 million) “for lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to the European Data Protection Board.

Enforcement seems to have picked up since then. As of August 2020, there have been 347 fines issued, totaling €175,944,866 (US$208,812,246), Privacy Affairs’ GDPR tracker shows. The largest fine to date is still the one that was issued to Google in 2019. The smallest fine issued was €90 (US$106), to a hospital in Hungary. Upcoming fines not included in the tracker include those for Marriott and British Airways, which are still in the proposal stage.

The CCPA went into effect in January, and California started enforcement July 1. As of late August, no fines have been issued yet. “For GDPR it took almost one year before the bigger fines started taking effect. Because of the fact that CCPA went into a stretch period with COVID, it was a kind of silent launch. In the next six months we will see more and more of the people, the activists trying to enact their rights and we will see more of the effects of this regulation,” said Jean-Michel Franco, director of product marketing for Talend, a data integration and quality platform provider.

GDPR and CCPA’s impacts
Both of these laws have had major impacts on how organizations handle the privacy of their data. According to a 2019 survey from LinkedIn on the most popular jobs, companies are investing in data privacy roles. This is especially true in Europe; Data Protection Officer was the number one fastest-growing job in France, Italy, the Netherlands, and Sweden. “What we see is that more and more companies are being conscious that they need to dedicate people and efforts on that,” said Franco.

There are a number of methods companies are using to improve data privacy. For example, data mapping is being used to make sure that whenever a request for removal from a user comes in, the company can make sure that all of the data is being aggregated properly, Franco explained. Another method companies are using is data anonymization. According to Franco, companies are realizing that managing personal data everywhere is costly and risky, so they determine which systems need that personal data and which systems would be functional with anonymized data.

Another side effect of privacy regulations like CCPA and GDPR is that companies are being smarter about how they take advantage of data. These regulations have caused a shift from the ‘opt-out’ consent historically used by marketers to an ‘opt-in’ approach, email marketing company SaleCycle explained in its Email Marketing in a Privacy Conscious World report. According to the report, 32% of marketers now require explicit consent for email marketing, 26% have introduced a stricter opt-in process, and 21% have implemented checkboxes.

“And so all of those companies, they implemented a program so that they reclaim customer opt-in, and then they also manage a little differently the marketing problem so that customers are more engaged,” said Franco. “So overall we see benefits. There are statistics that show that the typical return rates and ROI from a marketing standpoint have improved in Europe because of that. Even if there might be fewer customers that are targeted for each campaign because names went out of the database, the ones that remain are more engaged and the company makes more efforts to really customize the message, to [avoid] bombarding the customer with emails that are not targeted.”

According to Franco, increased data governance has made it easier for companies to start up new projects as well. “I’ve seen a couple of customers that say now that I’ve implemented my GDPR project, I have a single place where all my employee data is described. And this is not only useful for privacy. This is useful because when I want to launch a new project on HR, I directly know where all of my data is, the data is documented, and there are rules to get access to that.”
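Franco’s anonymization point is easy to picture with a small sketch. The example below is a hypothetical illustration, not something from Talend or the interview: it pseudonymizes the direct identifiers in a record with a salted hash so downstream systems can keep a stable but non-identifying key, and the field names and salt handling are assumptions.

```python
# Hypothetical sketch: pseudonymize direct identifiers before data leaves
# the system of record. Field names and the salt source are assumptions.
import hashlib
import os

SALT = os.environ.get("PSEUDONYM_SALT", "rotate-me")  # keep outside the dataset

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, salted SHA-256 token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Return a copy safe for analytics systems that don't need raw PII."""
    safe = dict(record)
    for field in ("email", "full_name"):   # direct identifiers
        if field in safe:
            safe[field] = pseudonymize(safe[field])
    safe.pop("ssn", None)                   # drop fields nobody downstream needs
    return safe

print(anonymize_record(
    {"email": "jane@example.com", "full_name": "Jane Doe",
     "ssn": "000-00-0000", "plan": "pro"}
))
```

A true anonymization pass would go further, dropping or generalizing quasi-identifiers as well, but even this kind of pseudonymization narrows which systems ever see raw personal data.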

Privacy in the UK after Brexit
In the time since the GDPR went into effect, the UK officially left the European Union. It currently still operates following the rules of the GDPR, but after December 31, 2020, it will adopt its own regulation, according to an ebook from Zivver, a company that specializes in secure communication. The new regulation, known as the “UK GDPR,” will mirror the GDPR as well as contain new additions. “This would be for practical reasons and to help ensure an adequacy decision with the EU, minimizing any impact on cross border data exchange between the regions in 2021 and beyond,” Zivver wrote in a whitepaper. It is anticipated to go into effect in 2021, Zivver added.


CPRA, or CCPA 2.0
When both the GDPR and CCPA came out, they were a major force in the industry for making companies rethink how they handled data privacy. They’ve already had significant impacts on the industry, as discussed above. Despite how powerful the CCPA was compared to existing data privacy laws in the United States, a new law might be coming to California to make the CCPA even more powerful.

The new law is called the California Privacy Rights and Enforcement Act of 2020 (CPRA). It hasn’t gone into effect yet, and it still needs to pass in November on the California ballot, though some experts, like Dan Clarke, president of products and solutions at technology services company IntraEdge, believe that it’s likely to pass. According to Clarke, CPRA will bring the California privacy laws closer to the GDPR. He said that a lot of people are actually referring to CPRA as “CCPA 2.0,” as it is the next evolution of that privacy law.

According to Jerry Ray, COO of data security company SecureAge, CCPA has a number of weaknesses. One is that users might have a hard time understanding what they’re opting into when the data is more technical and less attributable, like IP geolocation data, rather than something like a Social Security number. “Individuals will be hard-pressed to make a decision to opt out that reflects a full understanding of the potential utility and value of that data,” Ray said.

Another weakness is that it’s not easy to guess which companies actually need to comply with the CCPA, said Ray. Companies must meet one of the following criteria in order to be subject to the CCPA:
1. An annual revenue of $25 million or more
2. Collect data from 50,000 California consumers
3. Derive 50% or more of revenue from the sale of personal information

“What appears to be a small office for mortgage refinancing may be over the 50,000 user records sold threshold with many statewide outlets under different names,” said Ray. “And that leads to the darker side, all of those companies that don’t meet the requirements to be subject to CCPA but collect and trade data as a normal course of business, from boutique job recruitment sites to payday loan offices. Billions of electronic records are independently generated by small and medium-sized enterprises that contribute to millions of personal data repositories that can be breached without any of the sanctions or remedies within CCPA being available to the victims.”

CPRA expands upon the CCPA and adds new rights that allow consumers to stop businesses from using sensitive information, safeguards children’s privacy by tripling fines, extends the exemption for employment data, and establishes the California Privacy Protection Agency, Clarke explained.

Franco believes the two major new things it adds are:
1. More ability for the consumer to control their data and have specific rights on what they can do with their data
2. An extension of the scope of the CCPA, not only to consumers, but also to customers and employees

According to Franco, the CCPA was heavily focused on protection for data monetization, but missed some things like the right to correct data or opt out of processing. “So CPRA gets closer to GDPR with respect to the rights that the consumer has on the data that the company has captured from him,” said Franco.

Clarke believes that the most significant part of the law is the forming of a separate agency that is responsible for writing operating rules and levying fines. According to Clarke, the attorney general’s office currently has a budget of $1.5 million to enforce the CCPA. He explained that this is enough money to pursue about three large-scale lawsuits with roughly five attorneys working on them. With a separate agency, there could be around 25 attorneys whose main job is just to enforce the CPRA. “I think this is very, very significant in terms of the potential impact of the CPRA,” said Clarke.



Privacy practices and breaches
Osano recently released a report that examined the relationship between poor privacy practices and data breaches. It evaluated 11,000 of the top websites, based on Alexa Internet rankings. It found that 2.77% of websites have reported a data breach in the last 15 years. Unsurprisingly, websites that had poor data privacy practices were more likely to experience a data breach. Websites in the top quartile have made proactive efforts to be transparent about data practices, while websites in the bottom quartile have “extremely outdated privacy notices or no privacy notice at all.” Websites in the top quartile had a 1.86% chance of suffering a data breach; websites in the bottom quartile had a 3.36% chance. In addition to being more likely to be breached, organizations in the bottom quartile suffer more severe data loss when breached. Organizations in the top, second, and third quartiles lose an average of 7.7 million records per breach. When an organization in the bottom quartile suffers a breach, it loses an average of 54.4 million records — a 7x increase.

What’s next?
Going forward, Franco predicts that AI ethics will be the next big aspect of data that starts to see more regulation. “I use the analogy with life science. When a new medicine comes into the market, you have to do some trials,” said Franco. “You have to prove that the new thing doesn’t have some adverse effects, and this clinical trial is heavily regulated. So it looks like the European Union is taking this kind of direction for AI and for making sure that the AI algorithms, when they apply to a person, run in a fair way and in a controlled and ethical way … That is pretty interesting and probably the next step because we see more and more of the power of data that can automatically recognize people with their face and everything and that can automatically trigger some decision. And so this is becoming a big thing in terms of how a company must govern the way that the data is used to automate decisions.”

In February, the EU announced new objectives to shape its digital future, one of which was more strictly controlling its use of AI. “Clear rules need to address high-risk AI systems without putting too much burden on less risky ones. Strict EU rules for consumer protection, to address unfair commercial practices and to protect personal data and privacy, continue to apply. For high-risk cases, such as in health, policing, or transport, AI systems should be transparent, traceable and guarantee human oversight. Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys. Unbiased data is needed to train high-risk systems to perform properly, and to ensure respect of fundamental rights, in particular non-discrimination,” the European Commission wrote in a statement.

In the United States, there is also a strong need for stricter regulation on the use of personal data by AI, particularly when used for applications like facial recognition. Facial recognition technology is currently used by government agencies in the U.S., including at DMVs in several states, at airports, and by police, Recode reports. Tech giants like Microsoft have long been vocal in their support for stronger regulations to avoid misuse by governments and companies, and Washington state (where Microsoft is headquartered) passed facial recognition regulations earlier this year. Washington’s law is the first facial recognition law in the US that includes protections for civil liberties and human rights.

EU-US Privacy Shield decision invalidated
On July 16, 2020, the EU-US Privacy Shield was deemed invalid by the Court of Justice of the European Union, essentially the EU’s Supreme Court. The EU-US Privacy Shield “protects the fundamental rights of anyone in the EU whose personal data is transferred to the United States for commercial purposes. It allows the free transfer of data to companies that are certified in the US under the Privacy Shield,” the European Commission stated. The framework includes strong data protection obligations, safeguards on U.S. government access to data, effective protection and redress for individuals, and an annual joint review by EU and U.S. officials to monitor the arrangement.

The EU-US Privacy Shield was created as a result of a complaint from Austrian national Maximilian Schrems. Some of Schrems’ data was transferred by Facebook Ireland to Facebook servers in the United States to undergo processing. In his complaint, Schrems claimed that he did not believe the United States offered sufficient protection for his data against public authorities. This complaint was rejected by the High Court in Ireland in 2015, but afterwards, the Irish supervisory authority asked Schrems to reformulate his complaint. In the new complaint, he claimed that “the United States does not offer sufficient protection of data transferred to that country.” He also requested suspension of future transfers of his data to the United States. When brought to court this time, Decision 2016/1250 was adopted, otherwise known as the EU-US Privacy Shield.

In invalidating the decision, the court claims that “the limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country... are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law.”



Mainframe for DevOps puts an end to silos

BY CHRISTINA CARDOZA

“After the last nuclear bomb goes off and the earth cools, and the cockroaches come back out of the ground, they will all be dragging mainframes with them because it’s the only platform that will be able to withstand that,” Thomas Klinect, a senior research director at Gartner, told SD Times.

While Klinect was joking when he made that comment, the point he was trying to stress was that mainframes aren’t going anywhere anytime soon and should stop being looked at as legacy technology. There is a misconception that mainframes are behemoth systems that are slow and require a lot of support, but with the hardware and software advancements made over the last several decades, mainframes are now modern, efficient and fast platforms, Klinect explained.

The hurdle that still stands in the way of bringing mainframes into modern, digital transformations is the application development and delivery aspect. Traditionally, updates to applications on the mainframe are difficult and release cycles are too long — so organizations have left mainframes out of modern development and delivery initiatives like Agile and DevOps, causing gaps in the development life cycle, Klinect explained. In addition, a report from Forrester found that because mainframe developers struggle with compliance processes, lack of application modularity, manual processes and inconsistent dev times when dealing with mainframe apps, their ability to innovate is hindered.

But as organizations try to move faster and be competitive, it is becoming apparent that existing mainframe investments can no longer be ignored as part of the digital journey. “Something that most businesses have learned over the last 20 years is they have to evolve the way they are doing application development,” said Sam Knutson, vice president of product management at Compuware. “In order to do mainframe development and meet the needs of customers, we have to work in a different way.”

Forrester also found a majority of organizations are looking to modernize the mainframe, with 82% citing DevOps as a critical or high-priority task. Additionally, Compuware is currently conducting a survey on the state of mainframes, and preliminary results found 78% of respondents would find it useful if they could update mainframe applications more frequently than they currently do, 56% are using DevOps on the mainframe, and respondents who have adopted DevOps see a return on investment in one year or less.

Why should you bring mainframe into DevOps initiatives?
According to Knutson, DevOps is one of the biggest driving forces that has come into the application development space because it changes how enterprises think about and deliver software. Customers want work delivered in small batches, shorter release cycles and capabilities as soon as they are ready — exactly what DevOps promises.

By not including mainframes in the DevOps conversation, the mainframe space has become siloed and left out of modern tooling, technologies and investments being made — which results in the inability to move faster or remain competitive, according to Eddie Houghton, director of product management for mainframe and enterprise solutions at Micro Focus. “There is no real customer loyalty now. So to be able to actually differentiate yourself you have to be able to offer services faster and that is the thing that is underpinning and driving that digital transformation in terms of those services you can actually offer out to your customers so you can remain competitive,” said Houghton.

Mainframe modernization should not be confused with lift and shift or rip and replace, Knutson stressed. There are decades and decades of work that have gone into mainframes, and trying to recreate it by replatforming an application is a waste of time and will make a company stand still as competitors march forward. Additionally, Houghton added that trying to lift and shift mainframes that essentially have 200 to 400 million lines of code is an absolutely colossal undertaking and introduces a huge amount of risk and pressure to do it in a desirable time frame.

“These are services that are providing real value to you. The lower-risk approach is to say what are the things that are stopping me from moving quickly. Bringing mainframe development into a more Agile framework allows you to actually accelerate the delivery of change from cutting the code through to actually deploying it to production,” he said.

The way to go about doing that is to understand how you currently develop today, what tools you are using, where your challenges are and what the culture of the business is. According to Houghton, organizations can implement tools to understand what is on the mainframe and how the data, code and solutions are connected. Once that is established, they can start making decisions about what to do with those applications.

A lot of advancements have also been made in the mainframe DevOps tool space over the last couple of years to make it easier to achieve mainframe agility. For instance, IBM, developer of the most commonly used mainframe servers, has worked over the last couple of years to provide a common developer experience for development, test, automation and integration. Mainframes have also been updated with a modern UI experience and now support web interfaces as well as open-source tools. There has also been an increase in hybrid applications such as mobile front-ends and cloud-based integrations, making solutions more accessible on the mainframe, according to Peter Wassel, director of product management for the Broadcom Mainframe Software Division.

Wassel added that we’ve now reached the tipping point where closed, proprietary tools and waterfall processes are no longer acceptable. “Businesses running mainframes need to break through the silos, and unlock their assets for competitive advantage, especially against digital natives whose developers already use DevOps,” said Wassel.

The key to success is to make sure DevOps for the mainframe is no different from any other platform developers are using. With modern tools available, the same tools, technologies and languages can be used across the mainframe, cloud and other initiatives, Wassel explained. “Mainframe people, processes, and technology are being incorporated into already established enterprise DevOps initiatives, rather than designing a separate mainframe DevOps,” Wassel said. “By creating this bridge, you eliminate the notion of the mainframe as being ‘separate’ from the cloud, and enable businesses to modernize in place.”

How to attract talent
One of the biggest challenges, but also benefits, of bringing mainframes to DevOps is that it brings to light the mainframe talent shortage. Current mainframe developers are older and have worked in waterfall environments their entire careers, according to Gartner’s Klinect. Learning new tools and capabilities doesn’t come as easily to them and they are not accustomed to change, so it’s important to start looking for ways to bring new talent in.

The next generation of developers coming out of college are used to working with Agile development, continuous integration, and modern IDEs, Micro Focus’ Houghton explained. They aren’t going to want to come into an environment that is antiquated and difficult to learn.

The reality is that the mainframe is a very crucial and valuable component of the organization; it just needs to keep pace with the changes happening, according to Lisa Dyer, vice president of product management at Ensono. “You need skills not just for today, but for tomorrow. We have this whole giant pool of developers who if we give them modern IDEs it just makes mainframe code another language for my developer and lets them use whatever tools they want,” Dyer explained.

Broadcom’s Wassel suggested focusing on finding people who are the right “fit” for the company, with the right intellectual aptitude and drive, rather than going into the hiring process with a checklist of technical skills and experience. “Employers should think of the talent they bring on board as an investment that will span multiple years, rather than as a quick fix to address what may be a short-term need. This investment calls for training,” Wassel explained.

There are also tools such as zAdviser designed to help organizations understand where individual developers are struggling and provide training for that individual, rather than just trying to apply broad training across the organization and make them magically Agile, Klinect explained. The point to remember is that change doesn’t just happen at the developer level; it’s a culture change that needs to happen throughout the entire enterprise.

The next phase of mainframe modernization
As the cloud takes center stage in digital transformation efforts, organizations will have to learn how to incorporate cloud initiatives with mainframe initiatives.

“As enterprises move mission-critical workloads to the cloud, a key step along the way will be modernization. Modernization shouldn’t be confused with moving away from the mainframe – rather, the mainframe is a part of most of our clients' hybrid cloud journeys by helping our clients modernize their applications – not the platform. As clients look toward modernizing applications for hybrid cloud, it will be key for them to leverage a set of common, standardized tools to enable DevOps across the enterprise,” said Rosalind Radcliffe, distinguished engineer and chief architect of DevOps for Enterprise Systems at IBM.

However, Ensono’s Dyer warns about getting too caught up with the promise of the cloud. She explained there will be a lot of conversations and bias toward moving stuff off the mainframe into the cloud, but that’s because cloud vendors want you to use their platforms. Dyer isn’t saying that mainframe applications shouldn’t be moved to the cloud, but she says not all mainframe apps need to get off the mainframe. Companies have to utilize a hybrid IT platform today and figure out what are the right platforms for the right workloads, and how to establish frictionless connectivity and ways of working.

Systems of engagement such as mobile and web apps may not be well-suited to the mainframe and may need a more hyperscale platform, but will still need connectivity into the systems of record — which still exist on the mainframe, Dyer explained. “Many services these days rely on a mobile front-end, but still benefit enormously from the strength of the mainframe to deliver the optimum end user experience marked by scale, resilience, security, availability, and data protection,” Broadcom’s Wassel added.

However, Gartner’s Klinect says mainframe and the cloud is like putting a round peg in a square hole. While mainframes are very similar to the cloud in terms of paying for CPU usage and supporting RESTful interfaces and open-source tools, adopting mainframe into the cloud just isn’t there yet. He explained the licensing model for software on the mainframe isn’t built for the cloud yet, but it is evolving and changing as more open source becomes available on the platform, so he expects the problem will be worked out in a couple of years. “Once those two models merge, and mainframe becomes more flexible and the realization cloud charges for the same process cycles that a mainframe would, I believe we will see a blurring of the lines between what’s a mainframe and what’s a distributed box,” he said.



The path to mainframe agility
When Compuware, now a BMC company, came under new leadership in 2014, the company embarked on a mission to realign as an Agile business. It went from delivering customer requirements every 12 to 18 months to a publicly faced quarterly roadmap deeply engaged with customers. Once the company started to see some success, customers went from asking “Why should I do Agile development and DevOps on the mainframe?” to “How do I do Agile and DevOps on the mainframe?” according to Sam Knutson, vice president of product management at Compuware. To help others reap the benefits, Compuware has provided 10 steps to achieving mainframe agility:

1. Determine your current and desired state: Be clear about what you are trying to achieve and what the transformation will look like, then map out a plan.

2. Modernize your mainframe development environment: Move away from green-screen ISPF environments that require specialized skills and knowledge into more modern development spaces. This will open up the talent pool of developers available to work on the mainframe.

3. Adopt automated testing: In a recent report, the company found that test automation is critical for accelerating innovation, with 90% of IT leaders saying automating more test cases is the single most important factor in their success. Additionally, 92% of respondents said mainframe teams spend more time testing code because of the growing application environment complexity. Automated testing will free up developer time and provide fast feedback on the mainframe.

4. Provide graphical, intuitive visibility into existing code and data structure: Developers need a quick and easy way to understand the application logic, interdependencies, data structures, data relationships and runtime behaviors to make sense of what’s going on.

5. Empower developers at all skill and experience levels: There are not a lot of new mainframe developers coming out of college, so the next generation of developers must be coached and trained if companies want to improve development performance and productivity on the mainframe.

6. Initiate training in and adoption of Agile processes: Once modern development environments are set up, teams can start training and shift from a waterfall to an Agile approach. Mainframe developers should be paired with Agile developers so they can learn from each other.

7. Leverage operational data across the development, testing and production life cycle: Developers have to understand how the application is behaving in order to improve it, and they can do that by looking at the operational data continuously and measuring progress.

8. Deploy true Agile/DevOps-enabling source code management: Traditional source code management environments were designed for waterfall. A modern, Agile SCM environment should provide automation, visibility, rules-based workflows, and integration with other tools.

9. Automate deployment of code into production: Getting new code out quickly and reliably into production requires automating and coordinating deployments in a synchronized manner and being able to pinpoint any issues as they occur.

10. Enable coordinated cross-platform continuous delivery: Mainframes should not be left out of the pipeline. The mainframe should become just another platform that can be updated quickly and adapt as necessary.



DEVOPS WATCH

Harness acquires CI company Drone.io

BY JAKUB LEWKOWICZ

Continuous delivery-as-a-service provider Harness announced that it acquired Drone.io, the creator of the Drone open-source project. Drone is a continuous delivery system built on container technology. With this new acquisition, Harness hopes to enable DevOps engineers to build, test and deploy software on demand, without delay or downtime.

Drone.io plugins are containerized and standardized, and designed to reduce the time and cost of continuous integration by five to 10 times. “Instead of requiring scripts, continuous integration pipelines can be declared and managed as code in Git, which means they have standard syntax, require less work and are easy for engineers to create, use, maintain and troubleshoot,” Harness wrote in its announcement.

Harness also stated that it will continue to invest in, innovate on and support Drone.io’s open-source community, and that it currently has several internal projects under consideration for open source. Drone will continue to be open source and free for the community and will become the Harness CI Community Edition. Drone Enterprise will be sold as Harness CI Essentials Edition, and later this year, the company will add Harness CI Enterprise Edition.

“Today’s software developers are under incredible pressure to create and deploy new applications on-demand, yet a lack of automation means their current process is highly manual, time-consuming and error-prone,” said Jyoti Bansal, CEO and co-founder of Harness. “With the acquisition of Drone.io, Harness will continue to simplify software delivery for developers, and will fully embrace, and commit to, the open source community so that we can accelerate the speed of software delivery together.”

Jenkins graduates from the CD Foundation

BY JAKUB LEWKOWICZ

The Continuous Delivery Foundation announced that Jenkins is the first project to graduate by demonstrating growing adoption, an open governance process, feature maturity, and a strong commitment to community, sustainability, and inclusivity. Jenkins is an open-source automation server and CI/CD system that provides the ability to connect all tools and customize to fit any integration requirements. It was built on the Java Virtual Machine (JVM) and offers more than 1,500 plugins to further automation. The project also released a new open public roadmap with a focus on user experience and cloud platforms. This includes Jenkins Kubernetes Operator and an initial project on the integration of Jenkins with the Tekton Triggers pipeline.

“Jenkins is the most prolific CI/CD tool and has been a catalyst for transforming the entire software delivery industry,” said Tracy Miranda, the CD Foundation governing board chair and director of open-source community at CloudBees. “With Jenkins graduating from the CDF it sets the stage for act two of the Jenkins project while simultaneously modelling what a well-adopted, well-governed and well-run open source software project looks like for other open source projects to model.”

Jenkins has also recently achieved the Core Infrastructure Initiative (CII) best practices compliance badge, set up an official bug triage team and created an adopters page to highlight users. Upon reaching graduation status, Jenkins updated its Code of Conduct and updated its governance and user documentation to align with recommendations defined by the CDF, according to the foundation. “From the very beginning of the Jenkins project, creating a stable neutral home for Jenkins has been a key goal for the project,” said Kohsuke Kawaguchi, creator of Jenkins and co-CEO at Launchable, Inc. “We believed this is the only way for the community to collectively own this foundational project that played a key role in defining CI/CD. I congratulate the team and the community for hitting the key milestone.”



presents

Next year’s date: March 10, 2021

Join your peers for a day of learning
Virtual VSM DevCon is a one-day, digital conference examining the benefits of creating and managing value streams in your development organization. At Virtual VSM DevCon, you will learn how to apply value stream strategies to your development process to gain efficiencies, improve quality and cut costs.

Taught by leaders on the front lines of Value Stream

Highlights from last year’s sessions:
• An examination of the VSM market
• What exactly is value?
• Slow down to speed up: Bring your whole team along on the VSM journey
• Why developers reject Value Stream Management — and what to do about it
• You can measure anything with VSM. That’s not the point
• Who controls the flow of work?
• Tying DevOps value streams to business success
• Making VSM actionable
• Value Stream Mapping 101
• How to integrate high-quality software delivery into the Value Stream
• Transitioning from project to product-aligned Value Streams
• The 3 Keys to Value Stream infrastructure automation

REGISTER FOR FREE TODAY! https://events.sdtimes.com/valuestreamdevcon


There’s more to testing than

BY LISA MORGAN

R

apid innovation and the digitalization of everything is increasing application complexity and the complexity of environments in which applications run. While there’s an increasing emphasis on continuous testing as more DevOps teams embrace CI/CD, some organizations are still disproportionately focused on functional testing. “Just because it works doesn’t mean it’s a good experience,” said Thomas Murphy, senior director analyst at Gartner. “If it’s my employee, sometimes I make them suffer but that means I’m going to lose productivity and it may impact employee retention. If it’s my customers, I can lose retention because I did not meet the objectives in the first place.” Today’s applications should help facilitate the organization’s business goals while providing the kind of experience end users expect. To accomplish that, software teams must take a more holistic approach to testing than they have done traditionally, which involves more types of tests and more roles involved in testing. “The patterns of practice come from architecture and the whole idea of designing patterns,” said Murphy. “The best practices 10 years ago are not best practices today and the best practices three years ago are probably not the best practices today. The leading practices are the things Google, Facebook and Netflix were doing three to five years ago.” Chris Lewis, engineering director at technology consulting firm DMW Group, said his enterprise clients are seeing the positive impact a test-first mindset has had over the past couple of years. “The things I’ve seen [are] particularly in the security and infrastructure world where historically testing hasn’t been something that’s been on the agenda. Those people tend to come from more traditional, typically full-stack software development backgrounds and they’re now wanting more control of the development processes end to end,” said Lewis. “They started to inject testing thinking


018-22_SDT039.qxp_Layout 1 8/21/20 9:10 AM Page 19

www.sdtimes.com

Nancy Kastl, executive director of testing services at digital transformation agency SPR, said a philosophical evolution is occurring regarding what to test, when to test and who does the testing.

"Regarding what to test, the movement continues away from both manual [and] automated UI testing methods and toward API and unit-level testing. This allows testing to be done sooner, more efficiently and fosters better test coverage," said Kastl.

"When" means testing earlier and throughout the SDLC. "Companies are continuing to adopt Agile or improve the way they are using Agile to achieve its benefits of continuous delivery," said Kastl. "With the current movement to continuous integration and delivery, the 'shift-left' philosophy is now embedded in continuous testing."

However, when everyone's responsible for testing, arguably nobody's responsible, unless it's clear what testing should be done, by whom and when. Testing can no longer be the sole domain of testers and QA engineers, because finding and fixing bugs late in the SDLC is inadequate, unnecessarily costly and untenable as application teams continue to shrink their delivery cycles. As a result, testing must necessarily shift left to developers and right to production, involving more roles.

"This continues to be a matter of debate. Is it the developers, testers, business analysts, product owners, business users, project managers, [or] someone else?" said Kastl. "With an emphasis on test automation requiring coding skills, some argue for developers to do the testing beyond just unit tests."

Meanwhile, the scope of tests continues to expand beyond unit, integration, system and user acceptance testing (UAT) to include security, performance, UX, smoke, and regression testing. Feature flags, progressive software delivery, chaos engineering and test-driven development are also considered part of the testing mix today.
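The shift Kastl describes, away from brittle UI scripts and toward API- and unit-level checks, is easy to picture in code. Here is a minimal sketch of an API-level test; the handler, route and JSON shape are hypothetical stand-ins for whatever service is actually under test:

```go
package api

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

// An API-level test exercises behavior below the UI: call the endpoint,
// decode the response, assert on the contract.
func TestGetOrderReturnsTotal(t *testing.T) {
	srv := httptest.NewServer(orderHandler()) // orderHandler is the service under test (hypothetical)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/orders/42")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("got status %d, want 200", resp.StatusCode)
	}

	var order struct {
		ID    string `json:"id"`
		Total int    `json:"total_cents"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&order); err != nil {
		t.Fatalf("bad JSON: %v", err)
	}
	if order.ID != "42" || order.Total <= 0 {
		t.Errorf("unexpected order %+v", order)
	}
}
```

Because a test like this runs in milliseconds and needs no browser, it can run on every commit, which is what makes the earlier-and-more-often testing Kastl describes practical.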

Security goes beyond penetration testing

Organizations irrespective of industry are prioritizing security testing to minimize vulnerabilities and manage threats more effectively.

"Threat modeling would be a starting point. The other thing is that AI and machine learning are giving me more informed views of both code and code quality," said Gartner's Murphy. "There are so many different kinds of attacks that occur and sometimes we think we've taken these precautions but the problem is that while you were able to stop [an attack] one way, they're going to find different ways to launch it, different ways it's going to behave, different ways that it will be hidden so you don't detect it."

In addition to penetration testing, organizations may use a combination of tools and services that can vary based on the application. Some of the more common ones are static and dynamic application security testing, mobile application security testing, database security testing, software composition analysis and appsec testing as a service.

DMW Group's Lewis said his organization helps clients improve the way they define their compliance and security rules as code, typically working with people in conventional security architecture and compliance functions. "We get them to think about what the outcomes are that they really want to achieve and then provide them with expertise to actually turn those into code," said Lewis.

SPR's Kastl said continuous delivery requires continuous security verification to provide early insight into potential security vulnerabilities. "Security, like quality, is hard to build in at the end of a software project and should be prioritized throughout the project life cycle," said Kastl. "The Application Security Verification Standard (ASVS) is a framework of security requirements and controls that define a secure application when developing and testing modern applications."

Kastl said that includes:
• Adding security requirements to the product backlog with the same attention to coverage as the application's functionality
• A standards-based test repository that includes reusable test cases for manual testing and to build automated tests for Level 1 requirements in the ASVS categories, which include authentication, session management, and function-level access control
• In-sprint security testing that's integrated into the development process while leveraging existing approaches such as Agile, CI/CD and DevOps
• Post-production security testing that surfaces vulnerabilities requiring immediate attention before opting for a full penetration test
• Penetration testing to find and exploit vulnerabilities and to determine whether previously detected vulnerabilities have been fixed
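What "security rules as code" looks like in practice varies by organization, but the smallest useful version is often just a test that encodes one control and runs in the pipeline. The sketch below is illustrative only; appHandler and the required header set stand in for whatever service and standard actually apply:

```go
package compliance

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// One "rule as code": every response from the service must carry the
// security headers our standard requires. The handler and header list are
// hypothetical; real rules come from an ASVS-derived backlog.
func TestSecurityHeadersPresent(t *testing.T) {
	required := []string{
		"Content-Security-Policy",
		"Strict-Transport-Security",
		"X-Content-Type-Options",
	}

	srv := httptest.NewServer(appHandler()) // appHandler is the service under test (hypothetical)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/login")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	for _, h := range required {
		if resp.Header.Get(h) == "" {
			t.Errorf("missing required header %s", h)
		}
	}
}
```

A failing check here blocks the build the same way a failing unit test does, which is what keeps verification continuous rather than end-of-project.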

Performance testing beyond load testing

Load testing ensures that the application continues to operate as intended as the workload increases, with an emphasis on the upper limit. By comparison, scalability testing considers both minimum and maximum loads. In addition, it's wise to test outside of normal workloads (stress testing), to see how the application performs when workloads suddenly spike (spike testing) and how well a normal workload endures over time (endurance testing).



"Performance really impacts people from a usability perspective. It used to be if your page didn't load within this amount of time, they'd click away, and then it wasn't just about the page, it was about the performance of specific elements that could be mapped to shopping cart behavior," said Gartner's Murphy.

For example, GPS navigation and wearable technology company Garmin suffered a multi-day outage when it was hit by a ransomware attack in July 2020. Its devices were unable to upload activity to Strava's mobile app and website for runners and cyclists. The situation underscores the fact that cybersecurity breaches can have downstream effects. "I think Strava had a 40% drop in data uploads. Pretty soon, all this data in the last three or four days is going to start uploading to them so they're going to get hit with a spike of data, so those types of things can happen," said Murphy.

To prepare for that sort of thing, one could run performance and stress tests on every build, or use feature flags to compare performance with the prior build. "By measuring the response time for a single user performing specific functions, these metrics can be gathered and compared for each build of the application," said Kastl. "This provides an early warning of potential performance issues. These baseline performance tests can be integrated with your CI/CD pipeline for continuous monitoring of the application's performance."
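A per-build baseline of the kind Kastl describes does not require a full load-testing rig; a single-user timing check in the pipeline is enough to flag regressions early. The endpoint and budget below are hypothetical:

```go
package perf

import (
	"net/http"
	"testing"
	"time"
)

// A baseline single-user response-time check that can run on every build.
// The endpoint and threshold are illustrative; tune them per application.
func TestCheckoutBaselineResponseTime(t *testing.T) {
	const endpoint = "https://staging.example.com/api/checkout" // hypothetical URL
	const budget = 800 * time.Millisecond                       // agreed baseline

	start := time.Now()
	resp, err := http.Get(endpoint)
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	elapsed := time.Since(start)
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status: %d", resp.StatusCode)
	}
	if elapsed > budget {
		// Failing the build surfaces a regression before UAT.
		t.Errorf("response took %v, exceeding the %v baseline", elapsed, budget)
	}
}
```

Full load, stress, spike and endurance runs still belong in a dedicated environment; the point of the baseline is simply to catch a slowdown on the build where it was introduced.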

Mobile and IoT devices, such as wearables, have increased the need for more comprehensive performance testing, and there's still a lot of room for improvement. "As the industry has moved more to cloud-based technology, performance testing has become more paramount," said Todd Lemmonds, QA architect at health benefits company Anthem, a Sauce Labs customer. "One of my current initiatives is to integrate performance testing into the CI/CD pipeline. It's always done more toward UAT which, in my mind, is too late."

To effect that change, the developers need to think about performance and how the analytics need to be structured in a way that allows the business to make decisions. The artifacts can be used later during a full systems performance test.

"We've migrated three channels on to cloud, [but] we've never done a performance test of all three channels working at capacity," said Lemmonds. "We need to think about that stuff and predict the growth pattern over the next five years. We need to make sure that not only can our cloud technologies handle that but what the full system performance is going to look like. Then, you run into issues like all of our subsystems are not able to handle the database connections so we have to come up with all kinds of ways to virtualize the services, which is nothing new to Google and Amazon, but [for] a company like Anthem, it's very difficult."

DMW Group's Lewis said some of his clients have ignored performance testing in cloud environments since cloud environments are elastic. "We have to bring them back to reality and say, 'Look, there is an art form here that has significantly changed and you really need to start thinking about it in more detail,'" said Lewis.

UX testing beyond UI and UAT

While UI and UAT testing remain important, UI testing is only a subset of what needs to be done for UX testing, and traditional UAT happens late in the cycle. Feature flagging helps by providing early insight into what's resonating and not resonating with users while generating valuable data. There's also usability testing, including focus groups, session recording, eye tracking and quick one-question in-app surveys that ask whether the user "loves" the app or not.

One area that tends to lack adequate focus is accessibility testing, however. "More than 54 million U.S. consumers have disabilities and face unique challenges accessing products, services and information on the web and mobile devices," said SPR's Kastl. "Accessibility must be addressed throughout the development of a project to ensure applications are accessible to individuals with vision loss, low vision, color blindness or learning loss, and to those otherwise challenged by motor skills."

"The first step to ensuring an application's accessibility is to include ADA Section 508 or WCAG 2.1 Accessibility standards as requirements in the product's backlog along with functional requirements," said Kastl.

Non-compliance with an accessibility standard on one web page tends to be repeated on all web pages or throughout a mobile application. To detect non-compliant practices as early as possible, wireframes and templates for web and mobile applications should be reviewed for potentially non-compliant design components, Kastl said. Corrective action should be taken by the team prior to the start of application testing. Then, during in-sprint testing activities, assistive technologies and tools such as screen readers, screen magnification and speech recognition software should be used to test web pages and mobile applications against accessibility standards. Automated tools can detect and report non-compliance.

Gartner's Murphy said organizations should be monitoring app ratings and reviews as well as social media sentiment on an ongoing basis. "You have to monitor those things, and you should. You're feeding stuff like that into a system such as Statuspage or PagerDuty so that you know something's gone wrong," said Murphy. "It may not just be monitoring your site. It's also monitoring those external sources because they may be the leading indicator."




Engineering practices that advance testing
BY LISA MORGAN

Testing practices are shifting left and right, shaping the way software engineering is done. In addition to the many types of tests described in this Deeper Look, test-driven development (TDD), progressive delivery and chaos engineering are also considered testing today.

TDD

TDD has become popular with Agile and DevOps teams because it saves time. Tests are written from requirements, in the form of use cases and user stories, and then code is written to pass those tests. TDD further advances the concept of building smaller pieces of code, and the little code-quality successes along the way add up to big ones. TDD builds on the older concept of extreme programming (XP).

"Test-driven development helps drive quality from the beginning and [helps developers] find defects in the requirements before they need to write code," said Thomas Murphy, senior director analyst at Gartner.

Todd Lemmonds, QA architect at health benefits company Anthem, said his team is having a hard time with it because they're stuck in an interim phase. "TDD is the first step to kind of move in the Agile direction," said Lemmonds. "How I explain it to people is you're basically focusing all your attention on [validating] these acceptance criteria based on this one story. And then they're like, OK, what tests do I need to create and pass before this thing can move to the next level? They're validating technical specifications, whereas [acceptance test-driven development] is validating business specifications, and that's what's presented to the stakeholders at the end of the day."
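In code, the rhythm is: write the failing test from the story's acceptance criteria, then write just enough implementation to make it pass. The cart example below is a hypothetical illustration of that order of work, not anything from the teams quoted here:

```go
// cart_test.go (written first), straight from the acceptance criteria:
// "an empty cart totals zero; a cart totals the sum of its item prices."
package cart

import "testing"

func TestCartTotal(t *testing.T) {
	c := &Cart{}
	if got := c.Total(); got != 0 {
		t.Fatalf("empty cart: got %d, want 0", got)
	}
	c.Add(Item{Name: "book", PriceCents: 1250})
	c.Add(Item{Name: "pen", PriceCents: 300})
	if got := c.Total(); got != 1550 {
		t.Fatalf("got %d, want 1550", got)
	}
}
```

```go
// cart.go (written second): just enough code to make the test pass.
package cart

type Item struct {
	Name       string
	PriceCents int
}

type Cart struct{ items []Item }

func (c *Cart) Add(i Item) { c.items = append(c.items, i) }

func (c *Cart) Total() int {
	sum := 0
	for _, i := range c.items {
		sum += i.PriceCents
	}
	return sum
}
```

The distinction Lemmonds draws is about where the criteria come from: here they are technical, while in acceptance test-driven development the same loop starts from business-facing specifications.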

Progressive Software Delivery

Progressive software delivery is often misdefined by parsing the words: the thinking is that if testing is moving forward (becoming more modern or maturing), then it's "progressive." In fact, progressive delivery is something Agile and DevOps teams with a CI/CD pipeline use to further their mission of delivering higher-quality applications faster, applications that users actually like. It can involve a variety of tests and deployments, including A/B and multivariate testing using feature flags, blue-green and canary deployments, as well as observability. The "progressive" part is rolling out a feature to progressively larger audiences.

"Progressive software delivery is an effective strategy to mitigate the risk to business operations caused by product changes," said Nancy Kastl, executive director of testing services at digital transformation agency SPR. "The purpose is to learn from the experiences of the pilot group, quickly resolve any issues that may arise and plan improvements for the full rollout."

Other benefits Kastl perceives include:
• Verification of correctness of permissions setup for business users
• Discovery of business workflow issues or data inaccuracy not detected during testing activities
• Effective training on the software product
• The ability to provide responsive support during first-time product usage
• The ability to monitor performance and stability of the software product under actual production conditions

"Global companies with a very large software product user base and custom configurations by country or region often use this approach for planning rollout of software products," Kastl said.
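The mechanics behind "progressively larger audiences" are usually handled by a feature-flag service, but the core idea, deterministic bucketing of users so a flag can go from 5% to 50% to 100% of traffic, fits in a few lines. This is a sketch of the idea, not any particular product's API:

```go
package rollout

import "hash/fnv"

// Flag exposes a feature to a configurable share of users. Bucketing by a
// hash of flag name plus user ID keeps each user's experience stable as the
// percentage is raised between releases.
type Flag struct {
	Name    string
	Percent uint32 // 0-100: share of users who get the new behavior
}

func (f Flag) EnabledFor(userID string) bool {
	h := fnv.New32a()
	h.Write([]byte(f.Name + ":" + userID))
	return h.Sum32()%100 < f.Percent
}
```

The same bucketing supports A/B comparison, since a given user always lands in the same group, and dropping Percent back to zero is the rollback path if the pilot group surfaces problems.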

Chaos Engineering

Chaos engineering is literally testing the effects of chaos (infrastructure, network and application failures) as it relates to an application's resiliency. The idea originated at Netflix with a program called "Chaos Monkey," which randomly chooses a server and disables it. Eventually, Netflix created an entire suite of open-source tools called the "Simian Army" to test for more types of failures, such as a network failure or an AWS region or availability zone drop. The Simian Army project is no longer actively maintained, but some of its functionality has been moved to other Netflix projects. Chaos engineering lives on.

"Now what you're starting to see are a couple of commercial implementations. For chaos to be accepted more broadly, often you need something more commercial," said Gartner's Murphy. "It's not that you need commercial software, it's going to be a community around it so if I need something, someone can help me understand how to do it safely."

Chaos engineering is not something teams suddenly just do. It usually takes a couple of years because they'll experiment in phases, such as lab testing, application testing and pre-production.

Chris Lewis, engineering director at technology consulting firm DMW Group, said his firm has tried chaos engineering on a small scale, introducing the concept to DMW's rather conservative clientele. "We've introduced it in a pilot sense showing them it can be used to get under the hood of non-functional requirements and showing that they're actually being met," said Lewis. "I think very few of them would be willing to push the button on it in production because they're still nervous. People in leadership positions [at those client organizations] have come from a much more traditional background."

Chaos engineering is more common among digital disruptors and smaller innovative companies that distinguish themselves using the latest technologies and techniques.
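Lab-phase experiments often start far smaller than Chaos Monkey: inject a failure deliberately and watch whether retries, timeouts and alerts behave the way the non-functional requirements say they should. The middleware below is a toy illustration of that kind of fault injection, not a production chaos tool:

```go
package chaos

import (
	"math/rand"
	"net/http"
	"time"
)

// FaultInjector wraps a handler and, for a configurable fraction of requests,
// adds latency and returns a 503 so resiliency behavior (retries, fallbacks,
// alerting) can be observed in a controlled environment.
func FaultInjector(next http.Handler, failureRate float64) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if rand.Float64() < failureRate {
			time.Sleep(500 * time.Millisecond) // simulated degradation
			http.Error(w, "injected failure", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```

Graduating from this kind of lab exercise to pre-production, and eventually production, is the multi-year progression described above.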

Proceed with caution

Expanding into more testing techniques can be beneficial when organizations are actually prepared to do it. One common mistake is trying to take on too much too soon and then failing to reap the intended benefits.

Raj Kanuparthi, founder and CEO of custom software development company Narwal, said in some cases people need to be more realistic. "If I don't have anything in place, then I get my basics right, [create] a road map, then step-by-step instrument. You can do it really fast, but you have to know how you're approaching it," said Kanuparthi, who's a big proponent of Tricentis. "So many take on too much and try 10 things but don't make meaningful progress on anything and then say, 'It doesn't work.'"





Buyers Guide

Closing the (back) door on supply chain attacks

With an increase in data breaches and operational disruptions, demand for security tooling for open-source and commercial components grows

BY JAKUB LEWKOWICZ

Security has become ever more important in the development process, as vulnerabilities last year caused the 2nd, 3rd and 7th biggest breaches of all time, measured by the number of people affected. This has exposed the industry's need for more effective use of security tooling within software development, as well as the need to employ effective security practices sooner.

Another factor contributing to this growing need is the prominence of new attacks, such as next-generation software supply-chain attacks that involve the intentional targeting and compromising of upstream open-source projects so that attackers can then exploit vulnerabilities when they inevitably flow downstream. The past year saw a 430% increase in next-generation attacks aimed at open-source software supply chains, according to the 2020 State of the Software Supply Chain report.

"Attackers are always looking for the path of least resistance. So I think they found a weakness and an amplifying effect in going after open-source projects and open-source developers," said Brian Fox, the chief technology officer at Sonatype. "If you can somehow find your way into compromising or tricking people into using a hacked version of a very popular project, you've just amplified your base right off the bat. It's not yet well understood, especially in the security domain, that this is the new challenge."

These next-gen attacks are possible for three main reasons. One is that open-source projects rely on contributions from thousands of volunteer developers, making it difficult to discriminate between community members with good or bad intentions. Secondly, the projects incorporate up to thousands of dependencies that may contain known vulnerabilities. Lastly, the ethos of open source is built on "shared trust," which can create a fertile environment for preying on other users, according to the report.

However, proper tooling, such as the use of software composition analysis (SCA) solutions, can ameliorate some of these issues. DevOps and Linux-based containers, among other factors, have resulted in a significant increase in the use of OSS by developers, according to Dale Gardner, a senior director and analyst on Gartner's Digital Workplace Security team. Over 90% of respondents to a July 2019 Gartner survey indicated that they use open-source software.

"Originally, a lot of these [security] tools were focused more on the legal side of open source and less on vulnerabilities, but now security is getting more attention," Gardner said.

The use of automated SCA

In fact, the State of the Software Supply Chain report found that high-performing development teams are 59% more likely to use automated SCA and are almost five times more likely to successfully update dependencies and to fix vulnerabilities without breakage. The teams are more than 26 times faster at detecting and remediating open-source vulnerabilities, and deploy changes to code 15 times more frequently than their peers.

The main differentiator between the top and bottom performers was that the high performers had a governance structure that relied much more heavily on automated tooling. The top teams were 96% more likely to be able to centrally scan all deployed artifacts for security and license compliance.

"Ideally, a tool should also report on whether compromised or vulnerable sections of code, once incorporated into an application, are executed or exploitable in practice," Gardner wrote in his report titled "Technology Insight for Software Composition Analysis." He added, "This would require coordination with a static application security testing (SAST) or an interactive application security testing (IAST) tool able to provide visibility into control and data flow within the application."

Gardner added that the most common approach now is to integrate a lot of these security tools into IDEs and CLIs.
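What that automation looks like at its smallest is a gate wired into the pipeline or the developer's own tooling. The sketch below is a toy version for a Go codebase: it reads the modules a build actually uses and fails if any appear on a local advisory list. Real SCA products resolve against curated vulnerability intelligence rather than a hard-coded map, and the flagged module named here is invented:

```go
// A toy dependency gate of the kind SCA tooling automates: list the modules a
// Go build uses and fail if any appear on a local advisory list.
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// module@version -> advisory note (hypothetical data for illustration)
var advisories = map[string]string{
	"example.com/legacy/crypto@v1.2.0": "known CVE, upgrade to v1.2.1 or later",
}

func main() {
	out, err := exec.Command("go", "list", "-m", "all").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "go list failed:", err)
		os.Exit(1)
	}
	failed := false
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // each line: "module version"
		if len(fields) != 2 {
			continue
		}
		if note, bad := advisories[fields[0]+"@"+fields[1]]; bad {
			fmt.Printf("BLOCKED %s %s: %s\n", fields[0], fields[1], note)
			failed = true
		}
	}
	if failed {
		os.Exit(1)
	}
}
```

Because it only needs the Go toolchain, a check like this can sit in a pre-commit hook or an early CI stage, catching a risky dependency at the moment it is introduced rather than weeks later.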



“If you’re asking developers ‘I need you to go look at this tool that understands software composition or whatever the case may be,’ that tends not to happen,” Gardner said. “Integrating into the IDE eliminates some of the friction with other security tools and it also comes down to economics. If I can spot the problem right at the time the developer introduces something into the code, then it will be a lot cheaper and faster to fix it then if it were down the line. That’s just the way a lot of developers work.”

They’re saying, well, it says sugar, it doesn’t say tainted sugar, and there’s no poison in it. So your cake is safe to eat,” Fox said. “Versus what we’re doing here is we’re actually inspecting the contents of the baked cake and going, wait a minute. There’s chromatography that shows that there’s actually poison in here, even though the recipe didn’t call for it, and that’s kind of the fundamental difference.” There has also been a major shift from how application security has traditionally been positioned.

Beyond compliance Using SCA for looking at licenses and understanding vulnerabilities with particular packages are already prominent use cases of SCA solutions, but that’s not all that they’re capable of, according to Gardner. “The areas I expect to grow will have to do with understanding the provenance of a particular package: where did it come from, who’s involved with building it, and how often it’s maintained. That’s the part I see growing most and even that is still relatively nascent,” Gardner said. The comprehensive view that certain SCA solutions provide is not available in many tools that only rely on scanning public repos. Relying on public repos to find vulnerabilities — as many security tools still do — is no longer enough, according to Sonatype’s Fox. Sometimes issues are not filed in the National Vulnerability Database (NVD) and even where these things get reported, there’s often a two-week or more delay before it becomes public information. Instead, effective security requires going a step further into inspecting the built application itself to fingerprint what’s actually inside an application. This can be done through advanced binary fingerprinting, according to Fox. The technology tries to deterministically work backwards from the final product to figure out what’s actually inside it. “It’s as if I hand you a recipe and if you look at it, you could judge a pie or a cake as being safe to eat because the recipe does not say insert poison, right? That’s what those tools are doing.
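The "inspect the baked cake" idea is, at its core, identifying components by their contents rather than by what a manifest claims. The sketch below hashes the files in a build output and looks them up in a catalog of known fingerprints; the path and catalog entry are invented, and commercial binary fingerprinting is far more sophisticated than a simple file-level hash:

```go
// A drastically simplified picture of content-based identification: hash every
// file in a build output and compare the digests to a catalog of known
// component fingerprints (hypothetical data).
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

var knownComponents = map[string]string{
	"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "example vulnerable library 1.0 (illustrative)",
}

func main() {
	root := "./dist" // build output to inspect (assumed path)
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if name, ok := knownComponents[fmt.Sprintf("%x", h.Sum(nil))]; ok {
			fmt.Printf("%s contains %s\n", path, name)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```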

Targeting development

In many attacks that are happening now, the developers and the development infrastructure are the target. The developers might have been the ones that were compromised this whole time, while things were being siphoned out of the development infrastructure.

"We've seen attacks that were stealing SSH keys, certificates, or AWS credentials and turning build farms into cryptominers, all of which has nothing to do with the final product," Fox said.


"In the DevOps world, people talk a lot about Deming and how he helped Japan make better, more efficient cars for less money by focusing on key principles around supply chains," Fox continued. "Well, guess what. Deming wasn't trying to protect against a sabotage attack of the factory itself. Those processes are designed to make better cars, not to make the factory more secure. And that's kind of the situation we find ourselves in with these upstream attacks."

Now, effective security tooling can capture and automate the requirements to help developers make decisions up front and to provide them with information and context as they're picking a dependency, and not after, Fox added. Also, when the tooling recognizes that a component has a newly disclosed vulnerability, it can determine that it's not necessarily appropriate to stop the whole team and break all the builds, because not everyone is tasked with fixing every single vulnerability. Instead, it's going to notify one or two senior developers about the issue.

"It's a combination of trying to understand what it takes to help the developers do this stuff faster," Fox said.

How does your company help make applications more secure?

Brian Fox, CTO of Sonatype: Today, more than 1,200 companies rely on the Nexus platform to unite software developers, security professionals, and IT operations on the same team so they can continuously identify and remediate open-source risk, without slowing down innovation. When speed is critical, Nexus ensures that controls keep pace and that innovation prospers.

Our award-winning platform is powered by Nexus Intelligence, a proprietary research service that knows more about the quality of open source than anyone in the world. This highly curated intelligence service integrates easily with a wide range of popular tools across every phase of your software development life cycle and empowers engineering teams to innovate faster with less risk.

For software developers, Nexus provides precise information and rapid feedback about open-source projects so engineers always utilize the highest-quality third-party libraries to build the best applications. For application security professionals, Nexus integrates with CI/CD pipelines so teams can automatically find, and easily fix, open-source security vulnerabilities and licensing risk. For operations professionals, Nexus continuously examines applications in production and generates a crystal clear picture of third-party open-source dependencies so teams can rapidly patch in the event of new zero-day threats.

At Sonatype, we've also taken great care to establish a culture intensely devoted to each customer's success. But don't just take our word for it. Our customers say it best: "The way Sonatype implemented their application with us has been really, really good... they don't just give you the software and walk out the door... even now, a year after going live, they still meet with us regularly and give us extremely helpful guidance. It is not often that I say that about companies we work with."


A guide to security tools

FEATURED PROVIDER

Sonatype: The Sonatype Nexus Platform automatically enforces open-source governance and controls risk across every phase of the SDLC. Fueled by Nexus Intelligence, which includes in-depth security, license, and quality information on millions of open-source components across dozens of ecosystems, the platform precisely identifies open-source risk and provides expert remediation guidance, empowering developers to innovate faster. Only Nexus secures your perimeter and every phase of your SDLC, including production, by continuously monitoring for new risk based on your open-source policies.

Aqua Security: The Aqua Container Security Platform protects applications running on-premises or in the cloud, across a broad range of platform technologies, orchestrators and cloud providers. Aqua secures the entire software development life cycle, including image scanning for known vulnerabilities during the build process, image assurance to enforce policies for production code as it is deployed, and run-time controls for visibility into application activity.

Bugcrowd: Bugcrowd reduces risk with coverage powered by its crowdsourced cybersecurity platform. Crowdsourced security supports today's key attack surfaces, on all key platforms, as well as "the unknown." A public crowd program can uncover risks in areas unknown to the security organization, such as shadow IT applications or exposed perimeter interfaces.

Contrast Security: Contrast Security achieves comprehensive security observability across the entire software life cycle, enabling users to remediate critical vulnerabilities and protect against real threats faster and more easily. Contrast OSS allows organizations to establish a comprehensive view of all open-source components and their risks, and Contrast Assess uses instrumentation to embed security directly into the development pipeline.

FOSSA: FOSSA's Deep Discovery enables users to get an accurate view of their open-source dependencies. It adds deep license scanning, dependency analysis, and intelligent compliance into users' real-time development workflows. FOSSA natively supports complicated workflows including multiple branches, tags and release channels. This allows users to compare releases, see what changed and integrate with code review to preview patches before they bring in issues.

Palo Alto Networks: Palo Alto Networks prevents attacks with its intelligent network security suite featuring an ML-powered next-generation firewall. Its Cortex XDR solution is a detection and response platform that runs on fully integrated endpoint, network, and cloud data. Users can manage alerts, standardize processes and automate actions of over 300 third-party products with Cortex XSOAR, a security orchestration, automation and response platform. AutoFocus uses high-fidelity threat intelligence to power investigation, prevention, and response.

Parasoft: Parasoft offers static analysis, dynamic analysis, unit testing, and code coverage for software testing of embedded systems to ensure they are safe, secure, and reliable. Parasoft solutions are built to automate functional safety compliance and keep up with ever-changing coding standards, so users can rest assured that their applications remain compliant at all times.

Signal Sciences: Signal Sciences offers a next-gen WAF and RASP to help users increase security and maintain site reliability without sacrificing velocity, all at the lowest total cost of ownership. Signal Sciences gets developers and operations involved by providing relevant data, helping them triage issues faster with less effort. With Signal Sciences, teams can see actionable insights, secure across the broadest attack classes, and scale elastically to any infrastructure and volume.

Snyk: Snyk Open Source Security management automatically finds, prioritizes and fixes vulnerabilities in users' open-source dependencies throughout the development process. Snyk's dependency path analysis allows users to understand the path through which transitive vulnerabilities were introduced. Snyk also offers an Infrastructure as Code solution that helps developers find and fix security issues in Terraform and Kubernetes code.


Splunk: Its Data-to-Everything Platform unlocks data across all operations and the business, and offers AI-driven insights so that IT teams can see the technical details and the business impact when issues occur. It also provides security professionals with comprehensive capabilities that accelerate threat detection and investigation. The platform offers full-stack, real-time cloud monitoring, complete trace data analysis and alerts, and a mobile-first automated incident response.

Synopsys: Synopsys helps development teams build secure, high-quality software, minimizing risks while maximizing speed and productivity. Synopsys provides static analysis, software composition analysis, and dynamic analysis solutions that enable teams to quickly find and fix vulnerabilities and defects in proprietary code, open-source components, and application behavior.

Veracode: Veracode offers a holistic, scalable way to manage security risk across your entire application portfolio. It provides visibility into application status across all testing types, including SAST, DAST, SCA, and manual penetration testing, in one centralized view. Its solution provides instant security feedback in the IDE, fix-first recommendations alongside findings, automated fix advice, and code reviews with secure coding experts. Veracode's program managers also advise teams on flaw types prevalent in particular development teams.

WhiteHat Security: The WhiteHat Application Security Platform is a cloud service that allows organizations to bridge the gap between security and development to deliver secure applications at the speed of business. Its software security solutions work across departments to provide fast turnaround times for Agile environments, near-zero false positives and precise remediation plans, while reducing the time wasted verifying vulnerabilities, threats and costs for faster deployment.

WhiteSource: WhiteSource enables users to secure and manage open-source components in their apps and containers with support for over 200 languages and frameworks, automated remediation with policies and fix pull requests, and advanced license compliance policies and reporting.


Guest View BY STEPHEN GATES

3 Reasons to get going with Go Lang

Stephen Gates is Cybersecurity Evangelist at Checkmarx.

Whether due to corporate project demands or out of pure curiosity, developers are often faced with learning new programming languages. While this can present challenges, especially when it comes to maintaining secure coding best practices, it also opens the door for developers to become accustomed to new, and increasingly better, languages.

One language in particular that has quickly become a rising star in today's software development community is Go, an open-source language developed by Google. Many have heard of Go since its "birth" in 2009; in fact, according to a new survey from Stack Overflow, Go is now the fifth most popular language among developers due to its simplicity and reliability. And the fact that it's one of the highest-paid programming languages certainly doesn't hurt things either.

However, in a 2019 survey that gathered nearly 11,000 responses from Go developers, the majority of respondents (56%) indicated they're new to the language, having used it for less than two years. Despite having been around for a while, it's clear that Go still has a long road ahead to reach widespread adoption, especially in corporate environments.

As Go continues its rise, we've outlined the top three reasons why we're Go advocates, so much so that we're adding enhanced support for the language ourselves, and we encourage developers to take the time to learn this easy-to-use language.

1. Go syntax is simple and clean: Go syntax is something between C and Python, with advantages from both languages. It has a garbage collector that is very useful. It does not have objects, but it supports structures (structs), and any type can have a method. It does not support inheritance, but it does support composition and interfaces. With Go, developers can have a pointer to a variable but at the same time don't have to worry about dereferencing it to call its methods. It's statically typed, but it's non-strict because of type inference. And last, but certainly not least, Go has a simple concurrency model.

Digging into Go's simplicity, but "awesomeness," a bit further:
• Swapping between variables is simple (e.g., b, a = a, b)
• Importing packages directly from GitHub or any other source manager is a breeze (import "github.com/pkg/errors")
• By starting a Goroutine, it supports concurrent routines (go runConcurrently())

2. Go is efficient and scalable: Thanks to the Go dependency model and its memory management, compilation is very fast compared to low-level languages, and even more so compared with high-level languages. Go's runtime performance is similar to C++ and C, making its performance quite notable. In the context of scaling, Go is much faster than its competitors. For example, when comparing goroutines to Java threads, a goroutine consumes ~2KB, while a Java thread consumes ~1MB.

3. Go is widely used and easy to learn: Go is an open-source language with wide adoption and a fast-growing community. On the web, there are several free and useful packages and many Q&As, FAQs, and tutorials. In addition, Go is very easy to learn. Because of its friendly syntax and the great "Tour of Go" (which takes about two days to complete and covers all the basics developers will need to get started programming in Go), developers will feel very confident with the language after completing the tour. When starting out with the language, coding in it becomes pretty easy overall. And after about two weeks of using it, it will likely become developers' preferred/native language.

A reminder not to "go" too fast, and to think securely when using open source

As easy as Go makes it for developers to start coding, like any other new language, security must be top of mind. Finding Go security discussions, tips, and training can be challenging, and the need for secure coding guides and summaries is apparent, since they are often of tremendous value to those starting any new language.

With Go's surge in popularity, it's imperative that applications developed in the language are designed with security in mind. Understanding the most common pitfalls is always a good first step. Leveraging application security testing (AST) solutions that support Go can also help ensure that more secure applications are the result.

So, what are you waiting for?! Time to get GOing!



Analyst View BY ARNAL DAYARATNA

The birth of the digital librarian

As digital transformation accelerates, deepens and intensifies within the enterprise, the number of digital assets that an enterprise has to manage will correspondingly increase. For example, enterprises are already in the process of developing more net new applications, modernizing existing applications, and creating microservices, APIs, functions as a service, infrastructure-as-code solutions, CI/CD toolchains and other implementations of DevOps toolchains. Moreover, enterprises increasingly source digital solutions from other enterprises or from community-based sharing infrastructures such as open source code repositories or private repositories. These digital solutions will be deployed across a multitude of infrastructures and will correspondingly require minor to substantial modifications to optimize them across different deployment environments.

This proliferation of digital assets will require enterprises to dedicate full-time developer resources to manage the exponential growth of digital assets. Such resources will play the role of a digital librarian tasked with the responsibility of ensuring that all digital assets:
• can be seamlessly retrieved and used by relevant stakeholders
• feature documentation about their origin, life cycle and evolution
• have been evaluated for the most recent security-related breaches and considerations
• are managed by governance protocols that ensure they are accessible only to appropriate individuals and teams
• are replicated in conjunction with business continuity planning to ensure not only their timely recovery, but also the ability to recover past versions of each asset
• have access to relevant datastores and APIs

This type of full-time developer resource, let's call it a digital librarian, will become increasingly important to enterprises as they embark further down the path of producing custom solutions. Moreover, a full-time digital librarian will become imperative as organizations intensify the practice of sharing and borrowing code from other organizations and communities.

Put differently, an enterprise digital officer will be required to manage the flow of digital assets and ensure the implementation of processes that enable developers to find the source of the digital assets that they are using, and to see how those assets have variously been used by the organization for which they work.

This kind of digital librarian will require the following skillsets and capabilities:
• Proficiency in AI/ML technologies that dynamically manage the indexing, tagging and classification of millions of digital assets
• Content management capabilities that empower developers to identify and use assets of interest
• Identity and access management experience to ensure that assets are available to appropriate personnel
• Business continuity and disaster recovery skills
• Granular knowledge of databases, data warehousing practices and the APIs used to access data and feed applications

Importantly, such librarians will need to meticulously document their own processes and actions to ensure business continuity as they themselves vacate the role over time. The larger point here is that a digital asset manager requires granular knowledge of development processes that they can bring to the role to manage the technologies that help manage digital assets. In addition, knowledge of development will be required to develop taxonomies and processes for classifying digital assets in ways that make sense to developers and facilitate the use of digital assets.

But isn't everything ultimately stored in an enterprise repository? And isn't this responsibility already handled by a CIO or chief data officer? The answer is no, on both counts. While code is stored in repositories, the surrounding universe of data stores that populate applications, APIs, testing results and production-grade implementations of applications across multiple platforms are not. The entirety of these artifacts is essential to documenting the digital history of enterprises, and their management is invaluable both to innovation and to legal considerations about the intellectual property of a digital solution. CIOs and chief data officers certainly manage many of the processes underlying the management of digital assets, but rarely to the point of managing the availability and history of each and every digital asset in the entire organization.

Arnal Dayaratna is Research Director of Software Development at IDC




Industry Watch BY DAVID RUBINSTEIN

Checking my notes

David Rubinstein is editor-in-chief of SD Times.

We speak to a lot of experts here at SD Times. Almost to a person, they talk about modern applications, tectonic shifts in development, scary scenarios of data breaches, the need for software to 'be' the business, and much more. But as I looked back on many of the interviews we've done, some overarching themes are still being discussed, even in the face of massive changes in the industry.

Vendor lock-in vs. best of breed

The pendulum is swinging again, only this time the names have changed, and now there's a twist. The "best of breed" side more often today is open source, which is exploding in IT departments due to its ease of use and low barrier of entry. With developers and deployment teams given more autonomy than ever to move quickly and release more often, any open-source tool they can find that can quickly help them create, deploy and maintain applications is added to the toolbox.

On the one hand, this model certainly helps organizations speed their time to market and quickly adapt to changes in their market. And development teams love the DIY capability without having to go through a lengthy procurement process to get the tools they need. On the other hand, managing the APIs and data exchanges between these tools is becoming ever more complex, and creates the potential for downtime if APIs change and applications suddenly fail. (We won't even talk about the shadow IT problem all this open source is creating inside organizations.)

So, at the other end of the pendulum sit the platform providers, offering abstraction layers that do all the heavy lifting of connecting data sources to applications, tools to tools and more. These platforms make managing that complexity easier, as it's all done in one place, with one singular view into the systems that organizations rely on for their business life.

But here's the twist: Many of these platforms are also open source. While the vendors behind them offer 24/7 support and add functionality that the individual projects can't or don't provide, the platforms are open source. If developers choose to use the free version of the tool, they have to rely on the community to ensure the tool is on top of all patches and potential security vulnerabilities.

So, has this argument simply become two sides of the same coin? Use the best open-source tools for the job you can find, but get them through a vendor that provides the support, management and updating required of an enterprise development tool.


Speed vs. quality and security

For most businesses today, their websites are the new window displays. This is where their customers visit to see what's for sale, what's on sale, to choose an item and make a purchase. And, just as they would change their window displays as the seasons changed, so too must they change their websites, only more quickly, to take advantage of market conditions, deal with the unknowns (like pandemics, which have driven so much more traffic to these websites) and ensure a great customer experience.

But many experts had been discussing speed even before all the pieces necessary to support going fast were in place. Organizations likely have adopted Agile development practices, and have stopped monolithic development in favor of smaller services and modern architectures that tie those services together and allow them to be swapped out. And the cloud has become the data center of choice for many organizations. But doing Agile and DevOps without having a test infrastructure in place, without having a security plan in place, will do more harm than the benefits you gain from going fast.

We hear the experts tell us that "shifting left" will solve these problems. But asking developers to be responsible for development, testing, security, deployment, management and maintenance is like asking a plumber to build a house. He or she probably can do it, but it will go slowly and take a lot of time.

The point is, this is an exciting time for software development, but also a time that has brought much angst as organizations try to deliver on the vision of software eating the world. What they ought to do is go slow to go fast. Make sure the right skills are in the right place, roles are clearly defined, and everyone has bought into the mission before simply deciding to pick up the pace of development. We've already seen enough 'one step forward, three steps back.'





