SD Times September 2021


SEPTEMBER 2021 • VOL. 2, ISSUE 51 • $9.95 • www.sdtimes.com



Instantly Search Terabytes
dtSearch’s document filters support: popular file types, emails with multilevel attachments, a wide variety of databases, web data
Over 25 search options including: efficient multithreaded search, easy multicolor hit highlighting, forensics options like credit card search
Developers: SDKs for Windows, Linux, macOS; cross-platform APIs for C++, Java and .NET 5 / .NET Core; FAQs on faceted search, granular data classification, Azure, AWS and more
Visit dtSearch.com for hundreds of reviews and case studies, and fully functional enterprise and developer evaluations
The Smart Choice for Text Retrieval® since 1991
dtSearch.com 1-800-IT-FINDS

www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Jenna Sargent jsargent@d2emerge.com
MULTIMEDIA EDITOR Jakub Lewkowicz jlewkowicz@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITOR Katie Dee kdee@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS Jacqueline Emigh, Caryn Eve Murray, George Tillmann
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com
LIST SERVICES Jessica Carroll jcarroll@d2emerge.com
REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com

D2 EMERGE LLC www.d2emerge.com
PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein



Contents

VOLUME 2, ISSUE 51 • SEPTEMBER 2021

FEATURES
Internet Privacy and User Protection ................ page 8
Low code cuts down on dev time, increases testing headaches ................ page 22
Infrastructure management going extinct with serverless ................ page 26

BUYERS GUIDE
To release, or not to release? ................ page 30

NEWS
6   News Watch
12  Report: 75% of developers say they’re responsible for data quality
14  Why machine learning models fail
15  Why did it take the Colonial hack to focus on security?
16  GitHub Copilot sparks debates around open-source licenses
18  Empower developers for broader role
20  HCL brings DevOps portfolio to the cloud
20  The potential of the DevOps fourth wave

COLUMNS
52  GUEST VIEW by Thomas Kohlenbach: How much process debt are you carrying?
37  ANALYST VIEW by Bill Swanton: 6 Steps to Upskill Developers
38  INDUSTRY WATCH by David Rubinstein: Is DevOps actually ‘The Bad Place’?

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2021 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Automation enhancements come to Wind River Studio
Wind River Studio’s latest update adds a customizable automation engine, digital feedback loop, enhanced security, analytics with machine learning capabilities and a DevSecOps pipeline. The platform also now offers customizable automation pipelines and integration with commonly used automation tools to help developers build connected intelligent systems such as airborne delivery drones, autonomous vehicles, and factory robots. For the development stage, the new release offers a pipeline manager visual tool to monitor the CI/CD of collaborative projects for VxWorks and Wind River Linux.

It also offers digital twin creation and synchronization to model resources, devices, and entire systems with Wind River Simics; quick emulation using QEMU; and simulation. For the deployment stage, the new studio offers a modern cloud platform updated with new 5G vRAN accelerator support and GPU enabled for AI-on-5G and augmented reality application support. It also offers a digital feedback loop and analytics powered by the digital feedback loop for real-time analysis, reporting, and alerting of infrastructure and application performance.

People on the move

• JFrog has named Sagi Dudai as its new executive vice president of product and engineering. He comes from Vonage, where he held the position of chief technology officer. While at Vonage he drove the company’s technology vision, architecture, and design and oversaw technology development. He will report to JFrog’s CEO and co-founder, Shlomi Ben-Haim.
• Matt Johnson will become the new chief executive officer of Silicon Labs at the start of 2022. He will replace the current CEO, Tyson Tuttle, who is retiring. Johnson has been at Silicon Labs since 2018, first being brought on as senior vice president and general manager of IoT products before being promoted to president in April of this year, which is still his current role.
• ServiceNow is creating a new role called senior vice president of global alliance and channel ecosystems go-to-market operations. Erica Volini will fill this new role and will be working to help partners build a broad community of digital transformation leaders. Volini previously held the title of principal in Human Capital at Deloitte Consulting.
• WSO2 is welcoming two new hires this month: Shekar Hariharan as chief marketing officer and Gregory Stuecklin as vice president and general manager of North America. Hariharan has held marketing roles at Jitterbit, Oracle, and SugarCRM, and Stuecklin comes from Microsoft. According to WSO2, Hariharan and Stuecklin will play a key role in the company’s global expansion.

Rust 1.54 now available
The latest version of the programming language Rust is now available. Rust 1.54 introduces a few new stable features. One new update is that attributes can invoke function-like macros. An example use case of it is including documentation from other files into comments. “For example, if your project’s README represents a good documentation comment, you can use include_str! to directly incorporate the contents,” the Rust team explained in a post. According to the team, there were previously some workarounds that would allow for this functionality, but this makes it more ergonomic. Another new addition is the move to stable for several intrinsics for the wasm32 platform. Unlike the x86 and x86_64 intrinsics that are already stabilized, these don’t have a safety requirement where they can only be called if the appropriate target feature is enabled. This is because WebAssembly validates code safely before it is executed so instructions are guaranteed to either be decoded correctly or not at all.
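To make the attribute change concrete, here is a minimal Rust sketch of the feature described above; it is not taken from the Rust team’s post, and the "../README.md" path is an assumption for the example.

    // lib.rs: a minimal sketch of the Rust 1.54 feature described above.
    // Function-like macros such as include_str! can now be invoked inside
    // attributes, so the crate-level docs can be pulled straight from the
    // README file (the "../README.md" path is an assumption for this example).
    #![doc = include_str!("../README.md")]

    /// A trivial item so the crate has something besides the README to document.
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }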

Jetpack Compose reaches 1.0 release
Jetpack Compose is a UI toolkit for Android developers. According to the team, Compose has been developed in the open for the past two years with participation from the Android community. As of this 1.0 release, there are already 2000 apps in the Play Store that have utilized Compose. Jetpack Compose is designed to be interoperable with existing apps, integrate with Jetpack libraries, and offer Material Design components.

Its Lazy components provide a simple and powerful way to display lists of data without requiring much boilerplate code, the team explained. Compose also has a selection of animation APIs to make it easier to build animations into Android apps. This 1.0 release introduces Compose Preview, available in Android Studio Arctic Fox, which allows developers to see their Composables in different states, light and dark themes, or different font scalings. This makes component development easier since developers don’t have to deploy a whole app to a device to see what those changes look like. Another new addition is Deploy Preview, which allows developers to test parts of the UI without having to navigate to that part of the app. The Jetpack Compose team also unveiled its roadmap for the future of the toolkit. Going forward, the team will be focusing on performance, Material You components, large screen improvements, homescreen widgets, and Wear OS support.

IntelliJ IDEA 2021.2 focuses on experience
It includes a new project-wide analysis feature that allows developers to track errors before compiling the code. JetBrains also added a number of actions that will activate when a project is saved. These include things such as reformatting code and optimizing imports. Markdown support has also been improved. Developers will now be able to convert .md files to and from different formats, configure image sizes, and use drag and drop to add images. There is also a new Floating Toolbar and JetBrains fixed list formatting issues.



Microsoft sunsetting Xamarin Community Toolkit
Microsoft is revealing plans for the future of its Xamarin Community Toolkit as the .NET MAUI release nears. This year the company has been working to unify Xamarin SDKs into .NET, and it released .NET MAUI as an evolution of Xamarin.Forms with the ultimate goal of acting as a replacement. Microsoft will be releasing two NuGet packages for .NET MAUI: CommunityToolkit.Maui and CommunityToolkit.Maui.Markup. It is planning to release the first preview of these packages in August. The team is currently in the process of bringing features from the Xamarin Community Toolkit to the .NET MAUI Community Toolkit. Microsoft recommends the .NET MAUI Community Toolkit as the toolkit for all .NET MAUI apps. Microsoft will also be releasing two .NET MAUI-compatible versions of the Xamarin Community Toolkit to help developers avoid breaking changes when porting Xamarin.Forms apps to .NET MAUI. According to the company, these will be almost identical to the current Xamarin Community Toolkit libraries, with the only difference being a change in the Xamarin.Forms dependency to .NET MAUI. In terms of sunsetting the Xamarin Community Toolkit, the company will continue to support it through November 2022. It will accept pull requests for bug fixes through that time, but it will only accept pull requests to add new features through September 2021.


IBM Z systems get new OS
IBM z/OS V2.5 is designed to accelerate the adoption of hybrid cloud and AI as well as drive modernization initiatives. It features tightly integrated high-performance AI functionality, which is designed to enable more informed decision making. New security capabilities include expanding pervasive encryption to basic and large format SMS-managed data sets and anomaly mitigation capabilities that utilize Predictive Failure Analysis (PFA), Runtime Diagnostics, Workload Manager (WLM), and JES2. The OS update also introduces new improvements for running on hybrid cloud. IBM z/OS V2.5 adds new Java and COBOL interoperability to extend existing programming models, enhanced performance and ease of use for z/OS Container Extensions, and transparent cloud tiering and Object Access Method cloud tier support to reduce capital and operating expenses.

Live Preview added to Visual Studio 2022 preview
The second preview release for Visual Studio 2022 is now available. Visual Studio 2022 Preview 2 is focused on providing capabilities for productivity, modern development, and innovation, according to Microsoft. The first preview introduced the new Cascadia Code font, which was designed to be easier to read. In this preview, the team also updated icons to make them clearer and easier to distinguish. Preview 2 is also fully localized and includes over a dozen language packs. Languages to choose from include English, Chinese (Simplified), Chinese (Traditional), Czech, French, German, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Russian, Spanish, and Turkish.

Productivity improvements include Live Preview experiences for XAML and web apps that show changes to apps in real time and Force Run, which is a new debug command that allows developers to run an application to a specific point, ignoring other breakpoints and exceptions. Visual Studio 2022 includes support for the latest version of the C++ build tools, new CMake integration, and seamless targeting for WSL2. In addition, this preview adds an update to Hot Reload, adding C++ support. Developers will now be able to use Hot Reload to edit C++ or .NET apps while the app is running.

Python Extension for VS Code July 2021 release
This release introduces a quicker way of configuring project roots.


According to Microsoft, a common issue is that developers see diagnostics under import statements when opening new projects but don’t know how to resolve them. Configuring project roots used to require the developer to set python.analysis.extraPaths to let Pylance know what search paths to use for import resolution. Now developers can skip the step of manually changing settings.json and trying to find the right search paths to add. Pylance will now guide them through this process through the editor. To take advantage of this, developers can hover over the diagnostic and click the lightbulb icon or “Quick Fix” in the tooltip to have Pylance suggest search paths. Another new change in the July 2021 release is that selecting an interpreter doesn’t modify workspace settings anymore. In the past, when a Python interpreter was selected or changed, the python.pythonPath setting was updated with the path as its value. The path is usually specific to the machine, so this caused problems when developers tried to share their VS Code settings in a GitHub repo. This release also adds two new debugger features. The first is the ability to select which targets to step into. When the debugger stops at a breakpoint with multiple function calls, developers can pick the one to step into by right-clicking, selecting “step into targets,” and choosing the desired target. The second new debugger feature is function breakpoints, which allow developers to specify a function to inspect its behavior. The debugger will stop executing when it reaches that function.



Internet Privacy and User Protection
BY KATIE DEE

There are many facets of internet privacy that must come together in order to provide the best possible protection for users, and it all starts with each application and platform doing their part. According to Curtis Simpson, chief information security officer at the cybersecurity platform provider Armis, the way organizations protect their users comes down to what kind of data the user is providing. Understanding user data is the first step to providing strong privacy and security, Simpson said. “We’ve got to be looking at what personal information is flowing through our environment unprotected,” Simpson explained. “Gaining visibility to that clear tech data that’s linked to the landscape and first and foremost understanding that.” If applications and platforms that users rely on for protection understand the kind of personal data they are entrusted to protect, then it will make it exponentially easier for them to do so.

Simpson cautioned, though, “Unfortunately in most environments we see, a lot of that data is not encrypted. It’s flowing through networks, going outside of the company and can be intercepted and stolen by anyone,” he said. This can be a scary thought for many users. It is not uncommon to browse the internet assuming a certain level of anonymity will be provided, and that is why it is so important for organizations to take crucial steps to grant users protection. However, understanding data goes deeper than encryption. Simpson said there are many levels to personal user data, and platforms should strive to have a clear picture of all of them. “What we should be doing from there is taking a step back and looking at things like: where is this data coming from? Who is it being shared with? And really taking action,” he began. Simpson explained that a good way to gain knowledge on these things is for organizations to create data flow maps.

These kinds of maps provide a physical representation of how data is created, who creates it, where it goes, and who needs access to it, making it easier for companies to more securely protect their users. “We’ve got to do that legwork because what we have to do is set a standard, monitor the standard, and continue to build controls around the standard,” Simpson explains. The burden of internet privacy doesn’t fall solely on organizations, though; users also hold some of the responsibility. According to Simpson, a one-sided approach to privacy will never be enough. He says that keeping track of and hiding passwords is the first thing users should be concerned with online. “There’s a lot of things users can do, but one of the first things I recommend is using a password management or password vaulting service where you can centrally manage passwords in one application,” he explained.



With so many different applications and services requiring unique passwords for login, it can be challenging to keep track. Being overwhelmed with too many passwords in too many locations can result in carelessness and sometimes leave a metaphorical window open for hackers that may allow them to more easily gain access to user data and information. According to Simpson, storing passwords in a centralized and secure place helps to combat this and provide an extra layer of protection to users. On top of this, Simpson also stresses the importance of having multi-factor authentication enabled when it is available. “It’s particularly important in email because you think about when you hit a password reset button on almost any website, that password reset is going to that email address,” he began. “If someone gained access to that email account, they can gain access to anything else associated with that email account.” According to Simpson, this is how most user information becomes compromised on a regular basis. However, his most important tip to users looking to up their internet security is to simply think about what they are sharing. “If you don’t need to share the information, if it’s not a required field, don’t share it.” According to Sri Mukkamala, senior vice president of security products at the IT automation platform Ivanti, the responsibility of internet privacy falls on both the organization and consumer equally. “It’s a combination of both, because as an individual if you give up information, you’re almost signing a waiver,” he began. “There’s something that says ‘I accept’ and you don’t even read through it… and I wouldn’t fully blame the consumer because at the same time a company should not just throw in legalities and take that waiver and do whatever they want.” Mukkamala said that this is a key reason why we see regulations coming into play more and more now. Relying on just the consumers and organizations themselves to provide proper protection is no longer enough. In recent years many applications have opted for biometric identification in place of passwords in pursuit of a more secure platform.

According to Simpson, this has worked in many cases, but not to the scale necessary. “It’s helped but it’s not consistently implemented on a widespread basis that would provide it with the material impact that it could have.” Simpson attributes this lack of widespread adoption to the diversity we see in devices. With so many users employing a number of different technologies, creating a standardized kind of biometric identification has proved to be incredibly difficult. “Everyone is concerned about the business impact as well and the impact that [biometric identification] can have within the organization so these types of things can be scary,” he added. Another key aspect of implementing this kind of technology is assurance that organizations are doing it the right way. While Simpson believes that software like this paired with universal adoption would be a major step in the right direction for internet privacy, he also believes that taking shortcuts with such important technology will do more harm than good.


There is another side of the shift towards biometric identification, however. According to Mukkamala, using facial recognition or voice identification in place of passwords could result in hackers becoming savvier and gaining access to arguably even more personal information. Mukkamala explained, “The personally identifiable information has just expanded its scope… if I started collecting biometrics, whether that's facial recognition or voice recognition, where will that data go?” This is an interesting point to look at. If organizations opt for this kind of identification, the data they are collecting from users becomes almost more personal and thus, has to be protected accordingly, which brings us right back around to the initial question of how organizations can ensure the best protection for users.

Regulatory controls for data use

In the last few years, internet privacy has been taken very seriously by many organizations. Back in 2016, the European Union announced the implementation of the General Data Protection Regulation (GDPR), which was designed to better protect internet users.

Swallowing third-party cookies

Google has announced that it will phase out third-party cookies in its Chrome browser in the name of internet privacy and protecting user data. According to Curtis Simpson, CISO of the cybersecurity platform Armis, this move away from third-party cookies will have a tremendous impact. “If you look at this whole acceptance model that was built around third-party cookies, that’s generally a joke,” he explained. According to Simpson, most users are hitting “accept all” when pop-ups prompt them to do so, regardless of whether or not they understand what they are actually agreeing to. Once users allow cookies to access their data, it becomes much easier for it to fall into the wrong hands. “In most cases, they’re collecting more information than you want or need to share with them,” Simpson warned. Google’s push away from third-party cookies will provide users with better privacy because they will no longer have to worry about what outside sources have access to their personal information. Sri Mukkamala, senior vice president of security products at the IT automation platform Ivanti, puts this into perspective. “If someone walks up to you on the street and says ‘show me your driver’s license,’ you’re going to ask why,” he explained. “It’s the same thing online and you don’t even hesitate to give that personal information away.” This comparison drives the point home. When websites like Google ask users to allow third-party cookies, and they do, it is essentially the same as giving a stranger on the street your information. The user has no real knowledge of what websites are going to do with that information or where it could end up. The internet should operate the same as the real world in this way: question why websites want users to grant access to cookies and respond in the same way you would if this were an interaction in the real world.


According to Simpson, GDPR is the first set of laws regarding internet privacy that enterprises really took seriously. “In many cases, enterprises see it’s cheaper to be non-compliant than to be compliant, but GDPR changed all of that due to their findings,” Simpson explained. He believes that this widespread compliance with the regulations GDPR put into place is the reason why it has been so effective. However, that does not mean that every organization is following all of these rules. Simpson explained that GDPR was effective because many companies were enforcing these laws due to a fear of repercussions if they did not. Unfortunately, this kind of fear-based acceptance may not be a sustainable model. “If we don’t continue to see penalties, due to inaction, I think we’re going to see a slowdown around some of those privacy elements,” he said. While Simpson believes that if organizations become more lenient with GDPR regulations, it could lead to stricter enforcement and more fines, he also predicts that until penalties become more consistent and more public, privacy issues may fall to the back burner. On the bright side though, Simpson also predicts that in the near future, we will see an increase in regulations like GDPR being implemented on a national scale. “Whether it’s [an adoption of GDPR] or other similar regulations, we’re going to see across the globe that everyone’s going to care and mandate minimum standards,” he said. Once organizations start to care more about internet privacy and putting regulations in place to protect users, we will see a real change. “As we’ve seen, this really does have a general impact on society as we continue to see these breaches at scale affecting hundreds of millions of people,” he began. “We can’t continue to have that happen because it’s disrupting services and capabilities within countries because when this happens at scale, it has a much larger impact.”

Disposing of user data

Lisa Plaggemier, interim executive director at the National Cybersecurity Alliance, said that she believes one of the biggest challenges facing internet privacy today is the issue of disposing of user data once websites no longer need it. “What happens in a lot of companies is there will be a particular initiative and when that program is over, nobody thinks about what happens to that data,” she said. According to Plaggemier, this is one of the biggest blind spots organizations face in terms of user protection. If companies are taking data and personal information from consumers with no proper disposal plan in place, it can become a real risk. User personal information can easily fall into the wrong hands if it is stored or disposed of improperly once it is no longer needed. Plaggemier spoke specifically about a data breach involving Mercedes-Benz and a third-party company. According to Plaggemier, the data compromised was from years before the breach took place, long after Mercedes had stopped working with the third-party company involved. “It brings the question to my mind: are you still using that data? Why is it still out there?” she said. This breach left many consumers vulnerable, and if the proper user protection and data disposal systems had been in place, it may never have happened. When consumers give online companies their personal information they are placing their trust in them, and if organizations don’t properly dispose of this data when it is no longer needed, they are betraying that trust.

California was the first state to go the extra mile in terms of internet privacy when it enacted the California Consumer Privacy Act (CCPA). This set of laws used GDPR as a guide to help better implement and enforce these regulations. According to Simpson, while CCPA operates on a smaller scale than GDPR, it is still generally effective. CCPA striving to meet GDPR requirements has caused many organizations to look at their own privacy guides and adjust them. “In many cases, companies are just finding the lowest common denominator,” he explained. These companies and organizations are looking at GDPR and CCPA regulations and trying to enact similar standards on a smaller scale, which will ultimately be a positive for users. Mukkamala said that one way companies and organizations can ensure user privacy outside of enacting new laws is to simply collect less information and be more careful with what they do collect from users. “Companies collect way too much information,” he began. “The company should be very careful about what they’re collecting, why they’re collecting it, and how they plan to use it.”

If companies took a step back and reevaluated how much personal user information they are collecting, they could rid themselves of what they deem unnecessary. Doing this would make for more secure platforms because organizations would be more intentional about what they are collecting and storing from users. Mukkamala referred to the excessive amount of personally identifiable information (PII) websites and organizations collect, and the possible misuse of said info, as privacy debt. In recent years this has become a bigger problem as more and more privacy debt is incurred by companies. “Because of privacy debt, during transactions, during IPO, during their SEC disclosures, privacy is becoming a very important risk factor to be considered,” Mukkamala said. All this is to say that organizations that are taking more information than needed from their users, while not taking the best steps to protect consumers, may end up paying the price for it in the long run. Collecting personal information from users requires proper guidelines for how to use, store, and share said information, whether that be at a company, state, or global level.



Collaborative Modeling
Keeping People Connected

Application Lifecycle Management | Jazz | Jira | Confluence | Team Foundation Server | Wrike | ServiceNow | Autodesk | Bugzilla | Salesforce | SharePoint | Polarion | Dropbox | *Other Enterprise Architect Models

Modeling and Design Tools for Changing Worlds
sparxsystems.com


Please rank your data quality challenges
(1 is the greatest challenge, 5 is the smallest)

Challenge                        1       2       3       4       5     Weighted Average
Inconsistent Data              34.3%   12.6%   18.8%   15.2%   19.1%        2.72
Incomplete Data                15.9%   41.2%   20.0%   15.9%    7.0%        2.57
Old/Incorrect Data             10.3%   16.2%   43.5%   21.9%    8.1%        3.01
Misfielded Data                11.5%   20.0%   21.9%   32.3%   14.3%        3.18
International Character Sets   18.5%   20.0%   10.0%   20.5%   31.0%        3.25
Duplicates                     22.5%    9.5%    8.9%   13.0%   46.1%        3.51

Source: SD Times Data Quality Market Study 2021
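The “Weighted Average” column above follows directly from the five percentage columns: each rank from 1 to 5 is weighted by the share of respondents who chose it. A minimal Rust sketch of that calculation (not part of the original report), using the first data row of the table:

    // Minimal sketch: derive the table's weighted average from the share of
    // respondents choosing each rank (1 = greatest challenge, 5 = smallest).
    fn weighted_average(shares_percent: [f64; 5]) -> f64 {
        shares_percent
            .iter()
            .zip(1..=5)
            .map(|(share, rank)| share / 100.0 * rank as f64)
            .sum()
    }

    fn main() {
        // First row of the table above (weighted average listed as 2.72).
        let row = [34.3, 12.6, 18.8, 15.2, 19.1];
        // Prints roughly 2.72, matching the table.
        println!("{:.2}", weighted_average(row));
    }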

Report: 75% of developers say they’re responsible for data quality
BY DAVID RUBINSTEIN

Nearly three-quarters of developers say they are responsible for managing the quality of the data they use in their applications, a key finding in the 2nd SD Times Data Quality Survey, completed in conjunction with data management provider Melissa in July. In last year’s survey, the number of developers claiming this responsibility was less than 50%, supporting the notion that the role of software developers has expanded beyond writing code. As organizations move security, testing, governance and even marketing and finance earlier into the application life cycle, developers are squeezed for time by ever-shrinking delivery timelines, and data quality often remains a “hope it’s right” afterthought to development teams. Among the other key findings is that the top problem development teams face is inconsistency of the data they need to utilize, followed closely by incomplete data and old/incorrect data. Last year’s top choice, duplicate data, fell to fourth this year. Misfielded data and international character sets round out the category. Because of these data problems, respondents to the survey said they spend about 10 hours per week dealing with data quality issues, taking time from building new applications.

Despite these problems, some 83% of respondents claimed their organizations are either data proficient or data aware, while only the remainder say they are data savvy (15%) and data driven (around 2%). “Data is critical to the success of organizations worldwide, and to find that such a small number consider themselves savvy or data driven is somewhat alarming,” said David Lyman, publisher of SD Times. “With the world moving forward on data privacy and governance, to see organizations still failing to maintain their data should be a wakeup call for the industry at large.” James Royster, the head of analytics at Adamas Pharmaceuticals and formerly the senior director of analytics and data strategy for biopharmaceutical company Celgene, said a big problem organizations face with their data is that there are “thousands of nuances” in big sets of data. Royster gave an example of IQVIA, a health care data connectivity solutions provider, which collects data from more than 60,000 pharmacies, each dispensing hundreds and thousands of drugs, serums and more.

On top of that, they service hospitals and doctors’ offices. So, he explained, “there are millions of potential points of error.” And in order for companies to create these datasets, they have to have developers write code that brings these data sets together, in a way that can be digested by a company. And that’s an ongoing process. “So as they’re changing code, updating code, collecting data, whatever it is, there’s millions of opportunities for things to go wrong.” But data issues don’t occur only in large organizations. Smaller companies also have problems with data, as they don’t have the resources to properly collect the data they need and monitor it for changes beyond someone in the database contacting them that something in their data has changed. As an example, smaller companies might use a form to collect data for users, but many users provide bad data to avoid unwanted contact. The problem, Royster said, is that “there’s nobody checking it or aggregating it or applying any sort of logic to it to say, this is how this should be. It’s just data goes in ... data comes out. And if that data that goes in is incorrect, what comes out is incorrect.”



Bad Data Happens. We'll Help You Fix It.

TALK TO AN EXPERT

You can’t improve your business unless you know what’s wrong with it. Inaccurate customer data leads to poor business insight, lackluster communications, and inefficient operations. For over 35 years, Melissa has been one of the most trusted global experts in data quality, address verification and identity resolution. When bad data is holding your business back, we’ll help you find it, fix it, or flush it. Guaranteed.

Unify Data to Improve Customer Connections

Prevent Bad Data from Entering Your Database
Power Business Intelligence

Increase ROI
Enrich Data to Fuel Audience Insights & BI

Ask Melissa for a free troubleshooting & our 120-Day ROI Guarantee.

Melissa.com 1-800-MELISSA



Why machine learning models fail
BY KATIE DEE

Machine learning is quickly becoming an important tool for automation, but failing models and improper background knowledge are creating more issues than they are solving. “I think to build a good machine learning model… if you’re trying to do it repeatedly, you need great talent, you need an outstanding research process, and then finally you need technology and tooling that’s kind of up to date and modern,” said Matthew Granade, co-founder of machine learning platform provider Domino Data Lab. He explained how all three of these elements have to come together and operate in unity in order to create the best possible model, though Granade placed a special emphasis on the second aspect. “The research process determines how you’re going to identify problems to work on, find data, work with other parts of the business, test your results, and deliver those results to the business,” he explained. According to Granade, the absence of the essential combination of those aspects is the reason why so many organizations are faced with failing models. “Companies have really high expectations for what data science can do but they’re struggling to bring those three different ingredients together,” he said. This raises the question: why are organizations investing so much into machine learning models but failing to invest in the things that will actually make their models an ultimate success?
According to a study conducted by Domino Data Lab, 97% of those polled say data science is crucial to long-term success; however, nearly as many say that organizations lack the staff, skills, and tools needed to sustain that success. Granade traces this problem back to the tendency to look for shortcuts. “I think the mistake a lot of companies make is that they kind of look for a quick fix,” he began. “They look for a point solution, or this idea of ‘I’m going to hire three or four really smart PhDs and that’s going to solve my problem.’ ” According to Granade, these types of quick fixes never work long-term because the issues run deeper. It is always going to be important to have the best minds on your team but they cannot exist independently. Without the best processes and best tech to back them up, it becomes a futile attempt to utilize data science. Domino Data Lab’s study also revealed that 82% of executives polled said they thought that leadership needed to be concerned about bad or failing models as the consequences of those models could be astronomical. “Those models could lead to bad decisions that produce lost revenue, it could lead to bad key performance indicators, and security risks,” Granade explained.
Granade predicts that those companies that find themselves behind the curve on data science and machine learning practices will work quickly to correct their mistakes. Organizations that have tried and failed to implement this kind of technology will keep their eye on the others that have succeeded and take tips where they can get them. Not adapting to this practice isn’t an option in most industries, as it will inevitably lead to certain companies falling behind as a business. Granade goes back to a comprehensive approach as the key to remedy the mistakes he has seen. “I think you can say ‘we’re going to invest as a company to build out this capability holistically. We’re going to hire the right people, we’re going to put a data science process in place, and the right tooling to support that process and those people,’ and I think if you do that you can see great results,” he said.
Jason Knight, co-founder and CPO at OctoML, believes that another aspect of creating a successful data science and machine learning model is a firm understanding of the data you’re working with. “You can think you have the right data but because of underlying issues with how it’s collected or annotated or generated in the first place, it can kind of create problems where you can’t generate a model out of it,” he explained. When there is an issue with the data that goes into generating a successful model, no matter what technique an organization uses, it will not work in the way it was intended. This is why it is so important not to skip steps when working with this kind of technology; assuming that the source data will work without properly understanding its details will spark issues down the line. Vaibhav Nivargi, founder and CTO of the cloud-based AI platform Moveworks, also emphasized good data as being an essential aspect of creating a successful model. “It requires everything from the right data to represent the real world, to the right understanding of this data for a given domain, to the right algorithm for making predictions,” he said. The combination of these will help to ensure that the data going into creating the model is dependable and will create the desired results. OctoML’s Knight also said that while certain organizations have not seen the success they had originally intended with data science and machine learning, he thinks the future is bright. “In terms of the future, I remain optimistic that people are pushing forward the improvements needed to give solutions for the problems we have seen,” he said.



Why did it take the Colonial hack to focus on security?
BY DAVID RUBINSTEIN


We’ve had SolarWinds. Kaseya. Microsoft Exchange. We’ve heard of millions upon millions of personal data files being hacked and exploited. So, why was it that the Colonial Pipeline ransomware attack was the one to get people focused on software and infrastructure security? The easy answer is because it hit consumers at the gas pump, and made gasoline very hard to get in other places. With a public outcry like that, it’s no wonder that people took notice — from people in security positions, to politicians calling for investigations, to the C suites of many organizations. There were some among us, though, who had already taken notice — security engineers who’ve been literally howling about poorly written code and vulnerabilities for years. And the move to remote work also focused IT and security professionals to push for more effort. Mark Ralls, president and COO of Invicti Security, said that at the Black Hat security conference early last month, there was a palpable sense that there have never been so many big security issues in a short period of time. Among the conference attendees, Ralls said, the general reaction to the Colonial ransomware attack was “not like any sort of shock or surprise. The reaction was, why did it take lines at gas pumps, when there’s been this steady trickle of hospitals getting shut down, city and local government, police forces. And so I think there’s this almost kind of weariness over the fact that, ‘Oh, my God, that’s what they pay attention to?’ ” Security teams, Ralls said, have been trying to get more attention to the problem for a long time, as they see first-hand that things have been out of control. “Security people were frustrated that that’s what it took, but they’re glad attention is being focused on the problem.” Part of the problem is the disconnect between the C suite and the developers on the ground, Ralls explained.

In a survey Invicti did of developer managers and developers, they found those workers were much more worried about security than the higher-ups in the organization. “Senior leadership were massively more confident that every application was being scanned, that security is built in their process. And the lower in the organization you went, they’re all kind of worried. And then you go down to the practitioner level, and they’re very worried.” Perhaps the disconnect is a lack of visibility in the reporting, or maybe it’s the squeaky wheel not getting rewarded, Ralls speculated. This might be leading to people just doing their jobs to the best of their ability and not escalating things up the chain as often. Ralls also laid security problems squarely at the feet of the new complexities required to build applications and deliver new features or bug fixes at ever-increasing speeds. “Agile is almost everywhere, and with Agile come microservices and increased use of APIs and breaking the application apart,” he said. “And if your security can’t keep up and you don’t want to be the one to throw a wrench in the works and prevent the latest release from going out, there’s a long-tail risk of a very bad outcome.” He went on to say that if you’re not communicating these issues up the chain, it’s only natural for the C-level executives to say, “Well, security’s not freaking out so therefore, we must be secure.” At Black Hat, Ralls said, one of the keynote speakers was lamenting the fact that the industry has been talking about such application vulnerabilities as cross-site scripting and SQL injection for 20 years and yet they have not been eradicated. “There’s always been a sense of it’ll get solved at the platform level, or some other level,” Ralls said. “I think we have to get to the point that just isn’t going to happen. And so, we got to help developers create their code more securely.”


GitHub Copilot sparks debates around open-source licenses
AI-powered code authoring solution found to be copying GPL code verbatim
BY JENNA SARGENT

A few weeks ago GitHub released its Copilot solution, which uses AI to suggest code to developers. Developers can write a comment in their code and Copilot will automatically write the code it thinks is appropriate. It’s an impressive example of the power of AI, but has many developers and members of the open-source community upset and worrying over what it means for the future of open source. One issue is that the program has had many examples of exactly copying an existing function verbatim, rather than using AI to create something new. For example, Armin Ronacher, director of engineering at Sentry and the creator of Flask, tweeted a GIF of himself using Copilot where it reproduced the famous fast inverse square root function from the video game Quake. Leonora Tindall, a free software enthusiast and co-author of Programming Rust, reached out to GitHub asking if her GPL code was used in the training set and the company’s support team responded back saying, “All public GitHub code was used in training. We don’t distinguish by license type.” When SD Times reached out to GitHub to confirm what code Copilot was trained on, they declined to comment.
“I, like many others, have shared work on GitHub under the General Public License, which as you may know is a copyright-based license that allows anyone to use the shared code for whatever they want, so long as, 1) they give credit to the original author and 2) anything based on it is also shared, publicly, under the GPL. Microsoft (through GitHub) has fulfilled neither of these requirements,” Tindall said. “Their argument is that copying things is fair use as long as the thing you’re copying it into is a machine learning dataset, and subsequently a machine learning model. It’s clear that copying is happening, since people have been able to get Copilot to emit, verbatim, very novel and unique code (for instance, the Quake fast inverse square root function).”
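Since the Quake routine comes up repeatedly in this debate, here is a sketch of it translated into Rust purely for illustration; it reflects the widely published algorithm (the magic constant plus one Newton step), not output taken from Copilot or from the article.

    // A Rust translation (for illustration only) of the famous "fast inverse
    // square root" routine from Quake III Arena discussed above. It
    // approximates 1/sqrt(x) using the well-known magic constant followed by
    // a single Newton-Raphson refinement step.
    fn fast_inverse_sqrt(x: f32) -> f32 {
        let i = 0x5f3759df_u32.wrapping_sub(x.to_bits() >> 1);
        let y = f32::from_bits(i);
        y * (1.5 - 0.5 * x * y * y)
    }

    fn main() {
        // Prints roughly 0.5 for x = 4.0 (the exact value is 1/sqrt(4) = 0.5).
        println!("{}", fast_inverse_sqrt(4.0));
    }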

According to Tobie Langel, an open-source and web standards consultant, the GPL was largely created to avoid things like Copilot from happening. “I understand why people are upset that it’s legal to use — or considered legally acceptable of a risk of using — GPL content to train a model of that nature. I understand why this is upsetting. It’s upsetting because of the intent of what the GPL is about,” said Langel. Ronacher believes that Copilot is largely in the clear based on current copyright laws, but that there’s an argument to be made that there are a lot of elements of copyright laws that ought to be revisited. Langel also feels that legally Copilot is fine based on the conversations he’s had with IP lawyers so far. The issue of what’s copyrightable, and what isn’t, is complicated because different people have different opinions about it. “I think a lot of programmers think there’s a difference between taking one small function and using it without attribution or taking a whole file and using it without attribution. Even from a copyright perspective, there are differences about what’s the minimum level of creation that actually falls under copyright,” said Ronacher. For example a+b wouldn’t be copyrightable, but something more complicated and unique could be. If you were to remove all the comments and reduce it to its minimum, the fast inverse square root function in Quake is still only two lines. “But it’s such a memorable and well-known function that it’s hard to argue that this is not copyrighted because it’s also very complex and into the creation of this a lot of thought went,” said Ronacher.
There is a threshold on what is copyrightable versus what isn’t, but it’s hard for humans to determine where that line is, and even harder for a machine to do that, Ronacher said. “I don’t think being upset about Copilot is synonymous with being for a hardline stance on copyright. A lot of us free software types are pretty anti-copyright, but since we have to play by those rules, we think the big companies should have to obey them also,” said Tindall. Langel believes that Copilot won’t be the final breaking point for addressing some of the issues in open source, just another drop in the bucket. “I think these issues have been increasingly brought up, whether it be ICE using open-source software, there’s lots that has been happening and that’s sort of increasingly creating awareness of these issues across the community. But I don’t think we’re at a breaking point and I’m quite convinced that this isn’t the breaking point,” said Langel.




Another issue people have with Copilot is the potential for monetization, Ronacher explained. “It obviously requires infrastructure to run … The moment it turns into someone profiting on someone else’s community contributions, then it’s going to get really tricky. The commercial aspect here is opening some wounds that the open-source community has never really solved, which is that open source and commercial interests have trouble overlapping,” Ronacher said. Langel pointed out that large companies are already profiting from the open-source code made by developers who wrote that code for free by leveraging these open-source solutions. He also noted that these companies profit from user data that is produced on a daily basis, such as location data provided when people walk from place to place. “The location data that you’re giving Apple or Google when you’re walking around is to some degree, just as valuable as the open-source code produced by engineers as part of their professional work or their hobby,” said Langel.

In addition to monetization, Langel believes that the large scale of Copilot is another contributing factor in why so many developers are made uneasy by the tool. “Is this bothering us because open source moved from 40-100 engineers that really care about a piece of software working on it together and suddenly it’s like this is a new tool built on all of the software in the world to be used by all of the developers in the world and have potentially GitHub or Microsoft profit from it? And so the core question to me is one of scale and one of moving from tight-knit small communities to global and you’re essentially bumping into the same issues that are of concern elsewhere about globalization, about people having a hard time finding their place in this increasingly large space in which they’re operating,” said Langel.
The issues and controversy aside, Ronacher thinks that Copilot is largely a positive learning tool because it cuts down on the time a developer spends on problems and provides insights into what developers have done before. “The whole point of creation is to do new things and not to reinvent what somebody else already did. This idea that the actually valuable thing is doing the new thing, not something done by someone else already before,” said Ronacher. Still, Ronacher admits he doesn’t actually feel like in its current state Copilot is all that useful in the day-to-day for most activities a developer needs to do. It is valuable for certain use cases, like making small utilities that a developer only has to do once in a while where they might not remember how to do certain things. “[If there’s] something that you have done in the past and you have to do it again, but you haven’t done it for like a year, you might forget some of these things,” said Ronacher. “And it’s sufficiently good at helping you with that. So for instance, if you have to insert a bunch of rows into the database, you might have forgotten how the database driver’s API is. Typically you go to the documentation to do that, but GitHub Copilot for the most part is actually able to autocomplete a whole bunch of relatively simple statements so that you actually don’t have to go to the documentation. And I think for all of these purposes it’s really good, and for all of these purposes it also doesn’t generate code that’s copyrightable because it’s highly specific to the situation.”
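As a concrete picture of the “insert a bunch of rows” boilerplate Ronacher describes, here is a hedged sketch in Rust; the rusqlite crate, table name, and columns are assumptions chosen for the example, not anything referenced in the article.

    // A sketch of the repetitive database-insert boilerplate described above.
    // The rusqlite crate, the "users" table, and its columns are assumptions
    // for this example only.
    use rusqlite::{params, Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)", [])?;

        // The part a developer rarely remembers exactly: prepare the statement
        // once, then bind each row's values.
        let rows = [(1, "Ada"), (2, "Grace"), (3, "Linus")];
        let mut stmt = conn.prepare("INSERT INTO users (id, name) VALUES (?1, ?2)")?;
        for (id, name) in rows {
            stmt.execute(params![id, name])?;
        }
        Ok(())
    }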


INDUSTRY SPOTLIGHT

Empower developers for broader role
Content provided by SD Times and Sonatype

As companies steadily move toward increased agility, the software supply chain can no longer afford to follow the old assembly-line model: Specialists who once focused their efforts solely on developing code have seen their roles expand to that of generalist. With governance, security and quality assurance professionals less commonplace in the industry, developers now integrate their code in an environment where compliance, security and problem-solving not only rest on their shoulders but need to be expedited across the software development life cycle.
“It is almost the inverse of the industrial revolution in some way,” says Brian Fox, chief technology officer of Sonatype, which specializes in software supply chain management. “What that means is that increasingly the developers…are the ones defining the architecture.” Ultimately that means the developer needs the capability to determine upfront whether the framework is compatible with the license policy, with security and with other requirements. “Everything gets more real time and the people doing the work have to be empowered to make those decisions,” said Fox. “They need to be empowered with the information to make the right decisions.”
That’s especially critical, he said, because these tasks are essential when software relies heavily on open-source components. The constant evolution of these pre-built third-party components can lead to vulnerabilities, generating risks to application security. Without the proper smart tools to identify code quality, to flag vulnerabilities and to fix them in a way that is policy-compliant — functions that can be accomplished automatically — developers may be unable to track or fix any of these issues and still meet deadlines — if at all.

‘Developers will be naturally suspicious of anything coming from outside. They have a long history of being burned by bad tools.’
— Brian Fox, CTO, Sonatype

By being integrated into the feedback loop, the tools create the safety net right from the start. Machine learning can bring results such as a download being intercepted and found noncompliant with policy or a download discovered to be potentially malicious. The same is true for new releases of the components: They may have elements that appear suspicious or originate from a part of the world where such releases are of a questionable nature, Fox said. The feedback message that “this transaction is not characteristic for you” blocks the download and prevents its use.
The advent of these capabilities highlights even more how inefficient the older model of software development was because, for one thing, those processes customarily used to scan code would focus solely on the custom code, the smaller portion of the code base. The scans would not take into account anything open source, which could account for as much as 80 percent of the code base, said Fox. To make matters worse, he said, legal, security and other professionals within the organization were usually unaware that this issue even existed — even if the developers themselves did. “In 2011 or so when we were really starting to solve this problem, we had financial organizations downloading 60,000 components a year and we talked to them and said ‘we see you are using a lot of open source.’ They said they weren’t using open source. They were unaware they were using open source, not recognizing that in the banking training algorithm platform they turned out, 80 percent of that code was open source.”
In the years that followed, progressive organizations have come to recognize that the legacy model did not work and the best solution was to turn directly to the developer, said Fox. “Forward-leaning organizations are starting to look at this as proper dependency management, not just picking good frameworks but considering the legal and quality issues, all the way down the dependency stack,” he said. The idea of bringing everything to the developer’s domain in a cohesive, integrated way is finally starting to take hold, he said. This also accepts the reality that open-source libraries can — and should — continue to be a source of efficiency without becoming a source of compromise or threat. It also provides better insurance against common mode failure. Making these better choices upfront means doing less work later, he said. Fox acknowledges that such a change in the model relies heavily on buy-in from the developers who will, of course, necessarily be taking on those additional responsibilities. “Developers will be naturally suspicious of anything coming from outside. They have a long history of being burned by bad tools,” he said. He believes, however, that developers want to solve the overall problems and that they care about what they’re doing. It is a big plus for developers to not have to wait six weeks for the go-ahead from someone in another building, or perhaps another country, before they can proceed, he said. And ultimately, he said: “They’re going to have less stuff to chase down later.”



Because software supply chain security should feel like a no-brainer.

Continuously monitor open source risk at every stage of the development life cycle within the pipeline and development tools you’re already using.

Lifecycle is made for developers. You make hundreds of decisions every day to harden your supply chain. You expect interruptions. They’re part of your work. The problem is when they get in the way of your work. We tell you what you need to know to build safely and efficiently — and we tell you when you need to know it. Then we quietly continue our work, and allow you to do the same.

With Nexus Lifecycle, devs can: Control open source risk without switching tools. Inform your decisions with the best intelligence database out there. Get instant feedback in Source Code Management. Automatically generate a Software Bill of Materials. Enforce open source policies without sacrificing speed.

See for yourself: www.sonatype.com/SDTimes



DEVOPS WATCH

HCL brings DevOps portfolio to the cloud Tool creation platform, as-a-service offering now cloud-native BY DAVID RUBINSTEIN

HCL Software has made its DevOps product portfolio cloud-ready, and has introduced HCL SoFy, a cloud-native platform for creating tool solutions, and HCL Now, a cloud-native-as-a-service offering. The work is the result of a major investment HCL made across its entire product portfolio to modernize its solutions for the cloud, according to Alex Mulholland, chief platform architect at HCL Software. “We wanted to take the software our clients have already made investments in and make it easy to use” in modern cloud environments, she explained. The key to this effort, Mulholland

said, was leveraging Kubernetes and Helm charts. Each product in the HCL Software portfolio has been packaged in containers, with UIs and dashboards, wrapped as Helm charts. “SoFy is the platform that brings the portfolio together as services,” she said. “Each item in the [HCL Software] catalog is already a Helm chart, with pre-reqs all wrapped up with configuration so they run out of the box.” Because this is based on native Kubernetes, it is cloud-agnostic, providing flexibility for customers to run the software without cloud vendor lockin, she explained. SoFy allows customers to look at

what's available and create their own solution packages; those packages can be deployed to a sandbox and include demos and step-by-step instructions for use, Mulholland noted. HCL Now, the service offering, lets users run the HCL software wherever they want to run it, with access to log files, containers, and more. HCL SoFy, which encompasses 26 of the company's products, is available as a 30-day trial, through a one-click install that can be up and running in an hour. Mulholland also pointed out that cloud-native software helps customers overcome the pain points of version upgrades, as it's all done behind the scenes. z

The potential of the DevOps fourth wave
BY KATIE DEE

A fourth wave of DevOps is expected to tie together earlier tools into a platform that covers every phase of the DevOps life cycle, according to Sid Sijbrandij, the co-founder and CEO of GitLab. Sijbrandij, delivering the keynote address at the GitLab Commit Virtual conference, said the platform will bring together development, operations, and security while allowing groups to plan, build, secure, and deploy important software.

From 2011, when GitLab was first being created, until today, the culture around DevOps has changed, he said. "At GitLab we see four phases in the adoption of DevOps tools over time," he began. Sijbrandij defined these phases as: Silo DevOps, where each team would select their own tools. The problem with this was when the teams tried to collaborate, they were not familiar with the tools of the other team. Fragmented DevOps, where organizations employed the same specific set of tools across each stage of the DevOps lifecycle. This granted teams the ability to collaborate but because the tools were not connected it proved difficult to move through the lifecycle. DIY DevOps, where organizations manually integrated their DevOps point solutions. The problem here was that these tools were not designed to use the same concepts, and so they never fit quite right, resulting in an enormous effort to uphold. And now, the DevOps Platform Era.

'I believe that machine learning will be critical in making the DevOps workflow faster.' —Sid Sijbrandij, GitLab

Sijbrandij discussed the future of DevOps and the three things he believes to be the most important integrations into the platform. "I believe that a platform solution that integrates security is the future," Sijbrandij said. He went on to discuss how security built into a platform results in optimal protection without sacrificing any speed. The second important integration, he said, is machine learning. "I believe that machine learning will be critical in making the DevOps workflow faster," he said. He explained that GitLab is focused on implementing machine learning in order to reduce friction in the development process. Thirdly, he spoke about DevOps Platform adoption and the way he sees it accelerating in the future. "By 2023, 40% of organizations will have switched from multiple point solutions to a platform in order to streamline application delivery," he concluded. z



Time to go with the flow! Organizations today are turning to value streams to gauge the effectiveness of their work, reduce wait times and eliminate bottlenecks in their processes. Most importantly, they want to know: Is our software delivering value to our customers, and to us? VSM Times is a new community resource portal from the editors of SD Times, providing guides, tutorials and more on the subject of Value Stream Management.

Sign up today to stay ahead of the VSM curve! www.vsmtimes.com



BY JENNA SARGENT

Low-code solutions have proven their value in the industry. Many companies are seeing reduced time to production and lower development costs, but relying on these low-code solutions may mean sacrificing quality due to a lack of testing. "My belief is that the developer side testing, the quality assurance team type testing that is traditionally done is not done with low code tools," said Jason Beres, head of UX tools and senior vice president of Indigo.Design at Infragistics, a company that provides UI components. "It's just like 'hey, this tool is going to save us time, where can we cut out some of this time?' Because the tool is template driven, it should deliver the experience that we want, so we're going to cut out that testing and then where we end up getting into trouble is that the code generated isn't testable code, it's not code that your architects could pick up and start to look at like they would if it was a regular .NET app or Java app or Angular app and really understand it without having some in-depth knowledge of what that low code provider is doing to generate those screens and that content." One might assume that since the low-code vendor presumably has already tested its components, that might take some pressure off of testers. But Beres says that most testing — UX testing, unit testing, functional testing, etc. — gets ignored by low-code vendors. "What they're doing is saying 'use our tool and we'll provide all the stuff for you, you don't have to worry about all those other things,'" Beres said. Eran Kinsbruner, DevOps chief evangelist at testing company Perforce Software, also cautioned against relying too heavily on the assumption that the low-code vendor has tested their components. "Low-code components, whether

they were tested well or not, they are still merged into the code or pipeline of your application, which you are responsible for," said Kinsbruner. "So let's say from an end user perspective, if there is a bug in your application in production, the end user doesn't care if you use the vendor to autogenerate some of the code or you do it yourself. Quality is quality. At the end of the day it's your responsibility, so you cannot really rely on the vendor doing the testing for you." He does tend to give tool vendors the benefit of the doubt to believe that they provide a baseline of a component that will work out of the box. "But when you merge it and integrate it into your own business flows, it becomes your responsibility and you need to make sure that you have almost full control of the end-to-end user scenarios," said Kinsbruner. Security is also a consideration. "Think about it, let's say you are a bank and your application has its own backend services, databases and personal knowledge of your end users, how do you know these components were well-tested and they do not have any leaks or exposures," Kinsbruner said. Another problem with relying on the low-code vendor to do the testing is that even if they've tested their components well, when an issue does arise, it's harder for developers to resolve those issues because it's difficult to get into the code. When tests are run and crashes or bugs arise, testers need the ability to go in and understand those issues, which requires a level of visibility. Kinsbruner refers to this as a "black box" of testing because developers don't really have access to the source code of the tool that generated this code. "As the application developer or the product manager, you own the product


regardless of where the code came from and how it was created," said Kinsbruner. "So testing is testing and bugs are bugs. The only thing regarding the process is to understand the root cause to debug these failures and remediate them properly for future analyses. So let's say you are looking at code that was originated by this low-code tool and you are finding bugs, sometimes you can fix them. Sometimes these are bugs that you are inheriting from some areas of code that you cannot really control. How do you decide whether to release or not to release? How do you decide the priority and severity of these bugs, especially when there can be a mix of bugs, security bugs, bugs that you cannot really fix because it's not your own code that created the root cause of these bugs?" Max de Lavenne, CEO of Buildable, a custom software development firm, added that when troubleshooting, it's best to avoid actually modifying the low-code solution itself. This is because when working with a low-code tool, it's presumably been selected because it fits in with their project and end goals. If that turns out to not be the case and the low-code solution starts clashing with those goals, then it may be best to start shopping around for a replacement. "To use a metaphor: if you take an apple and keep trying to morph it into a pear, maybe that's a sign that you should've just bought a pear to begin with," said de Lavenne.

Testers should help with selection According to Kinsbruner, one of the pillars of a successful low-code testing process is actually involving QA in the process of selecting low-code vendors. Testers should ideally know which APIs are being inherited, what the dependencies are, and what other technologies are used in the creation of the code. "The test automation architect or the QA manager needs to be in the know of exactly the architecture and


Citizen developers fall behind in more complex testing Ironically, the draw of low-code for many companies is that it allows anyone to build applications, not just developers. But when bugs arise citizen developers might not have the expertise needed to resolve those issues. “Low-code solutions that are super accessible for the end-user often feature code that’s highly optimized or complicated for an inexperienced coder to read,” said Max de Lavenne, CEO of Buildable, a custom software development firm. “Low-code builds will likely use display or optimization techniques that leverage HTML and CSS to their full extent, which could be more than the average programmer could read. This is especially true for low-code used in database engineering and API connections. So while you don’t need a specialized person to test low-code builds, you do want to bring your A-team.” According to Isaac Gould, research manager at Nucleus Research, a technology analyst firm, a citizen developer should be able to handle testing of simple workflows. Eran Kinsbruner, DevOps chief evangelist at testing company Perforce Software, noted that there could be issues when more advanced tests are needed. He believes that scriptless testing tools may be the answer to enabling citizen developers to test more complex workflows. “You might be able to close the gap by visualizing, modeling the application and the test and let the tool create some of these test cases for you,” he said. Some low-code vendors might be able to set themselves apart by having good educational tools for citizen developers, Gould said. “I think the idea of testing is that you’re always gonna have errors in your code when you’re building an application,” said Gould. “The question is that with these low code platforms is in determining whether or not your error occurs in the areas where you did have to go into coding to finding more complex relationships where you needed a real developer to do it, or are you just putting the components together incorrectly. So I think having your citizen developers trained on those platforms is vitally important.” z

the tool stack that is being used to build the code that they are supposed to bless at the end of the day prior to release," said Kinsbruner. Unfortunately, from what Kinsbruner has observed, most testers are kept in the dark about that process. "The tester and manager should know what the databases are and what APIs are being used or being called so they can build the test environments for their tools and their process. They also need to know about these kinds of tools and methods as well. I don't think that's the case today in most cases," said Kinsbruner. According to Infragistics' Beres, another challenge during the selection process is making sure the low-code tool fits into the dev team's current way of working. Sometimes low-code tools are using technologies that are nonstandard for the development team, which leads to complications.

“If your dev team is an Angular team then no matter what, these are things you just can’t change in the world,” said Beres. “They’re not going to switch to something because the design team says so. They’re not going to switch to something because an executive says let’s use this tool to save time. They’re just going to use what they know. Just like they choose .NET or React, even though the outcome might be the same, it’s what the developers are used to using. So if you’re an Angular dev team and you know how to test and debug and run build scripts and run test scripts and run automation through GitHub or some other tool, that’s how you’re going to get the benefit of your accelerated time to market. If your lowcode doesn’t integrate well with that or if it’s using something non-standard, not within the toolchain that you’re used to using, then that’s where they will fall down.” z


Infrastructure management going extinct with serverless BY JAKUB LEWKOWICZ

It's no surprise that organizations are trying to do more with less. In the case of managing infrastructure, they're in fact trying to do much more in the area of provisioning software — not by lessening it but by eliminating infrastructure altogether, through the use of serverless technology. According to Jeffrey Hammond, the vice president and principal analyst at Forrester, one in four developers are now regularly deploying to public clouds using serverless technology, going up from 19% to 24% since last year. This compares to 28% of respondents that said that they are regularly deploying with containers. The main reason containers are a little bit ahead is that when organizations are trying to modernize existing apps, it's a little bit easier to go from a virtual machine to a container than it is to embrace serverless architecture, especially if one is using something like AWS Lambda, which requires writing applications that are stateless, according to Hammond. Also, the recently released Shift to Serverless survey conducted by the cloud-native programming platform provider Lightbend found that 83% of respondents said they were extremely satisfied with their serverless application development solutions. However, only a little over half of the organizations expect that making the switch to serverless will be easy. "If I just basically want to run my code and you worry about scaling it then a serverless approach is a very effective way to go. If I don't want to worry about having to size my database, if I just want to be able to use it as I need it, serverless extensions for things like Aurora make that a lot easier,"

Hammond said. "So basically as a developer, when I want to work at a higher level, when I have a very spiky workload, when I don't particularly care to tune my infrastructure, I'd rather just focus on solving my business problem, a serverless approach is the way to go." While serverless is seeing a pickup in new domains, Doug Davis, who heads up the CNCF Serverless Working Group and is an architect and product manager at IBM Cloud Code Engine, said that the main change in serverless is not in the technology itself, but rather providers are thinking of new ways to reel people in to their platforms. "Serverless is what it is. It's finer-grain microservices, it's scale to zero, it's pay-as-you-go, ignore the infrastructure and all that good stuff. What I think might be sort of new in the community at large is more just, people are still trying to find the right way to expose that to people," Davis said. "But from the technology perspective, I'm not sure I see a whole lot necessarily changing from that per-



spective because I don't think there's a whole lot that you can change right now."
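Hammond's point that a platform like AWS Lambda pushes developers toward stateless code is easier to see in a concrete handler. The sketch below is a minimal, illustrative Python Lambda handler; the event fields and the tax calculation are hypothetical, and the only real constraint it demonstrates is that nothing is kept in process memory between invocations.

```python
# Minimal sketch of a stateless AWS Lambda handler; event shape is an assumption.
import json

def handler(event, context):
    # Everything the function needs arrives in the event (or would come from an
    # external store); no module-level caches or session state are relied on.
    order_id = event.get("orderId")
    amount = float(event.get("amount", 0))

    # Derive the result purely from the inputs (illustrative 8% tax).
    total = round(amount * 1.08, 2)

    # State that must outlive this invocation would be written to an external
    # service such as a database or queue, not kept in memory here.
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order_id, "total": total}),
    }

if __name__ == "__main__":
    # Local smoke test with a sample event and no Lambda context object.
    print(handler({"orderId": "A-100", "amount": "19.99"}, None))
```

Because each invocation derives its output purely from its inputs, the platform can run zero, one or a thousand copies of the function without coordination, which is what makes the scale-to-zero, pay-per-use model possible.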

Abstracting away Kubernetes The major appeal for many organizations moving to serverless is just that they want more and more of the infrastructure abstracted away from them. While Kubernetes revolutionized the way infrastructure is handled, many want to go further, Davis explained. “As good as Kubernetes is from a feature perspective, I don’t think most people will say Kubernetes is easy to use. It abstracts the infrastructure, but then it presents you with different infrastructure,” Davis said. While people don’t need to know which VM they’re using with Kubernetes, they still have to know about nodes, and even though they don’t need to know which load balancer they’re using, there’s always managing the load balancer. “People are realizing not only do I not want to worry about the infrastructure from a service perspective, but I also don’t want to worry about it from a Kubernetes perspective,” Davis said. “I just want to hand you my container image or hand you my source code and you go run it for me all. I’ll tweak some little knobs to tell you what fine-tuning I want to do on it. That’s why I think projects like Knative are kind of popular, not just because yeah, it’s a new flavor of serverless, but it hides Kubernetes.” Davis said there needs to be a new way to present it as hiding the infrastructure, going abstract in a way, and

just handing over the workload in whatever form is desired, rather than getting bogged down thinking, is this serverless, platform-as-a-service, or container-as-a-service. However, Arun Chandrasekaran, a distinguished vice president and analyst at Gartner, said that whereas serverless abstracts more things away from the user, things like containers and Kubernetes are more open-source oriented so the barrier to entry within the enterprise is low. Serverless can be viewed as a little bit of a "black box," and a lot of the functional platforms today also tend to be a little proprietary to those vendors. "So serverless has some advantages in terms of the elasticity, in terms of the abstraction that it provides, in terms of the low operational overhead to the developers. But on the flip side, your application needs to fit into an event-driven pattern in many cases to be fit for using serverless functions. Serverless can be a little opaque compared to running things like containers," Chandrasekaran said. "I kind of think of serverless and containers as being that there are some overlapping use cases, but I think by and large, they address very different requirements for customers at this point in time." Davis said that some decision-makers are still wary of relinquishing control over their infrastructure, because in the past that often equated to reduced functionality. But with the way that serverless stands now, users won't be losing functionality; instead, they'll be able to access it in a more streamlined way. "I don't think they buy that argument yet and I think they're skeptical. It's going to take time for them to believe,"


Davis said. "This really is a fully-featured Kubernetes under the covers." Other challenges that stifle adoption include the difficulty that developers have with changing to work asynchronously. Also, some would like to have more control over their runtime, including the autoscaling, security, and tenancy models, according to Forrester's Hammond. Hammond added that he is starting to see a bit of an intersection between serverless and containers, but the main thing that sets serverless apart is its auto-scaling features.

Vendors are defining serverless Serverless as a term is expanding and some cloud vendors have started to call all services where one doesn't have to provision or manage the infrastructure as serverless. Even though these services are not serverless functions, one could argue that they're broadly part of serverless computing, Gartner's Chandrasekaran explained. For example, you have services like Athena, which is an interactive query service from Amazon, or Fargate, which is a way to run containers, but you're not operating the container environment. However, Roman Shaposhnik, the co-founder and VP of product and strategy at ZEDEDA, as well as a member of the board of directors for Linux Foundation Edge, and vice president of the Legal Affairs Committee at the Apache Software Foundation, said that the whole term of serverless is a bit confusing at the moment and that people typically mean two different things whenever they talk about serverless. Clearly defining the technology is essential to spark interest in more people. "Google has these two services and they kind of very confusingly call them serverless in both cases. One is called Google Functions and the other one is Google Run and people are just constantly confused. Google was such an interesting case for me because I for sure expected Google to at least unify around Knative. Their Google Cloud Functions


is completely separate, and they don't seem to be interested in running it as an open-source project," Shaposhnik said. "This is very emblematic of how the industry is actually confused. I feel like this is the biggest threat to adoption." This large basket of products has created an API sprawl rather than a tool sprawl because the public cloud typically offers so much that if developers wanted to replicate all of this in an open-source serverless offering like OpenWhisk by the Apache Software Foundation, they really have to build a lot of things that they just have no interest in building. "This is not even because vendors are evil. It's just because only vendors can give you the full sort of gamut of the APIs that would be meaningful to what they are actually offering you, because like 90% of their APIs are closed-source and proprietary anyway," Shaposhnik said. "And if you want to make them effective, well, you might as well use a proprietary serverless platform. Like what's the big deal, right?" Serverless commits users to a certain viewpoint that not all might necessarily enjoy. If companies are doing a lot of hybrid work, if they need to support multiple public clouds and especially if they have some deployments in a pri-

vate data center, it can get painful pretty quickly, Shaposhnik explained. OpenFaaS, an open-source framework and infrastructure preparation system for building serverless applications, is trying to solve the niche of figuring out the sweet spot of dealing with the difficult aspects. “If you have enough of those easy things that you can automate, then you should probably use OpenFaaS, but everything else actually starts making less sense because if your deployment is super heterogeneous, you are not really ready for serverless,” Shaposhnik said.

Serverless adoption slow In general, there is not much uptick with open-source serverless platforms because they need to first find a great environment to be embedded in. "Basically at this point, it is a bit of a solution looking for a problem, and until that bigger environment into which it can be embedded successfully appears, I don't think it will be very interesting." In the serverless space, proprietary vendor-specific solutions are the ones that are pushing the space forward. "I would say open-source is not as compelling as in some other spaces, and the reason is I think a lot of developers prefer open-source not necessarily

Serverless is the architecture for volatility Serverless now seems to be the architecture for volatility because of the business uncertainty brought on by the pandemic. "Everyone seems to be talking about scaling up, but there's this whole other aspect of what about if I need to scale down," Serverless Framework founder and CEO Austen Collins said. A lot of businesses that deal with events, sports, and anything that's in-person have had to scale down operations at a moment's notice due to shutdowns, and for those that work with serverless architecture, their operations can scale down without them having to do anything. The last 16 months have also seen a tremendous amount of employee turnover, especially in tech, so organizations are looking to adopt a way to be able to quickly onboard new hires by abstracting a lot of the infrastructure away, Collins added. "I think it's our customers that have had serverless architectures that don't require as much in-house expertise as running your own Kubernetes clusters that have really weathered this challenge better than anyone else," Collins said. "Now we can see the differences, whenever there's a mandate to shut down different types of businesses in the usage of people, applications and the scaling down, scaling up when things are opening up again is immediate and they don't have to do anything. The decision-makers are often now citing these exact concerns." z

because it’s free as in freedom but because it’s free as in beer,” Forrester’s Hammond said. Because with most functions, organizations pay by the gigabyte second, now developers seem to be able to experiment and prototype and prove value at very low costs. And most of them seem to be willing to pay for that in order to have all the infrastructure managed for them. “So you do see some open source here, but it’s not necessarily at the same level as something like Kafka or Postgres SQL or any of those sorts of opensource libraries,” Hammond said. With so many functionalities to choose from, some organizations are looking to serverless frameworks to help manage how to set up the infrastructure. Serverless frameworks can deploy all the serverless infrastructure needed; it deploys one’s code and infrastructure via a simpler abstract experience. In other words, “you don’t need to be an infrastructure expert to deploy a serverless architecture on AWS if you use these serverless frameworks,” Austen Collins, the founder and CEO of the Serverless Framework, said. Collins added that the Serverless Framework that he heads has seen a massive increase in usage over the duration of the pandemic, starting at 12 million downloads at the beginning of 2020 and now at 26 million. “I think a big difference there between us and a Terraform project is developers use us. They really like Serverless Framework because it helps them deliver applications where Terraform is very much focused on just the hardcore infrastructure side and used by a lot of Ops teams,” Collins said. The growth in the framework can be attributed to the expanding use cases of serverless and every time that there is a new infrastructure as a service (IaaS) offering. “The cloud really has nowhere else to go other than in a more serverless direction,” Collins added. Many organizations are also realizing that they’re not going to be able to keep up with the hyper-competitive, innovative era if they’re trying to maintain and scale their software all by themselves. “The key difference that developers



and teams will have to understand is that number one, it lives exclusively on the cloud so you’re using cloud services. You can’t really spin up this architecture on your machine as easily. And also the development workflow is different, and this is one big value of Serverless Framework,” Collins said. “But, once you pass that hurdle, you’ve got an architecture with the lowest overhead out of anything else on the market right now.”
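Hammond's observation that functions are billed by the gigabyte-second helps explain why prototyping on serverless costs so little. The arithmetic below is a rough, illustrative calculation; the rate is a made-up placeholder rather than any provider's published price, and real bills also include per-request fees and free allowances.

```python
# Back-of-the-envelope GB-second calculation; the price is purely illustrative.
MEMORY_GB = 0.128                 # a 128 MB function
DURATION_SECONDS = 0.2            # 200 ms per invocation
INVOCATIONS = 1_000_000           # a month of prototype traffic
PRICE_PER_GB_SECOND = 0.0000167   # illustrative rate, not a quoted price

gb_seconds = MEMORY_GB * DURATION_SECONDS * INVOCATIONS
compute_cost = gb_seconds * PRICE_PER_GB_SECOND

print(f"{gb_seconds:,.0f} GB-seconds, roughly ${compute_cost:.2f} of compute")
# About 25,600 GB-seconds, well under a dollar at the illustrative rate, which
# is why experimenting and prototyping on serverless can cost so little.
```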

All eyes are on serverless at the edge The adoption of serverless has been broad-based, but the larger organizations tend to embrace it a bit more, especially if they need to provide a global reach to their software infrastructure and they don't want to do that on top of their own hardware, Forrester's Hammond explained. In the past year, the industry started to see more interest in edge and edge-oriented deployments, where customers wanted to apply some of these workloads in edge computing environments, according to Gartner's Chandrasekaran. This is evident in content delivery network (CDN) companies such as Cloudflare, Fastly, or Akamai, which are all bringing new serverless products to market that primarily focus on edge computing. "It's about scale-up, which is to really quickly scale and massively expand, but it's also about scaling down when data is not coming from IoT endpoints. I don't want to use the infrastructure and I want the resources to be de-provisioned," Chandrasekaran said. "Edge is all about rapid elasticity." Serverless compute running at the edge is a use case that could create new types of architectures, changing the way applications were previously built by processing compute closer to the end user for faster performance, according to Collins. "So an interesting example, this is just how we're leveraging it. We've got serverless.com is actually processed using Cloudflare workers in the edge. And it's all on one domain, but the different paths are pointing to different architectures. So it's the same domain, but we have compute running that looks


A serverless future: A tale of two companies Carla Diaz, the cofounder of Broadband Search, a company that aims to make it easier to find the best internet and television services in an area, has been looking at adopting a serverless architecture since it is now revamping its digital plans. "Since most of the team will be working from home rather than from the office, it doesn't make sense to continue hosting servers when adopting a cloud-based infrastructure. Overall, that is the main appeal of going serverless, especially if you are beginning to turn your work environment into a hybrid environment," Diaz said. Overall, the cost of maintenance and the need to commit to downtime are just some of the things the company doesn't need to worry about anymore with the scalability of the serverless architecture. Another reason why Broadband Search is interested in going to the cloud-based system is the company doesn't have to worry about the costs of not only having to buy more hardware, which can already be quite expensive, but the costs of maintaining more equipment and possible downtime if the integration is extensive. "By switching and removing the hardware component, the only real cost is to pay for the service which will host your data off-site and allow you to scale your business' IT needs either back or forward as needed," Diaz added. Dmitriy Yeryomin, a senior Golang developer at iTechArt Group, a one-stop source for custom software development, said that many of the 250-plus active projects within the company use serverless architecture. "This type of architecture is not needed in every use case, and you should fully envision your project before considering serverless, microservice, or monolith architecture," Yeryomin said. In terms of this company's projects, Yeryomin said it helps to divide up the system into fast coding and deploying sequences, to make their solution high-performance and easily scalable. "In terms of benefits, serverless applications are well-suited to deploying and redeploying to the cloud, while conveniently setting the environmental and security parameters," Yeryomin said. "I work mostly with AWS, and UI has perfect tools for monitoring and test service. Also local invokes is great for testing and debug services." However, the most challenging thing with serverless is time. When configuring a Lambda function, the bigger the execution, the more expensive it becomes. "You can't store the data inside more than the function works," Yeryomin explained. "So background jobs are not for serverless applications." z

at the path and forwards the request to different technology stacks. So one for our documentation, one for our landing pages, and whatnot," Collins said. "So there's a bunch of new architectural patterns that are opening up, thanks to running serverless in the edge." Another major trend that the serverless space has seen is the growth of product extension models for integrations. "If you've got a platform as a company and you want developers to extend it and use it and integrate it into their day-to-day work, the last thing you want to do is say, well now you've got to go stand up infrastructure on your own premises or in another public cloud provider, just so that you can take advantage of our APIs,"

Forrester’s Hammond said. The extensions also involve continued improvements to serverless functions that are adding more programming languages and trying to enhance the existing tooling in areas like security and monitoring. For those companies that are sold on a particular cloud and don’t really care about multicloud or whether Amazon is locking them in, for example, Shaposhnik said not using serverless would be foolish. “Serverless would give you a lot of bang for the buck effectively scripting and automating a lot of the things that are happening within the cloud,” Shaposhnik said. z


Buyers Guide

To release, or not to release? BY DAVID RUBINSTEIN

Release automation can help organizations create repeatable steps to deployment, but human decision-making is still critical for many

New software features and updates are the lifeblood of many organizations today, so the faster they can bring them out, the more competitive and customer-responsive they will be. One of the techniques progressive organizations are using to achieve that goal is release automation. Understanding what goes into release automation is key to realizing its benefits. It’s easy to think of release automation as going the last mile into production, but automation throughout the dev/test/deploy cycle is critical to true release automation. Moving code to those dev/test environments, including performance and acceptance testing, as well as deploying feature branches and deploying changes to the infrastructure or the database, can all be automated. “For many companies, production is the environment they deploy to less often; a team might deploy dozens of times a day to dev/test environments, and only once or twice a week to production,” said Paul Stovell, founder and CEO of release automation software provider Octopus Deploy. But, he cautioned, the deployments to production are the ones that carry the most risk. The goal to narrow that risk is to share “as much as possible between your pre-production and production deployments, so that each time you deploy to dev or test, you are gaining confidence that your production deployment process will work,” he said.

The complexity of modern apps Deploying and releasing software was more straightforward, albeit slower, back when the waterfall development method ruled the world. It was just one big deployment, fully tested and ready to go. But microservices and containers have added a great deal of complexity to deployment. Mike Zorn, a software engineer at feature management platform provider LaunchDarkly, said, “Deploying one application through a fleet of servers is not a trivial thing. Microservices have really magnified deployment complexity, because now you have different versions being shipped on different schedules. And they depend on one another. So, even if your deployment process is highly automated, you need to have some kind of ability to identify when it’s safe to turn something on. You might need to make sure this version of the microservice is running before you can release a feature on a different microservice, because it depends on that thing being present.” Tracking those dependencies, he said, can be “a real nightmare.” “In my mind,” he added, “release automation really is about establishing a repeatable release process that has the right decision-makers involved, and presents them with the data they need to make that decision on whether or not to release something.” Octopus Deploy’s Stovell describes two aspects of release automation: the high-level orchestration of the release



process, and that last mile into production. The orchestration effort should involve modeling the way a release progresses between environments, which steps in the process can be skipped, environment configuration, promotion rules, and communication around all of that. The last-mile piece, Stovell said, more simply involves “how the bits make their way from the release automation server or artifact repository onto the running production server.” Under this definition, microservices don’t really change the last-mile deployment, but they add a lot of complexity to the orchestration aspect. “I like to say the microservices moves inter-team communication from a compile-time problem to a runtime problem,” Stovell said. “By this, I mean with a monolithic architecture, deployment (runtime) is very straightforward, but the downside is that developers have to talk to each other a lot more while they write the code — compile time.” Meanwhile, microservices and containers — particularly platforms like Kubernetes — help a lot with the runtime deployment, but don’t help much with the orchestration end, he noted.

Mitigating risk With this added complexity, it's important to be able to respond quickly if a release into production didn't function as planned or created a vulnerability. Canary releases, blue/green deployments and feature flags are effective methods to reduce risk. Because of the nature of microservice dependencies, LaunchDarkly's Zorn noted that if you deploy the services in the wrong order, with one going live before another dependency, "you could have created a pretty nasty thing." Being able to turn off the feature can be beneficial in that architecture, he said. "You can just roll it back to the last successful release that you had, and figure out and wait until the dependencies all catch up with each other, and then you can rerelease," Zorn explained. Because of its access to production servers, the release automation system has much more fine-grained access


What will it take to achieve continuous deployments to production without human involvement? Before a team is ready for continuous deployments to production without human involvement, Octopus Deploy founder and CEO Paul Stovell said he would expect to see: 1. Zero downtime deployments — release automation tools can help with this, but it also requires architectural and infrastructure changes. 2. A high level of automated test coverage — unit, integration and automated UI tests. This is not a release automation issue. 3. Sophisticated monitoring and alerting that can detect when a release is bad. There are plenty of monitoring tools available, but “out of the box” they are all noisy and can lead to false positives. It requires significant and ongoing time investment from operations teams to detect the right errors, ignore the wrong errors, and connect that feedback loop to the release automation process.

CI/CD vs. release automation When developers think about release automation, Octopus Deploy's Stovell said they probably assume their CI server should handle it. "It's really the only team-level automation tool that most developers are familiar with." But, he pointed out, there are fundamentally distinct problems between continuous integration and releasing the code. "A common setup we see is a software team using their CI/CD server to attempt to automate every part of the software delivery process, and then operations teams using different automation tools to automate all of the infrastructure and runbook automation," he said. "To me, that's a silo that doesn't make any sense. I think it's part of the historical evolution — developers deciding to automate deployments, and using their CI system to do it." So, riffing on a popular expression, Stovell added, "When all you have is a Jenkins-shaped hammer, every automation task looks like a Jenkins job." A good CI system, he explained, needs to know about things like source control, branches and unit testing. A good CI process is designed to fail fast — "There's no point trying to run unit tests if the compile failed, and no point compiling if resolving dependencies failed," he said — so the process is optimized to be a fast feedback loop to developers. On the other hand, a good release automation system has a completely different problem space to think about, Stovell continued. It needs to model all of the environments a team deploys to, and needs patterns like blue/green, canary, and rolling deployments. And failing fast is not the goal: "If you have 20 front-end web servers and one can't be deployed to, you can't fail the deployment. Remove it from the load balancer and keep going, then deal with it after." These problem areas that release automation tools have to worry about are the same problem areas that IT operations automation tools need to think about, he said. "They both need to model environments. They both need production secrets. They both need to be highly secure and have solid access control and auditing," he said. "Release automation interacts with load balancers to add/remove servers during a deployment; operations automation interacts with load balancers to add/remove servers to install system updates. There's a lot of commonality, but the evolution of the tooling stacks means companies often use different tools." And it's the use of these different tool sets that maintains silos that organizations say they need to break down. "We talk about DevOps a lot - we gather all the developers, and all the ops people, into a room and we hold a meeting, and we talk about how we're going to collaborate more," Stovell said. "And then both groups go back to their desks, and use two completely different tool sets, to automate the same types of tasks. That makes no sense to me. It's the reason we introduced Runbooks to Octopus Deploy recently — as far as we know, Octopus Deploy is the first release automation tool to explicitly model both Release Automation and Runbook Automation in the same tool." — David Rubinstein z
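Stovell's fail-fast distinction is easier to see in miniature. The sketch below is a hypothetical rolling deployment loop, not any product's actual behavior: when one of the 20 front-end servers cannot be updated, it is pulled from rotation and the release keeps going instead of aborting, which is the opposite of what a good CI pipeline would do.

```python
# Illustrative rolling deployment; deploy_to() and the load balancer call are
# hypothetical stand-ins for real infrastructure operations.
def deploy_to(server: str, version: str) -> bool:
    """Pretend to push a build to one server; return False on failure."""
    print(f"deploying {version} to {server}")
    return server != "web-07"  # simulate a single unreachable host

def remove_from_load_balancer(server: str) -> None:
    print(f"removing {server} from the load balancer for follow-up")

def rolling_deploy(servers: list[str], version: str) -> list[str]:
    failed = []
    for server in servers:
        if deploy_to(server, version):
            continue
        # A CI job would fail fast here; a release tool degrades gracefully.
        remove_from_load_balancer(server)
        failed.append(server)
    return failed

if __name__ == "__main__":
    fleet = [f"web-{i:02d}" for i in range(1, 21)]  # 20 front-end servers
    leftovers = rolling_deploy(fleet, "2021.9.1")
    print(f"deployment finished; {len(leftovers)} server(s) need attention: {leftovers}")
```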


control and auditing requirements, Stovell said. And LaunchDarkly’s Zorn added, “Getting stuff on the servers, getting code tested and all that, that’s just pretty well established but then the actual turning the feature on for the customers is where there’s a large degree of control that you can have that is often not capitalized upon.”
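The decoupling Zorn describes, where the new code is already deployed but exposure to customers is a runtime decision, can be sketched with a generic flag check. The example below is illustrative only and is not the LaunchDarkly SDK; the flag name, the percentage store and the hashing scheme are assumptions chosen to keep the idea self-contained.

```python
# Generic feature-flag gate; flag names and rollout values are hypothetical.
import hashlib

FLAGS = {
    # flag name -> percentage of users who should see the new code path
    "new-checkout-flow": 10,
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into the rollout percentage."""
    rollout = FLAGS.get(flag, 0)  # setting this to 0 acts as a kill switch
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout

def checkout(user_id: str) -> str:
    if is_enabled("new-checkout-flow", user_id):
        return "new checkout path"   # freshly deployed code
    return "existing checkout path"  # stable fallback stays in place

if __name__ == "__main__":
    served = [checkout(f"user-{i}") for i in range(1000)]
    print(f"{served.count('new checkout path')} of 1000 users saw the new path")
```

Setting the percentage back to zero acts as the rollback: it happens in configuration, with no redeployment and no waiting for dependent services to catch up.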

Making the decisions To many, automation is a double-edged sword. While there are some big benefits to release automation — creating repeatable processes, taking mundane tasks away so engineers can work on higher-value projects — many organizations still want people and not machines to make the final go/no-go decisions. Zorn explained: “Today, you can say, ‘Okay, I’m ready to roll this feature out.’ But I want to do it in a staggered way. So, let’s do like 10% on Monday, 1% on Tuesday, 50% on Wednesday, and so on, for like, three weeks or something. And then each day, you’re going to be checking the metrics, and monitoring things that are important to you to make sure that it’s safe to proceed. In that release, that’s a procedure that I’ve personally gone through and done a number of times. But one thing that I’m looking toward is a world where the tool that is executing that schedule, adds the ability to look at those metrics on its own, and make a decision like, ‘Oh, this page load time is starting to deteriorate with the people that are getting a new feature, let’s halt the release automatically.’ That kind of thing is a solid use case. In these cases, the computer is in a better position to decide.” But the ability for these automation tools to make decisions is still fairly new and niche. “We have an engineer that came from Twitter recently, and they had a mechanism that could do some of this stuff. But that’s the only company that I’m aware of that is really doing stuff like that. So it’s pretty advanced. But it is something that a lot of people maybe want. Their dream is that an engineer can go on vacation, while their features are rolled out. And if some-


How does your solution address the issue of release automation? Paul Stovell, founder and CTO, Octopus Deploy DevOps is about bringing teams together to collaborate. But when it comes to automation, the tooling is fragmented. Most CI/CD tools do a basic job of deployments and are designed for developers. Operations teams, meanwhile, need to use completely different tooling to automate the runbooks that keep the software running. There’s no reuse and no single source of truth. The tooling reinforces the silos and discourages sharing and collaboration, and forces a duplication of effort to connect to the infrastructure in multiple tools. We approach it differently. Octopus Deploy is the first platform to enable your developers, release managers, and operations folks to bring all automation into a single place. By reusing configuration variables, environment definition, API keys, connection strings, permissions, service principals, and automation logic, teams work together from a single platform. Silos break down, collaboration begins, and your team can ship — and operate — software with greater confidence. Built for enterprises, Octopus also makes it easy to meet compliance objectives by default. A rich audit log, fine-grained permissions, templates, approval workflows, and more enable you to confidently control who has access to production and create a consistent production deployment process across large portfolios of projects.

Adam Haskew, Copywriter/Editor, LaunchDarkly Speed can make or break a business. Organizations that move too slowly to keep up with customer demands or market trends risk falling behind their competition. Just as gathering and analyzing customer feedback and real-time market data have advanced, the methodologies for building and releasing new features have followed suit to keep up. But with competition at an all-time high, speed to market can't come at the expense of quality. Enter release automation. The critical factor that enables teams to ship releases at the speed of business while reducing the risks of shipping buggy code. To this end, creating a structured end-to-end release automation process can support: Release progression — in managing multiple environments to underscore continuous integration (CI) practices; and Progressive delegation — giving QA teams the ability to fully validate changes in a production-level environment and find issues that otherwise would go undetected. This combination of progressive delivery and delegation can act as a safeguard against shipping bugs. Working in tandem, these principles empower teams by providing more control throughout release cycles and helping ensure clean code. The real advantage of automated processes and programmatic control over feature flags when supporting release automation is minimizing human errors. Flag triggers and 'set and forget' releases can simplify the complexity that comes with managing multiple branches and numerous releases — a problem only amplified when releasing at pace. Release automation as a development methodology ultimately works to streamline build processes, testing, and deployment workflows to empower progressive delivery. z

thing goes bad, it just automatically stops. You don’t have to think about when people are taking vacation when you’re shipping stuff.” Octopus Deploy’s Stovell said the ability for machines to make the call on releases depends upon the maturity of the monitoring capabilities of the organization, rather than the tool itself.
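The staggered schedule with an automated halt that Zorn describes can be expressed as a small control loop. The sketch below is hypothetical: fetch_p95_latency() stands in for a query against a real monitoring system, and the schedule and thresholds are invented for illustration; whether a loop like this is trusted to make the final call depends, as Stovell notes, on how mature that monitoring actually is.

```python
# Illustrative progressive rollout with an automated halt; all names, numbers
# and the synthetic metric are assumptions, not a real platform's API.
import random
import time

ROLLOUT_STEPS = [1, 10, 25, 50, 100]   # percent of users per stage
LATENCY_BUDGET_MS = 450                # halt if p95 latency goes above this

def fetch_p95_latency(percent: int) -> float:
    """Stand-in for querying a monitoring system for the flagged cohort."""
    return 300 + percent * random.uniform(0.5, 2.5)  # synthetic numbers

def set_rollout(flag: str, percent: int) -> None:
    print(f"{flag}: exposing {percent}% of users")

def progressive_release(flag: str) -> bool:
    for percent in ROLLOUT_STEPS:
        set_rollout(flag, percent)
        time.sleep(0.1)  # in practice: hours or days between steps
        p95 = fetch_p95_latency(percent)
        if p95 > LATENCY_BUDGET_MS:
            print(f"p95 latency {p95:.0f} ms is over budget; halting and rolling back")
            set_rollout(flag, 0)
            return False
    print("rollout complete")
    return True

if __name__ == "__main__":
    progressive_release("new-search-ranking")
```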

For instance, a company with a single product — say a SaaS web app — with hundreds of engineers working on it can invest in building a very sophisticated, high-velocity pipeline that is uniquely tailored to that project, he explained. "The release automation part is actually easy; the hard part is the


sophisticated ops/monitoring setup to alert them to problems and roll back bad releases automatically. Such a company would want to optimize for the velocity of change and time to market, and would need to live with some of the architectural constraints that Continuous Deployments/automated rollbacks

require.” For a team like that, he said, “it makes sense to let the machine make the call.” He next offered the example of a similarly sized company with a large portfolio of products — say, an insurance firm — with many of those built on different technology stacks at different times. Some projects in the portfolio might be

under active development, some may just have the occasional bug fix. As a result, Stovell said, that company is “unlikely to have the same ability to invest in the monitoring/ops maturity needed. So for a company like that, it might make a lot more sense for there to be a person making the final call that a release is ready to go to production.” z

A guide to release automation tools

FEATURED PROVIDERS

• LaunchDarkly: A feature management platform that empowers all teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, LaunchDarkly enables you to deploy faster, reduce risk, and iterate continuously. Over 1,500 organizations around the world — including Atlassian, IBM, and Square — use LaunchDarkly to control the entire feature lifecycle from concept, to launch, to value.

• Octopus Deploy: The release automation platform trusted by over 25,000 companies, including 35 of the Fortune 100, to create consistency, meet compliance objectives, and tame complexity across thousands of projects. Octopus Deploy provides best-in-class release management, deployment automation, and ops automation in a friendly platform. With the broadest range of supported application types and deployment targets, from on-premises to cloud-native and PaaS, Octopus Deploy is ideal for automating even the most complex deployments.

• Atlassian: Bitbucket Pipelines is a modern cloud-based continuous delivery service that automates the code from test to production. Bamboo is Atlassian's on-premises option with first-class support for the "delivery" aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow.

• CA Technologies, A Broadcom Company: CA Technologies' solutions address the wide range of capabilities necessary to minimize friction in the pipeline to achieve business agility and compete in today's marketplace. These solutions include everything from application life cycle management to release automation to continuous testing to application monitoring — and much more.

• Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef's three open-source projects: Chef for infrastructure automation, Habitat for application automation, and Inspec for compliance automation, as well as associated tools.

• CloudBees: The CloudBees Suite builds on continuous integration and continuous delivery automation, adding a layer of governance, visibility and insights necessary to achieve optimum efficiency and control new risks. This automated software delivery system is becoming the most mission-critical business system in the modern enterprise.

• Digital.ai: The company's Deploy product helps organizations automate and standardize complex, enterprise-scale application deployments to any environment — from mainframes and middleware to containers and the cloud. Speed up deployments with increased reliability. Enable self-service deployment while maintaining governance and control.

• GitLab: GitLab's built-in continuous integration and continuous deployment offerings enable developers to easily monitor the progress of tests and build pipelines, then deploy with confidence across multiple environments — with minimal human interaction.

• Microsoft: Microsoft's Azure DevOps Services solution features Azure Pipelines for CI/CD initiatives; Azure Boards for planning and tracking; Azure Artifacts for creating, hosting and sharing packages; Azure Repos for collaboration; and Azure Test Plans for testing and shipping.

• IBM: UrbanCode Deploy accelerates delivery of software change to any platform — from containers on cloud to mainframe in data centers. Manage build configurations and build infrastructures at scale. Release interdependent applications with pipelines of pipelines, plan release events, orchestrate simultaneous deployments of multiple applications.

• Puppet: Puppet Pipelines provides developers with easy-to-use, self-service workflows to build containers, push them to any local or remote registries, build and deploy Helm charts, and deploy containers to Kubernetes in under 15 minutes, while providing governance and visibility into the entire software delivery pipeline and the status of every deployment.

• Micro Focus: ALM Octane provides a framework for a quality-oriented approach to software delivery that reduces the cost of resolution, enables faster delivery, and enables adaptability at scale. Deployment Automation seamlessly enables deployment pipeline automation, reducing cycle times and providing rapid feedback on deployments and releases across all your environments.

• VMware: With VMware Tanzu, you can automate the delivery of containerized workloads, and proactively manage apps in production. It's all about freeing developers to do their thing: build great apps. Enterprises that use Tanzu Advanced benefit from developer velocity, security from code to customer, and operator efficiency. z


Software powers the world. LaunchDarkly empowers all teams to deliver and control their software. With LaunchDarkly, software teams are pairing deployment automation with continuous deployment to get even more control over their production environments.

Deployment Automation is a software deployment approach that allows organizations to increase their velocity by automating developer build processes, testing, and deployment workflows. TL;DR: Developers can safely release new features faster and more frequently.

Continuous Delivery/Deployment allows businesses to deploy applications to a testing or production environment after they pass all checks in a continuous integration pipeline. TL;DR: The release process becomes fully automated.
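To picture what that hand-off looks like, here is a minimal sketch of a test-then-promote flow in Python. The stage names and the deploy and smoke-test commands are hypothetical placeholders, and real pipelines are typically defined in a CI/CD tool rather than a standalone script; this only illustrates the ordering the definitions above describe.

# Minimal sketch of the flow described above (hypothetical commands and scripts).
import subprocess
import sys

def run(step, cmd):
    """Run one pipeline step; stop the release if it fails."""
    print(f"==> {step}: {' '.join(cmd)}")
    try:
        result = subprocess.run(cmd)
    except FileNotFoundError:
        sys.exit(f"{step}: command not found (placeholder in this sketch)")
    if result.returncode != 0:
        sys.exit(f"{step} failed; release stopped")

def release(image_tag):
    # Continuous integration: nothing ships until every check passes.
    run("unit tests", ["pytest", "--quiet"])
    run("build image", ["docker", "build", "-t", image_tag, "."])

    # Deployment automation: the same scripted steps run for every environment.
    run("deploy to staging", ["./deploy.sh", "staging", image_tag])
    run("smoke test staging", ["./smoke_test.sh", "staging"])

    # Continuous deployment: promotion happens automatically once checks pass.
    run("deploy to production", ["./deploy.sh", "production", image_tag])

if __name__ == "__main__":
    release("myapp:candidate")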

Learn about the benefits of bringing continuous, automated release practices to your organization.

launchdarkly.com/dacd



Guest View BY THOMAS KOHLENBACH

How much process debt are you carrying?
Thomas Kohlenbach, Senior Product Specialist - Trusted Advisor

The pandemic has been an exercise in agility for businesses around the world. They've had to pivot incredibly fast to support work-from-home and hybrid workforces, digitize their operations, power through supply chain disruptions and accommodate changing customer behaviors. As a result, the past year and a half has laid bare an insidious problem that stymies companies' ability to adapt: process debt.

Process debt is similar to its cousin, technical debt. It occurs when businesses operate with legacy processes that are suboptimal, leading to wasted time, money and productivity. Process debt hampers how cross-functional teams collaborate with each other, creating unnecessary friction and delays.

For example, imagine being a customer seeking a loan from a bank. You just want to get the money into your account at a good interest rate as quickly as possible. You're paying fees for the process to go smoothly. If there are delays because different teams within the bank aren't communicating, or there are errors in paperwork on their end, your satisfaction as a customer goes way down, and you may move on to a competitor. Additionally, the salesperson who promised you a certain timeline has to make excuses for not delivering because of internal process hiccups, and is left frustrated and demoralized.


What causes process debt?
Process debt is a function of legacy. Processes that may have once worked as a short-term solution, or when the company was in an earlier maturity stage, continue to be passed down without examination — even when they're no longer the best practice. This may go ignored within the organization, accompanied by a shoulder-shrugging "that's the way it's always been done."

Fixing the problem
So how do you tackle process debt?
1. Acknowledge that process debt is a problem and that it exists within your organization.
2. Work with the appropriate subject matter experts at your company to identify the process pain points and what outcomes need to change. Talk to your employees in open conversations about inefficiencies, and conduct NPS (net promoter score) surveys with customers to learn where they're being impacted.
3. Assign a dedicated process champion or create a center of excellence to coordinate the effort.
4. Ensure that the process champion supports developing and implementing new processes for every function throughout the organization. You want to take an integrated approach to finding solutions; that's the only way to really fix the underlying problem. Taking a fragmented approach will only create Band-Aids that won't sustain your business in the future and might even make things worse.
5. Document your updated processes and treat them as living entities. They should be revisited and updated as technology, the market and your business environment evolve.

Benefits of settling process debt
The thought of addressing process debt in the midst of so many other changes can feel overwhelming. I'm reminded of an image I once saw of a man with a crushingly heavy bag of rocks he couldn't lift. But when he decided to carry the rocks one by one, he was able to move them all. The moral? What seems like an insurmountable problem can be managed by breaking it down into steps.

That's what a process is — creating steps that enable your technology and your people to do their best work. It allows them to spend more time working on your business rather than in your business. Not only will this help increase money-earning opportunities, but it will reduce disengagement, anxiety and boredom among your employees. Removing process roadblocks means they aren't bogged down by meaningless work and are instead using their talents to add value. Connected to that, improved processes can also improve the quality of your services, which increases customer satisfaction and frees up employees to focus on solutions.

And while reducing process debt is paramount, organizations should always keep the "human interface" front and center. Even in the event of automation, the response and intervention of humans will always be critical. z



Analyst View BY BILL SWANTON

6 steps to upskill developers
Bill Swanton is a Distinguished Research VP at Gartner, Inc.

When software engineering leaders need new skills, they often look to hire people who already have those skills. However, when it comes to modern cloud architectures and languages, those people are hard to find. Recent Gartner research has found a high number of open positions for people with advanced development skills, but relatively few candidates per position. Software development leaders often write a long list of required skills into the job description, making candidates even harder to find. All of these factors force organizations to offer higher salaries and delay critical work.

To overcome the developer skills shortage, software engineering leaders need to upskill and reskill their existing employees and new hires. They can use a six-step talent development program to do it.

Step 1: Identify skills needed now and later
Lay out the core skills relevant to the existing technological landscape, as well as the skills that will be required for emerging technologies and architectures. Identify aging systems or digital services to anticipate future needs and likely enhancements. Collaborate with business leaders, product managers and solution architects to understand the future set of skills needed to modernize the product. Software engineering leaders can also ask their HR learning and development department to create a skills matrix for software engineering.

Step 2: Inventory current developer skills
Before surveying employees to evaluate their skills and competencies, make sure to communicate that the purpose of the exercise is to assess the organization's capabilities, not to compare employees. Employees need to feel secure, or their responses will be guarded and skewed. Also motivate developers to broaden their skills and roles by making reskilling and upskilling a part of the organization's culture.

Step 3: Motivate employees to broaden their skills
Shift performance discussions from "What have you done?" to "What have you learned?" Employees should learn to identify and target skills (both technical and managerial) that will be game-changers. Three elements of motivation (based on "Drive: The Surprising Truth About What Motivates Us," by Daniel Pink) are key to talent development:
• Autonomy: Avoid excessive interference; let your teams figure out the best way to get the job done, and remove roadblocks to employees' progress.
• Mastery: Ensure that employees receive recognition for mastering software development skills — not just from managers, but also from peers.
• Purpose: Make sure employees associate a sense of purpose with their work — does it positively impact the organization's growth and technological landscape?

Step 4: Accept skills but plan for upskilling
Upskill existing developers in tandem with your search for new hires. Finding highly proficient individuals may be challenging, so be willing to hire people with a base level of skills and immediately create learning opportunities to ensure they become valuable assets for the organization. Prioritize qualities like collaborative mindset and adaptability when hiring entry-level employees.


Step 5: Create on-the-job learning opportunities
When evaluating different development approaches, consider low-cost, less time-consuming on-the-job learning opportunities for developers. The idea is to put employees into dynamic environments where they can learn and apply new skills quickly. Peer connections and 360-degree feedback, along with group activities like hackathons, innovation labs, and lunch-and-learns, can build valuable knowledge-sharing channels for new hires and longtime employees alike.

Step 6: Dedicate time to learning
Learning and development programs should make the enterprise more productive in the long run, but for that to happen, you need to allocate time for learning amid a massive backlog of work. Carve out a dedicated window in the work week for learning and cross-functional activities. z


Industry Watch BY DAVID RUBINSTEIN

Is DevOps actually ‘The Bad Place’?
David Rubinstein is editor-in-chief of SD Times.

In the television series “The Good Place,” four people die and are told they are in heaven — The Good Place. But their time there is marked by a series of escalating annoyances that finally makes one of them realize they are not in heaven at all, but in a living hell — The Bad Place.

Many conversations I’ve had over the years about the benefits of the cloud, microservices, shifting everything left into development, and giving developers the keys to operations have made us as an industry think we are getting into The Good Place of faster application delivery and better value and experiences for end users, while silos between developers, QA, security, IT and the business are broken down.

But to get to this technology heaven, developers are being asked to do things they aren’t trained to do regarding testing, security and governance. And organizations have been slow to invest in the training necessary for developers to be successful. This clearly creates risk for those organizations.

Because much of today’s application development relies on open-source packages and features accessed via APIs, developers are using a lot of code they didn’t write, and have to have faith that those open-source packages are updated as vulnerabilities are found. Meanwhile, traditional IT operations has been sidelined, so the precautions it would take before releasing an application — making sure the deployment doesn’t break anything or introduce vulnerabilities, and that the application will perform well and give a great user experience — are being dealt with after the fact. And some risk assessments determine that a vulnerability in an app needn’t be fixed because, while it’s an opening, it doesn’t have a path to any critical data. So mountains of technical debt are inexorably growing, leading to potentially damaging data breaches when changes to software packages later make that vulnerability a path to the company jewels.

In this new development world, where teams are given the autonomy to decide what their next projects will be and how long it will take to finish them, managing all this has grown increasingly difficult.


Every part of the organization is in each other’s business, using different tools and serving different ends, while fighting for its own priorities rather than trying to achieve a common goal. Like it or not, these silos remain. Scott Moore, director of customer engineering at Tricentis, told me: “If silos didn’t work at some level, they wouldn’t exist.”

Because of all this, you can make the case that DevOps is, indeed, The Bad Place. But just as the characters in the television series ultimately learned to work together as a team to finally reach The Good Place, I’m supremely confident our industry will find its way there as well. The pain, though, will continue as we figure it out.

First and foremost, organizations need to make serious investments in training. This is a new world, and you can’t shift tasks left onto developers who historically have never had to deal with things like security, testing in the pipeline, containers and Kubernetes, and writing infrastructure code.

Tricentis’ Moore said companies he speaks with about DevOps ask who’s doing it well, and how they do it. When he asks them where they’re at with DevOps, they either say they have it under control, or “they say we’re on a journey, which means they really have no clue.” SREs, he suggested, are seen as the “find it, fix it masters of all,” when in fact they’re highly paid engineers who provide tech support in high-priced war rooms. He suggested creating the title of software life cycle engineer to handle specialties such as test, security and performance. These engineers, he said, “will know how to improve performance before they write a line of code. Same with security.”

Further, organizations need to adopt progressive delivery to protect themselves from delivering flawed or even dangerous software to their users. Using feature flags and releasing to a small cohort of users least likely to abandon you over a bad delivery — and having the ability to quickly roll a release back if it does fail on any level — can protect you and your customers as well.

At the end of the day, lifting the burdens off untrained developers and putting them onto life cycle specialists may not solve everyone’s DevOps problems, but it will get them closer to The Good Place. z
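For readers who want to see the progressive delivery idea in code, here is a minimal sketch of a percentage rollout behind a feature flag, written in plain Python. The flag name, rollout numbers and helper functions are hypothetical illustrations of the pattern, not any vendor's API: the feature is deployed dark, released to a small cohort, and "rolled back" instantly by turning the flag off.

# Minimal, assumption-laden sketch of a percentage rollout behind a feature flag.
import hashlib

FLAGS = {
    # Deployed dark, then released to 5% of users; flipping "enabled" to False
    # acts as an instant rollback without redeploying anything.
    "new-checkout": {"enabled": True, "rollout_percent": 5},
}

def bucket(flag, user_id):
    """Deterministically map a user to a 0-99 bucket so the cohort stays stable."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag, user_id):
    config = FLAGS.get(flag)
    if not config or not config["enabled"]:
        return False
    return bucket(flag, user_id) < config["rollout_percent"]

def checkout(user_id):
    if is_enabled("new-checkout", user_id):
        return "new checkout flow"       # code already deployed, released to a small cohort
    return "existing checkout flow"

if __name__ == "__main__":
    served = sum(is_enabled("new-checkout", f"user-{i}") for i in range(1000))
    print(f"{served} of 1,000 simulated users would see the new feature")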



SD Times News on Monday
The latest news, news analysis and commentary delivered to your inbox!
• Reports on the newest technologies affecting enterprise developers
• Insights into the practices and innovations reshaping software development
• News from software providers, industry consortia, open source projects and more
Read SD Times News on Monday to keep up with everything happening in the software development industry. SUBSCRIBE TODAY!

