SD Times August 2020


AUGUST 2020 • VOLUME 2, ISSUE NO. 38 • $9.95 • www.sdtimes.com




www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com

CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com
LIST SERVICES Jessica Carroll jcarroll@d2emerge.com


Jakub Lewkowicz jlewkowicz@d2emerge.com

CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz


SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com


REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com


SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com

PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com



Contents

VOLUME 2, ISSUE 38 • AUGUST 2020

NEWS
4  News Watch
18 The HCL Software DevOps product portfolio expands with three major project releases

FEATURES
Hiring in a remote-first world ............ page 6
User feedback can be found in unlikely places ............ page 12
Why Software Delivery Management Matters ............ page 16
What’s standing in the way of achieving “true agility” ............ page 26

BUYERS GUIDE
Software Testing, the second of three parts:
AI and ML make testing smarter ............ page 20

COLUMNS
32 GUEST VIEW by Kathryn Erikson: Open source democratizes data
33 ANALYST VIEW by Michael Azoff: The rise of AI chips
34 INDUSTRY WATCH by David Rubinstein: The rise of infrastructure open source

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Open Usage Commons created
The Open Usage Commons is a new organization dedicated to promoting and extending the open source philosophy and definition. The organization was created in collaboration with SADA Systems, academic leaders, and independent contributors. “The mission of the Open Usage Commons is to help open source projects assert and manage their project identity through programs specific to trademark management and conformance testing. Creating a neutral, independent ownership for these trademarks gives contributors and consumers peace of mind regarding their use of project names in a fair and transparent way,” Chris DiBona, director of open source at Google, wrote in a blog post.

People on the move

• Redis database creator Salvatore Sanfilippo announced he will be stepping down as the open-source project’s BDFL (“benevolent dictator for life”) and sole maintainer. Instead, Sanfilippo explained he will become an “ideas” person at Redis, and continue on the advisory board. As a result, the open-source project has adopted a light governance model. Under the new model, Sanfilippo’s colleagues Yossi Gottlieb and Oran Agra will become Redis’ project leads. Additionally, Redis Labs technology evangelist Itamar Haber will become the team’s community lead, and AWS senior software development engineer Madelyn Olson and Alibaba Cloud senior engineer Zhao Zhao will come on as core team members.

• IntelliSite has appointed Michael Knight as its new chief digital officer. Previously, Knight was the global chief technology officer for safety and security at Dell Technologies. In his new role, he will help develop and expand the company’s IoT and related industry-specific offerings.

• Artificial intelligence expert Sebastian Seung will become the new head of Samsung’s research unit. There, he will oversee 15 global research and development centers as well as seven AI centers. He will also play a central role in building up the company’s fundamental AI research and advancing human knowledge.

• Mark Porter is stepping down from MongoDB’s Board of Directors to become the company’s chief technology officer, where he will play a key role in driving its technology vision. Previously, Porter was the CTO of Grab, a Southeast Asian super app that provides everyday services like ride-hailing, food and grocery delivery, and financial services.

AWS enters no-code space with Honeycode
Amazon is joining the no-code space with the announcement of Honeycode. The new solution is designed to make it easier for developers to build mobile and web applications with no programming required. Instead of relying on methods like emailing spreadsheets or documents, Honeycode enables developers to use a simple visual application builder and to utilize the AWS-built database to perform tasks like tracking data over time and notifying users of any change, the company explained. “What customers want is the ability to create applications using the simplicity and familiarity of a spreadsheet, but with the data management capability of a database, the collaboration and notifications common in business applications, and a truly seamless web and mobile user experience. That’s what Amazon Honeycode delivers,” AWS wrote in a post.
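Honeycode itself is built visually rather than in code, and its API is not shown in this article. Purely as a sketch of the underlying idea (a spreadsheet-like table backed by a database that notifies users of any change), the toy Python class below captures the model; all names are hypothetical:

```python
# Toy sketch of the "spreadsheet backed by a database, with change
# notifications" idea behind no-code tools like Honeycode. All names here
# are hypothetical; this is not the Honeycode API.

class Table:
    def __init__(self):
        self.rows = {}          # row id -> dict of column values
        self.subscribers = []   # callbacks notified on every change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def upsert(self, row_id, **values):
        """Insert or update a row, notifying every subscriber."""
        old = self.rows.get(row_id, {})
        self.rows[row_id] = {**old, **values}
        for notify in self.subscribers:
            notify(row_id, old, self.rows[row_id])

events = []
tasks = Table()
tasks.subscribe(lambda rid, old, new: events.append((rid, old, new)))

tasks.upsert("t1", title="Ship release", status="open")
tasks.upsert("t1", status="done")
```

The point of the sketch is only that every edit to the table flows through one place, so tracking values over time and alerting users come for free.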

Pylance comes to VS Code
Microsoft is adding better support for Python in Visual Studio Code with the introduction of Pylance. Pylance builds on the experience of the Python Language Server, which was added to VS Code two years ago. Continuing the language’s reference to Monty Python, Pylance is named for Monty Python’s Lancelot, Microsoft explained. Pylance includes a number of useful features, including type information, auto-imports, type checking diagnostics, and multi-root workspace support. “Pylance supercharges your Python IntelliSense experience with rich type information, helping you write better code, faster,” Savannah Ostrowski, program manager of the Python Language Server and Python in Visual Studio, wrote in a post.
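Pylance's type information and diagnostics build on Python's standard type hints. A minimal sketch of the kind of annotated code it works with (the function here is illustrative, not taken from Microsoft's announcement):

```python
# Standard type hints like these feed the rich type information,
# auto-imports and type checking diagnostics that Pylance provides.
# The function below is illustrative only.
from typing import List

def average(values: List[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

mean = average([1.0, 2.0, 3.0])  # a checker infers `mean` as float

# A type checker such as Pylance would flag the next call in the editor,
# before the code ever runs, because a str is not a List[float]:
# bad = average("oops")
```

Nothing here changes runtime behavior; the annotations exist so tools can reason about the code as it is written.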

Angular 10 now available
The Angular team has announced a major version of the web framework. Angular 10 covers the entire platform: the framework, Angular Material and the CLI. Traditionally, the team releases two major versions every year to coincide with the JavaScript ecosystem and provide a predictable schedule for developers. Angular 9 was released just four months ago, and the team is already looking to release Angular 11 in the fall. Key features of the release include:
• A new date range picker
• Warnings about CommonJS imports that can slow down large applications
• Optional stricter settings that can improve maintainability, help catch bugs, and enable advanced optimizations
• TypeScript 3.9
• TSLib 2.0
• TSLint 6
• A new default browser configuration

Gremlin unveils Status Checks
Gremlin wants to make it safer to experiment in production with the release of Status Checks. The new capability automatically verifies systems are healthy and ready for chaos engineering. “More and more, companies want to do Chaos Engineering. And not only do it, but automate it. But they are concerned if they have attacks triggering automatically, it may perform a chaos attack at a bad time (say when a system is already experiencing an outage). This is a huge concern,” Matt Schillerstrom, product manager at Gremlin, told SD Times in an email. “This is a huge safety improvement, in that it drastically mitigates the chances you break your own systems and impact customers while doing chaos engineering.”
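Gremlin's actual product wires status checks to real monitoring endpoints; the sketch below only illustrates the gating logic the quote describes, with stand-in functions rather than Gremlin's API:

```python
# Sketch of the gating idea behind Status Checks: run an automated chaos
# experiment only when a health probe reports the system is ready.
# `status_check` and `attack` are stand-ins, not Gremlin's API.

def run_experiment(status_check, attack):
    """Run `attack` only if `status_check()` reports a healthy system."""
    if not status_check():
        return "skipped: system unhealthy"
    return attack()

# A healthy system lets the scheduled attack proceed; an outage skips it.
result_when_healthy = run_experiment(lambda: True, lambda: "attack completed")
result_during_outage = run_experiment(lambda: False, lambda: "attack completed")
```

The value is exactly what Schillerstrom describes: the automated attack never fires while the system is already degraded.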

TypeScript 4.0 beta released
Microsoft has announced the first beta release of TypeScript 4.0. The most recent version of the language, TypeScript 3.9, was released in May and introduced speed and user experience improvements. TypeScript 4.0 follows Microsoft’s pattern of adding new features without breaking changes for users of older versions. The next version adds few breaking changes, but introduces new features such as variadic tuple types, labeled tuple elements, class property inference from constructors, short-circuiting assignment operators, custom JSX factories, and more.

Linux Foundation launches initiative for open standards
The Linux Foundation has created a new way for open source communities to create Open Standards. Through its new initiative, Community Specification, communities will be able to create standards using tools and approaches that are inspired by open source developers. Community Specification lets contributors more easily start the specification process, and also reduces administrative overhead. “The Community Specification can dramatically reduce the time developers spend on building and meeting spec requirements and ensure important work is not lost and time is not wasted. By democratizing the specification build process, developers have more time to innovate and build the technologies that differentiate their work from others,” the Linux Foundation wrote in a post.

SUSE acquires Rancher Labs
Open-source company SUSE has entered into a definitive agreement to acquire Rancher Labs. According to SUSE, it wants to use the acquisition to establish the companies as the main open source innovators for enterprise Linux, Kubernetes, edge computing and AI. “By combining Rancher and SUSE, we not only gain massive engineering resources to further strengthen our market-leading product, we are also able to preserve our unique 100% open source business model,” said Sheng Liang, the CEO of Rancher. “SUSE’s strong ecosystem will greatly accelerate Rancher’s ongoing efforts to transform how organizations adopt cloud native technology.” With the acquisition, SUSE also wants to help IT and operations leaders meet the needs of their customers at any stage of their digital transformation, whether that’s from the data center to the cloud to the edge.

Report: Growing interest in cloud-native Java
Java developers are increasingly becoming interested in cloud-native Java, according to the findings of a new study from the Eclipse Foundation. The study also showed that there has been significant adoption of Jakarta EE 8. According to the Eclipse Foundation’s data, the number of certifications for Full Platform Compatible Products for Jakarta EE has been greater in the past eight months than certifications for Java EE were in over two years. There has also been growing interest in the Jakarta EE community itself. “In addition to significant interest in Jakarta EE and the community’s progress toward Jakarta EE 9, the Eclipse Foundation is seeing record growth in membership, as we added more members in Q1 2020 than any time since our inception,” said Thabang Mashologu, vice president of marketing for the Eclipse Foundation. “This maps to the increased interest in the open source model worldwide.”

AWS CodeGuru to improve code quality
AWS has released a new developer tool powered by machine learning. CodeGuru provides intelligent recommendations for improving code quality and lowering operational costs. The solution consists of two main parts: CodeGuru Reviewer and CodeGuru Profiler. CodeGuru Reviewer uses machine learning to automatically flag issues and difficult-to-find bugs during the application development process, while providing specific recommendations on how to fix them. Meanwhile, CodeGuru Profiler uses machine learning to identify the most expensive lines of code by helping developers understand the runtime behavior of their applications. This helps to identify and remove code inefficiencies, improve performance, and significantly decrease compute costs, according to AWS.
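CodeGuru Profiler itself is an AWS-hosted, ML-driven service, but the raw measurement it builds on, observing where runtime is actually spent, can be illustrated locally with Python's standard-library profiler. The workload function below is made up for the example:

```python
# Local illustration of runtime profiling (the kind of measurement CodeGuru
# Profiler automates in production) using Python's stdlib cProfile/pstats.
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive hot spot for the profiler to find."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Rank functions by cumulative time; in a real service the top entries
# point at the most expensive code paths.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

A service like CodeGuru Profiler layers continuous collection and machine-learning analysis on top of this kind of data, rather than a one-off local run.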

Tidelift cleans up open-source dependencies
Managed open source company Tidelift wants to help organizations make sense of their open-source dependencies. Its new catalogs feature allows users to create and maintain an organization-wide inventory of their open-source package releases. It is designed to reduce the amount of review work for open-source components, promote an efficient development workflow, and provide accurate data to power workflow automation and policy compliance. “Tidelift catalogs provide a way for any organization to get issue-free open source packages without the expense of vetting them wholly on its own,” wrote Havoc Pennington, cofounder of Tidelift.
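The catalogs feature is configured through Tidelift's own tooling, which is not shown in the article. As a toy illustration of what an organization-wide inventory check amounts to, here is a hypothetical allow-list lookup; the package names and versions are invented:

```python
# Toy version of a catalog check: flag dependency releases that are not in
# an organization's approved inventory. Names and data are hypothetical;
# this is not Tidelift's API.

approved_catalog = {
    ("requests", "2.24.0"),
    ("flask", "1.1.2"),
}

def unapproved(dependencies):
    """Return the (name, version) pairs not present in the catalog."""
    return [dep for dep in dependencies if dep not in approved_catalog]

lockfile = [("requests", "2.24.0"), ("leftpad", "0.1.0")]
violations = unapproved(lockfile)
```

The hard part Tidelift sells is maintaining the vetted catalog itself; once that inventory exists, enforcement is a simple membership check like this one.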


Hiring in a remote-first world
BY JENNA SARGENT

After over two months of lockdowns due to the novel coronavirus, many states have started or are about to start the process of reopening. While some things will be opening up, many companies, especially those in the tech industry where remote work can be easily achieved, will continue to keep their employees out of the office.

A recent Bitglass report indicated widespread support for continued remote work. They found that 84% of survey respondents would continue to support remote work even after stay-at-home orders are lifted. The coronavirus has also resulted in a significant number of layoffs and unprecedented numbers of unemployment in the United States, with the U.S. Department of Labor reporting that the unemployment rate hit 14.7% as of April. Still, there are a lucky few companies who are looking to bring on more employees, and they will need to navigate changes to the onboarding process. And as the economy recovers, companies might need to rehire for roles that had been eliminated. Likewise, there are a number of considerations for those looking for new positions as well.

Traditionally, most interviews for technical roles consist of multiple stages. First there is a phone screen, which may include some technical questions. That likely will remain the same, as it was already done remotely. Then, there is the traditional behavioral interview that everyone goes through, often conducted by someone from HR.

“In thinking through the differences, hiring in-person carries the benefit of being in the same room as the candidate, allowing you to interpret physical cues, get that 'gut' feeling about someone, and determine a cultural fit based on their behavior and interactions. On the flip side, candidates can see how the company presents itself and operates on any particular day,” said Ken Schnee, general manager of the Media, Entertainment, and Hospitality group at New York-headquartered background screening and identity services company Sterling.

Without face-to-face interviews, managers have to dig deeper to learn about candidates.

According to Schnee, hiring managers traditionally would say: “I would never hire someone without meeting them face-to-face.” This had been changing due to the rise of the gig economy, digital nomads, and remote workers, and the COVID-19 crisis has been another tipping point, Schnee explained. “In addition, improvements in technology have conditioned the hiring process to become streamlined such that most of the elements of the in-person experience are now covered. Hiring managers can conduct interviews via online video tools and engage candidates with their teams to drive cultural interaction,” said Schnee.

The next stage of the interview process for a technical role would probably be a technical interview, or whiteboard interview, where candidates are given a problem to solve on the spot. With remote interviews, a whiteboard interview isn’t possible on a physical whiteboard. Online coding challenges are another approach to the technical interview that had been gaining popularity even before the forced switch to remote work.

Matt Mead, CTO of consulting group SPR, believes companies will turn to standardized tests to supplement remote interviews. He noted that for a while, fewer than 50% of companies he worked with utilized such tests, though there had been spikes in popularity every so often over the years. He predicts they will now be used as a failsafe if things like whiteboard interviews can’t be conducted.

As soon as it is safe to meet in person, Mead wouldn’t be surprised if face-to-face meetings continue, even for fully remote positions. This will especially be true for candidates that are local to the area the position is located in. Employers will want to meet with prospective employees face-to-face to get a sense of their interpersonal skills.



New methods of identity verification
Another one of the considerations for online interviews is identity verification. Schnee recommends identity verification be done early in the process, on top of background checks, to ensure trust and safety for their employees, customers, community, and brand. According to Schnee, it is possible to do identity verification remotely by leveraging mobile phones and AI. For example, candidates can take a photo of their government-issued ID along with a selfie. Then, machine learning can be used to validate their documentation and match the individual in the photo with their photo ID. “We employ mobile phones, secure data connections, smart web applications, artificial intelligence, machine learning, optical character recognition, and rich databases of known good identity document patterns to enable rapid remote identity verification that is highly accurate and reliable. Many of these services are available now to companies who conduct hiring and onboarding remotely,” said Schnee.
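Schnee is describing a whole pipeline (OCR, document databases, trained face-recognition models). Purely as a toy illustration of the final matching step, comparing a vector derived from the selfie with one derived from the ID photo, a similarity check might look like the following; the embeddings and threshold are invented for the example:

```python
# Toy illustration of the matching step in remote identity verification:
# compare an embedding computed from the selfie with one computed from the
# ID photo. Real systems derive these vectors with trained face-recognition
# models; every number below is made up for illustration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

selfie_embedding = [0.12, 0.80, 0.35, 0.41]
id_photo_embedding = [0.10, 0.78, 0.37, 0.40]   # same person, similar vector
stranger_embedding = [0.90, 0.05, 0.10, 0.02]   # different person

THRESHOLD = 0.95  # acceptance threshold, chosen arbitrarily here
same_person = cosine_similarity(selfie_embedding, id_photo_embedding) >= THRESHOLD
impostor = cosine_similarity(selfie_embedding, stranger_embedding) >= THRESHOLD
```

Production systems add liveness detection, document validation and tuned thresholds on top of this basic comparison.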

On-site perks don’t translate
If remote work persists long-term, it will likely change the way companies attract talent. Top tech companies often set themselves apart with special perks, like on-site fitness facilities, catered meals, or gaming consoles in the office, to name a few. While the main goal of perks is to keep employees in the office for longer hours, prospective employees might also consider them when choosing between roles.

“I think the problem is when you take these perks and you try and extrapolate them in a virtual way and I can do them on my own in my own home, I don’t think the employer benefits remain,” said Mead. As a result, those types of perks are likely to fade, though some, such as those related to health, and in particular mental health, may remain. “While I’m not a mental health expert, I can see with COVID and the anxiety and the stress it’s creating that we can see some perks in the area of mental well-being,” he said.

Mead believes that the new way to attract and retain talent is to connect with their passions. For example, younger developers are often interested in working for a company that they perceive to be doing good for the community and the world. “They will trade salary for this greater good. So I think making sure that organizations have identified, have documented, and are projecting the way that they create greater good in the economy and within the world is a way — rather than these perks that were geared towards on site — of attracting and maintaining talent,” said Mead.

He also added that talent attracts talent. For years companies have tried to utilize their own employees’ professional networks to attract talent, through the use of employee referral programs. “Perks were nothing other than a way to attract and retain talent and I think those perks, honestly I don’t think they really work in this new remote work, and certainly not as well,” said Mead. “And so we’re going to have to look for other ways to differentiate. And I think they’re harder. I think it’s harder to do some of these examples that I’m throwing out than it is to bring in food for an office or set up some way to provide physical health for the employees.”

Advice for job hunters
While job seekers can’t utilize conferences to network currently, there are a number of ways they can set themselves apart from the competition. Mead recommends they demonstrate their involvement in the IT community somehow. Applicants should be staying up to date on the latest technologies, in whatever way works best for them. “I think a red flag has always been and will always be a technologist or someone in IT that’s not keeping up with new technologies or has no familiarity, has done nothing to prepare themselves,” said Mead. “Even if their current job doesn’t require them to use

continued on page 10 >


Skills that matter for those looking to get hired
BY CHRISTINA CARDOZA

The world may seem like it is on pause as the COVID-19 pandemic continues, but technology is still advancing and the skills gap between talent and job requirements is widening. Microsoft estimates that there will be 149 million new technology jobs by 2025. There will be 1 million new jobs for privacy and trust, 6 million for cyber security, 20 million for data analysis, machine learning and AI, 23 million new cloud and data roles and 98 million software development jobs. “The pandemic has shined a harsh light on what was already a widening skills gap around the world – a gap that will need to be closed with even greater urgency to accelerate economic recovery,” Brad Smith, president of Microsoft, wrote in a post.

Closing the skills gap
That skills gap is not only a matter of not having enough people to fill these roles, but it’s also developers and employees not having the necessary skills to compete in a digital role. As a result, there is more of an effort to upskill employees and provide new programs and services to build new talent.

Microsoft recently launched a global skills initiative with a mission to bring digital skills to 25 million people worldwide by 2022. The initiative will work to find in-demand jobs and the skills needed to fill them, provide free access to learning paths and content to help people develop skills, and provide low-cost certification and free job-seeking tools. “At its heart, this is a comprehensive technology initiative that will build on data and digital technology. It starts with data on jobs and skills from the LinkedIn Economic Graph. It provides free access to content in LinkedIn Learning, Microsoft Learn, and the GitHub Learning Lab, and couples these with Microsoft Certifications and LinkedIn job-seeking tools,” Microsoft’s Smith wrote.

Amazon is taking a similar approach with the launch of the AWS Academy, which was designed to help higher education institutions prepare students for careers in the cloud with certifications and training.

In addition to taking advantage of programs, HackerRank’s CEO Vivek Ravisankar recommends internal employees that want to get into coding or the software development side of the business inquire about available learning paths within their organization. HackerRank provides a skills directory that includes a list of the skills needed for a given role. For instance, under the .NET learning path, the directory provides all the key competencies a person would need for a basic, intermediate or advanced understanding of the framework. Other categories in the directory include Ansible, Angular, Apache Spark, AWS, problem solving, technical communication and more.

The LinkedIn Economic Graph is designed as a digital representation of the global economy. According to the data, Microsoft has identified 10 jobs in demand today:
1. Software developers
2. Sales representatives
3. Project managers
4. IT admins
5. Customer service specialists
6. Digital marketing specialists
7. IT support/help desk
8. Data analysts
9. Financial analysts
10. Graphic designers



The most in-demand technology skills
According to the 2020 HackerRank Developer Skills report, hiring managers find full-stack developers the most in-demand role for 2020, followed by back-end developers and data scientists. Full-stack developers are required to learn new skills the most often to keep up with new methodologies and technology changes. “The relative flexibility of their role—and the breadth of technologies they have to keep up on as a result—means learning on the job never stops,” the report stated.

When looking at the language skills hiring managers look for the most, the report found JavaScript, Python, Java, C# and C++ to be the most in demand. Go was the number one programming language developers want to learn, followed by Python, Kotlin, TypeScript, R, Scala and Swift.

However, the software development landscape isn’t only about the roles organizations are looking to fill or the languages they want developers to work with. Organizations have to take into consideration what potential candidates will want to work with. In Stack Overflow’s 2020 Developer Survey, the company found when developers are looking for a job, the most important factor is the code, not necessarily the money, hours or even coworkers. Developers evaluate the technology stack they will potentially be using, with 54% of respondents citing the programming languages, frameworks and other technologies as their biggest criteria when choosing an employer.

Stack Overflow explained that when choosing a technology stack to attract talent, do not default to the most commonly used technologies. While this may feel safe, it doesn’t allow developers to innovate or grow, according to the company. For instance, having an unconventional stack can help attract high-quality engineering candidates. Stack Overflow suggested focusing on adopting languages developers are passionate about or love the most. According to its survey, the most loved languages include Rust, TypeScript, Python, Kotlin and Go. The most wanted languages were Python, JavaScript, Go, TypeScript and Rust; and the most dreaded languages included VBA, Objective-C, Perl, Assembly and C.

Once a technology stack is in place, organizations should continue to build upon the stack and update it based on learning and development needs.

Don’t be quick to judge developers
With more people working from home or open to a remote position, HackerRank’s Ravisankar believes this is an opportunity for people to utilize online resources and make the switch to a digital role. However, he worries that online resources such as coding boot camps have a bad stigma because it looks like someone can just take a crash course and then become a developer, but Ravisankar still thinks it’s something worth looking into.

The HackerRank developer skills report found Gen Z is more likely to utilize boot camps than any other previous generation, and less likely to learn coding skills from computer science books and on-the-job training. The company also found 1 in 3 hiring managers have hired a bootcamp grad and have found they are well suited for work, with 72% of managers finding they were equal to or better than other hires because of their ability to learn new technologies and languages quickly, and eagerness to take on new responsibilities.

[Chart: How developers learned to program, according to Hired’s 2020 State of Software Engineers (respondents could select more than one option). Categories include self-taught, relevant degree and college degree; figures shown are 50%, 22% and 18%.]


< continued from page 7

aspects of the cloud, if they’ve taken no steps whatsoever to educate themselves on the cloud, to me that would be a bit of a red flag.” In terms of the interview itself, applicants should also show they’re capable of using remote collaboration software by putting their best foot forward in an online interview. This means not only making sure that audio and video actually work, but ensuring that the lighting looks good, the camera looks good, and that there is no echo. “I think in any interview, whether virtual or not, we assume that the candidate is putting their best foot forward, so if a candidate is unable to use virtual technologies to communicate, if they’re unable to communicate through the camera and express things in a coherent way, if you can’t do that through the camera and through a virtual environment during an interview, then I’m not going to assume as an interviewer that you’re going to be able to do that once I hire you and you’re part of a team,” said Mead.

What happens to the tech hubs?
The tech industry is heavily located in Silicon Valley and New York City, but if everyone is working remotely, will those tech hubs be eliminated? Louis Cornejo, managing broker and president at Urban Group Real Estate, a San Francisco-based commercial realtor, doesn’t think so. Cornejo finds it hard to believe that the collaborative nature of tech can be recreated virtually. He believes that eventually companies will want to get back to the office. Currently, Urban Group Real Estate isn’t seeing San Francisco-based companies wanting to downsize or not renew their leases.

This might not be the case for much longer as large tech giants embrace the work-from-home model. Earlier this month Twitter announced that its employees can work from home permanently. Immediately after Twitter CEO Jack Dorsey’s announcement, Mark Zuckerberg also made the announcement that Facebook employees can do the same. Google employees can work from home for the rest of the year, and Amazon and Microsoft employees through at least October, CNET reported.

If remote work continues long-term, and living in San Francisco or New York is no longer a prerequisite for a high-paying tech job, it could open up the tech industry to a much wider talent pool. In the immediate future, this could mean people from lower-salary markets having the potential to significantly increase salaries by finding high-paying remote jobs in those higher cost-of-living markets, Mead explained. Mead does believe, however, that over time the markets will begin to fix themselves and that might not be the case anymore. For example, Facebook already has language in its remote work policy that implies salary adjustments if an employee is moving to a lower-cost area, the Washington Post reported.

“While I do expect that employees will gradually begin returning to physical work locations as the world recovers from the COVID-19 crisis,” said Schnee, “some hiring will inevitably transition back to in-person due to the requirements of the role and company interest; however, virtual recruitment will have a steep increase. Prior to the coronavirus, remote hiring had already started to grow. Our experiences during this crisis, combined with technology improvements, have increased comfort levels around hiring and placing someone remotely.”

The candidate matters
Going beyond traditional developers and education, hiring managers need to remember that the interview process matters. Unfortunately, there isn’t as much effort put into the candidate’s experience as there should be, according to HackerRank’s Ravisankar. “You have to remember the first touch point of an employee is actually as a candidate. You need to make sure that you can invest your time and effort, including making sure your candidate experience is out of the world,” he said.

Hans Leautaud, CTO of project management platform Rodeo, explained hiring is a lot like dating. “You can have a nice time together during a first date, but then after you’re like, ‘Hmm, no.’ And it’s tough to put your finger on what it is that wasn’t right. Of course, you work with the benefit of the doubt, but in the end, it is whether or not you get excited by the candidate and the candidate gets excited by you, as a person but also for the company, of course. So there’s a huge dating element, in my opinion,” he said.

When looking at candidates, Leautaud looks at whether or not the candidate can teach the company something. Hiring is a two-way street, he explained. You have to look at what the candidate can learn from the company, but also what the company can learn from the candidate. For instance, can the candidate be taught something, learn and grow? And then there is an “airport test”: would you want to be stuck with this person for hours waiting for your plane?

However, Eric Riz, founder and CEO of VERIFIED, a data analytics company focused on verifying resumes and social profiles, warns about getting too caught up with the candidate experience. “The first thing an organization should do is spend more time reviewing content and looking at a resume before bringing someone in,” he said. “Make sure that you are 1,000% happy and satisfied with the information. Make sure that you’ve done your due diligence and you’ve done your homework before you bring somebody in because you don’t want to fall in love with their story, history or experience if it isn’t true.”



User feedback can be found in unlikely places

BY JENNA SARGENT

In 2013, customer experience firm Walker released a report predicting that by 2020, user experience would be the key differentiator for brands, and that price and product would become less important to users when choosing among digital services. Well, 2020 is here, and that prediction seems to have been pretty accurate.

In today’s highly competitive digital economy, offering your customers a product they actually like to use is key. Unless you have a highly specific, one-of-a-kind application, there’s always another — possibly better — option for your users to switch to if you’re not providing what they need. The testing phase of the software development life cycle may help find bugs in an application, but it can’t catch everything. To ensure that your users are actually enjoying the time spent in your application, you need to continually gather feedback from those users. “Even a program that is coded perfectly and meets the standards of the stakeholders may not work with the way certain users interact with the application,” said Nikhil Koranne, assistant vice president of operations at software development company Chetu.

A 2019 survey from Qualtrics revealed that 81% of respondents planned to increase their focus on customer experience in 2020. The report also showed that 71% of companies with customer experience initiatives saw positive value from those efforts. By gathering user feedback, development teams can continually improve and fine-tune their applications, giving users requested features and fixing issues that might not have been discovered during testing.

There are a number of ways that teams can collect user feedback, some of which might just occur naturally. For example, user reviews or support questions are feedback that comes in on its own. If a team is getting a lot of support questions about a certain feature because users are finding it difficult to use, the team can use that information to determine what needs to be worked on in future releases. “With hundreds of questions a day, we keep a pulse on what people are asking for or where we could make parts of our product easier to understand. We aggregate these support conversations and share common themes to help the product team prioritize,” said Zack Hendlin, VP of product at OneSignal.

Hendlin also said that his team collects feedback in other forms, such as data analysis, user research sessions, and conversations with customers. The team analyzes user data, such as where users start an action and where they drop off. “Looking at points where there are big dropoffs in integrating us into their site, viewing delivery statistics, upgrading to a paid plan for more features, and the like allow us to optimize those parts of the user journey,” he said. Hendlin added that heatmap tools such as Hotjar and Google Analytics are useful for this type of user data analysis.

Itamar Blauer, an SEO consultant, also found Hotjar to be a helpful tool for tracking user behavior across sites. “The best ways that I found to monitor user experience were through heatmap analyses. I would use tools such as Hotjar to track user behavior across a website, identifying the portions of content they were most attracted to, as well as seeing at which part of the page they would exit.”

User research sessions are sessions in which a select number of users get access to an early version of a new release. According to Hendlin, this process can help answer the following questions: “Is the way we are planning to solve the problem actually solving


the problem for them? Is it easy to use without needing explanation? Are there needs or desires they have that we haven’t thought about?”

User research sessions are also referred to as user acceptance testing (UAT), which often occurs in the last phase of the development cycle, explained Chetu’s Koranne. According to Koranne, UAT is typically handled by the project management team. The team is responsible for setting up the parameters of the testing environment, such as the testing group and script of commands. This team then delivers the results of testing back to the developers. Koranne recommends that beta release participants be selected carefully and thoughtfully.

“The ideal testing group that project managers are looking for would consist of third-party, real-world users with relevant experience,” said Koranne. “These types of users will be able to maneuver through the programs without any preconceived notions of how the process should work, and approach each action the same way other end-users would operate. Stakeholder testing is important as well, as you want to make sure that the program is running as it was originally proposed, but the real value comes from end-users that the application is being built for. When it comes to what kind of end-users are preferred over others, project managers want those with industry experience in the function the application is being developed for, rather than a completely random sample. However, users from a diverse set of company backgrounds are preferable to ensure that the program is accounting for operational use from a multitude of end-users.”

“If we see that it is something that appears more frequently and it is something that appears really painful, we are taking it into the next development cycle.”
— Anna Boyarkina, Miro

The final way that Hendlin’s team at OneSignal gathers feedback is by having actual conversations with their customers. By engaging with customers, product teams may learn where there are disconnects between users and the products, and what they can do to fix those. “Really understanding users comes from talking to them, observing how they interact with the product, analyzing where they were trying to do something but had a hard time, and seeing where they need to consult documentation or ask our support team,” said Hendlin. “There was a Supreme Court justice, Louis Brandeis, who said ‘There is no such thing as great writing, only great rewriting,’ and working on building a product and improving it is kind of the same way. As you get user feedback and learn more, you try to ‘re-write’ or update parts of the product to make them better.”

Anna Boyarkina, head of product at online whiteboard tool Miro, said that Miro also gathers feedback from a variety of sources, including support tickets, surveys, customer calls, and social media.
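The drop-off analysis Hendlin describes, seeing where users start an action and where they abandon it, boils down to a simple funnel computation. A minimal sketch follows; the step names and counts are invented for illustration and are not OneSignal data:

```python
# Naive funnel analysis: given how many users reached each step,
# compute the drop-off rate between consecutive steps.
def drop_off_rates(funnel):
    """funnel: ordered list of (step_name, user_count) tuples."""
    rates = {}
    for (prev_step, prev_count), (step, count) in zip(funnel, funnel[1:]):
        lost = prev_count - count
        rates[f"{prev_step} -> {step}"] = lost / prev_count
    return rates

# Hypothetical journey for a web SDK: sign up, integrate, view stats, upgrade.
funnel = [
    ("signed_up", 1000),
    ("integrated_sdk", 400),
    ("viewed_delivery_stats", 300),
    ("upgraded_to_paid", 60),
]

for transition, rate in drop_off_rates(funnel).items():
    print(f"{transition}: {rate:.0%} dropped off")
```

The transition with the largest drop-off rate is where optimization effort pays off first.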

Product integration teams
With information coming in from all of these different sources, it’s important to have a process for handling and sorting through it all. Boyarkina explained that at Miro there is a product integration team tasked with consolidating all of this feedback and then handing it off to the appropriate team. All of the feedback gets put into a customer feedback lake and then tagged. “For instance if it is a support ticket, it is tagged by a support team,” she said. “If it is from the call of the customer, there is a special form which is submitted by a customer success representative or sales representative, which also contains a tag.”

Koranne believes that feedback needs to be considered from a cross-functional perspective, because many applications in business don’t just affect a single team. For example, an HCM application might be used across HR, Payroll, and IT, so all three of those teams would need to be involved in the process of gathering feedback. “Conversely, the project management/development team would need a cross-functional setup as the feedback given may affect multiple layers of the application,” said Koranne. According to Koranne, an ideal cross-functional team would consist of a business analyst, tester, and UI/UX developer to address feedback items.

6 best practices for integrating design
According to a recent study from Limina, only 14% of organizations consider themselves to be Design-Integrated companies. Design-Integrated organizations, according to Limina, are those that are “embedding a human-centered design culture into their organizations to gain both exceptional customer experiences as well as business and financial goals.”

“When the entire organization is focused on the needs of the user and the value their product delivers, better products are created and brought to market,” said Jon Fukuda, co-founder and principal of Limina. “Strong alignment among cross-functional teams creates higher efficiency, working together toward the common goal of creating higher quality user-centered products, which leads to cost savings and increased revenue.”

According to Limina, there are three major barriers to becoming Design-Integrated: C-level support, human-centered design culture, and alignment of operations and metrics. The company offered up six best practices that companies should follow if they wish to become a Design-Integrated business:
• Embed a human-centered design culture in every corner of the company, starting with the C-suite
• Establish a common language to drive understanding, mitigate risks, and improve processes
• Integrate design resources into relevant business functions
• Capture specific metrics and manage them to bridge organizational divisions and drive business outcomes
• Create reusable artifacts and repeatable processes
• Invest in artifacts, then processes, then systems

—Jenna Sargent
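The tag-and-route flow Boyarkina describes, a shared “feedback lake” where every item carries a source tag so it can be handed to the right team, can be sketched minimally like this. All names and routing rules here are illustrative, not Miro’s actual system:

```python
# Every feedback item lands in one "lake" with a tag identifying its source,
# so a product integration team can route it to the owning team.
feedback_lake = []

ROUTES = {  # hypothetical source-tag -> owning-team mapping
    "support_ticket": "support",
    "customer_call": "customer_success",
    "survey": "product",
    "social_media": "marketing",
}

def ingest(text, source):
    """Store one feedback item, tagged and routed; unknown sources go to triage."""
    item = {"text": text, "tag": source, "owner": ROUTES.get(source, "triage")}
    feedback_lake.append(item)
    return item

ingest("Export to PDF fails on large boards", "support_ticket")
ingest("Customer asked for a calendar integration", "customer_call")

# Group the lake by owning team for hand-off.
by_owner = {}
for item in feedback_lake:
    by_owner.setdefault(item["owner"], []).append(item["text"])
print(by_owner)
```

The useful property is that every downstream consumer queries one store with consistent tags, rather than each team keeping its own silo.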

Prioritizing features
Once the information is with the appropriate team, that team needs to decide what to do with it. At OneSignal, the product team ranks feature requests on a five-factor scale, Hendlin said. Those factors are frequency, impact, value, strategic importance, and engineering effort.

Frequency relates to how common a request is. If a similar request is coming in hundreds of times, that feature is prioritized more highly than one causing issues for only a handful of users. The team also looks at the impact a feature has on a user: a minor change to the UI would rank low, while seamless data syncing would rank high, Hendlin explained.

The next two factors are business considerations. The team considers what financial benefit there is to fixing something. In other words, would users be willing to pay for the feature? The team also considers whether a new feature drives new growth opportunities for the company. Finally, the team looks at how hard the feature would be to build, as well as how much time and effort it would take.

“Once we weigh these attributes for a feature, we decide what to take on...and just as importantly, what not to,” Hendlin said.
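A weighted-scoring rubric along the lines Hendlin outlines might look like the sketch below. The weights and the 1–5 scales are invented for illustration; this is not OneSignal’s actual model:

```python
# Score feature requests on five factors, then rank them.
# Each factor is scored 1-5; weights are arbitrary and sum to 1.0.
WEIGHTS = {
    "frequency": 0.25,   # how often the request comes in
    "impact": 0.25,      # how much it changes the user's experience
    "value": 0.2,        # would users pay for it?
    "strategic": 0.2,    # does it open new growth opportunities?
    "effort": 0.1,       # engineering cost (higher score = cheaper to build)
}

def score(request):
    return sum(WEIGHTS[f] * request[f] for f in WEIGHTS)

requests = [
    {"name": "seamless data sync", "frequency": 5, "impact": 5,
     "value": 4, "strategic": 4, "effort": 2},
    {"name": "minor UI tweak", "frequency": 2, "impact": 1,
     "value": 1, "strategic": 1, "effort": 5},
]

for r in sorted(requests, key=score, reverse=True):
    print(f"{r['name']}: {score(r):.2f}")
```

Note the effort factor is inverted (cheap work scores high) so that a plain weighted sum still ranks expensive features lower, all else being equal.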

Feedback is an ongoing process
The team at OneSignal works in weekly sprints, Hendlin explained. Before each sprint, the team meets and determines whether something that came up through user feedback ranks higher than what it had planned to work on during that sprint. “We try to avoid major changes midway through a sprint, but we will re-prioritize as we learn more from our customers,” said Hendlin.

Boyarkina’s team also prioritizes the information gathered from customer feedback. She explained that some feedback requires immediate attention: for critical issues, Miro has a 24-hour SLA, so those fixes are implemented right away. If it is a feature request, it gets moved into a backlog and discussed. The product team at Miro gets together on a biweekly basis and is given a report with user insights. On top of that, it holds monthly user insight meetings where it dives into what users are saying and any trends that are occurring.

When considering whether to implement feature requests, there are a few things Miro’s teams look at. First, they determine whether a feature aligns with the existing product roadmap. They also look at the frequency of a particular request. “If we see that it is something that appears more frequently and it is something that appears really painful, we are taking it into the next development cycle,” said Boyarkina.

As soon as the team has a prototype of that feature ready, users who requested it are invited to participate in a beta, Boyarkina explained. Those users are also informed when the feature is actually released. “If we know who requested a certain feature, we usually send a ‘hey we released this, you asked for that’ and it’s usually really pleasant for people,” said Boyarkina.

Challenges of gathering user feedback
One of the obvious challenges of gathering and interpreting user feedback is being able to consolidate and sort through information coming in from different sources. “Even when an organization is able to successfully set up the technological capability (not to mention the cultural support) for gathering continuous user feedback, it’s another task entirely to smartly parse that information, synthesize the insights, determine a course of action, and then execute on them,” said Jen Briselli, SVP of experience strategy and service design at Mad*Pow, a design consulting company.

Briselli went on to explain that viewing this as a challenge is a bit of a red herring. “Figuring out the most successful way to procure, interpret, and act on this feedback is less a function of logistics around how, and far more critically a function of internal alignment around why,” said Briselli. She believes that the companies with the most success here are the ones in which there is stakeholder buy-in to the idea. “Solving for the logistics of data collection and response, and translation for user requirements and development, all fall more naturally out of the process when leadership has bought in and invested in the outcome. From there, finding the methods that fit existing workflows and building the skillsets necessary for its execution resolve more easily,” she said.

Mehdi Daoudi, co-founder and CEO of Catchpoint, agrees that a big challenge is the vast amount of data, but he sees this as an opportunity more than a challenge. “I think the fact that we have these different data sources makes the data even more accurate because it allows us to not only connect the dots, but validate that the dots are even correct,” said Daoudi. “So I think the biggest challenge is the amount of data, but I think there is an opportunity there just because of its richness as well.” z

User sentiment analysis
The process of gathering user feedback should be tied closely to APM. According to Catchpoint, APM and network monitoring tools often forget about the “last mile of a digital transaction”: the user. “Why do you buy expensive monitoring tools if it’s not to catch problems before users are impacted? That’s the whole point,” said Mehdi Daoudi, co-founder and CEO of Catchpoint.

User sentiment analysis is another element of user monitoring. It uses machine learning to sort through all of the feedback coming in and interpret how a user might be feeling. “Because we are in an outcome economy, in an experience economy, adding the voice of the users on top of your monitoring data is critical,” said Daoudi.

As part of this user sentiment analysis, Catchpoint has a free website called WebSee.com, which collects and analyzes user sentiment data. The goal of WebSee is to enable organizations to better respond to service outages. End users can self-report issues they have on various sites, and that data is aggregated and verified by Catchpoint.

According to Daoudi, user sentiment is a big part of observability. “People talk about observability but what are we observing? Machines? Are we observing users? It’s actually a combination of both these things and we are all trying to make the internet better for our customers and our employees and so observability needs to take into account multiple things including user sentiment.”

—Jenna Sargent
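Production sentiment analysis relies on trained machine learning models, but the basic idea can be illustrated with a naive lexicon-based scorer. The word lists below are invented for the sketch and are far cruder than anything a real monitoring product would use:

```python
# Naive sentiment scoring: count positive vs. negative words per report.
POSITIVE = {"fast", "great", "love", "easy", "works"}
NEGATIVE = {"slow", "broken", "down", "error", "crash", "outage"}

def sentiment(text):
    """Classify a self-reported issue as positive, negative, or neutral."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reports = [
    "Checkout is broken, getting an error on every attempt",
    "Site feels fast and easy to use, love it",
]
print([sentiment(r) for r in reports])
```

Aggregating such scores over time and correlating the spikes with monitoring data is what turns raw self-reports into an outage signal.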


WHY SOFTWARE DELIVERY MANAGEMENT MATTERS
BY MIKE BALDANI

Software delivery has come a long way in the last 10 years. Many organizations have scrapped the restrictive waterfall model in favor of collaborative approaches that enable them to build faster, change features midstream and deliver updates continuously.

But today’s delivery processes aren’t as efficient as they could be. They’re still fraught with bottlenecks, mistakes and confusion. Teams are using too many tools that perform specific functions and don’t tie together well. Data is scattered throughout departments with no single pane of glass to provide visibility into the entire delivery life cycle. As much as we’ve moved forward, we’re not there yet.

There’s a fundamental disconnect in the way companies continue to deliver software. Creating DevOps cultures and adopting continuous delivery practices have helped, but companies still aren’t managing with the precision necessary to meet the demands placed on industry in the future. What’s needed is a more holistic approach to the management of software delivery.

That approach is outlined in a new model called Software Delivery Management (SDM). It’s more than just an industry term. It’s a new framework that outlines ways to remove the blockers that lead to inefficiencies — to connect the people, tools and processes that play key roles in software delivery.

Mike Baldani is product director at CloudBees.

Building on DevOps and CI/CD
In an enterprise setting, there are often a variety of development methods, tools and technology stacks being used to deliver a wide range of software to meet different needs, through different processes. SDM doesn’t replace DevOps, continuous delivery or continuous integration (CI/CD). These are bedrock concepts that will be essential to software delivery for years to come. Instead, it builds on them.

Even in those companies with a mature CI/CD pipeline, multiple daily deployments and a full, company-wide commitment to DevOps, there is often no end-to-end insight into the value stream — where products and features are stuck now or get stuck frequently, where bottlenecks and inefficiencies slow down value delivery to end users. Teams following DevOps “best practices” often do things completely differently from each other, even within the same company. There is also a complete inability to understand how software affects business key performance indicators (KPIs).

CI/CD helps teams accelerate software delivery — but it doesn’t ensure that they are delivering the right software, or that the business need is being met. It doesn’t pull together data and artifacts across the entire software delivery lifecycle from the many siloed tools an organization relies on, to provide a single overview with the contextual information that developers, product managers, operations teams, product marketers and support teams need. CI/CD doesn’t provide the data needed to measure how well the software organization is creating value for the business — and without a way to measure, software organizations have no way to know if they are reaching prescribed KPIs or even improving.

SDM changes the game
Software Delivery Management evolves CI/CD in several ways. It proposes the use of connected tools, so stakeholders aren’t working in technology silos. It proposes a holistic data model, so all stakeholders in the organization are accessing and sharing the same information. It also extends the feedback loop to encompass the entire application lifecycle, from issue creation to end users interacting with the application.

Just as DevOps breaks down the walls between the development and operations teams, SDM breaks down the walls to connect software delivery to cross-functional teams across the organization. It allows them to communicate and collaborate better through a unified process, to ultimately make better software faster that also effectively addresses the business needs and creates value for the customer.

How software delivery management brings value
Development and operations teams talk a lot about speed and frequency: if we can deliver hundreds or thousands of times per day, we have a cutting-edge software development process. At the end of the day, though, software delivery is not just about speed: it is about delivering the best possible product — one that meets customers’ needs — as quickly as possible. Speeding up delivery of poor-quality software, or software that doesn’t address the market need, accomplishes nothing.

While SDM still places value on speed, the emphasis is on value delivery across the entire software organization through application of the four pillars of SDM:
1. Common data
2. Universal insights
3. Common connected processes
4. All functions collaborating

The goal isn’t just to automate everything as a way to deliver instantaneously, but rather to free the creative humans who build software from low-value tasks and allow them to spend more time and energy on high-value, collaborative and creative work. And to give the entire organization visibility into the data required to make software drive better business outcomes.

In contrast to stereotypes about the lone genius developer, companies get the highest value out of collaborative, cross-functional efforts. A DevOps culture is a step in the right direction, but true cross-functionality should involve collaboration among all stakeholders in the software organization, all the way through to customer success. When this happens, everyone has more time to focus on high-value work, from developers who no longer blindly create features to customer support specialists who are empowered to fix the root cause of customer frustration instead of repeatedly addressing the same customer concerns.

Adopting SDM means not having to sort through customer data to deduce what the problem statement might be, then having non-developers create the requirements for a new feature to address those problems — then handing the requirement set to the software organization without ever discussing the original data. Instead, customer support teams, product stakeholders and developers have access to the same shared database and the same information about customer usage. They can work together to both identify the problem and brainstorm how software could be used to solve it. Everyone understands the trade-offs different potential solutions involve, and can decide collectively how best to prioritize and plan feature development to get the desired business results while delivering value to the customer. It’s more likely the developers will deliver the right product the first time — and if they don’t, they have enough information in the feedback loop to iterate until the software solves the customer need. Everyone, from developers to product marketers to support teams, has visibility into the process and knows when to expect new features or updates.

The next era of software delivery
Software delivery has come a long way. Companies are building faster, updating more often and releasing higher-quality applications than ever before. But as technologies advance and business sectors evolve, enterprises will face pressure to do more and do it better. There’s still a lot of room for improvement. The industry can get there if it approaches software delivery holistically and defines it as a core business function. z


DEVOPS WATCH

The HCL Software DevOps product portfolio expands with three major product releases
BY JAKUB LEWKOWICZ

HCL Software expanded its software DevOps team and product portfolio with the July releases of three major tools: HCL Accelerate 2.0, HCL Launch 7.1, and HCL OneTest 10.1.

“Digital transformation investments are really accelerating and companies are doubling down here. We see a whole slew of investment in collaboration security tools but Agile, IT, delivery, and DevOps are still top of the list and very relevant to keeping your business thriving during this changing time,“ said Brian Muskoff, the director of DevOps strategy at HCL Software DevOps.

The first update, Accelerate 2.0, is the company’s value stream management platform that aims to improve throughput while identifying bottlenecks and unifying data from across the organization. Release teams can move to more frequent releases with self-service tools, including templates and automated controls, according to the company. Accelerate 2.0, formerly known as HCL UrbanCode Velocity, includes a new DevOps query language that allows users to model their value stream and see the cycle time for each stage. It’s also tool-agnostic and sits on top of existing DevOps toolsets. The new “swim lanes” view lets users see who is working on each item in an organization’s value stream.

“When we look at what’s next, we believe value stream management really enables companies to make this leap to have software delivery as an ongoing core business process,” Muskoff said. “VSM is not a new concept; it’s been a staple of lean for decades, but it’s an emerging concept for our space - software delivery, and it’s an excellent framework for navigating a software-driven transformation.”
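Cycle time per stage, the core measurement a value stream view surfaces, reduces to timestamp arithmetic over each work item’s stage history. The sketch below is purely illustrative and has nothing to do with HCL’s actual query language; the item name and dates are invented:

```python
from datetime import datetime

# Each work item records when it entered each stage of the value stream.
item_history = {
    "ITEM-104": {
        "backlog": datetime(2020, 7, 1),
        "in_progress": datetime(2020, 7, 3),
        "review": datetime(2020, 7, 8),
        "deployed": datetime(2020, 7, 9),
    },
}

STAGES = ["backlog", "in_progress", "review", "deployed"]

def stage_cycle_times(history):
    """Days spent in each stage before the item moved to the next."""
    times = {}
    for current, following in zip(STAGES, STAGES[1:]):
        times[current] = (history[following] - history[current]).days
    return times

print(stage_cycle_times(item_history["ITEM-104"]))
```

Aggregating these per-stage durations across many items is what reveals where work gets stuck, the bottleneck question value stream management exists to answer.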

Another major release, HCL Launch 7.1, is the company’s continuous delivery platform, called HCL UrbanCode Deploy in previous versions. The new release includes updates in flexibility, governance and security, HCL explained.

“The new features are tailored to help you modernize your continuous delivery with features that facilitate automation of deployments to development environments and allow deployments to first reach out to your change management systems to determine if they should proceed,” Hayden Schmackpfeffer, senior developer at HCL Software, wrote in a blog post. “This update expands our support of enterprise-grade needs so you can architect for the future and continue your digital transformation journey.”

A new feature, External Approvals Processes, will run immediately before a deployment, and its success or failure will determine whether the deployment is permitted to execute. Also, with the addition of deployment triggers, development environments can be configured to get the latest build artifacts deployed as soon as they are received by the Launch server. The Launch server, agent, and relay now support the Kubernetes operator model, making it easier to deploy, run, and manage each component of HCL Launch, according to the company.

The third major product, HCL OneTest 10.1, enables testers to take automation all the way from mainframe terminals with 3270 interfaces through various responsive web applications. The testing solution supports scripting options in Java, VB.NET and Storyboard, and supports all of the major web browsers. Users can create realistic workloads on their systems, track SLAs,

and analyze the root causes of issues.

“It’s one thing to be able to execute large test suites and run thousands of different test suites but at the end of the day we want to find what failed quickly and get that fast feedback to get it over to the development teams to get it fixed. We’re always looking to be pluggable as an API-first organization,” said Steve Boone, the head of product management at HCL Software DevOps. “We’re building out our APIs to make sure that they’re easily extendable and that our end users are able to build the integrations that they need and care about in this platform.”

Users can also add on automated application security testing and management with HCL AppScan. The Software DevOps portfolio also includes HCL Compass, which handles workflow, change, and issue management, and HCL VersionVault for configuration management. Real-time systems and IoT software modeling is available through HCL RTist.

“The HCL Software DevOps portfolio is your one-stop DevOps shop, providing you with the tools you need to achieve business agility. Our mission is to enable you to transform your business by providing actionable insights based on speed to market, operational cost, and risk,” the company wrote in a post. “An investment in HCL Software doesn’t mean you have to replace the tools and platforms your team knows and loves. Instead, we help you maximize and leverage the tools you already have, like Jira, Jenkins, Git, and Kubernetes.”

Lastly, HCL Software will be providing workshops in which DevOps experts offer their take on DevOps culture and practices tailored to each organization. z



AI and ML make testing smarter
…but autonomous tools are a long way from being enterprise-ready
BY LISA MORGAN

AI and machine learning (ML) are finding their way into more applications and use cases. Software testing vendors are increasingly offering “autonomous” capabilities to help customers become yet more efficient. Those capabilities are especially important for Agile and DevOps teams that need to deliver quality at speed. However, autonomous testing capabilities are relatively new, so they’re not perfect or uniformly capable in all areas. And the “autonomous” designation does not mean the tools are in fact fully autonomous; they’re merely assistive.



“Currently, AI/ML works great for testing server-side glitches and, if implemented correctly, it can greatly enhance the accuracy and quantity of testing over time,” said Nate Nead, CEO of custom software development services company Dev.co. “Unfortunately, where AI/ML currently fails is in connecting to the full stack, including UX/UI interfaces with database testing. While that is improving, humans are still best at telling a DevOps engineer what looks best, performs best and feels best.”

Dev.co has tried solutions from TextCraft.io and BMC, and attempted some custom internal processes, but the true “intelligence” is not yet where imaginations might lead, Nead said.

It’s early days
Gartner Senior Director Analyst Thomas Murphy said autonomous testing is “still on the left-hand side of the Gartner Hype Cycle.” (That’s the early-adopter stage, characterized by inflated expectations.) The good news is there are lots of places to go for help, including industry research firms, consulting firms, and vendors’ services teams. Forrester VP and Principal Analyst Diego Lo Giudice created a five-level maturity model inspired by SAE International’s “Levels of Driving Automation” model. Level 5 (the most advanced level) of Lo Giudice’s model, explained in a report, is fully autonomous, but that won’t be possible anytime soon, he said. Levels one through four represent increasing levels of human augmentation, from minimal to maximum. The most recent Gartner Magic Quadrant for Software Test Automation included a section about emerging autonomous testing tools. The topic will be covered more in the future, Murphy said. “We feel at this point in time that the current market is relatively mature, so we’ve retired that Magic Quadrant and our intent is to start writing more about autonomous capabilities and potentially launch a new market next year,” said Murphy. “But first, we’re trying to get the pieces down to talk about the space and how it works.”

Forrester’s Lo Giudice said AI was included in most of the criteria covered in this year’s Continuous Functional Test Automation Wave. “There was always the question of, tell me if you’re using AI, what for and what are the benefits,” said Lo Giudice. “Most of the tools in the Wave are using AI, machine learning and automation at varying levels of degree, so it’s becoming mainstream of who’s using AI and machine learning.”

How AI and ML are being used in testing
AI and ML are available for use at different points in the SDLC and for different types of testing. The most popular and mature area is UI testing. “Applitools allows you to create a baseline of how tolerant you want to be on the differences. If something moved from the upper right-hand corner to the lower left-hand corner, is that a mistake or are you OK with accepting that as the tests should pass?” said Forrester’s Lo Giudice. There’s also log file analysis that can identify patterns and outliers. Gartner’s Murphy said some vendors are using log files and/or a web crawling technique to understand an application and how it’s used. “I’ll look at the UI and just start exercising it and then figure out all the paths just like you used to have in the early days of web applications, so it’s just recursively building a map by talking through the applications,” said Murphy. “It’s useful when you have a very dynamic application that’s content-oriented [like] ecommerce catalogs, news and feeds.” If the tool understands the most frequently used features of an application, it may also be capable of comparing its findings with the tests that have been run. “What’s the intersection between the use of the features and the test case that you’ve generated? If that intersection is empty, then you have a concern,” said Forrester’s Lo Giudice. “Am I designing and automating tests for the right features? If there’s a change in that space I want to create tests for those applications. This is an optimization strategy, starting from production.”
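Lo Giudice’s intersection check can be sketched in a few lines of set arithmetic: compare the features users actually exercise in production (mined from logs, say) against the features the test suite covers. The feature names and the `coverage_gaps` helper below are hypothetical, purely for illustration.

```python
# Illustrative sketch of the "intersection of feature use and tests" idea.
# Feature names and data are invented; a real tool would mine them from
# production logs and test metadata.

def coverage_gaps(used_features, tested_features):
    """Return features users rely on that no test exercises, and
    tests that cover features nobody uses."""
    used, tested = set(used_features), set(tested_features)
    return {
        "untested_but_used": used - tested,  # the worrying empty-intersection case
        "tested_but_unused": tested - used,  # candidates for pruning
        "covered": used & tested,
    }

report = coverage_gaps(
    used_features=["checkout", "search", "wishlist"],
    tested_features=["checkout", "search", "admin_export"],
)
print(report["untested_but_used"])  # {'wishlist'}
```

An empty `covered` set is the concern Lo Giudice describes: the suite is testing things production users never touch.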


Software Testing: The Second of Three Parts

Natural language processing (NLP) is another AI technique that’s used in some of the testing tools, albeit to bring autonomous testing capabilities to less technical testers. For example, the Gherkin domain specific language (DSL) for Cucumber has a relatively simple syntax: “Given, When, Then,” but natural language is even easier to use. “There’s a [free and open source] tool called Gauge created by ThoughtWorks [that] combines NLP together with the concept of BDD so now we can start to say you can write requirements using a relatively normal language and from that the tool can figure out what tests you need, when you met the requirement,” said Gartner’s Murphy. “[T]hen, they connect that up to a couple of different tools that create those [tests] for you and run them.” Parasoft uses AI to simplify API testing by allowing a user to run the record-and-play tool, and from that it generates APIs. “It would tell you which APIs you need to test if you want to go beyond the UI,” said Forrester’s Lo Giudice. Some tools claim to be “self-healing,” such as noticing that a path changed based on a UI change. Instead of making the entire test fail, the tool may recognize that although a field moved, the URL is the same and that the test should pass instead of fail. “Very often when you’re doing Selenium tests you get a bug, [but] you don’t know whether it’s a real bug of the UI or if it’s just the test that fails because of the locator,” said Lo Giudice. “AI and machine learning can help them get over those sorts of things.” AI and ML can also be used to identify similar tests that have been created over time so the unnecessary tests can
continued on page 24 >
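The Given/When/Then idea described above can be made concrete with a small sketch: a few regex-based step definitions that map requirement-style sentences to executable checks, loosely in the spirit of Cucumber and Gauge. The step patterns and the shopping-cart domain here are invented for illustration, not taken from either tool.

```python
import re

# Toy Given/When/Then runner: each decorated function registers a pattern,
# and run_scenario matches plain-language lines against them.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Given an empty cart")
def empty_cart(ctx):
    ctx["cart"] = []

@step(r"When I add (\d+) items?")
def add_items(ctx, n):
    ctx["cart"].extend(range(int(n)))

@step(r"Then the cart has (\d+) items?")
def check_count(ctx, n):
    assert len(ctx["cart"]) == int(n), ctx["cart"]

def run_scenario(lines):
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            match = pattern.fullmatch(line)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError(f"no step matches: {line!r}")
    return ctx

ctx = run_scenario([
    "Given an empty cart",
    "When I add 2 items",
    "Then the cart has 2 items",
])
```

Real BDD tools layer far richer matching (and, per Murphy, NLP) on top, but the mapping from readable requirement lines to runnable assertions is the core mechanism.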



Autonomous testing: are we there yet?
BY LISA MORGAN

A couple of years ago, there was a lot of hype about using AI and machine learning (ML) in testing, but not a lot to show for it. Today, there are many options that deliver important benefits, not the least of which are reducing the time and costs associated with testing. However, a hands-on evaluation may be sobering. For example, Nate Custer, senior manager at testing automation consultancy TTC Global, has been researching autonomous testing tools for about a year. When he started the project, he was new to the company and a client had recently inquired about options. The first goal was to build a technique for evaluating how effective the tools were in testing. “The number one issue in testing is test maintenance. That’s what people struggle with the most. The basic idea is that you automate tests to save a little bit of time over and over again. When you test lots of times, you only run tests if the software’s changed, because if the software changes, the test may need to change,” said Custer. “So, when I first evaluate stuff, I care about how fast I can create tests, how much can I automate and the maintenance of those testing projects.” Custer’s job was to show how and where different tools could and could not make an impact. The result of his research is that he’s optimistic, but skeptical.

There’s a lot of potential, but…
Based on first-hand research, Custer believes that there are several areas where AI and ML could have a positive impact. At the top of the list is test selection. Specifically, the ability to test all of what’s in an enterprise, not just web and mobile apps. “If I want to change my tools from this to that, the new tool has to handle everything in the environment. That’s the first hurdle,” said Custer. “But what tests to run based on this change can be independent from the platform you use to execute your test automation, and so I think that’s the first place where you’re going to see a breakthrough of AI in the enterprise. Here’s what’s changed, which tests should I run? Because if I can run 10% of my tests and get the same benefit in terms of risk management, that’s a huge win.”

‘The number one issue in testing is test maintenance. That’s what people struggle with the most.’
—Nate Custer, TTC Global

The second area of promise is surfacing log differences, so if a test that should take 30 seconds to run suddenly took 90 seconds, the tool might suggest that the delay was caused by a performance issue. “Testing creates a lot of information and logs and AI/ML tools are pretty good at spotting things that are out of the ordinary,” said Custer. The third area is test generation using synthetic test data, because synthetic data can be more practical (faster, cheaper and less risky) to use than production data. “I’m at a company right now that does a lot of credit card processing. I need profiles of customers doing the same number of transactions, the same number of cards per household that I would see in production. But I don’t want a copy of the production data because that’s a lot of important information,” said Custer. Self-healing capabilities showed potential, although Custer wasn’t impressed with the results. “Everything it healed already worked.

So, you haven’t really changed maintenance. When a change is big enough to break my automation, the AI tool had a hard time fixing it,” said Custer. “It would surface really weird things. So, that to me is a little longer-term work for most enterprise applications.”
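Mechanically, the “self-healing” behavior the tools advertise (and Custer evaluated) amounts to falling back on other recorded attributes when a test’s primary locator no longer matches. The sketch below uses a plain list of dicts as a stand-in for a real DOM; the element names and locator strategies are invented, and real tools use much more sophisticated matching.

```python
# Toy self-healing lookup: try the recorded locators in priority order
# instead of failing the test the moment the first one misses.

def find_element(dom, locators):
    """Return the first matching element plus the strategy that 'healed' it."""
    for strategy, value in locators:
        for element in dom:
            if element.get(strategy) == value:
                return element, strategy
    raise LookupError(f"no element matches any locator in {locators}")

dom = [
    {"id": "btn-buy-2", "text": "Buy now", "css_class": "buy-button"},
]
# The test recorded id "btn-buy" before a UI change renamed the element.
locators = [("id", "btn-buy"), ("text", "Buy now"), ("css_class", "buy-button")]

element, used = find_element(dom, locators)
print(used)  # 'text' -- the lookup succeeds via a fallback attribute
```

This also illustrates Custer’s complaint: the fallback only helps when some stable attribute survives the change; a big enough change leaves nothing to heal from.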

Are we there yet?
“Are We There Yet?” was the title of Custer’s research project, and his conclusion is that autonomous testing isn’t ready for prime time in an enterprise environment. “I’m not seeing anything I would recommend using for an enterprise customer yet. And the tools that I’ve tested didn’t perform any better. My method was to start with a three-year-old version of software, write some test cases, automate them, go through three years of upgrades and pay attention to the maintenance it took to do those upgrades,” said Custer. “When I did that, I found it didn’t save any maintenance time at all. Everybody’s talking about [AI], everyone’s working on it but there are some of them I’m suspicious about,” said Custer. For example, one company requested the test script so they could parse it in order to understand it. When Custer asked how long it would take, the company said two or three hours. Another company said it would take two or three months to generate a logical map of a program. “[T]hat doesn’t sound different from hiring a consultant to write your testing. AI/ML stuff has to actually make life easier and better,” said Custer. Another disappointment was the lack of support for enterprise applications such as SAP and Oracle eBusiness Suite. “There are serious limitations on what technologies they support. If I were writing my own little startup web application, I would look at these tools. But if I were a Fortune 500 company, I think it’s going to take them a couple of years to get there,” said Custer. “The challenge is most of these companies aren’t selling a little add-on that you can add into your existing system. They’re saying change everything from one tool that works to my thing and that’s a huge risk.”





< continued from page 21

be eliminated. Dev.co uses AI and ML to find and fix runtime errors faster. “The speed improvements of AI/ML allow for runtime errors to be navigated more quickly, typically by binding and rebinding elements in real time, and moving on to later errors that may surface in a particular batch of code,” said Dev.co’s Nead. “Currently, the machine augmentation typically occurs in the binding of the elements, real-time alerts and restarts of testing tools without typically long lags between test runtime.”
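The similar-test elimination idea that opens this passage can be sketched with a crude stand-in for the ML clustering the tools perform: compare tests by the overlap (Jaccard similarity) of the steps they execute. The test names, steps, and threshold below are all invented for illustration.

```python
# Toy near-duplicate detector: flag test pairs whose steps overlap heavily,
# as candidates for elimination.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def find_redundant(tests, threshold=0.8):
    """Return pairs of test names whose step overlap meets the threshold."""
    names = list(tests)
    return [
        (names[i], names[j])
        for i in range(len(names))
        for j in range(i + 1, len(names))
        if jaccard(tests[names[i]], tests[names[j]]) >= threshold
    ]

tests = {
    "test_checkout_visa":       ["open_cart", "add_item", "pay_visa", "assert_receipt"],
    "test_checkout_visa_copy":  ["open_cart", "add_item", "pay_visa", "assert_receipt"],
    "test_checkout_mastercard": ["open_cart", "add_item", "pay_mc", "assert_receipt"],
}
print(find_redundant(tests))  # [('test_checkout_visa', 'test_checkout_visa_copy')]
```

The Mastercard variant overlaps at 3 of 5 distinct steps (0.6), so it survives; only the exact duplicate is flagged.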

Do autonomous testing tools require special skills?
The target audience for autonomous software testing products is technical testers, business testers and developers, generally speaking. While it’s never a bad idea to understand the basics of AI and ML, one does not have to be a data scientist to use the products, because the vendor is responsible for ensuring the ongoing accuracy of the algorithms and models used in its products. “In most cases, you’re not writing the algorithm, you’re just utilizing it. Being able to understand where it might go wrong and what the strengths or weaknesses of that style are can be useful. It’s not like you have to learn to write in Python,” said Gartner’s Murphy. Dev.co’s Nead said his QA testing leads and DevOps managers are the ones using autonomous testing tools, and that the use of the tools differs based on the role and the project in which the person is engaged. If you want to build your own autonomous testing capabilities, then data scientists and testers should work together. For example, Capgemini explained in a webinar with Forrester that it had developed an ML model for optimizing Dell server testing. Before Dell introduces a new server, it tests all the possible hardware and software configurations, which exceed one trillion. “They said the 1.3 trillion possible test cases would take a year to test, so they sat down with smart testers and built a machine learning model that looked at the most frequent business configurations used in the last 3, 4, 5 years,” said Forrester’s Lo Giudice. “They used that data and basically leveraging that data, they identified the test cases they had to test for maximum coverage with a machine learning model that tells you this is the minimum number of test cases [you need to run].” Instead of needing a year to run 1.3 trillion tests, they were able to run a subset of tests in 15 days.

Benefits
The Dell example and the use cases outlined above show that autonomous testing can save time and money. “Speed comes in two ways. One is how quickly can I create tests? The other is how quickly can I maintain those tests?” said Gartner’s Murphy. “One of the issues people run into when they build automation is that they get swamped with maintenance. I’ve created tons of tests and now how do I run them in the amount of time I have to run them?” For example, if a DevOps organization completes three builds per hour but testing a build takes an hour, the choices are to wait for the tests to run in sequence or run them in parallel. “One of the things in CI is don’t break the build. If you start one build, you shouldn’t start another build until you know you have a good build, so if the tests [for three builds] are running [in parallel] I’m breaking the way DevOps works. If we’ve got to wait, then people are laying around before they can test their changes. So if you can say based on the changes you need, you don’t need to run 10,000 tests, just run these 500, that means I can get through a build much faster,” said Murphy. Similarly, it may be that only 20 tests need to be created instead of 100. Creating fewer tests takes less time and a smaller number of tests takes less time to automate and execute. The savings also extend out to cloud resource usage and testing services.
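The Dell story boils down to picking the smallest set of configurations that still covers what matters. Capgemini’s actual solution used an ML model over historical configurations; as a deliberately simpler illustration of the same shape of problem, here is a greedy set-cover pass over hypothetical configuration “coverage” sets (the config names and component labels are made up).

```python
# Greedy set cover as a stand-in for "minimum test cases for maximum
# coverage": repeatedly pick the candidate that covers the most
# still-uncovered targets.

def greedy_subset(candidates, targets):
    """Pick tests greedily until every target is covered or no test helps."""
    remaining, chosen = set(targets), []
    while remaining:
        best = max(candidates, key=lambda t: len(candidates[t] & remaining), default=None)
        if best is None or not (candidates[best] & remaining):
            break  # some targets are uncoverable by any candidate
        chosen.append(best)
        remaining -= candidates[best]
    return chosen

configs = {
    "cfg_a": {"raid0", "nic_x", "bios_v2"},
    "cfg_b": {"raid5", "nic_x"},
    "cfg_c": {"raid5", "nic_y", "bios_v2"},
}
targets = {"raid0", "raid5", "nic_x", "nic_y", "bios_v2"}
print(greedy_subset(configs, targets))  # two configs cover all five components
```

Greedy cover is not optimal in general, but the payoff pattern is the same one the article describes: a small, well-chosen subset standing in for an intractable full matrix.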

“The more you can shift the use of AI to the left, the greater your benefits will be,” said Forrester’s Lo Giudice.

Limitations
The use of AI and ML in testing is relatively new, with a lot of progress being made in the last 12 to 18 months. However, there is always room for improvement, expansion and innovation. Perhaps the biggest limitation has to do with the tools themselves. While there’s a tendency to think of AI in general terms, there is no general AI one can apply to everything. Instead, the most successful applications of AI and ML are narrow, since artificial narrow intelligence (ANI) is the state of the art. So, no one tool will handle all types of tests on code regardless of how it was built. “It’s not just the fact that it’s web or not. It’s this tool works on these frameworks or it works for Node.js but it doesn’t work for the website you built in Java, so we’re focused on JavaScript or PHP or Python,” said Gartner’s Murphy. “Worksoft is focused on traditional legacy things, but the way the tool works, I couldn’t just drop it in and test a generic website.” Dev.co’s Nead considers a human in the loop a limitation. “Fixes still require an understanding of the underlying code, [because one needs to] react and make notes when errors appear. The biggest boons to testing are the speed improvements offered over existing systems. It may not be huge yet as much of the testing still requires restarting and review from a DevOps engineer, but taken in the aggregate, the savings do go up over time,” said Nead. Autonomous testing will continue to become more commonplace because it helps testers do a better job of testing faster and cheaper than they have done in the past. The best way to understand how the tools can help is to experiment with them to determine how they fit with existing processes and technologies. Over time, some teams may find themselves adopting autonomous testing solutions by default, because their favorite tools have simply evolved.




What’s standing in the way of achieving ‘true agility’
BY CHRISTINA CARDOZA

Digital transformation is now more important than ever. It’s a statement that has continued to be stressed and echoed throughout the industry for years, but with organizations now forced to deal with distributed and remote work, there is a new need to reexamine business models, according to Nick Muldoon, co-founder and co-CEO of Easy Agile. The 14th annual State of Agile report found that distributed teams are the new normal, with 81% of respondents citing that their team members are not all in the same location. Businesses need to be able to find new ways to support and encourage team collaboration, manage changing projects and provide visibility. “So how can we accelerate change when we know our processes are fundamentally broken?” Muldoon asked. His answer is Agile, because it promotes the flexibility and transparency businesses so desperately need now in a COVID and post-COVID world. The State of Agile report also found that respondents were using Agile to respond to current business challenges, manage distributed teams, improve productivity and deliver software faster. “Agile gives so many companies results. It removes ambiguity from the value being delivered. Where before we couldn’t be sure when a project or new product could be delivered, we can now know with more certainty that an MVP can be shipped and we can receive feedback from customers much earlier,” said Muldoon. Many organizations have already had Agile practices in place for years and have continued to build upon those practices, but Muldoon explained Agile is an ongoing practice, one that will never get to a phase of “post-Agile.” Businesses will always have to manage changing priorities, to improve, and to adapt — all aspects of adopting the Agile mindset.

The roadblocks on the path to true agility
Agile challenges are ever-changing depending on the company, team and current business needs. However, there are a few that continue to pose a problem. “Individuals and interactions over processes and tools” is one of the first values in the Agile Manifesto, yet people are still focused on processes over

people, according to Muldoon. Dan Rice, executive advisor for ValueOps/DevOps at Broadcom’s Rally Software, explained in order to make agility truly work, buy-in has to happen across the entire organization — and it’s easier done when it comes from the top down. “The key areas we see that still hinder the ability to deliver value to customers are generally related to leadership teams not embracing the Agile culture or thinking Agile is just something IT or software delivery teams do,” he said. Leadership will continue to be a major obstacle unless people in power are willing to relinquish control to the next generation of leaders who were brought up on lean and Agile practices, Muldoon said. “All too often there are people at the top of companies that are protecting the status quo, the entrenched interests, and themselves. We need to see the servant leaders elevated from people managers to department leads, business unit CEOs, and then the board,” Muldoon explained. “Quite often the CEOs and board members don’t come from a software background — they come from finance, legal, marketing more often these days, but not IT or software.


“Scarcity versus abundance mindset — anything can be done in software, yet for an accountant it is very clear-cut and regulated,” he continued. One rising trend to help the organization achieve true agility is to embrace value stream management. VSM is being adopted more and more as a way to connect people, process and technology as well as provide the ability to visualize, measure and deliver business value. According to Rice, Agile is a key enabler to making all the people and steps involved in the delivery of value a part of the product development process. A “characteristic of Agile done well is when value streams, all of the people and steps involved in the delivery of value to a customer, are part of the product development process. A key lean principle of Agile is connecting business stakeholders and developers so that they work together daily through the project. Value streams make this a core aspect of an organization’s culture of Agility,” Rice explained. Another way Rice sees organizations implementing Agile throughout the organization is to incorporate agility into HR functions, encourage autonomy, hire for Agile and culture fit, and provide interactive performance feedback. Muldoon agreed, adding: “Invest in people; it’s the greatest leading indicator of sustained success. Give them the opportunities to expand the breadth of their experience / role and try new things, cross-pollinating one group with the ideas from another, and making the whole company stronger.” Rice also sees the need to move from project-based operating models to product-based operating models with a customer-centric approach to make sure teams are building products customers actually want. “Agile puts the customer and the outcomes they care about most at the center of the products we build. By advocating for long-lived product teams, bringing work to teams instead of bringing teams to work, Agile enables organizations to give customers working software at regular intervals and to facilitate continuous feedback,” he said. Easy Agile’s Muldoon also sees a trend towards businesses being more customer-focused with the release of SAFe 5.0, for example, which “introduced language around Personas, story mapping sessions, and really bringing empathy for the customer into focus for teams.”
By focusing on the customer, organizations can react to change and respond at regular intervals, Rice added. Agile “encourages organizations to reflect regularly and become a learning organization so they can constantly improve.” Muldoon also stressed the importance of playing the long game. While it may be tempting to think short-term to address current needs, a transformation can’t be fast-tracked. “True agility comes after a decade of sustained commitment to the Agile principles,” he said. Thinking needs to go from annual financial goals to long-term, sustainable growth. But organizations also need to be able to quickly pivot based on market changes and feedback from innovators and customers, Rice explained. “Organizations maintaining traditional processes such as identifying investments multiple years in advance and locking in feature roadmaps have worked to their detriment to reaching true agility.” “When new strategies are identified or experiments fail, organizations must be able to quickly pivot on their strategy, investments and roadmaps so they do not continue down the wrong path,” Rice said. Digital transformations will “continue forever as the company changes and adapts to an evolving market. Be prepared to stick it out,” Muldoon added.

Buyers Guide



How these companies are solving the pain points of Agile
BY CHRISTINA CARDOZA

Agile tools are only meant to support development teams, not drive the process. Instead, according to Nick Muldoon, co-founder and co-CEO of Easy Agile, development teams should have an evolving set of practices in place that enables them to continuously deliver value to customers. “That means they may start with a whiteboard, and in the times of COVID graduate to a digital story mapping solution like Easy Agile User Story Maps. Then in later years, once they’ve mastered that level of Agile, they may shift to Easy Agile Programs, which supports a scaled agile transformation,” he explained. When deciding to put a tool into the mix, that tool should be able to easily and natively integrate with the platform developer teams are comfortable with and already using. According to Dan Rice, executive advisor for ValueOps/DevOps at Broadcom’s Rally Software, to really be effective an Agile tool needs to offer capabilities needed by teams, teams of teams, programs, portfolios, governance and leadership. So it has to provide a way to visualize work, drive data decisions and minimize the work. “Keeping engineers doing what they love, in the tools they love to do it,” he said. “Tools that require custom views and plug-ins, in addition to many clicks to accomplish a task, may appear to save you money up front but you will spend significantly more in the end in lost productivity.” Tools can also support teams by enabling them to visualize plans and see whether or not they have unrealistic expectations, and track progress against release

plans, changes and community risks. “Your tool should be flexible enough to allow your teams to work the way they want without constraining visibility of progress across the organization,” Rice said. The Broadcom Rally solution is an enterprise platform built for scaling agile development practices, while Easy Agile provides Agile applications for building user story maps, personas, roadmaps and programs. Each solution is designed to solve different pain points of Agile. Muldoon explained: “Easy Agile is customer focus made easy. That’s our mantra. Collaboration is so important with distributed teams, and Easy Agile apps for Jira solve the problem of ‘taking it off the wall.’ That is, taking the Agile post-its, string and paper off the physical wall and digitizing it in Jira. There is a lot of pain with Agile in that Agile ceremonies are isolated from the delivery of value. However, with Easy Agile Apps for Jira, Story Maps, PI Planning, Roadmapping and Personas can all happen in Jira, where the team is already tracking their work. We aim to make common agile practices remarkably simple. So simple that anyone on the team can get involved. We’re not creating one monster app to cover many different customers and problem spaces; we are creating apps that focus on one area at a time, and that fit together nicely like a jigsaw puzzle to provide a solution that is greater than the sum of its parts.”

According to Broadcom’s Rice, Rally is used as organizations look to scale Agile beyond the development team. Rally helps reduce the complexity of planning and execution across teams from the entire organization. Some examples include:
• Rally enables organizations to easily visualize and track all of the work being delivered without the need for custom boards and views.
• With Rally’s Organizational Hierarchy, it is easy to quickly tie strategic initiatives to program and agile team planning and execution, allowing for a clear understanding of priorities and progress.
• Rally’s Capacity Planning functionality allows an organization to easily model team and delivery group capacity against planned work so that organizations can create achievable plans and feel confident in commitments, building trust while identifying and understanding risks and issues that could have an impact on those commitments.
• Rally’s robust Dependency and Risk Management capabilities help you quickly visualize and understand cross-team, program and portfolio dependencies, including misalignments and potential impacts to your plans, without the need for complex plug-ins, queries or views.
• Rally’s Release and PI Tracking functionality allows you to visualize progress against release and PI plans, so that you understand when scope change may put plans at risk. This helps facilitate the right conversations, enabling you to steer work effectively across all teams and delivery groups involved in a release or PI.





A guide to Agile development tools

FEATURED PROVIDERS
• Broadcom: Rally Software is an enterprise-class agile management platform purpose built for scaling agile across the enterprise. It enables businesses to make faster and smarter decisions by aligning work with business objectives. Rally provides a central hub for teams across the organization to collaborate, plan, prioritize and track work, and continuously improve. Rally also enables teams to accurately measure their results with roll-ups of progress, dependencies, alignment and plan health statistics. To learn more visit https://www.broadcom.com/rally.
• Easy Agile: Easy Agile is an Australian software technology business focused on helping teams be more effective. We create agile apps for Atlassian’s Jira platform; Story Maps, Roadmaps, Programs and Personas. Easy Agile is built on strong values of work/life/community balance. We pledge 1% of our time and profit back into the community.

• Axosoft’s agile project management software is based on the Scrum methodology and designed to help teams plan, track and release software. It features a release planner to view capacities of the sprint, team and team members; interactive Kanban boards to customize and edit item cards and work logs; and custom dashboards with an overview of velocity and projected ship date.
• Azure DevOps is Microsoft’s suite of DevOps tools that agile teams can utilize to plan, track and discuss work as well as use Scrum-ready and Kanban-capable boards. Other features include Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping.
• Blueprint Storyteller helps make sense of complex agile software development initiatives by aligning business strategy and compliance with IT execution. The solution tackles app development, regulatory compliance, requirements management and software modernization.
• Digital.ai is a leading platform provider for Value Stream Management, Agile planning, DevOps and source code management. Its offerings provide global enterprise and government industry leaders a cohesive solution that enables them to ideate, create and orchestrate the flow of value through continuous delivery pipelines with measurable business outcomes.
• GitLab is a single application built for all stages of the DevOps lifecycle. Agile teams can use GitLab to plan and manage projects with features like issue tracking and boards, task lists, epics, roadmaps, labels, and burndown charts. GitLab supports SAFe, Spotify, Disciplined Agile Delivery and more.
• Inflectra’s enterprise agile program management solution SpiraPlan features one place to keep track of all activities, trackable planning boards and dashboards, a test management platform, and customizable workflows. In addition, SpiraTeam lets teams manage the entire project life cycle from requirements and tests to tasks and code.
• Jama Software centralizes upstream planning and requirements management in the software development process with its solution, Jama Connect. Product planning and engineering teams can collaborate quickly while building out traceable requirements and test cases to ensure development stays aligned to customer needs and compliance throughout the process.
• Micro Focus ALM Octane is an enterprise DevOps Agile management solution designed to ensure high-quality app delivery. It includes Agile tools for team collaboration, the ability to scale to enterprise Agile tools, and DevOps management.
• Perforce Hansoft is an enterprise Agile project planning and management tool. It enables organizations to plan, track, and manage progress of large-scale, multi-element development projects while using their preferred delivery methods (Agile, Waterfall, and hybrids). Hansoft is perfect for managing dispersed teams, expanding scale, changing goals, and tight schedules.
• Pivotal Tracker is an Agile project management solution that offers stories to define projects, automatic planning to keep teams moving, workspaces to organize projects, and analytics to see how teams and projects are functioning.
• Planview’s Enterprise Agile Planning solution enables organizations to adopt and embrace Lean-Agile practices, scale Agile beyond teams, practice Agile Program Management, and better connect strategy to Agile team delivery while continuously improving the flow of work and helping them work smarter and deliver faster.
• The Scaled Agile Framework (SAFe) is the leading framework for scaling Agile across the enterprise. It is designed to help businesses deliver value on a regular and predictable schedule. It includes a knowledge base of proven principles and practices for supporting enterprise agility.
• Targetprocess: To connect portfolio, products and teams, Targetprocess offers a visual platform to help you adopt and scale Agile across your enterprise. Use SAFe, LeSS or implement your own framework to achieve business agility and see the value flow through the entire organization.
• Tasktop Integration Hub connects the network of best-of-breed tools used to plan, build, and deliver software at an enterprise level. As the backbone for the most impactful Agile and DevOps transformations, Tasktop enables organizations to define their software delivery value stream, and enables end-to-end visibility, traceability and governance over the whole process.
• TechExcel recently announced the release of Agile Studio, a new Agile development platform that provides out-of-the-box support for all major Agile methods and consists of three major Agile development tools: DevTrack, DevPlan and KnowledgeWise.



Guest View BY KATHRYN ERICKSON

Open source democratizes data

Kathryn Erickson works in Open Source and Ecosystem Strategy at DataStax.

Data is more available than ever before, and copious amounts of new data are collected every day. But if there’s one major impediment to helping organizations unlock the full value of their data, it’s the fact that data hasn’t truly been democratized. In large part, data is simply not accessible to far too many professionals who need it. In the years ahead, that needs to change through the confluence of cloud-native, open source, and automation technologies, along with a new collaborative approach that extends across organizations. Recently, a panel of experts discussed this topic: Eric Brewer, Google fellow and VP of infrastructure; Melody Meckfessel, CEO of Observable and former VP of engineering at Google; and Sam Ramji, chief of strategy at DataStax. It was a fascinating discussion about the coming decade of data and what it means for data-driven software development.


The power of data visualization

When it comes to software, Meckfessel is most interested in the human element of the equation. Currently, she is focused on removing the toil from DevOps workflows and helping developers, and even non-developers, become more productive. She believes that data visualization will play a big role in the upcoming decade as devs move beyond writing static text-based code to robust visual displays that can be leveraged by regular business users. “I see a lot of power in visualization, bringing visualization to data, bringing visualization to software development, and empowering anyone who wants to create and interact with data in powerful ways,” Meckfessel said. “Visualization taps into our human visual system, helps us think more effectively, and exposes underlying data as we’re building cloud-native apps.”

Open source is just getting started

Brewer, who, among his other notable achievements, was involved in open-sourcing Kubernetes — “We wanted it to be a platform for everyone and something everyone could contribute to,” he said — believes open source will continue to take on a larger-than-life role as we move further into the future, as it enables companies to move faster than ever before. He’s now thinking of how to extend this to automation frameworks and how open source is used in supply chains. “Most of your code isn’t written by your team or even your company,” Brewer said.

Meckfessel also believes that the future of software is incredibly collaborative. “We want the exploration of data for devs and creators to be as real-time and interactive as possible,” she said. Part of that requires “bringing together open-source communities so you don’t have to start from a blank slate and can immediately get up and running.” Meckfessel envisions a future where everyone can collaborate and share what they know. “We don’t write software alone.”

From my perspective, this is the power of open source, and I imagine a future where data pipelines and visualization frameworks are fully open source, and the value of the platform is derived from the data you bring and the wisdom achieved. In this world, to fork a data pipeline means to adapt it to your data. You find the example and then you start to tinker, swap out the data, change the visualization, and get to insights quickly. You’re going to get to the outcome much faster, and can contribute your fork back to the repository, because it was the data that had the value, more so than the pipeline. Brewer sees things the same way. “When you say the word fork, it implies you’re making a copy. You have your own copy, which means you get velocity, autonomy, and independence,” he said.

If cloud-native is all about speed and agility, you pretty much need to leverage the power of open source if you want to build the best data-driven apps you can. It would be hard to imagine being cloud-native without being open source. This represents a big change from the classic data management concept of one monolithic database (or data store) containing all the data with tight access controls. In the future, we may see more databases (or data stores) containing less data.
I believe all of this will help developers increase velocity in terms of time and quality of their output, and that, of course, is a very good thing.



Analyst View BY MICHAEL AZOFF

The rise of AI chips

Artificial intelligence (AI) is a broad field that spans academic research with ambitions to create an artificial human brain (general AI) through to practical applications of deep learning (DL), a branch of machine learning (ML, itself the part of AI concerned with learning systems built on data rather than prepared rules). DL has many real-world applications, but before we delve into that world, let me level-set by saying I do not believe we have reached narrow AI, and we are an exceedingly long way from general AI. There is no standard definition of narrow AI, but at a minimum it is an AI system that can learn from a few examples (just as humans can) and not from iterating data through the model hundreds of thousands or millions of times. To finish my level-setting: this pre-narrow-AI era of ours I label machine intelligence, and I continue to refer to the whole space as AI.

DL has successes and disappointments, the latter mostly driven by hype, but there is also a better understanding of DL’s limitations. For example, there is some gloom right now around the prospects for autonomous driving vehicles (AV), despite their existence in limited domains as robot taxis and buses. On the success side, DL is used in many practical scenarios today, such as online recommender systems, wake-word technology, voice recognition, security systems, production-line fault detection, and image recognition — from assisting radiologists in diagnostic imaging to remote medical services — as well as a host of prospective technologies for smart cities that will flow with 5G rollout. In a recent study on AI chips for Kisaco Research, where we closely examined 16 chip providers, we also mapped the AI chip landscape and found 80 startups globally with over $10.5 billion of investor funding going into the space, as well as some 34 established players.
Among them are the 'heavy' hardware players such as the racks of Nvidia GPUs, Xilinx FPGAs, and Google TPUs available on the cloud, as well as where high performance computing (HPC) overlaps with AI. Training DL systems tends to be done here, but not exclusively; there are use cases of training at the edge.

Then there are systems where AI inferencing is the main activity, and there are many AI chips exclusively designed for inferencing. In practice this means these chips run in integer precision, which has been shown to provide good enough accuracy for the application but with a reduction in latency and power consumption, critical for small edge devices and AV. The need for AI hardware accelerators has grown with the adoption of DL applications in real-time systems where there is need to accelerate DL computation to achieve low latency (less than 20ms) and ultra-low latency (1-10ms). DL applications in the small edge especially must meet a number of constraints: low latency and low power consumption, within the cost constraint of the small device. From a commercial viewpoint, the small edge is about selling millions of products and the cost of the AI chip component may be as low as $1, whereas a high-end GPU AI accelerator ‘box’ for the data center may have a price tag of $200k.
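The integer-precision trade-off described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of post-training int8 quantization with a single symmetric per-tensor scale; it is not the scheme of any particular chip, just the arithmetic idea behind running inference in integer precision:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 using one per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values, e.g. for accuracy checks."""
    return q.astype(np.float32) * scale

# Int8 storage is 4x smaller than float32, and integer multiply-accumulates
# are cheaper in silicon; the cost is a bounded rounding error per weight.
w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
error = float(np.max(np.abs(dequantize(q, scale) - w)))
assert error <= scale / 2 + 1e-6  # rounding error is at most half a step
```

The half-step error bound is why integer precision is "good enough" for many trained networks: the per-weight noise is small relative to the weight distribution, while latency and power drop substantially.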

Michael Azoff is chief analyst at Kisaco Research.


The opportunity for software developers

The availability of AI chips to support ML and DL applications has opened up an opportunity for software developers. Whereas some decades ago the programming of GPUs and FPGAs was the preserve of embedded engineers, the growth of DL has led chip designers to realize they need a full software stack to enable traditional software developers, data scientists and ML engineers to build applications with the programming languages they are familiar with, such as Python. Many of the AI chips on the market now support popular ML frameworks/libraries such as Keras, PyTorch, and TensorFlow. It is possible for developers to test ideas out on low-cost AI chips; to name some of the familiar brands and their offerings: Intel Neural Compute Stick 2, Google Coral board, Nvidia Jetson Nano and Nvidia CUDA GPU cards. We are inviting AI chip users, supply chain players and AI-related enterprise decision-makers to join our Kisaco Analysis Network; please contact the author.


Industry Watch BY DAVID RUBINSTEIN

The rise of infrastructure open source

David Rubinstein is editor-in-chief of SD Times.

Open-source software is showing up in ever-growing percentages of applications, and the amount of open source within those applications is increasing just as fast. Developers are drawn to open source for a number of reasons: the fact that they can access the code within the components; the fact that there is a community of people creating, maintaining and securing that code; and the fact that they don’t have to create some functionality that already has been created.

So why has the uptake of open source on the infrastructure side lagged? I spoke with Mark Herring, CMO at Gravitational, a startup in the cloud infrastructure space that offers tooling that controls user access and secures application deployments for developers. Herring put it this way: “I think we’ve become such an instant-gratification world. I don’t want to go to find a machine; I don’t even know who to speak to in IT who can give me a machine. I need access. Let me just go and get a thing and use it.”

Developers, he said, believe developers. They are dubious of sales pitches and marketing claims. “The way they look at it is, ‘Let me go see what’s happening on GitHub… Let me see what the cool kids are using.’ ”

A big reason for the lag in adoption on the infrastructure side is that the need didn’t exist in the time of monolithic applications and in-house data centers. Running monolithic applications on in-house infrastructure is in many ways easier, because the company owns everything. Elastic-scale software that’s accessible creates far more complexity. “From an industry trend, it’s a bit like the legacy problem,” Herring said. “As [companies] look at monolithic software and they go, ‘What are we going to do now as we rearchitect it,’ there is this new kid on the block called Kubernetes, and everybody wants to move everything to Kubernetes, and basically have infinite scaling. The trouble with that is one person’s Kubernetes is not another person’s Kubernetes.
If you’re going to deploy something on GCP, or AWS, or Azure, well, it reminds me of the good old heyday at Sun — I come from Sun in the Java days — and it was ‘write once, run anywhere… sometimes.’ It’s the same problem.”

Even before Kubernetes, the Hadoop data systems brought open source to infrastructure, and Docker, Puppet and Rancher followed suit, enabling developers to spin up instances of environments quickly and easily. People, Herring said, stopped looking at open-source tools for infrastructure as toys. “It’s a classic crossing the chasm,” he added. “It used to be a couple of Silicon Valley companies using it, and we’re starting to see the chasm has been crossed.”

One downside Herring mentioned is that not all open source is created equal, and there is bad open source in the world. So sorting through the chaff has been an impediment to uptake. “There wasn’t one major shift to open source; it was more just the flavor of going, ‘Ahh, this is where it’s at.’ The trouble is, you have to go and then dig through the morass of bad open source that’s out there.”

IT organizations seeking to bring open source into their infrastructure can use methods tried and found to be true by developers: look at the number of stars the project has, the number of people in the community, and the word-of-mouth ‘buzz’ about the project. Herring said, “You don’t want to go and put all your infrastructure there and find there’s no one behind it.”

As for Gravitational, it has Kleiner Perkins money behind it, which Herring said shows “you’re not fly-by-night.” The company now offers two products: Teleport, which is used for multicloud privileged access management; and Gravity, a tool to package cloud environments — including dependencies — to be delivered to on-premises servers or another cloud instance.

The biggest hacks occurring these days come from people getting into systems and stealing someone’s credentials, either to siphon data or to plant ransomware. Because of the distributed nature of today’s application architectures, security is higher on the list of concerns than it was in the monolithic world.
“What it means for a lot of developers out there,” he explained, “is, ‘Oh my God, I’ve got to do this and it’s a tax on the system. How do I find some people who have done that, because I don’t want to write security code.’ ”




