DECEMBER 2020 • VOL. 2, ISSUE 42 • $9.95 • www.sdtimes.com
www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlwekowicz@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz, George Tillmann
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx
CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com
LIST SERVICES Jessica Carroll jcarroll@d2emerge.com
REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com
ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com
PRESIDENT & CEO David Lyman
D2 EMERGE LLC www.d2emerge.com
CHIEF OPERATING OFFICER David Rubinstein
Contents
VOLUME 2, ISSUE 42 • DECEMBER 2020

NEWS
4   News Watch
14  Harness announces CI, DevOps modules for software delivery
28  testRigor helps to convert manual testers to QA automation engineers

FEATURES
6   GitOps: It’s the cloud-native way
8   There’s a time and a place to take on risk
10  The resurgence of enterprise architecture
17  2020: The year everything changed
23  Test Automation: From the mundane to the creative domain

COLUMNS
30  GUEST VIEW by Andrew Lau: Velocity? Who gives a flying Fortran?
31  ANALYST VIEW by Jason English: The great digital scattering
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
NEWS WATCH

Report finds open-source talent shortage
In the latest Linux Foundation and edX 2020 Open Source Jobs Report, while 81% of respondents say hiring open-source talent is a top priority, 93% of hiring managers report trouble finding the skills. The report also revealed the top desired skill sets include Linux, DevOps, cloud and security. According to the foundation, these skills align with the technologies of highest importance to open-source professionals, which are cloud and containers, AI and machine learning, security, Linux, networking, and edge computing. In order to address the open-source skills gap, more and more businesses are leveraging online open-source training for employees. Eighty percent of employers reported they now provide online training, which is up from 66% two years ago.
HCL Software unveils Volt MX
HCL Software is expanding on its low-code capabilities and partnerships with the release of HCL Volt MX. The new solution is a low-code application development platform designed for professional developers building multi-experience applications. According to the company, developers are often bogged down learning the specific skills for specific platforms such as Android and iOS. Additionally, there is a finite number of developers who are actually experienced in one particular platform. Volt MX is designed so developers can build native mobile apps, progressive web applications, and wearable solutions once and deploy them anywhere. Additionally, Andrew Manby, associate vice president of product management for HCL Software’s digital solutions, explained that current citizen developer offerings don’t really provide the deep capabilities necessary to accelerate productivity for developers. “For professional developers, you have to have the horsepower and you have to have the capabilities and that’s what ultimately led to Volt MX,” he said.

FSF asks for feedback on high priority projects
The Free Software Foundation (FSF) has announced a call for feedback from the community. This feedback will be used to create an update to the FSF’s High Priority Free Software Projects (HPP) list. According to the foundation, the HPP initiative draws attention to specific areas of development and projects of strategic importance, with the ultimate goal of freedom for all computer users. The list guides volunteers, developers, funders, and companies to projects that will best utilize their skills. “The HPP list has enormous potential, and it’s important to get feedback from the community so it reflects the current state of free software,” said Zoë Kooyman, program manager at the FSF. “The HPP list provides focus to projects and developers, as well as to supporters looking to fund free software projects. Free software is the only answer to respecting users in the increasingly digital environment we all live and work in, and the HPP list can help guide software’s continued path to freedom.”
Microsoft revamps VS extension model
Microsoft has invested heavily in developer solutions, adding enhancements to Visual Studio like GitHub Codespaces, Git integrations, and IntelliCode Team Completions. Now, the company is planning to create a new extensibility model for Visual Studio extensions. According to Microsoft, this new model will make extensions more reliable, easier to write, and supported locally and in the cloud. The company explained that one of the problems with extensions today is that in-proc extensions have minimal restrictions on how they can influence the IDE, which sometimes leads to them corrupting Visual Studio if an extension crashes. One of the biggest changes Microsoft will make is to move extensions out-of-proc, ensuring increased isolation between internal and external APIs and leading to fewer crashes.
Next.js 10 packed with front-end features
Vercel, the Next.js company, announced Next.js 10 at its user conference with new ways for front-end developers to create rich web experiences. “Performance, or lack of it, is the most critical factor in the success or failure of the modern web site,” said Guillermo Rauch, CEO of Vercel. “Next.js 10 addresses the most critical pain points developers face when optimizing their workflows and websites to deliver high-quality, highly performant content at scale.” Top features in this release include automatic image optimization, internationalized routing and automatic language detection, quick-start e-commerce capabilities and continuous Web Vitals analytics.
Catchpoint takes on employee experience
Catchpoint released a new solution that brings its end-user monitoring inside organizations so they can spot and remediate problems with employee devices, applications and networks. The new Employee Experience Monitoring solution brings together RUM and endpoint analytics with global user sentiment data and synthetic monitoring to help organizations with growing numbers of remote workers troubleshoot problems. It does this by placing an agent on the employee’s device to monitor the device itself: CPU, memory and other device metrics.
Kobiton acquires Mobile Labs
Mobile experience platform provider Kobiton announced that it acquired one of its competitors, Mobile Labs Inc. The acquisition will allow developers and QA teams to deliver apps faster by leveraging artificial intelligence across real devices spanning cloud and on-premises deployments, according to Kobiton. “The combined Mobile Labs and Kobiton platform will deliver on that vision on-premises, in the cloud or as a hybrid solution for the most demanding organizations in the world,” said Dan McFall, CEO of Mobile Labs.
CodeSentry to detect security blind spots
GrammaTech has announced a new software composition analysis (SCA) product, CodeSentry, that is designed to detect vulnerabilities in application components, including binaries, and create a detailed software bill of materials. According to the company, it identifies blind spots and allows security professionals to measure and manage risk quickly throughout the SDLC. With the bill of materials, CodeSentry can detect components and the vulnerabilities associated with them, including network components, GUI components, or authentication layers.

LF AI merges with ODPi
The LF AI Foundation and ODPi merged to support a growing portfolio of technologies and to drive open-source collaboration across AI and data. The LF AI Foundation supports open-source innovation in artificial intelligence, machine learning and deep learning, while ODPi focuses on big data solutions. Together, the organizations will make up the LF AI & Data Foundation, and will enable additional collaboration and integration of AI/ML/DL and data. The effort will build and support an open community and a growing ecosystem of open-source AI, data and analytics projects.

Npm 7.0’s long-awaited features
Npm v7.0.0 introduces a number of highly requested features, such as workspaces, the ability to automatically install peer dependencies, package-lock v2 and support for yarn.lock. Workspaces are a set of features that offer support for managing multiple packages within a single top-level package. This release also aims to make it easy to automatically install peer dependencies, whereas before, developers would need to manually manage and install such dependencies. There is a new peer dependency algorithm that ensures a validly matching peer dependency is found at or above the peer-dependent’s location in the node_modules tree, the npm team explained.

Opsera’s new no-code approach
Continuous orchestration platform provider Opsera has created a new approach for software delivery that combines CI/CD tools with no-code automation. According to the company, by orchestrating tools, pipelines, and insights through a single platform, customers will see benefits such as faster time to deployment, resource optimization, and a cross-functional perspective with KPIs that better correlate technical performance with business outcomes. Key features of the new platform include toolchain automation, declarative pipelines built using drag-and-drop workflows, and unified insights and logs.
IBM releases Code Risk Analyzer
The Code Risk Analyzer is a focused effort to bring security and compliance analytics to DevSecOps. It can be configured to run at the beginning of a developer’s code pipeline, where it reviews and analyzes Git repositories for known issues with any open-source code that needs to be managed. It helps provision toolchains, automates builds and tests, and enables users to control quality with analytics, according to the company. “The trend toward decentralized cloud-native developer teams creating, modifying, and redeploying their work on a daily, or more frequent, basis has sparked a transformation in security and compliance processes for business applications,” Shripad Nadgowda, senior software engineer at IBM, wrote in a blog post. “As a result, it has become critical to equip developers with a new set of cloud-native capabilities and tools, such as Code Risk Analyzer, that can be easily embedded into existing development workflows.”
People on the move

• ShiftLeft has expanded its board of directors to include cybersecurity influencers Stuart McClure and Adam Fletcher. McClure is the former CEO and founder of Cylance, while Fletcher is the CISO of the global investment firm Blackstone. The new board of directors will help guide ShiftLeft’s next stage of growth in the application security market.

• Cloud engineering company Pulumi announced Jay Wampold as its new chief marketing officer, Lindsay Marolich as its senior director of demand generation, Kevin Kotecki as its vice president of sales, and Lee-Ming Zen as its vice president of engineering. The expansion of its executive team will accelerate the company’s ongoing research and development and go-to-market strategies.

• Scaled Agile, the provider of SAFe, has inducted six new fellows into its SAFe Fellow Program. The program recognizes individuals who exhibit the highest level of mastery and thought leadership in the practice of SAFe. The new fellows are: Debbie Brey, Michael Casey, Cheryl Crupi, Andrew Sales, Carl Starendal, and Robin Yeman.

• Intelligent edge company Wind River is also expanding its leadership team with AI, 5G, and digital transformation industry experts. The new executive appointments include Cyra Richardson as its chief product officer, Michael Gale as its chief marketing officer, and Paul Miller as its chief technology officer.
GitOps: It’s the cloud-native way
Emerging practice enables developers to take on more ITOps responsibilities
BY CHRISTINA CARDOZA

Most of the approaches introduced in the software development space are designed to make life easier for developers, but what about operations? DevOps was designed to make development and operation systems work better together, and as more teams successfully adopt DevOps, there is an opportunity to tackle and improve the operations process. According to Steve George, COO of the Kubernetes management platform provider Weaveworks, when you are running applications continuously, operations become a big chunk of the life cycle and of the cost to run those applications. “Ultimately you want to improve the development of those applications, that’s why the operations piece is so important,” he said. GitOps is a new rising force in the industry that is enabling developers to take on more IT operations responsibilities.

What is GitOps?
GitOps was first coined in 2017 by Weaveworks co-founder and CEO Alexis Richardson. Weaveworks’ definition of GitOps is “a way to do Kubernetes cluster management and application delivery. It works by using Git as a single source of truth for declarative infrastructure and applications. With GitOps, the use of software agents can alert on any divergence between Git with what’s running in a cluster, and if there’s a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests to accelerate and simplify both application deployments and operations tasks to Kubernetes.” Sven Efftinge, CEO of Gitpod, a development environment solution provider, simplifies the definition as a way to make developers more aware of the operations part, or a means of deploying software with a process around it.

Weaveworks’ Davis explained the company first realized GitOps was going to be big when they were working on their SaaS offering, which involves Kubernetes, networking and observability, and someone noticed that if they “pushed a button,” the entire system would “blow away.” So, they pushed the button, and the system “blew away,” but because they had GitOps practices like declarative configuration in place, they were able to get the system back up and running in little time. “That was when we had the realization that there was something to this new cloud-native operational pattern,” she said. According to Davis, GitOps is best suited for cloud native and scenarios where you have applications with cloud-native architectural patterns like circuit breakers and service discovery.
GitOps and Kubernetes
GitOps is most commonly associated with Kubernetes because applications that are taking advantage of cloud-native patterns work best in a Kubernetes setting, Davis explained. Kubernetes is not mandatory for GitOps, but it does provide key elements for implementing it, such as being able to declare a state, add reconcilers, and an extensible API. “Kubernetes is going to be and is the major platform used by enterprises going forward. If you started building an application or operating a large application today, your question has to be why not Kubernetes?” said Weaveworks’ George.

Sheng Liang, co-founder and CEO of Rancher Labs, the enterprise Kubernetes management company, explained that Kubernetes’ ability to declare a state is key because it eliminates risk. “With Kubernetes, everything becomes declarative. You say this is what I want the state of my infrastructure or cloud application to be, and Kubernetes makes it happen. It monitors it on an ongoing basis. If things go bad or something breaks, it does its best to get it back to that desired state, and if it can’t, it alerts you,” he said.
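Liang’s description of declarative, self-correcting infrastructure can be sketched in a few lines. The following is a toy illustration of the reconciliation idea only, not Kubernetes code; every name and value in it is invented:

```python
# Toy sketch of the declarative model: the user declares a desired state,
# and a controller loop repeatedly compares it with the actual state,
# converging one field at a time and reporting if it cannot converge.

desired = {"replicas": 3, "image": "shop:v2"}

def reconcile(actual, desired, max_steps=10):
    """Drive `actual` toward `desired`; return (state, converged?)."""
    for _ in range(max_steps):
        diff = {k: v for k, v in desired.items() if actual.get(k) != v}
        if not diff:
            return actual, True          # states match: nothing to do
        key, value = next(iter(diff.items()))
        actual[key] = value              # fix one divergence per pass
    return actual, False                 # could not converge: alert

state, ok = reconcile({"replicas": 1, "image": "shop:v1"}, desired)
print(state, ok)  # -> {'replicas': 3, 'image': 'shop:v2'} True
```

The point of the sketch is that the operator never issues imperative commands; they only edit the desired state, and the loop does whatever is needed to get there.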
GitOps is a natural extension of this, since Git enables developers to store that desired state, and “because the desired state description is a document, you just store it in Git and every time something changes, you just push out the new version. If it goes bad, because Git stores the previous version, you can just go back and then Kubernetes will do whatever it takes to get you back up,” Liang went on to explain. “That is why the declarative way of controlling your infrastructure and deploying applications and using GitOps to manage it is becoming very popular.”

According to Priyanka Sharma, general manager at the Cloud Native Computing Foundation (CNCF), GitOps is to Kubernetes as Git was to Linux. “Kubernetes really unleashes the power of cloud computing, containers, and just building software fast and resiliently, but it’s not going to be super useful if developers can’t use it quickly. GitOps is basically utilizing the Git workflows that every developer is used to,” she said. “Not everyone who is touching Kubernetes is using GitOps, but I know everyone wants to because it would make their life easier.”
GitOps and DevOps
GitOps is also being called the “next big thing for DevOps” because of the strong connection between the two. According to Weaveworks’ Davis, while DevOps doesn’t have a concrete set of practices, GitOps does provide a concrete way of doing DevOps. For instance, Davis explained, the top DORA metrics include frequent deployments, shorter lead time, mean time to recover, and change failure rate. “There is a direct correlation between those metrics and GitOps patterns,” she said.

GitOps enables self-service for development teams because at the platform layer, you can have developers request the resources they need, provide them in a way that is configured, secure and compliant, and have the ability to roll something back if something goes wrong. “Folks generally want to have a more reliable way to run and deploy applications. That has always been the driving force behind the whole DevOps and GitOps movement,” said Rancher’s Liang.

Because developers are familiar with Git, it also helps them take on a larger operations role. “How many frictions do you create in a developer workflow when you are asking them to do more now than ever?” said CNCF’s Sharma. “If you want a developer to take on more operational responsibilities, it’s going to be better if they can do it in a workflow they are used to using. That’s why you need GitOps. It becomes an easy, universal language for developers to understand and thereby start being comfortable running and orchestrating their own containers, or turning on and off cloud computing resources.”
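To make the DORA metrics Davis mentions concrete, here is a toy computation over a made-up deployment log; the record format, field names, and dates are all invented for illustration and are not from any real tool:

```python
# Toy computation of two DORA-style metrics from a small deployment log:
# change failure rate (failed deploys / total deploys) and mean lead time
# (days from commit to deployment).
from datetime import date

deploys = [
    {"day": date(2020, 12, 1), "commit_day": date(2020, 11, 30), "failed": False},
    {"day": date(2020, 12, 2), "commit_day": date(2020, 12, 1),  "failed": True},
    {"day": date(2020, 12, 4), "commit_day": date(2020, 12, 2),  "failed": False},
]

change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
lead_times = [(d["day"] - d["commit_day"]).days for d in deploys]
mean_lead_time_days = sum(lead_times) / len(lead_times)

print(round(change_failure_rate, 2), round(mean_lead_time_days, 2))  # -> 0.33 1.33
```

In a GitOps setup, the raw material for these metrics largely falls out of the Git history itself, since every deployment corresponds to a recorded change to the desired state.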
What GitOps is not
The definition of what is and what isn’t GitOps has been one of the more controversial issues around GitOps, CNCF’s Sharma explained. People believe if it doesn’t do x, y or z specifically, then it isn’t GitOps, but that’s just one flavor of GitOps, according to Sharma. “Anyone who is utilizing the Git workflow to do operations in any way, in my opinion, is GitOps,” she said. “My philosophy is if it is a Git-based workflow making a developer operationalize his or her own code successfully, that is a GitOps workflow.”

Sharma believes a lot of people don’t even know they are using GitOps because they are so used to using Git to check in code. “If that is the case, they might not know how much further it can enable them in their Kubernetes journey,” she said.

Gitpod’s Efftinge echoed similar sentiments, saying that DevOps pipelines are typically utilizing GitOps because they are using Git as a central canonical source of truth for everything that is automated. “Basically, you put everything into Git and then from there you drive automation, CI/CD, deployments, and new development,” he said.

However, Weaveworks’ Davis said it is so much more than that. While many emphasize the Git in GitOps, it’s really about the Ops part. “Just like the early days of microservices, where we saw businesses trying to take legacy apps and patterns and stick them into microservices and deploy them… people are starting to take old operational patterns, store them in Git and expect magic,” she explained. “We are pretty adamant that isn’t GitOps. Just because you put something in Git doesn’t make it GitOps. It isn’t actually the central part of GitOps. Ops is the central part of GitOps.”

Git is important because it has certain semantics, like an immutable version history, but it needs to be connected to software agents. Weaveworks’ four principles of GitOps are:

1. The entire system is described declaratively
2. The canonical desired system state is versioned in Git
3. Approved changes can be automatically applied to the system
4. Software agents are used to ensure correctness and alert on divergence

“The key cloud-native pattern, which is reconciliation of the fact that you are never done, that things are always correcting themselves, and you always have to respond to change, is something that is popularized from Kubernetes,” she said. “The biggest misconception is that people don’t think about the reconciliation loops. And it goes back to cloud native, which is all about constant change, so you are constantly reconciling.”

“Kubernetes and cloud native are changing the way we are going to be developing and operating applications, and GitOps is speaking to those management practices. In many ways, the future for GitOps is the same as for cloud native. It is helping teams take advantage of it. We are right at the beginning of that journey,” Weaveworks’ George added.
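The versioned-desired-state principles above can also be sketched in miniature. The following is a toy stand-in for a Git history, not real Git: every change is a new immutable version, and a rollback is simply re-applying the previous version. All names and values are invented:

```python
# Minimal sketch of "Git as the source of truth": the desired state lives in
# an append-only history, and rolling back means committing the previous
# version again rather than mutating anything in place.

history = []                      # append-only list of desired-state versions

def push(state):
    history.append(dict(state))   # commit a new version
    return len(history) - 1       # its "revision" number

def rollback():
    if len(history) > 1:
        history.append(dict(history[-2]))  # revert = re-commit prior version
    return history[-1]

push({"image": "shop:v1"})
push({"image": "shop:v2"})        # a bad release
current = rollback()              # back to v1, recorded as a new revision
print(current, len(history))      # -> {'image': 'shop:v1'} 3
```

Note that the rollback leaves the bad version in the history; an agent watching this history (principle 4) would then reconcile the running system to the restored version.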
There’s a time and a place to take on risk
But introducing new tools and processes on critical projects is neither BY GEORGE TILLMANN
George Tillmann is a retired programmer, analyst, management consultant, CIO, and author of ‘Project Management Scholia: Recognizing and Avoiding Project Management’s Biggest Mistakes,’ from which this article is excerpted.
Imagine undergoing some serious surgery at your local hospital. The nurse tells you that they are all excited about your surgery. Your surgeon is very famous but quite new to the hospital; the surgical staff has never worked with him before, and they are not familiar with his operating room procedures. Further, there is exciting buzz about the new operating room technology, delivered just the day before, that will radically change how your operation is performed. You will be the first person they try it out on.

Farfetched? Of course it is. No one would ever put a new surgical team together with new technology and new operating room procedures with a real patient. It is a scenario for disaster. A new systems development project? Well, it’s done all the time, isn’t it!

IT is abuzz. Senior business management just approved the development of a new expanded order management system. They approved a budget to purchase a new NoSQL database, 70 new workstations, a half-dozen servers, and hire seven new programmers and analysts. This will give IT the opportunity to try that DevOps approach it has heard so much about. See any parallel?

New IT technology is expensive. Just purchasing one database management system can set an organization back six figures, even before the required new hardware and staff training. Many IT organizations cannot afford to purchase such items out of their operating budgets. Big-ticket items have to wait for the big systems development projects with their business management-funded big budgets. New, high-profile projects, with funding from project sponsors, can be a time of renewal for IT. That’s the good news. The bad news is that all those new items increase the risk of project failure.

Go back to the surgery example. Being operated on by a famous surgeon is good. Having the latest operating technology is also a plus. Keeping hospital procedures up to date is a sign that the organization is trying to do better.
Throwing them all together for the first time—not a good idea. Yet, that is exactly what IT does. Given two projects, one a three-month, 9-person-month, internal-to-IT inventory system, and the other an 18-month, 200-person-month, business-critical order processing system: which gets the tried-and-true tools and techniques, and which gets the high-risk, brand-new tools and techniques? It’s insane.

Ideally, you want your best-trained staff, using tried-and-true tools and techniques, on the most important projects, and you want to train staff and try, test, and learn how to use new tools and techniques on small, ideally internal-to-IT projects. How do you do that, given that IT can only afford to purchase the new tools and techniques when undertaking a new business-sponsored project? The accompanying diagram frames the conundrum. The four categories in the diagram just might hold the answer to IT’s problem.

Category 1: Missed Opportunity
Assigning IT’s best people—experienced in the use of IT’s methods, techniques, and tools—to non-business-critical work is a waste of resources. Putting your best people on low-risk/low-reward projects can result in bored development staff and senior management wondering if systems development is overstaffed.
Category 2: Best Foot Forward
This category is IT’s raison d’être, its reason for being. Being able to staff a high-profile, business-critical application with IT’s best and brightest, using technology they are familiar with, is the ideal. It’s the lowest risk with the highest reward.

Category 3: Learning Experience
This is IT boot camp. It is the ideal place to train inexperienced staff as well as try out new methods, tools, and techniques. The challenge is finding the funding to bankroll the low-priority work. This is a good test of IT’s ability to sell systems development investment to business management. There is also a silver lining for IT. While IT is working to automate the business, it is often one of the least automated departments in the business: certainly a case of the shoemaker’s kids. Because of either priorities or budgets, IT is one of the most labor-intensive departments in many organizations, with numerous opportunities for automation. However, getting approval for Category 3 projects can be a challenge. Some strong IT management, a well-prepared business case, and a good project champion should grease the skids for IT to identify, approve, and fund Category 3 projects, even if they are not kicked off right away. Why not start them right away? Because the smart IT manager will always have a number of Category 3 projects waiting in the wings for just the right time (see below) to launch them.

Category 4: Suicide Alley
Run! The risk of failure is very high. This death march is often predetermined by the inability of IT to sell to business management the need to invest in new staff and/or technology before committing the farm. Why are there Category 4 projects? The answer is because that is how funding works. IT wants a new database management system and four servers to support it, and senior business management says, what for? Why can’t you use what you have? However, if senior business management wants that exciting new business product and IT says, yes, we can do it, but we will need a new… You get the idea.

The accompanying diagram crosses system criticality with staffing and technology:

• 1 – Missed Opportunity (non-business-critical system; experienced staff, existing technology): could have been used to train staff or learn new technology.
• 2 – Best Foot Forward (business-critical system; experienced staff, existing technology): where IT shines.
• 3 – Learning Experience (non-business-critical system; new technology and/or a mix of new and experienced staff): good training ground.
• 4 – Suicide Alley (business-critical system; inexperienced staff and/or new technology): resume time.

What can systems development do?
There are no great answers, but a few not-so-great ones can provide some help. It’s unfortunate, but Category 4 projects will not simply vanish on their own. However, there are a few things IT can do. The remedy is risk shifting: turning that risky Category 4 project into two projects, a Category 3 followed by a Category 2. The idea is to shift the risk from the Category 4 project to a Category 3 project. Below are three options, in order of preference.

Option A. Schedule a non-business-critical project just before the business-critical one. Because many non-business-critical projects are short and business-critical projects long, it might be possible to schedule a non-business-critical project, containing the new staff, tools, or techniques, before the business-critical project kicks off. A 6- to 8-week low-priority project might just provide the training and tool familiarity staff need to get up to speed for that business-critical project, thus turning that Category 4 disaster into a Category 3/Category 2 win. This is why the smart IT manager always has a Category 3 project waiting for the right moment.

Option B. Run a non-business-critical project simultaneously with the business-critical project. Sometimes the business-critical project needs to start immediately. IT might have to start the non-business-critical project in parallel with the business-critical project. Not ideal, and the non-business-critical project will probably negatively impact business-critical project schedules, but it is unavoidable. The more disparity between the non-business-critical and business-critical schedules (the longer the business-critical project schedule), the better. This option is not as good as Option A, but it might have to do.

Option C. Bundle the Category 4 project with a non-business-critical Category 3 project, doing the non-business-critical work first. If running a non-business-critical project before, or in parallel with, a business-critical project is not possible, then the best IT can do is to find a non-business-critical project related to the business-critical project and bundle the two projects together, making a new single project. Then the non-business-critical tasks could be tackled first, using the new staff and new technology, allowing a de facto training period before the business-critical tasks are started. The challenge is finding the right non-critical project that, from a user perspective, fits in well with the critical project, can be built before the critical project work, and that the business is willing to fund.

The project manager is faced with many technical and personnel project issues, but perhaps the most important success factor is risk management, in this case shifting risk from business-critical applications to non-business-critical applications. It is hard to find a single action that can benefit the business, IT, the project team, and the project manager. This just might be one of them.
The resurgence of enterprise architecture
BY CHRISTINA CARDOZA
Enterprise architecture (EA) is making a comeback. While the method for understanding and visualizing all of the business processes within an organization has been around for decades, recent changes and current trends in the industry are making organizations take a second look at how they do things. According to Saul Brand, senior director analyst at Gartner, research shows 76% of clients are either starting, restarting or renewing their EA practices.
“What we mean by renewing [refers to] clients who are doing some form of enterprise architecture. They are most typically doing solutions and technical architecture, and doing a foundational traditional approach to EA. What is happening is that is not enough,” he said.
The ongoing need to digitally transform and respond to the COVID-19 pandemic is bringing new meaning to why and how businesses do enterprise architecture. “Enterprise architecture is the process of capturing enough of your organization's information systems, IT infrastructure, application portfolio, and workforce in a way to identify meaningful and progressive changes to the enterprise,” said Tom O’Reilly, chief operations officer for Sparx Systems, an EA company.
Why is EA becoming important again?
In just the last couple of years, organizations have been going through a number of different transformations such as cloud transformations, the shift to microservice architecture, the removal of outdated technology, and the introduction of new systems, said André Christ, CEO and co-founder of LeanIX, an EA and cloud governance company.
Because enterprise architecture enables a business to map out all of its systems and processes and how they connect, EA is becoming a “very important method and tool to drive forward digital transformation,” said Christ. He explained that since most transformations don’t start off as greenfield projects, about 70% of them fail due to their existing IT landscape. Having a solid baseline, which EA aims to provide, is crucial for any transformation initiative.
“The reason for this is that once you’ve started a transformation program, you discover new dependencies because of applications connected to other systems that you never knew of before. So replacing them with better applications, with newer interfaces, and with better APIs all of a sudden isn’t as easy as you thought when you were starting the transformation program,” he explained.
Businesses also want to understand where their investments in the IT landscape are going, and connect the business strategic goals to the activities in their transformation program. “This is where enterprise architecture can help you. It allows you to look at this whole hierarchy of objectives and programs you are setting up, the affected applications you are having, and the underlying changes in detail,” said Christ.
The main benefits organizations are looking to get out of EA are: avoiding redundant systems, reducing the need to
pay for additional support costs for those redundant systems, removing the risk of outdated technologies, accelerating the speed at which they can introduce new apps, and the ability to better integrate systems, Christ went on to explain.
“This is fundamentally what’s going on with many of our clients. The recognition that enterprise architecture is important. The recognition that it is front and center to digitization as we rethink about creating new business models and new business designs,” Gartner’s Brand added.
Additionally, the rapid changes businesses had to make as a result of the COVID-19 pandemic are making EA a significant practice within the enterprise, according to Martin Owen, vice president of product strategy at data governance company erwin. When COVID hit, the majority of businesses hadn’t thought about how they were going to change their business structure, work from home, or adapt their short-term and long-term objectives. Businesses had to make sense of all their processes and systems to change their business continuity planning and ensure they would not have to go through this type of disruption unprepared again. But before they could do anything, they needed a blueprint to map everything out and easily understand what changed, when, how, and why. EA became a strategic tool to support, prepare, assist, strategize and implement the changes needed to tackle the crisis.
Activities like disaster recovery and business continuity planning became easier because EA provided visibility into what current processes, systems, and people the businesses had, and what they were doing, Owen explained. However, Gartner sees businesses have gone back to normal and are now focusing on investing in the IT estate to create a digitized operating model. “The role of EA is really reverting back to what it was pre-COVID, but at an even more accelerated pace. COVID kind of opened the eyes for enterprise architecture's importance. But now that we've overcome the initial impact of COVID, and we are now into this recovery phase, EA elevated itself,” said Brand.
The history and evolution of EA
Gartner has noticed this increased interest and revival of EA since 2013. EA has been around since 1987, but crashed and burned in 2012 because it failed to provide businesses value, according to Brand. “Enterprise architecture became very ‘ivory towerish.’ It focused very much on this idea of doing solutions and technical architecture. It became very engrossed in this idea of command and control, governance, assurance, standards and review boards,” said Brand. “It just generally became very much of a function within
IT that even IT struggled to understand.” Some of the ways businesses tackled EA in the past were through the Zachman Framework, the Open Group Architecture Framework (TOGAF), and the Federal Enterprise Architecture Framework (FEAF), according to LeanIX’s Christ. The Zachman Framework was released in the 1980s to enable organizations to start meaningful conversations with the information systems team, create business value through architectural representations, evaluate tools and optimize approaches to development. TOGAF enabled businesses to design, implement, guide, and maintain the enterprise through controlled phases, also known as the Architecture Development Method. The FEAF was designed for the U.S. Government to start implementing EA practices within federal agencies.
These frameworks had benefits and value, but Brand noticed clients “invariably ran into a problem” because they were not “delivering a business value and these clients had to quickly rethink about restarting their EA practice,” he explained. “Now, the challenge for them is thinking about having to take often a rudimentary or foundational enterprise architecture and having to bring it up to the next level of capability and maturity.”
What is necessary is an operating model that enables EA to design the IT estate and enable future-state capabilities that drive “customer centricity and targeted outcomes,” according to Brand. For that reason, he has seen the purpose of EA change from 2012 to take more of a business-outcome-driven approach and start by linking to business direction and strategy.
The traditional way of doing EA has been through Enterprise Architecture Management (EAM), which is the practice of establishing and maintaining a set of guidelines and principles to govern and direct the design and development of an enterprise’s architecture. This practice is not going anywhere, Brand explained; it’s just moving down the list of EA priorities. “If enterprise architecture is going to be of value, its starting point is different,” said Brand. “The emphasis today with modern EA is first by looking at the strategic conceptual contextual things.” With the EAM approach, businesses tend to try to solve the how before they reach the what and why.
Gartner’s eight steps to starting, restarting or renewing a business-outcome-driven EA program are:
1. Adopt business-outcome-driven EA
2. Construct a value proposition
3. Start with business architecture
4. Determine organizational design
5. Determine skill sets and staffing
6. Determine governance and assurance
7. Determine business value metrics
8. Construct a charter
The first three steps establish an approach to EA while the following five execute and operationalize EA. Enterprises first need to understand the technologies, find a practical use case and then operationalize it with the technologies and the existing IT estate. The key concepts driving a modern EA practice are planning, designing, innovating, orchestrating, navigating and operationalizing, according to Brand.
And today, Brand is seeing even more of an evolution with efforts to transform EA into an internal management consultancy. “Our clients are doing business-outcome-driven EA, but they recognize they have to deliver it in a different way, and hence they tend to use the management consultancy model,” said Brand. “What we are talking about is an extension of business-outcome-driven EA, but the catch is how we deliver this to the organization so that value is understood.” This EA-as-an-internal-management-consultancy transformation has been an ongoing trend since 2016, and it involves utilizing fusion teams, a concept where business and IT lead the use of technology to create new business designs and models. “It’s not the old days where it’s simply business says and IT does. This is a relationship
between business and IT people jointly making decisions about investment in their IT estate.”
EA beyond 2020
Last year, Gartner’s enterprise architecture predictions focused on the “importance of information architecture becoming more prevalent” and clients having to think about “stepping up their game.” This year, Gartner is seeing the idea of enterprise architecture become more involved and focused on the composable enterprise. “We do see EA becoming more front and center to helping build a composable IT estate that is quicker, better and able to deliver speed to value and time to market.”
“Composable business is a natural acceleration of the digital business that you live every day. It allows us to deliver the resilience and agility that these interesting times demand,” said Daryl Plummer, distinguished vice president analyst, during the opening keynote at the virtual Gartner IT Symposium/Xpo 2020 in October. “We’re talking about the intentional use of ‘composability’ in a business context — architecting your business for real-time adaptability and resilience in the face of uncertainty.”
Brand believes EA will continue to evolve, adapt and respond to the changing world. “Remember, we are building digital technology-enabled and data-driven business models, and enterprise architecture is front and center to delivering all of that. It is the intersection between business and IT,” he said. By 2023, Gartner expects 60% of organizations will depend on EA to lead digital innovation. “Today’s enterprise architects are responsible for designing intelligence into the business and operating models, identifying ways to help their organization use data, analytics and artificial intelligence to plan, track and manage digital business investments,” Brand stated. Brand also predicts by 2023, EA tools will be more intelligent to support customer experience, product design, machine learning and IoT. He went on to explain that in order for EA leaders to
Gartner’s 13 worst EA practices
Part of the reason Gartner believes enterprise architecture previously didn’t provide any business value and “crashed and burned” in 2012 is because organizations were not looking at the bigger picture. “Despite best efforts, many EA programs fall from ‘best practices’ to ‘worst practices.’ Enterprise architecture and technology innovation leaders must be vigilant and navigate away from practices that sink EA efforts,” Gartner wrote in a post.
The top 13 worst EA practices are:
1. Not linking business strategy and targeted business outcomes
2. Confusing technical architecture with Enterprise Architecture
3. Focusing on the current-state architecture first
4. Excessive governance and overbearing assurance
5. Creating a standard for everything
6. Being engrossed in the art and language of EA instead of business outcomes
7. Strict adherence to EA frameworks and industry reference models
8. Adopting an “ivory tower” approach to EA
9. Lack of continuous communication and feedback
10. Restricting the EA team to IT resources only
11. Lack of key performance metrics
12. Purchasing an EA tool before understanding the use cases and critical capabilities
13. Thinking that EA is ever done
Saul Brand, senior director analyst at Gartner, points out if businesses look at Gartner’s top 8 best practices to being successful at EA, governance and assurance doesn’t happen until step six. Thinking of EA as a form of guardrails is a “worst practice” because it makes EA a command and control practice when it should be more of a “center of higher knowledge and knowledge sharing.” Another worst EA practice to point out is when organizations run out and buy a tool first. “There’s an old saying: ‘a fool with a tool is still a fool,’” said Brand. Clients who run out and buy a tool first become obsessed with integrating and implementing it. “They forget the real purpose of enterprise architecture is that value proposition.” z
demonstrate business value today, they must design for intelligence, refocus on information architecture, lead digital innovation, and leverage intelligent tools. However, he recommends organizations decide what they want to get out of EA first, get buy-in and mandate, ensure the practice is valuable and interesting, then start to think about how a tool can be implemented.
Sparx Systems’ O’Reilly sees organizations maturing to a more team-based approach for EA going forward. “It is imperative that EA needs to be contributed to and accessible by each and every member of an organization (with a few EA heads to govern the process),” O’Reilly explained. “Companies may have been able to treat EA as an ivory tower practice pre-COVID, however now that many organizations are taking advantage of their employees working from home...a team-based
accessible solution is required.”
One of the main challenges for EA practices is knowing how much information to capture and the level of detail. “Many organizations agree that they need enterprise architecture and will spend the next 10 years capturing their current system. Unfortunately at that point when they are ‘done’ their model is 10 years out of date. To overcome this there needs to be some conscious decision to decide when enough information has been gathered to make a decision. The easiest way to achieve this is to have every member of your team contribute to the overall picture on a daily basis as part of their work,” O’Reilly explained.
Erwin’s Owen added that being able to automate software discovery and integrate with data modeling tools will also help EA provide insight into process, people, the organization and the technologies. z
DEVOPS WATCH
Harness announces CI, DevOps modules for software delivery BY CHRISTINA CARDOZA
CI/CD and DevOps tool provider Harness is preparing for the next generation of software delivery with the announcement of two new modules at its inaugural {Unscripted} 2020 user conference. The new modules, Continuous Integration Enterprise and Continuous Features, are designed to enforce quality and compliance, reduce the time to remediate vulnerabilities, and provide continuous deployment.
Continuous Integration Enterprise is an AI/ML-powered continuous integration solution for building and testing cycles. The module comes from Harness’ acquisition of CI company Drone.io and includes familiar features such as container-based pipelines, quick bootstrapping, low maintenance, customizations, and parallel step execution.
Continuous Features is a feature flag management solution that tackles “feature sprawl” and “progressive delivery,” and reduces the time it takes to release features. It includes a visual interface for feature verification and analytics, and an easy-to-use dashboard to monitor flags.
“The rise of cloud-native architectures and increased fragmentation of infrastructure environments makes shipping software in a scalable and secure
manner an incredible challenge,” said Jyoti Bansal, CEO and co-founder, Harness. “That’s why Harness is focused on building the next generation of software solutions and contributing to open source projects to make it easier to get to better software more quickly. Whether our customers are running applications in containers, in the cloud or on-premises, we are here to help. We are meeting our customers where they are and taking them to where they need to be in an increasingly multi-cloud world.”
The beta for these modules is expected to be available in Q4. The company also made a number of updates to its continuous delivery, continuous verification and continuous efficiency solutions. For continuous efficiency, Harness’ “Project Lightning” will help eliminate cloud waste and save users 40-75% on their public cloud bill, according to the company.
Harness’ continuous delivery-as-a-service platform features a new user interface, GitOps, pipelines-as-code, bi-directional sync, conflict management, and templates for standardizing deployments. The continuous verification solution provides greater visibility into the impact of changes within any environment. z
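Feature flags of the kind Continuous Features manages follow a simple pattern: risky or unfinished code paths are guarded by named toggles that can be flipped without redeploying. A minimal, hypothetical sketch of that pattern in plain Java (this is not Harness's actual API; the flag names and in-memory store are illustrative):

```java
import java.util.Map;

// Illustrative feature-flag guard: code paths are selected by named
// toggles rather than by shipping a new build.
public class FlagDemo {
    // In a real system this map would be a call to a flag-management
    // backend; here it is a static in-memory stand-in.
    static final Map<String, Boolean> FLAGS = Map.of(
            "new-checkout", true,
            "dark-mode", false);

    static boolean isEnabled(String flag) {
        // Unknown flags default to off, so unreleased code stays dark.
        return FLAGS.getOrDefault(flag, false);
    }

    public static void main(String[] args) {
        if (isEnabled("new-checkout")) {
            System.out.println("serving new checkout flow");
        } else {
            System.out.println("serving legacy checkout flow");
        }
    }
}
```

In a managed flag service, the map would be backed by a dashboard like the one the article describes, so product teams can enable a feature for a slice of users and roll it back instantly without a redeploy.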
In other DevOps news
• Checkmarx is now available on the AWS Marketplace with AWS DevOps Competency. The application security testing provider recently had partnership activity with GitHub and GitLab. It provides automated solutions to simplify and speed up security testing in fast-paced DevOps environments. According to the company, it now possesses both the AWS Security and DevOps competencies, which underscores its commitment to helping move DevOps initiatives to the cloud.
• Logz.io released its DevOps Pulse 2020 survey, which revealed rising observability challenges. According to the survey results, the rising observability challenges stem from an increased adoption of cloud-native technologies such as microservices, serverless, and Kubernetes, with 87% of respondents reporting that their main challenges lie in these environments. The most common difficulties running Kubernetes are monitoring and troubleshooting at 44%, and other problematic areas include security (35%), networking (34%), and cluster management (30%).
• Tasktop announced the Flow Institute to bring IT and business leaders together. The newly launched online community for business leaders offers custom courses and content to gain practical knowledge and skills, as well as better understand value stream management and Tasktop Flow Metrics. The institute will cover topics such as outcome-based recognition of software development and business goals, provide ongoing input and knowledge around the impact of product-based thinking, and more.
• WhiteSource announced native integration with Microsoft Azure DevOps services. The integration is meant to provide Azure DevOps users with more visibility over open-source components with real-time security and compliance alerts and a detailed risk report. According to the company, this will reduce remediation time in pre- and post-release stages. z
Harness Continuous Efficiency gives DevOps teams context and visibility into cloud costs.
2020: The Year in Review
2020: The year everything changed BY DAVID RUBINSTEIN
The year 2020 started much like every other year: software development shops were humming, in-person conferences attracted large crowds, moves to the cloud and for businesses to take on new software delivery initiatives were continuing apace. Then we hit March. The explosion of the novel coronavirus would redefine how, when and where we work. The virus forced organizations to close and workers to stay home, requiring businesses to quickly get tools into everyone’s hands, adopt new collaboration software and figure out VPN access to, and security of, their assets on the fly.
With entire organizations working remotely, companies like Zoom and Cisco benefitted from office meetings going digital, while new platform players sprang up to host virtual events that became ubiquitous in 2020.
Meanwhile, workers struggled to find a balance between working from home and NOT working. Many, in fact, reported that because there were no places to go and not much else to do, they found they were working many more hours than they normally would. As we look back at various sectors of the industry, it’s important to note that most of the year’s efforts were aimed at keeping businesses going even as they were turned on their heads. It’ll be interesting to see how many of these changes in how we work remain in place once a vaccine has been distributed and things can return to what we used to know as normal. z
Testing goes hands-off during the pandemic with automation
BY JAKUB LEWKOWICZ
Automated testing has mushroomed in importance in 2020 as many companies realized the primary way to connect with consumers is through apps and digital applications, which in turn has increased the amount of testing that needs to be done. The pandemic has also created a distributed workforce and prompted the need for alternate methods of testing that don’t require being on site.
“Before the pandemic, a lot of mobile testers were relying on the few physical devices they kept in a drawer at work. Now, they’re realizing they need access to a device cloud that provides the same interactive capabilities desktop and web developers get using virtual machines,” said Dan McFall, the president and CEO of Mobile Labs.
This year, we saw automated testing, continuous testing and security testing continue to grow. Non-traditional testing such as feature experimentation, Visual AI, and chaos engineering also advanced to keep pace with organizational demands in the digital age.
“Application changes occur several times a day now. These changes need to work on many browsers, devices, operating systems and different environments, so you need to do far more work in far less time,” said Gil Sever, CEO and co-founder of Applitools. “You can’t manually write and maintain all the scripts needed, so you need Visual AI to take over these rote aspects of the work.” Back in March, Applitools released Ultrafast Grid, which simplifies cross-browser testing by eliminating the need to tediously run functional and visual tests individually across all browsers and viewports.
Solution providers have focused on the need for an AI-driven approach that can be utilized for both legacy and modern cloud-native technologies. For example, in October, Tricentis announced Tosca 14 with Vision AI,
which automatically recognizes and identifies visual user interface elements and controls across any form factor the same way humans do, to aid in the automated generation of robust test cases. “Test automation technology has evolved from script-based, to model-based, and is now moving towards AI-based approaches,” Tricentis wrote in a post.
This year also saw major acquisitions. In June, Keysight Technologies acquired Eggplant, a software test automation platform that leverages artificial intelligence and analytics to automate test creation and test execution. Then in November, mobile experience platform provider Kobiton acquired its competitor Mobile Labs to enable developers and QA teams to deliver apps faster by leveraging artificial intelligence across real devices spanning cloud and on-premises deployments.
“There is an urgent need to master testing at both large scale and high velocity to ensure high-quality software delivery. To succeed, application leaders need to develop their teams’ competency in autonomous testing to remove testing bottlenecks and accelerate release cadence,” Gartner wrote in its recently published Innovation Insight for Autonomous Testing this year. z
Microsoft unifies .NET ecosystem, takes on more advanced AI, and addresses remote work BY CHRISTINA CARDOZA
Microsoft this year finally completed its plans to merge .NET Core, .NET Standard and .NET Framework. The company first announced the ambitious effort last year, and was able to complete the task on schedule with the official release of .NET 5.0 — along with ASP.NET Core, EF Core, C# 9 and F# 5 — at its .NET Conf 2020 in November.
“You’d think that ‘November 2020’ was a cheque that could not be cashed given all the challenges this year, however, .NET 5.0 has been released on time,” Richard Lander, program manager for the .NET team, said in a blog post.
In addition to unifying its .NET solutions, .NET 5.0 features improved performance across several components and .NET libraries, reduced P95 latency, and expanded platform scope with Arm64 and WebAssembly. But that’s not the end of the company’s .NET unification journey. Microsoft already revealed .NET 6.0 will be released in a year with more focus on Xamarin developers being able to use the platform. Aside from the .NET 5.0 release,
one of the biggest announcements of the year was in September when Microsoft made the decision to exclusively license OpenAI’s GPT-3 language model — the autoregressive language model that outputs human-like text. Microsoft plans to develop and deliver advanced AI solutions and create new solutions based on advanced natural language generation.
In response to the ongoing pandemic, Microsoft quickly added new capabilities to Microsoft Teams that tackled the needs of the new remote workforce. Features included the ability to avoid background distractions, minimize background noise, participate in large meetings, work offline, and break
out chat windows. The pandemic also forced the company to take its annual Build conference online in May, where it announced Windows Terminal 1.0, Azure Synapse Link, Azure Cognitive Services updates, Project Reunion, the open-sourcing of Windows Package Manager Preview, and the acquisition of Softomotive for low-code robotic process automation capabilities.
Other notable announcements throughout the year included investments in Visual Studio, updates to TypeScript, and new open-source tools. Visual Studio Online became Visual Studio Codespaces in May to represent the solution’s ability to do more for developers than just edit in the browser. The Visual Studio extension model was redesigned in October to not only make extensions more reliable, but easier to write and support locally or remotely. The general availability of Microsoft Edge Tools for VS Code extensions was also announced in October to make it easier for web developers to do more with Visual Studio.
After many updates to its programming language for application-scale JavaScript, TypeScript 4.0 was released in August. The latest version is meant to represent the next generation of the language, and focused on expressivity, productivity and scalability.
Lastly, the company released a number of new tools into open source. Application Inspector’s source code analyzer for identifying “interesting” features and metadata was released in January. The GW-BASIC source code, the BASIC interpreter written in assembly language, was brought to GitHub in May for historical and educational purposes. Project Tye, an experimental project meant to make it easier to develop, test and deploy microservices, was announced in May. The company open-sourced its extension of TensorFlow for Windows, TensorFlow-DirectML, in September. The solution is meant to bring TensorFlow beyond its traditional GPU support and to native Win32 and Windows Subsystem for Linux. Project OneFuzz was released in September to reduce the complexity of fuzz testing and help developers find and fix bugs at scale. And Playwright for Python was announced in October for automated end-to-end tests written in Python. z

Java celebrates 25 years
BY JENNA SARGENT
There were two new major Java releases this year, Java 14 and 15. Java 14 introduced features such as pattern matching for instanceof, a packaging tool, NUMA-aware memory allocation for G1, and more. Java 15 introduced developer productivity enhancements like the Edwards-Curve Digital Signature Algorithm (EdDSA), hidden classes, and text blocks.
In May, Java celebrated its 25th anniversary. To celebrate the anniversary, JetBrains compiled data from multiple sources to look at the current state of the language. The survey revealed that over a third of developers use Java as a primary language, and that it is the second primary language of developers following JavaScript.
A major recent change in the Java ecosystem was the loss of support from Oracle. In 2019, Oracle changed its Java licensing model so that only companies with a paid commercial Java subscription would receive updates to Java SE. This change caused 80% of the Java community to start considering other support options. A survey conducted by Azul Systems in February revealed that preferred use of Oracle JDK dropped from 70% to 32%. A majority of users shifted to free or supported OpenJDK-based deployments of Java.
Another change in the Java community this year was the news that OpenJDK contributor BellSoft was teaming up with VMware to improve OpenJDK. At the time this was announced, the main areas for improvement were to enhance support for ARM processors and optimize Java for cloud deployments and microservices architectures.
In addition, Python finally surpassed Java as the most popular programming language on some developer reports. In JetBrains’ 2020 State of the Developer Ecosystem Report, Python surpassed Java as the most-used language. Java still remained the most widespread primary language. z
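Two of the Java 14/15 language features mentioned above, pattern matching for instanceof (previewed in 14 and 15, later standardized) and text blocks (standardized in 15), look like this in practice; the class and sample values here are illustrative:

```java
public class Java15Demo {
    // Pattern matching for instanceof: the type test and the cast are
    // combined, binding a typed variable (s, i) in one step.
    static String describe(Object obj) {
        if (obj instanceof String s) {
            return "String of length " + s.length();
        } else if (obj instanceof Integer i) {
            return "Integer with value " + i;
        }
        return "something else";
    }

    public static void main(String[] args) {
        // Text blocks keep multi-line literals (JSON, SQL, HTML) readable
        // without escaped quotes and string concatenation.
        String json = """
                {"language": "Java", "age": 25}""";
        System.out.println(describe(json));
        System.out.println(describe(42)); // prints "Integer with value 42"
    }
}
```

On a current JDK both features compile as shown; on Java 14 or 15 itself, pattern matching for instanceof was a preview feature and required the --enable-preview flag.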
DevOps practices continue to expand, embrace new ideas BY DAVID RUBINSTEIN
The practice of DevOps — bringing Agile development together with changes in infrastructure for running cloud-native applications — has changed the development industry over the last decade. And it has done its job well. A Fortune Business Insights report released in January of this year projected the size of the DevOps tools market will reach $14.9 billion by 2026, noting that in 2018, it was only a $3.7 billion market. Integrations, mostly through APIs, have brought testing, governance and security right into the development pipelines of many organizations, facilitating a "shift left" that has left some developers feeling put upon and more than a bit overwhelmed, though more are buying in to taking on those responsibilities. In 2020, we saw DevOps continue to expand into areas such as value streams, GitOps and, most recently, BizOps. Value stream management has its roots in manufacturing, where eliminating bottlenecks and gaining efficiencies in product delivery were the goal. The same is true in software development, only trickier, because the number of moving pieces is greater and the final product isn't always the same. After a core group of companies defined and led the market — CloudBees, ConnectALL, digital.ai (which formed this year as the combination of CollabNet, VersionOne and XebiaLabs), HCL Software, IBM and Plutora — this
year saw a second wave of companies bringing tools to the market. In fact, in a "Predicts 2021" paper released in early October, analyst firm Gartner said that by 2023, 70% of organizations will use value stream management to improve flow in the DevOps pipeline, leading to faster delivery of customer value. Also in October, a small consortium of companies, led by Broadcom, produced a BizOps Manifesto, which they say is the next evolution of DevOps. Where DevOps has made great strides in tying development more closely to operations, BizOps seeks to tie development and operations more closely to business outcomes. The manifesto itself decrees that business outcomes are the primary measure of success for an organization. Meanwhile, GitOps continued to gain traction as organizations derived value from keeping their infrastructure configurations and applications inside their code repositories for greater speed and flexibility of software deployments. Last month, Codefresh created GitOps 2.0 to provide, in its words, patterns and standards that improve software delivery and reliability. The company said GitOps 2.0 will address an observability gap that existed before, as well as issues of dealing with multiple environments and secrets. z
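The GitOps pattern described here reduces to a reconciliation loop: an agent compares the desired state declared in a Git repository against the actual state of the environment and converges the two. Below is a minimal, illustrative sketch in Python; the resource names and dictionary-based state are invented stand-ins for repo contents and a live cluster, not any real GitOps tool's API.

```python
# Minimal sketch of the GitOps reconciliation idea: the desired state of an
# environment lives in a Git repository, and an agent continuously diffs it
# against the actual state and converges the two. All names here are
# illustrative, not taken from any real GitOps tool.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the list of changes needed to make `actual` match `desired`."""
    changes = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            changes.append(("apply", name, spec))   # create or update drifted resource
    for name in actual:
        if name not in desired:
            changes.append(("delete", name, None))  # prune resources removed from Git
    return changes

# Desired state as it might be declared in a repo; actual state from the cluster.
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual = {"web": {"replicas": 1}, "worker": {"replicas": 1}}

for op, name, spec in reconcile(desired, actual):
    print(op, name, spec)
```

Tools such as Argo CD and Flux implement this loop for Kubernetes, watching the repository and correcting drift automatically.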
2020: The Year in Review
How AI and machine learning moved forward in 2020 BY JENNA SARGENT
AI and machine learning saw several steps forward in 2020, from the first beta of GPT-3 to stricter regulation of AI technologies, conversations around algorithmic bias, and strides in AI-assisted development and testing. GPT-3 is a neural network-based language model created by OpenAI. It entered its first private beta in June this year, and OpenAI reported that it had a long waitlist of prospective testers waiting to assess the technology. Among the first to test the beta were Algolia, Quizlet, Reddit, and researchers at the Middlebury Institute. GPT-3 is being described as "the most capable language model created to date." It is trained on massive datasets, including Common Crawl, a huge library of books, and all of Wikipedia. In September, Microsoft announced that it had teamed up with OpenAI to exclusively license GPT-3. "Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone — researchers, entrepreneurs, hobbyists, businesses — to empower their ambitions to create something new and interesting," Kevin Scott, executive vice president and chief technology officer at Microsoft, wrote in a blog post. The ethics of AI and its potential
biases were also more heavily discussed this year, with the Black Lives Matter movement bringing more attention to an issue the industry has been talking about for the past few years. Anaconda's 2020 State of Data Science report revealed that the social impact stemming from bias in data and models was the top issue that needed to be addressed in AI and machine learning, with 27% of respondents citing it as their top concern. In April, Washington state passed facial recognition legislation that ensured upfront testing, transparency, and accountability for facial recognition. The law requires that government agencies can only deploy facial recognition software if they make an API available for testing of "accuracy and unfair performance differences across distinct subpopulations." In June, IBM also decided to sunset its facial recognition software in order to address the responsible use of such technology by law enforcement. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency," IBM CEO Arvind Krishna wrote
in a letter to Congress. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies." Nate Custer, senior manager at testing automation company TTC Global, said there are a number of areas where AI and machine learning could have a positive impact. Test selection is at the top of the list: specifically, the ability to test everything in an enterprise, not just web and mobile apps. The second most promising area is surfacing log differences, so that if a test took longer than it should to run, the tool might suggest that the delay was the result of a performance issue. A third area is test generation using synthetic test data. Gartner analyst Thomas Murphy believes that it's still early days for autonomous testing. AI has also made its way into development tools. In a conversation on the podcast "What the Dev?," OutSystems senior product marketing manager Forsyth Alexander explained how development tools have incorporated AI to help make developers more productive. These AI-enabled platforms can help developers with coding and discover problems as they're created. It is expected that all of this automation and AI-assisted tooling will help, not replace, human workers. An IBM report from earlier this year revealed that 45% of respondents from large companies had adopted AI, and 29% of respondents from small and medium-sized businesses had done the same. Those companies are still in the early days of adoption and are looking for ways to utilize AI to bolster their workforce. Former IBM CEO Ginni Rometty said in March that she prefers the term augmented intelligence over artificial intelligence. "AI says replacement of people; it carries some baggage with it, and that's not what we're talking about," Rometty said.
“By and large we see a world where this is a partnership between man and machine and that this is in fact going to make us better and allows us to do what the human condition is best able to do.” z
Security issues increase as the world becomes more digital BY JAKUB LEWKOWICZ
The year 2020 saw a tremendous shift toward doing business online due to COVID-19, and cybercriminals have taken this opportunity to step up their attacks, in both frequency and scope. The FBI reported that the number of complaints about cyberattacks to its Cyber Division is up to as many as 4,000 a day, a 400% increase from what it was seeing pre-coronavirus. In June, Microsoft also reported that COVID-19-themed attacks, where cybercriminals get access to a system through the use of phishing or social engineering, had jumped to 20,000 to 30,000 a day in the U.S. alone. Particularly alarming were next-generation cyberattacks aimed at actively infiltrating open-source software supply chains, which saw a 430% increase over the previous year, according to the 2020 State of the Software Supply Chain report published in August. "Attackers are always looking for the path of least resistance. So I think they found a weakness and an amplifying effect in going after open-source projects and open-source developers," said Brian Fox, the chief technology officer at Sonatype. "If you can somehow find your way into compromising or tricking people into using a hacked version of a very popular project, you've just amplified your base right off the bat. It's not yet well understood, especially in the security domain, that this is the new challenge." However, proper tooling, such as the use of software composition analysis (SCA) solutions, can ameliorate some of these issues. To improve open-source security, the Linux Foundation launched a new initiative called OpenSSF in August. OpenSSF is committed to collaboration and to working both upstream and with existing communities to advance open-source security for all, as open-source software has become more pervasive in data centers, consumer devices, and services. The concept of DevSecOps, in which security tools are used earlier in the development workflow (in developers' IDEs, but also in code management systems and build tools), has grown in importance. To further DevSecOps, GitHub launched a new code scanning capability in October that scans code as it is created and provides reviews within pull requests and other GitHub experiences. Soon after, IBM released its Code Risk Analyzer, which can be configured to run at the beginning of a developer's code pipeline; it reviews and analyzes Git repositories for known issues with any open-source code that needs to be managed. Despite new tools emerging, a report published in October by WhiteSource found that 73% of developers
sacrifice security for speed. "There are a lot of advantages to the proliferation of automated tools throughout the DevSecOps pipeline. However, managing and orchestrating all of them has also become a process in itself that can take up a lot of time, and also create further friction between teams using a variety of different tools," said David Habusha, vice president of product at WhiteSource. 2020 also saw the enactment of the California Consumer Privacy Act on January 1st. As of September, when we covered the topic, no major fines had been issued. However, enforcement of the GDPR regulation, in effect since May 2018, has picked up steam. As of February 2019, nine months after GDPR took effect, only 91 fines had been issued, most of them small, while as of August 2020, 347 fines had been issued, totaling close to $209 million. "For GDPR it took almost one year before the bigger fines started taking effect. Because of the fact that CCPA went into a stretch period with COVID, it was a kind of silent launch. In the next six months we will see more and more of the people, the activists, trying to enact their rights and we will see more of the effects of this regulation," said Jean-Michel Franco, the director of product marketing for Talend. z
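At its core, the software composition analysis (SCA) tooling mentioned above works by matching a project's declared dependencies against a database of known-vulnerable versions. The toy sketch below uses an invented advisory list and exact version pins; real SCA tools consume curated vulnerability databases and match version ranges.

```python
# Toy sketch of what a software composition analysis (SCA) check does:
# compare a project's declared dependencies against an advisory database of
# known-vulnerable versions. The advisory data is invented for illustration.

ADVISORIES = {
    # package -> set of versions considered vulnerable (exact pins for simplicity)
    "examplelib": {"1.0.0", "1.0.1"},
    "demojson": {"2.3.0"},
}

def scan(dependencies: dict) -> list:
    """Return (package, version) pairs that match a known advisory."""
    return [
        (pkg, ver)
        for pkg, ver in dependencies.items()
        if ver in ADVISORIES.get(pkg, set())
    ]

project = {"examplelib": "1.0.1", "demojson": "2.4.0", "safepkg": "0.9"}
print(scan(project))  # flags only the pinned examplelib release
```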
Empower your manual testers to build automation 20X faster than in Selenium
Decrease your time spent on test maintenance 200X
Decrease your total cost of ownership 100X per automation test
testRigor helps you in three ways: 1. It generates tests for you based on how your end users are using your app in production. 2. Tests are expressed in plain English, helping you write tests from an end-user perspective (as opposed to white-box concepts like XPath). 3. On top of that, it has a Chrome plugin to help you build tests while performing manual testing.
Ready to accelerate your dev cycles? REQUEST A DEMO: testrigor.com/request-trial
www.sdtimes.com
December 2020
SD Times
Buyers Guide
TEST AUTOMATION: From the mundane to the creative domain BY JAKUB LEWKOWICZ
The drastic increase in the volume of tests and the speed of software production has necessitated more efficient automated testing to handle repetitive tasks. The growing "shift-left" approach in Agile development processes has also pushed testing much earlier in the application life cycle. "There is a challenge to testing in the sense that we need to do it more frequently, we need to do it for more complex applications, and we need to do it at a higher scale. This is not feasible without automation, so test automation is a must," said Gartner senior director Joachim Herschmann, who is on the app design and development team. In fact, last year's Forrester Wave: Global Continuous Testing Service Providers found that traditional testing services don't cut it for many organizations anymore: 20 of 25 reference customers said that they are adopting continuous testing (CT) services to support their Agile and DevOps initiatives within
a digital transformation journey. Of those CT services, clients say automation is the most impactful and differentiating factor for delivering better software faster. Investment in automated testing is expected to rise from $12.6 billion in 2019 to $28.8 billion by 2024, according to a report by MarketsandMarkets, a B2B research firm. The pandemic has also driven home the importance of autonomous testing, as many companies realized the primary way to connect with consumers is through apps and digital experiences, which in turn increased the amount of testing that needs to be done. The situation created a distributed workforce that needed to evolve the way it does testing. "With the effects of COVID, organizations had to execute a two-year plan in two months," said Mark Lambert, vice president of strategic initiatives at Parasoft, a software company that specializes in automated software testing. The current major shift that has occurred in autonomous testing is that it is no longer primarily driven by code
but is actually driven by data, according to Herschmann. Anything that involves AI is driven by data. These data sources include user stories or requirements that could stem from documents describing what an expected piece of functionality is. This requires natural language processing and technologies that can read the document, infer the intent, and then create a test case. Other data points include existing test results, in which users can identify patterns in their tests and see what their failure points were before. Automated testing tools can also scan data or feedback supplied in app stores or even on social media to find information that the testers may have missed. "Very often there is a discrepancy between what the project manager envisions about a product versus how it's used in reality. There's a gap in testing there and now we can capture that," said Herschmann. Tooling can also generate unit tests
automatically, because it looks at GitHub, where there are millions of projects, scans them, and trains the model based on that code. "By the way, writing unit tests is a task that developers hate, so if that can be done automatically, that's great," Herschmann said. Test automation also looks at log data, such as web server logs or other log files, and captures information about how users have used the applications. This can then be used to extract customer journeys and create common test scenarios based on them. "We're for the first time really tapping into these data sources, and we're using that to enhance test automation. Where it all leads to is we're finally getting to a point where the full life cycle of testing is actually increasingly automated," Herschmann said. As the move to Agile has increased, more companies have implemented the test automation pyramid strategy, with unit-level testing at the base, where the largest number of automated tests needs to be done, followed by API testing, and lastly UI testing. "There are a lot of excellent open-source tools in the market when it comes to unit testing, but UI-based functional end-to-end testing is where there are a lot of challenges," said Artem Golubev, the co-founder and CEO of testRigor, a plain English-based testing system. Golubev stressed the need for an effective solution in this particular area. "These are difficult in particular because of stability and maintainability, and it is difficult for teams to even build tests for this in the first place."
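For context, the unit tests at the base of the pyramid are small, fast, function-level checks. A minimal illustration follows; the discount function is invented for this example, and test-generation tools of the kind described here produce files in roughly this shape.

```python
# A minimal example of the kind of unit test that sits at the base of the
# test automation pyramid. The function under test is invented for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: fast, isolated, no UI or network involved.
def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_no_discount():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_percent_rejected():
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

if __name__ == "__main__":
    test_typical_discount()
    test_no_discount()
    test_invalid_percent_rejected()
    print("all unit tests passed")
```

Runners such as pytest discover and execute `test_*` functions like these automatically, which is why generated unit tests slot neatly into CI pipelines.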
Automated testing does not eliminate manual testing

Although companies have become increasingly aware of the speed and accuracy that come with automated testing, this has not eliminated the necessity of manual testing at organizations. "In general humans are really good at the creative domain, domain knowledge workflows, but they're very bad at repetitive tasks. So if I can point a machine
and tell it to go ahead and verify a particular use case, such as looking for specific numbers on a page and making sure they all match, that is a great job for a machine to do. It’s a bad job for a human, because as we really start to have more domain knowledge, those kinds of workflows bore us and we make mistakes,” Parasoft’s Lambert said. Meanwhile, people add value in understanding how the application should be used and the problem that the application is trying to solve. Manual testing is a very valuable part of the process, Lambert explained. The expansion of AI in test automation has also led to tremendous benefits in test stability, maintainability, and
being able to generate the tests. However, AI will not be able to replace humans in the near future when it comes to testing, according to Golubev. "In cases of bot-based generated tests, it's the AI that guides the bot through your application in order to be able to build proper end-to-end tests out of the box. There are also machine learning-based models that automatically assess if your page is rendered properly from an end user's perspective," Golubev said. "There is no such thing, and there won't be in the next 20 years, something such as overarching AI. With the current models and how they work in 2020, the compute is just not there," said Golubev.
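The stability and maintainability gains mentioned above often come from "self-healing": when a test's primary way of finding an element breaks after a UI change, the tool tries learned alternatives before failing. A simplified sketch, with dictionaries standing in for rendered pages and invented locator strings; real tools rank fallback candidates with machine learning rather than a fixed list.

```python
# Hedged sketch of the "self-healing" idea: when a test's primary element
# locator stops matching (e.g., after a UI change), fall back to alternative
# locators instead of failing outright. Page models and locators are invented.

def find_element(page: dict, locators: list) -> str:
    """Try each locator in order; return the first element found."""
    for locator in locators:
        if locator in page:
            return page[locator]
    raise LookupError(f"no locator matched: {locators}")

# The "buy" button's id changed from #buy-now to #add-to-cart between releases.
old_page = {"#buy-now": "<button>Buy</button>"}
new_page = {"#add-to-cart": "<button>Add to cart</button>"}

locators = ["#buy-now", "#add-to-cart"]
print(find_element(old_page, locators))  # primary locator still works
print(find_element(new_page, locators))  # heals by falling back to the second
```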
Test automation drivers

Lambert said that there are three primary use cases driving the adoption and application of test automation: compliance, the need to
accelerate delivery, and the reduction of operational outages. "First, compliance is one of those things that's non-negotiable and it really is a bottleneck at the end of the delivery pipeline," Lambert said. Whether it's for PII, GDPR, PCI, or countless other regulations, the organizations that implement compliance in an automated manner are the organizations that really succeed in delivering on the second important use case, accelerating delivery, according to Lambert. However, accelerating delivery is not just about the quantity of tests put out in the shortest period of time. This phase primarily has to be about focusing on the quality of automated tests. The third major point of automated testing focuses on eliminating production outages and on doing continuous verification and validation as one goes through the process. "If you're just accelerating and not worrying about quality, that might work for the first release, maybe the second release iteration, but certainly if you don't have that in place, and if you don't have the testing to check, you're going to start failing as you move forward," Lambert added. "If you build quality into your accelerated delivery process, then you can deliver with confidence and make sure you don't have those production outages." When beginning with test automation, organizations not only have to figure out how to create their test automation, but also identify what to automate, because not everything can be automated, according to Lambert. Then, organizations need ways, practices and technologies to help them with the creation process. While many organizations getting started with test automation tend to look for the simplest approach, seeking tools that are easy to use and that can be plugged into the pipeline, Lambert said that it is best to think long term. "One thing you have to look at is how is that going to scale?
So a technology that you're bringing in, or a capability that you're bringing in, might satisfy the use case that you have today, but is
How does your solution help organizations implement automated testing?

Artem Golubev, co-founder and CEO of testRigor

"testRigor is a functional end-to-end testing tool for web and mobile designed to automate away the work of manual testers. With testRigor, organizations can expect up to 200 times less test maintenance as well as up to 20 times faster speed and ease of test creation. As an autonomous, self-healing functional UI regression and exploratory testing solution, testRigor reduces grunt work and accelerates delivery. It gives users 90% or more test coverage with AI-driven tests. In addition, it reduces QA overhead, helps with leading a more efficient QA team, and allows for painless scalability. testRigor analyzes your end-user usage metadata to autonomously create functional end-to-end tests that cover all important test cases. Since the tests are in plain English, this eliminates the need to set up and maintain Selenium and Appium. Our system will allow your lead QA engineers to build a framework, and manual QA engineers will then be able to build tests on that framework or from scratch without any need to code anything. However, when it comes to unit and API testing, testRigor should be used as a complementary solution rather than a replacement, because it would be overkill to use it to only test APIs. Use it for SMS, phone call, and downloaded file testing, as well as for testing systems where you don't control the underlying code. testRigor also offers unique features such as test generation based on actual end-user behavior in production and additional tools to further simplify test maintenance to an absolute minimum.
Our goal at testRigor is to allow our customers to have the most valuable test suite possible with as little effort on their end as possible."

Mark Lambert, vice president of strategic initiatives at Parasoft

According to a recent Forrester survey, quality continues to be a priority and the primary metric for measuring the success of software deliveries. With the continued pressure to release software faster and with fewer defects, it's not just about speed — it's about delivering quality at speed. Managers must ask themselves if they are confident in the quality of the applications being delivered by their teams. Continuous quality is a must for every organization to efficiently reduce the risk of costly operational outages and to accelerate time-to-market. A critical element to reaching your quality targets is a scalable and maintainable automated testing strategy. When automated tests can be easily created and maintained, your team can focus on the overall quality of the application and verify the use cases, rather than the test scripts themselves. Parasoft solutions leverage artificial intelligence (AI) to enable rapid test creation, self-healing, smart test execution, and other capabilities that streamline your test automation workflows. A leader in the Forrester Wave: Continuous Functional Test Automation Suites 2020, Parasoft provides a complete and integrated quality suite. From deep code analysis for security and reliability, through unit, API, and UI test automation, to performance testing and service virtualization, which enable verification of nonfunctional business requirements, Parasoft helps you build quality into your software development process. "Parasoft's continuous testing shines in API testing, service virtualization and integration testing, and the combined automation context." — The Forrester Wave: Continuous Functional Test Automation Suites 2020.
According to the report, if you are “looking for a genuine partner in testing, with strong and long-living roots in the testing space and complex technical systems to test, [you] should take a serious look at Parasoft.” Learn how Parasoft helps increase confidence and accelerate delivery of reliable, secure, and compliant software. www.parasoft.com z
it going to satisfy the use case in six months' time, when you start expanding out to additional use cases or additional applications in your organization?" Lambert said. Once the tests are created, organizations then have to consider how to maintain their tests. "Say I get up and running and everything starts rolling great. And then the next sprint starts, and that next sprint is not actually only introducing new functionality. It's actually making changes to existing functionality. So my tests need to be maintained along with the underlying code and capabilities of the underlying application," Lambert said. This is where testing functionalities such as self-healing come in, to make sure that everything doesn't collapse in the middle of a sprint. This functionality stops the continuous integration process from failing, and then also gives users ways of easily refactoring existing test cases so that they don't have to throw them away and start again. "As I'm moving further through my development life cycle, my number of tests grows, and this is where test execution becomes critical. So you have to start looking at your test suites and saying, okay, what capabilities are available for me that can optimize my test execution to focus on the key business risks and optimize my test suites. This is so that I can get rapid feedback inside of a sprint and can continue accelerated delivery from sprint to sprint," said Lambert. Gartner's Herschmann explained that while test automation solves one problem, it can create another if not utilized properly. "I'm doing everything manually and I'm using automation now to accelerate that. Well, that solves my problem of not being able to run enough tests," Herschmann said. "The new problem that I've potentially created now is that with all the tests that I run, I can no longer actually look at all of the test results and make sense of what I'm seeing here. So that's why the test insights part, as an example, is now becoming the focus.
I need something that helps me to do this in an automated fashion so that the result is that now I’m notified of the specific instances of where a test has failed or the patterns of where they have failed.” z
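The test-insights idea Herschmann describes can be pictured as simple failure clustering: group failed tests by their error signature so recurring patterns surface without anyone reading every result. A minimal illustration with invented result records; commercial tools use far richer signals than the raw error string.

```python
# Sketch of the "test insights" idea: instead of reading every result by hand,
# group failures by their error signature so recurring patterns surface
# automatically. The result records below are invented for illustration.

from collections import defaultdict

results = [
    {"test": "checkout_flow", "status": "fail", "error": "TimeoutError: page load"},
    {"test": "login_basic", "status": "pass", "error": None},
    {"test": "search_results", "status": "fail", "error": "TimeoutError: page load"},
    {"test": "profile_update", "status": "fail", "error": "AssertionError: 404 != 200"},
]

def failure_patterns(results: list) -> dict:
    """Map each error signature to the tests that failed with it."""
    patterns = defaultdict(list)
    for r in results:
        if r["status"] == "fail":
            patterns[r["error"]].append(r["test"])
    return dict(patterns)

for error, tests in failure_patterns(results).items():
    print(f"{error}: {len(tests)} test(s) -> {tests}")
```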
A guide to automated testing tools

FEATURED PROVIDERS

• Parasoft: Parasoft helps organizations continuously deliver quality software with its market-proven, integrated suite of automated software testing tools. Supporting the embedded, enterprise, and IoT markets, Parasoft's technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. Bringing all this together, Parasoft's award-winning reporting and analytics dashboard delivers a centralized view of quality, enabling organizations to deliver with confidence and succeed in today's most strategic ecosystems and development initiatives — cybersecure, safety-critical, agile, DevOps, and continuous testing.
• testRigor: testRigor helps organizations reduce time spent on test maintenance by 200 times, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of "plain English" language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user's perspective. People creating tests on its system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy its analytics library in production to make systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

• Applitools: Applitools is built to test all the elements that appear on a screen with just one line of code. Using Visual AI, you can automatically verify that your web or mobile app functions and appears correctly across all devices, all browsers and all screen sizes.
• Eggplant (acquired by Keysight Technologies): Eggplant Digital Automation Intelligence (DAI) enables automating up to 80% of activities, including test-case design, test execution, and results analysis. This allows teams to rapidly accelerate testing and integrate with DevOps at speed.
• HPE Software: HPE's automated testing solutions simplify software testing within fast-moving Agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today's modern applications and hybrid infrastructures.
• IBM: Quality is essential, and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle.
• Micro Focus: AI-powered intelligent test automation reduces functional test creation time and maintenance while boosting test coverage and resiliency. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.
• Microsoft: Visual Studio helps developers create, manage, and run unit tests by offering the Microsoft unit test framework or one of several third-party and open-source frameworks.
• Mobile Labs (acquired by Kobiton): Mobile Labs remains the leading supplier of in-house mobile device clouds that connect remote, shared devices to Global 2000 mobile web, gaming, and app engineering teams. Its patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides "instant on" Appium test automation.
• NowSecure: Through the industry's most advanced static, dynamic, behavioral and interactive mobile app security testing on real Android and iOS devices, NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps.
• Orasi: Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, continuous delivery, monitoring, and mobile testing technology.
• Perfecto: Users can pair their favorite frameworks with Perfecto to automate advanced testing capabilities, like GPS, device conditions, audio injection, and more.
• ProdPerfect: ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production, and removes the QA burden that consumes massive engineering resources. ProdPerfect was founded in January 2018 by startup veterans Dan Widing (CEO), Erik Fogg (CRO), and Wilson Funkhouser (Head of Data Science).
• Progress: Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production.
• SmartBear: With powerful test planning, test creation, test data management, test execution, and test environment solutions, SmartBear is paving the way for teams to deliver automated quality at both the UI and API layer.
• Synopsys: A powerful and highly configurable test automation flow provides seamless integration of all Synopsys TestMAX capabilities.
• Tricentis: Tricentis Tosca provides a unique model-based test automation and test case design approach to functional test automation, encompassing risk-based testing, test data management and provisioning, service virtualization, API testing and more. z
SD Times
December 2020
www.sdtimes.com
INDUSTRY SPOTLIGHT
testRigor helps to convert manual testers to QA automation engineers

Testing is a crucial piece of the software life cycle, but QA teams often can’t produce enough test coverage quickly enough because they are bogged down by test maintenance. Test maintenance often takes more than 50% of a QA team’s time. testRigor, a plain English-based testing system, is easing the pain points of QA teams by reducing grunt work and overhead, and accelerating the speed of delivery with its test maintenance and test generation solution. testRigor was designed to autonomously generate tests and dramatically reduce the need to maintain those tests. “Our tool is a functional end-to-end testing tool for web and mobile designed to automate away the work of manual testers,” said Artem Golubev, CEO of testRigor.
Test maintenance doesn’t have to be time-consuming

When the company set out in 2015, it had a mission to help customers expand their test coverage and generate thousands of tests, but it quickly found that test maintenance was the number-one problem it needed to solve. “We realized we needed to help our customers maintain our tests, otherwise those tests would be completely useless and thrown away,” said Golubev. What the company saw was QA teams spending man-years building out hundreds of tests and then being completely bogged down maintaining the code. If there was a major change to their solution, such as changing the checkout process from just clicking “buy” to a flow involving clicking “add to cart” and “Cart,” then a majority of the tests would fail, and the company would need to invest more man-months adapting thousands of tests to the new flow — only to resort to manual testing in the meantime.

Content provided by SD Times and testRigor
“You have a huge test suite with tons of useless code that almost all fails. Do you want to spend years fixing it or do you want to try to figure something else out? So people figure something else, and basically get back to manual testing because that is the only way they can fix them,” Golubev said. This results in QA teams having to use valuable time to maintain regression tests, or the business decides it’s not worth the time and falls back to manual regression. In fact, more than 50% of QA engineering time is spent on test maintenance. As a result, according to Golubev, about 70% of functionality is still tested manually. “It’s as if you’d need to stop and fix your car every mile on the road,” Golubev added. “That is a big problem for companies. We want to help companies solve that in a meaningful way,” Golubev continued. According to Golubev, testRigor can help manual QA testers build tests 20 times faster than with Selenium and reduce test maintenance by 200 times compared with Selenium. On top of the plain English support, testRigor provides a browser plugin that can be used to record tests while testers are performing their manual regression tests, which further speeds up test creation.
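To make the “plain English” idea above concrete, here is a deliberately tiny, hypothetical sketch — not testRigor’s actual engine, and every name in it is invented — of how a step like click "Add to Cart" can be resolved against what the end user sees on screen, rather than against a brittle XPath:

```python
# Illustrative toy only: interpret a plain-English step against a fake page
# model by matching user-visible text, the way a person would find a button.
from dataclasses import dataclass, field
import re

@dataclass
class Element:
    text: str           # the label a user sees on screen
    clicked: bool = False

@dataclass
class Page:
    elements: list = field(default_factory=list)

    def find_by_text(self, text):
        # End-user perspective: match visible labels, case-insensitively
        for el in self.elements:
            if el.text.lower() == text.lower():
                return el
        raise LookupError(f'no element labeled "{text}"')

def run_step(page, step):
    """Interpret a single plain-English step of the form: click "Label"."""
    m = re.match(r'click "(.+)"', step.strip())
    if not m:
        raise ValueError(f"unsupported step: {step}")
    page.find_by_text(m.group(1)).clicked = True

page = Page([Element("Add to Cart"), Element("Cart")])
run_step(page, 'click "Add to Cart"')
print(page.elements[0].clicked)  # → True
```

Because the step names a visible label instead of a DOM path, a redesign that moves the button elsewhere in the markup would not break it — which is the point of the maintenance-reduction claim in the article.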
Test generation that ensures you’re covered

Another problem QA engineering teams face is making sure they have created enough tests to cover all the functionality within a solution. Test creation can be hard, expensive and inefficient if not done properly, Golubev explained. With testRigor, customers can not only build up to 1,000 tests in anywhere from two weeks to two months, but the platform also uses artificial intelligence to constantly learn from end users and create tests based on the most frequently used end-to-end flows from production. Because it is not dependent on white-box information like XPaths, tests are also more stable and adaptable. “Tests are automatically created based on mirroring how your end-users are using your application in your production, plus tests which are produced to map your most important functionality out of the box. It is achieved by using our JavaScript library in your production environment to capture metadata around the functionality & flows your users are taking,” according to testRigor’s website.

testRigor is a plain English-based testing system for test generation and test maintenance.

This also helps ensure that the tests that are generated are actually helpful and reflect the most important areas, eliminating the question of what needs to be covered. According to Golubev, legacy test approaches usually struggle to provide more than 30% test coverage, while testRigor provides more than 90% click-through coverage out of the box. The solution leverages the same low-code, low-maintenance, plain English-based platform that customers can use to build their tests manually. Its JavaScript library and browser plugin ensure tests cover the most frequent and business-critical functionality and flows, and tests are run in parallel so prioritization doesn’t become an issue. Additionally, because testRigor uses plain English-based tests, users can easily see what happened, and explore the tree of paths to find out which paths are covered, without having any coding experience. All the tests can be run on a branch, test or production environment within minutes by running tests in parallel.
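The idea of generating tests from the most frequently used production flows can be sketched in a few lines. This is a hedged illustration only — the log format, function names, and step wording are all invented, not testRigor’s API — but it shows how ranking recorded click-paths by frequency yields candidate plain-English tests:

```python
# Toy sketch: mine recorded user sessions for the most common end-to-end
# flows and emit plain-English test steps for each. All names are invented.
from collections import Counter

def top_flows(session_logs, k=2):
    """Rank complete click-paths (sequences of screen labels) by frequency."""
    counts = Counter(map(tuple, session_logs))
    return [flow for flow, _ in counts.most_common(k)]

def to_plain_english(flow):
    # One candidate test: click each label the users clicked, in order
    return [f'click "{label}"' for label in flow]

# Hypothetical production log: each entry is one user session's click-path
logs = [
    ["Add to Cart", "Cart", "Checkout"],
    ["Add to Cart", "Cart", "Checkout"],
    ["Search", "Product Page"],
    ["Add to Cart", "Cart", "Checkout"],
    ["Search", "Product Page"],
]
for flow in top_flows(logs):
    print(to_plain_english(flow))
```

Prioritizing tests this way is what lets generated suites mirror what users actually do, rather than what the team guesses needs covering.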
“It’s like if you had to pedal your bike to go faster. It is a known problem for you unless you invent an engine. We are that engine for pedaling your bicycle,” Golubev added.
A new era of testing

With the test maintenance, creation and automation issues solved, testRigor is hoping to usher in a new era of intelligent testing. For instance, many automated testers look to the testing pyramid as a model for how to create a balanced testing strategy. Typically, at the top of the pyramid you have end-to-end testing, followed by integration testing and then unit testing at the base. The widespread belief is that you can’t have a lot of end-to-end tests because they are flaky, slow to create and a pain to maintain — which, according to Golubev, is why end-to-end testing occupies only a small portion of the pyramid. testRigor wants to take the testing pyramid and mold it into a testing hourglass, expanding the end-to-end testing portion. Golubev explained that testRigor has solved the stability and maintainability aspects of end-to-end testing, as well as the ability to generate tests based on actual end-user behavior in production, paving the way for more end-to-end testing. “We are allowing customers to have a lot of end-to-end tests because we believe this is exactly how systems should be tested, otherwise you don’t get actual proof that stuff actually works on behalf of your end users,” Golubev said. “In 2020 and beyond, it is paramount to be able to move faster. You can’t resort back to manual testing anymore. People that move faster end up not only with less issues, but also have positive business impacts.”

testRigor’s four principles for any testing system in 2020 (and beyond) are:
• No setup required, eliminating painful onboarding and test environment creation
• The ability to generate tests when possible
• Never failing without a good reason — flakiness is not a good reason for test failure
• The ability to maintain tests easily

The testRigor system is particularly good at acceptance-level, functional, UI-level regression tests. It is complementary to, not a replacement for, unit and API tests, and supports calling APIs and API testing. Being able to perform exploratory tests and regression testing faster can be a huge differentiation factor from your competitors, Golubev explained. “Rather than dedicating testing resources to this type of tedious, repetitive & time-consuming work, you can offload that testing to testRigor and redeploy your testers to your core testing needs,” the company wrote. Other features include SMS, phone call, audio, email, and downloaded-file testing. It can also provide tests for systems whose underlying code teams don’t control, such as Salesforce, MS Dynamics, SAP implementations and RPA scenarios. Learn more at testrigor.ai
Guest View BY ANDREW LAU
Velocity? Who gives a flying Fortran?

Andrew Lau is CEO and co-founder at Jellyfish.
It’s time to get honest with ourselves about velocity. Or is it Velocity? Capital-V “Velocity” has been trumpeted for nearly two decades as one of, if not THE, most important ways to measure success on the engineering team [it’s also been criticized as dangerous]. The idea here is that by tracking Velocity, we can better understand how much work a team is able to complete in a sprint, and thus we can plan better, see and measure improvement in processes, and ultimately deliver code faster. I want to be clear: innovations in process and the way we work because of Agile have been tremendous. I’m not here to discredit the notion of Velocity and its usefulness on the team itself. But if you’re a VP of Engineering who proudly shows improving Velocity and related metrics to your CEO, watch out! Your CEO probably doesn’t understand what she is signing up for and certainly doesn’t care about actual story points or Velocity, so showing these to her is only decreasing her trust in your ability to do the job. How many times have I spoken to VPs of Engineering who are struggling to make their CEOs understand or care? They present updates to their executive teams monthly, and each month are asked for something different. At last, discouraged, they call me up and I can hear the exhaustion in their voices when they ask, “What am I supposed to show my CEO?” “What do you show her currently?” is my follow-up question. And the answer is usually something like, “Story points, velocity change, and…” before I cut them off. No wonder their CEO doesn’t care. These metrics have no meaning to the job a CEO does, and worse, they are misleading. Don’t get me wrong: story points and various Agile metrics are great innovations and tools for engineers and teams to help estimate, plan, and communicate amongst themselves [don’t get me started on the days of waterfall]. But boy, have we screwed up choosing the word Velocity. It simply means a different thing to business leaders.
CEOs hear velocity when we say Velocity. Consider the following imaginary situation: VPE asks CEO if she wants Velocity, to which her
response is a definitive “Yes! 100% I want everything up and to the right. Faster, better, all of the above!” “Great!” VPE responds, “You can have it! In fact, let’s go crazy. Let’s give bonuses based off of it!” Six months later, VPE has exciting news for CEO: “Look how well the engineering team is doing. Look how great our Velocity is!” CEO furrows her brow, “Wait, what?! I still don’t have Widget X and Feature Y that the sales team is screaming for, and no one works past 5:30pm! And now I have to pay out these bonuses?!” Okay, fine. Maybe that’s a little bit dramatic. But the point holds. While keeping track of Velocity as an engineering leader is helpful for understanding the health of a specific team and how the Agile process is working, it is utterly meaningless to the business. As VPE, your first job is to figure out what your job is. Your CEO thinks your job is to get things shipped. Quickly. Without bugs. So that her sales team can sell more stuff and existing customers will be happier. Secondarily, it is to hire and grow a team for the sake of shipping more things. Whether you totally agree or not, let’s start here. What you communicate with your CEO needs to speak to her expectations of you first. After you successfully communicate that, only then might you be able to get her and the rest of the executive team to engage on the inner workings of your process. This is not to say that all Agile metrics are useless to the CEO, but remember that, for this purpose, they are only mechanisms to help your CEO recognize how she might achieve the company’s goals. VPEs, wake up! You’re an executive. Your job is to understand the business, and then project that into your own functional expertise. The CEO wants this, and hired you to do this, even if she doesn’t directly ask you for it. So, the next time you find yourself wondering, like so many of us have done, “What the heck am I supposed to show my CEO,” try to keep this in mind.
You might think about giving a read on the state of your deliverables, an understanding of what key initiatives or types of work are consuming your team’s efforts, or an update on ongoing efforts to improve quality for your customers. Whatever you decide on, instead of showing them Velocity, think about the goals that are important to the business — that are important to the CEO — and work from there.
Analyst View BY JASON ENGLISH
The great digital scattering

Tough year. COVID and an uncertain economic outlook have made these very trying times for almost all people, including those lucky enough to be able to work from home as part of the digital workforce that powers our business infrastructure. Looking back, you could see signs of a great digital scattering coming for many software development-related businesses — not only the ones catering to restaurants and service-sector jobs. Companies without the digital backbone to shift to remote development and operations quickly found themselves in hot water. If constant planning meetings and fire-fighting war rooms are the norm, what happens when all of that is replaced by a miniature Hollywood Squares matrix of heads and screen cams? Companies accounted for this change, for a few weeks, with grace and understanding that not everyone’s home makes a good office. Bad connections, barking dogs, interrupting kids. Employees with WFH policies had a head start here, by making such arrangements ahead of time. Some companies even vacated office space in stride, never looking back. But the remote scattering of workers takes a toll. Do innovation and quality start to suffer — even as many IT staff commonly report working more hours to meet increased demand, unburdened by commuting? Knowledge workers become more distracted, or frustrated, or unproductive working at home, leading to a reconsideration of how to either instill some sense of culture online, or safely return some work to the office. Engineers in a product-oriented organization appreciate self-determination, so long as they are contributing valued work. Shared platforms for remotely governing agile software delivery, from requirements to repositories to release trains and issue resolution, are improving every day.
The very environments from which we deliver software keep scattering as well, as the old three-tiered application stack in the datacenter gives way to SaaS-based platforms, cloud infrastructure and microservices, calling APIs and running workloads in ephemeral containers and serverless architectures. Were we meant to move the application estate off-premises for good, along with the teams? The outlook for permanent departure of IT resources from shared offices and datacenters isn’t
100% rosy, even for the software-driven company. First, you have the need for (groan) IT Governance, and (groan) Compliance to standards, and (groan) Legacy Applications to maintain — you know, everything we’ve already built, the stuff that keeps the business running. This forces creativity about orchestrating both cloud and on-premises application infrastructure as a Hybrid IT application estate. Zeroing in on the development shop itself, we start to see how the in-person Agile Scrum or daily standup meeting packs a lot of collaborative value, as creative solutions often arise from live interaction, and intangibles like camaraderie and morale naturally pull DevOps teams together. A few months in, the scattering hurts. Many companies are laying off IT staff due to business losses, or competitive takeovers. Other companies are seeing spikes in demand, and cannot recruit fast enough, let alone properly mentor the new staff coming into this decentralized workplace. A full-stack developer friend recently joined a mid-sized company with several loosely interconnected teams. They were stunned to find that while DevOps practices and distributed cloud architecture were in the job description, very little ‘tribal knowledge’ could be conveyed. Teams that worked together in the pre-COVID era take that situational awareness for granted; newer employees come in with none of it. What do I need access to? What review gets my code on the release train? Who supports my code on Day 2 once it is in production? There’s no replacing ideas that come up amidst the scuttlebutt at lunch, or a tap on the shoulder — “can you look at my screen and tell me what’s going on here?” It’s OK to ask clueless questions and pose crazy hypotheses in person, with other humans. In text and email format, those interchanges are impersonal, and documented. We don’t know when a new normal will be realized for software development.
Until then, let’s embrace the need to solve this digital scattering with human empathy, and an understanding that we all want to deliver better software.
Jason English (@bluefug) is a Principal Analyst and CMO of Intellyx.
SD Times News on Monday — The latest news, news analysis and commentary delivered to your inbox!
• Reports on the newest technologies affecting enterprise developers
• Insights into the practices and innovations reshaping software development
• News from software providers, industry consortia, open source projects and more
Read SD Times News on Monday to keep up with everything happening in the software development industry. SUBSCRIBE TODAY!