SD Times February 2021


FEBRUARY 2021 • VOL. 2, ISSUE 44 • $9.95 • www.sdtimes.com




www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com Jakub Lewkowicz jlwekowicz@d2emerge.com ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com


CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz, George Tillmann


CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx


CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com


LIST SERVICES Jessica Carroll jcarroll@d2emerge.com


REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com

ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com


SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com


PRESIDENT & CEO David Lyman

D2 EMERGE LLC www.d2emerge.com

CHIEF OPERATING OFFICER David Rubinstein



Contents

VOLUME 2, ISSUE 44 • FEBRUARY 2021

NEWS
4    News Watch
13   Enable BizOps across the enterprise
15   The modern risks of open-source code
16   Boring But Deadly: The most uninteresting reason your project might fail

FEATURES
6    What's all the fuss about Rust?
10   UI testing a key part of delivering good user experiences
20   An Open Testing Platform

BUYERS GUIDE
25   Data without integration is just data

COLUMNS
30   GUEST VIEW by Adam Frank: The evolution of DevOps to DevAI
31   INDUSTRY WATCH by David Rubinstein: Flash is gone, but its legacy lives on

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2021 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

The State of JavaScript 2020
The latest State of JavaScript report revealed that while React and Vue are still the most popular JavaScript frameworks, the framework Svelte is starting to establish itself as a contender. Svelte has a user base that is 89% satisfied with it. Sixty-six percent of survey respondents were interested in using it, while only 15% currently use it, which is a big increase from 2019 when only 8% of respondents used Svelte. In comparison, React was used by 80% of respondents this year, Angular by 56%, and Vue.js by 49%. In addition, the report found the JavaScript frameworks Angular and Ember have seen major drop-offs in satisfaction over the years.

Python named TIOBE Index's language of 2020
Python was once again named the TIOBE Index programming language of the year. This is the fourth time it has been declared the programming language of the year. TIOBE Index awards the programming language of the year to the language that gained the most popularity over the year. Over the last year, Python has seen a 2.01% uptick, followed by C++ at 1.99%, C at 1.66%, Groovy at 1.23% and R at 1.10%. According to TIOBE, the main factors for Python's popularity include ease of learning and high productivity. "These two qualities are key in a world that is craving for more developers in all kinds of fields. Python already tested the second position some months ago and it will for sure swap places with Java permanently soon," TIOBE explained. "Will Python also beat C? Well, C has still one trump card to play: its performance, and this will remain the case for some time to come. So I guess it will certainly take some years for Python to become the new number 1 in the TIOBE index."

Swimm to help developers understand codebases
Swimm, a developer onboarding and team collaboration tool provider, announced that it raised $5.7 million in seed funding and also launched its platform for sharing information about codebases. The solution is designed to help developers quickly understand and navigate codebases. According to the company, understanding other people's code is a common challenge that comes up in organizations' onboarding, training, remote work, and project or context switching initiatives. "We immediately saw how much time, resources and effort companies had to invest into training developers — even experienced ones," said Oren Toledano, co-founder and CEO of Swimm. "In most companies, developers were just thrown into the code pool at the deep end, and it became a question of sink or swim. It's an expensive learning curve."

Rookout's X-ray for third-party dependencies
Debugging platform Rookout has announced a new X-ray vision feature that will enable developers to look into third-party dependencies and check for bugs within them. The company explained that debugging third-party code, whether from a vendor, an open-source project, or an ex-employee, can present a number of challenges. According to Rookout, developers are used to viewing these dependencies as black boxes that they have no visibility into. This challenge is amplified when developers are working remotely and can no longer go over to a colleague's desk to troubleshoot.

Lenovo unveils smart glasses for the enterprise
Lenovo unveiled at CES new AR smart glasses designed to change the way employees interact with their workspaces, whether they're working remotely or from the office. The company expects the ThinkReality A3 lightweight AR smart glasses to be available later this year. "As increasingly distributed workforces and hybrid work models become the reality of a new normal, small and large businesses around the world are looking to adopt new technologies for smart collaboration, increased efficiency, and lower downtimes," Lenovo wrote in its announcement. The company will also provide a PC edition for virtual monitors. The ThinkReality A3 PC Edition will enable users to see large monitors in their field of view and to use Windows software tools and apps.

Quest Software acquires erwin
Global systems management, data protection and security software company Quest Software has announced it is acquiring erwin for its data modeling, data governance, and business process modeling solutions. According to Quest, this acquisition will help users to reap the benefits from all of their data as well as drive significant data initiatives. "Data-centric projects are rapidly accelerating across the enterprise. Together, Quest and erwin will continue to deliver database tools aimed at helping companies know their data, alleviating concerns about where and how their data is used," said Patrick Nichols, the CEO of Quest Software. "Quest's focus on 'Where next meets now' drives every new capability we add to our products and each new investment." Erwin will add new capabilities that focus on driving significant data initiatives and the deployment of modern applications all while remaining compliant, according to Quest Software.



AWS unveils chaos engineering tool
AWS is enabling teams to address application weaknesses with the introduction of the AWS Fault Injection Simulator. The simulator is a chaos engineering tool expected to be generally available this year. According to the company, the new offering will come packed with pre-built templates for creating the desired disruptions, whether that's for server latency or database errors. It also contains controls and guardrails such as automatically rolling back or stopping the experiment if certain conditions are met. Then, teams can quickly roll back to the pre-experiment state. Teams will also have access to a range of fine-grained controls during the experiments to gradually or simultaneously impair how different resources perform in a production environment as it is scaled up.

Angular developers want better runtime and documentation
Runtime performance and better documentation are top priorities for Angular developers. The Angular team recently reported its 2020 Developer Survey Results to find out how developers are doing with the mobile and desktop application framework, and where they can improve. When looking at how developers are dealing with updates, more than half the respondents stated they had a smooth migration experience. Thirty-eight percent said the upgrade process was neutral and 8% found it hard. The survey ran after the release of

Angular 9, which had the most significant changes to the framework since 2016. Since the survey, the team has released Angular 10 and Angular 11.

GCC front-end for Rust gets new funding
Open Source Security, Inc. has announced new funding for the GCC front-end for Rust project. The funding will go towards full-time and public development efforts. GCC front-end for Rust is an open-source project designed to provide an alternative Rust compiler for GCC. "The origin of this project was a community effort several years ago where Rust was still at version 0.9; the language was subject to so much change that it became difficult for a community effort to play catch-up. Now that the language is stable, it is an excellent time to create alternative compilers. The developers of the project are keen 'Rustaceans' with a desire to give back to the Rust community and to learn what GCC is capable of when it comes to a modern language," the team wrote on its GitHub page.

CData acquires forceAmp and DBAmp solution
CData has acquired forceAmp and its flagship product DBAmp. According to the company, this will strengthen its Salesforce data connectivity and integration capabilities. "For more than 15 years, DBAmp has served as a reliable, trustworthy Salesforce-SQL Server integration solution, used in enterprises worldwide, including by most of the Fortune 500. Designed as an intuitive Salesforce integration solution, DBAmp creates a bi-directional link between SQL Server and Salesforce that allows customers to access Salesforce data directly from SQL Server," the company wrote in its announcement. With forceAmp, CData will continue its SQL-centric approach to data connectivity, provide an alternate technical approach to integration, and extend its presence in the Salesforce ecosystem. In addition, the forceAmp team will join CData and continue to maintain and extend forceAmp's data connectivity and integration technologies. forceAmp's founder Bill Emerson will become senior vice president of product strategy at CData.

People on the move

• Bill Staples has been promoted to president and chief product officer of New Relic, an observability company. Staples will report to founder and CEO Lew Cirne, and will be responsible for the oversight of strategy, corporate development, marketing and technical support in addition to his existing product management, engineering and design role. The news comes after Michael Christenson announced he will be taking a step back from his day-to-day operating role as president and chief operating officer. Christenson will continue to serve as the company's chief operating officer through the end of the fiscal year, and serve on the company's board of directors.

• Greg Nicastro is joining SmartBear's team as EVP/GM of products and technology. He will bring with him more than 30 years of experience in product strategy and development, and over 20 years of executive experience. Previously, he was the chief product officer at CloudHealth Technology, and held the vice president of product development position at Veracode.

• IBM has appointed Martin Schroeter to lead its new managed infrastructure company NewCo. NewCo will focus on managing and modernizing IT infrastructure. Previously, Schroeter served as IBM's senior vice president of global markets, where he oversaw global sales, client relationships and satisfaction, and worldwide geographic operations. He also held the role of chief financial officer for the company, along with other executive roles. He joined IBM in 1992.

• Stack Overflow has announced Debbie Shotwell as its new chief people officer. Shotwell will be responsible for attracting and retaining the best talent and creating an employee experience that's dedicated to collaboration, diversity, inclusion and belonging. Previously, she was the chief people officer at Saba Software and held leadership positions across BigCommerce, Good Technology, Pacific Pulmonary Services, Taleo and PeopleSoft.


What's all the fuss about Rust?

BY CHRISTINA CARDOZA

The Rust systems programming language is still in its infancy, having only released the first stable version of the language in 2015, but that hasn't stopped it from rising to the top of developer charts. Last year, it broke onto the TIOBE Index Top 20 list of the most popular programming languages for the first time, and it continues to win over more developers. It has been voted the most loved programming language on the Stack Overflow Developer Survey for the last five years. Of the nearly 65,000 developers surveyed, 86% of those using the language said they are interested in continuing to develop with it.

What is all the fuss about?
Developers who have never used Rust might be wondering what all the hype is about. According to Jake Goulding, Stack Overflow's top Rust contributor, "the short answer is that Rust solves pain points present in many other languages, providing a solid step forward

with a limited number of downsides.” Rust was first started as a personal project from Mozilla developer Graydon Hoare in 2006. In 2009, Mozilla started sponsoring the project and the first introduction to the language happened in 2010. The initial selling point was the promise of memory safety, according to Armin Ronacher, director of engineering at Sentry, an application monitoring and error tracking company. “They didn’t compromise on the core promise of memory safety,” he said. “If you think about all the other languages developed over the last couple of years in that space, there is really no contender. There hasn’t been a language that was ever designed to be as good as C and C++. Rust didn’t compromise as being a replacement for those languages as other languages have.” Jim Blandy, free software developer and author of the Programming Rust book, explained that while typically almost all programming is done in a high-level language like JavaScript, Java, Python or TypeScript, when

developers need to program something that involves memory usage or the computer processors, high-level languages don't work. This is because those languages use a garbage collector to attempt to reclaim memory that is no longer in use. Garbage collection can have a negative impact on resource consumption, performance and program execution. There are other systems programming languages like C and C++ that don't use garbage collection, but they are difficult to program with. According to Blandy, C and C++ enforce rules that make sense in theory, but don't work in practice. "It's possible to have rules that are easy to understand, but impossible to follow," he explained. "It's like if someone put a chess game in front of you and told you to win it, you know the rules and you'll do your best, but you can't just say okay I am going to win it. C and C++ have been putting programmers into that exact situation." Rust is designed to accomplish memory safety without the need for a traditional garbage collector. "We agree



in this industry that memory safety is necessary because the moment you start to touch bad memory, you are implicitly opening yourself up to a bunch of vulnerabilities,” said Ronacher. “If you look into C and C++ projects, most security vulnerabilities have been a result of a program not dealing with memory properly.” Rust is able to ensure memory safety through its ownership and borrowing system, which includes a set of rules that the compiler checks to make sure memory usage is safe, the program is free of memory errors when it is compiled, and features don’t slow down the program. “All programs have to manage the

way they use a computer’s memory while running. Some languages have garbage collection that constantly looks for no-longer-used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks at compile time,” the Rust team wrote in its documentation.
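To make the ownership rules concrete, here is a minimal sketch; it is not taken from the article or from the Rust documentation, and the variable names are invented for this example. The commented-out line is the kind of mistake the compiler rejects before the program ever runs (roughly, error E0382, a use of a moved value).

```rust
fn main() {
    // A String owns a heap allocation; exactly one binding owns it at a time.
    let original = String::from("memory safety without a garbage collector");

    // Ownership moves from `original` to `adopted`: no copy, no reference count.
    let adopted = original;

    // Uncommenting the next line makes the program fail to compile, because
    // `original` no longer owns the data. The check happens at compile time,
    // so there is no runtime cost and no garbage collector involved.
    // println!("{}", original);

    println!("{}", adopted);
} // `adopted` goes out of scope here and the allocation is freed exactly once.
```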

Beyond memory safety
Memory safety is why developers come to the language, but they stay for the package manager and Cargo ecosystem, according to Sentry's Ronacher.


according to Sentry’s Ronacher. According to the Rust team, developers love Rust’s build system and package manager because it is able to handle a lot of tasks such as building code, downloading libraries, and building libraries. “The fact that you can build your project with all the dependencies without having to pull in another tool is what makes people happy. It comes out of the box full-featured and with a good development experience,” said Ronacher. When free software developer Blandy looked into why people choose the languages they choose, the number one consideration is not the language itself, but the libraries and tools they connect to. According to Ronacher, developers enjoy working with Rust over C and C++ because it’s less prone to crashes and its tooling is much stronger. He explained that while the compiler integration for C and C++ might be good, the tooling is bad and requires external dependencies. Additionally, if developers want to reuse code someone else wrote, it’s hard to do in C and C++ because of a lack of a package manager. Ronacher also explained many C++ libraries are difficult to add to large projects because of the lack of standardized strings and common types. Rust’s packaging ecosystem makes it easier to do iterative development and doesn’t require developers to reimplement everything or make everything consistent with the codebase. “That is a huge reason why Rust is so successful, because you can actually both integrate with the existing codebases, but when you start from scratch you can put in so many dependencies much easier than you can in C and C++,” said Ronacher. As a result of the memory safety, Rust also resolves a lot of concurrency issues, making it much easier to write concurrent programs with Rust than other languages, Blandy explained. “Memory safety bugs and concurrency bugs often come down to code accessing data when it shouldn't. Rust's secret weapon is ownership, a discicontinued on page 8 >

7


06-9_SDT044.qxp_Layout 1 1/22/21 11:50 AM Page 8

8

SD Times

February 2021

www.sdtimes.com

< continued from page 7

pline for access control that systems programmers try to follow, but that Rust's compiler checks statically for you,” the Rust team wrote in a post. “For concurrency, this means you can choose from a wide variety of paradigms (message passing, shared state, lock-free, purely functional), and Rust will help you avoid common pitfalls.” Helping to avoid those pitfalls actually forces developers to write good quality code, according to Maxwell Flitton, an R&D software engineer in financial tech and author of Rust Web Programming. “When you compile, you are very confident that it is going to work and you are less likely to get bugs that creep in,” he said. Beyond the code, Rust has a strong and welcoming community around it, according to Shane Miller, the senior

engineering manager of the Rust platform at AWS, which recently announced it would sponsor the project. “Rust really focuses on providing a great experience for people. The Rust community is particularly welcoming, reaching out to those who haven’t traditionally participated in systems programming or open source.”
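The message-passing style mentioned in that post can be illustrated with a short sketch using only the standard library; it is written for illustration here, not taken from any of the projects discussed, and the worker count and messages are arbitrary. Each String is moved into the channel, so once a worker thread sends a value it can no longer touch it, which is how the ownership rules head off shared-mutation races.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Multi-producer, single-consumer channel from the standard library.
    let (sender, receiver) = mpsc::channel();

    let mut workers = Vec::new();
    for id in 0..4 {
        let tx = sender.clone(); // each thread gets its own sending handle
        workers.push(thread::spawn(move || {
            // The String is moved into the channel; this thread gives up
            // ownership of it the moment it is sent.
            tx.send(format!("result from worker {}", id)).unwrap();
        }));
    }
    drop(sender); // close the original handle so the receive loop can finish

    // The receiving side now owns each message it pulls off the channel.
    for message in receiver {
        println!("{}", message);
    }

    for worker in workers {
        worker.join().unwrap();
    }
}
```

Because the sketch depends only on the standard library, a project created with cargo new builds and runs it with cargo run, which is the out-of-the-box experience Ronacher describes.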

The limitations of Rust
Rust is both safe and manages to reach really close to what the C and C++ languages can do, but it is a more restrictive language and comes with its limitations, according to Blandy. For instance, the way Rust is able to provide security guarantees is by restricting what developers can do. "Rust takes pointers and pointers are this functional thing in every language — in Java every object is a pointer to

that object. They are ubiquitous and Rust restricts how you can use them. It basically says there are some rules. I am only going to let you use them in a certain fashion. It challenges the programmer,” said Blandy. Blandy also mentions that the debugging support in Rust is not up to the level of C++. In addition, while Rust is memory efficient, it takes way too long to compile programs. “It rivals C++ in how slow it is to compile the code. Once it is compiled, it runs really fast, but the compile times are really bad,” said Ronacher. And since it is still so new, developers might run into situations or environments that no one has worked on before. For instance, if you want to develop something for Bluetooth, in JavaScript there is a rich ecosystem, but in Rust

Top tech companies turn to Rust

After working with and incorporating the language in many of its services, Amazon announced in 2019 that it would sponsor the Rust project. The first notable product the company wrote in Rust was Firecracker, an open-source virtual machine manager. The language is also used in Lambda, EC2 and S3 Amazon services. “Since that launch we’ve found more opportunities to build high-performance customer experiences quickly and securely. Rust helps us deliver fast, robust services to AWS customers at scale, and our responsibility and investment in the Rust ecosystem and community of builders grows with our adoption,” said Shane Miller, the senior engineering manager of the Rust platform at AWS. At the end of last year, Amazon revealed it started to hire Rust contributors to ensure the language got the time and resources necessary to improve. The company also started to build a team around the Rust asynchronous runtime Tokio, and invest in developer tools, infrastructure components, interoperability and verification. “We believe Rust changes the game when it comes to writing safe systems software. Rust provides the performance and control needed to write low-level systems, while empowering soft-

ware developers to write robust, secure programs,” said Miller. In 2019, Microsoft also revealed it started to experiment with Rust after it saw a need for a safer systems programming language. “We’re using languages that, because they are quite old and come from a different era, do not provide us the ability to protect ourselves from … vulnerabilities,” Ryan Levick, cloud developer advocate at Microsoft, said in a video. “C++ is not a memory safe language and no one would really pretend that it is,” he said. Microsoft has since been moving toward using Rust over C and C++, and rewriting low-level components of the Windows codebase in Rust. “For C++ developers used to writing complex systems, using Rust as a developer is a breath of fresh air. The memory and data safety guarantees made by the compiler give the developer much greater confidence that compiling code will be correct beyond memory safety vulnerabilities. Less time is spent debugging trivial issues or frustrating race conditions. The compiler warning and error messages are extremely well written, allowing novice Rust programmers to quickly identify and resolve issues in their code,” Adam Burch, software engineer at Microsoft, explained in a post. z



there is just a barely maintained Bluetooth module, according to Ronacher. Web development has also been an area where Rust hasn't been too strong, but Flitton believes that's changing, especially as the world becomes more digital. The 2020 State of the Developer Ecosystem Report by JetBrains revealed that while Rust is mostly used for systems programming, 44% of respondents are now using it for web development. "In the beginning, Rust was mostly associated with embedded systems or robotics, but it has matured and stable frameworks have started to come out. The possibilities with web development are opening up for some core services that take a lot of traffic or require a lot of processing power," he said. Learning Rust has also been a problem in the community because a lot of developers feel too intimidated to take it on, according to Flitton. "Rust has this very nasty image of, it is going to be really hard to code. It's really fast, but going to be hard to code...but not really. It is memory safe and if you focus on a few quirks initially then you can pick it up quite quickly," he said. The problem has been that because it forces you to do good quality coding, it can be annoying for developers who picked up bad habits and don't want to change the way they code. "There is a saying that a bad workman always blames his tools. When I first started to get Rust programs compiled, I was incredibly frustrated because it kept saying you haven't thought about this. I am not compiling because this doesn't seem safe. So I was quite frustrated, but when you look at it, it is just forcing safety," said Flitton. Ronacher added that the way developers learn Rust is different from the way they learn other languages. "If you go into the mindset learning Rust the same way you approach other languages, you are going to have an uphill battle because the language will punish you in ways that you wouldn't expect," he said. However, Flitton believes Rust can help developers realize all the things they've been doing wrong and in turn make them better for it.


The Rust team reveals its focus on stability is starting to pay off

Source: Rust 2020 Survey

A key focus of improving the Rust language over the last year has been on stabilizing features, and according to the newly released Rust 2020 Survey, those efforts have paid off. Survey respondents in general felt that stability of the language has been improving. The rust-analyzer and IntelliJ Rust plugin were key projects highlighted in the survey as having happy users. According to the team, almost 75% of respondents felt that they saw "at least some improvement" in the IDE, but users of those two projects in particular said that they were especially happy. Forty-seven percent of rust-analyzer users and 40% of IntelliJ users noted "a lot of improvement." In addition, the number of users that rely on a nightly compiler is continuing to drop. This year only 28% of Rust developers cited that they rely on one part of the time, compared to 30.5% last year. Those developers that use a nightly compiler cite being able to use the Rocket web framework as a key reason. The next largest reason for using a nightly compiler was for const generics, but the team noted that as a minimal version of const generics is almost stable, there will soon be less reliance on using a nightly compiler for this. Eighty-three percent of the Rust 2020 Survey respondents stated that they used Rust, which, according to the team, is an all-time high for the yearly survey. Seven percent of the respondents said that they used to use Rust but no

longer do. When asked why they didn't use Rust anymore, 35% said that they hadn't learned it yet, 34% said their company wasn't using the language, and 19% said switching to Rust would slow them down in comparison to their current language of choice. The future looks promising, however, with almost half of respondents with knowledge on hiring decisions saying that their employers planned to hire Rust developers within the next year. Rust continues to be more common in production at companies than in hobbyist use. Forty percent of respondents that work in software say that they use Rust at work. The amount of Rust in production was 10,000 lines of code or more at 44% of respondents' workplaces, which is up from 34% last year. The improvements that respondents want to see in the language include C++ interop, better ways of learning Rust, reducing compile times, GUI library support, and increasing community involvement. Specifically, users want to improve the Rust community for non-English speakers and increase the number of large corporate sponsors in the community, which will make it easier for developers to make the case to use Rust at work. The team surveyed 8,323 developers for the Rust 2020 Survey, which was available in 14 different languages, with the majority (75%) of respondents being English speakers. —Jenna Sargent


UI testing a key part of delivering good user experiences

User experience has always been an important factor in the success of an application, but in an increasingly digital-only world, its importance is only increasing. If all goes perfectly, the user doesn't think about what's going on behind the scenes. When a button does what it's supposed to when it's clicked on, the user goes about their day, possibly even completing a transaction on the site. But when that button takes several seconds to do anything — or, worst-case scenario, never actually does anything — the user will be frustrated, and perhaps jump to a competing site. According to Eran Bachar, senior product management lead at software company Micro Focus, UI testing is the part of the process that tests for usability, which encompasses making sure that customers who are going to use the application understand the different flows. Another element of UI testing, especially in the web and mobile space, is testing user experience. "If you're waiting more than two seconds for a specific operation to be completed, that can be a problem, definitely. The moment when you click on a button, when you click on an icon, when you click on any element, you should have a very, very fast responsive kind of an action," said Bachar. Clinton Sprauve, director of product marketing at testing company Tricentis added: "The goal of UI testing is to understand all aspects from a UI perspective on the business process side, but also on the technical side to make sure there aren't any things that are wrong from a functional testing point." UI testing may be the tip of Mike Cohn's popular Testing Pyramid — which is a triangle that describes how long to spend doing each phase of testing — but that doesn't mean UI testing can be ignored and just slapped onto the end of a testing cycle.

[Figure: Mike Cohn's Testing Pyramid. Unit Tests form the base (faster, less integration), Service Tests the middle, and UI Tests the top (slower, more integration).]

"I believe UI testing is too often an afterthought during product development, and is often placed much too late in the development life cycle," said Jon Edvald, CEO and co-founder of automation platform Garden. "It can be a serious drag on productivity and release cycles to do UI testing — and in fact any type of testing — too late in the process. The later an issue is discovered, the more costly the feedback loop becomes, with all the context switching involved and the terribly slow iteration speeds."

Google's Web Vitals
One measurement that can be used to quantitatively measure user experience is Google's Web Vitals. According to Guillermo Rauch, CEO of front-end web development company Vercel, these Web Vitals were the first metrics created that focused entirely on user experience. One of the Web Vitals is Largest Contentful Paint (LCP), which measures "how fast the meaningful part of the page took to load," he explained. Rauch pointed out that everyone has likely visited a website where it looks like everything had loaded, but the content is still being loaded, so images, videos, or sometimes text might show up five seconds after everything else.

“So this Largest Contentful Paint metric allows us to say how long did it take for us to load what the user is actually interested in? So when you talk about a visual storefront, it’s not just text, it’s also the picture of the coat that you want to buy,” Rauch explained. First Input Delay is another Web Vital that measures how long it takes from the user pressing a button, for example, to the site reacting. “If I tap on ‘buy’ is it reacting immediately? We take that for granted, but we’ve all been to websites where we tap and it doesn’t do anything so we kind of intuitively tap again. But a big percentage of users don’t tap again. So they just leave the website. We’re now starting to measure



these user experience metrics very diligently,” said Rauch. Another reason to care about these Web Vitals is that they’re not just used by development teams to measure how happy their users are, but Google can use them to determine search engine rank, Rauch said. In other words, a website that has poor Web Vitals may rank lower, even if they’ve done the proper search engine optimization (SEO) for their site.

Get key stakeholders involved in the process
Involving product users in UI testing is an important part of the process for companies developing products. At Sorenson, which develops communications products that serve the Deaf community, QA engineering manager Mark Grossinger, who is Deaf, explained that this is an important part of his team's testing process. His team is constantly reevaluating the needs of its users so that it knows what tools and features to provide, and UI testing is an important step in the process. In order to do their UI testing, Sorenson works closely with the Deaf community, as well as other Deaf employees within the company. It's important for them to do this because as Grossinger describes, "if you have a hearing individual who does not know what a Deaf person needs or how they would use a product, how would they develop it?" One example of where that feedback


loop made a big contribution was in developing video mail, which Sorenson calls SignMail. “As a hearing person, you can leave a voicemail if your recipient doesn’t answer your call,” Grossinger said. “In the past, there was no equivalent for the Deaf community, so we developed that feature. Now, if a Deaf person is unable to answer a call, the interpreter (or a Deaf caller) can leave a video mail (in American Sign Language), which gives the Deaf person a functionally equivalent message.” Another example that Grossinger noted was developing the Group Call feature. He explained that hearing individuals can utilize conference calls if they want to talk with multiple people, so Sorenson used feedback from its customers to create an equivalent feature. While Sorenson specifically serves the Deaf community, Grossinger noted that the need to involve the people who would be using the product is key no matter who the user base is. “When you’re developing software, you need to have people within whatever community you’re serving, whether it’s accessibility or something else,” Grossinger said. “You need to get those people to give authentic feedback. If you don’t, you could end up developing software or an app that doesn’t meet the intended user’s needs. So, I think that from the beginning of any project, when you’re at the drawing board and beginning the innovation process, you need to talk about the stakeholders and the features they need.” According to Grossinger, diversity is the key to successful UI testing. This includes, not just a diverse development team, but diversity among all departments of the company, such as sales and marketing. “Sometimes on a smaller team you notice that details can get missed without having that diversity,” said Grossinger. “Diversity also means thinking about various demographics: older populations, younger populations, socioeconomic status, education levels, people who are savvy with technology and those who are not. All those perspectives need to be included in the project development.” continued on page 12 >


Manual vs. automated UI testing
While automated testing has been a big buzzword around the industry, it hasn't quite made its way fully into UI testing yet. "It's unfortunately quite common for most or even all UI testing to be manual, i.e. performed by developers and/or testers, essentially clicking or tapping around a UI, often following predefined procedures that need to be performed any time a change has been made," said Garden's Edvald. "This may be okay-ish

for small and simple applications but can quickly balloon into a massive problem when the UI gets more complex and has a lot of variations." Edvald described his experience witnessing this firsthand when doing development for the menstrual period tracking app Clue. According to Edvald, it was important to the company to have the app available to as many different users as possible, so it is supported on a variety of devices, from "ancient Android phones to the latest iPhones, tons of different screen sizes and operating system versions, many different languages etc." Having all of these different devices, with varying screen sizes and operating systems, led to massive complexity when it came to testing. Manual testing wasn't going to be possible because a QA team wouldn't be able to manually test every possible combination at the speed with which the app was being developed and released. To solve the problem, they hired a quality engineer and put more effort into automation. "Being able to programmatically run a large number of tests is critical when

an application reaches some level of complexity and variation," said Edvald. "Even for less extreme scenarios, say web applications that have fewer dimensions to test, the cost-benefit quickly starts to tip in favor of greater automation. It's usually a good idea to also do some level of manual, exploratory testing, but it's possible to greatly reduce the surface area to cover, and thus the time and resources involved." While automating UI testing is the ideal, it's not always practical for every organization out there. For many organizations, there just isn't the

expertise to do all of the automation, and do it properly. According to Tricentis’ Sprauve, most companies still rely on manual testing for that reason. For example, they will have QA testers sitting in front of a computer and manually performing test steps. “One of the issues with UI automation in some instances is what’s known as flaky tests,” Sprauve said. “That could either be because of the type of the tool that you’re using, or it’s because of a lack of skills of how you build that automation. So one of the biggest challenges is how do I build consistent and resilient automation that won’t break when there are changes or if there are changes, how quickly can we recover to make sure that we get those items addressed so that we’re not just spending time making sure our automation is working versus actually testing. So that’s one of the biggest challenges that an organization faces.” Michael O’Rourke, senior product manager at Micro Focus, noted that most customers average around 30% to 40% automation, and they only have a few customers that are more than 80%


automated. “It takes a lot of work to be able to get to that and it involves a lot of different types of transformations that need to be incorporated in there,” said O’Rourke. “Those customers that put more emphasis on automation are generally the ones that are going, but when it comes to different time constraints, some customers find it easier to be able to only automate certain tests, but then still leave some manual processes behind, which is why it generally ends up roughly around 40%.” Other than the challenges with complexity and automation, another common challenge for QA teams is maintenance. According to O’Rourke, sometimes a developer may make a change to the UI and then when it gets sent back to the QA person, the test fails. “And they’ll spend a lot of time trying to troubleshoot to figure out what happens because the UI change obviously wasn’t properly documented or told to the QA team. That’s where a lot of the challenges come because they have to go back and modify a lot of the scripts and do a lot of maintenance on it anytime it changes,” he said. This is especially challenging early on in product life cycles when a product is being developed, O’Rourke explained. “This could be a very big problem for a lot of those different customers who have to frequently test continuously over and over again where breaks are and then go back and adjust tests,” he added. Vercel’s Rauch predicts that going forward, the web will just keep becoming more user personalized and user experience will continue being an important part of QA. “The core Web Vitals are here to stay, they’re great for developers, they’re great for business people to find this alignment, but more important than that, it’s not just those core Web Vitals,” said Rauch. “I think measuring user experience especially in ways that are unique to your product and unique to your channels is also going to be very, very important. This is just again, the name implies, these are vital signs. This is just like measuring your heart rate, but also if you want to have a great fitness performance, you gotta measure a lot of other things, not just your heart rate.” z



INDUSTRY SPOTLIGHT

Enable BizOps across the enterprise

The year 2021 began with a long list of lessons learned from 2020. Although competitiveness and digital transformation were already driving the need for greater business agility before 2020, the pandemic forced organizations to embrace extreme forms of agile to survive or thrive. In fact, "the new normal" has become synonymous with navigating unprecedented uncertainty.

What is BizOps?
BizOps enables businesses to operationalize agility across the enterprise so they can adapt faster to change. "The fundamental challenge is that business processes and data aren't flowing through the organization the way they should so that you constantly know how you're performing against business objectives," said Laureen Knudsen, chief transformation officer at Broadcom. "That flow should be happening from the top-level strategy, beyond customers, and all the way back up."

Business leaders embrace approach
The unpredictable nature of today's business environment has made traditional long-term planning futile. Although business planning is still necessary, organizations must now prepare for several possible scenarios. Business and service delivery models are constantly adapting to pandemic-related restrictions and lockdowns. At the same time, organizations should be in a position to capitalize on opportunities that arise along the way. BizOps provides a hypothesis-driven business approach that allows businesses to adapt faster to change in a data-informed way. "Developers and IT have been trying to optimize what they're doing for years. But even if you've done everything right, you're only optimizing two to four percent of your entire flow," said Knudsen. "You can't just set a business objective and force the way people achieve that objective. You need to let them experiment and share what they found with others so the business can take advantage of new ideas and best methods."

BizOps is an end-to-end concept
Although BizOps can benefit the entire business, organizations must first dissolve the barriers that created operational silos and learn how to work as a single cohesive team. The best mentors in the company tend to be DevOps and IT leaders who are steeped in agile methods and who have helped increase organizational efficiency with IT.

These mentors can teach and inspire other parts of the organization to decompose problems into smaller, more manageable pieces and work cross-functionally. They can also help their less agile counterparts get comfortable with the concept and value of unknown unknowns. According to the 2021 Gartner View from the Board of Directors Survey, seven out of 10 boards of directors have made digital business acceleration a part of their company's post-pandemic strategy. As a result, enterprises are hiring a chief digital officer (CDO) to respond to the current COVID-19 crisis. In some organizations, that role was spearheading digital transformation before the pandemic hit. Their experience and expertise have helped companies adapt faster to whatever is occurring in the moment. Other companies have been less prepared for the degree of change over

the same period. Their organizations suffer from business processes and data flows that are stuck in silos. BizOps helps dissolve the barriers so companies can benefit from fast feedback loops, meet customer expectations and reduce lingering bottlenecks. “We’re talking about putting iterative processes into the top level of an organization,” said Knudsen. “When you have a hypothesis-driven business in which you form a hypothesis and test it.” Essentially, everyone in the business adopts a continuous improvement mindset with leadership leading by example.

Shifting to outcome-based planning
Traditional planning is dead. Companies can no longer plan, act and evaluate after the fact. Instead, they must Plan, Do (form hypotheses), Check (validate the hypothesis) and Act. They also need a continuous feedback loop that minimizes risk by surfacing important signals faster than before. "You've got to keep your objective front and center. What changes is how you're going to hit that," said Knudsen. "We need to try different things until we find the right path to get there." Fundamentally, organizations must adopt a test-and-learn mindset.

Check out the BizOps coalition
Returns on software investments are difficult to measure and they rarely produce expected business outcomes. The BizOps Coalition recently unveiled the BizOps Manifesto, which is a declaration of values and principles. It's designed to better align with and continually improve software development and operations in a manner that advances digital business, through a combination of technology, culture and communication. The number of CEOs, tech executives, and luminaries participating in the BizOps Coalition is growing. Development and IT leaders are encouraged to join.


INDUSTRY SPOTLIGHT

The modern risks of open-source code

The amount of open-source code being used in modern applications has exploded. According to multiple surveys, a large majority of enterprises are reporting that open-source components and third-party libraries are being implanted into their applications, both internal and outward-facing. Developers acknowledge that utilizing open source allows them to both speed up software development and focus more on unique code attributes instead of recreating what's already been successfully established. But the question of whether or not open source is as secure as proprietary code has come to the fore with this uptake in usage. Open source is vulnerable to hackers because they can see the code, but in its defense, because of the large number of contributors to many open-source projects, more people can quickly respond to vulnerabilities in the code and remediate them when discovered. Or Chen, director of R&D and manager of the software composition analysis (SCA) group at software security solutions provider Checkmarx, said open-source developers have a higher awareness than they did before about security and vulnerabilities, but many don't follow enterprise standards of secure coding practices. Among those standards, he said, are ensuring code is being reviewed throughout all software development stages, that there are gatekeepers to the repository, and that you notify your customers and consumers about newly discovered vulnerabilities. Most importantly, he noted, are ensuring the developers using open source in their applications understand the components, who the maintainers of that code are, and that repositories are being scanned on an ongoing basis. "Developers should continue to

upgrade their packages even if there is no new vulnerability or new functionality they need. Just make sure you’re up to date,” Chen said. “It will make your life easier when you have to update it because of a vulnerability.” Another key standard developers using open source should follow is scanning the repositories with security tools similar to how they’re already using static code analysis. But in this case, software composition analysis is also needed to quickly scan your codebase to detect open-source libraries including direct and transitive dependencies, identify the specific versions in use, and identify any associated vulnerabilities and licenses risks you need to know about. The ideal scenario is finding a solution that offers both static and open-source scanning capabilities so developers can take a one-stop-shop approach. Chen did note that there are differences between using open-source components or frameworks that are backed by organizations such as the Linux Foundation or commercial providers of open source such as Red Hat and many others, vs. using open source built by a small community of developers who may not have enterprise standards in place. Vulnerabilities could be found in those smaller projects, yet it might take a longer time to remediate the vulnerability because there might not be anyone checking the board for newly discovered issues, and the community supporting the project might not have strong practices in place for remediation, he said.

Modern security risks
The big frameworks, web servers, and languages such as React, Angular, Django, and Spring are backed by big vendors who use industry standards. But small packages that Chen said are used for specific needs such as a math calculation or

connection to a database, may not have that same kind of backing. "I do see, based on our customer usage, that every big enterprise might find itself with an open-source component developed by [small communities], and those packages might have been abandoned and no longer being maintained. "So, when you find a security vulnerability in a smaller package, there may not be anyone available to immediately resolve it," he continued. "This is part of what we call the modern risks of open source and the technical debt one may inherit from packages that are no longer being supported by the community. This liability also increases risk beyond just vulnerabilities and license issues."

Newer environments are a challenge
Chen pointed out that organizations experiencing an uptick in digital transformation are working with microservices and are consuming a lot of open source to help them bootstrap their efforts to get their projects production-ready as quickly as possible. The lack of best practices in some of these cases, he said, will hurt them in the long run. "Hackers are creating all kinds of fake packages, typosquatting packages on bogus sites, or just injecting vulnerable code into existing packages. In addition, hackers often use open source for their own financial advantage, so when you run your application in production, they just want you to run their Bitcoin miner."

Bottom line
At the end of the day, open source is here to stay. It brings immense benefits to developers in enabling them to build applications faster and at scale. However, as malicious actors increasingly target these components to access sensitive and valuable data, leveraging SCA solutions that not only detect vulnerabilities earlier in development cycles, but also help prioritize and remediate issues more effectively and efficiently, is essential.


Boring But Deadly: The most uninteresting reason your project might fail

BY GEORGE TILLMANN

In 1998, NASA launched the Mars Climate Orbiter, a $300 million-plus spacecraft that traveled more than 400 million miles to the red planet. Even with speeds exceeding 3 miles per second, the journey took almost 9.5 months. Upon reaching its destination, the spacecraft fired its rockets to ease into a Mars orbit. Radio transmission was disrupted as the craft spun behind the planet and was expected to only resume 21 minutes later as it reappeared on the other side. Only it didn't. The spacecraft was never heard from again. NASA investigators determined that a definitional problem doomed the craft. The engine burn was timed to slow the craft down and place it in a safe orbit 68 miles above the surface of the planet. However, the instructions for the burn were wrong, resulting in the craft attempting to orbit Mars at a disastrous altitude of only 35 miles. The problem: NASA's spacecraft was programmed to use the metric system while the commercial contractor sent burn information to NASA using the English system (pound-seconds instead of Newton-

seconds). The burn lasted too long; the probe attempted too low an orbit, and burned up in the Mars atmosphere. This is a pretty dramatic case of the definitional problem. The engineers’ thinking was sound, their math was accurate, their numbers were correct, just the definition of those numbers (English versus metric) was misunderstood. A contentious area, fueled by definitional problems, is downtime. Imagine a situation where every Sunday at 2:00 a.m., IT takes email offline for preventive maintenance. At 2:10 a.m., the VP of marketing tries to send an email to the head of Europe only to get a message that email is not available. Later that month, the VP of marketing is surprised to see an IT report saying that there was no email downtime for that month. Result: The VP resolves never to trust IT reports again. Who is right? Was email down that month? Depends on what you mean by down. IT believes that down means an unscheduled interruption in service, while the user believes that

George Tillmann is a retired programmer, analyst, systems and programming manager. This article is excerpted from his book Project Management Scholia: Recognizing and Avoiding Project Management’s Biggest Mistakes (Stockbridge Press, 2019). He can be reached at georgetillmann@gmx.com.

down means the service is unavailable, regardless of cause. Both groups have well-formulated, complete, and appropriate definitions, both definitions are adequate, accurate, and acceptable. They just don’t agree. Of more consequence is user management being told by IT that the total cost of their new accounting system will be $1,000,000, only to find that IT did not include in its budget transition, end-user documentation, or training costs. Question: who is correct? Answer: both. When multiple parties are right, or



at least not wrong, then the basis of the disagreement is probably definitional. In the example above, IT believes that the development cost consists of the hardware, software, and staffing expenses to analyze, design, code, and test a system, but not implementation, user documentation, and user training charges, which are to be borne by the user organization. The user believes that the check he wrote for developing the system was the entire cost of getting his new application up and running, and is unaware of any additional “hidden” costs.
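One footnote to the NASA example above, added here purely as an illustration and not something the article, NASA, or its contractor actually used: unit definitions like these can be encoded as distinct types, so that mixing them up becomes a compile-time error rather than a lost spacecraft. The sketch below, in Rust, uses the standard conversion of roughly 4.448 newton-seconds per pound-force-second.

```rust
/// Impulse expressed in newton-seconds (SI), the unit the spacecraft expected.
#[derive(Debug, Clone, Copy)]
struct NewtonSeconds(f64);

/// Impulse expressed in pound-force-seconds, the unit the contractor reported.
#[derive(Debug, Clone, Copy)]
struct PoundForceSeconds(f64);

impl From<PoundForceSeconds> for NewtonSeconds {
    fn from(lbf_s: PoundForceSeconds) -> Self {
        // 1 pound-force-second is roughly 4.448 newton-seconds.
        NewtonSeconds(lbf_s.0 * 4.448_222)
    }
}

/// The guidance code accepts only SI units; anything else must be converted first.
fn plan_burn(impulse: NewtonSeconds) {
    println!("planning burn with {:.1} N·s of impulse", impulse.0);
}

fn main() {
    let reported = PoundForceSeconds(100.0);

    // plan_burn(reported);                    // does not compile: wrong type
    plan_burn(NewtonSeconds::from(reported));  // explicit, checked conversion
}
```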

The problem
Arguably, half of all IT-user disputes are definitionally based or exacerbated. The definitional problem appears in multiple forms.

Problem 1: Multiple Conflicting Well-Defined Definitions. One word or concept can have multiple legitimate and accepted but inconsistent definitions. Each party is using a formal definition that is well defined, correct, and broadly accepted by users in the field, just different. The NASA case is a good example of this. Both the NASA team and the external contractor used complete, accurate, and accepted definitions of thrust. Each used its definition correctly and arrived at a situationally correct answer. Each team was aware that there were two distinct legitimate definitions. Unfortunately, each team assumed that the other team was using the same definition it was.

Problem 2: Multiple But Not All Well-Defined Definitions. One word or concept can have multiple definitions, with at least one a legitimate, detailed, and correct definition and at least one an inexact, although popular, definition. This is a huge category that can be very frustrating for professionals. It usually involves a very technical and exact definition that is used by professionals in a field and the same word having a similar but much less exact definition used by non-professionals. Take the word theory. For a scientist, a theory is a proposition that has been tested, probably multiple times, and has been proven to be sufficiently correct to predict future events. In empirical science, there is no higher state of truth. The popular definition of theory is quite different. Non-scientists use the word theory to mean unproven and, at best, a guess. The ambiguous use of theory has led to some interesting if fruitless discussions. Anti-evolutionists point out that evolution is only a theory and therefore not proven science. Scientists say that the fact that it’s a scientific theory says that it is proven.

Problem 3: Non-Specific, Fuzzy, or Incomplete Definitions. While there are words that do not have a specific and well-accepted definition, it is often the case that some people know, or think they know, what they mean. Many projects define effort in person-hours, person-months, or person-years, where a person-year is the work one person routinely completes in one calendar year. Person-months are everywhere in IT and throughout many non-IT project-focused organizations as well. It is not an overly technical term, and probably 95 percent of the people you meet on the street will tell you they know what it means. It is undoubtedly one of the most ubiquitous project-related terms. But what does it mean? Exactly how much work gets done in a person-year? Experience shows that the IT year can vary from fewer than 1,500 work hours to more than 2,000 hours. Why the difference? Many organizations remove activities such as vacation, training, and administrative time from the person-year. Other organizations believe that these activities are the cost of doing business and should be included in the definition. Some take a stand in the middle, counting certain activities in the person-year, such as administrative time, but precluding others, such as vacations. Yet, anecdotal evidence indicates that fewer than one IT organization in four has an exact definition of effort, formally publishes the definition, communicates it to relevant parties, and ensures that whenever it is used it is used correctly. This problem can appear in IT where two project teams, or an IT organization and an external contractor, have different and poorly communicated definitions for common systems development words.

Problem 4: Obfuscation. Sometimes words are used to hide rather than elucidate meanings. Just bought a 2019 Ford Focus? That’s a used car, while the 2016 BMW across the street is a pre-owned vehicle. Interested in decision support systems (DSS)? Well, that’s old hat. Years ago the DSS gave way to executive information systems (EIS). The difference? Well, if there is any, then at least 90 percent of it is marketing hype. The EIS? Well, it was replaced with online analytical processing (OLAP). The difference? You guessed it. OLAP? Well, it was replaced by, er, was it business intelligence (BI) or predictive modeling (PM)? Whatever!


Were all of these the same? No, there were differences and improvements. Did all deserve a different name? No. General Motors might change the design of the Corvette every year, but they still call it a Corvette.

What you can do
This is one of the few IT areas where the solution is relatively simple, easy to implement, and painless to use.
Solution One: Define, Define, Define. It might take a little upfront effort, but all technical and even business terms should be formally defined and placed in a project glossary. Do it once and it can be used again and again. The project glossary should be part of the contract between the user and IT — even vendors and IT. It can also be an appendix to reports, a chapter in development documents, or referenced as a stand-alone text. Definitionally challenged? Find a source (book, vendor documents, websites, etc.) that you can live with in toto or as input to your own project glossary.
Solution Two: Define First, Then Develop. The project glossary should exist before any development is started, planning completed, or costs discussed.

This is important for two reasons. First, it is easier to gain agreement on terms before anyone has a vested interest in the definition. Agreement on who will pay for end-user training before any contracts are signed is a lot easier than the argument that can occur after budget finalization. Down the road, surprises are almost never good. Second, a number of mid- or post-project disagreements can be settled easily by referring to published and agreed-upon definitions. The same is true within the project team. Defining documentation or testing can abrogate many potential project squabbles as team leaders maneuver to meet deadlines.
Solution Three: Agreement On Operational Definitions Is Required; Agreement On Conceptual Definitions Is Not. A conceptual definition is an intangible, theoretical, and abstract concept that you hold to be true. You might believe that man never landed on the moon, that climate change is a hoax, or that existence is not a predicate. That is your right. An operational definition, also called a working definition, is an agreement to use a precise definition in a specific context. The context for an operational definition can be temporary (before the year 2000 or starting next year, etc.) or limited to a specific project, department, location, company, or even country. Operational definitions make it easier for everyone on a team to work together. For example, you might strongly disagree with the team’s definition of temporary storage, but, for the purposes of communication and team harmony, you agree to use it during the project.
Solution Four: Keep the Definitions Updated. Technology changes rapidly and vendor marketing terms even more so. The project glossary should be revisited at the beginning of each project to see whether any definitions need to be modified, new terms introduced, or old terms removed.
Unless you are a lexicographer, you probably do not find the subject of terminology that interesting, and certainly not as a part of project management. It does have a sort of bowtie image. However, a great many of the problems a project manager will encounter involve IT and sometimes even business terminology. It is important that developers and users understand what is being said, even if they do not necessarily agree with the definitions used. z
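To make Solution One concrete, a project glossary can be as lightweight as a version-controlled file the team reviews at kickoff. The sketch below is illustrative only; the terms and the specific operational definitions are hypothetical examples of the kind of entries a team would agree on.

```python
# A minimal, version-controlled project glossary of operational definitions.
# All values are hypothetical examples; each project agrees on its own.
PROJECT_GLOSSARY = {
    "downtime": "Any interval in which the service is unavailable to end users, "
                "whether or not the interruption was scheduled.",
    "person-month": "152 productive hours; excludes vacation and training, "
                    "includes administrative time.",
    "development cost": "Analysis, design, code, and test only. Implementation, "
                        "user documentation, and user training are budgeted "
                        "separately by the user organization.",
}

def define(term: str) -> str:
    """Look up the agreed operational definition, or fail loudly."""
    try:
        return PROJECT_GLOSSARY[term.lower()]
    except KeyError:
        raise KeyError(f"'{term}' has no agreed definition -- add it before use")

print(define("person-month"))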

The Little Book of Big Mistakes and How to Avoid Them

Project Management Scholia focuses on the 17 most consequential reasons IT projects fail and presents ways the project manager can avoid these problems by reading the danger signs and taking timely corrective action. The book dives into the often painful lessons learned — not from the library or the classroom — but from the corporate trenches of real-world systems development.

By George Tillmann

Available on Amazon

George Tillmann is a retired programmer, analyst, management consultant, CIO, and author.



DEVOPS WATCH

Quali announces new funding to address infrastructure complexity in enterprise DevOps
BY CHRISTINA CARDOZA

Infrastructure automation company Quali has announced a $53 million new round of funding co-led by Greenfield Partners and JVP. Quali revealed it intends to use the new funding to streamline enterprise DevOps infrastructure through its CloudShell platform, as well as continue to grow its partnerships and team.
“With the growing adoption of DevOps and CI/CD best practices, beyond early adopters, challenges related to scaling such practices are becoming more prevalent — removing operational, process and knowledge bottlenecks, reducing costs, assuring governance and unifying infrastructure. This is where Quali shines,” stated Yoav Tzruya, general partner at JVP and chairperson of Quali.
Quali’s CloudShell platform aims to simplify cloud infrastructure by making Infrastructure as Code accessible and application development and deployment secure, according to the company.
In a recent webinar on SD Times, Quali’s Vice President of Product Strategy David Williams said to think of DevOps as a train, and infrastructure as the track that the train runs on. In order to successfully transform mature DevOps practices into powerful business transaction tools, infrastructure first needs to be effectively managed and understood.
“Infrastructure has always been an underpinning part of IT, but it is becoming massively sophisticated and you need to understand what it is actually you are doing to support the product you are developing,” he said.
Shay Grinfeld, Quali’s new board member and managing partner at Greenfield Partners, added: “Our investment thesis was built around Quali’s unique ability to streamline cloud and infrastructure complexity for application and technology teams with the growing adoption of DevOps methodology and CI/CD procedures. We are impressed with their recent growth and are excited about the opportunity to build an exceptional business.” z

In other DevOps news
n BMC announced new capabilities and enhancements to advance enterprise DevOps in the latest update of its Automated Mainframe Intelligence and Compuware portfolios. zAdviser was updated with new dashboards to provide DevOps teams with information on how Compuware DevOps solutions and capabilities are performing. Topaz for Total Test was updated with improved automated testing capabilities that simplify automated test case creation, expand test coverage and reduce rework.
n Incident management company Kintaba announced new Automations that make it easy for teams to automate decision making during major incidents and outages. Users can now automate when to add the right people, tags and on-call roles within incidents. The company plans to continue to add to its list of conditions and actions.
n KubeSphere, the open-source distributed operating system, is now available on AWS Quick Start. KubeSphere provides automated IT operation and DevOps workflows, a developer-friendly wizard web UI, application lifecycle management, and observability capabilities.
n Reliability platform provider StackPulse has exited stealth mode with a mission to take DevOps to the next level. The company also announced a $20 million series A round of funding led by GGV Capital it will use to continue to advance its reliability platform. The platform features incident response and management capabilities, detection and remediation services and support for existing CI/CD or GitOps pipelines. z


BY WAYNE ARIOLA

This is a rather unique time in the evolution of software testing. Teams worldwide are facing new challenges associated with working from home. Digital transformation initiatives are placing unprecedented pressure on innovation. Speed is the new currency for software development and testing. The penalty for software failure is at an all-time high as news of outages and end-user frustration go viral on social media. Open-source point tools are good at steering interfaces but are not a complete solution for test automation. Meanwhile, testers are being asked to do more while reducing costs. Now is the time to re-think the software testing life cycle with an eye towards more comprehensive automation. Testing organizations need a platform that enables incremental process improvement, and data curated for the purpose of optimizing software testing must be at the center of this solution. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with agile and enterprise DevOps initiatives.

Wayne Ariola is a recognized thought leader on software testing topics such as continuous testing, risk-based testing, service virtualization, API testing and open source. Wayne has created and marketed products that support the dynamic software development, test, and delivery landscape. He has driven the design of many innovative technologies and received several patents for his inventions.

What is an Open Testing Platform?
An Open Testing Platform (OTP) is a collaboration hub that assists testers to keep pace with change. It transforms observations into action — enabling organizations to inform testers about critical environment and system changes, act upon observations to zero in on ‘what’ precisely needs to be tested, and automate the acquisition of test data required for effective test coverage.



The most important feature of an Open Testing Platform is that it taps essential information across the application development and delivery ecosystem to effectively test software. Beyond accessing an API, an OTP leverages an organization’s existing infrastructure tools without causing disruption — unlocking valuable data across the infrastructure.

Data curated for testing software allows any tester (technical or non-technical) to access data, correlate observations and automate action.

Model in the middle
At the core of an Open Testing Platform is a model. The model is an abstracted representation of the transactions that are strategic to the business. The model can represent new user stories that are in-flight, system transactions that are critical for business continuity, and flows that are pivotal for the end-user experience.
In an OTP, the model is also the centerpiece for collaboration. All tasks and data observations either optimize the value of the model or ensure that the tests generated from the model can execute without interruption. Since an OTP is focused on the software testing life cycle, we can take advantage of known usage patterns and create workflows to accelerate testing. For example, with a stable model at the core of the testing activity:
• The impact of change is visualized and shared across teams
• The demand for test data is established by the model and reused for team members
• The validation data sets are fit to the logic identified by the model
• The prioritization of test runs can dynamically fit the stage of the process for each team, optimizing for vectors such as speed, change, business-risk, maintenance, etc.
Models allow teams to identify critical change impacts quickly and visually. And since models express test logic abstracted from independent applications or services, they also provide context to help testers collaborate across team boundaries.
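The article does not tie the model to any particular notation, but the idea of generating tests from a central model can be sketched with a small directed graph of business flows. The flow names below are hypothetical; real OTP tooling would derive the model from requirements and observed usage rather than a hand-written dictionary.

```python
from itertools import count

# Hypothetical flow model: each step lists the steps reachable from it.
flow_model = {
    "login":        ["search", "view_account"],
    "search":       ["add_to_cart"],
    "add_to_cart":  ["checkout"],
    "view_account": [],
    "checkout":     [],
}

def test_paths(model, start):
    """Enumerate end-to-end paths through the model; each path becomes a test."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        next_steps = model.get(path[-1], [])
        if not next_steps:
            yield path                       # reached a terminal step
        else:
            for step in next_steps:
                stack.append(path + [step])  # extend the path

for i, path in zip(count(1), test_paths(flow_model, "login")):
    print(f"test_{i}: " + " -> ".join(path))
```

When a step changes, only the paths that pass through it need to be regenerated or re-prioritized, which is the collaboration benefit the model is meant to provide.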

Automation must be driven by data. An infrastructure that can access real-time observations as well as reference a historical baseline is required to understand the impact of change. Accessing data within the software testing life cycle does not have to be intrusive or depend on a complex array of proprietary agents deployed across an environment. In an overwhelming majority of use cases, accessing data via an API provides enough depth and detail to achieve significant productivity gains. Furthermore, accessing data via an API from the current monitoring or management infrastructure systems eliminates the need for additional scripts or code that require maintenance and interfere with overall system performance.
Many of the data points required to optimize the process of testing exist, but they are scattered across an array of monitoring and infrastructure management tools such as Application Performance Monitoring (APM), Version Control, Agile Requirements Management, Test Management, Web Analytics, Defect Management, API Management, etc. An Open Testing Platform curates data for software testing by applying known patterns and machine learning to expose change. This new learning system turns observations into action to improve the effectiveness of testing and accelerate release cycles.

Why is an Open Testing Platform needed?
Despite industry leaders trying to posture software testing as value-added, the fact is that an overwhelming majority of organizations identify testing as a cost center. The software testing life cycle is a rich target for automation since any costs eliminated from testing can be leveraged for more innovative initiatives. If you look at industry trends in automation for software testing, automating test case development hovers around 30%. If you assess the level of automation across all facets of the software testing life cycle, then automation averages about 20%. This low average automation rate highlights that testing still requires a high degree of manual intervention, which slows the software testing process and therefore delays software release cycles. But why have automation rates remained so low for software testing when initiatives like DevOps have focused on accelerating the release cycle? There are four core issues that have impacted automation rates:
• Years of outsourcing depleted internal testing skills
• Testers had limited access to critical information
• Test tools created silos
• Environment changes hampered automation

Outsourcing depleted internal testing skills
The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing Opex.


With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. This shift to outsourcing had three main detrimental impacts on software testing: the model promoted manual task execution, the adoption of automation was sidelined, and there was a business process “brain drain” or knowledge drain.
With the expansion of agile and the adoption of enterprise DevOps, organizations must execute the software testing life cycle rapidly and effectively. Organizations will need to consider tightly integrating the software testing life cycle within the development cycle, which will challenge organizations using an offshore model for testing. Teams must also think beyond the simple bottom-up approach to testing and re-invent the software testing life cycle to meet the increasing demands of the business.

The Value of an Open Platform

Testers lacked critical information
Perhaps the greatest challenge facing individuals responsible for software testing is staying informed about change. This can be requirements-driven changes of dependent applications or services, changes in usage patterns, or late changes in the release plan which impact the testers’ ability to react within the required timelines. Interestingly, most of the data required for testers to do their job is available in the monitoring and infrastructure management tools across production and pre-production. However, this information just isn’t aggregated and optimized for the purpose of software testing. Access to APIs and advancements in the ability to manage and analyze big data changes this dynamic in favor of testers.
Although each organization is structurally and culturally unique, the one commonality found among agile teams is that the practice of testing software has become siloed. The silo is usually constrained to the team or constrained to a single application that might be built by multiple teams. These constraints create barriers since tests must execute across componentized and distributed system architectures.

Ubiquitous access to best-of-breed open-source and proprietary tools also contributed to these silos. Point tools became very good at driving automated tests. However, test logic became trapped as scripts across an array of tools. Giving self-governing teams the freedom to adopt a broad array of tools comes at a cost: a significant degree of redundancy, limited understanding of coverage across silos, and a high amount of test maintenance. The good news is that point tools (both open-source and proprietary) have become reliable at driving automation. However, what’s missing today is an Open Testing Platform that helps drive productivity across teams and their independent testing tools.

Changes hampered automation
Remarkably, the automated development of tests hovers at about 30%, but the automated execution of tests is half that rate, at 15%. This means that tests that are built to be automated are not likely to be executed automatically — manual intervention is still required. Why? It takes more than just the automation to steer a test for automation to yield results. For an automated test to run automatically, you need:
• Access to a test environment
• A clean environment, configured specifically for the scope of tests to be executed
• Access to compliant test data
• Validation assertions synchronized for the test data and logic
As a result, individuals who are responsible for testing need awareness of broader environment data points located throughout the pre-production environment. Without automating the sub-tasks across the software testing life cycle, test automation will continue to have anemic results.
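In practice, that checklist becomes a set of pre-flight checks that gate automated execution. The sketch below is a generic illustration; the check functions are placeholders for whatever environment, test-data, and assertion-synchronization probes a team actually has, and none of it is specific to any one tool.

```python
import sys

def environment_reachable() -> bool:
    # Placeholder: e.g., call the test environment's health endpoint.
    return True

def environment_clean() -> bool:
    # Placeholder: e.g., verify no leftover state from the previous run.
    return True

def test_data_available() -> bool:
    # Placeholder: e.g., confirm compliant, masked test data was provisioned.
    return True

def assertions_in_sync() -> bool:
    # Placeholder: e.g., check validation data sets match the current test data.
    return True

PRE_FLIGHT = {
    "test environment reachable": environment_reachable,
    "environment clean and correctly configured": environment_clean,
    "compliant test data provisioned": test_data_available,
    "validation assertions synchronized": assertions_in_sync,
}

failures = [name for name, check in PRE_FLIGHT.items() if not check()]
if failures:
    print("Blocking automated execution; unmet preconditions:")
    for name in failures:
        print(f"  - {name}")
    sys.exit(1)
print("All preconditions met -- handing off to the test runner.")
```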

Leveling the playing field
Despite the hampered evolution of test automation, testers and software development engineers in test (SDETs) are being asked to do more than ever before. As systems become more distributed and complex, the challenges associated with testing compound. Yet the same individuals are under pressure to support new applications and new technologies — all while facing a distinct increase in the frequency of application changes and releases. Something has got to change.
An Open Testing Platform gives software testers the information and workflow automation tools to make open-source and proprietary testing point tools more productive in light of constant change. An OTP provides a layer of abstraction on top of the teams’ point testing tools, optimizing the sub-tasks that are required to generate effective test scripts or no-code tests. This approach gives organizations an amazing degree of flexibility while significantly lowering the cost to construct and maintain tests.
An Open Testing Platform is a critical enabler to both the speed and effectiveness of testing. The OTP follows a prescriptive pattern to assist an organization to continuously improve the software testing life cycle. This pattern is ‘inform, act and automate.’ An OTP offers immediate value to an organization by giving teams the missing infrastructure to effectively manage change.

Inform the team as change happens
What delays software testing? Change, specifically late changes that were not promptly communicated to the team responsible for testing. One of the big differentiators for an Open Testing Platform is the ability to observe and correlate a diverse set of data points and inform the team of critical changes as change happens. An OTP automatically analyzes data to alert the team of specific changes that impact the current release cycle.

Act on observations
Identifying and communicating change is critically important, but an Open Testing Platform has the most impact when testers are triggered to act. In some cases, observed changes can automatically update the test suite, test execution priority or surrounding sub-tasks associated with software testing. Common optimizations such as risk-based prioritization or change-based prioritization of test execution can be automatically triggered by the CI/CD pipeline. Other triggers to act are presented within the model-based interface as recommendations based on known software testing algorithms.
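Change-based prioritization, for example, can be as simple as mapping changed components to the tests that cover them and running those first. The component names, test names, and coverage map below are hypothetical; a real pipeline would derive the change list from version control.

```python
# Hypothetical coverage map: which tests exercise which components.
coverage = {
    "checkout_api":   ["test_checkout_happy_path", "test_checkout_declined_card"],
    "search_service": ["test_search_filters", "test_search_pagination"],
    "login_service":  ["test_login", "test_password_reset"],
}

def prioritize(changed_components, coverage, full_suite):
    """Run tests touching changed components first, then everything else."""
    impacted = [t for c in changed_components for t in coverage.get(c, [])]
    rest = [t for t in full_suite if t not in impacted]
    return impacted + rest

full_suite = sorted({t for tests in coverage.values() for t in tests})
changed = ["checkout_api"]          # e.g., derived from the latest commits
print(prioritize(changed, coverage, full_suite))
```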

Automate software testing tasks
When people speak of “automation” in software testing they are typically speaking about the task of automating test logic versus a UI or API. Of course, the scope of tests that can be automated goes beyond the UI or API, but it is also important to understand that the scope of what can be automated in the software testing life cycle (STLC) goes far beyond the test itself. Automation patterns can be applied to:
• Requirements analysis
• Test planning
• Test data
• Environment provisioning
• Test prioritization
• Test execution
• Test execution analysis
• Test process optimization

Key business benefits
By automating, or augmenting with automation, functions within the software testing life cycle, an Open Testing Platform can provide significant business benefits to an organization. For example:
• Accelerating testing will improve release cycles
• Bringing together data that had previously been siloed allows more complete insight
• Increasing the speed and consistency of test execution builds trust in the process
• Identifying issues early improves capacity
• Automating repetitive tasks allows teams to focus on higher-value optimization
• Eliminating mundane work enables humans to focus on higher-order problems, yielding greater productivity and better morale


Software testing tools have evolved to deliver dependable “raw automation,” meaning that the ability to steer an application automatically is sustainable with either open-source or commercial tools. If you look across published industry research, you will find that software testing organizations report test automation rates to be (on average) 30%. These same organizations also report that automated test execution is (on average) 16%. This gap between the creation of an automated test and the ability to execute it automatically lies in the many manual tasks required to run the test. Software testing will always be a delay in the release process if organizations cannot close this gap.
Automation is not as easy as applying automated techniques for each of the software testing life cycle sub-processes. There are really three core challenges that need to be addressed:
1) Testers need to be informed about changes that impact testing efforts. This requires interrogating the array of monitoring and infrastructure tools and curating data that impacts testing.
2) Testers need to be able to act on changes as fast as possible. This means that business rules will automatically augment the model that drives testing – allowing the team to test more effectively.
3) Testers need to be able to automate the sub-tasks that exist throughout the software testing life cycle. Automation must be flexible to accommodate each team’s needs yet simple enough to make incremental changes as the environment and infrastructure shift.
Software testing needs to begin its own digital transformation journey. Just as digital transformation initiatives are not tool initiatives, the transformation to sustainable continuous testing will require a shift in mindset. This is not shift-left. This is not shift-right. It is really the first step towards Software Quality Governance. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with agile and enterprise DevOps initiatives. z


Buyers Guide

Data without integration is just data
BY JAKUB LEWKOWICZ

As digital transformation initiatives have picked up steam due to the coronavirus pandemic, companies are getting a lot more disparate data that needs to be put together. They’re looking towards data integration solutions to streamline that process.
A survey conducted by IDC in December last year found that 94% of data engineers, data integration specialists, data stewards, and chief data officers are integrating up to five types of data. This integration process begins with an ingestion process, and includes steps such as cleansing, extract, transform, load (ETL) mapping, and transformation. Once complete, the data integration ultimately enables analytics tools to produce effective, actionable business intelligence and compiles it all into a single, unified view.
Without unified data, a single report typically involves logging into multiple accounts on multiple sites, accessing data within native apps, copying over the data, reformatting, and cleansing, all before analysis can happen, which can be very time consuming for data scientists and developers to set up.
One major factor that necessitates an effective data integration strategy is the fact that there are many different types of data that an organization ingests. The data can come from mainly three different sources, according to Sameer Parulkar, the product marketing director for Red Hat Integration. One is data that is stored in traditional ERP systems, data warehouses, or data lakes. Another is data in motion, which can come from millions of devices, different customer touchpoints and engagement points as well as physical stores. Last is data in action, which is generated by developers, data scientists and architects to develop applications or services.
“All this data needs to be collected, aggregated, managed, and stored, but there are different data formats. There are different data definitions across different touch points, across different data sources. All of this has to be reconciled in one way or another in a secure way. What about the data quality? All of these are important elements and common pain points with data integration,” Parulkar said.
The various forms of data are also being managed across many different types of data management technologies. At the top of the list are spreadsheets and relational databases. Other common types are analytical databases and mainframes, according to Stewart Bond, the research director of IDC’s Data Integration and Intelligence Software service.
“Data lakes are surprisingly at the bottom of the list, but the bottom is still over 50% of people who responded to the survey back in December,” Bond said. “I had to answer a lot of inquiries about whether data lakes are going to kill the need for data integration? And no, it doesn’t kill it at all. You still have all those different types of data being stored in that one place. You still need to integrate that data to make any sense of it.”
Meanwhile, gathering all of the data is only one side of the coin, according to Bond. It’s also about understanding that data to make the most use out of data integrations. “A lot of data integrations have been around for bringing multiple disparate data sets together, trying to understand the correlations between them and then coming up with some sort of insight,” Bond said. “You can only get that insight when putting data together.”
Bond added that data integrations are most frequently used for master data, which is data about the people, places, and things that your organization cares about. “You’ve gotten this data distributed over the place and so bringing that together is a data integration problem, reconciling all the different versions of that data is a master data management data integration problem, and now you have to find the most important, not necessarily the most recent version of the truth, but the best version of the truth for that particular entity in that organization and the context within which that data is being used,” Bond said.
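Bond’s “best version of the truth” problem can be illustrated with a small reconciliation pass over duplicate records. The data and the scoring rule below are hypothetical; production master data management weighs many more signals, but the shape of the problem is the same.

```python
from datetime import date

# The same customer as seen by three hypothetical source systems.
records = [
    {"source": "crm",     "email": "a.chen@example.com", "phone": None,
     "updated": date(2020, 11, 3)},
    {"source": "billing", "email": "a.chen@example.com", "phone": "555-0100",
     "updated": date(2020, 6, 18)},
    {"source": "support", "email": None, "phone": "555-0100",
     "updated": date(2021, 1, 9)},
]

def completeness(rec):
    """Count how many key attributes the record actually carries."""
    return sum(1 for field in ("email", "phone") if rec[field])

# Prefer the most complete record; break ties with the most recent update.
golden = max(records, key=lambda r: (completeness(r), r["updated"]))

# Back-fill any fields the winner is still missing from the freshest other sources.
for rec in sorted(records, key=lambda r: r["updated"], reverse=True):
    for field in ("email", "phone"):
        golden[field] = golden[field] or rec[field]

print(golden)
```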

How does your solution help customers with their data integration initiatives?

Eric Madariaga, Chief Marketing Officer at CData Software
“Ultimately, we solve data connectivity challenges through standardization — providing a common layer for accessing different data and systems. We then combine that with tooling to solve specific problems within messaging, data pipeline connectivity, and data management.
While APIs have become the standard for integration, varying applications and processes all approach APIs a different way. There are different protocols, different ways to authenticate and secure APIs, and different data structures and payloads. You can submit an invoice to a partner in a B2B transaction, and the EDI and protocol request will look very different from the API integration that you would make to your ERP or CRM system. We focus on standardizing connectivity between these systems and provide a comprehensive suite of tools to support integration.
For example, CData Sync is a data pipeline tool focused on bulk data movement for data warehousing initiatives. Instead of providing real-time connectivity to data like our data virtualization components, Sync replicates data from hundreds of different systems into a common database or data warehouse like Snowflake, BigQuery, or any standard RDBMS where you can do analytics or other processing. Our ArcESB product is another tool which is more message-driven. It’s really good at moving data from one application to another and handling the business logic that often drastically differs between two companies.
All our solutions leverage established standards to simplify integration between applications and data. Everything we do centers around the context of making it as easy as possible for organizations to leverage their data assets and get the most value from their data, no matter which systems and tools they may be using.”

Arawan Gajajiva, Principal Solution Architect at Matillion
“One of the key differentiators with Matillion versus other tools is we subscribe to what’s called an E-L-T architecture, and where that’s different is we push all of the data transformations to your cloud data warehouse. This is different from many other tools out there that use E-T-L, where the data transformation is done in the data integration layer. With Matillion’s E-L-T architecture, the benefit here is that as your data volumes grow, your cloud data warehouse is going to be really well-suited for these really large workloads. What Matillion offers here is maximizing your investment in your cloud data warehouse technology so that as your data volumes grow, your Matillion footprint doesn’t need to grow. We also offer an easy UI to make it all intuitive at the end of the day.
Matillion ETL is our flagship product that is deployed into customers’ cloud environments. There are separate versions of Matillion ETL based upon which cloud you’re using, whether it’s AWS, Azure, GCP, and then also separate versions depending on which cloud data warehouse you’re using. So that means different products for AWS Redshift or Snowflake, for example. We also offer Matillion Data Loader for free as a SaaS product for loading data into your cloud data warehouse, with native integrations to many popular common on-premise and cloud data sources including Salesforce, Excel, and Google Analytics. Matillion ETL comes with an extensive list of pre-built data source connectors for on-premises and cloud databases, SaaS applications, documents, NoSQL sources, and more to quickly load data into your cloud data environment as well as the ability to Create Your Own Connector to easily build custom connectors to any REST API source system.”
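The E-L-T pattern Gajajiva describes, landing the raw data first and transforming it inside the warehouse engine, is independent of any one product. The sketch below uses SQLite as a stand-in for a cloud data warehouse and made-up rows; it illustrates the pattern only and is not Matillion's tooling, which generates this kind of work through its UI.

```python
import sqlite3  # stand-in for a cloud data warehouse connection

conn = sqlite3.connect(":memory:")

# EXTRACT + LOAD: land the source rows as-is; no transformation in the pipeline layer.
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, country TEXT)")
raw_rows = [  # in a real pipeline these come from files, APIs, or source databases
    ("1001", "19.99", "US"),
    ("1002", "5.00", "UK"),
    ("1003", "12.50", "US"),
]
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_rows)

# TRANSFORM: push the work down into the warehouse engine with plain SQL.
conn.execute("""
    CREATE TABLE orders_by_country AS
    SELECT country, SUM(CAST(amount AS REAL)) AS revenue
    FROM raw_orders
    GROUP BY country
""")
print(conn.execute("SELECT * FROM orders_by_country ORDER BY country").fetchall())
```

The point of the pattern is that the transformation scales with the warehouse, not with the integration tool sitting in front of it.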

Sameer Parulkar, Product Marketing Director for Red Hat Integration
“With the Red Hat Integration portfolio, we have a set of cloud-native capabilities that enable customers to connect their data, whether that’s through aggregating that data or real-time sharing of that data. Red Hat Integration includes our messaging capability based on Apache ActiveMQ, which is used to share data in real time. We offer data streaming capabilities based on Apache Kafka, which is increasingly being adopted to stream and share data. The portfolio also contains the Apache Camel-based integration framework for data connectivity and API Management to share, secure, control, analyze and monetize APIs. We also offer Red Hat Runtimes, a set of products, tools, and components for developing and maintaining cloud-native applications. All of that is supported with Red Hat Integration.
The key aspect is that all these capabilities are container-native or cloud-native. Red Hat OpenShift provides a good foundation to host and scale data workloads, AI workloads or AI machine learning workloads and deploy them across hybrid cloud, whether it’s on-premise, a private cloud or a public cloud.
Our overall focus is to support event-driven architecture and initiatives. Moving forward, we’re working to expand our traditional messaging and the data streaming capabilities with a focus on Kafka in the Kubernetes environment and expand our API capabilities to support event-driven use cases. We want to take our change data capture and serverless data sharing capabilities to the next level. We want to make it much easier for customers to adopt these technologies.” z
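Because the streaming capability Parulkar mentions is based on Apache Kafka, the sharing pattern can be sketched with any Kafka client. The example below uses the community kafka-python package with a hypothetical broker address and topic; it is not Red Hat-specific code, just an illustration of publishing a change event for downstream consumers.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker and topic; any Kafka-compatible cluster would look the same.
producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish a change event so downstream consumers see the update in real time.
event = {"entity": "customer", "id": 42, "op": "update", "field": "email"}
producer.send("customer-changes", value=event)
producer.flush()
```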



Transform your data. Transform your business. Data is your most valuable resource. Unlock its full potential in the cloud. Break down barriers, connect data teams, and drive measurable transformation across your enterprise. See how Cisco, Slack, and DocuSign achieved competitive advantage across their cloud data initiatives with Matillion and Snowflake. get.matillion.com/casestudies

Quickly integrate and transform your data in Snowflake, Amazon Redshift, Google BigQuery, Microsoft Azure Synapse and Delta Lake on Databricks to unleash the potential of your cloud.

Copyright © 2021 Matillion

Learn more at Matillion.com



Other common data challenges that organizations face come down to the way the data itself is stored. For example, data from legacy systems often has missing markers such as times and dates for activities, and data that’s taken in from outside sources might not have the same level of detail as data from internal sources. “The more you know about your data, the higher quality data you have, and the better you can integrate that data,” Bond added.
Also, different parts of the organization need all that context, whether that’s data governance, data quality management, analytics, data science, AI, or machine learning. “Data without intelligence, it’s just data,” Bond said.
This has all exacerbated the need for data integration tools that can simplify each step of the process to save time. Organizations are looking towards solutions that have a lot of pre-built connectors and ones that they can port to hybrid cloud models without needing to rebuild the integrations. The data integration tools of today need to be able to work natively in a single cloud, multicloud, or hybrid cloud environment.
“As organizations are shifting to hybrid cloud and full cloud, there’s a number of different systems that need to connect on premise and across the different applications and services that they’re using. So as that grows, the integration challenges get more and more difficult,” said Eric Madariaga, the chief marketing officer at CData Software.
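The missing-marker problem is typically handled during cleansing: flag the incomplete rows rather than silently merging them. A minimal pandas sketch, with hypothetical columns and values:

```python
import pandas as pd

legacy = pd.DataFrame({
    "account_id": [1, 2, 3],
    "last_activity": ["2021-01-04", None, "2020-12-19"],  # legacy feed drops some dates
})
external = pd.DataFrame({
    "account_id": [1, 2, 3],
    "region": ["EMEA", "AMER", None],  # outside source lacks some detail
})

merged = legacy.merge(external, on="account_id", how="outer")
merged["last_activity"] = pd.to_datetime(merged["last_activity"], errors="coerce")

# Flag incomplete rows for data stewardship instead of letting them skew analytics.
merged["needs_review"] = merged["last_activity"].isna() | merged["region"].isna()
print(merged)
```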

Accelerated data integration initiatives
“The pandemic has accelerated digital transformation, and I’ve long stated that data is the lifeblood of digital transformation,” IDC’s Bond said. “It’s all virtual and data is so much more part of what we’re doing these days than it’s ever been before.”
When the pandemic first hit, there was some negative impact on the big data and analytics budgets, and data integration is a part of that. However, as the economic situation changed a little bit over time, the budgets for spending on big data and analytics started to increase, and budgets have continued to increase as the economy starts to return to growth, Bond explained.
“When you look at what has happened during the pandemic across many industries, everyone is adapting to this new way of reaching customers,” Red Hat’s Parulkar said. For example, governments are trying to help their citizens so that they don’t have to physically go somewhere to access their services, retail stores are trying to find new ways for their customers to get their items shipped, ordered online, or done by curbside pickup, as well as the growth of many different delivery services like DoorDash and GrubHub. Data plays an important role in making all of these processes more efficient, Parulkar explained.
The demand for data integrations already existed before the pandemic, but when it hit, the priorities changed, Arawan Gajajiva, a principal solution architect at Matillion, explained. “Companies are generating exponential amounts of data every year and they’re recognizing they need to corral, wrangle, and get value out of their data, so that hasn’t changed with COVID. What I have seen, though, is that as COVID hit us last year, priorities have changed,” Gajajiva said. “It’s not that they didn’t have a data focus, it’s just what data they were focused on. And so a great example is that we’re seeing that customers were really prioritizing getting that COVID data loaded into their data warehouses.”
When it comes to types of data, IDC saw a significant increase in the demand for spatial data within the last nine months. This type of data can be used to map out where the virus is and what the numbers are in a particular location.

The size and type of organization matters
Organizations have had to restructure their infrastructure to handle the influx of data, and companies of different sizes are approaching this in different ways. Large companies have teams that are dedicated to managing data and managing the whole process around getting the data where it needs to be, and also getting the people the tools that they need to be able to process the data, whether that’s analytic tools, alerts and notifications, Mike Albritton, the managing director of ArcESB, explained.

“On the other hand, small and medium size businesses, the SMB market, may or may not have teams to own that process and so they’re often looking for something that’s more out of the box, or maybe even looking for a service provider to come and help set up some sort of process for them,” Albritton said. Red Hat’s Parulkar added that as the size of a company increases, the data integration and the data analysis and data quality becomes much more important. On the other hand, IDC’s Bond said it has much more to do with how much data you need to function as an organization rather than your size. “With a startup that was born in the cloud and maybe their business is focused on data, they’ve got much greater data integration needs because data is such a core part of what they do as a business. Another company that isn’t based on that data, and they get data as a byproduct of what they do and they use data out of that to get better at what they do, their data integration needs might not be as significant as that startup that was born with data.” Data integration tool providers are adding new functionalities such as AI and machine learning to predict what’s happening and also to build automations that will handle integrations. AI and ML are being infused into these data management and data integration products and using intelligence about the data and sometimes intelligence from the data to automate some of those activities. APIs are also becoming more prevalent to integrate between businesses as opposed to the traditional EDI process. APIs are much more agile, much quicker to market, and a lot easier for customers to connect in the data integration space, according to Albritton. Another trend is bringing analytics capabilities to the cloud which opens up a lot more resources. “We’re seeing people moving into the cloud because of cost and scalability, but now once you’ve put your analytics workload in the cloud, you’ve got a lot of different cloud services such as AI and ML that are now available for you when you’re ready,” Gajajiva said. z



A guide to data integration tools

n Boomi’s low-code, unified platform helps organizations adapt and overcome the fundamental challenges of today’s business market by helping to instantly connect everyone to everything. Over the past seven years, Boomi has grown and evolved its Boomi AtomSphere Platform. The platform now includes the option of data catalog and preparation, powered by next-generation AI capabilities of Unifi Software, which Boomi acquired in December 2019.
n IBM DataStage of IBM Cloud Pak for Data combines data integration with DataOps, governance and analytics. Features include a full spectrum of data and AI services, parallel engine and automated load balancing, metadata support for policy-driven data access, automated delivery pipelines for production, and prebuilt connectors and stages. Other data integration products from IBM include InfoSphere Information Server for Data Integration and IBM BigIntegrate.
n Informatica has been named a leader in the Gartner Magic Quadrant for Data Integration Tools for 15 consecutive years. Its integration hub includes the ability to publish/subscribe curated data, automated data processing and storage on Hadoop, governed data integration, and cloud orchestration.
n Microsoft offers SQL Server Integration Services for on-premises data integration tasks and Azure Data Factory for Azure-based data integration tasks. According to the Gartner Magic Quadrant for Data Integration Tools, users turn to Microsoft’s data integration solutions for its low total cost of ownership, speed of implementation, ease of use and ability to integrate with other Microsoft services.
n Mulesoft’s Anypoint Platform enables businesses to spend less time on the communication between databases and applications and more time working on core business processes. Businesses can easily synchronize, share, migrate and manage data with visibility and control over application, performance and operational management.

FEATURED PROVIDERS

n CData Software: CData Software is a leader in data access and connectivity solutions. It specializes in the development of data drivers and data access technologies for real-time access to online or on-premise applications, databases and web APIs. The company is focused on bringing data connectivity capabilities natively into tools organizations already use. It also features ETL/ELT solutions, enterprise connectors, and data visualization.
n Matillion: Matillion’s data transformation software empowers customers to extract data from a wide number of sources, load it into their chosen cloud data warehouse (CDW) and transform that data from its siloed source state, into analytics-ready insights — prepared for advanced analytics, machine learning, and artificial intelligence use cases. Only Matillion is purpose-built for Snowflake, Amazon Redshift, Google BigQuery, and Microsoft Azure, enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Trusted by companies of all sizes to meet their data integration and transformation needs, Matillion products are highly rated across the AWS, GCP and Azure Marketplaces. Matillion is dual-headquartered in Manchester, UK and Denver, Colorado. Learn more about how you can unlock the potential of your data with Matillion’s cloud-based approach to data transformation. Visit us at www.matillion.com.
n Red Hat: Red Hat Integration is a comprehensive set of integration and messaging technologies that provide developers and architects with cloud-native tools for connecting applications and systems. It offers capabilities for application and API connectivity, API management and security, data integration and transformation, service composition, service orchestration, real-time messaging, data streaming, change data capture, and cross-datacenter consistency — all combined with a cloud-native platform and toolchain to support the full spectrum of modern application development.
n Oracle Data Integrator is a comprehensive data integration platform that is capable of handling high-volume, high-performance batch loads, event-driven, trickle-feed integration processes, and SOA-enabled data services. The latest version of the solution improved its developer productivity capabilities and user experience with a redesigned user interface and deeper integrations.
n Precisely, previously Syncsort, integrates data with cloud and next-generation platforms as well as IT operations solutions in order to unlock the full potential of enterprise data. The company offers security information and event management (SIEM) solutions, real-time CDC and ETL solutions, IT operations analytics and management, and cloud data warehousing.
n Qlik’s data integration platform, when combined with the company’s analytics platform and its data literacy-as-a-service offering, delivers an end-to-end approach to Active Intelligence. Unlike traditional approaches, Active Intelligence realizes the potential in data pipelines by bringing together data at rest with data in motion for continuous intelligence derived from real-time, up-to-date information.
n Software AG’s webMethods OneData enables users to easily reconcile, cleanse and synchronize master data. Additionally, it improves data integration and management by allowing users to make trusted data available to everyone through various access pointers, eliminate data redundancies, and manage data integration projects from mergers and acquisitions.
n SnapLogic’s Intelligent Integration Platform uses AI-powered workflows to speed all stages of IT integration projects – design, development, deployment, and maintenance – whether on-premises, in the cloud, or in hybrid environments.
n Talend Data Fabric includes a suite of apps to ensure enterprise data is complete, clean, uncompromised, and readily available to everyone who needs it throughout the organization. z


Guest View BY ADAM FRANK

The evolution of DevOps to DevAI

Adam Frank is a product and technology leader with more than 15 years of AI and IT Operations experience.

In this digital-first era, companies have accelerated the development of new services and priorities to support customer needs. A recent study by Windward Consulting shows that 64% of those surveyed are pivoting their new services or products as a result of the global pandemic, which has a direct impact on the customer experience. This rapid movement to digital services has increased the amount of data flowing into the IT team, putting extra pressure on DevOps practitioners to both maintain and improve upon the customer experience.
Unfortunately, this data is simply too much for the human mind to handle. It can quickly bog down dev teams, placing their focus on operations instead of innovation. The traditional role of DevOps has been forced into putting out fires to keep the lights on. But we’re shifting to a new way of working where practitioners can leverage automation to reduce toil and spend more time working on innovative services that provide a better overall customer experience.
The DevOps culture of tomorrow looks different than today because of automation. Because there’s less ops and more AI involved in the role of DevOps, I predict the role will evolve from DevOps to what we call DevAI. DevAI is about using AI to surface actionable information in a production environment or throughout the development and delivery pipeline. It’s for people and teams responsible for development and operations, embedded as part of the “code to customer pipeline.” This surfaces actionable information throughout the SDLC and, most importantly, prevents incidents from ever getting to production. If incidents do occur in production, the DevAI work will continue to surface the actionable information to enable automated roll backs — like code changes or configuration changes — optimizations and even modifications to correct the performance impact.
Here’s how DevOps will transform into the new DevAI and how practitioners can prepare for the shift.


Leverage observability and AI
The amount of data developers face on a daily basis is astronomical. It’s quite literally impossible for the human eye to evaluate it all and, more importantly, know how to use it to take action. Not to mention, most of it is irrelevant noise. Finding relevant information is a never-ending tunnel with no light at the end. And if practitioners spend their time sifting through data and working their way through the dark tunnel, they have no time to focus on the customer-impacting innovations that matter most.
Enter: intelligent observability. Together, observability and AI provide insight into what’s normal and what’s not. Once the normal is identified, dev teams have a better understanding of which events need action and which are simply noise filtering through the system. They have a clear success path through tangible metrics, like the time to identify an issue, the time to resolve those customer-facing issues, and average time to spin up a new service. These reflect customer satisfaction — ultimately pointing back to the business value they provide.
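"Normal versus not" is, at its simplest, a learned baseline plus a deviation threshold. The sketch below uses hypothetical response-time samples and a plain standard-deviation rule; production observability tools use far richer models, but the principle is the same.

```python
from statistics import mean, stdev

# Hypothetical response-time samples (ms) forming the learned baseline.
baseline = [212, 198, 205, 220, 201, 215, 208, 199, 210, 204]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mu) > threshold * sigma

for sample in [207, 216, 420]:
    status = "investigate" if is_anomalous(sample) else "normal"
    print(f"{sample} ms -> {status}")
```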

Improve the DevOps culture and way of working
As an industry, we’re experiencing an organizational change in the DevOps leadership model. In the past, DevOps practitioners have been told what to do by someone else who manages them. But in the shift from DevOps to DevAI, authority is distributed across the team and practitioners are empowered to self-discover new innovations with the help of automation.
Because trust is distributed from the top down, practitioners are more confident to surface potential new services or fixes to current system issues to the broader team. In this fast-paced environment, businesses want their dev teams to move at lightning speed — and the only way to achieve that is by adopting technology that empowers them to do so and giving them the trust they need to find new ways of working or dream up innovative, competitive services.
As teams implement automated solutions like intelligent observability, they not only increase productivity to meet the fast-paced demands of businesses today, but they also reduce toil by automating the tedious, repetitive tasks that beat them down day after day. In turn, they can focus on innovative, rewarding tasks that make their jobs more enjoyable and rewarding. As long as there’s software, it will always need to be improved. z



Industry Watch BY DAVID RUBINSTEIN

Flash is gone, but its legacy lives on he news late last year that Adobe was pulling the plug on its plug-in Flash media player was not unexpected. The world has moved on from plug-ins, and as we have seen over the years, better solutions come along to replace older ones, even if that “OG” software dominated its space through ubiquity. Deals with Apple and Microsoft ensured that by 2001, the Flash player would be installed on 98% of computing devices available at that time. Flash, first created by engineers at now-defunct Macromedia, brought the world animations and video. But it did more than that. It brought a new generation of computer enthusiasts into software development, fascinated by the games that could be created for Flash and by websites that routinely employed multimedia offerings. “From my point of view, you have to look at what the web was at the time,” said Andrew Borman, digital games curator at The Strong National Museum of Play located in Rochester, N.Y. “It was very Web 1.0. It was relatively static web pages meant for people to consume what was there … mostly text. But as you get Flash, you’re able to do a lot more of those image heavy things that, you know, the early HTML standards really weren’t meant for multimedia. And that really started to allow for video — or animation would be more accurate — animation in a way that computers of the time could handle and internet connections of the time could handle. And that was really what made it special.” It was Flash that moved Web 1.0 from the consumption of static pages to enabling websites to provide a much fuller experience that previously could only be gotten from CD-ROM discs back then. And it became the de facto standard for implementing multimedia in websites. That, Borman said, is because it was the only software you could use to do it. Since then, he noted, there have been numerous efforts to recreate the same tooling in different other standards, such as HTML5, which overcame browser interoperability issues. Many good things, though, often have a but. In the case of Flash, its ‘but’ is that it was closed source. Borman said that users were locked into the Adobe ecosystem of having to use its Flash Creator program to build animations.

“The benefit of HTML5 is it can replicate much of what Flash seemed to have done in some way, shape, or form, sometimes a little more difficult to do, from talking to developers,” he explained. “But the benefit is it’s open source. So you can always see what’s going on, and that will hopefully limit a lot of the security issues that plagued Flash over the years. And it really comes without a lot of the CPU load that came with Flash.”

Most of all, though, Borman was nostalgic about everything Flash. He made the point that it made gamers out of people who wouldn’t necessarily call themselves gamers. “My mom sat down and played games on Pogo, and all sorts of other Flash sites, and she is not a gamer. That part of it is so important. If you look at Flash as a video game platform alone, there were hundreds of millions of people that installed Flash, and that’s a lot of people playing games.”

And, because Flash was inexpensive compared to game console development kits, it opened up possibilities for potential game designers that they just didn’t have before. “One thing I think that perhaps people don’t recognize enough about Flash is that by making all these games accessible, you got a lot of people involved in computer science that may not otherwise have gotten into computer science. People just started playing games and thought they were cool, and wondered ‘How did they do that? I want to write a game,’” Borman said.

Not only did Flash create an opening for future developers, but Flash games — wittingly or not, according to Borman — “made computers accessible to a lot of people. It made people interested in computers in a lot of ways, and I think helped drive the whole move towards ubiquity of every home having a computer.”

At the museum, Borman gets to look back on some of the early games and animations created with Flash, and marvel at how influential it was in changing what could be done with websites. “Some of it was really great. And now you look at TV shows and that sort of thing, they really have a Flash look to them. So it opened up so many avenues, even beyond games. There’s all these historical elements to it that are still really valuable.”

David Rubinstein is editor-in-chief of SD Times.





presents

Virtual VSM DevCon
March 10, 2021

Join your peers for a day of learning. Virtual VSM DevCon is a one-day, digital conference examining the benefits of creating and managing value streams in your development organization. At Virtual VSM DevCon, you will learn how to apply value stream strategies to your development process to gain efficiencies, improve quality and cut costs.

Taught by leaders on the front lines of Value Stream

Highlights from last year’s sessions:
• An examination of the VSM market
• What exactly is value?
• Slow down to speed up: Bring your whole team along on the VSM journey
• Why developers reject Value Stream Management — and what to do about it
• You can measure anything with VSM. That’s not the point
• Who controls the flow of work?
• Tying DevOps value streams to business success
• Making VSM actionable
• Value Stream Mapping 101
• How to integrate high-quality software delivery into the Value Stream
• Transitioning from project to product-aligned Value Streams
• The 3 Keys to Value Stream infrastructure automation

REGISTER FOR FREE TODAY! https://events.sdtimes.com/valuestreamdevcon

