SD Times July 2020


JULY 2020 • VOLUME 2, ISSUE NO. 37 • $9.95 • www.sdtimes.com


EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS: Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlewkowicz@d2emerge.com
ART DIRECTOR: Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS: Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi mleonardi@d2emerge.com
LIST SERVICES: Jessica Carroll jcarroll@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER: Jon Sawyer 603-547-7695 jsawyer@d2emerge.com

PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC
80 Skyline Drive, Suite 303
Plainview, NY 11803
www.d2emerge.com



Contents

VOLUME 2, ISSUE 37 • JULY 2020

NEWS
4  News Watch
10 Atlassian delivers information to DevOps teams

FEATURES
6  Jamstack brings front-end development back into focus
12 Feature experimentation: Walk before you run
16 Continuous testing isn't optional anymore
22 The modern world of application monitoring

COLUMNS
28 GUEST VIEW by Lin Sun: 5 reasons I'm excited about Istio's future
29 ANALYST VIEW by Jason English: The software supply chain disrupted
30 INDUSTRY WATCH by David Rubinstein: BizOps: Bridging the age-old divide

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Report: Python overtakes Java this year
For the past few years, reports have indicated that Python was quickly rising to the top of the list of the most used programming languages. Now Python has finally crossed the finish line. JetBrains revealed its 2020 State of the Developer Ecosystem report, which found that while Java is the most widespread primary language, Python surpassed it in the list of languages used in the last year. According to the report, it is the most studied language, with 30% of respondents starting or continuing to learn Python in the last 12 months. Python is also one of the top three languages developers plan to adopt or migrate to, accompanied by Go and Kotlin.

Digital.ai’s value stream acquisitions
TPG Capital-backed company Digital.ai has announced the acquisition of Numerify and Experitest. Numerify is an artificial intelligence analytics company that will provide AI-powered business analytics, while Experitest is a continuous quality provider that will add continuous testing to Digital.ai’s value stream platform. According to the company, Numerify will become the “central nervous system” of the Digital.ai platform. It will advance the platform’s machine learning capabilities with its analytics engine and provide AI-driven insights for DevOps teams. Experitest will bring on its continuous quality solutions to help the company reduce risk, provide error-free experiences, and deliver highly protected apps at scale.

Microsoft brings VS Code Go to the Go project
The Go team has officially stepped up as the new maintainer of Microsoft’s Go extension for VS Code. The news comes as 41% of Go developers stated that VS Code is their primary code editor, according to the Go developer survey. “Both the Go and Visual Studio Code teams recognize the importance of Visual Studio Code to the Go community and believe strongly in an open tooling ecosystem for Go developers,” the VS Code team wrote in a blog post.

Dart introduces null safety
The Dart programming language has reached a new major milestone with the technical preview of its new null safety feature, the Dart team announced. According to the team, this feature has been in development for over a year, and is the biggest addition to the Dart language since Dart 2. “Null safety helps you avoid a class of bugs that are often hard to spot, and as an added bonus enables a range of performance improvements,” team members Filip Hracek and Michael Thomsen wrote in a post.
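Dart code itself is outside the scope of this roundup, but the class of bug null safety eliminates is easy to illustrate with an analogous feature, TypeScript's strict null checking. A minimal sketch, offered only as an analogy:

```typescript
// Analogous to Dart's null safety: with strict null checks on, the
// compiler forces the null case to be handled before the value is used.
function greetingLength(name: string | null): number {
  // return name.length;   // compile-time error: 'name' is possibly 'null'
  if (name === null) {
    return 0;              // the null case must be handled explicitly
  }
  return name.length;      // here the compiler knows 'name' is a string
}
```

Part of the performance angle the Dart team mentions is that once a value is proven non-null at compile time, no runtime null checks are needed for it.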

Android 11 beta now available
Android 11 Beta is now available for early adopters and developers, offering new ways to connect devices and media and significantly improving privacy settings. To make communication easier, Android 11 moves all conversations across multiple messaging apps to a dedicated space in the notification section, making it easier to see, respond to, and manage conversations all in one place, the team explained. Every Android release will also have new privacy and security controls that let users decide how and when data on their device is shared.

Undo brings debugger to Java
Undo is trying to make it easier for developers to debug Java applications. It recently announced an early access beta program for its product LiveRecorder for Java. LiveRecorder allows developers to record, replay, and reverse debug Java applications. LiveRecorder is the company’s flagship product, but up until this beta it only supported C/C++ applications. “Obviously C++ is a big market, but it’s relatively niche. Java is much larger and gets particularly in that application space, so we’re very excited to have an offering in that space,” Greg Law, co-founder and CTO of Undo, told SD Times.

Adobe Flash EOL is coming
Adobe is reminding users that it will stop distributing and updating Flash Player after December 31, 2020. Once the go-to choice for building web applications, Flash began to show its age during the inception of smartphone development — starting with the release of the iPhone in 2007, which did not support the technology. Adobe will continue issuing regular Flash Player security patches, maintain OS and browser compatibility, and add features and capabilities as determined by Adobe through the end of 2020.

People on the move

• Former SAP chief product officer Abdul Razack has joined Google Cloud as its vice president of technology solutions and solutions engineering. Razack has more than 25 years of experience in enterprise technology. He will be responsible for defining the company’s overarching solution strategy, which spans infrastructure, application modernization, data analytics, and cloud AI.

• Sandra Bergeron has been added to the Sumo Logic board of directors as an independent board member. Bergeron is a security industry veteran currently on the boards of directors of F5 and Qualys. Additionally, Sumo Logic expanded its advisory board to include Lisa Hamitt, an AI industry veteran. According to the company, these additions will help advance its growth and market leadership as well as provide real-time insights to digital businesses.

• Tasktop has promoted Nicole Bryan to chief product officer. Bryan was previously the vice president of product development at the company. In her new role, she will oversee all aspects of product development, such as engineering, product strategy, product operations, product marketing and product management.



Project Tye makes it easier to work with microservices
According to Microsoft, developers often want to run more than one service or project at once when building an app. This can be hard to set up, and even once it is set up, there is a steep learning curve to get a distributed app running on a platform like Kubernetes. Microsoft’s Project Tye aims to address this challenge and has two main goals. The first goal is making the development of microservices easier by running many services with a single command, using dependencies in containers, and discovering addresses of other services using simple conventions. The second goal is automating deployment of .NET applications to Kubernetes. This is achieved by automatically containerizing .NET apps, generating Kubernetes manifests with minimal configuration, and using a single configuration file.

JetBrains joins the big data space
JetBrains announced that Big Data Tools is now available as an EAP for DataGrip and PyCharm Professional. This addition aims to address problems that involve both code and data. The company first indicated plans to support more big data tools last year when it announced a preview of the IntelliJ IDEA Ultimate plugin with Apache Zeppelin notebooks integration. Since the plugin started with only Scala support, it made sense to only make it available for IntelliJ IDEA Ultimate. But now that the team has added support for a wider set of scenarios and tools, JetBrains felt it was time to extend out the capabilities and make it available to other IDEs. “We believe the plugin will extend the capabilities of DataGrip users when it comes to working with distributed file storage systems and columnar file formats. At the same time, the users of PyCharm who use PySpark or who also work with data will benefit from having this plugin available in their IDE,” the team wrote in a post.

GitLab beefs up DevSecOps portfolio
GitLab announced two acquisitions focused on providing security to its platform. Peach Tech is a security firm that specializes in protocol fuzz testing and dynamic application security testing, and Fuzzit is a continuous fuzz testing solution. Through fuzz testing, also referred to as fuzzing, developers can provide bad inputs to a program to find bugs, crashes, and faults that could be exploited. With the addition of coverage-guided and behavioral fuzz testing into the DevSecOps toolchain, organizations can find vulnerabilities and weaknesses that traditional QA testing techniques often miss, according to GitLab.
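Fuzzing is simple at its core: hammer a program with hostile or random input and watch for crashes. The toy harness below is a hypothetical sketch of that idea only; it is not GitLab's, Peach Tech's or Fuzzit's tooling, and coverage-guided fuzzers are far smarter about choosing which inputs to try next:

```typescript
// Toy fuzzer: throw random strings at a target and collect any
// inputs that make it crash.
function randomInput(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < len; i++) {
    s += String.fromCharCode(Math.floor(Math.random() * 0x10000));
  }
  return s;
}

function fuzz(target: (input: string) => unknown, iterations: number): string[] {
  const crashes: string[] = [];
  for (let i = 0; i < iterations; i++) {
    const input = randomInput(64);
    try {
      target(input);       // bad input should be rejected gracefully...
    } catch {
      crashes.push(input); // ...a throw here is a bug worth triaging
    }
  }
  return crashes;
}

// Sample target with a planted bug: it crashes on NUL characters
// instead of rejecting them gracefully.
function parseKeyValue(text: string): [string, string] | null {
  if (text.includes("\u0000")) {
    throw new Error("unexpected NUL"); // the planted bug the fuzzer should find
  }
  const i = text.indexOf("=");
  return i < 0 ? null : [text.slice(0, i), text.slice(i + 1)];
}

console.log(fuzz(parseKeyValue, 10_000).length, "crashing inputs found");
```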

Chef’s integrated DevSecOps portfolio
Chef announced new capabilities designed to enable enterprises to build competitive advantage through automation and DevSecOps innovations. The new Chef Compliance solution combines existing Chef technology with policy-driven remediation and content based on Center for Internet Security (CIS) benchmarks. It works through a five-step process across the compliance life cycle: acquire access to CIS-certified and Chef-hardened and curated content, define compliance baselines, detect and monitor the compliance posture by detecting deviations, remediate with newly available remediation capabilities, and maintain comprehensive and up-to-date visibility across heterogeneous estates.

Harness releases Continuous Efficiency
Harness has announced a new solution designed to make it easier for development and DevOps teams to manage the cost of containerized applications and microservices that are running in cloud environments. With Continuous Efficiency, these teams are given immediate visibility into the cost of their applications, microservices, and clusters. “Skyrocketing cloud costs are an unsolved problem that burden startups and large enterprises alike. The challenge is how to balance developer self-service and oversight of cloud resources with visibility, predictability and governance around the public cloud,” said Jyoti Bansal, CEO and co-founder of Harness. “With budgets under the microscope at every company, the lesson is clear: equip developers with the tools and visibility they need to optimize cost, just like they get today for managing application performance and quality.”

CloudBees expands Software Delivery Management platform
CloudBees has announced new integrations with top continuous integration and continuous delivery (CI/CD) engines for its Software Delivery Management (SDM) platform. The new integrations include Google Cloud Build and Tekton. “Our customers want to be able to move faster and innovate,” said Pali Bhat, vice president, product and design at Google Cloud. “We’re pleased to work with CloudBees to integrate its platform with Cloud Build and Tekton pipelines, expanding our partnership to enable greater DevOps velocity and accelerate time to market for our joint customers.” Integrated capabilities will include:
• A central view of all product and feature development
• Ability to create policies that trigger actions
• Ability to create reports related to development
• Real-time insights into on-time delivery of features
• Real-time value stream management for modelling and visualizing the software delivery process
• A flexible application framework for linking tools together in the toolchain


Jamstack brings front-end development back into focus

BY CHRISTINA CARDOZA

Businesses that want to attract, engage and retain more online customers need to provide an exceptional front-end solution. It’s the first thing users see when they come to a website, and it’s the first impression digital businesses can give. Traditionally, when front ends are coupled with the back end, developers have to be full-stack experts and be able to build a full-stack solution, according to Guillermo Rauch, CEO of Vercel, a web development solution provider.

“In some ways what was happening was you weren’t getting your cake and eating it too, because the back end wasn’t strong enough and the front end was quite limited,” said Rauch. Further, a website that required a web server constantly running to deliver a program often led to site lag times, and left the system more open for attack, according to Matt Biilmann, CEO and co-founder of Netlify, a modern web development platform provider.

This development conundrum is now being addressed with a rising development and architectural approach called Jamstack, which comes with the promise of providing faster, more accessible, more maintainable and globally available websites and applications. Jamstack stands for JavaScript, API, and Markup. The term was created by Netlify in 2015, but has recently been gaining more traction. “We coined the term ‘Jamstack’ in 2015 to better define what developers were already starting to do — decouple the front- and back-end web and apps, focus on best practices of speed and availability, and redefine their workflows,” Biilmann explained.

According to Biilmann, as organizations have moved away from monolithic architectures to microservices, there has been a natural separation between the front end and the back end, enabling developers to focus on building that front-end layer and owning the whole life cycle around it. “As the web has progressed and the demands on the experiences we are building and the devices we are reaching have gone up, we have had to build layers of abstractions that take some of the complexity away and makes it possible for a developer to work without considering those lower layers of the stack. That has been one of the driving forces behind the idea of the Jamstack,” Biilmann said in a keynote at this year’s Jamstack Conference.

Jamstack leverages pre-rendering to help developers build faster websites, aims to provide a more secure infrastructure with fewer points of attack, is able to scale through global delivery, and speeds up the development and deployment cycle.

“This idea is that the stack has moved up a little. We have transcended from thinking about the stack in terms of the specific programming language we use on the server, from the web server we run on, or from the specific database, and instead [we are] thinking at the layer of what gets delivered to the end users in terms of pre-built markup, in terms of the JavaScript that runs directly in the browser, and in terms of these APIs we have access to. By doing this, we are able to let developers focus on building websites instead of focusing on infrastructure and we are able to make the performance part of the platform itself instead of making it something that developers have to have,” Biilmann said.



Jamstack defined


Jamstack is a front-end development approach for modern web development. “Jamstack was born of the stubborn conviction that there was a better way to build for the web. Around 2014, developers started to envision a new architecture that could make web apps look a lot more like mobile apps: built in advance, distributed, and connected directly to powerful APIs and microservices. It would take full advantage of modern build tools, Git workflows, new front-end frameworks, and the shift from monolithic apps towards decoupled front ends and back ends,” Matt Biilmann, CEO of Netlify, wrote in an ebook about Jamstack.

The ‘J-A-M’ in Jamstack stands for:

JavaScript: Going beyond just the programming language, the Jamstack leverages JavaScript’s advanced constructs, object syntax, variations and compilers. In addition to JavaScript, Jamstack solutions can be built with PHP, Ruby, Python and other languages. According to Netlify, it’s not about a collection of specific software and technologies; rather, it is a set of best practices.

APIs: These enable the front end to be separated from the back end, allowing for more modular development and the ability to leverage third-party tools.

Markup: Prebuilt markup enables websites to be delivered as static HTML files, which provides faster performance.

According to Netlify, some Jamstack best practices are:
• Serve the entire project directly from a CDN
• Put everything into Git to reduce contributor friction and simplify staging and testing workflows
• Take advantage of modern build tools such as Babel, PostCSS, and Webpack
• Automate builds using webhooks or a publishing platform
• Use atomic deploys to hold live changes until all changed files are uploaded
• Ensure your CDN can handle instant cache invalidation so you know “when a deploy went live, it really went live.”

—Christina Cardoza
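Concretely, a Jamstack page ships as prebuilt static HTML, and JavaScript running in the browser pulls in anything dynamic over an API. A minimal client-side sketch, in which the endpoint URL and element id are made up for illustration:

```typescript
// Runs in the browser on a prebuilt static page: the markup was
// generated at build time; only the dynamic data arrives at runtime.
interface Comment { author: string; text: string; }

async function hydrateComments(): Promise<void> {
  const list = document.getElementById("comments"); // id is hypothetical
  if (!list) return;
  const res = await fetch("https://api.example.com/comments?page=home");
  const comments: Comment[] = await res.json();
  for (const c of comments) {
    const li = document.createElement("li");
    li.textContent = `${c.author}: ${c.text}`;
    list.appendChild(li);
  }
}

document.addEventListener("DOMContentLoaded", () => { void hydrateComments(); });
```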

The rise of mobile has also contributed to the rise of Jamstack. “We saw the web reimagined for mobile apps. If you think about Spotify, no one thinks they should be downloading it every time they use it and at the same time no one thinks that they would be downloading all the music in the world on their phone either. There would be no room. You download the app, but you speak to a service to stream the music. That was what we saw the web would need in order to be viable and fight back,” Chris Bach, president and co-founder of Netlify, said.

While the Jamstack is not focused on specific technologies, it does provide a “prescription” for building web applications. Any project that tightly couples the client side with servers is not considered Jamstack. Some examples of this would be a site built with a server-side CMS, a single-page app with isomorphic rendering, and a monolithic server-run web app relying on a back-end language. “It is almost saying abide by this protocol and you are going to build a great website or a great application,” Vercel’s Rauch said.


Those protocols include:
1. Decoupling from the back end to allow the front end to be freely deployed globally, directly to a CDN
2. Prebuilding pages into static pages and assets
3. Leveraging APIs to talk to back-end services

Often, a misunderstanding is that the static pages Jamstack delivers are flat and boring, but Vercel’s Rauch explained that since you pre-render the page and attach JavaScript to it, when the visitor visits the page, JavaScript gets executed and the page comes to life.

“I tend to compare the Jamstack to the printing press,” Rauch explained. “The main idea is that you pre-render pages and then you distribute them throughout a global CDN, meaning you only do the computation once. When you think about printing your page and then being able to very cheaply and quickly duplicate it throughout the entire world, the server costs go down because you did the work of printing the page once and were able to clone it all over the world. That also means you can clone it right where the visitor is.”

Rauch continued, “Front end is the largest place for reinvention for companies. A lot of investment has gone into back-end technology and boring infrastructure, low-level technologies. What we noticed is there has been an underinvestment or under-appreciation of the technology that is actually closer to the customer.”

Netlify’s Biilmann believes that just as the LAMP stack (Linux, Apache HTTP Server, MySQL, and PHP) is no longer used as a term to create websites and web applications, Jamstack will eventually just become the way of doing things and won’t need to be referred to as the Jamstack anymore. “The Jamstack is going to succeed in a way where in a number of years we will stop calling it Jamstack because it will just be the way websites are built,” he said.
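Rauch's printing-press analogy maps directly onto a build step: render every page once, write the result to disk, and let a CDN copy the files close to every visitor. A bare-bones sketch of build-time pre-rendering, with the page data hard-coded here where a real site would pull it from a CMS or API:

```typescript
import { mkdirSync, writeFileSync } from "fs";

interface Page { slug: string; title: string; body: string; }

// "Print" each page once at build time; the CDN then clones the
// result all over the world, close to every visitor.
function renderPage(page: Page): string {
  return `<!doctype html>
<html>
<head><title>${page.title}</title></head>
<body>
<h1>${page.title}</h1>
<article>${page.body}</article>
<script src="/app.js"></script><!-- JavaScript attached to bring the page to life -->
</body>
</html>`;
}

const pages: Page[] = [
  { slug: "index", title: "Home", body: "Welcome!" },
  { slug: "about", title: "About", body: "Rendered once, at deploy time." },
];

mkdirSync("dist", { recursive: true });
for (const page of pages) {
  writeFileSync(`dist/${page.slug}.html`, renderPage(page));
}
```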

The state of the Jamstack in 2020
Five years after the term was coined, the Jamstack is starting to see rapid expansion, growth and maturity, according to a recent survey.

Jamstack vs serverless

It is common for developers to get Jamstack and serverless mixed up because Jamstack is a subset of serverless. Since Jamstack focuses on front-end development that is decoupled from the back end, it doesn’t require or depend on a server. “With the Jamstack, complex, monolithic applications could now be disassembled into small, independent components that are easier to parse and understand. The introduction of serverless and the emergence of the API further cemented the Jamstack as the perfect paradigm for building streamlined, and lightweight applications that scaled efficiently,” Divya Tagtachian, developer advocate at Netlify, wrote in a post.

According to Guillermo Rauch, CEO of Vercel, a web development solution provider, serverless is just such a vague term, while the Jamstack is more prescriptive. “With Jamstack, it tells you to pre-render markup, use JavaScript on the client side and query an API. If I tell you to build a website using serverless, you would look at me like ‘what are you talking about?’ When it comes down to building an application, I like to tell people how to actually do it, so I am a big fan of betting on Jamstack,” he explained.

Colby Fayock, a front-end engineer and UX designer, added that while Jamstack and serverless do have many similarities and philosophies, not all Jamstack apps are always going to be serverless apps. “Consider an app hosted in static storage on the cloud provider of your choice. Yes, you might be serving the app in a serverless way, but you might be dealing with an API that utilizes WordPress or Rails, both of which are certainly not serverless,” Fayock wrote in a post. “Combining these philosophies can go a long way, but they shouldn’t be confused as the same.”

—Christina Cardoza

The State of the Jamstack in 2020 survey revealed 44% of developers have been using it for a year, with 37% using it for 1-2 years. Eleven percent of the respondents reported they have been leveraging Jamstack for 4 or more years. The survey was conducted by Netlify and received more than 3,000 responses from software development professionals. While 36% of respondents leveraging the Jamstack are newer developers (with 4 years or less of experience), the survey found 38% of respondents using Jamstack have 8 or more years of experience.

“The overall picture of the Jamstack is that of a thriving community that is growing fast as a wave of mainstream adoption continues, driven by fantastic scaling, high performance, and workflows and tooling that developers love,” Laurie Voss, senior data analyst at Netlify, wrote in a post.

The reasons for using Jamstack included improving performance, uptime, speed of development, security, and compliance. The top use cases included building consumer software, internal tooling, and enterprise software.

The survey also asked about the Jamstack tooling ecosystem, and how satisfied developers were with the tools and frameworks available. Respondents revealed using the React, Gatsby, Next, Nuxt and 11ty JavaScript frameworks. Additionally, enterprise developers are more likely to use TypeScript than other developers, and GraphQL had the most satisfied users for API protocols.

Other findings included: Jamstack developers are building fully static sites, single page web apps, and fully dynamic sites; a third of respondents using Jamstack have sites that serve millions of users; and 63% of Jamstack developers don’t work at a purely tech company and come from advertising and marketing, education, media, finance and business support industries.

“With the continued growth of tools and services in this community ecosystem, along with so many powerful web properties redefining how developers can do more with less, the next wave of web development is here, and it’s the Jamstack,” said Biilmann.



DEVOPS WATCH

Atlassian delivers information to DevOps teams

BY CHRISTINA CARDOZA

Atlassian wants to improve the way development, IT operations and business teams work together by allowing them to share and get the right information in the tools they are already working in.

In a recent DevOps Trends Survey, the company found DevOps tools and practices that teams use to make their lives easier actually come with new challenges such as disconnected tools, manual processes and collaboration blockers.

“Teams often spend a lot of time inside tools doing coordination, status updates and tool configuration instead of getting back to the core benefits,” Suzie Prince, head of product for Bitbucket Cloud at Atlassian, told SD Times. “The best teams collaborate and have a shared understanding, and that’s how you respond best to your business needs.”

To provide better collaboration and a shared understanding among DevOps teams, the company is announcing 12 new features and integrations focused on bringing teams back to “collaboration, coding and building secure software instead of all this kind of coordination work,” Prince explained.

The new features follow the value chain through planning, tracking, building, continuous integration and deployment.

For planning and tracking, one key feature is the introduction of the “Your Work” dashboard in Bitbucket Cloud. The dashboard has been expanded to include assigned Jira issues that allow teams to move from one task to the next within the dashboard instead of having to jump around between tools. According to Prince, one of the major blockers the company sees is that developers have to look in multiple places to find work or update status.

[Image: The “Your Work” dashboard in Bitbucket Cloud now includes assigned Jira issues, enabling teams to more easily move from one task to the next without jumping between tools.]

Atlassian is also releasing a new code review experience to enable developers to easily identify changes, and create action items in one place with automation. The new DevOps Automation Triggers can automate low-level tasks like updating Slack or Twitter channels so teams can manually focus on more high-level tasks like features with high risk or high value.

For review, test and deployment phases, Atlassian released a new pull request experience for Bitbucket Cloud that enables faster code reviews with consolidated lists of tasks, integrated Jira issue creation, and activity feed filters.

The new Atlassian VS Code Integration aims to bring the development pipeline into developers’ editors with the ability to access their task list from Jira Software Cloud, perform code review, and use CI/CD tracking.

On the DevSecOps side of things, the company announced code insights in Bitbucket, which includes integration with Mabl for test automation, with Sentry for automated monitoring and with Snyk for catching critical security vulnerabilities early.

“This is powered by third-party integrations because we want that open tool chain. We know that the best teams are using many tools to do this work right. Integrations with Mabl, Sentry and Snyk really bring in this idea that as a developer is coding, if there are security vulnerabilities, we want to tell them at that moment so we can stop the vulnerabilities before they get to production,” said Prince.

Automated change management is also available with Jira Service Desk Cloud and Bitbucket Pipelines. This feature can pause the CI/CD process, create a change request and trigger a deployment once approved. The company’s risk assessment engine can score the risk of a change and auto-approve and deploy those changes. Additionally, the change management view can streamline high-risk changes and provide traceable change request information. Automated change management provides support for Bitbucket, Jenkins, CircleCI and Octopus Deploy.

If security vulnerabilities do happen in production, Atlassian also wants to give teams the power to escalate and troubleshoot quickly. The company is providing an incident investigation dashboard that will provide the potential cause and the developer that made the change so the change can be rolled back and the incident resolved.

Opsgenie and Bitbucket Cloud integration also puts alerts in one place as well as filters out the noise so the right people can focus on the right issues and take the right action, Prince explained.

“DevOps hasn’t yet fulfilled its promise,” said Prince. “We want to bring time back for developers; we want to get them back to coding, to innovating, to delivering value.”
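Automation triggers of this sort are, at bottom, event-to-action mappings. The sketch below shows only that general pattern; it is not Atlassian's API, and the event names and webhook URL are invented for illustration:

```typescript
// Generic shape of a DevOps automation trigger: a pipeline event
// fires, and a routine chore runs without a human in the loop.
type PipelineEvent = "pull_request.merged" | "deployment.succeeded";

async function notifySlack(message: string): Promise<void> {
  await fetch("https://hooks.slack.com/services/T000/B000/XXX", { // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: message }),
  });
}

const rules: Record<PipelineEvent, (detail: string) => Promise<void>> = {
  "pull_request.merged": (d) => notifySlack(`PR merged: ${d}`),
  "deployment.succeeded": (d) => notifySlack(`Deployed: ${d}`),
};

export async function onEvent(event: PipelineEvent, detail: string): Promise<void> {
  await rules[event](detail); // route the low-level event to its automated action
}
```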





Feature experimentation: Walk before you run

BY CHRISTINA CARDOZA

Software innovation doesn’t happen without taking risks along the way. But risks can be scary for businesses afraid of making mistakes. There is another way, according to Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider. Feature experimentation, he said, allows businesses to go to market quicker while improving product quality and minimizing the fear of failure.

“I like to think of feature experimentation as a safety net. It’s something that gives people the confidence to do something bold or risky,” he said. “Imagine you are jumping on a trapeze with no net. You’re going to be really scared to take even the smallest step because if you fall, you’re going to really hurt yourself. When there is a net, you know the worst thing that can happen is you land on the net and bounce a little bit.”

Feature experimentation is that net that allows you to leap, but catches you if you fall, Noronha explained. It enables businesses to take small risks, roll them out to a few users, and measure the impact of changes before releasing them to 100% of the user base.

Christopher Condo, a principal analyst at the research firm Forrester, said, “In order to be innovative, you need to really understand what your customers want and be willing to try new experiences. Using feature experimentation allows businesses to be more Agile, more willing to put out smaller pieces of functionality, test it with users and continue to iterate and grow.”

However, there are still some steps businesses need to take before they can squeeze out the benefits of feature experimentation. They need to learn to walk before they can run.

Progressive Delivery: Walk

Progressive delivery is the walk that comes before the run (feature experimentation), according to Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider. Progressive delivery assumes you have the “crawl” part already in place, which is continuous delivery and continuous integration.

For instance, teams need to have a centralized source of information in a place where developers can check in code and have it automatically tested for basic sanity with no human intervention, Karow explained. Without that, you won’t see the true promise of progressive delivery, added John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company.

“Imagine a developer is going to work on a feature, take a copy of the source code and take a copy of their plan and work on it for some time. When they are done, they have to merge their code back into the source code that is going to go out into production,” Karow explained. “In the meantime, other developers have been making other changes. What happens is literally referred to in the community as ‘merge hell.’ You get to a point where you think you finished your work and you have to merge back in, and then you discover all these conflicts. That’s the crawl stuff. It’s about making changes to the software faster and synchronizing with coworkers to find problems in near real-time.”

Once you have the crawl part situated, the progressive delivery part leverages feature flags (also known as feature toggles, bits or flippers) to get features into production faster without breaking the application. According to Optimizely’s Noronha, feature flags are one layer of the safety net that feature experimentation offers. They allow development teams to try things at lower risk and roll out slowly and gradually, enabling developers to expose key functionality with the goal of catching bugs or errors before they become widespread. “It’s making it easier to roll things out faster, but be able to stop rollouts without a lot of drama,” Karow said.
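Stripped of vendor tooling, a feature flag is just a conditional plus a dial for how many users see the new path. A hand-rolled sketch of a percentage rollout follows; flag services such as Split or LaunchDarkly layer targeting, auditing and kill switches on top of this basic idea, and the flag and function names here are hypothetical:

```typescript
import { createHash } from "crypto";

// Hash user + flag into a stable bucket (0-99) so the same user gets
// the same decision on every visit as the rollout percentage grows.
function bucket(userId: string, flagName: string): number {
  const digest = createHash("sha256").update(`${flagName}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

// The percentage is the dial: start small, watch the error logs and
// metrics, then widen exposure only when nothing blows up.
function isEnabled(userId: string, flagName: string, rolloutPercent: number): boolean {
  return bucket(userId, flagName) < rolloutPercent;
}

// Usage: expose the new code path to roughly 5% of users.
if (isEnabled("user-42", "new-checkout", 5)) {
  // newCheckoutFlow();      // hypothetical new path behind the flag
} else {
  // classicCheckoutFlow();  // everyone else keeps the proven path
}
```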

Some examples of feature flags

Feature flags come in several different flavors. Among them are:

• Release flags that enable trunk-based development. “Release Toggles allow incomplete and un-tested codepaths to be shipped to production as latent code which may never be turned on,” Pete Hodgson, an independent software delivery consultant, wrote in a post on MartinFowler.com.

• Experiment flags that leverage A/B testing to make data-driven optimizations. “By their nature Experiment Toggles are highly dynamic - each incoming request is likely on behalf of a different user and thus might be routed differently than the last,” Hodgson wrote.

• Ops flags, which enable teams to control operational aspects of their solution’s behavior. Hodgson explained: “We might introduce an Ops Toggle when rolling out a new feature which has unclear performance implications so that system operators can disable or degrade that feature quickly in production if needed.”

• Permission flags that can change the features or experience for certain users. “For example we may have a set of ‘premium’ features which we only toggle on for our paying customers. Or perhaps we have a set of “alpha” features which are only available to internal users and another set of “beta” features which are only available to internal users plus beta users,” Hodgson wrote.
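One way to see how the flavors differ is to model each kind with the data its decision needs. The schema below is hypothetical, for illustration only; it is not Hodgson's or any vendor's format:

```typescript
// Each flavor of flag carries different decision inputs.
type Flag =
  | { kind: "release"; on: boolean }                 // latent code, on/off
  | { kind: "experiment"; variants: string[] }       // A/B routing per user
  | { kind: "ops"; degraded: boolean }               // operational kill/degrade dial
  | { kind: "permission"; allowedPlans: string[] };  // premium/alpha/beta gating

function evaluate(flag: Flag, user: { id: string; plan: string }): boolean | string {
  switch (flag.kind) {
    case "release":
      return flag.on;
    case "experiment": {
      // crude stable hash of the user id picks a variant (simplified)
      let h = 0;
      for (const ch of user.id) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
      return flag.variants[h % flag.variants.length];
    }
    case "ops":
      return !flag.degraded;
    case "permission":
      return flag.allowedPlans.includes(user.plan);
  }
}

// Example: a permission flag gating a premium feature.
console.log(evaluate({ kind: "permission", allowedPlans: ["premium"] },
                     { id: "u1", plan: "premium" })); // true
```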


One way to look at it is through the concept of canary releases, according to Kodumal: the idea of being able to release some change and control the exposure of that change to a smaller audience, to validate the change before rolling it out more broadly.

These flags help minimize the blast radius of possible messy situations, according to Forrester’s Condo. “You’re slowly gauging the success of your application based on: Is it working as planned? Do customers find it useful? Are they complaining? Has the call value gone up or stayed steady? Are the error logs growing?”

As developers implement progressive delivery, they will become better at detecting when things are broken, Condo explained. “The first thing is to get the hygiene right so you can build software more often with less drama. Implement progressive delivery so you can get that all the way to production. Then dip your toes into experimentation by making sure you have that data automated,” said Split’s Karow.

Experimenting with A/B testing

A/B testing is one of the most common types of experiments, according to John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company. It is the method of comparing two versions of an application or functionality. Previously, it was more commonly used for front-end or visual aesthetic changes done to a website rather than a product. For instance, one could take a button that was blue and make it red, and see if that drives more clicks, explained Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider. “In the past several years, we’ve really transitioned to focusing more on what I would call feature experimentation, which is really building technology that helps people test the core logic of how their product is actually built,” he said.

A/B testing is used in feature experimentation to test out two competing theories and see which one achieves the result the team is looking for. Christopher Condo, a principal analyst at the research firm Forrester, explained: “It requires someone to know and say ‘I think if we alter this experience to the end user, we can improve the value.’ You as a developer want to get a deeper understanding of what kind of changes can improve the UX, and so A/B testing comes into play now to show different experiences from different people and how they are being used.”

According to Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider, this is especially useful in environments where a “very important person” within the business has an opinion, or the “highest paid person” on the team wants you to do something and a majority of the team members don’t agree. He explained that what someone thinks is going to work doesn’t work 8 or 9 times out of 10. But with A/B testing, developers can still test out that theory, and if it fails they can provide metrics and data on why it didn’t work without having to release it to all their customers.

A good A/B test statistical engine should be able to tell you within a few days which experience or feature is better. Once you know which version is performing better, you can slowly replace it and continue to iterate to see if you can make it work even better, Condo explained.

Kodumal explained that A/B testing works better with feature experimentation because in progressive delivery the customer base you are gradually delivering to is too small to run full experiments on and achieve the statistical significance of a fully rigorous experiment. “We often find that teams get value out of some of the simpler use cases in progressive delivery before moving onto full experimentation,” he said.

—Christina Cardoza
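Mechanically, an A/B test needs two things: stable assignment of each user to a variant, and a comparison of how the variants perform. A toy sketch of both halves, with hypothetical numbers; a real statistical engine would also compute significance before calling a winner:

```typescript
// Stable 50/50 split: the same user always lands in the same variant,
// so their experience doesn't flip between visits.
function assignVariant(userId: string, experiment: string): "A" | "B" {
  let h = 0;
  for (const ch of `${experiment}:${userId}`) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic string hash
  }
  return h % 2 === 0 ? "A" : "B";
}

function conversionRate(conversions: number, visitors: number): number {
  return visitors === 0 ? 0 : conversions / visitors;
}

// Toy readout; no significance test here.
const a = conversionRate(130, 2400); // variant A
const b = conversionRate(162, 2350); // variant B
console.log(`A=${a.toFixed(3)} B=${b.toFixed(3)}:`,
            b > a ? "B looks better" : "A looks better");
```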

Feature experimentation: Run

Feature experimentation is similar to progressive delivery, but with better data, according to Karow. “Feature experimentation takes progressive delivery further by looking at the data and not just learning whether or not something blew up, but why it did,” he said. By being able to consume the data and understand why things happen, it enables businesses to make better data-driven decisions. The whole reason you do smaller releases is to actually confirm they were having the impact you were looking for, that there were no bugs, and that you are meeting users’ expectations, according to Optimizely’s Noronha.

It does that through A/B testing, multi-armed bandits, and chaos experiments, according to LaunchDarkly’s Kodumal. A/B testing tests multiple versions of a feature to see how each is accepted. Multi-armed bandits are a variation of an A/B test: instead of waiting for a test to complete, they use algorithms to adjust traffic allocations as results come in. And chaos experiments refer to finding out what doesn’t work rather than looking for what does work.

“You might drive a feature experiment that is intended to do something like improve engagement around a specific feature you are building,” said Kodumal. “You define the metric, build the experiment, and validate whether or not the change being made is being received positively.”
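A multi-armed bandit can be sketched in a few lines. The epsilon-greedy strategy below is one simple variant of the idea, with hypothetical arm names; production bandits typically use more careful algorithms such as Thompson sampling:

```typescript
// Epsilon-greedy bandit: mostly exploit the best-performing variant
// so far, but keep exploring the others a fraction of the time.
interface Arm { name: string; shows: number; successes: number; }

function successRate(arm: Arm): number {
  return arm.shows === 0 ? 0 : arm.successes / arm.shows;
}

function chooseArm(arms: Arm[], epsilon = 0.1): Arm {
  if (Math.random() < epsilon) {
    return arms[Math.floor(Math.random() * arms.length)]; // explore
  }
  return arms.reduce((best, arm) =>
    successRate(arm) > successRate(best) ? arm : best);   // exploit
}

// Usage: traffic allocation shifts on its own as outcomes are recorded.
const arms: Arm[] = [
  { name: "blue-button", shows: 0, successes: 0 },
  { name: "red-button", shows: 0, successes: 0 },
];
const arm = chooseArm(arms);
arm.shows++;       // record the impression...
// arm.successes++; // ...and the conversion, when it happens
```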



The reason why feature experimentation is becoming so popular is that it enables development teams to deploy code without actually turning it on right away. You can deploy it into production, test it in production without the general user base seeing it, and either release it or keep it hidden until it’s ready, Forrester’s Condo explained.

In some cases, a business may decide to release the feature or new solution to its users, but give them the ability to turn it on or off themselves and see how many people like the enhanced experience.

“Feature experimentation makes that feature a system of record. It becomes part of how you deliver experiences to your customers in a varied experience,” said Condo. “It’s like the idea of Google. How many times on Google or Gmail has it said ‘here is a brand new experience, do you want to use it?’ And you said ‘no I’m not ready.’ It is allowing companies to modernize in smaller pieces rather than all at once.”

What feature experimentation adds is the measurement side, where progressive delivery focused on just releasing smaller pieces. “Now you are comparing the 10% release against the other 90% to see what the difference is, measuring that, understanding the impact, quantifying it, and learning what’s actually working,” said Optimizely’s Noronha.

While it does reduce risk for businesses, it doesn’t eliminate the chance of failure. Karow explained that businesses have to be willing to accept failure or they are not going to get very far. “At the end of the day, what really matters is whether a feature is going to help a user or make them want to use it or not. What a lot of these techniques are about is how do I get hard data to prove what actually works,” Karow explained.

To get started, Noronha recommends looking for parts of the user experience that drive traffic and making simple changes to experiment with. Once teams prove it out and get it entrenched in one area, it can be quickly spread out to other areas more easily. “It’s sort of addictive. Once people get used to working in this way, they don’t want to go back to just launching things. They start to resent not knowing what the adoption of their product is,” he said.

Noronha expects progressive delivery and feature experimentation will eventually merge. “Everyone’s going to roll out into small pieces, and everyone’s going to measure how those things are doing against the control,” he said.

“What both progressive delivery and feature experimentation do is provide the ability to de-risk your investment in new software and R&D. They give you the tooling you need to think about decomposing those big risky things into smaller, achievable things where you have faster feedback loops from customers,” LaunchDarkly’s Kodumal added.

Feature experimentation is for any company with user-facing technology

Feature experimentation has already been used among industry leaders like eBay, LinkedIn and Netflix for years. “Major redesigns...improve your service by allowing members to find the content they want to watch faster. However, they are too risky to roll out without extensive A/B testing, which enables us to prove that the new experience is preferred over the old,” Netflix wrote in a 2016 blog post explaining its experimentation platform.

Until recently, that capability was only available to such large companies because it was expensive. The alternative was to build your own product, with the time and costs associated with that. “Now there is a growing marketplace of solutions that allow anyone to do the same amount of rigor without having to spend years and millions of dollars building it in-house,” said Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider.

Additionally, feature experimentation used to be a hard process to get started with, with no real guidelines to follow. What has started to happen is that the large companies are beginning to share how their engineering teams operate and provide more information on what goes on behind the scenes, according to Christopher Condo, a principal analyst at the research firm Forrester. “In the past, you never gave away the recipe or what you were doing. It was always considered intellectual property. But today, sharing information, people realize that it’s really helping the whole industry for everybody to get better education about how these things work,” Condo said.

Today, the practice has expanded into something that every major company with some kind of user-facing technology can and should take advantage of, according to Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider. Noronha predicts feature experimentation “will eventually grow to be adopted the same way we see things like source control and branching. It’s going to go from something that just big technology companies do to something that every business has to have to keep up.”

“Companies that are able to provide that innovation faster and bring that functionality that consumers are demanding, they are the ones that are succeeding, and the ones that aren’t are the ones that are left behind and that consumers are starting to move away from,” added John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company.


Continuous testing isn’t optional anymore

BY LISA MORGAN

DevOps and CI/CD practices are maturing as organizations continue to shrink application delivery cycles. A common obstacle to meeting time-to-market goals is testing, either because it has not yet been integrated throughout the SDLC or because certain types of testing are still being done late in the SDLC, such as performance testing and security testing.

Forrester Research VP and principal analyst Diego Lo Giudice estimates that only 20% to 25% of organizations are doing continuous testing (CT) at this time, and even their teams may not have attained the level of automation they want. “I have very large U.S. organizations saying, ‘We’re doing continuous delivery, we’ve automated unit testing, we’ve automated functional testing, we shifted those parts of the testing to the left, but we can’t leave performance testing to the end because it breaks the cycle,’” said Lo Giudice.

The entire point of shifting left is to minimize the number of bugs that flow through to QA and production. However, achieving that is not just a matter of developers doing more types of tests. It’s also about benefitting from testers’ expertise throughout the life cycle.

“The old way of doing QA is broken and ineffective. They simply focus on quality control, which is just detecting bugs after they’ve already been written. That’s not good enough and it’s too late. You must focus on preventing defects,” said Tim Harrison, VP of QA Services at software quality assurance consultancy SQA². “QA 2.0 extends beyond quality control and into seven other areas: requirements quality, design quality, code quality, process quality, infrastructure quality, domain knowledge and resource management.”

What’s holding companies back

Achieving CT is a matter of people, processes and technology. While some teams developing new applications have the benefit of baking CT in from the beginning, teams in a state of transition may struggle with change management issues.

“Unfortunately, a lot of organizations that hire their QA directly don’t invest in them. Whatever experience and skills they’re gaining is whatever they happen to come across in the regular course of business,” said SQA2’s Harrison.



Companies tend to invest more heavily in development talent and training than in testing. Yet application quality is also a competitive issue. “Testing has to become more of the stewardship that involves broader accountability and broader responsibility, so it’s not just the testers or the quality center, or the test center, but also a goal in the teams,” said Forrester’s Lo Giudice.

Also holding companies back are legacy systems and their associated technical debt. “If you’ve got a legacy application and let’s say there are 100 or more test cases that you run on that application, just in terms of doing regression testing, you’ve got to take all those test cases, automate them, and then as you do future releases, you need to build the test cases for the new functionality or enhancements,” said Alan Zucker, founding principal of project management consultancy Project Management Essentials. “If the test cases that you wrote for the prior version of the application now are changed because we’ve modified something, you need to keep that stuff current.”

Perhaps the biggest obstacle to achieving CT is the unwillingness of some team members to adapt to change because they’re comfortable with the status quo. However, as Forrester’s Lo Giudice and some of his colleagues warn in a recent report, “Traditional software testing has no place in modern app delivery.”

Deliver value faster to customers

CT accelerates software delivery because code is no longer bouncing back and forth between developers and testers. Instead, team members are working together to facilitate faster processes by eliminating traditional cross-functional friction and automating more of the pipeline. Manish Mathuria, founder and CEO of digital engineering services company Infostretch, said that engineering teams benefit from instant feedback on code and functional quality, greater productivity and higher velocity, metrics that measure team and deployment effectiveness, and increased confidence about application quality at any point in time. The faster internal cycles coupled with a relentless software quality focus translate to faster and greater value delivery to customers.

“We think QA should be embedded with a team, being part of the ceremony for Agile and Scrum, being part of planning, asking questions and getting clarification,” said SQA2’s Harrison. “It’s critical for QA to be involved from the beginning and providing that valuable feedback because it prevents bugs down the line.”

Automation plays a bigger role

Testing teams have been automating tests for decades, but the digital era requires even more automation to ensure faster release cycles without sacrificing application quality.

“It takes time to invest in it, but [automation] reduces costs because as you go through the various cycles, being promoted from dev to QA to staging to prod, rather than having to run those regression cycles manually, which can be very expensive, you can invest in some man-hours in automation and then just run the automation scripts,” said SQA2’s Harrison. “It’s definitely super valuable not just for the immediate cycle but for down the road. You have to know that a feature doesn’t just work well now but also in the future as you change other areas of functionality.”

However, one cannot just “set and forget” test automation, especially given the dynamic nature of modern applications. Quite often, organizations find that pass rates degrade over time, and if corrective action isn’t taken, the pass rate eventually becomes unacceptable. To avoid that, SQA2 has a process it calls “behavior-based testing,” or BBT, which is kind of like behavior-driven development (BDD) but focused on quality assurance. It’s a way of developing test cases that ensures comprehensive quantitative coverage of requirements. If a requirement is included in a Gherkin-type test base, the different permutations of test cases can be extrapolated out.


For example, to test a log-in form, one must test for combinations of valid and invalid username, valid and invalid password, and user submissions of valid and/or invalid data. “Once you have this set up, you’re able to have a living document of test cases and this enables you to be very quick and Agile as things change in the application,” said SQA2’s Harrison. “This also then leads to automation because you can draw up automation directly from these contexts, events, and outcomes.”

If something needed to be added to the fictional log-in form mentioned above, one could simply add another context within the given statement and then write a small code snippet that automates that portion. All the test cases in automation get updated with the new addition, which simplifies automation maintenance. “QA is not falling behind because they’re actually able to keep up with the pace of development and provide that automation on a continuous basis while keeping the pass rates high,” said Harrison.
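Harrison's log-in example translates naturally into a permutation table that a test runner can iterate over. The sketch below is a hypothetical illustration of that idea, not SQA²'s actual BBT tooling, and validateLogin is a trivial stand-in for the real system under test:

```typescript
// Enumerate every combination of valid/invalid credentials, then
// drive one assertion per generated case.
interface Input { value: string; valid: boolean; }

const usernames: Input[] = [
  { value: "alice@example.com", valid: true },
  { value: "not-an-email", valid: false },
];
const passwords: Input[] = [
  { value: "correct-horse-battery", valid: true },
  { value: "", valid: false },
];

// Trivial stand-in for the real log-in logic under test.
function validateLogin(username: string, password: string): boolean {
  return username.includes("@") && password.length >= 8;
}

for (const u of usernames) {
  for (const p of passwords) {
    const expected = u.valid && p.valid; // only fully valid input should pass
    const actual = validateLogin(u.value, p.value);
    console.assert(actual === expected,
      `login("${u.value}", "${p.value}") expected ${expected}, got ${actual}`);
  }
}
```

Adding a new context (say, a CAPTCHA field) means adding one more array, and every combination is regenerated automatically, which is the maintenance win the article describes.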

Service virtualization saves time

Service virtualization is another speed enhancer because one no longer waits for resources to be provisioned or competes with other teams for access to resources. One can simply mock up what’s needed in a service virtualization tool.

“I remember working on a critical application one time where everything had gone great in test, and then when we moved the application changes to prod, things ground to a halt because the configurations in the upper and lower environment differed,” said Project Management Essentials’ Zucker. “With service virtualization that goes away.”

Within the context of CT, service virtualization can kick off automatically, triggered by a developer pushing a feature out to a branch. “If you’re doing some integration testing on a feature and you change something in the API, you’re able to know that a new bug is affected by the feature change that was submitted. It makes testing both faster and more reliable,” said SQA2’s Harrison. “You’re able to pinpoint where the problems are, understand they are affected by the new feature, and be able to give that feedback to developers much quicker.”

Infostretch’s Mathuria considers service virtualization a “key requirement.” “Service virtualization plays a key role in eliminating the direct dependency and helps the team members move forward with their tasks,” said Mathuria. “Software automation engineers start the process of automation of the application by mocking the backend systems whether UI, API, end points or database interaction. Service virtualization also automates some of the edge scenarios.”
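At its simplest, virtualizing a service means standing up a stub that returns canned responses; that core idea is what commercial service virtualization tools industrialize with recording, protocol support and edge-case simulation. A minimal Node sketch, in which the route and payload are invented:

```typescript
import { createServer } from "http";

// A virtualized "inventory service": canned responses stand in for the
// real dependency, so tests never wait on a shared environment.
const server = createServer((req, res) => {
  if (req.url === "/api/inventory/sku-123") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ sku: "sku-123", inStock: 7 }));
  } else {
    res.writeHead(404); // simulate the unknown-SKU edge case
    res.end();
  }
});

server.listen(8080, () => {
  console.log("virtual service listening on http://localhost:8080");
});
```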

Service virtualization is another speed enhancer.
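Commercial service virtualization suites do far more than this, but the core move — replacing a dependency with a programmable stand-in — can be sketched in a few lines. The example below uses the open-source responses library to virtualize a hypothetical shipping-rate API inside a test; the endpoint and payload are invented for illustration.

```python
# Hedged sketch of API virtualization inside a test, using the open-source
# `responses` library. The shipping endpoint and payload are invented; a
# commercial service virtualization tool would offer much more than this.
import requests
import responses

@responses.activate
def test_checkout_without_the_real_shipping_service():
    # Register a canned reply so no real network call is made.
    responses.add(
        responses.GET,
        "https://shipping.example.test/rates",
        json={"service": "ground", "cents": 899},
        status=200,
    )
    rate = requests.get("https://shipping.example.test/rates", timeout=5).json()
    assert rate["cents"] == 899
```

In a CT pipeline, a test like this can run on every push to a branch — exactly the trigger Harrison describes.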

AI and machine learning are the future

The old way of doing QA is broken and ineffective, said SQA2's Harrison. Vendors have already started embedding AI and machine learning into their products in order to facilitate more effective continuous testing and to speed application delivery cycles even faster. The greatest value comes from pattern recognition pinpointing problem areas and providing recommendations for improving testing effectiveness and efficiency.

For example, Infostretch's Mathuria has observed that AI and machine learning help with test optimization, recommendations on reusability of the code base, and test execution analysis. "As the test suites are increasing day by day, it is important to achieve the right level of coverage with a minimum regression suite, so it's very critical to ensure that there are no redundant test scenarios," said Mathuria of test optimization.

Since test execution produces a large set of log files, AI and machine learning can be used to analyze them and make sense of the different logs. Mathuria said this helps with error categorization, setup and configuration issues, recommendations and deducing any specific patterns.

SQA2's Harrison has been impressed with webpage structure analysis capabilities that learn a website and can detect a breaking change versus an intended change. However, he warned that if XPaths have been used — such as to refer to a button that has just moved — the tool may automatically update the automation based on the change, creating more brittle XPaths than were intended.

The use cases for AI and machine learning are virtually limitless, but they are not a wholesale replacement for quality control personnel. They're "assistive" capabilities that help minimize speed-quality tradeoffs.
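The vendors' models are proprietary and the article doesn't describe their internals, but the error-categorization idea Mathuria mentions can be illustrated with a toy script: cluster failure messages so that hundreds of log lines collapse into a handful of buckets for a human to review. The messages and cluster count below are invented.

```python
# Toy illustration of ML-assisted error categorization — not any vendor's
# product: group similar test-failure messages with TF-IDF plus k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

failures = [  # invented examples of raw failure messages
    "TimeoutError: checkout service did not respond within 30s",
    "TimeoutError: inventory service did not respond within 30s",
    "ElementNotFound: xpath //button[@id='submit'] is missing",
    "ElementNotFound: xpath //input[@id='email'] is missing",
    "ConnectionRefused: could not reach the test database",
]

vectors = TfidfVectorizer().fit_transform(failures)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, message in sorted(zip(labels, failures)):
    print(label, message)  # similar failures land in the same bucket
```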



Forrester’s recommendations for building a successful continuous

O

rganizations are moving to continuous testing (CT) out of necessity because business competitiveness demands faster release cycles. In fact, teams can’t deliver on the promises of DevOps and CI/CD if testing isn’t part of continuous processes and the pipeline. Forrester Research VP and principal analyst Diego Lo Giudice and some of his colleagues, recently published a report that includes 12 essential mustdos that span people, practices and technology. The following is based on a recent interview with Lo Giudice in which he shared insights that are explained in greater detail in the report.

People

Continuous testing requires testing team transformation. Instead of having a centralized test center where all the testers reside, executing and managing all the tests, there's now a hub-and-spoke structure which includes a small center of excellence and testers that are assigned to different teams.

"The traditional way, you had a development team that would write the code and throw it over to the test center to do the testing to find bugs. That's not the way we operate today because testers are in the Agile teams and what's in the central team is a small team that's focusing on best practices," said Lo Giudice. "The central team is maybe recommending tools, harvesting the good practices from different teams and formalizing and sharing them among the teams. So, there's a shift from a centralized test center to a federated test center."

The testers working in Agile teams need Agile skills, including the ability to talk with developers and product owners from the business.

"That's a different testing persona," said Lo Giudice. "The testing persona of the past would look for bugs and be happy he found a lot of bugs. Now [that developers and testers are] on the same team, they have shared goals. Quality is one of them. The tester helps prevent bugs from happening, so the tester gets involved earlier on in designing the test cases, helping the developers formalize the unit testing, making sure that developers are doing their unit testing and that they're covering as much code as possible. [The testers are] helping developers build better quality code from the beginning."

Also, to align their efforts and jointly produce better quality code, developers and testers need to share common metrics.

"In the past, we never measured if the level of automation is improving. We never measured how long automation takes because when these teams measure the execution of automation, they check in code in a CI tool and execution kicks off. If it's suddenly taking longer, then something is going on," said Lo Giudice.

“That’s an indication that the release will be stopped, that the code that was checked in will go back to the team to figure out what the problem was.”

Practices

Behavior-driven development (BDD) is one of the practices teams are adopting. Many of them are using Cucumber, a BDD development tool, and Gherkin, its ordinary-language parser, because when test cases and test scenarios are written in ordinary language, everyone can understand them.

"It helps the collaboration between the product owner from the business, the tester and the developers. The product owner will write what he wants in terms of the behavior of the application together with the test cases, and then people will understand that language. He can start thinking about how to write the automation for it, and depending on the tools, that might be generated from the DSL," said Lo Giudice.
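Lo Giudice doesn't walk through an example, so here is a hedged sketch of the Cucumber/Gherkin idea using behave, a Cucumber-style BDD runner for Python. The feature text and step names are invented for illustration; Cucumber itself works the same way in other languages.

```python
# steps/login_steps.py — hedged sketch of Gherkin-driven BDD using the
# open-source `behave` runner. The matching feature file (invented for
# illustration) would contain ordinary language everyone can read:
#
#   Scenario: Rejecting a bad password
#     Given a registered user "alice"
#     When she logs in with password "wrong"
#     Then the login is rejected
from behave import given, when, then

@given('a registered user "{name}"')
def step_registered_user(context, name):
    context.users = {name: "s3cret"}  # toy in-memory user store

@when('she logs in with password "{password}"')
def step_attempt_login(context, password):
    name = next(iter(context.users))
    context.accepted = (context.users[name] == password)

@then('the login is rejected')
def step_login_rejected(context):
    assert context.accepted is False
```

The product owner writes the scenario; the automation is generated from, and stays aligned with, that shared language.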



Other teams have adopted test-driven development (TDD), which differs from BDD. "TDD is different because it impacts the life cycle. It's writing the test cases and then the code that passes the test cases," said Lo Giudice.

Shifting left is another popular practice, which involves testing as soon as a new sprint or product development starts. More types of testing have been shifting left over time, and that will continue to be the case. Right now, a lot of organizations are focused on shifting performance testing left because leaving it to the end is too late.

"Testers are part of the team and involved early on. It's about starting testing, and unit testing is one way of shifting left, but it's about the testers working with the product owners and the team defining test cases or user acceptance criteria right away when we start writing the user stories in the background," said Lo Giudice.

Service virtualization is also essential for shifting testing left, because developers and testers can mock up resources instead of filing a ticket and then waiting for operations to make a resource available, or competing with others to access a resource. Forrester stopped covering service virtualization separately because it doesn't have its own market, so it's now included as part of continuous testing. "You don't need the full service virtualization capabilities that the tools three to five years ago were offering, but simplified versions that help you do a stub very quickly," said Lo Giudice. (A sketch of such a quick stub appears below.)

Teams also need to shift testing right as well as left. "It's monitoring the view into production. If you're deploying your features frequently in production and the developer can monitor some of the code that they're deploying, they can prevent performance issues from happening," said Lo Giudice.

Finally, exploratory testing is replacing the old way of manual testing. Manual testing isn't going away, but its uses are diminishing.
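What a "quick stub" can look like in practice: the sketch below stands up a fake inventory service in a dozen lines of Flask. The endpoint and payload are invented; it is a simplified stand-in of the kind Lo Giudice describes, not a full service virtualization product.

```python
# A throwaway stub service for shift-left testing — a simplified stand-in,
# not a full service-virtualization tool. Endpoint and payload are invented.
from flask import Flask, jsonify

stub = Flask(__name__)

@stub.route("/inventory/<sku>")
def inventory(sku):
    # Always report stock so dependent tests can run before the real
    # inventory service is provisioned.
    return jsonify({"sku": sku, "in_stock": True, "quantity": 42})

if __name__ == "__main__":
    stub.run(port=8080)
```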

Technology

The tech stack is more focused on smart automation than traditional test automation. Smart automation uses AI and machine learning to help developers focus on what matters, which simplifies and speeds testing.

"Smart automation tools leverage machine learning and AI to generate from requirements more precise test cases that would optimize the business coverage, so that's at the design level," said Lo Giudice. "But there's also automation of test execution. When code gets checked in, do I have to run all my regression tests, or based on the change can I figure out the ones that need to be run and shorten the execution?"
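Real smart-automation tools use machine learning for this; as a much simpler illustration of the same question — which tests does this change actually require? — the sketch below maps changed files to test suites. The mapping and paths are invented.

```python
# Toy sketch of change-based test selection. Vendors use ML for this; here
# a hand-written map from source areas to suites makes the idea concrete.
import subprocess

TEST_MAP = {  # invented mapping for illustration
    "billing/": ["tests/test_billing.py"],
    "auth/":    ["tests/test_login.py", "tests/test_sessions.py"],
}

def select_tests(changed_files):
    selected = set()
    for path in changed_files:
        for prefix, suites in TEST_MAP.items():
            if path.startswith(prefix):
                selected.update(suites)
    # If nothing matches, fall back to the full regression suite.
    return sorted(selected) or ["tests/"]

changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

print("pytest", *select_tests(changed))
```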


API testing is also important, because developers are writing more API- and microservice-based applications. Beyond that, there should be fully layered testing and version control, with all assets stored centrally.

"All the testing assets necessary to speed up the automation end up being stored together with the code, so you store the code that's being tested, the code that we use for writing the test cases, and all other assets so I can version all that," said Lo Giudice. "If I find a bug and need to review and update the test automation, I can do that very quickly, so in the technology stack, CI/CD integration with application life-cycle management remains fundamental."

For advanced performance testing, test data management is recommended. "You can't use the old way of doing test data generation when we're cycling fast on testing continuously," said Lo Giudice. "You have to have something that integrates into the sprint or the life cycle and updates the data all the way through."

Self-service provisioning of test environments is also essential. That's accomplished in the cloud, spinning test environments up and down on demand.

Expect AI and machine learning to impact vendor rankings

At the time of this writing, Forrester is about to release its Forrester Wave on Continuous Test Automation. Of the 26 criteria used in the Wave, more than half (15 or 16) focus on functionality.

"A very large portion of those had a question around how and why are you using any ML or AI in this capability," said Lo Giudice. "The response was the vendors have finally moved onto this, so this year you're already seeing the use of AI in the tools and the way they're using it. They're using it to make things smarter."

How, exactly, will be covered in the August issue of SD Times, which will include a deep dive on machine learning and AI in the context of CT.

NEXT MONTH: AI AND MACHINE LEARNING IN TESTING


Buyers Guide

BY DAVID RUBINSTEIN

Application performance monitoring is more important than ever, due to the rising complexity of software applications, architectures and the infrastructure that runs them.

When monitoring tools first were developed, the systems they were looking at were fairly simple — it was a monolithic application, running in a corporate-owned data center, on one network. The idea was to watch the telemetry — Why were response times so slow? Why wasn't the application available? — analyze signals that came in, and find the right person to resolve the issue. And, in a world where 'instant gratification' wasn't yet a thing, users wouldn't howl if it took some time to resolve the issue. Applications weren't a driver of business then; they were seen as supporting business.

Today, with the explosion of microservices, containers, cloud infrastructures and devices on which to access applications, the old APM tools aren't up to the complexity. And users certainly won't tolerate slow responses or failing shopping carts.

This guide will look at two monitoring software providers who have created solutions coming at the problem from different perspectives, and what they see as necessary to effectively monitor today's application performance.

Catchpoint CEO Mehdi Daoudi has flipped how the industry should look at monitoring on its head, from two angles.

First, legacy APM tools have been obsessed with what's going on internally — where the bad code is, what part of the network is slow, and so on. Today, organizations need to understand the user experience, and then infer from that where the problem is. Digital experience monitoring, which is what Catchpoint offers, takes an outside-in view of application performance, where others look at internals to try to understand what the customer is experiencing.

Second, Daoudi believes the idea of buying monitoring solutions before understanding what problem the enterprise is trying to solve is backwards. He told SD Times that businesses should first identify the problems that exist in their systems, and then apply tooling to that.
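Catchpoint's platform is a commercial product, but the outside-in premise is easy to demonstrate: measure from where the user sits, not from inside the data center. The sketch below is a minimal synthetic check; the URL, latency budget and output format are invented for illustration.

```python
# Minimal sketch of an outside-in (synthetic) check: probe the user-facing
# URL and record what a user would actually experience. URL and latency
# budget are invented; this is not Catchpoint's product.
import time
import requests

def synthetic_check(url="https://www.example.com/", budget_seconds=2.0):
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    healthy = response.ok and elapsed <= budget_seconds
    print(f"{url} status={response.status_code} "
          f"elapsed={elapsed:.2f}s healthy={healthy}")
    return healthy

if __name__ == "__main__":
    synthetic_check()
```

Run from many geographies on a schedule, checks like this describe the experience first; only then do you dig into internals — which is Daoudi's point.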

critical, It’s important for today’s monitoring tools to present engineers with context, and should emphasize tracing as a way to get that context and begin to understand the causal relationships and dependencies that are at the root of system problems and failures, he said. Lightstep takes a decidedly insideout view of monitoring, but enables integrations with other types of monitoring tools to round out the offering, including the user experience.

Software and systems complexities

Technology has become more complex, as noted above. But just as individual development teams are working on smaller pieces of the overall application puzzle, it's the setup of those teams — working autonomously on their project, not necessarily concerned with the other parts — that makes it more difficult to get to the root cause of problems.

"If I'm just sitting by myself in my garage running hundreds of microservices, [monitoring] is probably not that much worse," Spoonhower said. "I think the thing that happened is that microservices allowed these teams to work independently, so now you're not just doing one release a week; your organization is doing 20 or 30 releases a day. … I think it's more about the layers of distinct ownership, where you as an individual services owner can only control your one service. That's the only thing you can really roll back. But you're dependent on all these other things and all of these other changes that are happening at the same time — changes in terms of users, changes in terms of the infrastructure, other services, third-party providers — and the gap where tools are really falling down has more to do with the organizational change than it has to do with the fact that we're running in Docker containers."

Daoudi agreed that fragmentation is a major impediment to understanding what's going on in software performance. He used the image of six blindfolded people and an elephant to describe it. One person grabs its tail and thinks he has a rope. One holds a tusk and thinks it's a spear of some kind. One touches its massive side and thinks it's a wall. None of them, though, can grasp that what they're touching are parts of something much larger. They can't see that.

"When you think about it, let's say you and I run this company and we have an e-commerce platform. We're running it on Google Cloud. Our infrastructure is Google Cloud, we've built our services, the shopping cart, inventory, we hook up to UPS to ship t-shirts to people. You have to have an understanding of the environment this is working on, then you have the components of Google Cloud that are not available to you. But when you think about delivering that web page to a user in Portland so they can buy a t-shirt, look how much they have to go through. They have to go through T-Mobile in Seattle, through the internet, and we're probably using NS-1 for our network, and on our sites we're tracking some ads and doing A/B testing. The challenge with monitoring — and why it's still so hard to capture the full picture of the elephant — is that it's freaking complex. I can't make this up. It's just very complex. There is no other thing."

Observability is a good start

The goal of monitoring, Daoudi said, is to be able to have an understanding of what's broken, why it's broken, and where it's broken. That's where observability comes in. Catchpoint defines observability as "a measure of how well internal states of a system can be inferred from knowledge of its external output." Catchpoint has created observability.com to address this, and, as Daoudi noted, observability is a way of doing things — not a tool.

Spoonhower described observability as giving organizations a way to quickly navigate from effect back to the cause. "Your users are complaining your service is slow, you just got paged because it's down, you need to be able to quickly — as a developer, as an operator — move from the effect back to what the root cause is, even if there could be tens of thousands or even millions of different potential root causes," he said. "You need to be able to do that in a handful of mouse clicks."

And that is why the use of artificial intelligence and machine learning is growing in importance. Today, with the massive amounts of data being collected, it's unreasonable to believe humans can digest it all and make correct decisions from all the noise coming in.

"I think anything that has AI in it is going to be hyped to some extent," Spoonhower said. "For me, what's really critical here, and what I think has fundamentally changed in terms of the way APM tools work, is that we don't expect humans to draw all of the conclusions. There are too many signals, there's too much data, for a human to sit down and look at a dashboard and use their intuition to try to understand what's happening in the software. We have to apply some kind of ML or AI or other algorithms to help sift through all the signals and find the ones that are relevant."

Daoudi said observability is focused on collecting the telemetry and putting it in one place where it can be correlated. "AIOps is a fancy word for what you and I probably remember as event correlation back in the day, right? It's a set of rules. You need to define the dependencies … this app runs on this server, or this container … whatever. If you don't understand, then all of this is just signals, more alerts, more people getting tired of responding at 2 o'clock in the morning to alarms, or not seeing the problem at all."

The importance of tracing

Lightstep's observability platform — founded on best-in-class distributed tracing — integrates traces, logs, and metrics to provide the fastest root cause analysis solution available and explain the answer to the most important question in production software: What changed? Lightstep tracks changes across services and across your stack so that, whether you're responding to an outage or just debugging a bad deployment, it will automatically surface the telemetry that can explain what's happening.

Tracing is the backbone of root cause analysis in any microservice-based or other distributed system. Traces make the interactions between services explicit and tie the performance of individual services back to the end-user requests that they might affect. That is, tracing makes the causal relationships between services explicit. With a trace, you can see how an error deep in your stack affects your user-facing API, or which service is having the biggest impact on user experience. In aggregate, traces can reflect even more patterns, including those that can be hard to see when considering just a single request.

Lightstep analyzes thousands of traces for each deployment and incident (and even on demand) to build a model of how your application is behaving. By comparing traces before and after a change and correlating metrics and logs with that change, you won't need to sift through endless dashboards or page after page of logs: Lightstep shows you only the signals that matter, so you can quickly navigate from effect back to cause.
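Lightstep's implementation is its own; as a generic, hedged illustration of the parent/child spans the sidebar describes, here is the open-source OpenTelemetry Python SDK emitting a two-span trace. Span and attribute names are invented.

```python
# Generic distributed-tracing illustration with the open-source
# OpenTelemetry Python SDK — not Lightstep's product code. The child
# span ties a dependency's latency back to the user-facing request.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # invented service name

with tracer.start_as_current_span("POST /checkout") as span:
    span.set_attribute("user.id", "u-123")  # invented attribute
    with tracer.start_as_current_span("inventory-lookup"):
        pass  # the downstream call would happen here
```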


Adding to the technical complexity is the fact that teams are changing and being reorganized, and that services aren't static.

Spoonhower said, "Establishing and maintaining service ownership, and understanding what that is, I think, is sort of a double-edged problem, both from a leadership point of view where you're trying to understand, wait, I know this service here is part of the problem but who do I talk to about that? On the other side, from the teams, what I've seen is teams often will get a few services dumped on them that were left over from a reorg or somebody left, and that's a really stressful position to be in because at some level, they are in control but they don't have the knowledge to do that. … There should be a way when I get paged to quickly get a view of how that service is behaving and how it's interacting with other services, even if I'm not an expert in the code."

Collecting data, and putting it in one place to be able to 'connect the dots' and see the bigger picture, is what modern monitoring tools are bringing to the table. "The biggest problem I see with monitoring is not too many alerts; it's actually missing the whole thing," Daoudi said.

Tools are only part of the solution

Both Spoonhower and Daoudi were quick to point out that tools are important for monitoring, but they are just tools. At the heart of monitoring is the need for organizations to quickly understand why releases are failing or why performance has gone down.

Spoonhower said: "I think the pain is that the costs of achieving that are quite high, either in terms of the raw dollars if you're paying a vendor, if you're paying for infrastructure to run your own solution; or just the amount of time that it takes an engineer to … they did a deployment, and now they're going to sit and stare at a dashboard for 20 or 30 minutes. That's a lot of time when they could be doing something else."

He lamented the fact that the legacy APM approach is tools-centric.

An issue of 'understandability'

This word, understandability, is starting to be heard as the next evolutionary step beyond observability. And understandability is key to finding problems more quickly.

Mehdi Daoudi, CEO at digital experience monitoring solution provider Catchpoint, recalled an effort when he was running monitoring at DoubleClick years ago. "I sat down with a bunch of engineers and engineering managers, and said, 'OK, we keep deploying stuff, you guys don't react to alerts, you're not acting on the data … so let's sit down.' I remember buying the biggest whiteboard I could find at the time, and we literally diagrammed what the user interaction with our system looked like. Which system did it call? What function? Which database? Which tables? We literally spent four and a half months drawing the entire diagram of the system, from A to Z, as if I had shot a tracer bullet to see everything that it touched. Then we said, OK, now that we understand which function, which system, now let's go and put monitoring around that. That was the click to making sure the monitoring was actionable.

"At DoubleClick, no matter how many tools we threw at the problem, we were still having performance issues, because the tools were not helping us understand," he said. "What are we trying to solve? What are the metrics we care about? Where do we need those metrics so we can understand the relations between these interactions, and then we can put the tools in place?"

a problem; it’s a tool in your tool belt. Metrics … it’s a kind of data, and I think the way we think of it and I think the right way to think of it is, what problems are people trying to solve? They’re trying to understand what the root cause of this outage is, so they can roll it back and go back to sleep. And so, by focusing a little bit more on the workflows, we’ll figure out as a solution what the right data to help you solve the problem is. It shouldn’t be up to you to say, ‘Ahh, this is a metrics problem; I should be using my metrics tool. Or this is a logging problem; use the log tool’. No. It’s a deployment problem, it’s an incident problem, it’s an outage problem.” Catchpoint’s Daoudi said people have the unreasonable expectation that they can simply license one tool that can cover every aspect of monitoring. “There is no single tool that does the whole thing,” he said. “The biggest mistake people make is they get the tool first and then they ask questions later. You should ask, ‘What is it that I want my monitoring tools to help me answer?’ and then you start implementing a monitoring tool. What is the question, then you collect data to answer the question. You don’t collect data to ask more questions. It’s an infinite loop. “I tell customers, before you go and invest gazillions of dollars in a very expensive set of tools, why don’t you

"I tell customers, before you go and invest gazillions of dollars in a very expensive set of tools, why don't you just start by understanding what your customers are feeling right now," Daoudi continued. "That's where we play a big role, in the sense of 'let me tell you first how big the problem is. Oh, you have 27% availability. That's a big problem.' Then you can go invest in the tools that can show you why you have 27% availability. Buying tools for the sake of buying tools doesn't help."

All about the customer

The technology world is playing a bigger role in driving business outcomes, so the systems that are created and monitored must place the customers' interests above all else. For retailers, for example, customers more often are not getting their first impression of your brand by walking into a store — especially true today with the novel coronavirus pandemic we're under. They're getting their first impressions from your website, or your mobile app.

"A lot of people are talking about customer centricity, IT teams becoming more customer centric," Daoudi explained. "Observability. SRE. But let's take a step back. Why are we doing all of this? It's to delight our customers, our employees, to not waste their time. If you want to go and buy something on Amazon, the reason you keep going back to Amazon is that they don't waste our time."



A guide to monitoring tools

FEATURED PROVIDERS

• Catchpoint Systems: Catchpoint offers innovative, real-time analytics across its Digital Experience Monitoring solution through the use of synthetic monitoring and user sentiment tools, which provide an outside-in view of user experiences. The tools work together to give a clear assessment of performance, as users can either contact Catchpoint directly through its portal, or Catchpoint can learn of issues over social media channels. Synthetic monitoring allows testing from outside of data centers with expansive global nodes, with RUM and user sentiment providing the clearest view possible of end-user experiences.

• LightStep: LightStep's mission is to deliver insights that put organizations back in control of their complex software applications. Its first product, LightStep [x]PM, is reinventing application performance management. It provides an accurate, detailed snapshot of the entire software system at any point in time, enabling organizations to identify bottlenecks and resolve incidents rapidly.

• AppDynamics: The AppDynamics Application Intelligence Platform provides a real-time, end-to-end view of application performance and its impact on digital customer experience, from end-user devices through the back-end ecosystem — lines of code, infrastructure, user sessions and business transactions.

• Dynatrace: Dynatrace provides software intelligence to simplify enterprise cloud complexity and accelerate digital transformation. With AI and complete automation, its all-in-one platform provides answers, not just data, about the performance of applications, the underlying infrastructure and the experience of all users.

• InfluxData: APM can be performed using InfluxData's platform InfluxDB. InfluxDB is a purpose-built time series database, real-time analytics engine and visualization pane. It is a central platform where all metrics, events, logs and tracing data can be integrated and centrally monitored. InfluxDB also comes built in with Flux: a scripting and query language for complex operations across measurements.

• Instana: Instana is a fully automatic Application Performance Monitoring (APM) solution that makes it easy to visualize and manage the performance of your business applications and services. The only APM solution built specifically for cloud-native microservice architectures, Instana leverages automation and AI to deliver immediate actionable information to DevOps. For developers, Instana's AutoTrace technology automatically captures context, mapping all your applications and microservices without continuous additional engineering.

• New Relic: New Relic's comprehensive SaaS-based New Relic Software Analytics Cloud provides a single powerful platform to get answers about application performance, customer experience, and business success for web, mobile and back-end applications. New Relic delivers code-level visibility for applications in production across six languages — Java, .NET, Ruby, Python, PHP and Node.js — and supports more than 70 frameworks.

• Oracle: Oracle provides a complete end-to-end application performance management solution for custom and Oracle applications. Oracle Enterprise Manager is designed for both cloud and on-premises deployments; it isolates and diagnoses problems fast, and reduces downtime, providing end-to-end visibility through real user monitoring; log monitoring; synthetic transaction monitoring; business transaction management and business metrics.

• OverOps: OverOps captures code-level insight about application quality in real time to help DevOps teams deliver reliable software. Operating in any environment, OverOps employs both static and dynamic code analysis to collect unique data about every error and exception — both caught and uncaught — as well as performance slowdowns. This deep visibility into an application's functional quality not only helps developers more effectively identify the true root cause of an issue, but also empowers ITOps to detect anomalies and improve overall reliability.

• Pepperdata: Pepperdata is the leader in Application Performance Management (APM) solutions and services for big data success. With proven products, operational experience, and deep expertise, Pepperdata provides enterprises with predictable performance, empowered users, managed costs and managed growth for their big data investments, both on-premise and in the cloud.

• Plumbr: Plumbr is a modern monitoring solution designed to be used in microservice-ready environments. Using Plumbr, engineering teams can govern microservice application quality by using data from web application performance monitoring. Plumbr unifies the data from infrastructure, applications, and clients to expose the experience of a user. This makes it possible to discover, verify, fix and prevent issues.

• Riverbed: Riverbed application performance solutions provide superior levels of visibility into cloud-native applications — from end users, to microservices, to containers, to infrastructure — to help you dramatically accelerate the application life cycle from development through production.

• SmartBear: AlertSite's global network of more than 340 monitoring nodes helps monitor availability and performance of applications and APIs, and find issues before they hit end consumers. The Web transaction recorder DejaClick helps record complex user transactions and turn them into monitors, without requiring any coding.

• SOASTA: The SOASTA platform enables digital business owners to gain continuous performance insights into their real-user experience on mobile and web devices — in real time and at scale.

• SolarWinds: The SolarWinds APM Suite — Pingdom, AppOptics, and Loggly — combines user experience monitoring with custom metrics, code analysis, distributed tracing, log analytics, and log management.



Guest View BY LIN SUN

5 reasons I'm excited about Istio's future

Lin Sun is a senior technical staff member on IBM Cloud.

On May 24, 2017, IBM and Google announced the launch of Istio, an open technology that enables developers to seamlessly connect, manage, and secure networks of different microservices — regardless of platform, source, or vendor. I've been working on Istio since its 0.1 release and want to celebrate Istio's third birthday by highlighting five things about Istio and its future that excite me.

1. Continuous usability improvements. We want to make sure users can quickly get started with Istio service mesh — from installation, to onboarding their microservices to the mesh, to tightening the security policy of the microservice communication, and operating the service mesh at large scale safely and securely. Incremental usability improvements in each release now make it possible to use a single command to install Istio, to describe a given Kubernetes service or pod, or to analyze the whole cluster for Istio resources. With Istio 1.6, I don't need to look up istio.io to figure out how to install it. We have nice status output from the istioctl install command now.

2. Amazing collaboration within the community. About a month before Istio 1.6, I started to entertain the idea of central Istiod within the community. If you are not familiar with the concept, a central Istiod is where you run an Istiod control plane on a cluster to manage data planes on a remote cluster. We set a fairly aggressive goal to dark-launch this feature in 1.6, and we hit various roadblocks as part of the delivery. Through the wonderful collaboration within the environment working group, and with contributors from Google and Huawei jumping in to help out, we were able to meet our goal of dark-launching this feature in Istio 1.6. The best part is this is just the beginning. We've got various working groups excited about this deployment model, how to simplify the concept with zero config on the data plane, how to provide a seamless experience to our istioctl users, and more.

3. Continuous innovation. A goal of ours was to make the onboarding experience as simple as possible for our users, with almost zero change to their existing services. The community implemented intelligent protocol detection for inbound and outbound traffic in Istio 1.3 and 1.4. While automatic protocol detection is great, it causes performance concerns with some users. Now Istio 1.6 directly consumes the appProtocol field in the Kubernetes 1.18 Service object.

4. Rich ecosystem. The Istio ecosystem is growing with projects like Admiral, Emcee, and iter8, and there are multiple vendors building solutions on top of Istio. Multiple cloud providers offer a managed Istio experience to simplify the install and maintenance of the Istio control plane. For example, Istio on IBM Cloud enables you to install Istio with a single action along with automatic updates and life cycle management. The Istio-based Satellite mesh service announced earlier this month enables users to easily manage applications across environments. Additionally, vendors are building solutions to allow users to easily extend Istio through its sidecar via Solo's WebAssembly Hub, or visualize the mesh via Red Hat's Kiali.

5. The future of Istio. The community is focused on continuing to make Istio easy to use and as transparent as possible, with little or zero configuration. Users should be able to deploy their services into the mesh and enjoy the benefits of the mesh without any disruption. They should also be able to move their services out of the mesh easily if they don't believe the mesh provides enough value to justify the additional cost that comes with the sidecars and control planes. If we can eliminate the surprises and make Istio boring for our users, that would be a huge win for the project.

As developers and operators journey to cloud-native with microservices, I expect Istio adoption to increase. Users will push the boundaries of Istio — from adopting it in a single cluster, to exploring a single service mesh across multiple Kubernetes clusters, or services running across virtual machines and Kubernetes. I expect us to continue stabilizing and securing our multicluster and mesh expansion support while developing mesh federation stories to allow multiple heterogeneous or homogeneous meshes to federate.

If this excites you, come join us and become an Istio contributor to make Istio better. Once you have a pull request merged, you can submit a membership request to become an Istio contributor.



Analyst View BY JASON ENGLISH

The software supply chain disrupted

Jason English (@bluefug) is a Principal Analyst and CMO of Intellyx.

Since COVID-19 took hold as a global pandemic, we have seen a lot of focus in the United States on improving our healthcare supply chain, by eliminating barriers to coordination among the many parties needed to source, build, transport and sell the pharmaceuticals and equipment that medical professionals need.

In the software industry, we tend to think of constraints in project management terms: with productivity or features delivered, as governed by a budget function of time and resources, minus failures. Feed plans and component 'designs and materials' in one end of the software factory, and depending on how many skilled developers and testers are working together over time, excellent software 'products' roll out the other end.

Valuable innovation has never been achieved by treating the development shop like such a factory. This is why Agile methodologies were born, and the DevOps movement later took hold, to enable greater collaboration and buy-in, higher levels of customer-centricity and quality, and an automation mindset that accelerates delivery.

The software supply chain — coordinating a complex web of the right people, systems and data, all contributing at the right time to deliver software for customers — is now experiencing its own black swan moment in these unprecedented times. The software supply chain may still have ineffable constraints in common with supply chains of other industries, besides time and resources.

1. Liquidity. This is the #1 issue for any other supply chain — how efficiently does money flow through the system? Whole suites of financing, factoring and settlement solutions try to solve this in the conventional supply chain world, where margins are thin and the time value of money is critical. In today's SaaS and cloud-based IT world, customers buying on a monthly (MRR) basis may ask for leniency during a crisis, when funding for innovative new ventures is scarce. Partners will also be asked to step up and ease the burden. Budgets aren't idealistic exercises for accountants to deal with anymore, as IT executives will become more conscious of cash flow than ever.

2. Collaborative forecasting. Supply-and-demand forecasting is never an internal exercise — it requires analysis, requests and promises among all parties before orders are issued and goods assembled and delivered. Any IT business unit would do well to evaluate its own ability to not only forecast customer demand, but understand the readiness of all of its services partners, component software vendors and infrastructure providers to successfully deliver on customer promises.

3. Inventory and WIP. Most 'real' companies maintain inventory buffers of both parts and finished goods, as well as a certain amount of work-in-process (WIP), in order to deal with volatile supply and demand, which represents an ongoing cost of business. While the DevOps movement has it right that technical debt is a primary bottleneck to progress, there's still a lot of code that has productive value and can't possibly be replaced, especially if it is maintained by other parties.

4. Quality and compliance. Meeting standards and delivering products that work as promised — meeting SLAs and internal SLOs to avoid customer churn, penalties and risk — is universal to all industries.

5. Life cycle service. In the automotive industry, it's assumed that 60% or more of the total cost paid by customers for a vehicle will be spent on fuel, maintenance and parts, and not the initial purchase — so the necessity of capturing customer support and service revenue is more important than ever for software, even if some of that work is conducted by valued partners.


The Intellyx Take

These constraints seem like old hat to an old supply chain hack. Not that supply chain software vendors avoid hoarding inventory better than the rest of the software world — most wouldn't retire a product even if it has one remaining install in a china factory in Timbuktu. In all industries, the profitability of upgrading an existing customer is three to five times higher than that of capturing a new one. This is why some vendors take a pause on new deals or projects in favor of eliminating constraints in their software supply chains to deliver for existing customers.


Industry Watch BY DAVID RUBINSTEIN

BizOps: Bridging the age-old divide

David Rubinstein is editor-in-chief of SD Times.

Since my introduction into the software development industry in 1999, there has been one theme underlying all our coverage of tools, processes and methodologies: getting business and IT to work closer together.

At first, this divide was chalked up to the fact that the two sides did not speak the same language. The business side didn't understand what was involved in producing an application, and the developers created applications based on unclear or imprecise requirements, which led to a lot of finger-pointing, misunderstanding and more bad communication.

Today, as businesses go digital and release software that no longer just supports the business but actually drives it, the need for the two sides to come together has never been greater. There are now common collaboration tools that both sides can use for project tasks, progress tracking and more.

A recent conversation I had with Serge Lucio, the general manager of the enterprise software division at Broadcom, led to a discussion of what the company is calling 'digital BizOps.' This concept links the planning, application delivery and IT operations at organizations by the use of tools that ingest data across the entire spectrum to provide a 360-degree view of what's being produced and whether it aligns with business goals. It uses automation to make decisions along the way that drive value.

Broadcom, Lucio said, is looking to deliver insights for the different parts of the organization. For application delivery, Broadcom wants to give teams the ability to "release with confidence. That is, I have a release that's ready to be deployed to production. Is it really ready? You have tests, you may have security violations. These are the data points that release engineers and operations teams are looking at."

At the planning level, where corporate higher-ups are deciding what the business needs, the questions are: Is the release strategically aligned with business goals? Are we going to deliver on time, and on budget? They need data, for planning and investment management purposes, to ultimately see if what is in the works matters most to the business.

Then, Lucio explained, from the perspective of IT operations management, they need to triage problems to see which are most impactful on the business, and which are the highest priority to the business to resolve.

Some would say this looks a lot like AIOps. Others see some elements of value stream in this approach. Tom Davenport, a distinguished professor of information technology and management at Babson College who has spoken with Broadcom about BizOps, says the concept seems more aspirational than technological or methodological at this point, but has the goal of automating business decision-making.

"I've done some work in that area in the past, and I do think that's one way to get greater alignment between IT and business people," Davenport told me. "You just automate the decision and take it out of the day-to-day hands of the business person, because it's automated."

Davenport can see organizations automating decisions in areas such as human capital management. "You have more and more data now out of these HCM systems and you're starting to see some recommendations about, 'You should hire this person because they're likely to be a high performer,' based on some machine learning analysis of people we've hired in the past who've done really well. Or, they're likely to leave the organization, so if you want to keep them, you might want to offer an enticement to keep them on. Some of that already is being done in marketing.

"A lot of that, the offers that get made to customers, particularly the online ones — are almost all automated now," Davenport explained.

Davenport pointed to an example of a casino hotel operator who had automated pricing on rooms, but that often was overridden by front-desk staff. So, they did a test. He explained: "Let's see, do we make more money when we allow the front desk to override, and is there any implication for customer satisfaction? Or do we make more money with fully automated decisions? And it turns out, the automated decisions were better."

There is some risk involved in letting machines make business decisions. But Davenport said if those decisions are driven by data, it's not as risky as it might seem. "Use data to make the decision," he said, "and use data to test whether it has a better outcome or not."

