MAY 2021 • VOL. 2, ISSUE 47 • $9.95 • www.sdtimes.com
Instantly Search Terabytes

dtSearch’s document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data

Over 25 search options including:
• efficient multithreaded search
• easy multicolor hit highlighting
• forensics options like credit card search

Developers:
• SDKs for Windows, Linux, macOS
• Cross-platform APIs for C++, Java and .NET with .NET Standard / .NET Core
• FAQs on faceted search, granular data classification, Azure, AWS and more

Visit dtSearch.com for
• hundreds of reviews and case studies
• fully functional enterprise and developer evaluations

The Smart Choice for Text Retrieval® since 1991
dtSearch.com 1-800-IT-FINDS

D2 EMERGE LLC • www.d2emerge.com
PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza, ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS: Jenna Sargent, jsargent@d2emerge.com; Jakub Lewkowicz, jlwekowicz@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz, George Tillmann
CONTRIBUTING ANALYSTS: Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, mleonardi@d2emerge.com
LIST SERVICES: Jessica Carroll, jcarroll@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com
SALES MANAGER: Jon Sawyer, 603-547-7695, jsawyer@d2emerge.com
Contents

VOLUME 2, ISSUE 47 • MAY 2021

NEWS
4   News Watch

FEATURES
6   Scaling up Agile requires a change of Pace
8   Progressive Delivery: Giving limited users a taste of new software before it’s widely deployed
11  Introducing Progressive Delivery
14  Confusing the what with the how
19  Agile Fuels Value Stream Management
20  Four key metrics for measuring productivity
22  Agile at 20: Understanding where it’s been, where it’s going
25  Maximizing the ROI of your Agile transformations

BUYERS GUIDE
28  Digital experience monitoring the key to supporting a distributed workforce

COLUMNS
32  GUEST VIEW by Scott Schwan: How compliance fits into DevOps
33  GUEST VIEW by Darren Broemmer: Use hackathons to validate your product
34  ANALYST VIEW by Peter Hyde: Succeeding as a remote Agile software development team

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2021 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
4
SD Times
May 2021
www.sdtimes.com
NEWS WATCH

Eclipse Adoptium to support releases of OpenJDK builds
The Eclipse Foundation announced that when AdoptOpenJDK transitions to the foundation under its new name, Eclipse Adoptium, it will focus on building a new infrastructure, called Eclipse Temurin, for building and releasing OpenJDK builds. AdoptOpenJDK is a project that allows Java developers to download OpenJDK binaries. “Our goal is to meet the needs of both the Eclipse community and broader runtime users by providing a comprehensive set of technologies around runtimes for Java applications that operate alongside existing standards, infrastructures, and cloud platforms,” the Eclipse Adoptium website states.

Microsoft’s OpenJDK build available as a preview
Microsoft has announced a preview of its build of OpenJDK, the open-source distribution of Java. Microsoft’s OpenJDK build includes binaries for Java 11, and the company has also released an early access binary of Java 16 for Windows on ARM. Microsoft’s build is based on OpenJDK source code and follows the same build scripts used by the Eclipse Adoptium project. It has also been tested against the Eclipse Adoptium Quality Assurance suite and passed the Java Technical Compatibility Kit (TCK) for Java 11, which is used to verify compatibility with the Java 11 specification. According to Microsoft, its OpenJDK build can be used as a drop-in replacement for any OpenJDK distribution in the Java ecosystem.

People on the move
• Satish Ramakrishnan has been appointed vice president of engineering at Ahana, a self-service analytics company for Presto. At Ahana, Ramakrishnan will be in charge of engineering and technical operations, as well as Ahana’s vision of simplifying ad hoc analytics for all organizations.
• Amazon announced that Adam Selipsky, CEO of Tableau, will join the company to run its Amazon Web Services division. Before joining Tableau in 2016, Selipsky was the vice president of sales, marketing and support at AWS for 11 years. He will rejoin the company on May 17.
• Technology industry veteran Kevin Thompson is joining the software testing company Tricentis as its new CEO and chairman of the board. Thompson replaces Sandeep Johri, who led the company for nearly eight years and will remain a member of the board of directors. Previously, Thompson worked at SolarWinds for 14 years in a number of leadership roles, including CFO, COO, president and CEO. Additionally, he held senior positions at SAS and Red Hat.
• Stephen Clark has joined Cambridge Quantum Computing as head of artificial intelligence. Previously, Clark was a senior staff research scientist at DeepMind, responsible for grounded language learning in virtual environments. Clark also worked at the University of Cambridge Department of Computer Science and Technology and the Oxford University Department of Computer Science.
RedisRaft: Strong-consistency deployment option
Redis Labs announced that it is adding stronger consistency, integrated data models, sub-millisecond latency for globally deployed databases, and artificial intelligence to further Redis as a real-time data platform. RedisRaft will be available in Redis 7.0 and will enable Redis to be deployed and run in a strongly consistent fashion, according to the company. Additionally, the company announced new out-of-the-box integration of RedisJSON and RediSearch, enabling developers to natively store, index, query, and perform full-text search on documents, accelerating application modernization initiatives.
Google unveils Logica programming language
Google introduced the open-source Logica programming language, designed to compile to SQL and run on Google BigQuery, with experimental support for PostgreSQL and SQLite. According to the company, the language was created to make life easier for developers who have to deal with the challenges of SQL. Those challenges often include constructing statements from long chains of English words and limited support for abstraction. SQL is also rarely tested, because “testing SQL queries” sounds rather esoteric to most engineers. To address these challenges, Logica was created as a concise language that supports the clean and reusable abstraction mechanisms SQL lacks, while also supporting modules and imports.
Applitools publishes Automation Cookbook
Applitools has released an Automation Cookbook to help upskill developers and test engineers. The new cookbook features free, bite-sized videos, test automation recipes and a Test Kitchen to practice in. The cookbook was created by a team from Applitools Test Automation University. According to Colby Fayock, a developer advocate at Applitools who contributed to the cookbook, the goal is to give engineers quick visual answers to their frequently asked questions, instead of making them sift through long video tutorials, online forums or Q&A threads.
Tasktop announces Viz VSM Portfolio Insights
The new Tasktop Viz VSM Portfolio Insights aims to bridge the IT-to-business gap for value stream management. According to the company, the VSM Portfolio Insights Dashboard rolls up metrics and analytics generated at the individual product value stream level to the executive plane. It then displays metrics such as the progress of the
shift from project to product-based IT, the ability to respond rapidly to the market, the business processes capable of acceleration and more. “The new portfolio-level VSM insights highlight what’s working and what still needs improvement, so leaders can drill down to the product level as needed and work with their direct reports on progressing toward business goals and continuous improvement,” explained Nicole Bryan, CPO at Tasktop. “The emphasis on business impact and long-term value also shifts perceptions from IT being seen as a cost center to what it actually is — a profit center.”
Amazon’s open-source fork of Elasticsearch and Kibana
In response to Elastic changing its open-source software licenses on Elasticsearch and Kibana, Amazon has introduced the OpenSearch project. The project is a community-driven, open-source fork of Elasticsearch and Kibana. Amazon announced earlier this year it would be creating and maintaining an Apache License, Version 2.0 fork of the open-source products. “We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open-source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from Elasticsearch 7.10.2) and OpenSearch Dashboards (derived from Kibana 7.10.2),” the AWS team wrote in a post. “Additionally, the OpenSearch project is the new home for our previous distribution of Elasticsearch.”
IntelliJ IDEA 2021.1 features Code With Me service
Code With Me is a collaborative development service designed to enable users to share projects in their IDE and work on them with others in real time. It also includes smart, context-based code autocompletion, navigation between objects, and method declarations and usages. Code With Me supports pair programming, swarm programming, teaching and mentoring, and conducting interviews. It will be available through community, premium and enterprise plans; the community plan will be free of charge, with the ability to run unlimited 30-minute sessions with up to three users. IntelliJ IDEA 2021.1 also includes new integration with Space, the company’s team collaboration platform, for viewing, cloning and reviewing teammates’ code and writing Space automation scripts.

Catchpoint releases enhanced version of WebPageTest API
Digital experience monitoring solution provider Catchpoint announced upgrades to the WebPageTest API that provide deeper performance metrics, immediate test results and integrations with CI/CD tools. The company also announced that the API, which had been limited to a small number of users, is now widely available. WebPageTest is a performance measurement tool that can scale to test millions of sites per day, the company wrote on its website. According to Catchpoint’s announcement, the updated API offers “instant, programmatic access to WebPageTest data and test infrastructure,” including side-by-side video comparison of user experience from around the world. Catchpoint acquired WebPageTest in September.

New Linux research division launches
The Linux Foundation has launched a new research division to look at the impact of open source. Linux Foundation Research aims to broaden the understanding of open-source projects, ecosystems and impact by looking at open-source collaboration. “As we continue in our mission to collectively build the world’s most critical open infrastructure, we can provide a first-of-its-kind research program that leverages the Linux Foundation’s experience, brings our communities together, and can help inform how open source evolves for decades to come,” said Jim Zemlin, executive director at the Linux Foundation.

NativeScript 8.0 adds support, guide
The latest version of the NativeScript framework is now available. NativeScript 8.0 further streamlines the core of the framework so that it can serve as a solid foundation for future enhancements, and it arrives alongside a new Best Practices Guide. Key features include:
• Apple M1 support
• Accessibility support
• CSS box-shadow support
• CSS text-shadow support
• A hidden binding property
• An official eslint package
• Support for creative view development using the new RootLayout container

Open VSX Registry joins Eclipse Foundation
The Open VSX Registry is an alternative to the Microsoft Visual Studio Marketplace for VS Code extensions. The goal is to increase flexibility for extension users, extension publishers and tool developers, since the Visual Studio Marketplace doesn’t allow extensions to be used with the increasing number of open-source tools and technologies that support the VS Code extension API, the Eclipse Foundation explained. The Open VSX Registry is built on the Eclipse Open VSX project, and it allows extensions to be used with VS Code and VS Code forks such as VSCodium, as well as with Eclipse Theia, Eclipse Che, Gitpod, Coder, and SAP Business Application Studio.
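To make the Catchpoint item above concrete: a WebPageTest run can be requested programmatically over HTTP. The sketch below follows the general shape of the public WebPageTest API (a `runtest.php` endpoint taking a site URL and an API key), but treat endpoint and parameter names as an approximation and check them against the current documentation before relying on this.

```python
# Sketch of kicking off a WebPageTest run programmatically. The endpoint
# shape (runtest.php with url/k/f parameters) reflects the public
# WebPageTest API, but verify the details against current docs before use.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://www.webpagetest.org"

def build_run_request(site_url, api_key):
    """Build the URL that starts a test and asks for a JSON response."""
    params = urlencode({"url": site_url, "k": api_key, "f": "json"})
    return f"{API_BASE}/runtest.php?{params}"

def start_test(site_url, api_key):
    """Fire the request; the JSON response carries a test ID to poll."""
    with urlopen(build_run_request(site_url, api_key)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Needs a real API key; printed here only to show the request shape.
    print(build_run_request("https://www.sdtimes.com", "YOUR_API_KEY"))
```

The request-building step is separated from the network call so the shape of the integration can be inspected (or tested) without an API key.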
Scaling up Agile requires a change of Pace
Organizations lose when senior devs need too much time to bring juniors up to speed
Software teams and organizations today are looking to scale faster than ever. The pressure to release features at an increasing rate while keeping bugs to a minimum is only exacerbated by the growing size of the dev teams needed to deliver those features. We add more and more devs to a team but get only incremental returns, all the while the experienced senior devs seem to be delivering less and less of the high-value code that can make or break the product. The approaches that have gotten us this far have stalled. Instead of adding people to a team, in order to grow we need to look differently at how the people already on a team work together.
BY PETER CRONIN
Peter Cronin is director of ViAGO International, and the author of Pace.

Before we dig into that, let’s look at the story so far. The Waterfall model was essentially the first project management method to be introduced to the software engineering industry. At the time, software was just a small part of larger infrastructure projects. Due to the rigidity of specifications for those projects, there was little room for variation in the process. Any changes made to the specifications during development would have resulted in high costs, so the very rigid and structured approach of Waterfall worked well. As software became more prominent in business use, and ultimately personal use, there was a rise in the number of much smaller software applications. This, in turn, resulted in a rise in the number of software companies creating such applications, and a rise in the issues with the rigid and deterministic approach of Waterfall. Agile was born out of the need to address these challenges. Software teams identified that it was far more useful to
have a process that could respond to changes in customer requests, and to get something basic working quickly, and then adjusting and iterating from there. Sprints, the most applied aspect of Agile, enabled software companies to create value for customers much quicker. It also enabled teams to be more responsive and reduce the amount of rework that resulted from changing specifications. And here we are in the present times. Despite the evolution of software development approaches through the years, and the benefits that have come with it, issues that arise from team and organization growth remain unresolved.
So, what is going on? Let’s take a small development team and follow them as they scale. Our dev team, part of a start-up, has five developers. Of the five, one is an extremely experienced senior developer, another couple are senior devs, and the last two are juniors with far less experience. Before the juniors came on board, the three senior developers would coordinate themselves and just get on with it. But as the team has grown, they have needed to add a bit more structure to their sprint planning and execution, so that the whole team has plenty of work to do for the fortnight. As well as this, the most senior dev has started to spend his time assisting the two new juniors. Naturally, this limits the other work he can do. Coincidentally (or perhaps not a coincidence at all) two new pressures have arisen: to produce more features and, at the same time, to fix up quality, based on the bugs resulting from the new developments. Our most senior dev, who has become the de facto team leader, complains to the founder about needing more assistance. They are, of course, old mates who have been in the business from the start, so he convinces the founder to authorize more hires. At this point the team has a real structure and is sure to plan out everyone’s work to ensure it’s getting the most out of the team! This growing team requires a fair amount of the senior dev’s time, but that’s to be expected to keep the machine running. On top of this, the founder gets calls with ‘urgent’ customer requests and ignores the sprint load to expedite them into the senior dev’s workload.
Back to the question: what’s going on? Why would teams all over the world do this? These issues don’t arise from malice, and they certainly don’t arise from stupidity (given the calibre of minds involved). Instead, they come from two assumptions we make about how teams should operate. Firstly, we assume all team members’ contributions are equal. At the end of the day there is no “I” in “team” and we all do our bit around here. This assumption is evident in the way we plan work. We hold planning meetings where everyone has a say in what work they should be doing. The focus in these planning meetings is on the inputs and an even spread of load, rather than on the output of the team. Secondly, we assume time not
working is wasted time. The more everybody does, the more we get done as a team. Right? This becomes obvious in situations where we have a team member who has an hour to spare in a day. Instead of being comfortable with letting that team member twiddle their thumbs, we will find something ‘more useful’ for them to do. Maybe start investigating a quick bug fix? These assumptions are based on reasonable efficiency drivers we have as human beings, but these assumptions don’t apply effectively to software teams.
Let’s examine them more deeply!

1. All team members contribute equally to the output of the team. Every team has one person who is the most skilled in that team. This gap in skill level is magnified by their experience with the code base and the product, which creates a very large discrepancy between the value of the code written by them and that written by the most junior person. This does not mean that junior devs are not valuable; it simply clarifies the type of work, and the value-add, that can be delivered at each tier of seniority. This is crucial because, by default, the most skilled senior dev acts as the bottleneck for the work the team can deliver as a whole, and even more so for the high-value work the team can deliver, which leads us to conclude that NOT all members’ contributions are equal.

2. Idle time is a waste of time. A team of people working together is not like swimmers in a pool, each swimming in their own lane. There are many interdependencies in the work the team members do, which means we will never have an even load across a single sprint, and some people will be idle from time to time. Forcing an even load is planning to fail. If the first assumption is wrong, and not all team members’ contributions are equal, we should instead be maximizing the contribution of the most skilled resource. This may be done in many ways; one is that they sit idle for 30 minutes between tasks, because picking up another task would make them late to do a
handover to the bottleneck resource. Sometimes, not working is the best contribution a team member can make!
How do we fix this? The answer is conceptually simple, but much harder to implement. What the team needs is more capacity at the neck of the bottle (the most senior dev), not in the body of the bottle (the team as a whole). Increasing the capacity of the body just puts more strain on the bottleneck; widening the bottleneck instead increases the output of the whole team. So the answer is to coordinate more effectively around the bottleneck, then to protect the team’s work from the impact of variation, and finally to accelerate the flow of work through the team. These three initiatives make up ‘Pace,’ an Agile-friendly framework which replaces Scrum in many teams. To take something tangible from this article, here are four immediately applicable Pace rules to minimize bottlenecks and maximize team performance:

1. Ensure there is a steady supply of work for the bottleneck. As the bottleneck controls our output, and time lost at the bottleneck is lost for the whole team, we ensure there is always valuable work for the bottleneck.

2. Offload the bottleneck from unnecessary tasks. All work that can be done by others is assigned to them, freeing the bottleneck to do only the work they must do. Watch for the efficiency trap of planning tasks for the bottleneck just because they are faster at them.

3. Plan others’ work around the bottleneck. The bottleneck’s work (including others’ work that requires the bottleneck) is planned first. Then, others’ work which does not interact with the bottleneck’s can be planned.

4. Ensure quality inputs into the bottleneck. To minimize the risk of bottleneck rework, extra quality steps such as tests or checklists are introduced immediately before the bottleneck.

Pace applies these proven rules and ensures they produce significant benefits, almost instantly and certainly over the longer term.
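The rules above can be sketched as a toy sprint planner: the bottleneck’s queue is planned first, only work that truly requires the bottleneck is assigned to them, and a sprint that would starve the bottleneck is flagged. Task fields, names and numbers are invented for illustration; Pace itself is a set of working rules, not code.

```python
# Toy sketch of Pace-style planning. Rule 3: the bottleneck's queue is
# planned first; rule 2: anything others can do is offloaded to them;
# rule 1: flag a sprint where the bottleneck would run out of valuable work.

def plan_sprint(tasks, bottleneck_capacity_hours):
    """tasks: list of dicts with 'name', 'hours' and 'needs_bottleneck'."""
    bottleneck_queue = [t for t in tasks if t["needs_bottleneck"]]
    others_queue = [t for t in tasks if not t["needs_bottleneck"]]
    planned = sum(t["hours"] for t in bottleneck_queue)
    return {
        "bottleneck": bottleneck_queue,   # planned first (rule 3)
        "others": others_queue,           # fitted around it (rule 2)
        "bottleneck_starved": planned < bottleneck_capacity_hours,  # rule 1
    }

tasks = [
    {"name": "design payment flow", "hours": 16, "needs_bottleneck": True},
    {"name": "fix minor UI bug", "hours": 4, "needs_bottleneck": False},
    {"name": "write release notes", "hours": 2, "needs_bottleneck": False},
]
plan = plan_sprint(tasks, bottleneck_capacity_hours=60)
```

Rule 4 (quality steps immediately before the bottleneck) would appear here as extra review tasks queued in front of the bottleneck’s work; it is omitted for brevity.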
Progressive delivery: Giving limited users a taste of new software before it’s widely deployed
BY JAKUB LEWKOWICZ
Sometimes continuous delivery just isn’t enough for organizations that are constantly testing and adding features, especially those that want to roll out features to progressively larger audiences. The answer to this is progressive delivery. The term progressive delivery was created in mid-2018 by Adam Zimman, the VP of platform at LaunchDarkly, and James Governor, analyst and co-founder at RedMonk, to expand on continuous delivery’s notion of separating deployments and releases. Organizations that adopted continuous delivery early on were primarily software-first organizations whose main delivery of value was through some sort of software package. Companies that didn’t have software as their only source of value faced challenges that weren’t really addressed by continuous delivery. “When you start talking to the business, continuous deployment and continuous delivery tend to sound a little bit scary. If you talk to the business and say, look, we aren’t going to decouple these things. You decide when the business activation happens and you can do that because something is very well tested and you can test in production, you could be confident about when the services are rolled out and this will de-risk what you’re doing, then it sounds like they’re back in control,” Governor said. All of the core testing concepts of progressive delivery existed in continuous delivery. Now, it’s a matter of what’s actually getting the focus, since there are a lot more things organizations can do while utilizing the cloud. Progressive delivery is a term that can be applied to a set of disciplines that people are already using now, whether that’s delivery and production excellence or organizations that are effectively testing and have a high level of confidence in their operations with a culture of troubleshooting and
observability. “If you look at Google, Amazon, and Microsoft from a public cloud perspective, they are all doing stuff like this even though they don’t always call it progressive delivery,” Governor said. “Once you start getting into banks and telcos, then it’s becoming a more generally applicable set of approaches and technologies.”

Progressive delivery really boils down to two core tenets: release progression and delegation, according to Zimman. Release progression is all about adjusting the number of users that are able to see or interact with new features and new code at a pace that is appropriate for one’s business. It’s also about expanding it out only to the appropriate parties at any given time as part of the testing. That could mean only offering the feature to early access beta users first and then
expanding it out to a trusted user group before expanding it out to everyone. Or maybe, the end state is to only give access to the people who are on the premium plan. “The thing that [continuous delivery] stopped short of was it was more of a binary mentality,” Zimman said. “So it was either on or off for everyone, as opposed to this notion that we’re really focused on this ability for increasing your blast radius.”
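Release progression as Zimman describes it can be pictured as an ordered set of widening audiences. A minimal sketch, with invented stage and group names (every user implicitly belongs to the final “everyone” audience):

```python
# Release-progression sketch: stages form an ordered, widening set of
# audiences. Stage and group names are invented for illustration.

STAGES = ["beta", "trusted", "everyone"]

def can_see(feature_stage, user_groups):
    """A user sees the feature once the rollout stage reaches any group
    they belong to (earlier stages stay included as the rollout widens)."""
    reached = set(STAGES[: STAGES.index(feature_stage) + 1])
    return bool(reached & (set(user_groups) | {"everyone"}))
```

Moving a feature from “beta” to “trusted” widens access without ever revoking it from the earlier group, which is exactly the expanding blast radius the article describes.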
Practicing release progression helps with the testing aspect of software delivery because the individual or team that built a new feature or a new widget can choose to deploy it and be the only ones that can interact with it. “Everybody is testing in production. Some people do it on purpose, but if you’re not testing in production on purpose, chances are that you are
going to be burned by a bad release or a lack of consistency between your test environment and your production environment.” The other core aspect, release delegation, focuses on shifting release control from the engineering and operations organization out to the business owner. “As soon as you move out of the realm of pure software organizations, in which their only value is through their software, you start recognizing that the business owners are actually looking for greater control and greater ability to impart change on digital experiences,” Zimman said. Business owners can then customize what features they want to release to certain customers, and even give end users the ability to toggle certain features on and off, all while having guardrails in place to make sure that releases meet an industry’s compliance requirements. A lot of companies are looking to do that autonomously, and not have to go back to the engineering or operations team for the ability to control features, especially when it comes to things like beta testing, A/B testing or experimentation, according to Zimman. Ravi Lachhman, an evangelist at Harness, said that progressive delivery comes from getting feedback, and this is especially important in the software development model of today, where a lot of the time you’re doing the unknown and you don’t know what the impact is going to be. One of the quintessential firms that has relied on feedback for progressive delivery is Facebook. “If you take it back 10 years ago, and you and I were downloading Facebook from the App Store, you and I would have two different download sizes and there’d be a reason for that. They’d be shipping different features for you and I,” Lachhman said. “For example, I really like fried chicken and I’m on several fried chicken groups on Facebook. They might say, you know what, target him with cook-specific things, and so how they started
doing it was with the concept of progressive delivery. We’re not going to give all the users the same thing, and we want to be able to make sure that we can retract those features if they’re not performing well, or we can roll those features out if they are doing well and determine how we provide feedback and how we choose to deploy across our entire user base or our entire infrastructure.” One common way that organizations are going about progressive delivery is by using feature flags. Feature flags give users fine-grained control over their deployments and remove the need to change config files, do blue-green deployments and perform rollbacks. A new functionality would be wrapped up in a feature flag and then deployed to a new version of the application to a single production environment, allowing only users from the designated canary group to access the new functionality. However, having too many feature flags at once can lead to sprawl and a difficulty in keeping track of what feature flags are out there. This prompted a demand for feature flag management solutions, which serve as a central spot for the management of the flags with a common API that tracks the whole feature flag life cycle — for example, what was the logic? How do you turn it on? How do you turn it off? Where did it go?
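The flag-wrapping pattern described above can be sketched in a few lines. The in-memory flag store here is hypothetical; real feature-flag management products expose similar checks through their SDKs, but the names and structure below are invented to show the pattern, not any vendor’s API.

```python
# Hypothetical in-memory feature-flag store. New functionality stays
# wrapped in the flag; only the canary group reaches it, and flipping
# 'enabled' off acts as a rollback without redeploying anything.

FLAGS = {
    "new-checkout": {"enabled": True, "canary_users": {"alice", "bob"}},
}

def is_enabled(flag_name, user):
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False  # unknown or switched-off flag: fall back to old path
    return user in flag["canary_users"]

def checkout(user):
    if is_enabled("new-checkout", user):
        return "new checkout flow"  # only the canary group sees this
    return "old checkout flow"      # everyone else keeps the old path
```

A flag management system then answers the lifecycle questions in the paragraph above (what is the logic, how is it turned on and off) by tracking and serving this state centrally instead of from a config file.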
Progressive delivery is maturing Progressive delivery is starting to become a more mature practice as vendors are coming and coalescing around it. Governor said that this is the stage when it gets interesting because if you have a set of practices and then package them as a platform, it becomes something that a broader set of constituents can use. In addition to new tooling, it’s also about shifting the delivery side of the equation mostly from the context of engineering readiness to business readiness. “We don’t want to make any
changes whatsoever to the deployment side of that equation because we want engineers to continue to develop at the pace of innovation, however fast they are comfortable with creating new technologies, features and code. They should continue to have that flexibility to do that creation and deployment into a production environment so that it is something where they’re able to test,” Zimman said. Now, the release side of the equation is really the delivery of value, Zimman noted. In the context of engineering readiness, something is released when it’s ready. Business readiness, on the other hand, puts the business in charge of when and how to release new feature functionality, or of releasing when customers are actually ready to adopt it. This might be great for a company running a deal-a-day site, because their value is changing on a daily cadence, Zimman said. Getting started with progressive delivery requires getting all aspects of the business on board. One has to talk to product management about experimentation with progressive delivery, talk to the business about delegating service activation and having delegated users, and then talk to software developers and explain that this technology won’t slow them down and will enable them to move more rapidly and with higher quality, Governor explained. “The question that I like to ask enterprises is: are you comfortable shipping code on a Friday afternoon?” Governor said. “There are some people that will be like, no, the last thing I want to do is roll something out at 5 PM on a Friday, because if something goes wrong, then there goes the weekend. Some organizations are like, ‘well, yeah, that’s where we’re getting to, we do enough testing,’ and really begin to say, yeah, we can ship a new service whenever. We have that confidence because we’ve done the engineering work and the cultural work in order to be able to do this. That’s progressive delivery.”
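One common mechanic behind the gradual widening this article describes is stable hash bucketing: each user lands in a fixed bucket, so raising the rollout percentage only ever adds users and nobody flips back and forth between releases. A sketch (hash choice and names are illustrative, not a specific product’s implementation):

```python
# Deterministic percentage rollout: hash each (flag, user) pair into a
# stable bucket 0-99. Raising the percentage only ever adds users, so a
# given user's experience never oscillates between old and new code paths.
import hashlib

def bucket(user_id, flag_name):
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_rollout(user_id, flag_name, percent):
    return bucket(user_id, flag_name) < percent

users = [f"user{i}" for i in range(1000)]
ten = {u for u in users if in_rollout(u, "new-ui", 10)}
fifty = {u for u in users if in_rollout(u, "new-ui", 50)}
# Widening from 10% to 50% keeps every user who already had the feature.
```

Keying the hash on both the flag name and the user ID means different features roll out to different slices of the user base rather than always to the same early cohort.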
INDUSTRY SPOTLIGHT
Introducing Progressive Delivery

Application success depends on delivery speed, product quality and perceived value, but it’s hard to get all three right. Faster release cycles often equate to lower code quality, and the “value” developers think they’re providing may fail completely from the end user’s point of view. Progressive Delivery helps by taking the guesswork out of what works, what doesn’t and why. “Historically, you’d deploy a new version of software and everybody would see it. For a lot of reasons, this ends up being less than optimal,” said John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag company. “One of the biggest reasons is risk, because if there’s a problem, error or suboptimal behavior, that’s going to be exposed to your entire user base.”
Content provided by SD Times and

What is Progressive Delivery?

Progressive Delivery refines the release step, separating it from deployment so the potential damage caused by a bug or poor application design can be limited to a much smaller user base than the entire population. What's more, teams can decide just how granular they want the level of control to be. Regardless of how granular they choose to get, the concept is the same: start small and then roll out to progressively larger cohorts. That way, the impact of a change can be assessed before it becomes a liability.

"The two core ideas driving Progressive Delivery are release progression and delegation," said Kodumal. "Release progression allows you to adjust the number of users and what they are able to see. Delegation progressively delegates the control of a feature to the owner who is most responsible for an outcome."

For example:
• Developers can decouple deploys from releases and test in production
• Data scientists can experiment and run A/B tests
• Sales and customer success can ensure appropriate entitlement and manage plans
• Operations can invoke kill switches and safety valves
• Security can ensure role-based access control and compliance
• Product owners can do dark launches and beta testing
• Marketing can synchronize launches and target markets

There are two ways to affect Progressive Delivery: using feature flags or canary deployments. "With feature flags, you can observe a change over a longer period of time and compartmentalize the change. Feature flags also provide the smallest level of granularity, such as an individual commit, an individual developer's work or the most engaged customers," said Kodumal. "Canary releases are really just the aggregation of all the changes from the last deployment to the new deployment, but with either mechanism, you can do something as simple as a percentage rollout."

Feature management is essential to achieve Progressive Delivery because
canary releases alone can be too constraining. It's almost impossible to do several canary releases simultaneously, but with feature management, changes can be segregated at any level of granularity: squad-based, team member-based, or commit-based, for example. The isolated changes can then be measured independently.

"Feature management essentially gives you the benefit of a canary process, but on a per-change basis as opposed to a per-deploy basis," said Kodumal. "It gives you more ability to parallelize across a bigger team and pinpoint what changes connect to a positive or negative impact."

With a canary release, it also may not be possible to identify the cause of a negative impact. However, if the change is guarded by a feature flag, then the feature flag can simply be turned off with the rest of the deployment kept live. Organizations with hundreds of engineers working in parallel find this level of precision greatly beneficial because they can control releases very precisely without adding an unwieldy level of complexity that's hard to manage. Product quality also tends to improve.
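The percentage rollout Kodumal mentions is typically implemented by hashing a user key into a stable bucket, so a given user always gets the same answer and raising the percentage only ever adds users to the cohort. A minimal sketch; the flag and user names are invented for illustration, not any particular vendor's API:

```python
import hashlib

def in_rollout(flag_name: str, user_id: str, percentage: float) -> bool:
    """Deterministically bucket a user into a 0-100 range for one flag.

    The same user always lands in the same bucket, so widening the
    percentage grows the cohort without reshuffling existing users.
    """
    key = f"{flag_name}:{user_id}".encode()
    digest = hashlib.sha256(key).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100  # 0.00 .. 99.99
    return bucket < percentage

# Start with a small cohort, then widen it without redeploying:
print(in_rollout("new-checkout", "user-42", 5))    # small canary cohort
print(in_rollout("new-checkout", "user-42", 100))  # full rollout: always True
```

Keying the hash on both flag and user means different flags slice the user base into independent cohorts, which is what makes many parallel experiments possible.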
Continuous Delivery and Progressive Delivery go hand in hand

Many organizations have adopted CI/CD for competitiveness reasons. Typically, their industry has been disrupted by cloud-native companies that deliver software orders of magnitude faster. In fact, elite companies ship software 106 times faster and their applications fail seven times less often than those of slower-moving companies. They're also doing 208 times more code deployments and can recover from incidents 2,604 times faster.

While continuous deployment remains out of reach for most organizations, they're still able to hone their DevOps practices and CI/CD pipelines. Progressive Delivery is simply the next step. In fact, Continuous Delivery and Progressive Delivery are not mutually exclusive. Continuous Delivery accelerates release frequency and it can help improve product quality. However, in the absence of Progressive Delivery, Continuous Delivery can't guarantee that the most recent updates will resonate with customers.

"Most people think of Continuous Delivery as the steps up to deployment. Progressive Delivery expands the life cycle by allowing you to minimize changes and react quickly to change beyond the deployment phase and into the runtime phase," said Kodumal. "Continuous Delivery allows you to achieve a faster cycle time to production by reducing the size of changes that are pushed to production. Progressive Delivery is a dynamic version of that."

In fact, the organizations that are in the best position to take advantage of Progressive Delivery are those that are already doing Continuous Delivery, but it isn't an absolute requirement. For example, if an organization releases software every quarter, it can minimize the risk associated with code changes if the changes are protected by feature flags.

"Regardless of where teams are in their journey, they all tend to want the
same thing,” said Kodumal. “They want to deploy when they want and release when their customers are ready.” Adding Progressive Delivery to Continuous Delivery enables teams to push a change to production quickly and measure the impacts of that change right away. The rapid feedback cycles allow teams to have more confidence in the changes they’re making.
Benefits of Progressive Delivery

A surprising benefit of Progressive Delivery is that it enables teams who have not yet adopted CI/CD to adopt it safely. Progressive Delivery also enables DevOps because it provides collaborative capabilities and helps increase release velocity without interfering with system reliability.

"When software teams adopt DevOps and CI/CD, they often see the speed benefit, but they're less confident about code quality, user perceptions and whether they've done enough to minimize the risk of security vulnerabilities," said Kodumal. "Progressive Delivery enables you to deploy new code faster and with an unprecedented level of confidence because it gives you the safeguards you need to minimize risk potential."

Risk is the main reason why highly regulated companies have been slow to adopt even Agile practices, let alone DevOps or CI/CD. They want assurances that if their release velocity increases, the price of that speed won't be incidents, outages, lawsuits or regulatory fines. Using Progressive Delivery, they're able to limit risk exposure in a manner that's auditable while delivering value faster to customers.

The operative word when it comes to Progressive Delivery is control. Using feature flags, organizations can control:
• Which capabilities groups or even individual users can access and experience
• When those changes are delivered to different customers
• Who can release features
• The impact of less-than-optimal code

The core ideas driving Progressive Delivery are release progression and delegation, according to John Kodumal, CTO of LaunchDarkly.

In fact, Microsoft reduces its launch risks using ring deployments, which is a Progressive Delivery technique. Specifically, it starts with a core group (which is actually a canary deployment). If the deployment meets the target performance criteria, then it moves to the next ring, which involves more users, and so on. GitHub starts with "staff ships," which are also canary deployments. That
way if a deployment misses the mark, it can be fixed before any customers see it. If the deployment is successful, then the rollout continues among customers.

Software teams deploying microservices applications to Kubernetes clusters can take advantage of Progressive Delivery using service meshes or feature management. If using a service mesh, the process starts with a blue-green deployment in which an old version of the software and a new version of the software run on separate servers. The service mesh runs a canary test by routing a subset of users to the new application and the rest of the users to the old application. If the canary test fails, then all traffic is routed to the old version. Feature management provides more control.
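The canary flow a service mesh performs boils down to two pieces of logic: a weighted router that sends a fraction of traffic to the new version, and a promote-or-rollback decision based on observed health. This is an illustrative sketch, not any particular mesh's API; the error rates and tolerance are made-up numbers:

```python
import random

def route(canary_weight: float) -> str:
    """Send a weighted fraction of requests to the new version."""
    return "new" if random.random() < canary_weight else "old"

def evaluate_canary(error_rate_new: float, error_rate_old: float,
                    tolerance: float = 0.01) -> str:
    """Promote if the canary is no worse than the old version
    (within tolerance); otherwise route all traffic back."""
    if error_rate_new <= error_rate_old + tolerance:
        return "promote"
    return "rollback"

print(evaluate_canary(error_rate_new=0.002, error_rate_old=0.003))  # promote
print(evaluate_canary(error_rate_new=0.080, error_rate_old=0.003))  # rollback
```

A real mesh makes the routing decision per request at the proxy layer and feeds the evaluation from live telemetry; the decision rule itself is this simple.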
The best way to adopt Progressive Delivery

Feature flagging software is readily available, but as with any tool, the results depend on a combination of people, processes and tools. "If you're in the planning phase of a Progressive Delivery practice, make sure it's a collaborative exercise with the team," said Kodumal. "Your PM, your designer, your engineering — collectively the trio that is in charge of delivering a new change or a new piece of functionality — should talk about how to expose that feature to users."

For example, they might decide to roll a change out gradually over a 10-day period in 5% increments on a daily basis. If any of the metrics indicate a negative impact, then the change could simply be rolled back. Alternatively, if the situation involves a rebrand or other initiative involving non-software assets such as documentation or a marketing site, they might choose a rollout strategy.

"You want to bring culture and process change thinking into your planning phase so you're clear about why you're releasing a change and what would be considered a positive impact. You also want to consider the metrics so you can understand whether the release is working the way you expect it to," said Kodumal. "A feature management tool enables you to do all that quickly and efficiently."

Feature flags, as seen in LaunchDarkly's dashboard, allow organizations to test functionality in smaller cohorts before wide release.

As with many things, teams can decide whether to use open-source tools, build their own tool or license a tool from a company such as LaunchDarkly. The teams using open source or homegrown tools tend to discover they need enterprise capabilities such as a collaboration layer, security and permissions. "Ultimately, Progressive Delivery software controls how users experience your product, what features they're seeing and what features they're not seeing. It's a mission critical piece of your stack," said Kodumal.
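A gradual plan of that shape, for instance daily 5% increments with a rollback on a negative metric, can be simulated in a few lines. The threshold and error rates here are invented for illustration; a real rollout would read metrics from monitoring:

```python
def run_rollout(daily_error_rates, step=5, threshold=0.05):
    """Advance a flag by `step` percent per day; drop to 0 on a bad metric.

    Returns the rollout percentage after each day. Rolling back means
    turning the flag off — the deployment itself stays live.
    """
    percentage, history = 0, []
    for error_rate in daily_error_rates:
        if error_rate > threshold:   # negative impact observed
            percentage = 0           # flag off, change withdrawn
            history.append(percentage)
            break
        percentage = min(100, percentage + step)
        history.append(percentage)
    return history

# Healthy metrics for three days, then a spike on day four:
print(run_rollout([0.01, 0.02, 0.01, 0.20]))  # [5, 10, 15, 0]
```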
Transform your application portfolio

Companies that have become adept at Progressive Delivery aren't just using feature flags to minimize risks. They're also using them as a means of controlling software over the long term. That way, they can have one codebase that delivers multiple product experiences. For example, companies offering different subscription or on-premises product levels might have gold, silver, and bronze plans, each of which offers different functionality or capabilities that can easily be controlled using feature flags. Without feature flags, there may be multiple code branches or arcane controls that determine what an individual user sees.

"Customers are now running longer-term experiments, doing A/B testing, optimization and personalization. It's amazing how many capabilities you can unlock once you have a good feature management practice in place," said Kodumal. "Once you begin to practice feature management, it fundamentally transforms the way you build software for the better."

Learn more at www.launchdarkly.com. z
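Plan-based gating like the gold/silver/bronze example might be sketched as follows; the plan names and feature keys are hypothetical, not any particular product's configuration:

```python
# One codebase, multiple product experiences: features gated per plan.
PLAN_FEATURES = {
    "bronze": {"dashboard"},
    "silver": {"dashboard", "export"},
    "gold":   {"dashboard", "export", "sso", "audit-log"},
}

def is_enabled(feature: str, plan: str) -> bool:
    """Entitlement check: unknown plans get no features."""
    return feature in PLAN_FEATURES.get(plan, set())

print(is_enabled("export", "silver"))  # True
print(is_enabled("sso", "bronze"))     # False
```

The same single codebase serves every tier; changing what a customer sees is a data change, not a branch or a redeploy.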
Confusing the What with the How

BY GEORGE TILLMANN
Imagine you are building a house. You get all your tools, lay out the lumber, and start constructing the first room. As you are building the room, you decide if it's a living room, or a kitchen, or a bathroom. When you finish the first room you start on the second, again deciding, as you build, what kind of room it will be.

Let's face it, no one would do that. A rational person would first figure out what the house should look like, the number of rooms it needs to contain, how the rooms are connected, etc. When the plans for the house are complete, then the correct amount of supplies can be delivered, the tools taken out, and construction begun. Architects work with paper and pen to plan the house; then, and only then, carpenters work with tools and lumber to build it. Everyone associated with home building knows that you figure out what is wanted before determining how to build it.

Arguably, the fundamental principle
of systems development (FPSD) is also to figure out what the system is supposed to do before determining how to do it. What does the user want? What does the system have to do? What should system output look like? When the what is understood, then the how can start. How will the system do it? How should the system generate needed output? The how is the way users get what they want.

The IT industry has long recognized that confusing the what with the how is a major cause of project failure, resulting in user dissatisfaction from poor or absent functionality, cost overruns, and/or missed schedules.

George Tillmann is a retired programmer, analyst, systems and programming manager, and CIO. This article is adapted from his book Project Management Scholia: Recognizing and Avoiding Project Management's Biggest Mistakes (Stockbridge Press, 2019). He can be reached at georgetillmann@gmx.com.

Ensuring that the what is completely understood before attempting the how is so important that it was engraved into the dominant system development life cycle methodology (SDLC) of the time—the waterfall approach. For those of you who just recently moved out of your cave, the waterfall SDLC consists of a series of sequential phases. Each phase is only executed once, at the completion of the previous phase. A simple waterfall approach might consist of five phases: analysis, design, coding, testing, and installation. (See Fig. 1) In this approach, the analysis phase is completed before the design phase starts. The same is true for the other phases as well. The good news is that with the
waterfall approach, systems developers did not have to remember to put the what before the how because their SDLC took care of it for them. Then iterative and/or incremental (I-I) development came along and, the rest, as they say, got a little dicey. Although there are dozens of I-I approaches, they are all variations of the same theme: make systems development a series of small iterative steps, each of which focuses on a small portion of the overall what and an equally small portion of the how. In each step, create just a small incremental part of the system to see how well it works. Vendors like to depict I-I development as a spiral rather than a waterfall, showing the iterative and incremental nature of these approaches. (See Fig. 2)
Fig. 1
Fig. 2
Using an I-I approach such as prototyping, a session might consist of a developer sitting down with a user at a computer. The user tells the developer what is needed, and the developer codes a simple solution on the spot. The user can then react to the prototype, expanding and correcting it where necessary, until it is acceptable. This is obviously a very different way to develop systems than the waterfall approach. What might not be so obvious is that the various I-I methodologies and techniques, such as rapid application development, prototyping, continuous improvement, joint application development, Agile development, and so on, still involve figuring out what is wanted before determining how to do it. Rather than looking at I-I as a picturesque spiral, an I-I approach can be viewed as a string (or vector for you programming buffs) of waterfall phases where each cycle consists of a sequence of mini-analysis, mini-design, etc. phases. However, rather than each phase taking six months they could be no longer than six weeks, or six days, or six hours. (See Fig. 3)
Fig. 3

It might take a half-dozen cycles of sitting down with a user to figure out the requirements (the what) and then coding the results (the how) before showing them to the user for additional information or changes, but the principle is always the same—understand the what before determining the how. However, too many developers throw out the baby with the bathwater. In rejecting the waterfall approach, they mistakenly ignore the basic what before the how — the FPSD. The result is the reappearance of that pre-waterfall problem of project failure resulting in user dissatisfaction from poor or absent functionality, cost overruns, and/or missed schedules.

Why? How could something so well understood as 'put the what before how' be so ignored? Here are three common reasons for this troublesome behavior.

Reason 1. Impatience (Excited to Get Started): Many in IT (the author included) started their careers as programmers. Programming is a common entry-level position in many system development organizations. It is understandable that new (and not so new) project team members are anxious to start coding right away. In their haste, FPSD is not so much ignored as shortchanged—corners cut, important (sometimes annoying) users not interviewed, schedules compressed, etc. The result is an incomplete understanding of exactly what the users want.

Reason 2. Not Understanding the Value of Analysis: Analysis, or whatever you call it (requirements, logical design, system definition, etc.), is the process of learning from users their requirements (the what) and then documenting that information as input to system design (the how). However, analysis has endured some heavy criticism over the past few decades. Some feel that it is overly laborious, time consuming, error prone, or just not needed at all. The result can be an incomplete understanding of what the new system needs to do.

Reason 3. Confusion about the FPSD and the Waterfall Approach: The waterfall SDLC is viewed by many as a relic of IT's past, offering little for today's developers (not entirely true, the waterfall approach still has its uses, but that is a subject for another time). Unfortunately, the what-how distinction is closely tied to this approach, so any rejection of the waterfall approach contributes to skepticism regarding any what-how talk.

What's a project manager to do? How do you ensure the what is understood before the how is undertaken? There are three things the project manager can do.

Training – The problem with most systems development rules and guidelines is that the reason they should be followed is not always obvious. Or has been forgotten. Systems developers tend to be an independent and skeptical bunch. If it was easy to get them to do something, then documentation would be robust and project managers would earn half what they do now because their hardest job would have disappeared. No, managing a team of developers is like teaching an ethics class in Congress—difficult, underappreciated, and often exhausting. The one saving grace is that systems developers like to create great systems. The vast majority of developers take great pride in doing a good job. The project manager needs to tap into that enthusiasm. The easiest way to get systems developers to do something (other than forbidding them from doing it) is to convince them that doing it is in their and the project's interest, and that separating the what from the how is in that category.

But wait. You can hear the challenge right now. "We are using Agile development, so we don't need to separate the what from the how." The answer is that the purpose of
the what-how distinction is not to create separate development phases, but is to make developers think before they act—to ensure that before the design hits the page or a line of code is entered on a screen, the problem is mulled over in those high-priced heads. Is this approach counter to Agile development or any iterative-incremental approach? No. Read the books and vendor manuals more closely.
There is not one author or vendor who believes you can skip the thinking if you use their tool or technique. The problem is that many of them are not sufficiently vocal about the value of thinking before acting.

Discipline – The problem is usually not knowing (most know to complete the what before starting the how); the problem is doing. A good way to get team members to do the right thing is to codify the desired behavior before the project kicks off. Rules, standards, and strong suggestions presented before a project starts are more likely to be accepted and followed by team members than mid-project changes, which can be seen as criticisms of team member behavior. The project manager needs to lay out the project rules of engagement, including such things as the SDLC method or approach to follow, the techniques and tools to use, documentation to produce, etc., all focused on ensuring the what is completely understood before starting the how. Then comes the hardest part of the entire project—enforcement. The project manager needs to ensure that the rules of engagement are followed. Failure to enforce project rules can undercut the project manager's credibility and authority. A few public executions early in the project do wonders for maintaining that project manager mystique.

Collaboration – Want to influence a systems developer? Need to convince developers to follow systems development best practices? Then have him or her collaboratively meet with other systems developers. The team walk-through is a great vehicle for this. In a team walk-through, the developer presents, demonstrates, and defends his or her work, not to users, but to other team members. The developer walks the other team members through the user requests, his or her analysis of those requests, the solution to the requests, and finally any demonstrable work products. This friendly IT environment is a useful way to test whether the developer's work is thorough, efficient, and complete. This should be a slam-dunk. Team walk-throughs can be very motivational, inspiring (shaming) underperforming developers into producing better results while providing overachievers an opportunity to show off. In both cases, the user, the project, and the project manager win. z
The Little Book of Big Mistakes and How to Avoid Them
Project Management Scholia focuses on the 17 most consequential reasons IT projects fail and presents ways the project manager can avoid these problems by reading the danger signs and taking timely corrective action. The book dives into the often painful lessons learned — not from the library or the classroom — but from the corporate trenches of real-world systems development.
By George Tillmann
Available on Amazon
George Tillmann is a retired programmer, analyst, management consultant, CIO, and author.
Reach software development managers the way they prefer to be reached A recent survey of SD Times print and digital subscribers revealed that their number one choice for receiving marketing information from software providers is from advertising in SD Times. Software, DevOps and application development managers at large companies need a wide-angle view of industry trends and what they mean to them. That’s why they read and rely on SD Times.
Isn’t it time you revisited SD Times as part of your marketing campaigns? For advertising opportunities, contact SD Times Publisher David Lyman +1-978-465-2351 • dlyman@d2emerge.com
INDUSTRY SPOTLIGHT
Agile Fuels Value Stream Management

Software teams unknowingly paved the way to modern business by operationalizing Agile practices. Since the dawn of the millennium, they've been working cross-functionally to release better quality software faster. In the meantime, C-suite executives have been warned that their organizations need to become agile just to survive in today's era of digital disruption. More recently, businesses have been executing digital transformation initiatives that change the way the company thinks, operates and engages with customers and partners. Now, they're embracing value stream management to ensure that they're actually delivering value.

"Digital transformation has accelerated the need for organizational agility, but if you're not delivering the kind of value customers expect, your ROI will suffer," said Laureen Knudsen, chief transformation officer of Broadcom's enterprise software division. "Value stream management helps ensure that you're focusing on what matters most to your customers."

In a recent BizOps Coalition survey, 83% of business and technology leaders said that organizational agility is critical for their businesses in 2021, but only half rate their digital maturity as good or excellent. Similarly, 95% believe that digital transformation is about business outcomes, but 62% admit their companies are adopting technology for technology's sake. Worse, only 50% of business technology requests cite actual business objectives. Contrast that with the operating norms of the market movers and shakers and the fundamental issue is that the company is operating in silos, which slows value delivery and impedes data flows.
What’s Driving the Need for Value Stream Management Countless cloud-native companies have disrupted entire industries because they have a more compelling value proposition. For example, why wait for a taxi if a driver can be at your door in less than five minutes? Digital disruptors are systematically eliminating friction in value chains. Importantly, they’re also disrupting themselves because soon after their innovation changes the rules of the game, competitors adopt the same strategy. Since no one innovation will continue to provide a sustainable competitive advantage forever, they experiment fiercely, failing from time to time, in an effort to discover the next thing that will move markets. “Companies are finding that they have more people devoted to software development than any other part of the company now because with digital transformation, every company is a software company,” said Knudsen. “John Deere has more software developers than hardware engineers. Even local restaurants now have apps.” However, not all organizations are able to provide the same level of value because some of their operating models are outdated. They’re clinging to rigid hierarchies and siloed operations when value streams must flow throughout the organization.
end-to-end,” said Knudsen. “It’s looking at entire processes and really streamlining all of that. You start with the value you want to provide your customers and involve all of the pieces of your organization that will be required to deliver that value. The company wins and the customer wins.” Meanwhile, the entire company should have adopted an ethos of continuous improvement that applies to everything — careers, operational efficiency, customer relationships, growth and even the definition of success. “You need to get comfortable with the fact that even at the strategic level, you don’t know the fastest path to value, so you want to get something in the hands of customers as quickly as possible so you can get that feedback which will tell you you’re on the right track,” said Knudsen. “You also need to keep your eye on the market because in 2021, we’re going to see a lot of new things we haven’t seen before.” Surprisingly, 97% of survey respondents claim to have agile teams and great agile tools, but only 3% running their business on data. Nearly seven in 10 are unable to create business metrics from their data and three quarters don’t have real-time data available for timely decision-making. Organizational leaders can sense there are bottlenecks and constraints impeding value delivery but it’s difficult to pinpoint where they are without data. A value stream management solution can help provide the kind of visibility modern organizations need to make accurate, timely decisions.
How to Affect Value Stream Management Organizational agility, digital transformation and value stream management all require a cross-functional operating model that encourages cooperation and collaboration. “You have to optimize processes
Master Value Stream Management

Join Broadcom for the Value Stream Management Summit online, June 23, which will address the strategic and tactical issues today's companies must master. www.vsmsummit.com z
DEVOPS WATCH
Four key metrics for measuring productivity BY JAKUB LEWKOWICZ
A recent report found that the four best ways to measure DevOps productivity are to look at duration, mean time to recovery, throughput and success rate. According to the State of Software Delivery report from CI/CD platform provider CircleCI, companies that optimize on those four metrics are some of the most successful software companies out there.

In a webinar with SD Times on how to help teams build better software faster, Ron Powell, technical content marketing manager at CircleCI, and Sergiy Tupchiy, software engineer at Contentful, looked at the four metrics and what they really look like in practice.

"It does look different when you try to actually apply the measurement of these to your case. And then it also looks different when you try to describe the value of optimizing on these four metrics to the rest of your organization," Powell explained in the webinar.

The first metric, duration, measures the length of a workflow. While teams want to be around five minutes, Powell found the 50th percentile looks something closer to about 10 minutes. "If you're really well above 10 minutes, I think that that's probably a pretty
good place to start, trying to find ways to make improvements,” Powell said. Powell said the next category, mean time to recovery, is the most valuable metric and the one organizations should focus on lowering. Companies in the 50th percentile have a mean time to recover of under an hour. For the third category, Powell explained monitoring throughput is more valuable than trying to match some other organization that builds
In other DevOps News n Digital.ai recently released the Digital Transformation Progress report to look at the state of digital transformations and what can be done to improve outcomes. The report found a majority of leaders are looking for more civility into their business planning processes. Additionally, 94% want to see software development and delivery better connected with business objectives. “Most of today’s Agile and DevOps tools are designed for the workgroup, making visibility and alignment fairly easy to achieve at the team level. However, scaling to the enterprise is far more complex, as organizations must break down silos and manage teams of teams that embrace different cultures, tools, and systems,” said Derek Langone, head of strategic transformation at Digital.ai. Listen to Derek's conversation with SD Times editor-in-chief David Rubinstein on the "What the Dev?" podcast, available on buzzsprout, Spotify or Apple Podcast. n GrammaTech has updated CodeSonar to provide deeper integration of static application security testing (SAST) within
software in a different paradigm. Lastly, success rate depends on what is being measured. Powell said companies should be looking at 90 to 95% success rate for their main branches. The success rate can dip lower for the feature branches that deal with building cutting-edge software. There are some proven ways in which to create a more successful DevOps environment that were outlined by Contentful’s Tupchiy. One way is to form an internal engineering productivity team and to standardize data sources to define what is applicable and what can be scrapped. Engineering teams should also act on the insights and tie in the KPI data with other engineering business systems. To learn more about the practices and metrics that will help teams build better software and faster in 2021 watch the webinar in the resources section of sdtimes.com. z
DevOps pipelines. According to the company, embedding SAST into CI/CD pipelines is critical for shifting left and baking security into DevOps workflows. The latest release features visualization and analysis enhancements, and GitLab integration. n Harness announced new end-to-end integrations to help DevOps and engineering teams with their multi-cloud software deployments. The software delivery platform now integrates with Amazon GovCloud, Microsoft Azure and Google Cloud Platform. “With these integrations, Harness is answering that call, providing an abstraction layer between cloud deployment complexity and developers, so every company can deliver next-generation software faster than ever,” said Jyoti Bansal, CEO and co-founder of Harness. n Software AG announced new DevOps capabilities in its webMethods platform to help businesses with their digital transformations. The new and improved DevOps capabilities include: APIs for all API management and microservices deployment functionality, containerized runtimes, helm charts and CI/CD samples to simplify update rollout. z
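The branch success-rate and recovery-time targets Powell describes are straightforward to compute from a team’s own pipeline and incident records. Here is a minimal Python sketch; the record shapes are invented for illustration and do not come from any particular CI or incident tool:

```python
from datetime import datetime, timedelta

def success_rate(runs):
    """Fraction of pipeline runs that succeeded, e.g. on the main branch."""
    if not runs:
        return 0.0
    return sum(1 for r in runs if r["status"] == "success") / len(runs)

def mean_time_to_recovery(incidents):
    """Average time between an incident opening and service restoration."""
    durations = [i["resolved"] - i["opened"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical records: 19 green runs and 1 red run; two resolved incidents.
runs = [{"status": "success"}] * 19 + [{"status": "failed"}]
incidents = [
    {"opened": datetime(2021, 5, 3, 9, 0), "resolved": datetime(2021, 5, 3, 9, 40)},
    {"opened": datetime(2021, 5, 7, 14, 0), "resolved": datetime(2021, 5, 7, 15, 10)},
]

print(f"main-branch success rate: {success_rate(runs):.0%}")  # 95%
print(f"MTTR: {mean_time_to_recovery(incidents)}")            # 0:55:00
```

At 95% the sample team sits inside Powell’s 90–95% band for main branches, and a 55-minute MTTR lands just under the one-hour mark he cites for the 50th percentile.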
SD Times
May 2021
www.sdtimes.com
BY CHRISTINA CARDOZA
It has been 20 years since the Manifesto for Agile Software Development was published, and even longer since the idea was first formed, and yet there still isn’t a clear understanding in the industry of what Agile really is.
“Far too many teams that claim to be ‘Agile’ are not. I’ve had people — with a straight face — tell me they are ‘Agile’ because they do a few Scrum practices and use a ticketing tool. There is an awful lot of misunderstanding about Agile,” said Andy Hunt, one of the authors of the manifesto and co-author of the book “The Pragmatic Programmer.” According to Dave Thomas, co-author of “The Pragmatic Programmer” and the Agile Manifesto, just the way Agile is used in conversations today is wrong. He explained Agile is an adjective, not a noun, and while the difference may be picky, it’s also really profound. “The
whole essence of the manifesto is that everything changes, and change is inevitable. And yet, once you start talking about ‘Agile’ as a thing, then you’ve frozen it,” said Thomas. However, Alistair Cockburn, a computer scientist and another co-author of the manifesto, believes that Agile being misunderstood is actually a good thing. “If you have a good idea, it will either get ignored or misinterpreted, misused, misrepresented, and misappropriated…The fact that people have misused the word Agile for me is a sign of success. It’s normal human behavior.” continued on page 24 >
Looking back at the manifesto
The Agile Manifesto was created to uncover better ways of working, developing and delivering software. It includes four core values and 12 principles. From Feb. 11-13, 2001, 17 thought leaders met at the Snowbird ski lodge in Utah to try to find some common ground on software development. That common ground became known as the Manifesto for Agile Software Development. At the time, those 17 software developers had no idea what was to come from the industry or how Agile would even play out over the next 20 years. “The Agile Manifesto fundamentally changed or incrementally changed how people approached work and focused on the customer,” said Dave West, CEO and product owner at Scrum.org. “Twenty-five to thirty years ago, I worked for an insurance company in the city of London and we didn’t care about the insurance. We didn’t care about the customer. We just wrote code on the specs. The fact that today we have customer collaboration, the fact we now respond to change, all those behaviors have resulted in a lot of fabulous software.” The world, however, is so different from when the Agile Manifesto was written that it has some wondering if it still is relevant in today’s modern, digital world. According to Robert Martin, one of the authors of the Agile Manifesto and author of “Clean Agile: Back to Basics,” the manifesto itself is just a marker in time. “It does not need any augmentation because it is not a living, evolving document. It is just something that was said 20 years ago. The truth of what was said in the document remains true today,” he said. Fellow manifesto co-author Dave Thomas believes that the manifesto actually applies even more today as software is moving faster than ever, people are adapting to remote work, getting feedback and adjusting as they go. “It’s becoming clear you can’t plan a year out anymore. 
You are lucky if you can plan a month out, and so you are constantly going to be juggling and constantly going to be reprioritizing. The only way to do that is if you have the feedback in place already to tell you what the impact is going to be of this decision versus that decision,” said Thomas. z
< continued from page 23
What is Agile?
One thing that is missing from the Agile Manifesto is an actual definition of Agile. In one of Hunt’s books, “Practices of an Agile Developer,” he defined Agile development as an approach that “uses feedback to make constant adjustment in a highly collaborative environment.” “I think that’s pretty much spot on. It’s all about feedback and continuous adjustments. It’s not about standup meetings, or tickets or kanban boards, or story points,” said Hunt. But Thomas believes there is a good reason a definition wasn’t included in the manifesto, and that’s because Agile is contextual. “Agile has to be personal to a particular team, in a particular environment, probably on a particular project because different projects will have different ways of working,” he noted. “You cannot go and buy a pound of Agile somewhere. It doesn’t exist, and neither can a team go and buy a two-day training course on how to be Agile.” Thomas does note he doesn’t mind Hunt’s definition of Agile because you have to work at it. “None of this can be received knowledge. None of it can be defined because it’s all contextual. The way of expressing the values that we had was so powerful because it allowed it to be contextual,” he said. Dave West, CEO and product owner at Scrum.org, believes the real reason people don’t understand Agile is because of social systems, not the practice, the actual
work or even the problems they are looking to solve. “Over and over again, we see this sort of pattern that agility is undermined not by the work, not even by the skills of the practitioners, but by the social systems and the context that those practitioners are working in...Bosses want to know when something is going to be done, but when you ask them what it is they want you to deliver, they can’t tell you that...but they want to know when it is going to be done,” he explained. If we really want to take the opportunity that Agile presented, we need to change the system agility runs within, according to West. For instance, he said while Fidelity was one of the first companies to ever do Scrum, they are still wrestling with the ideas around it today because they didn’t necessarily change the way they incentivize people.
It’s about the core principles
To get back to the true meaning of Agile, we need to get away from the terms and get back to the four core principles, according to Danny Presten, founder of the Agile.ai community. Delivering incremental value, having a good look at the work, being able to prioritize and improve cycles is “what really makes Agile hum. It’s not the terms. The more people get focused on the principles and the less they are focused on the terms, the better Agile will be,” said Presten. A great starting point for teams that have only experienced waterfall or haven’t had as much success with soft-
continued on page 26 >
INDUSTRY SPOTLIGHT
Maximizing the ROI of Agile efforts across the enterprise starts with shared vision
Content provided by SD Times and Planview
Agile is hard. After over 20 years, organizations are still failing to realize the full benefits of Agile at scale. They’ve seen the impact of Agile at the team level, being able to improve productivity, decrease risks and costs, and increase revenue, but they are failing to maximize those benefits across the enterprise. “As your organization starts to grow, you’re trying to accomplish so much more. So how do you do that? It’s not just by adding more Agile teams across the business. It’s also not just about speed or delivery velocity. We’ve learned along the way that it’s about creating a shared vision across the business and building the right things,” said Brook Appelbaum, director of Product Marketing at portfolio management and work management company Planview. Organizations that have seen success at the team level often make the mistake of simply adding more Agile project teams. Sure, they end up with a bunch of fast-moving teams, but those teams are operating in silos, without the necessary coordination and connection. “They are not really realizing the true benefits of Agile,” Appelbaum explained. To be successful at scale, organizations have to take the learnings of their Agile teams and apply them at scale: incorporating feedback loops, iterating and coordinating work across a much more complex structure — and key to all of this is a shared vision. By connecting teams and enabling them to see how their work fits into the bigger picture, Appelbaum believes organizations can improve their ability to plan, coordinate, improve their time to market, and gain a host of other Agile benefits. “As you start to grow, some of the building blocks of Agile — transparency, collaboration, continuous improvement — become so much more important because you have to be really thoughtful about how you engage, explain, coordinate, share, and iterate to create value at every turn,” she said.
Calculating your Agile ROI
In an effort to help organizations better understand their Agile efforts at scale, Planview recently released the Agile Transformation ROI Calculator. The calculator aims to shine a light on Agile’s art of the possible. It is designed as a way to facilitate hard conversations around Agile transformation successes and failures, and provides organizations with a way to look at cost optimization, reducing time to market, and improving employee job satisfaction and productivity. “Often when an Agile Transformation isn’t going as expected, it’s due to reasons that are harder to measure: cultural mismatches, ‘performative’ Agile or lack of executive commitment,” said Appelbaum. “The ROI calculator provides critical information that shines a light on some of these challenges, and provides real metrics that can help organizations carve a path forward to success.” The ROI calculator highlights metrics that include how long it takes to get a major initiative or epic to completion, and if those times can be improved. It sheds light on whether teams are working in isolation or if they’re able to communicate, coordinate, plan and work together as a cross-functional group of teams. Additionally, it estimates job satisfaction and reduced employee turnover for Agile team members that are able to
utilize their favorite Agile team tools. For example, different teams like to work with different tools; Planview provides integration capabilities so teams can still work within their existing processes or tools of choice without disrupting their workflow or productivity. As a result, organizations can get that consolidated view of what’s going on across the business, while keeping their key developer talent happy. However, Appelbaum also noted that, “Organizations must begin to identify and address some of the challenges that are preventing them from realizing true success with Agile at scale, whether cultural, procedural, or even technical: for example, are the products you're using limiting your ability to scale? It’s paramount to figure out how to connect [teams], how to get them to plan together, synchronize and collaborate together because that’s really what it’s all about, being able to take these big, important, complex initiatives and decompose them into bite-size chunks of incremental value delivery.” Planview plans to bolster the calculator in the future. “One of the things we hesitate to talk about is Agile risk and failure, but it does happen...there is still a level of failure in every type of project and we are looking to add some metrics around those missteps as well,” said Appelbaum. The calculator was designed as a way of facilitating a conversation within the organization about their current and future Agile successes. If you’d like to learn more about Planview’s Agile ROI calculator or Planview’s Agile solutions, visit www.planview.com. z
< continued from page 24
ware delivery is to start with Scrum, according to Hunt, but it should only be used as a starting point. “Modern Agile thought goes much further than Scrum, into true continuous development and delivery, committing directly to main many times a day... the goal has always been to shorten the feedback loops, to be able to change direction quickly, to leverage the team’s learning,” Hunt continued. Presten compared learning to be Agile with learning to play an instrument. “As you start out, you read the sheet music. It helps make momentum happen for you and gives clarity, but if it stops there and all we do is mindlessly read the sheet music and go through the motions, then there’s a problem,” he said. A good way to look at it is to look at how much feedback you are getting and when you are getting it, said Thomas. “The only way to be Agile is to be constantly adapting to your environment. Sometimes that can be minute by minute, sometimes it’s day by day and sometimes it’s week by week, but if you’re not getting feedback as often as you can, then you are not doing Agile,” he said. Cockburn explained there have been three waves of Agile. The first was at the team scale, then Agile started to move to the organization scale, and now we are in the third wave which is at a global scale. The global scale
includes finance departments, HR departments, legal departments, entire supply chains, governments, social projects, distributed teams and even different geographies. “It’s not just teams. In fact, it’s not merely organizations. It’s not merely software. It’s not really products. It’s global adoption,” said Cockburn. Cockburn went on to explain that the reason Agile is being looked at on a global scale is because of VUCA: volatility, uncertainty, complexity, and ambiguity. He said the world is “VUCA” and that became even more evident with COVID, the lockdowns and the distributed ways of working for every person, team, industry, company and even country. Everyone needs to have the ability to move and change direction quickly with ease, he said. “This is the new and current world. It is happening.
Agile long ago stopped being only about software; it is now completely general. One can look at those values and principles and extrapolate them to any endeavor,” said Cockburn.
If they could go back...
If Thomas had a chance to go back in time and change anything about the manifesto, he said he would remove the 12 principles and just leave the four values, because the principles dilute the manifesto and give the idea that there is a certain way to do Agile. “I would make the manifesto just that one page and then possibly just because it may not be obvious to people, explain why it doesn’t tell you what to do,” he said. Peter Morlion, a programmer dedicated to helping companies and individuals improve the quality of their code, believes the 12 Agile principles are still relevant today. “That’s because they’re based on economic reality and human nature, two things that don’t really change that much. If anything, some principles have become more radical than they were intended to be. For example, we should deploy weekly or even daily and we can now automate more than we imagined in 2001. On the other hand, some principles have been given a different meaning than we imagined in 2001: individuals no longer need to be in the same room for effective communication for example,” he recently wrote in a blog post. Because of Agile, we have been able to adapt to those principles, and while we can’t be face to face in the wake of the pandemic, we can do video calls because of the software that was influenced by the idea of Agile, Agile.ai’s Presten explained. If Presten were present at the Snowbird meeting back in 2001, he said he would probably give a hat tip to what outcomes can be expected from Agile, so that those principles can be mapped back to those outcomes to help people understand the what and why of Agile.
“I am finding a lot more success and getting value from Agile by setting organizational goals like ‘hey, we want to get better at predictability,’ and then taking steps to get better,” he said.
Scrum.org’s West, who was not one of the original authors of the manifesto, believes one thing the manifesto was very quiet on was how you measure success and feedback to inspect it, adapt it and improve it. There are a number of new initiatives coming out to provide organizations with better outcomes, such as value stream management and BizOps. According to West, one thing these approaches and Agile all have in common is inspection and adaptation, and the idea of rapid feedback loops and observation. He thinks any of these approaches will help. If you are a software engineer, the Agile Manifesto may be better to look at. If you are on the business side of things, the BizOps Manifesto might be a better start, but ultimately he said to begin with the customer, the problem and the outcome you seek.
Technical Agile
While it is normal for ideas to get diluted over time, Robert Martin, one of the authors of the Agile Manifesto and author of “Clean Agile: Back to Basics,” believes that the meaning of Agile has become more than just diluted; it has lost its way. He explained that Agile was originally developed by programmers for programmers, but after a couple of years there was a shift to bring Agile to project management.
“The influx of project managers into the Agile movement changed its emphasis rather dramatically. Instead of Agile being a small idea about getting small teams to do relatively small projects, it turned into a way to manage projects in some bold new way that people could not articulate,” Martin said.
Martin explained that the original goal at the Snowbird meeting, where the Agile Manifesto originated, was to bridge a divide between business and technology, but the business side took over the Agile movement and disenfranchised the technical side.
He said at one point the Agile Alliance tried to throw a technical Agile conference in addition to its annual Agile conference, which reinforced the idea that Agile fell off course. It was held twice — in 2016 and 2017 — and then discontinued.
“What we see today now is Agile is very popular on the project management side and not very popular on the technical programming side,” said Martin. “There are remnants of technical Agile such as Test-Driven Development and refactoring, but that’s prevalent in the technical community and not the Agile community.”
“Does the Agile Manifesto help the project management side of things? Yes of course, because about half of Agile was about project management, but the other half — the technical side — that part fled. And so the project management side of Agile is now lacking the technical side and in that sense, it has not been a good evolution from the early days of the manifesto till today. It has been a separation, not a unification. I’m still waiting for that unification,” he added.
He explained without that unification, there will be an increasing number of software catastrophes. “We’ve already seen quite a few and they have become fairly significant. We’ve had the software in cars lose control of the cars, kill dozens of people and injure hundreds of people. There have been a number of interesting lawsuits paid out because of that; just a software glitch has done that. We’ve heard trading companies lose half a billion dollars in 45 minutes because of software glitches. We’ve seen airplanes fall out of the sky because of software that wasn’t working quite right, and this kind of failure of the software industry is going to continue.”
If the business side and technical side of software development cannot be united again, Martin predicts the government will eventually step in and do it for us.
“We cannot have programmers out there without some kind of technological disciplines that govern the way they work, and that’s what Agile was supposed to be. It was supposed to be this kind of governance umbrella over both project management and technology, and that split. Now many [technologists] are free to do what they want without any kind of discipline,” said Martin.
“My hope is that we could beat the government there and that we can get these two back together before the government acts and starts legislating and regulating because I don’t trust them to do it well,” Martin added. z
Looking back at the manifesto, co-author Hunt said if he had a chance he would add a preface to it that explains Agile is not Scrum. “Scrum is a lightweight project management framework. Agile is a set of ideals that a method should support. They are not the same, and you could argue that Scrum is not even all that Agile; it’s more like a mini-waterfall. Twenty years ago maybe we could wait weeks for feedback. Today, typically, we cannot,” he said. Thomas would also add something about respecting individuals over respecting the rules in order to reflect that it is not the organization’s job to tell individuals how to behave; it’s their job. In retrospect, he also would have liked to have had a more diverse group of people involved in the manifesto. Cockburn, though, noted that if anything inside that room 20 years ago had been different, if anyone else would have been added, the outcome would have been completely different and it probably would have been more difficult to come to an agreement. What Cockburn would change about the manifesto is
the wording of responding to change over following a plan. “The discussion we had was that the act of planning is useful. [When] the plan goes out of date, you have to respond to change. People, especially programmers, use it to mean I don’t have to make a plan. I don’t have to have a date. And that’s just flat incorrect. There’s no way to run a company if you don’t have dates, prices and budgets,” he said. Agile.ai’s Presten added: “I’m just so grateful for the founders, the folks in Snowbird and what they created. It really made the world a better place...It’s changing the world that we live in for the good, and then also the culture that it is creating at the companies we work at where decisions are getting decentralized. People are able to come in and grow and learn and fail fast to succeed, and having that safety net there has been a really cool thing, so I’m just super grateful for the founders and the work they did, kind of putting their neck on the line. I think we’ve all benefited from that.” z
Digital experience monitoring the key to supporting a distributed workforce BY JENNA SARGENT
While making sure applications are up and running is important, it may be even more important to perform monitoring that is from the perspective of your users. After all, who cares if your APM data shows an application to be up and running if the user is experiencing an issue that’s gone undetected? This is where digital experience monitoring, or user experience monitoring, comes into play. “APM focuses on just collecting data from the application. It doesn’t collect data from the users. It doesn’t collect data from the network. And data from that interconnected digital chain, that needs to come together to deliver a great digital experience to customers and employees,” said Nik Koutsoukos, chief marketing officer at Catchpoint, a digital experience monitoring platform provider. According to Koutsoukos, the goal of digital experience monitoring is to measure the “performance of applications and digital services from the vantage point of a digital user.” He believes that any company delivering a digital service needs to be able to answer two questions: 1) Do I understand what my users are experiencing? 2) Do I have control of all of the services involved in delivering those experiences to my users? In addition, companies need to be able to answer those questions quickly so they can resolve issues quickly. “Time is of the essence,” said Koutsoukos. “Consumers and employees and digital users nowadays don’t have the patience for poor service or an outage.
Just wait milliseconds and people are moving onto the next competitor and they’re trying to find solutions themselves. The user experience stakes have gone incredibly high. You have to be able to respond very quickly to a problem. In fact, I would say it’s not a question of reacting quickly to a problem. You have to be able to identify a problem really before it impacts the user experience of a customer or an employee because by the time they see it, it’s too late and they’re moved on to some other competitor or solution. They’re not going to wait for you, so this is where your ability to collect data and act on the data proactively is super important.”
The three components of digital experience monitoring
According to Koutsoukos, digital experience monitoring can be further broken down into three categories:
1. Real User Monitoring
2. Synthetic/Active Monitoring
3. Endpoint Monitoring
Real user monitoring is all about collecting input from the browser. Synthetic monitoring involves doing tests that allow you to determine what accessing a website or application would be like for an end user. For example, if you have an application that you want to deploy to China, but you don’t currently have users in China, you can simulate user transactions and test the performance before it goes live into production. This involves using bots that behave like users that will test things like: “Can I access the application, is it up and running? Is the page rendering properly? And how is it performing in terms of response time, latency, and jitter?”
If there is a problem that gets identified, then the question becomes finding out what that problem is, Koutsoukos explained. “If I establish that users can’t get to my website from China, the question is what is causing that outage? Is it the application itself? Is it my CDN provider, is it a DNS problem? Is a broadband or backbone ISP down? Is it a network issue? So the question then becomes: do you have the data from that digital chain that is interconnecting your application to your users so you have the data to point me to where the problem is.” This element of synthetic and active monitoring is also sometimes referred to as network monitoring, Koutsoukos explained. Finally, there is endpoint monitoring, which involves collecting data directly from a device. This is more common in the case of employees as end users, not customers, since companies don’t have a way of collecting data from their users’ devices, but may be able to monitor employee devices to gather metrics. After the data from these three components of digital experience monitoring is correlated and analyzed, it can then be used by IT teams to help troubleshoot problems.
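The core of a synthetic check is a scripted request fired on a schedule, with its outcome judged against an availability and latency target. Commercial platforms like Catchpoint run such probes from many geographies at once; a toy single-location sketch in Python, using only the standard library (the URL and the two-second SLO are placeholder values), looks like this:

```python
import time
import urllib.request
from urllib.error import URLError

def probe(url, timeout=5.0):
    """One synthetic check: is the page reachable, and how long did it take?"""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {"up": resp.status == 200,
                    "latency_s": time.monotonic() - start,
                    "bytes": len(body)}
    except URLError as exc:
        return {"up": False, "error": str(exc),
                "latency_s": time.monotonic() - start}

def evaluate(result, slo_seconds=2.0):
    """Classify a probe result against a simple availability/latency SLO."""
    if not result["up"]:
        return "outage"
    return "ok" if result["latency_s"] <= slo_seconds else "slow"
```

A scheduler would call `probe("https://example.com")` every minute and alert when `evaluate` returns anything but "ok"; a real platform would additionally attribute a failure to DNS, CDN or backbone hops rather than reporting a single opaque "outage".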
Core Web Vitals
The Core Web Vitals are also a crucial part of user experience monitoring. They were created as part of Google’s Web Vitals initiative, which aims to provide unified guidance on the metrics most important for delivering good user experiences. “Site owners should not have to be performance gurus in order to understand the quality of experience they are delivering to their users. The Web Vitals initiative aims to simplify the landscape, and help sites focus on the metrics that matter most, the Core Web Vitals,” the Web Vitals website states. The Core Web Vitals are a subset of Web Vitals and are focused on three aspects of user experience: loading, interactivity, and visual stability. The three metrics that correspond to those focus areas are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
The vitals can be measured through a number of tools, including the Chrome User Experience Report, PageSpeed Insights, and Search Console. Koutsoukos added: “Ultimately it’s meant to capture the quality of the experience that a user is having on the mobile device and on the desktop device.” In addition, Koutsoukos predicts that the Core Web Vitals will start to more heavily impact SEO. Google has already been using them when ranking websites in its search results, but Koutsoukos believes the Core Web Vitals will start to carry even more weight.
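Google publishes fixed thresholds for each vital: LCP is "good" at or under 2.5 seconds, FID at or under 100 ms, and CLS at or under 0.1, with a "needs improvement" band extending to 4 seconds, 300 ms, and 0.25 respectively. Tools like PageSpeed Insights apply this classification for you, but the rule itself is simple enough to sketch in Python; the page measurements below are hypothetical:

```python
# Published Core Web Vitals thresholds: (good, needs-improvement) upper bounds.
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "FID": (100, 300),   # First Input Delay, milliseconds
    "CLS": (0.1, 0.25),  # Cumulative Layout Shift, unitless
}

def rate(metric, value):
    """Return Google's rating bucket for one Core Web Vital measurement."""
    good, needs_improvement = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= needs_improvement:
        return "needs improvement"
    return "poor"

# Hypothetical field data for a page
page = {"LCP": 2.1, "FID": 180, "CLS": 0.31}
for metric, value in page.items():
    print(f"{metric}: {value} -> {rate(metric, value)}")
# LCP: 2.1 -> good
# FID: 180 -> needs improvement
# CLS: 0.31 -> poor
```

Note that Google rates a page by the 75th percentile of real-user measurements, so field data, not a single lab run, is what ultimately counts.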
Digital experience monitoring’s role in the distributed workforce
Koutsoukos has observed that digital experience monitoring has become more important in the past year than ever before because there are more digital end users than ever before. For example, there are people needing to order groceries online who have never done so, and millions of kids and teachers needing to conduct classrooms
using technology. “Think about credit card processing systems and services. All of a sudden you saw a huge, huge spike in demand for what they were doing. The whole delivery system for groceries, local or global or more larger scale, had to sort of increase capacity to deal with the increased demand,” said Koutsoukos. Even though states in the U.S. are starting to roll back restrictions to what they were pre-COVID-19 and the pace of vaccinations continues to rise, that doesn’t mean this digital demand is going to slow down any time soon. “[That digital demand is] going to continue being high,” said Koutsoukos. “In fact, in some cases it’s never going to go back to pre-COVID levels.” In addition, the mass influx of remote working exposed some of the weaknesses of the internet to handle demand shifting, he explained. Remote workers
continued on page 30 >
How does your company help its customers with digital experience monitoring?
Nik Koutsoukos, VP of product marketing, Catchpoint
In a digital economy enabled by cloud, SaaS, and IoT, applications and users are many and can be located anywhere. Catchpoint is the only Digital Experience Observability platform that can scale and support today’s customer and employee location diversity and application distribution. We enable enterprises to proactively detect, identify, and validate user and application reachability, availability, performance, and reliability, across an increasingly complex digital delivery chain. Industry leaders like Google, L'Oréal, Verizon, Oracle, LinkedIn, Honeywell, and Priceline trust Catchpoint’s out-of-the-box monitoring platform to proactively detect, repair, and optimize customer and employee experiences. Our platform consists of four key components that empower you to take your digital monitoring initiatives to the next level:
• Proactive, True Synthetic Monitoring: Leverages the largest public global network in the industry and the ability to collect active data from anywhere within the enterprise network and datacenter so you can provide a top-notch user experience.
• Real User Monitoring: Provides a complementary view of your users’ actual experience. Our RUM solution helps you swiftly resolve performance issues, optimize conversions, and make better and more profitable business decisions.
• Network Monitoring: Proactively detects and resolves issues throughout your entire network — from layer 3 to layer 7 — to lower MTTR and improve end users’ digital experiences.
• Endpoint Monitoring: Unleashes the power of your digital workplace so you can see exactly what your employees see on their screen. Isolate the cause of delays to the device, network, or application to quickly identify and fix user-impacting issues. z
SD Times
May 2021
www.sdtimes.com
have to rely on their home networks rather than a business connection, which can be a challenge for IT teams who used to monitor network traffic as part of their digital experience monitoring. “All of a sudden the question that came into play is: is IT in a position to deliver a great service to their employees now that they are not in an office with an internet connection and are relying on home connections? That has ramifications on how you monitor the digital experience of employees, are you in a position to troubleshoot problems when they arise, and do you have the ability to do that,” said Koutsoukos.

According to Koutsoukos, this is where endpoint monitoring comes into play. When an employee was in an office it wasn’t necessary to monitor endpoints because the end user was in reach of the IT team. “They’re remote and you just don’t have a clue of what experience they’re having on their PC. The ability to reach from an endpoint all the way to the employees has become very much needed,” said Koutsoukos.

Predicting user intent is the future of digital experience monitoring

Search company Algolia believes that digital experience monitoring will evolve to be able to predict a visitor’s intent. Understanding why a user is there and what they want to achieve would enable sites and applications to surface relevant search results, recommendations, offers, and in-app notifications. It could also provide site navigation that is completely customized to a particular user. “There has been a fundamental shift in how companies earn trust online, and no matter the industry, it’s driven by an increasing sense of consumer urgency. As we head toward a cookieless world where data privacy is much more stringent, organizations must cease reliance on external data sources, or their business will suffer,” said Bernadette Nixon, CEO of Algolia. “Immediately gathering, utilizing, and protecting first-party data is mission-critical for every brand. However, companies no longer have minutes to spare when delivering what a customer is looking for — they must show results instantly or suffer the consequences of their customers bouncing to competitors’ sites. That is a big part of Algolia’s larger vision.”
A guide to digital experience monitoring tools

FEATURED PROVIDER
• Catchpoint: Catchpoint is the enterprise-proven ally that empowers teams with the visibility and insight required to deliver on the digital experience demands of customers and employees. With its combined true synthetic, real user, network, and endpoint monitoring capabilities and the largest, most diverse global monitoring network in the industry, Catchpoint delivers in-depth, accurate, and full-stack performance insights. As a result, companies gain a competitive advantage through superior digital user experience. Find us at www.catchpoint.com.
• Algolia is a provider of tools that enable developers to add search capabilities to their apps. It offers analytics that developers can use to see what customers are searching for and interacting with. It offers metrics such as total searches, no-result rate, users, conversions, and average click position.

• Datadog offers real user monitoring to provide end-to-end visibility into the end user’s journey. It provides insights based on synthetic tests, back-end metrics, traces, logs, and network performance data. This allows Datadog to detect poor user experience and resolve issues more efficiently.

• AppDynamics is an APM provider that provides customers with information on user experience. Its Experience Journey Mapping feature tracks the application paths most common among users and evaluates performance, enabling customers to see how their users are interacting with their app. Companies can use AppDynamics to optimize customer journeys across devices and quickly identify any issues.

• AppNeta provides tools for customers to monitor network performance from the perspective of the end user.

• Dynatrace’s digital experience monitoring solutions enable companies to prevent problems before users notice them. In addition, by quickly receiving insights companies will be able to rapidly fix any issues that arise.

• Martello believes that with the increasing number of services being delivered as cloud-based software as a service, teams lack visibility into the end user experience. Its solutions provide tools for understanding the user experience of collaboration and productivity solutions, such as video conferencing tools.

• New Relic’s comprehensive SaaS-based New Relic Software Analytics Cloud provides a single powerful platform to get answers about application performance, customer experience, and business success for web, mobile and back-end applications. New Relic delivers code-level visibility for applications in production across six languages — Java, .NET, Ruby, Python, PHP and Node.js — and supports more than 70 frameworks.

• Plumbr: Plumbr is a modern monitoring solution designed to be used in microservice-ready environments. Using Plumbr, engineering teams can govern microservice application quality by using data from web application performance monitoring. Plumbr unifies the data from infrastructure, applications, and clients to expose the experience of a user.

• SmartBear: AlertSite’s global network of more than 340 monitoring nodes helps monitor availability and performance of applications and APIs, and find issues before they hit end consumers. The Web transaction recorder DejaClick helps record complex user transactions and turn them into monitors, without requiring any coding.
Guest View BY SCOTT SCHWAN
How compliance fits into DevOps Scott Schwan is CEO and co-founder at compliance prep startup Shujinko and former director of cloud engineering at Starbucks.
As security and privacy grow in importance, regulatory compliance is becoming an increasing priority for most businesses. But let’s just say it: compliance audits are not fun. That’s especially true when it comes to engineering and development teams, who are tasked with gathering all of the relevant data — in other words, evidence — needed to assess and demonstrate compliance with various regulatory frameworks. The more complex the environment, the harder the task. Making matters worse, evidence collection is a highly manual process that, depending on the size of the organization and the number of audits, can consume hundreds of hours annually, at least — time these teams could better spend doing their regular jobs.

So, compliance is simultaneously a critical need and a huge time suck. How do we square that circle? Use automation to make compliance fit into a DevOps approach. It is entirely possible to automate the collection of evidence, such as specifics on user access, encryption of data at rest, key management, network segmentation, firewalls, vulnerability scan reports and more, across public cloud environments and connected SaaS systems. This data is considered more complete and accurate from a compliance perspective, because it includes time stamps and other metadata to ensure capture and processing integrity.

Some teams and organizations already try to accomplish this on an ad-hoc basis through scripting, but that has problems when done at scale. First off, scripts are time-consuming to build, and there can be a lot of them required to have a meaningful impact on compliance audits. Second, those scripts need to be tested and, as systems evolve, modified to keep pace. Third, who is doing the next audit? If it’s not you, do they understand your scripts? Are you going to explain the process to them?

Various vendors and cloud providers have also tackled this challenge, some more successfully than others. Much of the reason goes back to that notion of complexity.
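To make the idea concrete, the kind of evidence automation described above can be sketched in a few lines of Python. This is purely illustrative: the control ID, field names and inventory payload are invented for the example, not any vendor's actual API.

```python
import hashlib
import json
from datetime import datetime, timezone

def collect_evidence(control_id, source, payload):
    """Wrap raw configuration data as a compliance evidence record.

    A UTC timestamp and a content hash are attached so an auditor can
    later verify when the data was captured and that it was not altered
    — the "capture and processing integrity" metadata described above.
    """
    body = json.dumps(payload, sort_keys=True)
    return {
        "control_id": control_id,   # hypothetical framework requirement ID
        "source": source,           # which system the data came from
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        "payload": payload,
    }

# Example: evidence that data at rest is encrypted (stubbed inventory data)
record = collect_evidence(
    control_id="PCI-3.4",
    source="object-storage-inventory",
    payload={"bucket": "billing-archive", "encryption": "AES-256"},
)
```

In a real pipeline the payload would come from a cloud provider's inventory API rather than a literal, and records would be written to a central store, but the shape of the problem is the same.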
A single cloud platform and regulatory framework is manageable — but when you mix in multiple clouds or hybrid infrastructure, add in different SaaS tools and systems, tackle more than one compliance audit annually, etc. — all of a sudden, the scope and scale of evidence collection can become massive. Only the broadest of vendor approaches will get the job done.

Done properly, however, this type of commercial automation solution can dramatically lessen the burden (and improve the quality) of evidence collection. It also creates the equivalent of a “system of record” to centralize, organize and codify all compliance data for future audits.

From a DevOps perspective, another intriguing long-term possibility is tackling compliance with a broader, community-based approach. Consider: adherence to industry standards for data handling, privacy and protection is not a competitive issue. We are all better off if everyone shares a basic commitment to complying. While vendors can broadly address compliance automation for cloud and SaaS platforms, there are still a vast number of potential data sources, and many may be either too limited in scope to justify commercial connectors, or may reflect custom development. A community mindset might encourage people and organizations to tackle these issues through the use of standardized, open APIs. Engineers could thus share custom collectors, cloud and SaaS vendors could support those APIs to facilitate capture and extraction of relevant data, and compliance and tool vendors could design their collection engines, evidence libraries and usage policies such that customization and sharing is possible. Similarly, automation tools must not only pull data from standard systems, but also support pushing of data from custom systems or legacy infrastructure.

In my view, this bottoms-up approach is the best way to stop wasting DevOps teams’ time on audit prep while still allowing them enough autonomy to do what works best for their specific environment.
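A community-standard collector interface of the kind imagined here might look like the following sketch. To be clear, no such open standard exists today; the class and method names are hypothetical, and the point is only the shape: shared pull-style collectors plus a push path for systems that cannot be polled.

```python
from abc import ABC, abstractmethod

class EvidenceCollector(ABC):
    """Interface a community of shared collectors could implement."""

    @abstractmethod
    def collect(self) -> list:
        """Pull evidence records from the underlying system."""

class FirewallRuleCollector(EvidenceCollector):
    """Pull-style collector over a stubbed firewall inventory."""
    def collect(self):
        return [{"control": "network-segmentation", "rule": "deny-all-default"}]

class EvidenceStore:
    """The 'system of record': pulls from standard collectors and
    accepts pushed records from custom or legacy infrastructure."""
    def __init__(self):
        self.records = []

    def ingest(self, collector):
        self.records.extend(collector.collect())   # pull path

    def push(self, record):
        self.records.append(record)                # push path

store = EvidenceStore()
store.ingest(FirewallRuleCollector())
store.push({"control": "user-access", "source": "legacy-mainframe"})
```

Because every collector exposes the same `collect()` contract, a collector written for one team's custom system could be dropped into another team's audit pipeline unchanged — which is the sharing the column argues for.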
Ideally — and again, in keeping with a DevOps approach — it should be possible to move beyond episodic audits to assess and monitor security, privacy and compliance drift on a continuous basis.

Ultimately, compliance audits are never going to be fun. But tackling compliance with a DevOps automation mindset will allow organizations to improve security and privacy, while making the process a lot less painful.
Guest View BY DARREN BROEMMER
Use hackathons to validate your product

You think you have a great product. Your product manager thinks you have a great product. Your developers think they have created a great product. The question is — how do you prove this before you send it out to your alpha and beta testers for real-world feedback?

To answer that question, we recommend a multistage hackathon approach to ensure product-market fit and usability. Multistage hackathons can start earlier than the “final product” stage, so you get more useful feedback, and a series of hackathons makes it easier to verify that you are solving the customer problem you intended to solve. What you think you accomplished in the lab isn't always the case in the real world. Use hackathons to inject a bit of the “real world” into the development process.

You want to have at least three hackathons, for three main reasons:

1) You won’t catch everyone in a given day.
2) You won’t catch everything in a given day.
3) You need time to iterate and incorporate feedback.
Individual preparation Hackathon #1 needs to focus on the use-case level. For example, you want someone to test a car by driving to a specific location. During hackathon #1, you give them GPS and detailed instructions. For Hackathon #2, the task is the same, but instead of GPS and instructions, you give them a road atlas and some verbal directions. Hackathon #2 is more of a guided, end-to-end test. Hackathon #3 is a true, open-ended usability test. Hand them the car keys and tell them to get to the destination. The goal of hackathon #3 is to determine whether, without any specific guidance, the user can easily achieve the objective using the product. This allows them to spend more time exploring and comprehensively stress-testing the application.
Tasks for all hackathons

The hackathon management team needs real-time visibility into what people are doing — either by recording the sessions or, once in-person hackathons return, via “feet on the ground.” The managers should anticipate and prepare for questions related to the hackathon tasks but should
also “hold back” guidance to make sure they don’t interfere with the process they are trying to test.

For all hackathons, prepare a way to measure results. Results come in two flavors: supervised and unsupervised metrics. Unsupervised metrics include basic system metrics, such as request latency, error rates, etc. Supervised metrics include data collected from the participants as well as more qualitative feedback, such as time to complete each step, individual videos of use-case execution, comments, complaints and exit interviews.

Hackathon #1 – The first hackathon should be small. Consider hackathon #1 to be your initial product focus group. The task should provide a “sample” of what the participants should expect to accomplish at the end. Can they get there? Is the product easy to use? Difficult? Was a user able to achieve what the UX manager set out to do?

Hackathon #2 – The second hackathon needs to consist of a large crowd, the bigger the better. Again, make it simple by asking them to accomplish a specific task, but one more complex than the first. One goal of the second hackathon is to test performance.

Hackathon #3 – Outcomes are tested during hackathon #3. Instead of assigning a single task, the hackathon manager needs to provide a series of objectives, without going into detail about what the end products should look like. The results then need to be examined to make sure the teams could accomplish the individual objectives.
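The two flavors of metrics can be made concrete with a short sketch. The request log and step timings below are invented sample data for illustration, not output from any real monitoring tool.

```python
from statistics import median

# Unsupervised metrics: raw system telemetry captured during the hackathon
requests = [
    {"latency_ms": 120, "ok": True},
    {"latency_ms": 450, "ok": True},
    {"latency_ms": 90,  "ok": False},
]
error_rate = sum(1 for r in requests if not r["ok"]) / len(requests)
median_latency = median(r["latency_ms"] for r in requests)

# Supervised metrics: per-participant observations recorded by the team,
# here seconds each participant needed to complete each use-case step
step_times = {"signup": [65, 80, 300], "first_task": [210, 190, 480]}

# Which step slowed users down the most, by median completion time?
slowest_step = max(step_times, key=lambda s: median(step_times[s]))
```

Unsupervised numbers fall out of the system for free; the supervised ones only exist because someone watched the sessions and wrote them down — which is why the management team needs that real-time visibility.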
Darren Broemmer is Developer Evangelist at DevGraph.
Post-hackathon analyses

While the hackathon easily allowed for supervised metrics, the real metrics come after the hackathon is over. How useful is the product over the long term? While some software is completely unique, with no other options on the market, most applications have alternatives. Once the hackathon is over, the product development team needs to track usage. Did the participants continue to use the product once the hackathon was over? Is it delivering results for them? Or did they use it for the hackathon and never log in again?
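Tracking whether participants keep logging in afterward amounts to a simple retention calculation. A minimal sketch over invented login data follows; the 14-day window is an arbitrary choice, not a standard.

```python
from datetime import date

hackathon_end = date(2021, 5, 1)

# Invented example data: each participant's last login after the event
last_login = {
    "alice": date(2021, 5, 20),
    "bob":   date(2021, 5, 2),   # logged in once right after, then churned
    "cara":  date(2021, 5, 28),
}

def retained(last_seen, end, window_days=14):
    """Count a participant as retained if they were still active more
    than `window_days` after the hackathon ended."""
    return (last_seen - end).days > window_days

retention_rate = sum(
    retained(d, hackathon_end) for d in last_login.values()
) / len(last_login)
```

In practice the `last_login` map would come from the product's own usage logs; the point is that the question "did they ever come back?" reduces to a threshold on days since the event.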
Analyst View BY PETER HYDE
Succeeding as a remote Agile software Peter Hyde is a senior research director at Gartner, Inc. and an enterprise agile coach.
Agile software development teams thrive on collaboration and dynamic interaction, but in 2020, the sudden shift to remote work created concern among software engineering leaders that development velocity would suffer. As many organizations look to transition to a hybrid remote work culture, development leaders are wondering if it will be possible for their teams to maintain effectiveness when working outside of the office long-term.

Agile teams are inherently self-organizing and adaptive to change, but application technical professionals must maintain a strong team culture of close collaboration, feedback loops and dynamic interaction to stay effective in a remote environment. To maintain a successful and efficient remote work team, software development leaders can champion six best practices:
1. Review the situation. First, review your remote team situation. Because we have lost the benefits of colocation, where constant interaction, easy pairing and water cooler conversations aid teamwork, we need to address collaboration in other ways. Set the tone in a remote environment by arranging a video conference with your team to outline how you communicate and collaborate when working remotely, evolve your team culture to solve remote challenges and adapt the way you work. Hold another video conference with your product owner to align the team on the product, vision and strategy. These video conferences help empower a team by agreeing to new ways of working and reinforcing purpose.

Every problem is a people problem — or at least, it has a people solution. Evaluate the degree to which your team possesses the essential skills for working together in a remote environment, which should include complex problem-solving abilities, critical thinking skills, creativity, flexibility and strong judgement.

2. Engage as a team and focus on culture. Remote working is a skill that requires time and effort to develop. Video conferencing is a great way to engage with your team, but how many times have you been in a video conference with your camera off, your microphone muted, checking your email or even making a cup of tea? Reinforce simple rules for video conferencing etiquette, including:

• Be present. If you do not feel the meeting has value for you, decline the invite. If you do attend, be attentive and leave your camera on.

• Be human. Don’t be concerned that your children, significant other or pets will invade your picture. Welcome this, as it shows that you’re human and face the same challenges as everyone else. Stay on mute if you’re worried about interruptions.

• Be part of the team. If it’s a team call, don’t mute it. Team members want to hear feedback. Keep team lunches or after-work drinks on the schedule to maintain team culture — and leave your camera and microphone on, eat on the call and invite your family around to say hello.

Culture is frequently viewed as a barrier to effective collaboration, and this becomes more challenging when working remotely. Here are a few ways to improve your remote work culture:

• Facilitate a short team workshop to evaluate your company’s values and align work to those values.

• Act in a manner you would like to see. Culture is what you say and what you do.

• Agree on values and a team charter to guide conduct and provide behavioral nudges.

• Demonstrate personal cultural leadership by committing to following these guiding values every day.

3. Maintain momentum. As development teams, we must continue to deliver value while working remotely, and this may require some process tinkering. Make adjustments at every phase of the software development life cycle to be inclusive, build trust and ensure that everyone is heard. We sometimes forget that the reason we do
the work is to solve a problem for our end users. Working remotely adds another barrier between product teams and the people they support. To address this, we must refocus on helping the people who use our products to solve their problems. Get closer to your customers, understand the work they wish to accomplish and help them to achieve it.

4. Foster openness and transparency. We must build trust in our remote teams based on mutual understanding and respect. Encourage openness with weekly remote lunch events and virtual coffee breaks. Discuss everyday life, build empathy, form connections, and be clear on your intentions and reasoning. Fostering transparency builds trust, which enables team members to take risks, admit mistakes, rely on each other and improve together.

Be understanding and empathetic when working with your team, but don’t value politeness over progress. Challenge behaviors that conflict with your remote working agreement and highlight potential issues early. Communicate openly, using reply-all on team emails and raising questions in your collaboration tool so everyone can contribute.

While remote, we must also continue to validate our work with real customers. Fast feedback is essential to enable agile teams to make rapid decisions and focus on the right features. Without in-person user testing, we must rely on technology solutions. Video calls, surveys and usability testing are all ways to receive quick feedback. Everyone on the remote team should be involved with user testing to create a shared understanding and a better product experience.

5. Leverage technology. Effective remote teamwork requires close collaboration over multiple open channels with individuals skillfully moving between technology tools. Developing good communication and collaboration habits is a great start, but remote development teams must create a shared virtual space to succeed.
Match collaboration tools to desired behaviors to create a common toolset, form a sense of community and maintain trust through team connection. Identify tools that can support the way your team works while prioritizing face-to-face interactions. Technology is rarely the answer, but it does provide the right platform to enable conversations.

Shifting to cloud-hosted development environments can also increase the team’s agility and resilience through flexible, shared and always-available environments. Fully cloud-hosted development environments offer code, build, test and debug capabilities. Teams that have already moved to a cloud-hosted development environment are realizing their value in a remote workplace.

6. Evolve your remote team practices. The agile process is built on the three pillars of the empirical process: transparency, inspection and adaptation. We must use these to continually evolve our working practices to improve the outcomes we produce for our customers.

Disruptive change is stressful. Keep communication lines open, schedule one-on-ones, check in on people — but most of all, be kind to yourself and others. Your process modernization must be matched by a change to the way you organize your work. Your customers are unlikely to care about your process or product — they are more concerned about resolving their challenges and getting their jobs done. Your product is more likely to succeed if it aligns with their values and provides the best way to achieve their goals.

These six best practices of the remote team framework can help you reassess how to help remote employees remain effective. This framework has proven successful in supporting remote product development teams and improving how they operate.

Gartner analysts will further discuss application innovation and software engineering strategies at the Gartner Application Innovation & Business Solutions Summit 2021, taking place virtually May 26-27 in the Americas.