SD Times November 2020


NOVEMBER 2020 • VOL. 2, ISSUE 41 • $9.95 • www.sdtimes.com



Instantly Search Terabytes

dtSearch’s document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data

Over 25 search options, including:
• efficient multithreaded search
• easy multicolor hit highlighting
• forensics options like credit card search

Developers:
• SDKs for Windows, Linux, macOS
• Cross-platform APIs for C++, Java and .NET with .NET Standard / .NET Core
• FAQs on faceted search, granular data classification, Azure, AWS and more

Visit dtSearch.com for:
• hundreds of reviews and case studies
• fully functional enterprise and developer evaluations

The Smart Choice for Text Retrieval® since 1991
dtSearch.com 1-800-IT-FINDS

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlwekowicz@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz, George Tillmann
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com
LIST SERVICES Jessica Carroll jcarroll@d2emerge.com
REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com

D2 EMERGE LLC www.d2emerge.com
PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein



Contents

VOLUME 2, ISSUE 41 • NOVEMBER 2020

NEWS
4  News Watch
14  CloudBees delivers on delivery vision
14  HCL Accelerate gets new governance and reporting features

FEATURES
6  Digital experience monitoring more necessary than ever
10  Half of managing is selling
26  It’s not all about deployment with software release management

DEVOPS SHOWCASE
17  The Future of DevOps
19  Improve DevOps with Octopus Deploy
20  Debug Anything, Anytime, Anywhere
23  Value Stream Matters to DevOps, Business
24  DevOps Showcase

COLUMNS
30  ANALYST VIEW by Rob Enderle: Opportunity to develop virtual worlds
31  GUEST VIEW by Stephen Magill: Using static analysis to secure open source

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.




NEWS WATCH

HackerRank benchmarks dev candidates
HackerRank announced at its HackerRank.main() virtual event a new pillar to be added to its developer hiring solution. Rank, the company’s fourth pillar, was designed to benchmark developer candidates and give hiring managers more confidence when bringing on new talent. “The purpose of this pillar is to help you build up the confidence in your hiring process and the offer you are making,” said Vivek Ravisankar, CEO and cofounder of HackerRank. “For every skill that you evaluated a candidate on, we try to benchmark the candidate across all other developers who have attended a similar skill assessment.”

Coalition for App Fairness formed
The independent nonprofit Coalition for App Fairness was formed to promote competition and to protect innovation on digital platforms by providing a roadmap of acceptable practices for operators of the most popular platforms. “As enforcers, regulators, and legislators around the world investigate Apple for its anti-competitive behavior, The Coalition for App Fairness will be the voice of app and game developers in the effort to protect consumer choice and create a level playing field for all,” said Horacio Gutierrez, head of global affairs and chief legal officer at Spotify, and a member of the coalition. The coalition’s goal is to make sure that apps can compete fairly, as app developers have been increasingly raising concerns about the terms and conditions that govern the Apple App Store as well as last-minute iOS updates that have disadvantaged developers, according to the coalition.

Microsoft’s 10 app store principles on Windows 10
Microsoft released a set of 10 principles to promote fairness and innovation on Windows 10. The release follows the announcement of the Coalition for App Fairness (CAF), which was formed late last month to counter Apple App Store practices. The 10 Microsoft app store principles are:
1. Developers should have the freedom to choose how to distribute their applications
2. Applications should not be blocked from the app store based on their business model or how they deliver content and services
3. Applications should not be blocked based on their choice of payment system
4. Developers should have timely access to information about interoperability interfaces
5. Every developer should have access to the app store as long as they meet objective standards and requirements
6. App store fees should be reasonable and reflect the competition
7. Developers should not be prevented from communicating directly with users through their applications for legitimate business purposes
8. Microsoft will hold its own apps to the same standards as competing apps
9. Microsoft won’t use any non-public information or data from its app store to compete with other developers
10. The app store will be transparent about rules, policies and opportunities

Pixie Labs observability platform emerges
Kubernetes-native observability platform provider Pixie Labs has emerged from stealth with $9.15 million in funding. The Series A funding round was led by Benchmark, with participation from GV. According to the team, Pixie helps reduce the complexity and cost of observing and troubleshooting application performance. Developers can gain visibility into their application without needing to change code, manually set up ad hoc dashboards, or compromise on how much data they can observe. Pixie Labs was founded by Zain Asgar (CEO), an adjunct professor of computer science at Stanford University, and Ishan Mukherjee (CPO), who led Apple’s Siri Knowledge Graph team and was an early Amazon Robotics engineer.

Kong Konnect for cloud-native workflows
Kong has announced a beta for its new platform Kong Konnect, which provides users with access to a suite of tools for service connectivity for APIs and microservices. Users can use Kong Konnect to simplify complex workflows across API gateway, Kubernetes Ingress, and service mesh runtimes. It uses a modular approach, and users can access capabilities as modules rather than needing to purchase the entire platform. They can select and consume the features they need through the Kong Konnect interface, the company explained.

Microsoft to license OpenAI’s GPT-3 language model
Microsoft has revealed it is teaming up with OpenAI to exclusively license GPT-3. According to the company, this will further Microsoft’s goals to develop and deliver advanced AI solutions for customers, as well as create new solutions that harness the power of advanced natural language generation. GPT-3 is an autoregressive language model with 175 billion parameters that outputs human-like text, and it was trained on Azure’s AI supercomputer.

DigitalOcean launches App Platform
With the release of the DigitalOcean App Platform, the company wants to make writing code a lot easier for developers by automatically deploying and running their code at scale. The App Platform leverages the power, scale, and flexibility of Kubernetes without exposing developers to its complexity, DigitalOcean explained. Additionally, the platform is built on open standards that provide more visibility into the underlying infrastructure than in a typical PaaS (Platform-as-a-Service) environment, according to the company.

Android Studio 4.1 addresses productivity
The latest release of Android Studio includes new features that address common editing, debugging, and optimization use cases. A major theme for the 4.1 release is to help developers become more productive while using Android Jetpack libraries to help developers follow best practices and write code faster, the team explained. Some other highlights included a new Database Inspector for querying an app’s database, support for navigating projects that use Dagger or Hilt for dependency injection, and better support for on-device machine learning with support for TensorFlow Lite models in Android projects.

Dart 2.10: A unified dev tool
The latest release of the programming language Dart features a unified developer tool designed to tackle all needs such as creating projects, analyzing and formatting code, running tests, and compiling apps. The new ‘dart’ developer tool is very similar to the ‘flutter’ tool, according to the company. Flutter also includes this new Dart tool in the Flutter 1.22 SDK. “If you do both Flutter and general-purpose Dart development, you get both developer experiences from a single Flutter SDK, without needing to install anything else,” Michael Thomsen, product manager working on Dart, wrote in a blog post.

Lightstep’s OpenTelemetry Launchers
Distributed tracing company Lightstep has announced the release of OpenTelemetry Launchers, a new solution for understanding complex systems. The release is based on the open-source project OpenTelemetry, which provides APIs, libraries, agents and collector services to capture distributed traces and metrics. The Launcher now connects that data with Lightstep to provide observability and actionable insights, the company explained. Distributed tracing provides insight into the life cycle of requests to a system so developers can easily find failures and performance issues.

GitHub adds code scanning
GitHub has announced that its code scanning feature is now available. The new code scanning capability scans code as it is created and provides reviews within pull requests and other GitHub experiences. Automating security in this way helps ensure that vulnerabilities never make it to production, the company explained. Code scanning integrates with GitHub Actions and is powered by the code analysis engine CodeQL. Developers can use the more than 2,000 CodeQL queries that have been created by GitHub and the community, or create custom queries to find and prevent security issues. This new feature is also built on the open SARIF standard and is extensible, meaning open source and commercial security testing tools can be added to it.

JetBrains launches Code With Me
Code With Me is a service in IntelliJ IDEA that allows developers to share open projects in the IDE with distributed team members. Using Code With Me, team members will be able to quickly access others’ code to help investigate issues, review, and work on code together. Other IntelliJ IDEA features like code completion, smart navigation, refactoring, the built-in terminal, and the debugging suite are all available while using Code With Me. Example use cases for Code With Me include pair programming, mentoring, and swarm programming, which is when developers simultaneously code together in a single IDE, JetBrains explained.

People on the move

• Viktor Farcic has taken on the role of principal DevOps architect at Codefresh. Previously, Farcic served as product manager and principal software delivery strategist and developer advocate at CloudBees. Additionally, he has published several books on DevOps tooling and test-driven Java development, and hosts the DevOps Paradox podcast with Darin Pope. Other notable new hires at Codefresh include Vidhya Vijayakumar as head of customers, Sasha Shapirov as vice president of R&D, and Ran Zaksh as vice president of product.

• Nobl9 has announced SRE influencer Alex Hidalgo will spearhead its internal site reliability engineering efforts as well as contribute to the development of its software reliability platform. Hidalgo recently published the book “Implementing Service Level Objectives” and contributed to Google’s “The Site Reliability Workbook.”

• Krishna Tammana has been appointed chief technology officer at data integration and integrity company Talend. Most recently, Tammana was the vice president of engineering at Splunk. At Talend, Tammana will be responsible for scaling the product and engineering organizations to help drive the company’s innovation and market growth.

• Hybrid cloud data warehouse company Yellowbrick Data is bringing on Mark Cusack as its chief technology officer. Cusack will help shape the company’s roadmap, guide product development, improve data warehouses and data analytics systems, and enhance the company’s ecosystem of partnerships. Prior to joining Yellowbrick, Cusack was vice president of data and analytics at Teradata.


Digital experience monitoring: more necessary than ever in an increasingly digital world
BY JENNA SARGENT

For many companies, digital transformation has been happening slowly over the years. But this year, the COVID-19 pandemic has forced companies to transform faster than before, or risk getting left behind. Brick-and-mortar stores needed to create digital storefronts, restaurants needed to invest more heavily into amping up their online ordering operations to support delivery and takeout requests, and schools needed to learn to adapt to remote learning. And with everyone stuck at home, digital services were more in demand than ever before. Online banking, streaming subscriptions, and e-commerce are just a few of the industries that saw increased usage since the start of the coronavirus outbreak. Even after the pandemic is over, the impacts will be permanent. Research from McKinsey indicates that 75% of people who are using digital channels for the first time will continue to use them even when things return to normal. It’s clear that in order for these businesses to survive, they need to make user experience a top priority. With so many competitors and alternatives for consumers to pick from, providing a poor digital experience can result in lost business. There are a number of ways that businesses can monitor those experiences and make changes based on what they’re seeing.

It’s important to make the distinction between APM, observability, and digital experience monitoring. While all of these are important to monitor, they all have different goals. The main difference between APM and observability is that APM is reactive, while observability is proactive, Wes Cooper, product marketing manager at enterprise software provider Micro Focus, explained in an Observability Buyer’s Guide on ITOps Times. With observability, you are looking into the unknown and using automation to fix problems, whereas with monitoring, you are identifying known problems, Cooper explained. Digital experience monitoring, on the other hand, is a form of monitoring that focuses on the user experience. With digital experience monitoring, development teams are looking to determine whether they are providing their users with a good experience. This allows them to identify issues in an application that may be impacting experience, which results in happier users.

Both observability and APM are closely tied to the systems in an organization, while digital experience monitoring is more about the applications themselves. It might seem that APM and digital experience monitoring would be done by separate groups, with system admins being responsible for APM, and product teams monitoring user experience. But Daniel Cooper, managing director at automation and digital transformation company Lolly Co, believes that in order to be successful, application monitoring needs to be done on top of system monitoring. “Essentially, you need to first establish your observability pillars before implementing your application monitoring strategy. The result of doing so is a monitoring ecosystem that will help you concentrate on customer experience issues and not the health of the application,” he said.

Tal Weiss, CTO and co-founder of code analysis tool OverOps, added that monitoring in general has been shifting from the responsibility of the operations teams to that of the development team. “Observability, infrastructure monitoring and APMs have traditionally been key tools in the hands of Ops teams when it comes to assuring the performance and uptime of key business applications,” said Weiss. “This was well-suited to a world in which an application’s quality of service was primarily determined by its infrastructure, and the performance of underlying system components such as DBs, web servers and more. But as code has begun eating the world of software, and infrastructure itself has become an API, the focus shifts towards the quality and reliability of the application’s code itself.”

The three levels of digital experience monitoring

According to Zack Hendlin, vice president of product at OneSignal, a provider of push notification services, there are three levels of user monitoring.

The first level is measuring core systems like databases, CPU load, memory load, memory used, disk space, queue lengths, API uptime, or error codes returned. These metrics help to cover the basics of knowing whether or not your application’s basic infrastructure is working. This can be measured using a tool like Grafana or something similar, Hendlin explained. The second level is monitoring what users experience. According to Hendlin, this includes things like page or app load times, drop-off at points in an app, and crash logs. The third level — and most interesting, according to Hendlin — is measuring what your users are doing. According to Hendlin, this measurement can help answer these questions: “Are they skipping your onboarding flow? Inviting friends to your app? Posting content? How long do they spend in your app? When do they purchase?” Hendlin continued: “At OneSignal we’ve focused on building the set of analytics that makes it easy to keep track of sessions, session duration, clicks and more — out of the box. And then we built the ability to track custom outcomes — which can be any action a user takes in your app. We see our users tracking when their customers redeem coupons, share positive / negative feedback with them, purchase (and the amount), follow a new musical artist, and more.”
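To make the third level concrete, here is a minimal sketch of what tracking a custom outcome might look like in application code. It is a generic illustration rather than OneSignal's actual SDK; the track_outcome helper, the event names and the endpoint URL are all hypothetical.

```python
import json
import time
import urllib.request

ANALYTICS_ENDPOINT = "https://analytics.example.com/outcomes"  # hypothetical collection endpoint


def track_outcome(user_id, name, value=None, **properties):
    """Send a custom outcome event (e.g. coupon redeemed, purchase made) to an analytics service."""
    event = {
        "user_id": user_id,
        "outcome": name,           # any action a user takes in your app
        "value": value,            # optional amount, e.g. purchase total
        "timestamp": time.time(),
        "properties": properties,  # extra context such as coupon code or item count
    }
    request = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=2)  # fire and forget; real code would batch and retry


# Examples of the kinds of outcomes Hendlin describes:
track_outcome("user-123", "coupon_redeemed", coupon="FALL20")
track_outcome("user-123", "purchase", value=49.99, items=3)
```

The first level (infrastructure metrics) and second level (load times, crashes) are typically collected by existing monitoring agents; this kind of explicit event tracking is what distinguishes the third level.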

How to gather feedback

There are a number of ways that a team can tackle monitoring those user experiences. One is by gathering user feedback through the users themselves. According to Hendlin, information can be gathered from user reviews, support questions, user research sessions, or conversations with customers. “With hundreds of questions a day, we keep a pulse on what people are asking for or where we could make parts of our product easier to understand. We aggregate these support conversations and share common themes to help the product team prioritize,” said Hendlin. “Really understanding users comes from talking to them, observing how they interact with the product, analyzing where they were trying to do something but had a hard time, and seeing where they need to consult documentation or ask our support team,” said Hendlin. “There was a Supreme Court justice, Louis Brandeis, who said ‘There is no such thing as great writing, only great rewriting’ and working on building a product and improving it is kind of the same way. As you get user feedback and learn more, you try to ‘re-write’ or update parts of the product to make them better.” There are also tools that can be used to measure the technical components of digital experiences, such as latency or error rates. “Indicators like response time and error rates help to assess how easily (or not) customers are able to navigate an app,” said Eric Carrell, DevOps engineer at API company RapidAPI. “Application performance can also be measured by tracking app traffic and figuring out where demand spikes or hits a bottom low.”



According to Carrell, tools used for monitoring take advantage of protocols like TCP and WMI to gather information. They also use SNMP polling data to monitor things like usage patterns, session details, and latency. “As a DevOps engineer I can easily diagnose issues with data on all transaction parts using tools that help me visualize end-to-end transactions,” said Carrell. In addition to measuring things like latency or error rates, product teams can use tools that are designed specifically to monitor how users are actually interacting with software. According to Michael Fisher, product manager at OpsRamp, Pendo, Heap, and Mixpanel are examples of tools that do this. “These tools generally give insight around product adoption, usage and critical paths users take in the application,” said Fisher.

A new breed of monitoring tools is arising

According to Fisher, many APM tools are actually starting to incorporate features that allow for monitoring digital experiences. Usually this comes in the form of synthetics and helps provide a baseline of the paths that users take to complete a business transaction. “A new breed of tools that leverage machine learning and elements of AI is emerging in order to provide a deep and dynamic understanding of application code as it’s executing, not just the infrastructure on which it runs,” said Weiss. Examples of such types of tools include feature flags, AIOps platforms, and dynamic code analysis. These tools help developers understand when, where, and why business logic breaks, and then connect that context to the developer who initially wrote the code, Weiss explained. “It’s by creating this 3D view of infrastructure, system components and application code that an innovative, continuously reliable customer experience can be fully delivered,” said Weiss.

The RUF framework creates a holistic customer experience report

Tim Jones, founder & CEO of app screenshot generator LaunchMatic, shared a story from when he served as growth product manager at Keepsafe Software. He and his team were struggling to create a holistic customer experience report that could be presented to the executive team. “We were receiving tens of thousands of support emails, app reviews, app ratings, and non-support user inquiries every month and the fragmentation between all of them led to no actionable data to influence our product decisions,” said Jones. Jones found success in using the RUF framework to sort and organize monitoring data to create actionable insights. With RUF (Reliability, Usability and Functionality), data points are sorted into three categories: reliability, usability, and functionality. Reliability would include data points like app uptime, bugs, and app performance. Usability would include data such as user experience issues, complexity of the app, and users who can’t find features. Functionality would include data points like what customers are asking for or missing features. According to Jones, each of these high-level categories can also be filtered into more specific subcategories. For example, sub-categories for reliability could be broken down further into crashes, screen stuttering, and long load times. “What this enabled us to do was consolidate tens of thousands of data-points into one report that could visualize problem areas, while also letting Product Managers dig deeper to read each individual ticket/review. This fundamentally changed the way Keepsafe prioritized feedback, giving each of these users a voice in what we planned in future sprints,” said Jones.

Challenges to digital experience monitoring

There are, of course, a number of challenges when it comes to monitoring digital experiences. One is that it can be difficult to define success. Fisher explained: “An application that is simply running, without users adopting, may be construed as successful. Conversely, an application with user adoption but poor performance may be seen as successful as well.” Fisher believes that an application can be considered successful when users are adopting it, it’s solving a real problem, and it is functioning as it should. Hendlin agreed that deciding on the right metrics can be a big challenge. For example, measuring the time a user spends in an application, and wanting it to be high, makes sense for a game or social media app, but for a bill-paying app where users want to get work done quickly, you would want the average session duration to be as low as possible.

According to Hendlin, thinking through key touch points will help you define what metrics to track and instrument. For example, if you’re in e-commerce, you’d likely want to look at outcomes like coupon redemptions, add to cart actions, purchases, time browsing on the site, or subscribing to a newsletter. A social media app would likely have key metrics like posts viewed, content posted, videos watched, comments made, ads revenue generated, and new friends made. Another challenge, according to Daniel Cooper, is getting stakeholders involved and on board. “Sure, you might have an idea of how to set up a comprehensive monitoring strategy, but it takes time, work and willingness to abandon the old way of doing things,” he said. z



Half of Managing Is Selling

Five tasks project managers must perform to ‘sell’ their proposals to executives and to their teams
BY GEORGE TILLMANN

George Tillmann is a retired programmer, analyst, management consultant, CIO, and author.

Did you ever stay up late watching infomercials on TV? Remember the salesman selling that stainless-steel turnip slicer-yogurt steamer, “And if you act now….” He must have been talking more than 200 words a minute. Three A.M., a crummy set behind him, a questionable item that might fall apart faster than its overnight delivery, and it most likely made him a fortune. Why? Because, corny as it sounds, he was probably a good salesman. Now imagine your best systems programmer in the same job. You might have the one programmer who would

do well, but many coders would have difficulty selling ice in the desert. What’s worse, they would probably be miserable doing it. The fact is most IT people have neither great selling skills nor the inclination to acquire them. That’s not the bad news. Here is the bad news. All managers—business, IT, and project—if they are to be successful—are salespeople. If you are a project manager, then one of your jobs is to sell your project. The initial proposal meeting—it’s a selling situation. The kickoff meeting—it’s a selling situation. The progress review—it’s a selling situation. So, what’s a reserved project manag-



Who Are the Project Stakeholders? There is an old tale about the pet supply company that introduced a new premium dog food. Despite its upscale image, sales were miserable, so the company hired a topnotch marketing consultant to help them. The company executives explained to the consultant that they conducted an extensive advertising campaign, ran nationwide store promotions, and even went as far as to gain celebrity endorsements, but the dog food still did not sell. The consultant then asked a single question, “But do the dogs like it?” The stunned executives looked at one another for anyone who had an answer. None did. There are many versions of this story, all of questionable veracity. Yet, within this tall tale is one of the most important lessons any manager can learn—”Who do I have to please?”

er to do? The successful salesperson needs to know a lot about his/her client, which, it turns out, is the first of five tasks the project manager/salesperson needs to perform.

ONE: UNDERSTAND THE CLIENT. To successfully sell the project, managers need to know their client and what that client is likely to buy. Identify the Clients and Stakeholders. For the average project, IT’s clients are certainly business user management and IT management, although clients, sometimes called stakeholders, can also include others such as employees (for a payroll system), the government (tax systems), and external customers (customer service). Not all stakeholders have a seat at the management table (for example, customer service reps), but they may

have proxies (such as an employee union) whose interests need to be represented and understood. Understand Client and Stakeholder Goals. Why is the user willing to spend good money on this project? Does the project need to be completed before a certain date (Christmas selling period or next tax year, for example)? There are issues that might not be publicly known but that the project manager needs to know. Know What Clients and Stakeholders Expect From Project Management. What do user and IT management expect from the project and the project manager? You would think this would be obvious, but you will be surprised to learn that user management, even IT management, can harbor diverse expectations. Even though they all want a functional system, on time, and on budget, they might not all share the same priorities. User management might be more concerned with cost and least concerned with schedules, while IT management, mindful of its project backlog, is more concerned with schedules. Understanding clients’ priorities can be complicated and tricky, but it is essential for a successful project.

is not always obvious ahead of time. Sometimes a small incident or misunderstood progress is enough to land a project manager in hot water. Pre-selling is having one or more informal one-on-one meetings with critical executives to discuss the meeting’s topics before the formal review session. The purpose is threefold. First, the project manager wants to inform senior management of the issues to be discussed, ensure that they are understood, and correct any misconceptions before the formal meeting. Second, the pre-selling meeting can be used to gauge senior management’s reaction to the meeting issues. This gives the project manager a heads-up on whether executives are likely to be amiable, displeased, or even hostile to issues that are scheduled to be raised at the meeting. The project manager can then prepare a presentation targeted to the expected response. A third advantage of pre-selling is the opportunity for the project manager to correct or mitigate issues upsetting to user management before the review meeting. Sometimes a quick fix can change a career-limiting situation into advancement.

TWO: PRE-SELL ALL MAJOR IDEAS.

THREE: BE PREPARED.

In a fair world, the project manager would be showered with accolades for successes and flogged for failures. Unfortunately, sometimes the clients get it backward. The complexity of the project plan, arcane technology, and bizarre terminology can lead even the most fairminded business executive to the wrong conclusion. Whether a project manager is to be praised or eviscerated at a review

It is amazing how many project managers go into senior review meetings unprepared. Formal Presentations. The formal project review is the primary venue for selling the project. Many project managers go into a meeting thinking that the presentation is the slides and that they only provide background and commencontinued on page 12 >



tary. They have it backward. The primary means of communication is the presenter speaking. The slides only provide background information and underscore some of what is said. The successful presentation is not defined by a series of charts and graphs, but rather by the story the presenter tells. The story includes what has been completed, what remains to be done, and any issues or implications going forward. It should be an informative sales pitch— not fluff, not feathers—but hard facts that are relevant to the audience. The best stories include a message that everyone present takes away with them. Informal Meetings. Every project manager should meet informally with every important stakeholder. For some stakeholders, one or two meetings during the entire project are sufficient. Others might want the project manager to meet more frequently. Each stakeholder has his or her own interests and concerns and might even be disinterested in other project issues to the point of rudeness (talk bits and bauds to the CFO, and the meeting might turn hostile). Many project managers have less success with informal stakeholder meetings than with formal ones. The reason: lack of structure. Most formal meetings follow a formula: reserve a conference room, provide coffee and donuts, present a few PowerPoint slides, ask for questions, muddle through some answers, take the remaining donuts back for the support staff. Informal meetings can be a minefield of misunderstood protocols, subtexts, and missed opportunities. However, following a few simple rules can help you to avoid being thrown out of executive row. Prepare to take the lead. A business unit president once complained that the project manager for an important project showed up at her office and apparently thought they were going to chat. The project manager was told not to come back until he had something specific and relevant to talk about. The solution is to never show up empty handed. One project manager always prepared three PowerPoint

pages of project issues that he kept in his briefcase. If the stakeholder had some project issues she wanted to discuss, then the pages never left the briefcase. If the stakeholder had no project issues on her mind, then the project manager brought out the three pages. Follow the top three/bottom three rule. Projects are often large, and the interests of stakeholders can be arcane. It is easy for a project manager to get stuck with little to say on an important topic. You cannot prepare for everything, you cannot know everything, but you can cheat—well not cheat but rather improve your odds of not looking like an idiot. Formal meetings tend to focus on facts: accomplishments to date, budget status, issues going forward. Informal meetings tend to be more question and

three examples of strict budget management as well as the (bottom three) cases of (real or potential) budget overrun. Question 3. Will it work? Will the system do what it was promised to do— features and quality? The project manager should be able to show or discuss three examples of functional success (top three) but also three cases (bottom three) of (real or potential) functional concerns. Why should the project manager have examples of project failures in her briefcase ready to share with stakeholders? First, the client might already know. It is a common management technique to ask questions of a subordinate when the superior already knows the answers to test the honesty of the subordinate—a powerful barometer of credibility and trust.

Every project manager should meet informally with every important stakeholder.

concern focused, with the client or stakeholder asking questions, sometimes at the prodding of the project manager. Questions might be about why the project manager is comfortable with progress or whether he or she needs more resources. Many executives want to ensure that their staff, the business experts associated the project, are providing what the team needs. Being prepared for the un-preparable is possible if the project manager limits the subject to the top three/bottom three facts about the three questions management wants to know most about a project. Question 1. Is the project on schedule? Will the project end when is it supposed to end? The project manager should know or have in her briefcase the top three things that need to happen for the project to finish on time and the bottom three (most probable) reasons it might not. Question 2. Is the project on budget? Will the project cost what it was projected to cost? The manager should have information on the top

Second, it is better to get the bad news out in the open in a one-on-one meeting, where emotions can flair with minimal consequence, rather than in a more public venue. Both stakeholder and project manager have a more private setting to work out differences and resolve problems. Why limit examples to the top three and bottom three? Would not the top five or bottom ten be better? The truth is the average project manager’s mind can only hold so much. Knowing three facts shows that the project manager has some mastery of the subject. Knowing six would add little to the meeting while doubling the project manager’s preparation work. And let’s face it; absorbing more than three of anything is beyond the span of attention of many senior executives.

FOUR: USE THE PROJECT CHAMPION OR YOUR MENTOR. A project champion is a senior executive who feels some ownership of a project. The champion might hold an official position with name and job



description in the project plan or charter or could be serving based on an informal arrangement solidified behind closed doors. The champion often has the power to influence, if not modify, budgets and project plans and commit organizational resources. The project champion can function both as a project-friendly customer for the project manager and as an excellent salesperson for the project. Every project manager should have one or more mentors (official or unofficial) who can help him or her navigate

happen sometime in the future. In systems development, users have an expectation of what the application they are paying for will do when installed. Of the three project planning variables (cost, time, and functionality), the one that most commonly involves expectation problems is functionality or features. The user believes that the system will do X, but instead it does Y. When systems development expectations get out of whack, it is usually not the person with the strange expectations who suffers the consequences, but IT.

senior executive waters. While senior executive mentors might be uninformed on the latest systems development techniques, they are probably pros on selling ideas to business managers and corporate executives. Use both the champion and the mentor to ensure your message to management is targeted (at what you want to achieve by the meeting), concise, cordial, and frank (the truth). Hone your message by running your presentation past them in a safe and supportive environment.

Expectations commonly go awry for one of two reasons: The user is unsure or unaware of the details. Confusion often exists about what the system will actually do (its functionality) when complete. Expectations stray over time and can grow between their first inception and their reality. Like the guy who buys a Ford Focus, but by delivery time expects a BMW, there are senior executives who want to limit costs and development time during the project funding cycle but, forget their frugality by project end. They are amazed to find that features, discarded as too costly during planning, are missing from the final system. What’s so insidious about expectation disease is that users are absolutely positive they are right. They firmly believe they told IT exactly what they

FIVE: MANAGE EXPECTATIONS. This fifth task—manage expectations— is the most important of the five and the one that comes the closest to encapsulating the other four. An expectation is an anticipation or mental image of something that will


wanted, and IT, owing to its tradition of underserving or its devious nature, has purposely ignored their requests and sabotaged the system. If you have not experienced this response, then you might find it hard to believe, but it is a real problem. Ask around. You will find someone in your organization who has experienced this horror show first hand. There is a solution to this problem that is also the poster child for this article—manage expectations. Managing expectations means providing near continual, honest, and unvarnished feedback to the user. Three simple rules apply. First, be clear about what the system will do. (1) During project planning, create a small summary document (one page would be ideal) specifying what the system WILL and, more important, what it WILL NOT do. Make sure the user signs off on this document. (2) Keep this document in your briefcase and bring it to ALL meetings with a user, but only use it if you have to. Second, focus on the progress IT has made in building the system. (1) Make sure that progress reviews are user friendly and geared to the three issues that concern users (cost, time, and functionality). (2) Be honest—you’re probably not a good enough liar to snow senior executives. Third, state exactly what IT and users need to do to successfully complete the project. (1) Lay out exactly what you will be doing between now and the next user meeting and any issues you think may arise. (2) If you need anything from the user (staff, cooperation, funds, etc.), this is the time to ask for it. The takeaways for being a great project manager are simple: 1. Recognize that all managers— business, IT, and project—if they are to be successful, are salespeople. 2. The project manager needs to know who his or her clients are and what they expect from the project team. 3. The key to selling success is constant, honest, and informative communication with the user and managing their expectations. Without constant feedback, expectations can go awry. z


DEVOPS WATCH

CloudBees delivers on delivery vision
BY CHRISTINA CARDOZA

CloudBees announced at DevOps World 2020 that the first two modules for its Software Delivery Management (SDM) vision are now generally available. The modules are designed for feature management and engineering productivity use cases. According to the company, the software delivery management vision aims to solve delivery challenges while continuously delivering software efficiently across teams, tools and technologies. The release of the two new modules is the initial step in the company’s goal of solving feature management problems and giving teams control over the features. The first module extends CloudBees feature flags technology to enable teams to manage features as well as group and control sets of flags. Additionally, features can be decoupled from deployment schedules and teams can release new features with reduced risk and on schedule. The second module aims to provide engineering managers and leaders with more visibility into the development process. With this module, the company explained, users will be able to better understand what teams are working on and the right priorities necessary to move quickly and deliver value on time. The second module leverages the software delivery management technology and connects tools and data into the system of record.

“In our product research efforts around Software Delivery Management, we found that our customers face the same challenges we do – namely, how do we measure and continually improve engineering efficiency, and deliver product value faster and of higher quality,” said Susan Lally, senior vice president of product development at CloudBees. “That validated that both feature management and engineering productivity were widespread pain points in the industry, so we prioritized bringing these two Software Delivery Management modules to market first.”

CloudBees also announced new DevSecOps capabilities for its CI/CD solutions. The new features aim to help users bring security checks earlier into the life cycle and more often. New continuous integration and delivery capabilities include the integration of feature flags, improved role-based access control, and enhanced disaster recovery capabilities. According to the company, unaligned tools and processes, as well as a lack of integrated tooling and systems, cause security to be brought in too late in the software delivery process. The new feature flag integration enables features to be pushed to production in a quick and automated process. Features can be pulled back immediately if any issues arise. New role-based access control includes fine-grained permissions set at the team, user and file level. The updated feature also includes the ability to manage non-security related configuration. Disaster recovery capabilities extend Velero to CloudBees CI. Other features include audit-ready pipelines for full traceability and audit reports; a hardened version of its CI solution to meet strict government specifications for security; and integration with leading security automation providers.
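The feature-flag approach described above decouples when code ships from when a feature is exposed, and lets a team pull a feature back instantly if problems appear. Here is a minimal sketch of that general idea in Python; it is not CloudBees’ API, and the flag store, flag names and rollout logic are hypothetical.

```python
import hashlib

# A hypothetical in-memory flag store; a real system would fetch this from a flag service.
FLAGS = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 10},
}


def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the feature should be shown to this user."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False  # flipping "enabled" to False is the instant kill switch
    # Hash the user ID so each user gets a stable yes/no decision during a gradual rollout.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]


def render_checkout(user_id: str) -> str:
    if is_enabled("new-checkout-flow", user_id):
        return "new checkout page"   # code is already deployed, but only exposed to a slice of users
    return "old checkout page"       # everyone else keeps the existing behavior


print(render_checkout("user-123"))
```

Because the decision is made at runtime, the deployment schedule and the release of the feature become independent, which is the point the article’s description of the first module makes.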

HCL Accelerate gets new governance and reporting features
BY CHRISTINA CARDOZA

HCL Technologies has announced the latest release of its value stream management platform. HCL Accelerate 2.1 features automated governance with data-driven intelligence, visible and predictable work insights, and time to value improvements. The new automated rule-based gates leverage security and quality data integrations so no versions can go through without meeting specific criteria. “This feature is really to put the checks and balances in place to make sure that your team has full autonomy to release to the customer as fast as possible, while also making sure no one is lying awake saying ‘did I look at the right build when I said there were 0 Blockers?’,” Bryant Schuck, product manager for HCL Software DevOps, wrote in a post.

Version 2.1 also features an open pipeline for better visibility from build to production. The pipeline connects with deploy tools such as HCL Launch and Azure DevOps, and the company says it has a 70% faster load time. To make work more visible and predictable, the 2.1 release includes new value stream metrics, a new state of sprint report, the ability to run a security audit, and the ability to “favorite” value streams for quick check-ins and comparisons. Time to value improvements include the availability of HCL Accelerate on HCL Software Factory, a catalog of Kubernetes-enabled products, and performance and stability improvements. Other features of the release include release orchestration improvements, a new plugin for Jenkins Server, new HCL Accelerate plugins like Black Duck and HCL Compass, more data for existing plugins, and a new pipeline designer role.
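The rule-based gate idea is straightforward: a version may only advance when it satisfies explicit security and quality criteria. Here is a small sketch of such a gate in Python; it is not HCL Accelerate’s implementation, and the metrics and thresholds are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class VersionMetrics:
    """Quality and security data pulled from integrated scanners and test tools."""
    blocker_defects: int
    critical_vulnerabilities: int
    unit_test_pass_rate: float  # 0.0 - 1.0


def gate_passes(metrics: VersionMetrics):
    """Return (may_promote, reasons_blocked) for a candidate version."""
    failures = []
    if metrics.blocker_defects > 0:
        failures.append(f"{metrics.blocker_defects} blocker defect(s) open")
    if metrics.critical_vulnerabilities > 0:
        failures.append(f"{metrics.critical_vulnerabilities} critical vulnerability(ies) unresolved")
    if metrics.unit_test_pass_rate < 0.95:
        failures.append(f"test pass rate {metrics.unit_test_pass_rate:.0%} below 95% threshold")
    return (len(failures) == 0, failures)


ok, reasons = gate_passes(VersionMetrics(blocker_defects=0,
                                         critical_vulnerabilities=2,
                                         unit_test_pass_rate=0.97))
print("promote" if ok else f"blocked: {reasons}")
```

The value of encoding the rules this way, as the article suggests, is that the team keeps its release autonomy while the gate answers the “did I look at the right build?” question automatically.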



Virtual VSM DevCon
Next year’s date: March 10, 2021

Join your peers for a day of learning. Virtual VSM DevCon is a one-day, digital conference examining the benefits of creating and managing value streams in your development organization. At Virtual VSM DevCon, you will learn how to apply value stream strategies to your development process to gain efficiencies, improve quality and cut costs. Taught by leaders on the front lines of Value Stream.

Highlights from last year’s sessions:
• An examination of the VSM market
• What exactly is value?
• Slow down to speed up: Bring your whole team along on the VSM journey
• Why developers reject Value Stream Management — and what to do about it
• You can measure anything with VSM. That’s not the point
• Who controls the flow of work?
• Tying DevOps value streams to business success
• Making VSM actionable
• Value Stream Mapping 101
• How to integrate high-quality software delivery into the Value Stream
• Transitioning from project to product-aligned Value Streams
• The 3 Keys to Value Stream infrastructure automation

REGISTER FOR FREE TODAY! https://events.sdtimes.com/valuestreamdevcon



The future of DevOps

The idea of DevOps was born from a need for organizations to deliver software more quickly, to remain competitive in a world gone digital. It called for developers and operations engineers to work together so developers could have the infrastructure resources they needed to deploy multiple times per day. But according to a report published last month by analysis firm Gartner, organizations need to reach the next level in their Agile and DevOps practices to solidify their ability to continuously deliver value to their customers. And, in the ‘Predicts 2021’ report, the next level, the future, is value streams.

Daniel Betts, one of the authors of the report, told SD Times, “You’ve got traditional DevOps, which is bringing the likes of operations and development to be collaborating and working on delivering value, but you’ve also got Agile, which is the business and the development teams working closely together. We look at DevOps as that complete piece, as the business being enabled by IT to deliver value to the customer.”

Value streams give organizations visibility into their processes, so that when limitations to the flow of work occur, Betts said, “you’re using Agile practices, collaboration, technology and tools to help you. So it’s very much about delivering business value, where we are thinking very much around the value stream being this way of measuring and mapping out all of the different tasks that you have in delivery, or in the actual business planning, or things like return on investment.”

But, he noted, DevOps is not to be thought of as a standalone development effort. Disciplines such as infrastructure, security and compliance are also involved. Value stream management brings all of these together to enable businesses to quickly deliver new products and features that meet customer needs. IT is the business.

This showcase turns the spotlight on several software providers that help organizations through the various aspects of DevOps. Have a look.



Break down DevOps silos by centralizing on a single platform

Release management

Deployment automation

Operations runbooks

Automating your builds and deployments is great, but it’s not the end of automation. Operations teams need to automate all kinds of routine and emergency operations tasks to keep your software running. Most CI/CD tools end once the software is built or deployed, and ops teams are left to use different tools. Effort is duplicated, multiple systems have access to production, and there’s no source of truth.

If we’re going to embrace DevOps and break down silos, the ideal solution should put all DevOps automation tasks in a single place. That’s what we’re doing with Octopus Deploy. Octopus Deploy is the first platform to bring deployment automation side by side with IT/runbook automation, bringing your operations team together in a single place.

octopus.com
Creating happy deployments at more than 25,000 companies.

Improve DevOps with Octopus Deploy

Many organizations have adopted DevOps, but their results vary because some teams have been unable to optimize production-related processes and collaboration. For example, in some companies, developers build applications and deploy them into production. At other businesses, operations professionals have had to learn how to code. Even when teams have a healthy balance of development and operations skills, each function is using its own set of tools. Using Octopus Deploy, DevOps teams can ensure better application reliability in production, facilitate common understanding and deploy code consistently across disparate target environments on premises and in the cloud.

“It used to be if you were an ops person, you’d click through wizards and you knew some scripting. If your company’s on-premises infrastructure failed, you knew what would happen,” said Paul Stovell, founder and CEO of Octopus Deploy. “Now if you work in ops you’ve got to know Python, Ruby and YAML because at least part of your infrastructure is in a public or private cloud.”

DevOps requires developers and operations professionals to work closer together than they ever have before, which is difficult to achieve when development uses a CI server as its source of truth and operations uses production monitoring tools. “Our view is that we need to bring those things together,” said Stovell. “We think DevOps won’t really become a reality at companies until they share the same tooling and content.” Octopus Deploy provides a single place to manage releases, automate deployments and automate the runbooks so their applications can stay up and running anywhere.

Bridge the Gap in the DevOps Toolchain

The DevOps toolchain is an end-to-end concept but not an end-to-end reality. Developers use IDEs, unit testing frameworks and release automation tools to compile code, unit test it and deploy it, but then the code breaks in production. Meanwhile, operations uses PowerShell, Puppet or Ansible. Because each role uses different tools, the classic “it runs on my machine” problem arises, which frustrates developers and operations, delays release cycles and creates more work for everyone. “There isn’t a lot of knowledge sharing between dev and ops because there’s a tool chasm that’s preventing it. I think that’s the reason a lot of companies struggle with DevOps and fail to meet their goals,” said Stovell. “They think the way around it is teaching ops people how to code. Then, the ops people start thinking like developers and forget about why the software fails.”

Octopus Deploy is the missing piece between developers’ tools and operations’ tools. It integrates with the popular tools both groups use while providing a common mechanism to improve production-related outcomes. “Rather than starting in the build space and seeing deployment as the last step of the build process, we’re seeing it as the first step towards your application being in production, the lifecycle of your application in production, and all the automation that takes place when it’s in production,” said Stovell.

Once an application reaches production, many things need to be automated such as backing up the production environment database and restoring it to the test environment, taking an application offline for maintenance and bringing it online again, and ensuring failover in a disaster recovery scenario — none of which are part of a CI process or enabled by a CI tool. When DevOps teams use wiki pages as runbooks and those runbooks are used to create PowerShell scripts, there’s no good place to store those assets because they’re neither a deployment nor a CI process, they are simply an automation task that needs to be run. A more effective approach is to store all production-related artifacts in the environment where the deployments take place, which is a public or private cloud.

Enable Better Outcomes

With Octopus Deploy, DevOps teams can achieve higher levels of communication and collaboration using the same source of truth while ensuring that the applications they deploy run more predictably in production. In fact, large companies and businesses in regulated industries trust Octopus Deploy to help them meet auditing and compliance requirements. Octopus Deploy is also capable of automating complicated software deployments and ensuring consistent deployment automation across all target environments. It includes other advanced deployment capabilities and patterns as well as more than 300 deployment steps out of the box. Finally, Octopus Deploy’s runbook automation capabilities help teams automate recovery processes and execute them against infrastructure so applications run with minimal disruption. Learn more at www.octopus.com.
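As a generic illustration of runbooks as executable automation rather than wiki pages, here is a minimal sketch in Python. It is not Octopus Deploy’s runbook format; the step names and shell commands are hypothetical.

```python
import subprocess
from typing import Callable, List, Tuple

# A runbook is just an ordered list of named, executable steps.
RunbookStep = Tuple[str, Callable[[], None]]


def run(cmd: str) -> None:
    """Run a shell command and fail loudly, so a broken step stops the runbook."""
    print(f"==> {cmd}")
    subprocess.run(cmd, shell=True, check=True)


MAINTENANCE_RUNBOOK: List[RunbookStep] = [
    ("Take app offline", lambda: run("kubectl scale deploy/shop --replicas=0")),  # hypothetical commands
    ("Back up database", lambda: run("pg_dump shop_prod > /backups/shop.sql")),
    ("Apply migrations", lambda: run("alembic upgrade head")),
    ("Bring app online", lambda: run("kubectl scale deploy/shop --replicas=3")),
]


def execute(runbook: List[RunbookStep]) -> None:
    for name, step in runbook:
        print(f"--- {name} ---")
        step()


if __name__ == "__main__":
    execute(MAINTENANCE_RUNBOOK)
```

Keeping routine and emergency tasks in a versioned, runnable form like this (and storing it alongside deployment automation) is the gap the article says most CI-centric tooling leaves for operations teams to fill on their own.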


020_SDT041.qxp_Layout 1 10/20/20 4:18 PM Page 20

20

Debug Anything, Anytime, Anywhere

The best way to deliver great app experiences consistently is to have visibility and control over what's happening in production. In today's real-time world, organizations can't afford gaps between software development and IT ops. While a DevOps process helps bridge the gap, developers need to understand what's happening in production and be able to intervene in real time. With Rookout, developers can interact with app code in real time from wherever they are, regardless of whether the app is a legacy, desktop, web, mobile or IoT app. "Development teams are being held increasingly responsible for how their software performs in real world production environments. They need to understand what's happening and be able to take appropriate action instantly," said Liran Haimovitch, co-founder and CEO of Rookout.

Today's IT ops professionals need real-time observability and application performance monitoring (APM) capabilities. Meanwhile, developers want to know whether the app is functioning properly and driving intended user behaviors. However, neither IT ops nor security want developers meddling with production systems. Simulators and emulators provide safe production-like environments, but they don't provide complete visibility into actual code in production, nor do they allow a developer to interact with the code. "Developers need more granular data than ops people," said Haimovitch. "The problem is that ops tools weren't designed for developers, so they don't show developers the data they need." With Rookout, developers get actual production-level insight that's safeguarded by the organization's governance and security policies.

"Developers are constantly asking questions such as who's calling that function? What arguments are being used to call that function?" said Haimovitch. "They could dig into the logs, but if the logs aren't there, then data collection code needs to be added manually." If the redeployment cycle of the code takes a few hours, a week or a quarter to deliver the data the developer needs, the developer has already moved on to many other issues and their contexts. Problem-solving momentum is lost because the developer needs to "come up to speed" again. "Keep in mind that for every developer this is happening constantly throughout the day and organizations have thousands of developers, so the impact is enormous," said Haimovitch. "Everything should be as data-driven as possible to save time and achieve higher quality. At the same time, you want instant feedback you can act upon right away." With Rookout, developers can achieve five to 10 iterations in just a couple of minutes while the situation and its context are still top of mind. What's more, the software changes do not introduce security, compliance or other risks because the risk mitigation policies have already been defined by security and IT ops so they can be enforced by Rookout.
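As an illustration of the underlying idea only (this is a toy sketch, not Rookout's SDK or implementation), a "non-breaking breakpoint" can be thought of as a hook that copies a function's local variables as the code runs, without pausing execution or requiring a redeploy. The function and variable names below are hypothetical; real tools do this with low-overhead instrumentation and ship the snapshots to a secured backend.

import sys
from collections import deque

snapshots = deque(maxlen=100)  # bounded buffer so collection stays cheap

def watch_function(func_name):
    """Snapshot local variables whenever func_name returns, without stopping it."""
    def local_tracer(frame, event, arg):
        if event == "return" and frame.f_code.co_name == func_name:
            snapshots.append({"function": func_name,
                              "locals": dict(frame.f_locals),
                              "return_value": arg})
        return local_tracer

    def global_tracer(frame, event, arg):
        # Only trace frames belonging to the function we care about.
        return local_tracer if frame.f_code.co_name == func_name else None

    sys.settrace(global_tracer)

def price_with_discount(price, rate):
    discounted = price * (1 - rate)
    return round(discounted, 2)

if __name__ == "__main__":
    watch_function("price_with_discount")
    price_with_discount(100.0, 0.15)
    sys.settrace(None)
    print(snapshots)  # the captured locals answer "what was this function called with?"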

Access Code Throughout the SDLC

Rookout can be used from the earliest stages of the SDLC to produce higher quality code. For example, if an app needs to be updated and the task is being planned, Rookout empowers developers to go beyond reading the static code and actually observe it in action to understand how it behaves.

"Developers need more granular data than ops people. The problem is that ops tools weren't designed for developers, so they don't show developers the data they need." —Liran Haimovitch

Meanwhile, enterprises are trying to enforce software engineering standards, which is difficult to do when different types of apps require different tooling. Rookout works across all applications and environments. "Software infrastructure is becoming more production-first, whether it's serverless, Kubernetes or cloud environments, but developers can't use their traditional tools, so you need cloud and production monitoring tools for your development environments, but they don't provide debugging capabilities," said Haimovitch. "Rookout gives you almost the same debugging experience you would get from within your IDE." Rookout also works in test environments. Without Rookout, deploying to a pre-production environment may take a few hours to a few days because, despite the availability of APM and exception management tools, test environments are not as stable as production environments. Rookout goes deeper than traditional APM and exception tracking, enabling developers to see and interact with what's happening live. Wherever you are in the SDLC, whatever the app, Rookout enables faster iterations, whether that means fixing a bug or rolling out a new feature. And if the app is running on a remote physical device or a virtual machine at a customer's site, developers can connect to it and collect data securely without asking the customer to upgrade just so the developers can access the data. Rookout also provides every developer on the team with the data they need to troubleshoot, so senior resources can spend more time writing important features and making infrastructure changes. Learn more at rookout.com.



Empower your developers to debug less, code more. Rookout reduces debugging time by 80% and allows developers to instantly review production, staging, and dev environments with non-breaking breakpoints.

BOOK YOUR DEMO TODAY

rookout.com



Be the catalyst with the Digital.ai Value Stream Platform

Transforming your organization, disrupting your industry, and delighting customers with digital products they love and trust is not easy. Automating processes and increasing velocity can help, but it's not going to turn your organization into a high-functioning, digital-first company capable of continuous innovation.

Agile Planning | DevOps | Application Security | Continuous Testing | AI-Powered Analytics

Fortunately, Digital.ai is here. Our intelligent Value Stream Platform helps you plan, build, test, secure, and deliver software at scale. And it’s all backed by AI-driven insights that align development efforts with measurable business goals, like increasing user satisfaction, acquisition, retention, and revenue. Be the catalyst for change in your organization. Learn more at https://digital.ai



Value Stream Matters to DevOps, Business

Traditional DevOps initiatives have enabled businesses to become more Agile, automate more of their processes, get better at making smaller changes faster, and get releases out quicker... but that's not enough to compete in today's modern development world. According to Mike O'Rourke, chief research and development officer at Digital.ai, there is a missing link between DevOps and the business. "There is no link between the way the development organization is defining success and the way the business is defining success," he explained.

Success tends to differ depending on who's measuring it. For instance, success for the development team might be how fast they respond to changes, get a release out or how many bugs they can detect and fix before a release goes out. For the business, success might be getting a five-star app rating or improving revenue or customer satisfaction. Both matter, but in most organizations, there is no connection between the two. What business and development teams really need is a way to align their definitions of success and understand whether or not what the development team is doing is driving the outcomes the business is looking for. This is where the notion of value streams is becoming extremely important to extend the scope of DevOps and achieve business success. "As you are developing something, wouldn't it be great to know what the business expects to get out of it?" O'Rourke said. "In most companies there is a big gap between development and business outcomes. At Digital.ai, our goal is to dramatically reduce that gap to the point that when the business has an outcome they are looking for, everyone in the development organization understands what it is they are looking for, why they are looking for it, and how they can contribute to driving that outcome."

The first step in getting there is creating a value stream map, according to O'Rourke, where business requirements are mapped to development artifacts and everyone knows at every juncture what they are trying to achieve. "The goal is to be able to flow information throughout the development life cycle," said O'Rourke. "Historically, DevOps has been good at looking at things like burndown charts, how many releases you did, where your build failed, etc. But actually visualizing the value flow throughout the development life cycle is something DevOps is missing. At Digital.ai, we bake this capability into Agile planning so that information flows end-to-end regardless of what tools you are using."
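The mapping O'Rourke describes can be pictured as a simple data structure that ties a business outcome and its target metric to the development artifacts meant to deliver it. This is only an illustrative sketch; the fields and example values are hypothetical and are not Digital.ai's data model.

from dataclasses import dataclass, field

@dataclass
class ValueStreamItem:
    business_outcome: str                          # how the business defines success
    target_metric: str                             # the measurable result being tracked
    epics: list = field(default_factory=list)      # planning artifacts
    releases: list = field(default_factory=list)   # delivery artifacts

item = ValueStreamItem(
    business_outcome="Reduce cart abandonment",
    target_metric="checkout completion rate up 5%",
)
item.epics.append("EPIC-214: one-click checkout")
item.releases.append("release 2020.11.2")

# Every juncture in the life cycle can now be traced back to the outcome it serves.
print(item)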

At the same time, the pipeline needs to get smarter. It has to understand all the changes that occurred, where they were made, and what impact they had. When done properly, this information provides an audit trail or software chain of custody that spans the organization. The next step is to take DevOps beyond delivery. Traditionally, the DevOps process has stopped when an item is delivered. Value stream takes it further and enables teams to gather information on the back end about how customers are using an application, which features they are using, why, how it is performing under load, how secure it is, and more.

"As you are developing something, wouldn't it be great to know what the business expects to get out of it?" —Mike O'Rourke

This rich set of insights flows back into the development lifecycle and enables development teams to implement improvements based on real-world results. "For example, not only can developers do a better job at testing and securing an app, they can do it in the context of what the business is looking for. What's more, because development knows why they were asked to do this work, they can present the business with information about what is really happening with their app in ways that interest them," said O'Rourke. Digital.ai's value stream delivery solutions help DevOps teams expand their reach and deliver business value to their organizations by building context management right into their platform so information flows end-to-end and everyone has visibility into the software development process. The company's value stream management solutions leverage an AI-powered analytics engine to flow that information back into the higher-level business. The company also integrates with popular application performance monitoring, AIOps and ITSM tools so it can gather information regardless of the environment the app is running in. When an app gets deployed, it can collect everything from the operating system to the memory size and bring all that information about what is happening inside the application back to the developer and business. "We believe everyone is going to want to go to value stream management over time, but most organizations are still struggling just trying to get information back to their own teams. We get it. And we're here to help," said O'Rourke. Learn more at www.digital.ai.



Featured Companies

n Digital.ai: Digital.ai's DevOps intelligence provides the metrics and insight that

enterprises need to deliver software more efficiently, with less risk and with better results. DevOps intelligence helps you better understand your software delivery process while measuring and proving the ROI of your digital transformation initiatives. Analyze the complete DevOps value stream from ideation and planning, through building and testing, to deployment to production. Get clear context for your measurements to understand the meaning behind the numbers, so you can best direct efforts to continuously improve. n Octopus: Octopus Deploy is the first platform to enable developers, release

managers, and operations engineers to bring all automation into a single place. By reusing configuration variables, environment definition, API keys, connection strings, permissions, service principals, and automation logic, teams work together from a single platform. Silos break down, collaboration begins, and your team can ship – and operate – software with greater confidence. n Rookout: Rookout is pioneering the category of software “understandability” by giving developers tooling to insert code-level, non-breaking breakpoints that eliminate unproductive work and unnecessary wait times associated with traditional debugging. With the shift-left DevOps movement, developers are becoming more and more responsible for how their code behaves in production — and that responsibility requires information. By enabling developers to retrieve necessary data from live systems, without affecting the performance of the application or requiring redeployment, developers achieve a more comprehensive understanding of their applications.

n Appvance: The Appvance IQ solution is an AI-driven, unified test automation system designed to provide test creation and test execution capabilities. It plugs directly into popular DevOps tools such as Chef, CircleCI, Jenkins, and Bamboo. n Atlassian: Atlassian offers cloud and on-

premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of continuous delivery, tying automated builds, tests and releases together in a single workflow. For cloud customers, Bitbucket Pipelines offers a modern continuous delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud. n Broadcom: With an integrated portfolio span-

ning the complete DevOps toolchain from planning to performance, Broadcom delivers the tools and expertise to help companies achieve DevOps success on platforms from mobile to mainframe. We are driving innovation with BlazeMeter Continuous Testing Platform, Intelligent Pipeline from Automic, Mainframe DevOps with Zowe, and more.

n Chef, from Progress: Chef Automate, the

leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open source projects; Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.

n CircleCI: The company offers a continuous integration and continuous delivery platform that helps software teams work smarter, faster. CircleCI helps teams shorten feedback loops, and gives them the confidence to iterate, automate, and ship often without breaking anything. CircleCI builds world-class CI/CD so teams can focus on what matters: building great products and services. n CloudBees: CloudBees is the hub of enter-

prise Jenkins and DevOps. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprisegrade security, scalability, manageability and expert-level support. The company also provides

CloudBees DevOptics for visibility and insights into the software delivery pipeline.

n Compuware, from BMC: Our products fit

into a unified DevOps toolchain enabling cross-platform teams to manage mainframe applications, data and operations with one process, one culture and with leading tools of choice. With a mainstreamed mainframe, any developer can build, analyze, test, deploy and manage COBOL applications. n Dynatrace: Dynatrace provides the industry's only AI-powered application monitoring. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.

n GitLab: GitLab aims to tackle the entire

DevOps life cycle by enabling Concurrent DevOps. Concurrent DevOps is a new vision for how the company thinks about creating and shipping software. It unlocks organizations from the constraints of the toolchain and allows for better visibility, opportunities to contribute earlier, and the freedom to work asynchronously.

n Instana: Agile continuous deployment practices create constant change. Instana automatically and continuously aligns to every change. Instana’s APM platform delivers actionable information in seconds, not minutes, allowing you to operate at the speed of CI/CD. AI-powered APM delivers the intelligent analysis and actionable information required to keep your applications healthy.

n JetBrains: TeamCity is a Continuous

Integration and Delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services. n JFrog: JFrog Pipelines empowers software

teams to ship updates faster by automating DevOps processes in a continuously streamlined and secure way across all their teams and tools. Encompassing continuous integration (CI), continuous delivery (CD), infrastructure and more, it automates everything from code to production. Pipelines is natively integrated with the JFrog Platform. n Liquibase: Liquibase (formerly Datical) solu-

tions deliver the database release automation capabilities IT teams need to bring applications to


024-25_SDT041.qxp_Layout 1 10/22/20 10:44 AM Page 25

25

market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process.

n Mattermost: The open-source messaging

platform built for DevOps teams. Its on-premises and private cloud deployment provides the autonomy and control teams need to be more productive while meeting the requirements of IT and security. Organizations use Mattermost to automate workflows, streamline coordination, and increase organizational agility. It maximizes efficiency by making information easier to find and increases the value of existing software and data by integrating with other tools and systems. n Micro Focus: Continuous delivery and deploy-

ment are essential elements of the company’s DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model. n Microsoft: Microsoft Azure DevOps is a suite

of DevOps tools that help teams collaborate to deliver high-quality solutions faster. The solution features Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping. n Neotys: Neotys is the leading innovator in

Continuous Performance Validation for Web and mobile applications. Neotys load testing (NeoLoad) and performance-monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders.

n New Relic: Its comprehensive SaaS-based solution provides one powerful interface for web and native mobile applications, and it consolidates the performance-monitoring data for any chosen technology in your environment. It offers code-level visibility for applications in production that cross six languages (Java, .NET, Ruby, Python, PHP and Node.js), and more than 60 frameworks are supported.

n OpenMake: OpenMake builds scalable Agile DevOps solutions to help solve continuous delivery problems. DeployHub Pro takes on traditional software deployment challenges with safe, agentless

software release automation to help users realize the full benefits of agile DevOps and CD. Meister build automation accelerates compilations of binaries to match the iterative and adaptive methods of Agile DevOps.

n Perfecto: A Perforce company, Perfecto

enables exceptional digital experiences and helps you strengthen every interaction with a quality-first approach for web and native apps through a cloud-based test environment called the Smart Testing Lab. The lab is comprised of real devices and real end-user conditions, giving you the truest test environment available.

n Puppet: Puppet provides the leading IT

automation platform to deliver and operate modern software. With Puppet, organizations know exactly what’s happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.

n Redgate: Its SQL Toolbelt integrates database development into DevOps software delivery, plugging into and integrating with the infrastructure already in place for applications. It helps companies take a compliant DevOps approach by standardizing team-based development, automating database deployments, and monitoring performance and availability. With data privacy concerns entering the picture, its SQL Provision solution also helps to mask and provision database copies for use in development so that data is preserved and protected in every environment.

n Rogue Wave Software by Perforce:

Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk.

n Sauce Labs: Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted

open-source standards for automating browser and mobile application functionality.

n Scaled Agile: To compete, every organization needs to deliver valuable technology solutions. This requires a shared DevOps mindset among everyone needed to define, build, test, deploy, and release software-driven systems. SAFe DevOps helps people across technical, non-technical, and leadership roles work together to optimize their end-to-end value stream. Map your current state value stream from concept to cash, identify major bottlenecks to flow, and build a plan that will accelerate the benefits of DevOps in your organization. n Tasktop: Transforming the way software is

built and delivered, Tasktop's unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop's pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.

n TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and continuous integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.

n Tricentis: Tricentis Tosca is a continuous testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry’s most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+) — enabling them to deliver the fast feedback required for Agile and DevOps. z



Buyers Guide

It's not all about deployment with software release management
BY JAKUB LEWKOWICZ

Today's software release methodology involves multiple aspects of the SDLC: planning, scheduling, and managing a software build through the stages of developing, testing, deploying, and supporting the release. Release automation, as well as methodologies like Agile development, continuous delivery, and DevOps, have greatly contributed to the evolution of release management. It is a core function of operations; however, more mature organizations are focusing on a DevOps approach, according to Jason Bloomberg, president of analysis and advisory firm Intellyx. Release management oversees all the stages involved in a software release from development and testing to deployment, and is required any time a new product or even changes to an existing product are requested. "Release management as a whole has actually been around for many, many years. It's been around from the old waterfall days and all the way through to Agile, and it's meant different things to each of these methodologies," said Mike O'Rourke, the chief research and development officer at DevOps software provider Digital.ai. "The release process used to be kind of like you just did a release and it was on a DVD and you hand the DVD to someone when they bought your product, but now those releases can happen, you know, tens of hundreds of times a day." Developers shifted from thinking of just putting out a project at a set time, to getting involved in all of the processes that follow a release, whether that's testing, updates, support, and more. In previous methods of release management, everything went through a change advisory board and through a lot of manual checkpoints and guardrails, whereas now it's much more of an automated capability, and this style of release management has really picked up momentum in the last five to


Release automation as well as methodologies like Agile development, continuous delivery, and DevOps have greatly contributed to the evolution of release management. seven years, according to O’Rourke. The born-on-the-web companies such as Amazon started out the trend by constantly updating their main webbased product, and over time, earlierestablished companies started realizing that they have to shift their quarterly releases up to daily releases. Heavily regulated industries took longer to get there as they had to deal with compliance issues in their release management. This includes financial industries, insurance, government, aerospace, defense, and health care, O’Rourke said. Release management has also evolved from being just about deployment. Whereas deployment only refers to moving code into some sort of environment (which could be the test or staging environment), release management goes beyond. “The key difference here is that the end user experience is important with release, where it’s not something you can particularly focus on with deployment,” Bloomberg said. “So that is where software release management now has to go beyond CI/CD tools and now you have to worry about the operational side, in addition to the software deployment side.” Effective software release management requires that all of the modules are

interoperating at some level so that development teams have a good understanding of what's really going on in the production environment with the code that they're working on in the context of all the other code. This has spawned tooling that provides visibility into what's going on with real-time metrics and logs. "Tooling is now supporting developers who are able to get visibility into the behavior of their software in production and this is another part of the cultural shift that DevOps presents," Bloomberg said. "You can't just throw your code over the wall and now it's an Ops problem, but rather developers have to be responsible for their code in production in a day-to-day practical way. They have to have the appropriate operational tooling, the user experience, application performance management tooling that is suitable for the role of the developer. This is different from the typical role of the operations engineer, who is just making sure things are running. Developers have to make sure that releases are properly executed and they have some sort of canary testing or A/B testing that's done properly." They're working with the SREs to ensure that software is properly tested in a production environment, according to Bloomberg. Today, the part of software release management that deals with testing in



production has changed dramatically with the use of feature flags or A/B testing. "[These capabilities] allow us to say I have this person over here in this demographic and I want to try a couple of things with him, but I don't want anyone else to see it. And maybe if he likes it, I'll turn on some other capabilities for all these other people," O'Rourke said. "If you don't have that control to be able to pull it back and put the thing that used to work out there very quickly, you're in trouble. These are the things that can take an app from five stars down to three, just like that." As this type of testing in production became more prevalent, companies realized that they need to plan well ahead of time if something goes wrong. "No matter how well you test your software or how sophisticated your infrastructure-as-code approach is, there are going to be things that you need to deal with in production. So instead of just being caught by surprise by something breaking in production, you plan ahead," Bloomberg said. "Testing in production is a phrase that's always sort of struck fear in the hearts of any engineering manager because it's like wiring your house with the power on. Who would want to do that if there's an alternative? But with this kind of dynamic software deployed to scale, there are a lot of things that you just don't know about the behavior of your software until you put it in a full production environment."
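The rollout control O'Rourke describes is typically implemented with percentage-based feature flags. Below is a minimal, hypothetical sketch of the idea; commercial flag services evaluate richer targeting rules from a managed backend, and the flag name and rollout values here are made up.

import hashlib

FLAGS = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name, user_id):
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:   # the kill switch: set "enabled" to False
        return False
    # Hash the user and flag together so each user lands in a stable bucket (0-99).
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, is_enabled("new-checkout-flow", uid))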

In addition to testing in production, another major shift is the emergence of GitOps within software release management, a set of practices that empowers developers to perform tasks that typically fall in the hands of IT operations. "You would think from its name that it's about Git, but GitOps is more about how Ops is taking a Git approach. This involves how a Git-based development life cycle deals with multiple teams working in parallel, deals with bringing code together and bringing it into production, and deals with representing code in a declarative fashion: the infrastructure-as-code approach," said Bloomberg.
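In practice, GitOps means the desired state of the environment lives declaratively in a Git repository and an agent continually reconciles what is running toward that state. The following is only a conceptual sketch under assumed file names and stand-in functions, not a real agent such as Argo CD or Flux.

import json
import time

def desired_state():
    # In a real setup this file comes from the Git repository that serves as the
    # source of truth (pulled or received via webhook); the path is hypothetical.
    with open("deploy/desired.json") as f:
        return json.load(f)   # e.g. {"web": {"image": "web:1.4", "replicas": 3}}

def actual_state():
    # Stand-in: query the platform (Kubernetes, VMs, etc.) for what is running now.
    return {"web": {"image": "web:1.3", "replicas": 3}}

def reconcile():
    desired, actual = desired_state(), actual_state()
    for service, spec in desired.items():
        if actual.get(service) != spec:
            print(f"drift detected in {service}: {actual.get(service)} -> {spec}")
            # apply_change(service, spec)  # hypothetical call that rolls out the change

if __name__ == "__main__":
    while True:
        reconcile()
        time.sleep(60)   # simple polling loop; real agents also react to Git events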

Automation is integral for remote release management

While many organizations already know the benefits of automating their releases and have these types of tools working for them, the key challenge is to be able to orchestrate all of the information across all of the different tools, according to Digital.ai's O'Rourke. "You can't just say, we only work with one of those tools. If you're going to bring the organization together and automate them as a whole, you have to really work and orchestrate all those capabilities," O'Rourke said. "So the ability to manage that process from start to finish is becoming increasingly important." In most organizations today, much of the release management process is still not fully automated, according to O'Rourke. However, many of these companies have improved by hundreds of percent, meaning what might have taken six months now takes six weeks. "That's great. But six weeks down to where Amazon is at multiple times a day, you know, you've still got a ways to go," O'Rourke added. The future of this more sophisticated

automation will be primarily driven by AI, according to Intellyx's Bloomberg. "There are going to be new levels of automation and it brings new, more intelligent automation that leverages AI approaches so that it doesn't boil down to human decisions, even on how you go about automating," Bloomberg said. "We'll have our AI create the automations. And the humans now manage the AI, which is essentially managing the

How does your company help organizations with software release management?
Mike O'Rourke, Chief of R&D at Digital.ai

Digital.ai Release provides release orchestration capabilities that enable organizations to achieve continuous delivery. Teams across an organization can automate and monitor the stages of the most complex software deliveries, cutting release times by identifying bottlenecks and improving processes. Digital.ai Release provides the backbone for release automation by integrating existing tools, providing end-to-end governance and provable compliance, and by enabling full visibility across the entire software delivery process. Digital.ai Release allows organizations to:
l Orchestrate, execute and monitor the most complex release pipelines
l Automate manual release processes, improving team efficiency while increasing reliability
l Gain insight into everything that happens in each release, showing the who, what, where and when of each software change
l View real-time release status from centralized dashboards and reports for both technical and business users
l Identify and resolve software delivery bottlenecks
l Manage all manual and automated tasks across the release pipeline
l Easily integrate with existing DevOps tools




data that feeds the AI. So it’s a whole different way of thinking about dealing with automation at scale. And I think we’re just scratching the surface of that.” The growing need for automated release management was already gaining steam before 2020, but now it has drastically changed the trajectory

upward in a short period of time. With the pandemic forcing most software organizations to work from home, companies had to really invest in automating much of the process. “COVID has made this even a bigger issue. It used to be that with a lot of these manual things I could potentially still do them very quickly. If I’m hanging

around the office, I could say, approve this? Will you approve this? Well now I can’t do that,” Digital.ai’s O’Rourke said. “People have found that if they don’t automate, they’re basically losing ground with their competition and so the notion of COVID, I will tell you, it has dramatically accelerated this whole notion of release management and DevOps.” z

A guide to release management tools

n BMC: Remedy with Smart IT provides capabilities for planning, building, testing, and deploying controlled releases into your IT environment. Consider using release management to implement or deploy an application or software on a large scale across the company. Smart IT provides a release-tracking interface that is simple to learn and use, collaborative, and available on mobile devices. n Broadcom: Continuous Delivery Director connects everything from user stories through testing and approvals to production deployment and monitoring. Thanks to a centralized, configurable dashboard, you can see stats on release productivity and identify bottlenecks, as well as monitor the evolution of your teams. You can also create powerful content and pipeline reports to share.

n Chef, from Progress: Chef Habitat provides automation capabilities for defining, packaging and delivering applications to almost any environment regardless of operating system or deployment platform. Habitat enables DevOps and application teams to build continuous delivery pipelines across all applications and all change events; create artifacts that can be deployed on-demand to bare-metal, VMs or containers without any rewriting or refactoring; and scale the adoption of agile delivery practices across development, operations and security. n CloudBees: CloudBees Flow is a release orchestration platform that offers unprecedented insight and control over releases and pipelines. It helps organizations reliably handle delivery at any speed or scale, while allowing users to manage the software release process in a single pane.

FEATURED PROVIDER
n Digital.ai: Digital.ai Release is a release orchestration tool specifically for continuous delivery. It enables teams across an organization to model and monitor releases, automate tasks within IT infrastructure, and cut release times by analyzing and improving release processes. It provides the backbone for DevOps release automation, integrates existing tools, and enables full visibility across the entire software delivery process.

n HCL: HCL Launch is engineered to handle your most complex deployment situations with push-button automation and controlled auditing needed in production, with single-click deployment of complex applications with multiple tiers, services, and components. Launch helps organizations incorporate repeatability, predictability, auditability, and traceability into their delivery pipelines. n JFrog: JFrog DevOps tools enable fully automated build, test, release and deploy processes providing rapid feedback loops for continuous improvement, while providing extensive APIs. n LaunchDarkly: LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Wrapping code with feature flags gives you the safety to test new features and infrastructure in your production environments, without impacting the wrong end users. n Micro Focus: Release Control empowers customers to manage the application release life cycle from development, through deployment, and into production. Users can centrally schedule, manage, track, and control all test and pre-production environments. Deployment processes and release task lists can be unified by automating large-volume, highly repetitive tasks across systems, tools, and teams. n MidVision: RapidDeploy allows you to plan, execute and track a release through every stage of the life cycle model. The application deployment life cycle is mapped within the tooling, increasing the speed, reliability and transparency of the software release process. Complex multi-component applications can be moved from one environment to the next as a single unit of work, re-using a consistent set of automation and orchestration procedures in every environment the application is deployed to. n Plutora: Plutora provides a complete toolkit for application delivery for defining and scheduling hierarchical releases, tracking dependencies, managing approvals, and maintaining compliance while accelerating change. Its centralized planning and release orchestration creates efficient, predictable continuous delivery pipelines. n ShuttleOps: Eliminate delays, failures, and conflicts by replacing time-consuming manual tasks and unmaintainable scripts with its no-code approach to continuous integration (CI) and continuous delivery (CD). ShuttleOps combines build, deploy and manage capabilities in one easy-to-use platform, so you can move towards your goal of continuous application management. n Split: Split combines feature flags and data, so you can deploy often, release without fear, and experiment to maximize impact. Feature flags free you to deploy when you want, roll out when ready, and dynamically adjust in production. Combine feature flags with ops data to detect release issues, identify the feature, and kill it instantly.



Release Orchestration | Deployment Automation | DevOps Visibility and Insights | End-to-End Governance & Reporting

Power Your Continuous Delivery Process with End-to-End Release Orchestration and Deployment Automation

Designed for enterprises with complex pipelines, Digital.ai (formerly XebiaLabs) simplifies the DevOps delivery process. With Digital.ai, you gain a software system of record that provides visibility into each release, with the flexibility to deploy to any environment, from mainframe to containers and the cloud. Automate orchestration, connect all your DevOps tools, manage the interactions between them, and achieve continuous delivery with Digital.ai. Learn more at https://digital.ai

Agile Planning

DevOps

Application Security

Continuous Testing

AI-Powered Analytics



Analyst View BY ROB ENDERLE

Opportunity to develop virtual worlds

Rob Enderle is a principal analyst at the Enderle Group.

Two products were announced this year that together would seem to create an unparalleled opportunity for developers. That opportunity is to create virtual worlds for both consumer and commercial audiences. As we continue to struggle through the coronavirus pandemic, the need for more in-home entertainment and work collaboration alternatives is sharply increasing. People want to get away, they want their kids out from under their feet, and they need better ways to work with their peers safely and remotely. Two products focused on developers together provide a path to meeting this increasing need. They are HP's new Omnicept solution and NVIDIA's Omniverse Machinima solution. The two firms are working together to flesh out a combined solution. The result could be fantastic for developers. Let's explore that.

We have an increasing need to provide safe ways for people to work, play, and unwind.

Defining the need

Most of us are still locked up at home and, if you are like me, going a little stir crazy. VR had the promise of providing an at-home way to explore new worlds and even collaborate virtually, but much of the consumer-focused hardware was of low quality, and the software didn't step up to the challenge. As a result, those who have VR headsets aren't using them to address this need for entertainment and collaboration. The industry is missing the related revenue and needs a critical set of tools focused on the problems. You can create 3D images in a virtual world and manipulate them as a group if that group is connected and has the proper hardware. When HP brought their first Reverb headset to market, they mainly fixed the hardware part with a high-resolution offering, higher comfort, and a better industrial design. HP recently refreshed that offering with its second-generation product, the HP Reverb G2, which had improved cameras, sound, and even higher quality. That set a new hardware baseline, but we still needed to focus the result on the problem.

HP Omnicept

HP Omnicept begins with a modified HP Reverb headset with more sensors and a more collaborative

focus. Those sensors monitor the eyes, put a camera on the mouth, and monitor heart rate. The eye sensors lower hardware requirements, only rendering what the eye sees and reporting back what the eyes are doing while the mouth camera picks up mouth movements. Both eye and mouth sensors are critical to conveying emotions from the user through an avatar that might be placed into a virtual world. In addition, the cameras on the outside of the headset can capture 90% of arm movements, adding to creating more realistic avatars. The SDK and services that wrap this solution allow developers to create avatars that can act and react like their users in a virtual world, but we still need that virtual world.

NVIDIA Omniverse

What NVIDIA Omniverse Machinima brings to the table is that virtual world. Using game elements, developers can build rich worlds that these avatars can explore for entertainment or use for virtual meetings. The avatars can move through those worlds using tools like the HP Omnicept headset and SDK as if they were in the real world. Textures, photorealism, physics, and environmental elements like lighting and NPCs (Non-Player Characters) can be added for realism. All worlds that people could vacation or work in become possible, opening a massive opportunity for developers. Face animation is built into the tool. All it needs is HP's Omnicept headset to feed the animation (the default is to animate by sound, but that won't provide the level of realism possible from instrumenting the face). Imagine having meetings in a virtual castle, an office building that has yet to be built, or collaborating on a design in a virtual lab. We have an increasing need to provide safe ways for people to work, play, and unwind. VR had the potential to create this solution but, up until recently, didn't step up to the challenge. HP with its Omnicept and NVIDIA with its Omniverse Machinima effort provide the potential, with the help of developers, to meet this massive and growing pandemic-sourced need. Now it is just a matter of time before a focused developer puts these two tools together and creates the next significant collaboration, entertainment, or virtual vacation offering. The only remaining question is, who will be up for that task?



Guest View BY STEPHEN MAGILL

Use static analysis to secure open source

Sonatype's 2020 State of the Software Supply Chain Report found that next generation cyber-attacks actively targeting open-source software projects increased 430% over the past 12 months. Industry and the Open Source communities recognize heightened security risks and are working to solve these. For example, in August 2020 the Linux Foundation launched the Open Source Security Foundation (OpenSSF), billing itself as "a cross-industry collaboration that brings together leaders to improve the security of open-source software." The Foundation notes how pervasive open source has become, and how critical it is to bring together open-source security initiatives and those who support them to advance open-source security for all stakeholders. Traditional security tools can help here, but open source is public, transparent, cloud-based, and collaborative. This lends itself to a new way of certifying software: Continuous Assurance. In this approach, automated tools and processes ensure that, as code changes, it continually satisfies compliance, quality, and security requirements. It's the GitHub-era agile development approach to security and code quality. Continuous Assurance integrates directly into development and benefits from the always-up-to-date nature of cloud services, making it a perfect match for open source. Google and Facebook pioneered the first scaled implementations of Continuous Assurance, and have extensively shared their learnings and open sourced several tools like Infer and ErrorProne from those initiatives. Their findings boil down to these three key principles.

1. Developers First. Developers are the only ones who can fix bugs. Bug reports need to be targeted at developers, not security or compliance experts. Also, as Facebook learned, a focus on new/changed code, rather than generating long lists of pre-existing errors, makes the best use of developer attention and gets bug reports fixed rather than ignored (see their report of a tool that went from 0% to 70% fix rate just by focusing on diffs during code review).

2. Use Many Tools. Unfortunately, there is no one tool to rule them all. Every project's code base is different, whether because of the language makeup,

the bugs it cares about, or a million other reasons. And fortunately, the open-source community has created lots of analyzers for different languages, problem domains, resource constraints, etc. But these tools have limited uptake. Why is open source not using more open-source analysis tools? Ease of use is one factor, but cloud-based analysis services address this blocker. Just as open source relies on community code contributions, it should rely on those same contributors to suggest and implement static analysis tools that would improve code security and quality. We need better feedback loops between analysis authors and developers, and it starts with increasing use of analysis tools.

3. Revisit and Improve Results. Static analysis tools have a well-earned reputation for being noisy and annoying. And as Google learned in their experiments, noisy analyzers make developers ignore tools and their results. At Google, where they consider any bug not fixed by developers a false positive, any analyzer with 10% or more false positives (i.e., bugs not fixed) would be pulled and reworked. That process made it easy for them to allow any developer at Google to write their own analyzer they could share with the rest of the company. Open-source maintainers should ensure they develop a strong feedback loop with their code analysis partners, so when tools are too noisy, they can be tuned or removed. A properly implemented static analysis solution should be fairly quiet, so when it raises issues as code review comments, developers listen and engage.

To get started, learn about the tools available, think about what's important to your project, and put together a plan and prioritization of tools you want to use. Focus on open-source tools that can be integrated into CI/CD pipelines, either directly or through a commercial platform that is ideally free for open source and integrates a broad range of open-source tools. Whichever path you choose, focus on the developer experience, implement a broad range of tools, and continually monitor your tools and tune the noise out. Doing so will keep your contributors happy, and actively continuing to write great code for your project.
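As a simple illustration of the "focus on new and changed code" principle, a CI job can collect only the files touched by a pull request and run the analyzer against just those. The sketch below assumes a Git checkout and uses flake8 as a stand-in analyzer; substitute whatever tools your project has chosen.

import subprocess
import sys

def changed_python_files(base_branch="origin/main"):
    # List Python files that differ from the base branch, i.e. what this change touches.
    out = subprocess.run(
        ["git", "diff", "--name-only", base_branch, "--", "*.py"],
        capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]

def run_analyzer(files):
    if not files:
        print("No changed Python files; nothing to analyze.")
        return 0
    # Findings are limited to code the developer just touched, which keeps the
    # signal-to-noise ratio high enough that reports actually get fixed.
    return subprocess.run(["flake8", *files]).returncode

if __name__ == "__main__":
    sys.exit(run_analyzer(changed_python_files()))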

Stephen Magill is CEO of MuseDev.

Continuous Assurance... is the Github-era agile development approach to security and code quality.




SD Times Newsletters: The latest news, analysis and commentary delivered to your inbox!

• Reports on the newest technologies affecting enterprise developers • Insights into the practices and innovations reshaping software development • News from software providers, industry consortia, open source projects and more

Read SD Times Newsletters to keep up with everything happening in the software development industry.

SUBSCRIBE TODAY!

