JUNE 2020 • VOLUME 2, ISSUE NO. 36 • $9.95 • www.sdtimes.com
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx
CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi mleonardi@d2emerge.com LIST SERVICES Jessica Carroll jcarroll@d2emerge.com
CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlwekowicz@d2emerge.com
REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer 603-547-7695 jsawyer@d2emerge.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
Contents
VOLUME 2, ISSUE 36 • JUNE 2020

NEWS
4 News Watch
6 Building a new developer experience
8 Governance, risk and compliance does not have to be a lengthy, tedious process
16 CloudBees brings feature flagging to on-premises environments
16 ShuttleOps announces no-code CI/CD solution for app delivery

FEATURES
10 Documentation continues to be a thorn for developers
12 The most important factor in project success? Your staff
14 Moving from Python 2 to Python 3
18 The SD Times 100: 'Tear down those walls'
22 Observability: It's all about the data (Monitoring: the last of three parts)

COLUMNS
28 GUEST VIEW by Richa Roy: It's getting too technical
29 ANALYST VIEW by Bill Holz: 8 traits of an Agile superhero
30 INDUSTRY WATCH by David Rubinstein: What does 'value' mean to developers?

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
NEWS WATCH

Google to replace TensorFlow's runtime
Google has announced a new TensorFlow runtime (TFRT) designed to make it easier to build and deploy machine learning models across many different devices. The company explained that ML ecosystems are vastly different than they were four or five years ago. Today, innovation in ML has led to more complex models and deployment scenarios that require increasing compute needs. The new TFRT provides efficient use of multithreaded host CPUs, supports fully asynchronous programming models, and focuses on low-level efficiency. It is aimed at a broad range of users, such as:
• researchers looking for faster iteration time and better error reporting,
• application developers looking for improved performance,
• and hardware makers looking to integrate edge and datacenter devices into TensorFlow in a modular way.
Microsoft rebrands Visual Studio Online
Microsoft has revealed it is renaming Visual Studio Online to Visual Studio Codespaces. The company found developers were using Visual Studio Online as much more than "just an editor in the browser," so it decided to rename the product to better align with its value. "Do you want a great experience working on your long-term project? Do it in a Codespace. Need to quickly prototype a new feature or perform some short-term tasks (like reviewing pull requests)? Create a Codespace! Your Codespaces are simply the most productive space to code," Nik Molnar, principal program manager, wrote in a post.

People on the move
• Abby Kearns, former CEO and executive director of the Cloud Foundry Foundation, has announced she is joining Puppet as its new chief technology officer. Kearns has 20 years of experience in cloud computing and working with open-source communities. At Puppet, she will help to grow and evolve the company's products and services in order to help enterprises modernize and manage their infrastructure.
• Enterprise cloud management platform provider CloudBolt has appointed Jeff Kukowski as its new CEO and member of its board of directors. Kukowski will be responsible for scaling and growing CloudBolt's position in the cloud management platform space.
• Digital.ai announced Stephen Elop and Angela Tucci are joining the newly formed company's board of directors. Elop will become the chairman of the board while Tucci will be a board member. Elop was previously the group executive of technology, innovation and strategy at Telstra and the executive vice president of Microsoft's devices group. Tucci served as the general manager of agile management at CA Technologies and chief revenue officer of Rally Software.
TypeScript 3.9 now available
TypeScript 3.9 introduces speed improvements to the compiler and editing experience and reduces bugs and crashes. The team accepted a number of pull requests that optimize speed, each of which should reduce compile times by 5-10%. "Our team has been focusing on performance after observing extremely poor editing/compilation speed with packages like material-ui and styled-components. We've dived deep here, with a series of different pull requests that optimize certain pathological cases involving large unions, intersections, conditional types, and mapped types," Microsoft wrote in a post.
Catchpoint adds user sentiment to monitoring platform
Catchpoint announced it is adding the ability to capture user sentiment to its digital experience monitoring platform. According to Catchpoint's announcement, the new capability will offer enterprises "broader insights into the overall health and performance" of their applications and services. The offering rounds out the platform, which already performs synthetic, network, endpoint and real user monitoring. User sentiment is collected in three ways, according to Catchpoint CEO Mehdi Daoudi: monitoring global social networks to see if people are complaining about a company or its application; using machine learning to remove false positives; and self-reporting of issues.
GitHub Satellite introduces new coding solutions
Despite attendees being apart this year, GitHub's virtual Satellite conference was focused on bringing software communities together. The company announced Codespaces, GitHub Discussions, code scanning and secret scanning, and GitHub Private Instances. Codespaces is a new development environment within GitHub that aims to make contributing to projects easier. GitHub Discussions focuses on collaborating outside the codebase. Code scanning and secret scanning are new beta features that work to secure code. Code scanning is available as a GitHub-native experience and works to find potential security vulnerabilities. Secret scanning, formerly known as token scanning, is now available for private repositories. GitHub Private Instances were designed for enterprises that need more security, compliance and policy features.
ConnectALL's new value stream offerings
ConnectALL announced the release of version 2.10 with new features and enhancements designed to extend its value stream management (VSM) offerings. The latest release includes a new value stream empowerment tool: the Value Stream Visualizer, which enables users to generate a complete map of their value stream, as well as extended lean metrics and a new user interface. In addition, the company announced a new website, valuestreammanagement.com, a free platform for value stream mapping and delivery. Users can configure and map their value stream, and then download that map to drive more efficient and agile value stream integration within their organization and provide a structure for tracking improvement, according to ConnectALL.

Node.js 14 with improved diagnostics
The latest version of the JavaScript runtime Node.js is now available. Node.js 14 will replace Node.js 13 on the current release line, where it will remain the 'Current' release for the next six months until LTS support arrives in October 2020. Highlights of the new release include improved diagnostics, an upgrade of V8, an experimental Async Local Storage API, hardening of the streams APIs, removal of the Experimental Modules warning, and the removal of some long-deprecated APIs.

Android 11 beta plans
Google announced a new release timeline for Android 11. Beta 1 has been moved to June 3rd, and all other subsequent milestones have been moved by about a month to give everyone more time. The final release date is set for Q3. Beta 1 will include the final SDK and NDK APIs, and Google Play publishing will be opened for apps targeting Android 11. "The schedule change adds some extra time for you to test your app for compatibility and identify any work you'll need to do," Dave Burke, vice president of engineering for Android, wrote in a blog post. "We recommend releasing a compatible app update by Android 11 Beta on June 3rd to get feedback from the larger group of Android Beta users who will be getting the update."

Harbor reaches 2.0 milestone
The Open Container Initiative has announced the general availability of Harbor 2.0. The latest release makes it the first OCI-compliant open-source registry capable of storing cloud-native artifacts such as container images, Helm charts, OPAs, and Singularity. In addition, it enables pulling, pushing, deleting, tagging, replicating and scanning artifacts. "Although Harbor is now OCI-compliant, existing users should not worry; all of the familiar operations and key benefits of Harbor translate well to OCI. You can push, pull, delete, retag, copy, scan, and sign indexes just like you've been able to do with images. Vulnerability scanning and project policies, key ingredients to enforcing security and compliance, have been revamped to work with OCI artifacts," the OCI wrote in a blog post.

Kite expands to JavaScript AI coding assistant
Kite has announced the release of an AI-driven code completion feature for JavaScript. Its goal in creating this feature is to reduce the need for developers to write repetitive boilerplate code. Previously the company only offered this capability for Python code. This new JavaScript code completion ability is based on a deep learning model that was trained on 22 million open-source JavaScript files. This extensive training also ensures that it works with libraries and frameworks such as React, Vue, Angular, and Node.js. According to Kite, this new capability is able to provide completions at a level that other editors cannot. For example, typing user. into an editor using Kite will prompt suggestions like createdAt = moment(user and createdAt = moment(, while VS Code only provides suggestions like map, moment, user, and users. Kite uses AI-based filters to reduce noise on suggestions.

IBM's Elyra extension for Jupyter Notebooks
To further its commitment to open source, IBM is releasing Elyra, a set of open-source extensions for AI for Jupyter Notebooks, a browser-based IDE that allows developers to write and share code online. The initial release offers a Notebook Pipeline visual editor, the ability to run notebooks in batches, hybrid runtime support, Python script execution capabilities, notebook versioning through Git, and reusable configurations.

Go developer survey released
Interest in the Go programming language continues to rise, but some developers aren't using it as much as they would like to. The team behind the Go programming language has released their annual Go developer survey, with data from 2019. According to the report, the reasons why developers can't use Go more are that they are working on a project in another language (56%), working on a team that prefers another language (37%), and the lack of a critical feature in Go (25%). Critical features missing include generics, better error handling, functional programming features, a stronger type system, enums/sum types/null safety, increased expressivity, and improved performance and runtime control. However, the Go team finds the number of developers that prefer a different language decreases every year.
Building a new developer experience
BY JENNA SARGENT
Federal guidelines on social distancing have prevented conferences from taking place for the past few months. But the inability to hold physical events isn't stopping many companies from hosting their planned events virtually, and Microsoft has taken a creative approach to moving its annual Build developer conference online. In an interview on SD Times' podcast "What the Dev?", Scott Hanselman, partner program manager at Microsoft, explained Microsoft's process behind transitioning Build to an online event. He explained that when giving a talk at a conference, there is a lot of pomp and circumstance and everything feels big. "But that's weird because we're in our homes and we have headphones on," said Hanselman. "So when we fundamentally sat down and thought about what Build needed to be, we wanted to figure out: how do you scale something to many, many, many thousands of people without trying to pretend that we're all in a room together presenting to many thousands of different people."

The answer to that question was to make Microsoft Build more one-on-one. This year's event features a combination of live events that are a one-to-many stream, as well as one-on-one sessions for which attendees can reserve a seat. "[Attendees] could go into a room with 10, 20, or 100 of their friends," Hanselman said. "We've got yoga, we've got the student zone, and there's interactions with humans that make you feel like you almost came to work for a new company and you're onboarding and you're running around and going to all these different places, without us trying to pretend that we're in an expo center. We don't have a picture of an expo center with fake rooms to click on and pretend that nothing's changed. It's an acknowledgement of the situation while still making people with headphones on feel like their good friends are chatting in their ear." Here are just some of the announcements Microsoft made at Build this year.
Windows Terminal
Windows Terminal 1.0 is now available for use. It allows users to use command-line tools and shells like Command Prompt, PowerShell, and WSL in a modern and powerful terminal. Features in 1.0 include Unicode and UTF-8 character support, a GPU-accelerated text rendering engine, and custom themes, styles, and configurations.
Azure Synapse Link
Azure Synapse Link is a cloud-native implementation of hybrid transactional analytical processing (HTAP), which is used to perform analytics on live operational data, Microsoft explained. Microsoft believes this offering will break down the barriers between Azure database services and Azure Synapse Analytics. Azure Synapse Analytics was first released last November and brings the capabilities of data warehousing and Big Data analytics together to deliver a unified experience for ingesting, preparing, managing, and serving data for machine learning, Microsoft explained. Azure Synapse Link provides insights from real-time transactional data, without administrators needing to move data or place burdens on operational systems. It is being released initially on Azure Cosmos DB and Microsoft will be adding support for other operational database services like Azure SQL, Azure Database for PostgreSQL, and Azure Database for MySQL.
Azure Cosmos DB
There are a number of updates to the NoSQL database Azure Cosmos DB. Microsoft is releasing Azure Cosmos DB serverless, which offers per-operation compute pricing. This will enable developers to manage intermittent usage in their apps in a cost-effective way. Other new capabilities include the ability for developers to bring their own encryption keys, recover data from a specific point in time, access Version 4 of Azure Cosmos DB's Java SDK, and use new delete functionality. Microsoft also announced the upcoming general availability of autoscale provisioned throughput, which has 99.999% availability and single-digit millisecond latency.

Microsoft Teams
Microsoft has added a Teams extension in Visual Studio and Visual Studio Code to make it easier for developers to build Teams apps for their organization. It also now has single sign-on for users and a new Teams Activity Feed API that provides an easier way to send app notifications.
Azure Cognitive Services
Microsoft has made several updates to Azure Cognitive Services that will allow developers to incorporate AI capabilities into their applications. It added an apprentice mode to Personalizer, which allows organizations to overcome the learning curve of the service. When in use, the Personalizer API will learn in real time and won't be exposed to end users until it meets certain KPIs set by the operator. Speech to Text is being expanded to 27 additional locales, and Neural Text to Speech to 11 new locales, with 15 new voices. Language Understanding now has an improved labeling experience, which Microsoft believes will make it easier for developers to build apps and bots that understand complex language structures. Language Understanding and Text Analytics sentiment analysis 3.0 are also now available for use in containers. QnA Maker, a service that converts existing content like an FAQ page into
Q&A pairs to produce knowledge bases, has introduced role-based access control (RBAC) and the ability to use rich text editors to control formatting.
Visual Studio Codespaces
Microsoft has also rebranded Visual Studio Online as Visual Studio Codespaces. Codespaces is a cloud-hosted version of Visual Studio that allows developers to remotely work on code from anywhere and on any device. "Once you have the power of the cloud, then you can start throwing the elastic power of it at this," said Hanselman. "I could have a $300 Walmart laptop and suddenly have the power of a $4,000 developer workstation." It also allows developers to collaborate on the same codebase through Visual Studio, rather than having to deal with sharing screens over a conference call. "You probably do screen sharing all the time, but do you ever really do the thing where you request control and they give you control of their machine? I don't want control of your machine. I don't know your whole thing, I don't know your hotkeys. I want the code and the context, I don't want your pixels. So yeah, absolute game-changer, and also a lot lower bandwidth. I would be on my machine looking at your codebase, but your code never comes over to my computer and that's what's so cool about it," Hanselman continued.
Fluid Framework
Microsoft also announced that it will be open sourcing the Fluid Framework. The Fluid Framework is a set of technologies and experiences to make collaboration easier across Microsoft 365. It offers the ability for multiple people to author documents, includes a componentized document model, and allows for AI to work alongside humans to translate text, fetch content, suggest edits, and perform compliance checks. Now that it is open source, developers will be able to use portions of the Fluid Framework in their own code. Microsoft first announced the framework at Build last year, and released the first public preview in November.
Azure Active Directory (Azure AD) External Identities
This new security feature enables developers to build user-centric experiences for external users without needing to write double the code. This will make it easier for employees to collaborate with supply-chain partners in Microsoft Teams, SharePoint, and other line-of-business apps.
Azure Security Center
There are two major updates being released for Azure Security Center. First, Azure Secure Score API is now generally available. Secure Score provides an assessment of an environment’s security risk and offers actions to reduce risk. The second update is the availability of suppression rules for alerts that let operators hide alerts that are known to be safe in order to reduce alert fatigue. z
Governance, risk and compliance does not have to be a lengthy, tedious process
BY CHRISTINA CARDOZA
Software development may be a faster process thanks to the rise of Agile, DevOps, and continuous delivery, but governance, risk and compliance (GRC) management is slowing things down. There are many manual and lengthy checks that go into GRC to make sure the software is secure, adheres to laws and regulations, and is on track with the company's business goals. However, in today's modern software development world there are new methods being applied to speed up the process, according to Rebecca Parsons, CTO of ThoughtWorks. There is currently a trend to automate more parts of the software development life cycle, and while automation is typically associated with software testing, ThoughtWorks' recent Technology Radar, a guide to technology trends, found a move to apply automation to governance, risk and compliance. "Building automation around cloud cost, dependency management, architectural structure and other former manual processes shows a natural evolution; we're learning how we can automate all important aspects of software delivery," the Technology Radar stated. Some examples of automated governance from the Technology Radar included:
• Dependency drift fitness function: Tracks technical dependencies in software to see if it needs improvements or if any potential issues get worse.
• Security policy as code: Security policies put rules and procedures in place to protect systems from threats. Treating policies as code enables them to be automatically validated, deployed and monitored.
• Run cost as architecture fitness function: "This means that our teams
can observe the cost of running services against the value delivered; when they see deviations from what was expected or acceptable, they'll discuss whether it's time to evolve the architecture," the Technology Radar stated.

"You don't have to spend time doing something that a computer can do," said Parsons. "It really ties into the broader narrative around continuous delivery where you try to ensure that you can, in an unimpeded way, get to production from checking in code."

According to ServiceNow, automating GRC can improve visibility, save time, reduce risks, prevent problems and enable businesses to respond quickly to changes. In a whitepaper, ServiceNow provided eight tips for automating the GRC process:
1. Defining business rules: Do this upfront and include them in an implementation plan.
2. Rationalizing controls: Some questions you need to ask yourself include: How does this control support my business objectives? Is this control actually preventing or detecting risks? And is there a different control I can put in place that better protects my business?
3. Consolidating controls: There are common, repeated controls across the multiple regulatory authorities and frameworks. Consolidating these removes any redundant tests and repetitive activities.
4. Defining what's important: To avoid massive amounts of unnecessary work, define what matters.
5. Identifying risks: Include the impact they will have and the likelihood of those risks occurring.
6. Starting small: Adding automated GRC functionality incrementally can help minimize business disruption.
7. Building toward continuous monitoring: By being able to identify
and control deficiencies when they happen, you can catch problems when they are small and prevent them from getting out of hand.
8. Picking low-hanging fruit: A good start can be with administrative overhead or processes related to current audit findings.

The number one tip, or prerequisite, Parsons had for automating GRC is to be able to define it in an unambiguous way. "You can't automate something if you don't actually know what it means," she said. Parsons noted that the reason there isn't one single way or more tools available to apply automation to GRC is because technology stacks are complex and oftentimes it is too hard to figure out how to define certain aspects, and different organizations are going to be exposed to different kinds of risks. She went on to explain that some things will be easier to automate than others, such as code analysis metrics or declaratively stating a policy in a tool to do the checking for you, but if you are clever about it there are some ways you can automate GRC at work.

Parsons does acknowledge that not everything should be automated, and it really comes down to what is being checked. There will be some instances where you still want a person in the loop, but she added that it also doesn't mean they have to do all the work. Parts of GRC can be automated and then manually checked by a real person. Some areas that might not be a good place to start are things that tend to take up a lot of time in the manual governance process, because that probably means there are a lot of nuances involved. "You want to start with things that are well understood so you can start to see quick wins and then spend your time trying to [break down] some of those things that are more ambiguous," said Parsons. z
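The "security policy as code" idea is concrete enough to sketch. The toy check below is our illustration of the pattern, not a ThoughtWorks or ServiceNow tool, and the rule names and resource fields are invented for the example. The policy lives in version control next to the application; CI runs the script against a description of the deployment, and any violation fails the build:

# A toy "security policy as code" check. The policy is plain data kept
# under version control; CI runs this script and fails on violations.
import json
import sys

POLICY = {
    "forbid_public_buckets": True,   # hypothetical rule names
    "min_tls_version": 1.2,
    "allowed_ports": {443, 8443},
}

def validate(resource):
    """Return a list of policy violations for one deployment resource."""
    violations = []
    name = resource.get("name", "<unnamed>")
    if POLICY["forbid_public_buckets"] and resource.get("public", False):
        violations.append(name + ": public access is forbidden")
    if resource.get("tls_version", 0) < POLICY["min_tls_version"]:
        violations.append(name + ": TLS version below the required minimum")
    bad_ports = set(resource.get("open_ports", [])) - POLICY["allowed_ports"]
    if bad_ports:
        violations.append(name + ": disallowed open ports " + str(sorted(bad_ports)))
    return violations

if __name__ == "__main__":
    # Expects a JSON list of resources, e.g. exported from a deployment plan.
    resources = json.load(open(sys.argv[1]))
    problems = [v for r in resources for v in validate(r)]
    print("\n".join(problems) if problems else "policy check: OK")
    sys.exit(1 if problems else 0)  # non-zero exit fails the pipeline stage

Because the rules are ordinary code and data, they can be reviewed, versioned, validated and monitored automatically, which is exactly what treating policies as code is meant to buy you.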
Documentation continues to be a thorn for developers BY JENNA SARGENT ew tasks can make a developer groan more than the process of documenting their code. As important as documentation is, it’s a task that has always gotten set as a low priority for developers, which leads to problems down the line. Postman’s 2019 State of API survey found that over half of respondents felt that API documentation was “below average or not well documented.” In addition, Coding Sans’ 2020 developer survey showed that sharing knowledge was the biggest challenge in software development. When talking about documentation, it’s important to make the distinction between two main types. Internal documentation is used to share knowledge about a particular piece of code with fellow developers, or developers who might be working on that piece of code in the future when the person who wrote that code might no longer be around. External documentation is documentation for consumers of a product. This includes things like release notes or product manuals. “If your customers can’t adopt your product and use it well, often you will lose that customer, or they will become frustrated and may look at other products,” said Kendra Little, DevOps Advocate for Redgate. One interesting evolution of customer-facing documentation over the last few years might be changing its place in the software development life cycle. Some companies have taken to integrating documentation into the product itself. For example, rather than a user having to search for a specific page in web documentation when they want to learn more about a feature, the application itself will include a pop-up
F
that appears when an object is hovered over that provides more information. “Instead of asking a customer to go look at our external page of releases, it’s in the tooling to have a place that notifies your customer ‘hey we have this new feature and here’s where you can go read more about it,’” said Little. Joseph Spurrier, head of engineering at cloud governance company Cloudtamer, encourages his team not to think of features as simply buttons or forms, but as the know-how that goes along with that feature. “We develop a variety of documentation to share that knowhow: basic pop-up labels to help identify unique icons, tool tips that provide a sentence or two to help the user complete the task, and context-sensitive help accessible from within our application and hosted on our support portal,” he said. Little believes that it’s no longer enough to just have a set of web pages that users can find by searching. “Yes, you do need to have that, but you also need to help them find it at the relevant time,” said Little. According to Asif Rehmani, CEO of VisualSP, a company that provides incontext training solutions, no matter how skilled a person is, eventually they will come to a part of an application they’re not familiar with and will have questions. At that time, it’s not efficient for a user to have to go search for answers. It’s much better for those answers to be surfaced in the moment, which prevents the flow of work from being disrupted. A short snippet of documentation that appears in a help icon that you can click on is much more effective than long documentation that isn’t as accessible when you need it, Rehmani said.
Making documentation context-aware can help users better get the information they need, and the same principles can be applied to end user training as well. Context-aware training educates users based on what they need to know at any given time, rather than dumping all training on them upfront, before they even get to access or explore the systems they're being trained on. Rehmani said that traditional training has its place in certain situations, such as for administrators or power users, but that most end users don't need all that training up front, especially for tools they won't be using for another six months to a year. Rehmani thinks of this context-aware training less as training, and more as helping. "Think of it like this. How many times have you had someone come up to you and say 'can you train me on this?' That's just not what people say. When people come to others they're using the word 'help.' They're saying 'can you help me with this?' It is truly help that they're looking for to accomplish the task at hand at their moment-of-need. Context-sensitive training does exactly that. It provides help at the moment of need."

Historically, and even still today in companies that haven't adopted these context-based methodologies, documentation has always gotten shoved further and further down the priority list. This is especially true in environments where higher-ups are demanding a lot from developers, leaving them little time to document their work. "Maybe traditionally, developers have been a little bit sparse on documenting things and it's been an extra chore that they have to do after work on implementation has been done," said Heikki Nousiainen, CTO and co-founder of managed database provider Aiven.

Shayne Sherman, CEO of IT company TechLoris, agreed, adding that while developers might begin a project with the intention of adding in good documentation, things quickly fall apart as deadlines near. For example, he has seen a number of developers over the years start off where "each method has a great comment block, new services and functions are added to the documentation repos, the sun is shining and the birds are singing. But, as the iteration ends, QA gets their hands on the code, and the deadline looms near, clouds gather and the birds are silenced. Suddenly all we care about is getting the code out. Methods change and the documentation isn't updated. New methods are written and are never documented at all."

Sherman believes there are a few ways that developers can get control back over this process. He recommends developers pad their tasks with the understanding that documentation is a requirement. He also advocates for including documentation in the definition of done. Another way that developers can stay on top of things is to include it as part of peer review, he explained.

Code collaboration platform CodeStream is tackling this issue of knowledge sharing with their flagship product. "We believe that collaboration solutions that are specifically designed for developers should become the foundation of useful documentation," said Claudio Pinkus, COO and co-founder of CodeStream. They believe in "on-demand" documentation, which Pinkus believes is more efficient than if a developer were to try to document every single thing about the code that someone might have a question about somewhere down the line. "When a developer implements a new component or module, and wants to attach documentation, instead of trying to figure out in advance what others may not understand, CodeStream allows the consumers of that code to more easily ask questions, and 'pull' the information from the authors," said Pinkus. "We have implemented an in-editor interactive FAQ approach that captures the question-and-answer interaction and attaches it to the code it refers to. So not only do the missing pieces get filled in to solve the immediate need, the discussion is
saved alongside the code for the benefit of the next developer who consumes the component or module."

According to Nousiainen, ultimately the key to moving documentation higher up on developers' priority lists is to get them to understand the value it will bring to them, whether that be through easier maintenance and refactoring down the road through good internal documentation or reduced support tickets through better customer-facing product documentation. Nousiainen believes that another key factor in getting documentation done properly is having upper-level management push for it. "Management often needs to make a push to start the initiative and then ensure there's enough time allocation and prioritization to keep the practice ongoing."

In fast-paced development cycles, it can be harder to keep up with documentation, but it's not impossible. For example, at Aiven, where they do two production deployments a day, they have discovered and practice a documentation system that works for them. They consider design documentation as part of the source code, meaning that documents are reviewed as part of the change management process, said Nousiainen. "It's quite a fast-moving target," he said. "It's even more important that the documentation really follows the same life cycle. And it's natural for us to keep that in the same delivery, or manage it in the same fashion as the implementation itself."

Automation plays a key part in this, too. Little explained that documentation has the potential to actually save developers time if they work it into their automation. "I think what actually is the truth about documentation and the reason that a lot of people dread it, is that it involves a lot of creation of a document, putting that in an email, and sharing it with people, and a lot of these manual steps that are quite time-consuming," she said. "If you can build that more into an automated pipeline and start chipping away and removing those steps and finding the tooling that lets you do little bits of this as part of your workflow, it makes it not only less time-consuming but it makes it a lot more pleasant to do." z
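One lightweight way to "build documentation into an automated pipeline," as Little puts it, is to treat reference documentation as a build artifact generated from the code itself. The sketch below is a generic illustration with an invented module and function, not a tool any of the companies above named; it uses Python's standard pydoc module so a CI job can regenerate the page on every merge, keeping the docs on the same life cycle as the implementation, as Nousiainen describes:

"""payments.py: the reference documentation lives next to the code it describes."""

def charge(amount_cents, currency="USD"):
    """Charge a card and return a confirmation ID.

    Args:
        amount_cents: Amount in the currency's smallest unit; must be positive.
        currency: ISO 4217 currency code; defaults to USD.

    Raises:
        ValueError: If amount_cents is not positive.
    """
    if amount_cents <= 0:
        raise ValueError("amount_cents must be positive")
    return "ch_%d_%s" % (amount_cents, currency)

if __name__ == "__main__":
    # A CI step can run "python payments.py" after the tests pass to
    # regenerate payments.html from the docstrings above.
    import pydoc
    pydoc.writedoc("payments")

Because the docstrings are part of the source, they are reviewed in the same pull request as the code they describe, which is one way to put documentation into the definition of done, as Sherman recommends.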
The most important factor in project success? Your staff
BY GEORGE TILLMANN

Go to any IT conference or class and you see the attendees chatting with each other about their organization's hardware, software, and networks. But you almost never hear them talk about their staff. What is surprising about this is that study after study has shown that, of all of IT's assets, staff is the most important and the number one factor in determining project success. Studies of programmers within the same organization have shown that the most productive programmer is often 10 or more times more productive than the least productive programmer. And, because it is rare for the best in an organization to be paid more than twice the worst, they are a bargain. The problem is so simple, yet so real. If you want first-rate systems, then you need first-rate staff. Conversely, if your staff isn't first rate, your systems won't be either.
George Tillmann is a retired programmer, analyst, management consultant, CIO, and author of 'Project Management Scholia: Recognizing and Avoiding Project Management's Biggest Mistakes,' from which this article is excerpted.

How do you get productive staff?
• Hiring productive people is the best and cheapest way for IT to gain productive staff. Unfortunately, it is also the area where IT does the worst job. Why? The reason is that the system is geared to hire the average, not the exceptional. Look at the typical IT hiring practice. In most companies, human resources (HR) has taken over hiring functions. Too often HR's candidate screening is limited
to word matching—lining up the words in IT's staff request with those on a candidate's resume. ("Oh, you know C++. Too bad, we are looking for a C programmer.") Lastly, IT salaries, routinely analyzed, plotted, and graphed by HR, are forcefully structured toward the average in the industry. Average salaries acquire average staff, not exceptional staff. This is a case where average is just another word for mediocre. The only way to gauge a candidate's skills is for the most talented IT staff to spend time with the candidate discussing his or her knowledge, experience, and that je ne sais quoi that sets apart the talented. IT's problem is mirrored in the project team. Many project managers either have no say or do not challenge who is on their team. However, every project manager should want the best staff on his or her team and should be willing to make sure it happens. A little two-step process will help. First, do what IT should have done: interview all prospective team members with a special eye for productivity.
Gain from IT management the right to accept or reject prospective team members. This is not always easy, it might not even be possible, but it is worth trying. Second, volunteer to interview potential new hires. The new hires might not help you on a current project but they could prove invaluable in future efforts. At the very least, the project manager learns before anyone else in IT who to try and staff on future projects and who to avoid.

• Training is not foreign to the IT industry, which prides itself on the amount of classroom time provided staff—much of it wasted. For many organizations, training is treated as a staff benefit, not a department asset. Courses are selected based on employee interest rather than IT need. Few IT organizations tie training to an overall staffing plan or master project schedule. Given the near universal finding by researchers that staff is the most important IT asset for project success, one would think that developing hiring skills (interviewing, researching backgrounds, assessing need and fit) and assessing and bolstering the skills of existing employees (understanding both the employees' and the project's development needs and how to satisfy them) would be at the top of the list. But it is not. Rather than that SQL or Python course, project managers should sign up for courses on developing hiring skills and what project managers can do to assess and develop team member skills.

• Experience simply can't be beat. It is important to place the most experienced staff on the largest, most critical projects. It is equally important to place inexperienced staff where they can learn while doing little damage. How does IT treat project staffing? A vending machine comes to mind. Whoever does project staffing looks to see who is available and then pulls the lever on the first programmer in line, with little concern for staff/project fit. A project manager often has to take what he or she is given, but there are still opportunities to strengthen experience. Project managers should ensure that the knowledge of the most experienced is passed on to the new generation. Junior staff should be paired off with an experienced team member/mentor. The experienced team member should know that his or her performance is rated, not just on how well they build the system, but how well they develop their mentee. Every project manager should recognize that when it comes to staff, hardware, and software, staff is by far the most important IT asset and their biggest avenue for success. z
Moving from Python 2 to Python 3
BY CHRISTINA CARDOZA
Python 2 has officially reached its end of life. The Python programming language team just announced Python 2.7.18, the last release of Python 2. Going forward, Python 2 will no longer receive updates, bug reports, fixes or changes. The Python Software Foundation recommends those using Python 2 switch to Python 3 as soon as possible. Python 3 introduces new and improved capabilities that are not backwards compatible. Version 3 has been under active development since 2008. The latest version of Python 3 is 3.8.2, a second maintenance release for Python 3.8. The team is currently working on Python 3.9, which is available as an early developer preview. To learn more about the changes between Python 2 and Python 3, and how to successfully move to Python 3, SD Times talked to Jeff Rouse, vice president of product at ActiveState. Below is an edited version of the conversation. The full interview can be found on the SD Times weekly podcast, "What the Dev?"

SD Times: What does end of life for Python 2 mean for organizations?
Rouse: When Python 3 was introduced, Guido van Rossum and the core team decided that there were significant changes they wanted to make to the language that meant they were going to break backwards compatibility, and that is a very difficult call to make when you are designing a language. Design decisions you make a decade or two ago may not hold up in the light of new technology or where you want to take the language, so ultimately with the introduction of Python 3, the community and core language maintainers spent the better part of a decade getting people to move off of Python 2 and onto Python 3 so that all the maintainers and everyone in the Python community that are supporting both versions can finally finish. What the end of life means for Python 2.7 is there will no longer be any bug fixes, no improvements and probably most importantly no security updates
into that language. That includes most of the community packages.

How long do organizations have to make the move to Python 3?
Realistically, they should have already been thinking about this. It has been well advertised for quite a period of time. The initial thoughts were that around 2010 everyone [would] start moving. 2014 was going to be the deadline, and then it was extended to 2020. Organizations should have already been thinking about it, and if they haven't been or are new to it, that is fine. They can actually make use of all the content, applications and items available to help with the transition moving from Python 2 to Python 3. There are a series of steps you want to go through to evaluate how much it is going to take to move from Python 2 to Python 3, and each case is really different.

With the last release of Python 2.7, do you think organizations feel pressure to finally make the leap and move forward?
Yes, and we have been seeing this a lot at ActiveState. We support both Python 2 and Python 3, and we've had a lot of new customers come to us and say 'Hey, I am still on Python 2. Can you help us out?' or 'Can you give us a little more [time] until we are ready to get to Python 3?' For organizations that have really large codebases, it is non-trivial to make the change. Even though this is not a rewrite to move from Python 2 to Python 3, it is significant enough that you need to comb through the codebase in a pretty painstaking way in order to make sure you have everything moved over. Then, by the same token, you have all these dependent packages that your codebase relies on, so you also have to take that into account. There can be some upgrade pains there as well, so organizations should be planning immediately to do this, recognizing that security vulnerabilities and bugs do crop up over time.

'For organizations that have really large codebases, it is non-trivial to make the change [from Python 2 to 3].' —Jeff Rouse, ActiveState

Since there are so many considerations in the migration process, how can organizations successfully make the move?
The first thing to do is to figure out what the risk profile is for your application and the utility. Then you know how much you want to invest and how important it is to do it sooner rather than later. The number one thing is to really start with excellent test coverage of your Python 2 app, because it is going to be vital as you move to Python 3 that your tests continue to show that the functionality hasn't been broken in any way. My own personal opinion is if you are not on the latest Python 2.7, maybe you are on 2.6 or an earlier version of 2.7, it probably makes the most sense to ensure that everything works well right up to the final version. Then you can actually run source code translators... which gives you the results of your Python 2 translated to Python 3. From there, it will point out anything you need to manually fix up. Along the way you are going to run into dependencies in your code with the various packages you are using. You may need to move to a different version of the same package.

What are the differences you are going to see between Python 2 and Python 3?
What they have done [with Python 3] is really tightened up the syntax. One of the core philosophies of Python is to have just one way to do something and to do it very well. One of the great advantages of Python is that it is a very readable language, and it is so easy to work with that the language designers really decided they wanted to continue to improve upon that. There is only one way to do iterators, for instance, or there are not multiple ways to do ranges. So there are a lot of syntactic things that I think developers will benefit from, and it keeps things a little simpler. The performance of Python 3 continues to improve. The standard library has tons of improvements in it. It handles asynchronous functions in a much stronger fashion, and overall, all the efforts to continue to advance the language, all of that effort, is going into Python 3, so that is really where you want to be.

How do you see Python continuing to be used in the future, and how is Python 3 going to play a role?
Python 3 is obviously the engine behind data science these days, and in a lot of ways data science has coalesced around Python. All the world-class data science is being done in Python. I don't see that changing any time soon. When we talk about Python 2, there was a fair amount of data science being done in Python 2 as well, but most of the major packages, for instance TensorFlow and others, have stopped supporting Python 2 at pretty early revisions. You can do a lot of data analysis in Python 2, but to really do hardcore data science, machine learning, you really want to be on Python 3. z
LISTEN TO THE FULL INTERVIEW ON SD TIMES’ WEEKLY PODCAST
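As a rough illustration of the kinds of changes Rouse describes, and of what a source code translator such as 2to3 flags, here are a few common Python 2 idioms next to their Python 3 equivalents. This sketch is ours, not ActiveState's; the Python 2 forms are shown in comments:

# Python 2 idioms and their Python 3 equivalents.

# Python 2: print "total:", total          (print was a statement)
# Python 3: print() is an ordinary function.
total = 7
print("total:", total)

# Python 2: 7 / 2 == 3                     (integer division truncated)
# Python 3: / always returns a float; use // for floor division.
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# Python 2: d.iteritems()                  (removed in Python 3)
# Python 3: items() returns a lazy view object.
d = {"a": 1, "b": 2}
for key, value in d.items():
    print(key, value)

# Python 2: except ValueError, err         (comma syntax removed)
# Python 3: the 'as' keyword is required.
try:
    int("not a number")
except ValueError as err:
    print("parse failed:", err)

A translator rewrites mechanical cases like these; the test suite Rouse insists on is what catches the behavioral ones, such as code that silently relied on truncating division.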
DEVOPS WATCH
CloudBees brings feature flagging to on-premises environments
BY CHRISTINA CARDOZA
Enterprise software delivery company CloudBees has updated its feature flagging capability with an on-premises feature manager. According to the company, this will allow developers to leverage feature flag technology in both on-premises and cloud environments. “Very soon, all features will be released behind a feature flag. It’s a natural evolution in continuous delivery,” said Sacha Labourey, CEO and cofounder of CloudBees. “With this release, we are providing the same functionality for on-premises environments that previously had only been available as a cloud-based service. We are committed to the ongoing integration, automation and governance of feature flags within the software delivery
lifecycle and giving users choice in selecting the best environment for their project — on-premises or cloud." CloudBees Feature Flags comes from the company's acquisition of the feature management company Rollout last year. Additionally, it integrates with CloudBees' CI/CD capabilities so organizations can use feature management capabilities across their entire software development life cycle. "We recognize that many companies are realizing the benefits of feature flags," said Moritz Plassnig, senior vice president and general manager at CloudBees. "By flagging features, they no longer have to sacrifice innovation to lower risk. We felt that it was critical to offer this technology to any company working in on-premises or hybrid environments." z
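The mechanics behind a feature flag are simple enough to sketch. The following is a generic illustration of a percentage-based rollout gate, not the CloudBees Feature Flags SDK; the flag store and its field names are invented for the example:

# A minimal feature-flag gate: a flag store decides at runtime which code
# path runs, so a feature can ship dark and be enabled gradually.
import hashlib

FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag, user_id):
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the user ID so each user gets a stable yes/no decision
    # that holds as the rollout percentage grows.
    digest = hashlib.sha256((flag + ":" + user_id).encode()).hexdigest()
    return int(digest, 16) % 100 < cfg["rollout_percent"]

def checkout(user_id):
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"    # the flagged feature
    return "legacy checkout flow"     # safe fallback

print(checkout("user-42"))

In a real deployment the flag configuration would live in a feature management service rather than in the code, so it can be changed, or rolled back, without a redeploy.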
ShuttleOps announces no-code CI/CD solution for app delivery
BY CHRISTINA CARDOZA
DevOps company ShuttleOps revealed a new no-code SaaS and CI/CD solution powered by Chef, Docker and HashiCorp. The new offering is designed to help developers modernize app delivery with prebuilt integrations to popular DevOps tools and a no-code pipeline editor that enables users to build, deploy and manage apps across cloud environments. "Application delivery is complex, and most companies don't have the time or skills to figure it out," said Damith Karunaratne, CEO, ShuttleOps. "Teams want a simple, fast and secure way of managing applications, without a massive investment in time, money and services. The no-code strategy we've taken with ShuttleOps alleviates skills requirements, so teams of any size can quickly achieve automated application delivery to the cloud."

Key features include:
• A unified SaaS platform to build, deploy and manage applications
• No-code CI/CD pipeline editor
• Prebuilt source control integration to GitLab, GitHub, Bitbucket
• Cloud infrastructure provisioning in AWS, GCP, Azure with clustered deployments
• Infrastructure-agnostic application orchestration with Chef Habitat
• Application support for Windows and Linux
• Built-in secrets management with HashiCorp Vault
• Gated approvals and notifications
• Reporting, analytics and customizable dashboards
• Docker container and Kubernetes support in Q4 2020 z

In other DevOps news:
• Chef Infra 16 was released with YAML support, unified mode, cookbook upgrade automation and expanded platform support. The solution is designed for DevOps and infrastructure and operations teams and features the ability to define infrastructure as code; ensure configuration policy is flexible, versionable, testable and human-readable; and enable a repeatable process to eliminate drift.
• LaunchNotes is now generally available to help development teams keep on top of product changes, communicate with users about changes coming, and provide updates to stakeholders. According to the company, Agile and DevOps practices are making it hard to communicate the right information to the right people at the right time. The solution features a public release stream, an internal change feed, and release articles. Additionally, development teams can provide status updates and measure impact about feature adoption and engagement within LaunchNotes.
• Parasoft 2020.1 was released with new features designed to strengthen DevOps team collaboration and help users manage virtual services. For remote teams, the release features intelligent virtual service creation, performance tracking characteristics, and the ability to associate work items with test cases in the systems of record. In addition, the company aims to assist Agile and DevOps teams with integrated test automation throughout CI/CD workflows and the ability for non-technical testers to participate. z
'Tear down those walls' BY DAVID RUBINSTEIN
In today's rapidly changing software industry, it feels like disruption is happening at a faster pace than at any time in its history. How quickly we've gone from SOA to APIs to microservices to containers to serverless! It's been breathless. This year's SD Times 100 includes many disruptors that came in like a wrecking ball, figuratively knocking down walls in how software is delivered, monitored and improved. Yet we also give a nod to those companies that have held their ground as leaders in their segments, adapting to new ideas and ways of
doing things while maintaining their positions as industry leaders. Once again, we’ve added categories and eliminated some, reflecting the changes in the industry itself. The editors of SD Times noted the rise in areas such as open source and value stream by adding them to the list, for example, as other bricks in the wall. So, while not exactly taking a wrecking ball to the SD Times 100, we want it to reflect the important changes we’re seeing in the industry. To paraphrase former U.S. President Ronald Reagan, we say to the industry: ‘Keep tearing down those walls!’ z
APIs and Integration: API Fortress, Boomi, CData, Jitterbit, Kong, MuleSoft, Postman, SmartBear, TIBCO Software
Analytics and DataOps: Datadog, Delphix, Elastic, Kinetica, Looker, Tableau Software
Database and Database Management: Cloudera, Cockroach, Confluent, DataStax, Datical (now Liquibase), Informatica, Melissa, MongoDB, Neo4j, Oracle, Redgate Software, Redis Labs, Talend
Cloud and Cloud Native: Amazon, DigitalOcean, Google, IBM/Red Hat, Microsoft, Rancher Labs, Stackery, VMware
DevOps: CircleCI, CloudBees, JFrog, Octopus Deploy, OpenMake Software, Puppet
Development Tools: ActiveState, Flexera, GitHub, JetBrains, LaunchDarkly, Optimizely, Perforce, Progress, Sparx Systems, Split Software
Performance Monitoring: AppDynamics, Catchpoint, Dynatrace, Instana, Lightstep, New Relic, Sentry, SolarWinds, Stackify
Productivity and Collaboration: Appian, Atlassian, CodeStream, Microsoft, Nintex, OutSystems, Quick Base, Slack
Security: Aqua Security, Bugcrowd, Contrast Security, Palo Alto Networks, Signal Sciences, Sonatype, Splunk, Synopsys, Veracode, WhiteHat, WhiteSource
Open Source Patrons: Red Hat, Netflix, Facebook, Google, CNCF
Testing: Applause, Applitools, Eggplant, Mobile Labs, Parasoft, Perfecto, Testim, Tricentis, Gremlin
Value Stream Management: Digital.ai, ConnectAll, GitLab, Micro Focus, Plutora, Tasktop, HCL, Broadcom
Observability: It’s all about the data
Monitoring: The Last of Three Parts
BY DAVID RUBINSTEIN
Observability is the latest evolution of application performance monitoring (APM), enabling organizations to get a view into CI/CD pipelines, microservices, Kubernetes, edge devices and cloud and network performance, among other systems. While being able to have this view is important, handling all the data these systems throw off can be a huge challenge for organizations. In terms of observability, the three pillars of performance data are logs (for recording events), metrics (what data you decide gives you the most important measures of performance) and traces (views into how software is performing). Those data sources are important, but if that is where you stop in terms of what you do with the data, your organization is being passive and not proactive. All you’ve done is collect data. According to Gartner research director Charley Rich, “We think the definition
of observability should be expanded in a couple of ways. Certainly, that’s the data you need — logs, metrics and traces. But all of this needs to be placed and correlated into a topology so that we see the relationships between everything, because that’s how you know if it can impact something else.” Bob Friday, who leads the AIOps working group at the Open Networking User Group (ONUG) and is CTO at wireless network provider Mist Systems, said from a network perspective, it’s important to start with the question, “Why is the user having a problem?” and work back from that. That, he said, all starts with the data. “I would say the fundamental change I’ve seen from 15 years ago, when we were in the game of helping enterprises deal with network stuff, is that this time around, the paradigm is we’re trying to manage end-to-end user experience. [Customers] really don’t care if it’s a Juniper box or a Cisco box.” Part of this need is driven by software development, which has taken services and distributed deployment environments to a whole other level by deploying more frequently and achieving higher engineering productivity. And, as things speed up, performance and availability management become more critical than ever. “Infrastructure and ops, these app support teams, have to understand that if more applications are coming out of the factory, we better move fast,” said Stephen Elliot, program vice president for I&O at analysis firm IDC. “The key thing is recognizing what type of analytics are the proper ones to the different data sets; what kinds of answers do they want to get out of these analytics.”
Elliot explained that enterprises today understand the value of monitoring. “Enterprises are beginning to recognize that with the vast amount of different types of data sources, you sort of have to have [monitoring],” he said. “You have more complexity in the system, in the environment, and what remains is the need for performance availability capabilities. In production, this has been a theme for 20 years. This is a need-to-have, not a nice-to-have.” Not only are there now different data sources, it’s the type of data being collected that has changed how organizations collect, analyze and act on data. “The big change that happened in data for me from 15 years ago, where we were collecting stats every minute or so, to now, we’re collecting synchronous data as well as asynchronous user state data,” Friday said. “Instead of collecting the status of the box, we’re collecting in-state user data. That’s the beginning of the thing.”
Analyzing that data
To make the data streaming into organizations actionable, graphical data virtualization and visualization are key, according to Joe Butson, co-founder of Big Deal Digital, a consulting firm. “Virtualization,” he said, “has done two things: It’s made it more accessible for those people who are not as well-versed in the information they’re looking at. So the virtualization, when it’s graphical, you can see when performance is going down and you have traffic that’s going up, because you can see it on the graph instead of cogitating through numbers. The visualization really aids understanding, leading to deeper knowledge and deeper insights, because in moving from a reactive culture in application monitoring or end-to-end life cycle monitoring, you’ll see patterns over time and you’ll be able to act proactively.
Three pillars of observability
[Diagram of the three pillars of observability: traces, metrics and logs]
Cindy Sridharan’s popular “Distributed Systems Observability” book published by O’Reilly claims that logs, metrics, and traces are the three pillars of observability. According to Sridharan, an event log is a record of events that contains both a timestamp and a payload of content. Event logs come in three forms:
• Plaintext: A log record stored in plaintext is the most commonly used type of log
• Structured: A log record that is typically stored in JSON
• Binary: Examples of binary event logs include Protobuf-formatted logs, MySQL binlogs, systemd journal logs, etc.
Logs can be useful in identifying unpredictable behavior in a system. Sridharan explained that distributed systems often experience failures not because of one specific event happening, but because of a series of possible triggers. In order to pin down the cause of an event, operations teams need to be able to start with a symptom pinpointed by a metric or log, infer the life cycle of a request across various system components, and iteratively ask questions about interactions between parts of that system. Logs are the base of the three pillars, and both metrics and traces are built on top of them, Sridharan wrote. Sridharan defines metrics as numeric representations of data measured across time intervals. They are useful in observability because they can be used by machine learning algorithms to gain insights on the behavior of a system over time. According to Sridharan, their numerical nature also allows for longer retention of data and easier querying, making them well suited for building dashboards that show historical trends. Traces are the final pillar of observability. According to Sridharan, a trace is “a representation of a series of causally related distributed events that encode the end-to-end request flow through a distributed system.” They can provide visibility into the path that a request took and the structure of that request. Traces can help uncover the unintentional effects of a request, making them particularly well-suited for complex environments, like microservices. z — Jenna Sargent
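To make the sidebar’s three pillars concrete, here is a minimal, vendor-neutral Python sketch of what emitting each kind of signal can look like. It uses only the standard library, and every field name (trace_id, span_id, duration_ms and so on) is illustrative rather than part of any standard or product:

import json
import logging
import random
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def emit_structured_log(event, **payload):
    # Pillar 1: a structured (JSON) event log with a timestamp and a payload.
    record = {"ts": time.time(), "event": event, **payload}
    log.info(json.dumps(record))

def sample_metric(name, value):
    # Pillar 2: a metric, a numeric value measured at a point in time.
    emit_structured_log("metric", name=name, value=value)

def traced_request():
    # Pillar 3: a trace, causally related spans sharing one trace ID.
    trace_id = uuid.uuid4().hex
    for operation in ("auth", "charge-card", "send-receipt"):
        span_id = uuid.uuid4().hex[:16]
        start = time.time()
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
        duration_ms = (time.time() - start) * 1000
        emit_structured_log("span", trace_id=trace_id, span_id=span_id,
                            op=operation, duration_ms=duration_ms)
        sample_metric(operation + ".latency_ms", duration_ms)

traced_request()

Because every record is JSON with a shared trace_id, the same output can be filtered as logs, aggregated as metrics, or stitched back together as a trace — which is the point of treating the three pillars as one correlated data set rather than three silos.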
“For instance,” he continued, “if you have a modern e-commerce site, when users are spiking at a certain period that you don’t expect, you’re outside of the holiday season, then you can look over, ‘Are we spinning up the resources we need to manage that spike?’ It’s easy when you can look at a visual tool and understand that, versus going to a command-line environment and querying what’s going on and pulling back information from a log.” Another benefit of data virtualization is the ability to view data from multiple sources in the virtualization layer, without having to move the data. This helps everyone who needs to view data stay in sync, as there’s but one version of truth. This also means organizations don’t have to move data into big data lakes. When it comes to data, Mist’s Friday said, “A lot of businesses are doing the same thing. They first of all go to Splunk, and they spend a year just trying to get the data into some bucket they can do something with. At ONUG
continued on page 27 >
Monitoring applications in modern software architectures BY DAVID RUBINSTEIN
In today’s modern software world, applications and infrastructure are melding together in different ways. Nowhere is that more apparent than with microservices, delivered in containers that also hold infrastructure configuration code. That, combined with more complex application architectures (APIs, multiple data sources, multicloud distributions and more), and the ephemeral nature of software as temporary and constantly changing, is also changing the world of monitoring and creating a need for observability solutions. First-generation application monitoring solutions struggle to provide the same level of visibility into today’s more virtual applications — i.e., containerized and/or orchestrated environments running Docker and Kubernetes. Massively distributed microservices-based applications create different visibility issues for legacy tools. Of course, application monitoring is still important, which has driven the need to add observability into the applications running in those environments. While legacy application monitoring tools have deep visibility into Java and .NET code, new tools are emerging that are focused on modern application and infrastructure stacks. According to Chris Farrell, technical director and APM strategist at monitoring solution provider Instana, one of the important things about a microservice monitoring tool is that it has to recognize and support all the different microservices. “I think of it like a giant T where the vertical bar represents visibility depth and the horizontal bar represents visibility breadth,” he explained. “Legacy APM tools do great on the vertical line with deep visibility for code they support; meanwhile, microservices tools do well on the horizontal line, supporting a broad range of different technologies. Here’s the thing — being good on one axis doesn’t necessarily translate to value along the other
because their data model is built a certain way. When I hear microservices APM, I think, ‘That’s what we do.’ [Instana has] both the depth of code-level visibility and the breadth of microservices support because that’s what we set out to do, solve the problem of ephemeral, dynamic, complex systems built around microservices.” When talking about observability and application monitoring, it’s important to think about the different kinds of IT operations individuals and teams
you have to deal with. According to Farrell, “whether you’re talking about SREs, DevOps engineers or traditional IT operators, each has their own specific goals and data needs. Ultimately, it’s why a monitoring solution has to be flexible in what data it gathers and how it presents that data.” Even though it’s important for modern monitoring solutions to recognize and understand complexity, it’s not enough. They must also do so programmatically, Farrell said, because today’s systems are simply too complex for a person to understand. “You add in the ephemeral or dynamic aspect, and by the time a person could actually create a map or understand how things are related, something will change, and your knowledge will be obsolete,” he said. Modern solutions also have to be able to spot problems and deliver data in context. Context is why it’s practically impossible for even a very good and knowledgeable operations team to
understand exactly everything that’s going on inside their application themselves. This is where solutions that support both proprietary automatic visibility and manually injected instrumentation can be valuable. Even if you have the ability to instrument an application with an automated solution, there still is room for an observability piece to add some context. “Maybe it’s a parameter that was passed in; maybe it’s something to do with the specific code that the developer needs to understand the performance of their particular piece of code,” Farrell said of the need for contextual understanding. “That’s why a good modern monitoring tool will have its own metrics and have the ability to bring in metrics from observability solutions like OpenTracing, for example,” Farrell added. “Tracing is where a lot of this nice context comes out. Like Instana, it’s important to have the ability to do both. That way, you provide the best of both worlds.” To make the ongoing decisions and take the right actions to deliver proper service performance, modern IT operations teams really require that deep context. It’s valuable for ongoing monitoring, deployment or rollback verification, troubleshooting and reporting. While observability on its own can provide information to an individual or a few individuals, it is the monitoring tool that provides understanding into how things work together; that can shift between a user-centric and an application-centric view; and that can give you a framework to move from monitoring to decision-making to troubleshooting and then, when necessary, into reporting or even log analysis. Farrell pointed out that “the APM piece is the part that ties it all together to provide that full contextual visibility that starts with individual component visibility and ultimately ties it all together for application-level performance and service-level performance.” z
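Farrell’s OpenTracing point can be sketched in a few lines. The fragment below uses the public API of the opentracing Python package; that choice, and the span and tag names, are our illustration, not code from Instana or this article. Without a vendor tracer registered, opentracing.global_tracer() returns a no-op tracer, so the sketch runs as-is; a real deployment would register a concrete tracer (Instana, Jaeger, etc.) at startup:

import opentracing

tracer = opentracing.global_tracer()  # no-op unless a vendor tracer is registered

def handle_order(order_id):
    # Each active span becomes one hop in the end-to-end request trace.
    with tracer.start_active_span("handle-order") as scope:
        # The tag is the kind of developer-supplied context Farrell describes.
        scope.span.set_tag("order.id", order_id)
        with tracer.start_active_span("charge-card") as child:
            child.span.log_kv({"event": "card_charged", "amount": 42.00})

handle_order("A-1001")

The manually injected tags and key-value logs supply exactly the context an automated monitoring tool cannot infer on its own, which is why combining the two approaches is valuable.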
< continued from page 23
we’re trying to reverse that. We say, ‘Start with the question,’ figure out what question you’re trying to answer, and then figure out what data you need to answer that question. So, don’t worry about bringing the data into a data lake. Leave the data where it’s at, we will put a virtualized layer across your vendors that have your data, and most of it is in the cloud. So, you virtualize the data and pull out what you need. Don’t waste your time collecting a bunch of data that isn’t going to do you any good.” Because data is coming from so many different sources and needs to be understood and acted on by many different roles inside a company, some of those organizations are building multiple monitoring teams, designed to take out just the data that’s relevant to their role and presented in a way they can understand. Friday said, “If you look at data scientists, they’re the guys who are trying to get the insights. If you have a data science guy trying to get the insight, you need to surround him with about four other support people. There needs to be a data engineering guy who’s going to build the real-time path. There has to be a team of guys to get the data from a sensor to the cloud. Once you have the data to the cloud, there needs to be a team of guys — this is like Spark, Flink, Storm — to set up real-time data pipelines, and that’s relatively new technology. How do we process data in real time once we get it to the cloud?”

APM vs. ASM
Traditional application performance management tools were built from the ground up for infrastructure operations and the emergent DevOps teams. They were not designed for product and engineering teams. But if you’re a developer, and you’re writing code to deliver to your customers in the form of an application or a service, you’d likely want to know after you deliver it that it’s working the way you intended. This engineering-centric view of performance management has taken on the name “application stability management.” James Smith, co-founder of ASM solution provider Bugsnag, said his company and another, Sentry, are the first two to raise the banner for application stability. So what’s the real difference between APM and ASM? Smith explained: “There’s this big gap in the APM space — figuring out when to promote builds from data to production, figuring out when to roll out an A/B test from 5% to 100%. You need to know when you’re making these rapid iterative changes, ‘Are the changes we’re delivering actually working?’ And this is just not something that the APM providers are thinking about. It’s an afterthought for them.” It’s this focus on the persona of product and engineering teams that is making a difference. Smith said that when used alongside a traditional APM solution, his company found that less than 5% of the engineers were logging into the APM, while 70% of the engineering team was logging into Bugsnag on a weekly basis. “That’s meant that we’ve built what essentially is a daily dashboard for the engineering and product teams,” Smith said, “instead of waiting for the monitoring team to tell the software engineer that he screwed up and needs to fix it. It’s a tool those people are using every day to hone their craft and get better at being a software engineer.” Large enterprises today are realizing that their brand impression comes more from the web and mobile experiences than it does from their stores or offices. So focusing on the customer experience first, Smith said client-side monitoring — JavaScript and mobile monitoring — is where “the rubber meets the road when it comes to customers touching your software.” z — David Rubinstein
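The application-stability idea in the sidebar above is easy to sketch. The fragment below uses the bugsnag Python package as one example of an error-reporting client; the API key, version number and discount logic are placeholders of ours, and the grouping of errors into per-release stability scores happens in the vendor’s service, not in this client code:

import bugsnag

bugsnag.configure(
    api_key="YOUR-PROJECT-API-KEY",  # hypothetical placeholder
    release_stage="production",
    app_version="1.4.2",             # ties every report to a specific release
)

def apply_discount(order_total, code):
    try:
        discount = {"SAVE10": 0.10}[code]  # raises KeyError on unknown codes
        return order_total * (1 - discount)
    except KeyError as exc:
        bugsnag.notify(exc)   # report the handled error instead of crashing
        return order_total    # degrade gracefully for the user

print(apply_discount(100.0, "SAVE20"))

Because every report carries the app_version, a team can compare error rates across builds and answer Smith’s question of whether the changes they are delivering are actually working.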
AI and ML for data science
The use of artificial intelligence and machine learning can help with things like anomaly detection, event correlation and remediation, and APM vendors are starting to build those features into their solutions. AI and ML are starting to provide more human-like insights into data, and deep learning networks are playing an important role in reducing false positives to a point where network engineers can use the data. But Gartner’s Rich pointed out that all of this activity has to be related to the digital impact on the business. Observing performance is one thing, but if something goes wrong, you need
to understand what it impacts, and Rich said you need to see the causal chain to understand the event. “Putting that together, I have a better understanding of observation. Adding in machine learning to that, I can then analyze, ‘will it impact,’ and now we’re in the future of digital business.” Beyond that, organizations want to be able to find out what the “unknown unknowns” are. Rich said a true observability solution would have all of those capabilities — AI, ML, digital business impact and querying the system for the unknown unknowns. “For the most part, most of the talk about it has been a marketing term used by younger vendors to differentiate themselves and say the older vendors don’t have this and you should buy us. But in truth, nobody fully delivers what I just described, so it’s much more aspirational in terms of reality. Certainly, a worthwhile thing, but all of the APM solutions are all messaging how they’re delivering this, whether they’re a startup from a year ago or one that’s been around for 10 years. They’re all making efforts to do that, to varying degrees.” —With Jenna Sargent
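To ground the anomaly-detection idea in something runnable, here is a toy rolling z-score detector in Python. It is our illustration of the general technique, not any vendor’s algorithm; the window size and threshold are arbitrary choices for the sketch:

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    # Flag a sample when it strays more than `threshold` standard
    # deviations from the rolling mean of the previous `window` samples.
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Simulated latency series with one obvious spike at index 30.
latencies = [50.0 + (i % 5) for i in range(60)]
latencies[30] = 500.0
print(list(detect_anomalies(latencies)))  # -> [(30, 500.0)]

Production AIOps tools layer far richer models, seasonality handling and event correlation on top of this basic idea, which is part of why Rich describes full observability as aspirational rather than delivered.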
Guest View BY RICHA ROY
It’s getting too technical
Richa Roy has worked in IT as an ABAP programmer, project manager and business analyst, and has managed technology delivery portfolios for various organizations.
Every business analyst has heard this at least once, if not often: “I don’t want to get too technical for this discussion.” But the fact of the matter is “it is getting technical.” Contrary to the popular belief that the role of ‘business analyst’ is fading, I believe it is becoming even more essential. It is a matter of survival of the fittest, where “fittest” means business analysts who are best suited for the new era in the software industry.
Noise reducer
On one hand, business analysts help stakeholders come to a decision, and on the other hand, they ground everyone else — architects, developers, data analysts, operations, infrastructure, security architects and other stakeholders in the organization — with a collective understanding of the business needs. In order to help stakeholders decide, a “fittest” business analyst not only strives to understand the business process to separate stakeholders’ needs from their desires, but also asks difficult questions like “Are you sure this is a must-have?” or “Could you prioritize the requirements so that not everything is a high priority?” In addition to that, business analysts also explore what options are available to fulfill the needs of stakeholders. Unlike developers, whose black and white world makes it acceptable to say that a requirement can or cannot be met, for a business analyst, it is of vital importance to know the reasoning behind not being able to meet the requirement. Was the requirement not clear, was it misunderstood, is it cost-prohibitive to meet the requirement in its current form? Is it the technology platform, or a lack of developer experience in a certain technology, that is preventing developers from meeting the requirement? Can part of the requirement be met, and if yes, what are options A, B and C with their pros and cons? If the requirement can be met, what would be the cost estimates or point estimates? In some organizations, business analysts are also UAT testers: they verify that what was asked is indeed what was developed and identify bugs even before stakeholders get to see the business demo.
Business analysts... will need to exhibit their technical know-how more than ever.
In a nutshell, business analysts remove a lot of noise on both sides, by clarifying requirements and verifying that developed features meet the needs of the stakeholders and the users.
Explainer-in-chief
With so many disruptions in the software industry in such a short period of time, business analysts have their work cut out for them. Their first and foremost responsibility is to educate business stakeholders on cloud terminology in the language they understand, without getting frustrated with the ever-changing software industry. They will have to think and talk the language of hybrid cloud, private cloud, on-premises, high availability, scalability, fault tolerance, automation and multifactor authentication. They will have to explain the rationale behind using a data warehouse instead of a database. Analysts will have to understand the difference between storage and compute, for example, and be able to explain that to the stakeholders. Business analysts will inevitably have to discuss and analyze options based on serverless or server-based computing, find out the cost benefit of each solution, and research the availability of an API and its throttling limits to identify gaps even before it can be considered a potential solution.
Up your technical game
In this new era, business analysts will not only continue to use their soft skills — communication, analytical thinking, problem-solving and building relationships — but they will also need to exhibit their technical know-how more than ever. To be prepared, business analysts should take an in-depth look at their arsenal of technical expertise, take a pause and think about what terms and services they will need to know in order to continue to help stakeholders in cloud-centric organizations, and participate in discussions with others in the organization. It would not be a bad idea for business analysts to get a couple of cloud certifications under their belts. The function of business analysis is not going anywhere because the need for the work business analysts do still exists and will continue to exist. The noise in business requirements will need to be filtered regardless of waterfall or agile, on-premises or cloud; however, it is more important than ever for business analysts to invest in their technical skills. z
Analyst View BY BILL HOLZ
8 traits of an Agile superhero
Agile IT practitioners must exhibit a range of digital business skills that go beyond the ability to code, such as courage, communication and leadership. Application technical professionals who exhibit this diverse and important skill set are what we like to call “Agile superheroes.” Some superheroes, like Superman, Wonder Woman, Spider-Man and the Hulk, have superhuman powers — X-ray vision, power of flight, enhanced reflexes, super strength. Others, like Batman and Iron Man, rely purely on enhancing human abilities. Highly successful Agile practitioners model their growth after the second set of superheroes. They develop skills, live Agile values, and strive to continuously improve their proficiency and breadth of knowledge — becoming true “Agile superheroes.” Here are eight key traits that will help application technical professionals responsible for product development transform into Agile superheroes.
Be an agent of change. Lean and Agile methods require a major departure from traditional development, IT and business approaches. The Agile superhero assumes the role of change agent, responsible for evangelizing, advocating for and creating the changes needed for Agile. Influence others through example. Proactively share new ideas, even if those ideas aren’t all winners — it will encourage people to engage in strengthening them.
Be courageous. In the 2019 Gartner Agile in the Enterprise survey, respondents cited shifting from a culture of control to one based on trust as the top inhibitor to Agile success. Agile superheroes must have the courage to fight against siloed and bureaucratic behaviors and complacent thinking. Demonstrate courage by communicating honestly and exercising humility. This will help team members trust that they can be honest about their own problems or shortcomings. Encourage teammates to take risks, knowing that failures are okay.
Model Agile values. In Agile methodologies, teams are collaborative, self-sufficient and accountable. However, autonomous teams only succeed at collaborating to build solutions when all members of the team commit to a set of shared values. These values should include:
• Focus: Everyone focuses on the work of the
sprint and the goals of the team.
• Courage: Team members have the courage to do the right thing and to work on tough problems.
• Openness: The team and its stakeholders are open about their work and any challenges with performing it. Team members are willing to share knowledge, learn from others and express opinions without fear of being judged or punished.
• Commitment: Each team member personally commits to achieving the goals of the team.
• Respect: Team members respect each other to be capable, independent people.
Take these values to heart and regularly assess whether the team upholds them.
Become a Lean/Agile expert. Agile success requires a deep understanding of Agile and Lean thinking. It also requires experience with proven frameworks and techniques, such as Scrum, Kanban, Extreme Programming (XP), Lean, DevOps and continuous delivery practices.
Be a good communicator. Agile promotes the inclusion of representatives from all aspects of a project, including developers, testers, architects, business stakeholders and customers. Strong communication and interpersonal skills are necessary for this collaboration to be effective.
Be a strong Agile leader. Having the right team culture and leadership style is essential to scaling Agile and DevOps initiatives. For the Agile team to be accountable and self-sufficient, all members of the team must provide leadership, not just managers. Get to know your peers as both people and professionals, gaining an understanding of differing skills, communication styles and personal experiences.
Be a multiskilled and lifelong learner. The more multiskilled an Agile team is, the better it can quickly solve problems. Waiting for the “expert” to perform a critical project step impedes agility.
Be a problem solver. When a problem arises that impedes progress, Agile superheroes take the responsibility to resolve it. Apply methods like the Theory of Constraints or continuous improvement to understand problems, identify constraints and initiate change. z
Bill Holz is a Research VP on the Application Platform Strategies team in Gartner for Technical Professionals.
Highly successful Agile practitioners... develop skills, live Agile values, and strive to continuously improve.
Industry Watch BY DAVID RUBINSTEIN
What does ‘value’ mean to developers?
David Rubinstein is editor-in-chief of SD Times.
It was around late 2017 when we first reported on ‘value stream management’ as it pertained to software development. By now, everyone knows that value stream concepts were first introduced on manufacturing floors, and were designed to eliminate waste (of effort and materials) and wait times (dependence on others to complete tasks before you could complete yours). But as software development organizations started using Agile for faster production and DevOps to accommodate that speed through the use of CI/CD pipelines, microservices and containers, APIs and cloud infrastructure, these organizations found that things weren’t running as smoothly as they should. Gaps appeared with software testing and security. Development teams were stalling as engineers were pulled off one project and onto another, and as priorities for the effort changed. Studies have shown they face challenges with poorly integrated toolchains, misalignment of work between teams and isolated areas of automation. There was a need to take DevOps to the next level, and in the January 2019 cover story in SD Times, our editors declared it would be ‘The Year of the Value Stream.’ Since then, we’ve seen more solution providers plant a stake in the ground of that industry segment, offering solutions that help development shops find the waste in their processes and gain better efficiency while maintaining the high quality of the software delivered. The way software development has evolved, with autonomous teams owning their projects, and managers only seeing what their teams are doing, has made it nearly impossible for enterprises employing numerous developers working on pieces of projects to see the overall effects of what’s working and what isn’t. They’re told of the problems, but can’t assess how that is impacting business goals because they don’t have the big-picture view. “In the beginning of DevOps, a lot of it was purely looked at as, ‘How fast can I push things out?’ That’s still important, the idea that I can move things, but can I move things that make a difference?” Thomas Murphy, Gartner research director, told me in a recent conversation. “As a developer, I
Last month, D2Emerge — publishers of SD Times and ITOps Times — announced {Virtual} VSM DevCon.
want to build things that matter … to the customer, whether internal or external. If it’s internal, it could be operational efficiency. If it’s external, it could be customer satisfaction. I’m building features that people love, and drive them to consume the things we build.” Value streams, he added, create a better conversation between engineers and the business. Last month, D2Emerge — publishers of SD Times and ITOps Times — announced {Virtual} VSM DevCon, a one-day training event on value stream creation, management and optimization specifically targeting development teams. Based on some Lean manufacturing principles, value stream management for software teams presents some different challenges. For one, if I’m manufacturing, say, a car, specs for doing so are slow to change. Even as the big auto companies introduce new models, much of the construction doesn’t change, or only requires slight modifications. Software production, though, changes often — and rapidly, making it even more important to continuously assess production processes to make sure the efficiencies gained by implementing a value stream for development are maintained from project to project — er, from product to product. The development team will continue to own their practices, what’s working and what’s new, and what they should be adding, Murphy said. This is why there’s the continued importance of communities of practice, like DevOps dojos. Organizations need these practitioners to share new ideas so they can continue to evolve. That includes value stream practitioners. {Virtual} VSM DevCon will look at such topics as:
• How do you define value in software development?
• Software testing in the value stream
• Architecting for the value stream
• Creating a value stream culture with new and changed roles
• What does DevOps have to do with value stream?
• And more
You can learn more about {Virtual} VSM DevCon and register to attend on the event website, https://events.sdtimes.com/valuestreamdevcon. Did I mention it is free to attend? Hope to see you there! z