NOVEMBER 2019 • VOL. 2, ISSUE 029 • $9.95 • www.sdtimes.com
Contents
VOLUME 2, ISSUE 29 • NOVEMBER 2019
FEATURES
Is the party over for Hadoop? (page 24)
APMs are more important than ever for microservice-based architectures (page 32)
Agile Costing and Capitalization — How to Work with Finance to Scale Agile (page 37)

NEWS
News Watch (page 6)
Saving Flash from extinction (page 8)
Kotlin's emergence: Common coding mistakes to watch for (page 10)
Grace Hopper Celebration is more than just a tech conference (page 12)
.NET 5 merges Core and .NET Framework into one solution (page 16)
What developers need to know about 5G applications (page 18)
5 Things product managers should know about QA (page 20)
Report: Shifting left does not solve security (page 22)
Don't do Agile, be Agile (page 31)

COLUMNS
GUEST VIEW by Tobi Knaup: Why Day 2 is critical in DevOps (page 44)
ANALYST VIEW by Michael Azoff: Cloud native means Kubernetes (page 45)
INDUSTRY WATCH by David Rubinstein: Identifying, and winning, with unicorns (page 46)

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com Jakub Lewkowicz jlewkowicz@d2emerge.com ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com SALES MANAGER Jon Sawyer jsawyer@d2emerge.com
CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com LIST SERVICES Jourdan Pedone jpedone@d2emerge.com
REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH

Databricks gives Delta Lake project to the Linux Foundation
Databricks has announced it is donating its open-source data lakes project to the Linux Foundation. Delta Lake is designed to improve the reliability, quality and performance of data lakes. As a Linux Foundation project, Delta Lake will be used as an open standard for data lakes. It will adopt an open governance model and provide a framework for long-term stewardship.
"Bringing Delta Lake under the neutral home of the Linux Foundation will help the open source community dependent on the project develop the technology addressing how big data is stored and processed, both on-prem and in the cloud," said Michael Dolan, VP of strategic programs at the Linux Foundation. "The Linux Foundation helps open source communities leverage an open governance model to enable broad industry contribution and consensus building, which will improve the state of the art for data storage and reliability."
Microsoft reveals foldable Surface devices and new OS
Microsoft announced its 2020 holiday season will include two dual-screen Surface devices, the Surface Neo and Surface Duo, as well as a new operating system to support foldable screens. "With Surface Neo and Surface Duo we are introducing a new category of dual-screen devices designed to help people get more done on smaller, more mobile form factors. Today, people carry PCs, tablets and phones because each device performs a specific task well. Each screen does something we need when we need it. But these devices are limited with what they can achieve when you must switch between apps on a single screen or switch between screens altogether. When that switching happens, we break our focus. We break our flow," Microsoft wrote in a post.

AWS to sponsor the Rust programming language
AWS has decided to sponsor the Rust programming language now that many of its services run on the Mozilla-sponsored language, including Lambda, EC2 and S3. It also announced that it will be offering AWS promotional credits to open-source projects to "free up resources for open source projects to further expand and innovate in their communities." AWS explained it uses the Rust language to store release artifacts such as compilers, libraries, tools and source code on S3; run ecosystem-wide regression tests with crater on EC2; and to operate docs.rs, a website that hosts documentation for all packages published to the central crates.io registry. Amazon has not yet revealed the exact details of the sponsorship.

People on the move
• Technology industry veteran Aled Miles has been appointed the CEO of Sauce Labs. Miles has more than 30 years of experience. He previously was CEO of TeleSign, senior vice president on the executive committee for Symantec Corp, and currently sits on the board of directors of the cybersecurity company Sapien.
• Art Landro is joining the MontaVista executive team as a strategic advisor. He will focus on helping the company support key vertical automotive applications such as in-vehicle infotainment, connected car and advanced driver assistance systems. Landro previously served as COO at Cognomotive, and was CEO at both Sencha and Cordys.
• The OpenJS Foundation has announced Robin Ginn as its new executive director. Before joining OpenJS, Ginn was a Microsoft executive where she led a number of key open-source initiatives, co-founded @OpenAtMicrosoft, and helped with Microsoft's contribution and involvement in the Node.js project.
• BMC announced Ayman Sayed as president and CEO. Sayed will replace Bob Beauchamp, who served as interim CEO. Beauchamp will continue as chairman of the board. Before joining BMC, Sayed was president and chief product officer at CA Technologies, where he was responsible for the company's vision, strategy and development.
Python 3.8 is now available
The latest version of Python is now available. Python 3.8 introduces a number of new features, including assignment expressions, positional-only parameters, and a new parallel filesystem cache. Key features of the release include:
• Assignment expressions
• Positional-only parameters
• Parallel filesystem cache for compiled bytecode files
• Debug build uses the same ABI as release build
• Python Runtime Audit Hooks
• Python Initialization Configuration
• A fast calling protocol for CPython
• Pickle protocol 5 with out-of-band data buffers
MuleSoft releases Anypoint Service Mesh
The MuleSoft Anypoint Service Mesh is meant to address the growing need to manage complex environments, dependencies, and loosely coupled services; and provide security and reliability to microservices-based apps. "Organizations can only realize the benefits of a microservices architecture by eliminating the custom code and complexity that comes from managing all of these disparate services. Developers also need to be able to find and reuse microservices created by other teams — breaking down these silos unlocks the true promise of speed, agility and flexibility from microservices," the company wrote in its announcement. Features include the ability to visualize microservice dependencies; ensure resiliency across services; measure and optimize performance; implement mutual TLS for all traffic; and automatically enforce access controls.

Girls Who Code receives IBM Open Source Community Grant
IBM recently announced a community grant that would be awarded quarterly to nonprofits that are "dedicated to education, inclusiveness, and skill-building for women, underrepresented minorities, and underserved communities." Girls Who Code will be the first recipient of the $50,000 community grant. Half of the grant will be awarded in cash and the other half in cloud credits. Girls Who Code is a group that provides after-school classes and summer courses in computer science to teenage girls. In addition to gaining technical skills, the program also empowers girls.
AMP moves to the OpenJS Foundation
The open-source web component framework AMP now has a new home. At its AMP Contributor Summit 2019, it announced that it would be joining the OpenJS Foundation's incubator. AMP is a 4-year-old project that was initially backed by Google, and will continue to be supported by Google, a platinum member of the OpenJS Foundation. It is used by many organizations to speed up page load time on mobile devices. According to the foundation, AMP is currently used in over 30 million domains.
Low-code Node-RED 1.0 now available
The open-source low-code programming tool Node-RED reached a milestone this week as it announced its 1.0 release. Node-RED was originally developed by IBM and open sourced six years ago. It is designed for event-driven applications and features flow-based programming to visualize how messages flow throughout an application. The 1.0 release features:
• Asynchronous by default, which allows for fairer handling of messages across multiple flows.
• Overhauled CSS: Designed for consistency and ease of use.
• Docker image: The previous base images that Node-RED was using were no longer maintained, so 1.0 features completely redesigned Docker images with proper multi-architecture support.
• Updates to the Node-RED Flow Library: The flow library now enables users to create and share collections of things. It also features a new design.
Eclipse Foundation forms working group for cloud-native
The Eclipse Foundation is trying to make it easier for developers to build cloud-native applications. It has formed the Eclipse Cloud Development Tools Working Group to achieve this. The group is a vendor-neutral open source group "that will focus on development tools for and in the cloud." Its founding members include Broadcom, IBM, Red Hat, and SAP, among others. The group will work on driving the evolution and adoption of standards of cloud developer tools, including language support, extensions, marketplace, and developer workspace definition.

Nubix.io launches developer edition of tiny edge containers
IoT edge-native container and analytics provider Nubix.io wants to help the edge meet its full potential with the announcement of its new developer solution. The developer edition of its edge-native tiny containers provides access to a library of sensors, analytics and tiny services for creating IoT applications. "For the edge to have any chance to reach its potential, it has to get much easier for all developers to build and deploy applications to edge devices. To engage the level and quantity of talent required to build applications and analyze the deluge of data that's going to be generated by the 50B connected devices deployed by next year, there has to be a platform that allows developers to build applications in the cloud that can be deployed to physical hardware and managed remotely as required," Rachel Taylor, CEO of Nubix.io, wrote in a post. The new developer edition targets enterprise developers who aren't familiar with embedded toolkits or used to writing apps for embedded devices.

Cloudera launches enterprise data cloud
Cloudera announced the first instantiation of its enterprise data cloud, the Cloudera Data Platform, a native cloud service to manage data and workloads on any cloud. Many enterprises are creating multi-cloud strategies but face increased complexities due to having some workloads in Microsoft Azure, for instance, while others live in AWS or Google Cloud, and still others remain on-premises. Further, IT teams are concerned with regulatory compliance, while the business side wants to move fast, and reconciling these also can be difficult. "This leads to an inherent tension, where both sides are right — business wants to move fast while IT has to make sure that even when they move fast, they don't break too many things, especially when it comes to regulations," Arun Murthy, chief product officer at Cloudera, told SD Times. z
Saving Flash from extinction
BY CHRISTINA CARDOZA
Flash is quickly approaching the end of its life. Adobe plans to halt updates and distribution by the end of 2020, and encourages any content creators to migrate their existing Flash content to new open formats like HTML5, WebGL, and WebAssembly. "Several industries and businesses have been built around Flash technology — including gaming, education and video — and we remain committed to supporting Flash through 2020, as customers and partners put their migration plans into place," Adobe wrote in a blog post when it announced its plans in 2017.
The problem, however, is not everyone is ready to say goodbye to Flash. "Even though the openness and accessibility of web standards is the best way to go, the ease of use and the joy of creativeness with Flash is something I still miss now and then," Juha Lindstedt, web architect who created a petition to open-source Flash, told SD Times. "There was a movement around cool Flash sites, developers almost competing who would make the coolest art piece or website."
Lindstedt does acknowledge that the Internet was very different back then. He explained when Flash was at its most popular, the Internet was still new and developers were using Flash to create almost pieces of artwork to showcase their talent. Today's Internet is about being more social and connected. "I would compare Flash to music videos," he said. "Back in the music videos' golden era, they used to be art pieces, sometimes even separate short films. MTV was showcasing the best music videos and FWA [Favorite Website Awards] was similar for Flash projects," he said.
Open web technologies are becoming the default choice when creating web experiences because they are fast and more power-efficient. "They're also more secure, so you can be safer while shopping, banking, or reading sensitive documents. They also work on both mobile and desktop, so you can visit your favorite site anywhere," Anthony Laforge, product manager of Google Chrome, wrote in a blog post when Google announced its own plans to remove Flash from Chrome. As of July 2019, Flash has been disabled in Chrome by default.
Despite the general agreement that Flash can't keep up with today's user demands and experiences, most would agree that throughout its life Flash has been the foundation for many web skills. "For 20 years, Flash has helped shape the way that you play games, watch videos and run applications on the web," wrote Laforge. Because of this, there are many efforts looking to keep Flash
around in some capacity. "Flash is an important piece of the Internet's history. We need to somehow preserve those interactive art pieces," said Lindstedt.
Preserving Flash content
One effort to preserve, or reimplement, Flash was the Gnash project, an open-source Flash alternative that lived under the Free Software Foundation banner. "Unfortunately, Gnash soon fell behind Adobe's player in terms of features," Alessandro Pignotti, founder and CTO of Leaning Technologies, a compile-to-JavaScript and compile-to-WebAssembly tools provider, and author of the open-source Flash player Lightspark, wrote in a post.
Other efforts, in addition to Lindstedt's petition to open-source Flash, include Shumway, an HTML5 technology experiment; Ruffle, a Flash Player emulator written in Rust; and Lightspark, a C++ implementation of the Flash player. "If there is one lesson that I learned from working on Lightspark, it is that reimplementing Flash is a very, very, very hard and time-consuming task," wrote Pignotti. "That's why I am certain that the only practical, robust way to accurately preserve Flash content in the medium and long term is not through a reimplementation."
Along with his team at Leaning Technologies, Pignotti recently announced CheerpX, a new technology designed to run unmodified x86 binaries in a browser using WebAssembly. "There is so much of Flash content [out there]. We are talking thousands, tens of thousands of content and that is if you are only looking at video games. But that is not the only type of content. Over the years there has been a lot of enterprise content, dashboards, logistics, traditional databases and front-end solutions written in Flash or in various frameworks built upon Flash like Flex," Stefano De Rossi, founder and CEO of Leaning Technologies, told SD Times.
With no intervention, CheerpX allows users to enjoy Flash applications, play Flash video games in a browser, and run legacy, enterprise-grade Flash content that doesn't have the means or resources to be rewritten from scratch in a modern technology. "It is about preservation of all the content, and also about migration and modernization of existing legacy apps," said De Rossi.
CheerpX is still in a prototype phase, but is currently capable of fully executing unmodified apps, the team explained. Going forward, the plan is to directly download the Flash binary over HTTP, write a native x86 host application, and provide access to browser resources. "As you have probably guessed by this point, our solution to preserve Flash in the long term is to run the full, unmodified, Flash plug-in from Adobe in WebAssembly," wrote Pignotti.
"Long live Flash!" Lindstedt added. z
The demise of Flash
There once was a time where Adobe Flash was the obvious choice for building rich web applications, but as the Internet began to grow and technology started to advance, Flash slowly started to die out. Over the last couple of years more and more businesses have announced that they will no longer support Flash. There was even an Occupy Flash movement to "rid the world of the Flash Player plugin."
"Flash Player is dead. Its time has passed. It's buggy. It crashes a lot. It requires constant security updates. It doesn't work on most mobile devices. It's a fossil, left over from the era of closed standards and unilateral corporate control of web technology. Websites that rely on Flash present a completely inconsistent (and often unusable) experience for a fast-growing percentage of the users who don't use a desktop browser. It introduces some scary security and privacy issues by way of Flash cookies," the Occupy Flash website stated. "Flash makes the web less accessible. At this point, it's holding back the web."
In addition, Flash has become expensive to maintain and support due to crashes and bugs, and its ability to provide rich content has been diminishing over the years as more modern approaches have come to the surface.
The beginning of the end of Flash can be traced back to 2007 with the release of the iPhone, which came equipped with a mobile Internet browser — but did not support Adobe Flash. Fast forward to 2010, when Steve Jobs released an open letter on his thoughts about Flash. "I wanted to jot down some of our thoughts on Adobe's Flash products so that customers and critics may better understand why we do not allow Flash on iPhones, iPods and iPads. Adobe has characterized our decision as being primarily business driven – they say we want to protect our App Store — but in reality it is based on technology issues. Adobe claims that we are a closed system, and that Flash is open, but in fact the opposite is true," Jobs wrote.
Apple began working on its own open standards, such as the open-source project WebKit, which provided an HTML5 rendering engine. "New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind," Jobs wrote.
Over the years, HTML5 has matured and become more advanced, and other open standards such as WebGL and WebAssembly have emerged to enable companies to bypass the Flash plugin and use modern technology to add rich features to their web solutions. These standards are more favorable than Flash because they are available natively inside a browser, instead of being a browser hack.
Adobe itself has moved on from the Flash brand. It announced the release of Adobe Animate CC in 2015, which supports HTML5, WebGL and Scalable Vector Graphics. "Over time, we've seen helper apps evolve to become plugins, and more recently, have seen many of these plugin capabilities get incorporated into open web standards. Today, most browser vendors are integrating capabilities once provided by plugins directly into browsers and deprecating plugins," the company wrote in a post. "Given this progress, and in collaboration with several of our technology partners — including Apple, Facebook, Google, Microsoft and Mozilla — Adobe is planning to end-of-life Flash."
Adobe is expected to stop distribution of the Flash Player by the end of 2020. z —Christina Cardoza
Kotlin's emergence: Common coding mistakes to watch for
BY STEPHEN GATES
In May 2019, Kotlin, a programming language for modern multi-platform applications, became Google's preferred language for Android app development. As a result, many developers have shifted from using Java, the original language for building Android apps, to embracing Kotlin. According to a recent survey, 62% of developers are now using Kotlin to build mobile apps, with an additional 41% using Kotlin to build web-backend projects, meaning the language is here to stay.
In tandem with Kotlin's emergence, we're also seeing a greater emphasis placed on mobile application security from prominent organizations, including the U.S. government. Its recent Study on Mobile Device Security, commissioned through the Department of Homeland Security (DHS) in consultation with the National Institute of Standards and Technology (NIST), found that vulnerabilities in applications are usually the result of failure to follow secure coding practices, and these vulnerabilities typically result in some sort of compromise to a user's data.
Now, more than ever before, and in light of National Cybersecurity Awareness Month that took place throughout October, it's important for developers to familiarize themselves with Kotlin and understand secure coding best practices for mobile apps when it comes to using this language. To do this, let's look at some of the common pitfalls when using Kotlin:

Stephen Gates is cybersecurity evangelist at Checkmarx.

[Chart: Which types of apps do you develop in Kotlin? Mobile: 62%; Web back-end: 41%; Library or framework: 29%; Tooling: 22%; Desktop: 9%; IoT: 7%; Game development: 5%; Web front-end: 4%; Data analysis / BI: 2%; Embedded: 2%; Machine learning: 0%; Other: 2%. Source: JetBrains]

Insecure data storage
The Android ecosystem provides several ways to store data for an app. The kind of storage used by developers depends on the type of data stored, the usage of the data, and whether the data should be kept private or shared with other apps. Unfortunately, a very common coding error revolves around storing sensitive information in clear text. For instance, it is frequent to find API keys, passwords, and Personally Identifiable Information (PII) stored in the 'Shared Preferences' or databases used by the app. We're seeing this oversight increasingly lead to loss of confidential data, since an attacker able to access the database of the app (rooting the device, backup of the app, etc.) can retrieve the credentials of the other users using the app.
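One common remedy on Android is to keep secrets out of plain-text SharedPreferences altogether. The Kotlin sketch below is an illustration only: it assumes the AndroidX Security library (androidx.security:security-crypto) has been added as a dependency, and the preference file and key names are invented for the example.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKeys

// Sketch: store an API token encrypted at rest instead of in plain-text
// SharedPreferences. Assumes androidx.security:security-crypto is available.
fun saveApiToken(context: Context, token: String) {
    // The master key lives in the Android Keystore, not in the app's data directory.
    val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

    val securePrefs = EncryptedSharedPreferences.create(
        "secure_prefs",            // illustrative file name
        masterKeyAlias,
        context,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

    // Both keys and values are encrypted before they reach disk.
    securePrefs.edit().putString("api_token", token).apply()
}
```

With this approach, a device backup or a rooted device yields ciphertext rather than reusable credentials, though key management and the wider threat model still need thought.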
Insecure communication
Currently, most mobile applications exchange data in a client-server fashion at some point. When these communications happen, data traverses either the mobile carrier's network or a Wi-Fi network on its way to the Internet. Although exploiting the mobile carrier's network is not an impossible task, abusing a Wi-Fi network is usually much easier. If communications lack SSL/TLS, then an adversary will not only be able to monitor traffic transmitted in clear text, they are also able to steal the exchanged data and can execute Man-in-the-Middle (MitM) attacks. In order to prevent insecure communication, it's important to always assume that the network layer is not secure and continuously ensure that all communications are encrypted between mobile apps and back-end servers.
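To make "assume the network layer is not secure" concrete, the sketch below forces HTTPS and pins the server certificate using OkHttp, a widely used HTTP client that is assumed here as a dependency; the host name and pin value are placeholders, and certificate pinning requires an operational plan for rotating pins.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Placeholder host and pin; a real pin is the base64 SHA-256 digest of the
// server certificate's public key and must be kept up to date.
private const val API_HOST = "api.example.com"
private const val API_PIN = "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="

val pinnedClient: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add(API_HOST, API_PIN)
            .build()
    )
    .build()

// Requests go over TLS to the pinned host; a man-in-the-middle presenting a
// different certificate causes the call to fail instead of silently leaking data.
fun fetchProfile(): String? =
    pinnedClient.newCall(
        Request.Builder().url("https://$API_HOST/v1/profile").build()
    ).execute().use { response -> response.body?.string() }
```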
Insecure authentication
Weak or insecure authentication for mobile applications is fairly prevalent due to mobile devices' input factor: 4-digit PINs are a great example of this. Either a weak password policy due to usability requirements, or authentication based on features like TouchID, can make your application vulnerable. Unless there's a functional requirement, mobile applications do not require a back-end server to which they should be authenticated in real-time. Even when such back-end servers exist, users are typically not required to be online at all times. This poses a great challenge for mobile applications' authentication. Whenever authentication has to happen locally, it can be bypassed on jailbroken devices through runtime manipulation or modification of the binary. Insecure authentication is not just about guessable passwords, default user accounts, or data breaches. Sometimes, the authentication mechanism can also be bypassed and the system will fail to identify the user and log its activity.
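A small sketch (all type and function names here are invented for illustration) shows the difference between a purely local "logged in" flag, which a rooted or jailbroken device can simply flip, and the safer habit of sending a short-lived token that the back end validates on every sensitive call.

```kotlin
// Hypothetical back-end client; the interface and data class are illustrative,
// not a real SDK.
data class AccountData(val displayName: String)

interface AccountApi {
    fun getAccount(bearerToken: String): AccountData
}

// Anti-pattern: a local flag proves nothing to the server and is trivial to
// patch at runtime on a compromised device.
var isAuthenticated = false

// Safer pattern: the client never decides on its own that the user is
// authenticated. Each sensitive request carries a short-lived access token,
// and the server checks its signature, expiry and revocation before answering.
fun fetchAccountData(api: AccountApi, accessToken: String): AccountData =
    api.getAccount("Bearer $accessToken")
```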
[Chart: Threats via Mobile Apps. Source: Department of Homeland Security (DHS) Study on Mobile Device Security]
Code tampering
Once a mobile application is downloaded and installed on a device, both the code and data will be available there. Since most mobile apps are part of the public domain, this gives adversaries the chance to directly modify the code, manipulate memory content, change or replace the system APIs, or simply modify an application's data and resources. This is known as code tampering. Today, rogue mobile apps often play an important role in fraud-based attacks, becoming even more prevalent than malware. Typically, attackers exploit code modification via malicious types of apps, tricking users to install the malicious app via phishing attacks. To prevent code tampering, it's important that the mobile app can detect at runtime that code has been added or changed. From there, development teams should be able to react accordingly by reporting the code integrity violation to the server or shutting down the execution entirely. Exploitation techniques are always evolving; new vulnerabilities might be found in the future based on dependencies that may reveal new application tampering points.
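One simple runtime check along these lines, shown below as a hedged sketch, compares the APK's signing-certificate digest against a value known at build time. The expected hash is supplied by the caller, the pre-API-28 GET_SIGNATURES flag is used for brevity, and an on-device check like this should be reinforced server-side, since it can itself be patched out.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.util.Base64
import java.security.MessageDigest

// Returns true if the running app is signed with the expected certificate.
// expectedSha256 is the base64-encoded SHA-256 of the release signing
// certificate, determined at build time (illustrative value supplied by caller).
@Suppress("DEPRECATION") // GET_SIGNATURES/signatures are deprecated from API 28
fun signatureMatches(context: Context, expectedSha256: String): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNATURES
    )
    val digest = MessageDigest.getInstance("SHA-256")
    return info.signatures?.any { sig ->
        val hash = Base64.encodeToString(digest.digest(sig.toByteArray()), Base64.NO_WRAP)
        hash == expectedSha256
    } ?: false
}
```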
By watching for these coding errors, developers can build more secure Android apps and avoid pitfalls that can lead to these avoidable scenarios. Additionally, developers can stay up-to-date by referring to the OWASP Mobile Top 10 security weaknesses list and reviewing Google Codelabs' recent training modules that include Android Kotlin Fundamentals, Kotlin Bootcamp for Programmers, as well as Refactoring from Java to Kotlin. z
Grace Hopper Celebration is more than just a tech conference
BY JENNA SARGENT
Two years ago, Lin Classon was attending a tech conference when she discovered the one perk of gender inequality in the tech industry: empty bathrooms. She took to Twitter, saying "Something serious for 1sec. Every ladies room I've been in was near empty. Although I'm grateful for the gender inequality that affords me quiet, empty bathrooms at these conferences, I'd take a long line at ladies room instead for a change… I've taken so many pictures of empty ladies rooms and selfies at these conferences. It was funny, sardonic for a while. It's getting more and more like tired news though. We could do better."
That tweet inspired hybrid IT solutions provider Ensono, where Classon is director of public cloud product, to conduct a report about women's experiences at tech conferences. The results are dismal. One in four women experience sexual harassment at tech conferences. Only 25 percent of keynote speeches in the last three years have been made by a woman. Seventy percent of women who have been on panels at a conference were the only women on the panel.
While some tech conferences are trying to create more inclusive spaces for all technologists, there's still a long way to go. In the meantime, at least there's Grace Hopper Celebration (GHC), a yearly gathering of women (including transgender and non-binary) technologists put on by AnitaB.org. The event is named for Grace Hopper, a rear admiral in the U.S. Navy who invented the first compiler in 1952. This led to the creation of the COBOL programming language, which was one of the first high-level programming languages. Without Grace Hopper's contributions, programming as we know it today wouldn't exist.
"Admiral Hopper broke down walls in part by simply acting as if those walls didn't exist," said Dr. Rebecca Parsons, CTO of ThoughtWorks and member of AnitaB.org's board of trustees. "She was her own person and refused to be defined by others and their expectations of her."
Kesha Williams, AWS ML Hero, Alexa champion, and technical instructor at A Cloud Guru, believes that GHC can be defined in one word: "empowering."
"Where else can you find so many women software engineers, technical
leads, and senior executives coming together to talk about what we do with technology and sharing our lessons learned? Nowhere! GHC is structured to enable women to gain career guidance, technical knowledge and application of that knowledge in the real world, and to build and foster professional networks," said Williams.
Grace Hopper Celebration, which was held in early October, was a four-day event this year, making it longer than it was in previous years. First-day events included the First Timer's Orientation, Career Fair Hall Crawl, a keynote session and other sessions, Williams explained.
The event also featured mentoring circles. According to Williams, who has a mentor this year, these allow attendees to get "one-on-one advice with an industry expert in small groups of 10." There were 66 mentors this year, each corresponding to a particular topic, she explained. "Attendees choose a topic, sit at the corresponding table and chat with a mentor for 20 minutes. Every 20 minutes, attendees switch tables and chat with a different mentor on a different topic. It's sort of like 'speed dating' but for technology concepts!"
Williams said that at this year's conference, thousands of women attended who had the shared experiences of feeling "alone, isolated, or like you are the minority in your classes, office meetings, or software teams."
Going forward, AnitaB.org's Parsons said, "Women/non-binary technologists should attend GHC because it is the largest gathering of its kind in the world and there is no other experience quite like it. Spending four days soaking in educational sessions on both career issues and technology, and mingling with other talented technologists will leave you inspired and empowered. And you'll discover new connections that
may last a lifetime.” Hear from these women in technology about what their experiences at the event have been like and what it meant:
Nancy Wang, co-founder and CEO at AWIP and head of product management at Amazon Web Services (AWS) “Grace Hopper has become a mecca for technologies, diversity advocates, and corporations who are looking to draw from the well of collective talent. This year, I was honored to join leaders from GitHub, Heroku, and Facebook as speakers on a panel that explored ‘Building Products for Developers.’ As one of the more underrepresented fields among tech (which itself is underrepresented), the infrastructure and dev tools space is one that is ripe for more inclusion. As a woman of color myself, I want to see the next generation of technical product managers at Amazon Web Services comprising of more women. Recently, I have been asked by Amazon Web Services leadership to foster more D&I initiatives, and this will be a great forum for me to gain best practices from other corporate leaders.”
Imris Curry, computational mathematics major at RIT and a leader in RIT’s Women in Computing organization “Last year was my first time at Grace
Hopper, and I just remember being in awe of the spectacle. Thousands of women gathered to celebrate computing, with companies pulling out all the stops to build giant booths at the career fair. I was certainly overwhelmed, but I was also inspired by all the positive, uplifting messages around the conference. I had the opportunity to interview for and eventually get job opportunities with companies I likely wouldn’t have interacted with otherwise...I have attended other conferences, and Grace Hopper stands out above the rest in its positive messaging. The conference is not just about tech, but also about uplifting its attendees. It builds community in a way I don’t usually see other conferences do.”
Gail Frederick, VP of mobile and developer ecosystem, GM of eBay Portland “For me, Grace Hopper is one of the most exciting times of the year. We are celebrating women in technology, and we are inviting a growing number of young women professionals into our ranks. Recruiting talent at the event is invigorating and rewarding for me as a tech leader. When I started in tech, there wasn’t a conference like this that I could attend that would encourage me, mentor me or connect me with larger companies – I had to figure it out
on my own. Grace Hopper is different than other conferences, it has a strong focus on inclusion through mentoring, workshops and even senior management panels to help educate and grow our future technologists. It’s a privilege to be invited to this conference year after year, having eBay’s men and women continuing to drive diversity initiatives for the company and the industry.”
Ya Xu, head of data science at LinkedIn "GHC is a celebration of all women technologists, whether they're still in school or are seasoned veterans. There's always something new to learn for each and every attendee—not every tech conference is so relevant to such a wide range of experience levels. One of my favorite GHC memories is helping a college student who stopped by our LinkedIn booth one year. She was interested in pursuing a career in data science, but was majoring in a different discipline. Together, we worked to chart out the courses and experience she would need to make the transition. I really enjoy the environment where I can have conversations like that, and then have my very next discussion be about mid-career management challenges with a senior engineer. It's a great opportunity for us to all learn from people with different experience levels and interests."
Kesha Williams, AWS ML Hero, Alexa champion, and technical instructor at A Cloud Guru "In an industry where women tend to be the minority, imagine attending a tech event where you are the majority? An event that at its core celebrates a woman pioneer in the computer programming industry, Grace Hopper. An event where you will see women technologists innovating, creating, and driving change and growth in their organizations in a major way. I believe the true impact of GHC is that it has the ability to change how you see yourself, your career, and your overall existence as a woman in tech. You will no longer see yourself as a minority! You will see yourself as a CEO, CTO, VP, or a startup founder, and you will know that you are not alone. I find it very empowering to see thousands of women in tech; the experience can cause you to walk away with an increased sense of belonging and a renewed excitement for computer science and the future of tech."
Neeha Bollepali, technical lead at PROS “I attended Grace Hopper Conference (GHC) for the first time last year and was blown away. The number one
thing I noticed which made GHC stand apart from other tech conferences that I have attended in the past (O'Reilly Fluent Conference, QCon, NFJS) were the number of women and the diversity among them. I could see myself in the attendees, presenters, organizers and at once I felt understood and that I belonged. The presenters ranged from college students, career beginners, seasoned professionals, tech leaders to mom entrepreneurs. I could relate to them in so many ways, being a tech lead at my own workplace and a mom of 2 young children. I noticed these women came from very similar background as I did and were breaking barriers and leading the world in technology. They inspired me to do more and speak louder. I resolved to try harder at work and made a goal to become a GHC presenter in 2019."
Frances Jurek, software engineer at PROS "The Grace Hopper Celebration (GHC) is extraordinary. The first day begins with a stadium full of people in technology from all over the globe – and nearly all of them are women. To be literally surrounded by over 20,000 inspiring and inquisitive women is energizing. The three days are packed with fascinating presentations ranging from emerging tech to navigating your career path, yet the atmosphere is completely different than other tech conferences. Besides being far more diverse in more ways than simply gender, the vibe is more friendly, less competitive, and more about raising each other up to even higher levels. It's a remarkable experience that's difficult to put into words. At both work and my daily tech life, I often think back to something I have learned at GHC. Whether it's sharing what I learned about accessibility on the web, or how Google Safe Browsing is helping protect us, or even how to transform self-criticism into self-encouragement, the Grace Hopper Celebration has greatly impacted both my career and my life."
Dolly Singh, head of talent innovation at ServiceTitan “This year, at GHC19, I’ve been given the distinct honor of speaking for a select group of roughly 400 executive female leaders. These women are coming from all four corners of the globe to celebrate themselves, their peers and the progress of women in the world as a whole. I can’t help but feel like these interactions and opportunities are even more critical in the current political climate, when parts of the world appear to be moving backwards. It’s even more critical that we channel our voices and resist any attempts to defile our progress. My hope will be to bring energy to the women to whom I’m lucky enough to present to and take energy from all the amazing women I will hear from and meet. The mathematical equation for Power is Energy divided by Time (P = E/T); GHC creates a space for us to bring together the energy (E) from thousands of remarkable women, and packing it into a dense time (T) and place, giving us collectively more power (P) than we could ever have alone. That is the true legacy of Grace Hopper, beyond her brilliance and her grit. She is not just an icon, but a reminder of our potential and our power as women; a power that is exponentially more powerful when we pool our networks and resources, and when we are committed allies and advocates for one another.” z
.NET 5 merges Core and .NET Framework into one solution BY JENNA SARGENT
Microsoft has been making big changes to .NET this year. In May, the company announced that it would be merging all of its .NET products, like .NET Core and .NET Framework, into a single .NET. The .NET Framework is a development platform for building web, Windows, Windows Phone, Windows Server, and Microsoft Azure apps. .NET Core is a development platform for building cross-platform device, cloud, and IoT applications. In September, the company released .NET Core 3.0, the last release under which the .NET products would be separated. Starting with .NET 5, there will be a single .NET installation. Microsoft plans on releasing .NET 5 in November 2020, with a preview coming out in the first half of 2020. “.NET Core was built from the learnings of building .NET Framework and has taken the best features forward, at first focusing on hyper-scale cloud scenarios,” wrote Scott Hanselman, partner program manager at Microsoft, in an email to SD Times. “With .NET Core 3.0 we have added Windows desktop support with Windows Forms and WPF which were previously only available on .NET Framework. As we move forward toward .NET 5 we will bring in the best of Mono and its support for native execution and small footprint for the smallest of devices. Having one .NET instead of three simplifies the choices and the platform for new and existing developers.”
According to Microsoft, all of the things that developers love about .NET Core will remain, such as its open-source community, cross-platform implementation, support for leveraging platform-specific capabilities, high performance, side-by-side installation, small project files, a capable command-line interface, and integration with Visual Studio, Visual Studio for Mac, and Visual Studio Code. But in addition, .NET 5 will also provide more choices on runtime experiences, Java interoperability on all platforms, Objective-C and Swift interoperability on multiple operating systems, and CoreFX will be extended to support static compilation of .NET, among other changes.
.NET Core 3.0
.NET Core 3.0 was released in late September. According to Microsoft, one of the biggest improvements in .NET Core 3.0 is support for Windows
desktop applications. According to Hanselman, .NET Core 3.0 is smaller and faster, which makes it ideal for cloud installations where density and speed are important. “Folks want their containers and microservices to start up fast and stay fast. .NET Core 3.0 is ideal for microservices environments like Kubernetes and AKS,” the team said. Making the framework smaller and faster was one of the team’s main goals in this release. Several of the new features in this release helped achieve this, such as fast JSON reading and writing and support for HTTP/2. In addition to improving performance, the team was also able to drastically reduce the memory usage of the garbage collector. “You’ll see big improvements on big machines, especially ones with >64 cores. It’s also been hardened for Docker to make it efficient and tight in containers. The garbage collector and thread pool work better when a container is configured for limited memory or CPU,” Hanselman wrote. He also pointed out that .NET is everywhere now — allowing developers to make apps that run on IoT devices, in small containers, on Windows, or on an iPhone or Android phone, as well as create games, websites and APIs, and scale to the cloud. “From a tiny device to a massive cloud, that language you just learned or that platform you’ve been working with for years JUST GOT BETTER. You can take those skills and translate them directly to these other platforms,” he wrote.
To celebrate this fact, Microsoft has released over 80 free beginner-level courses that can be used for bootcamps, college students, tech enthusiasts, or anyone looking to brush up on their understanding of .NET. Hanselman explained that while .NET on Windows will be supported forever, .NET Core is what developers should be using to get ongoing innovation. Going forward, Microsoft will continue working on unifying its platforms across Windows, Linux, macOS, iOS, Android, tvOS, watchOS, and WebAssembly into a single .NET solution to reduce confusion. In addition, .NET Core is receiving long-term support (LTS) releases in odd-numbered years. Developers looking for a stable platform to rely on should be targeting those LTS releases.
.NET Core 3.1 Preview 1
In addition, in October, the company released the first preview of .NET Core 3.1, which features key improvements in Blazor and Windows Desktop. .NET Core 3.1 will be a long-term support release and it is expected to ship in December 2019. In addition, Microsoft revealed plans for improvements in future releases. They will be available in future previews. It will make it so that the .NET Core Desktop Runtime Installer installs the .NET Core Runtime, which it currently does not do. It will also introduce a developer experience in Visual Studio and SDK for C++/CLI in .NET Core 3.1 as well as Visual Studio 16.4. In addition, a new requirement of macOS 10.15 Catalina is that applications have to be notarized, and .NET Core SDK, .NET Core 3.1, and all other supported .NET Core releases will satisfy that requirement.
.NET Framework October 2019
The latest release of .NET Framework is a preview version of the October 2019 release. The .NET Framework October 2019 includes quality and reliability updates for the framework. It fixes issues in ASP.NET, CLR, Windows Forms, and WPF. z
What developers need to know about 5G applications BY JENNA SARGENT
A Gartner survey released at the end of last year revealed that two-thirds of organizations planned on deploying 5G by 2020. 5G will enable a number of use cases, such as IoT communications, but in order to reap the benefits, developers will need to learn to develop for 5G. According to Michael Rasalan, director of research at Evans Data, there isn’t much of a difference between mobile and IoT development for 5G and other types of development. “A developer uses the same basic programming skillset for 5G development,” he said. The main challenge will be in learning new APIs and learning how to take advantage of the new functionality that 5G brings, Rasalan explained. In terms of hardware, developers will need to understand new APIs that allow them to take advantage of 5G connectivity. One specific technology that developers need to learn is 3GPP interfaces, which allow “developers to properly use 5G network functionality exposed to them by service providers and mobile and IoT platforms,” Rasalan explained. In terms of applications, developers may have to consider the low latency and high speeds that are associated with 5G. They will need to design apps to be even more responsive than usual. 5G also offers the opportunity to design applications that more efficiently use virtualized and remote resources. “The network is no longer going to be as much of a bottleneck for application performance,” said Rasalan. “5G might allow developers to architect their apps to use edge networks to handle processing without affecting the speed of their apps.
Ultimately, if anything will be different it’s how developers will use the network as a resource.” Developers should expect to begin learning about 5G applications within the next two years. According to Evans Data’s Global Development Survey, the majority of developers surveyed predict that 5G will become more widespread within that time period. “We think that now is as good a time as any for developers to begin learning to think about 5G enabled applications,” said Rasalan. He added that at the bare minimum, design approaches should begin to consider how network approaches impact what can be done with an application. According to Rasalan, this will mean learning to develop for edge networks, learning to more beneficially employ distributed computing, and learning to optimize applications to make them more responsive. Rasalan believes that 5G will enable developers to employ more edge networking and virtualized resources in their applications. And because of lower network latencies, 5G will be able to drive distributed computing. In addition to the effects on developers, 5G will greatly impact the end user as well. Rasalan is already seeing that, with developers using more video, integrating complex user interactions, and implementing features that are driven by real-time events. There will also be downstream effects in areas like AR/VR and AI, which require hefty data analytics. “Because massive amounts of data can be handled and transmitted in shorter amounts of time, data intensive applications will just thrive,” said Rasalan. z
5 Things product managers should know about QA
BY CATHERINE HUANG
I remember the cold sweat forming behind my ears the first time a stakeholder asked me, "What's the Test Plan?" I had no idea. As a new product manager, I had put most of my energy into defining requirements. Without a Quality Assurance team in place, testing became an afterthought — one I didn't really know how to handle and which frankly gave me a lot of anxiety.
It turns out my experience was not entirely unique. QA is consistently ignored as part of the development process. According to a 2018 State of Testing survey by Practitest, QA testers reported "communicating the value of testing to the organization" as one of the challenges they face most frequently on the job. If QA experts are constantly explaining the importance of their job to their teammates and employers, that's a red flag.
As the product manager leading the product's vision and execution, gaining a better understanding of how to collaborate with the QA team has become essential to developing and launching products. The QA team is an integral part of the development process. They can catch problems before they happen, raise issues no one else has considered and help the product manager understand the risks of the project. But first, you — the product manager — must be proactive about reaching out to your QA compatriots and making them partners in the development process.
Here are five tips I wish I'd known earlier and that I consistently continue to work on to ensure a strong partnership with QA. Even if there isn't a strong QA presence in your company, I hope these tips help build a framework for items you should think about during product release cycles.
Catherine Huang is product director at crowdsourcing testing company Applause.

#1 Clear and accurate acceptance criteria is important
We've all been there — a rushed user story amounts to poor requirements and anemic acceptance criteria. This is a reminder that the stronger our acceptance criteria, the more clarity developers and QA team members have to do their best work. Clarity is key when it comes to writing acceptance criteria. The QA team is the first step between you and real end users; if they are confused about something in your user stories, your customers will be too. At the epic level, you'll want your criteria to outline a concrete expected end to the user journey. At the story level, think about writing your acceptance criteria in BDD (Behavior-Driven Development) format, as in the example below. This type of format challenges you to break down your features into testable chunks. In some organizations, acceptance criteria also becomes the basis for your release notes, so it is important to be clear about what your story covers and what it does not.
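To make that format concrete, here is one hypothetical story-level criterion written in the Given/When/Then style (the feature and wording are invented for illustration): Given a registered user on the login screen, When she requests a password reset using her registered email address, Then a single-use reset link is emailed and a confirmation message is shown. Each scenario written this way maps directly to something the QA team can verify, which is what makes the format good at breaking a feature into testable chunks.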
#2 Include QA at the beginning of the process
I'm not sure if anyone is still touting QA as something that happens at the end of the development process…and thank goodness for that. In some organizations, we actually see that paradigm flipping on its head; representatives from product, design, development and QA create the test plan together at the beginning. Working in unison helps to find the critical paths and workflows that will require the most thought and attention.
Having the QA team's input early in development is helpful in another way. It is this team's job to see things others don't and to ask the difficult questions – especially around edge cases. QA can also tell team members what they'll be looking for during testing. This helps identify dependencies that aren't visible to the rest of the team, which is invaluable when it comes to refining requirements.

#3 Let QA accentuate the negative
My product management personality tends toward the optimistic. I spend so much of my time extolling the virtues of the product that I often fail to see where things can go wrong. Working with the QA team gives me a healthy reality check. Because their job is to find problems, QA teams have a pretty good idea about where the bumps in the road are likely to lie. Leaning on their expertise, we work together to define edge cases that are not always considered in my initial requirements gathering.
I once worked on an enhancement to a feature on an existing product. I was really excited about what this enhancement could bring for usability but I neglected to understand exactly how deep the tendrils of the existing feature reached into the product. The QA team's help in thinking through the edge cases saved the day. They were able to map out the different possibilities early on so we could make appropriate design and product decisions as a result.
#4 Understand the QA mindset
Don't treat the QA team as a gate holding back your release! Involve QA at the beginning of the project and work together to define:
● What success looks like for this release: What does the team want to achieve? What do both groups consider a win?
● The risks of the release: What is acceptable to your team? What's a hard pass?
● Your key metrics: What KPIs are you tracking? Is it test coverage, number of bugs or something else entirely?
#5 Help the QA and development team prioritize what gets automated
It's tempting to want to automate all QA, or to let the QA team decide which tests are automated, but you should have a say in the priorities. As the voice of the customer, use your knowledge of what is important to users and what is not. It is your job to convey this information to your QA team. The priorities will help define the boundaries of testing and let QA members know where to focus their attention. Automate the paths that are most used and least likely to change over time. As the product grows, continue to assess the risk of the releases — what components of the product are most likely to break and what can you put in place to prevent that from happening?
Plan testing early!
The best teams build testing into their development process. If you think proactively about QA and work closely with your QA team, they're likely to spot problems before they happen. If you don't, QA will be catching problems at the end of development, when they're harder (and more expensive) to fix. This approach will leave you spending many more moments with cold sweat behind your ears trying to figure out what to do with your test plan. By developing relationships with QA engineers and managers early on, and incorporating QA into product requirements discussions, you're setting yourself — and your product — up for success. z
DEVOPS WATCH
Report: Shifting left does not solve security
BY CHRISTINA CARDOZA
The rise of DevSecOps has stressed the importance of shifting security left in order to provide better protection. A recently released report, though, found shifting left isn't enough. In order for security to be viewed as more than just an extra step, it needs to be built into the entire life cycle.
Puppet, CircleCI and Splunk announced the release of the 2019 State of DevOps Report, which surveyed nearly 3,000 technical professionals. "It's true that everyone should care about the security of the application or service they're building, but people will continue prioritizing the work that's right in front of them unless they are incentivized to do things differently. That's why security needs to be prioritized from the top of the organization. That's also why it needs to be built into the entire software delivery life cycle," the report stated.
According to the report, the most advanced DevOps cultures are ones where security teams are involved throughout technology design and development, and incident responses are automated. Additionally, the more security is integrated throughout the entire life cycle, the better delivery teams are able to respond to problems.
"Firms at the highest level of security integration are able to deploy to production on demand at a significantly higher rate than firms at all other levels of integration — 61 percent are able to do so. Compare this with organizations that have not integrated security at all: Fewer than half (49 percent) can deploy on demand," the report stated.
[Chart: Security integration and time to remediate critical security vulnerabilities, broken out by integration level (Level 1 through Level 5) across remediation times from less than 1 hour to 6 months or more. Source: 2019 State of DevOps Report, presented by Puppet, CircleCI & Splunk]
Where the problem lies is "in the middle." Respondents revealed they experience the most friction and frustration during the middle stages of development because this is where things start to become more complex. The report explained this friction slows down delivery and increases audit issues.
However, the report did find that teams that continue to share and collaborate during this stage will see faster results and be able to refine their processes. "To progress out of the middle stages, you should focus on measuring both business outcomes and metrics that show how day-to-day toil is being reduced and alleviated (planned vs. unplanned work, deployment pain, Severity 1 incidents, etc.). Being able to visualize your progress when things still seem hard can be a powerful motivator, and just as important, can make it much easier to see what should come next, thus leading you forward," the report stated.
The report also found five best practices for improving security:
1. Having security and development teams collaborate on threat models
2. Integrating security tools throughout the development integration pipeline so engineers can be confident security problems aren't being introduced into the codebase (a minimal sketch of this kind of pipeline gate follows this article)
3. Prioritizing security requirements, both functional and non-functional, as part of the project backlog
4. Evaluating automated tests, and reviewing changes in high-risk areas of the code
5. Reviewing infrastructure-related security policies before deployment
"The DevOps principles that drive positive outcomes for software development — culture, automation, measurement and sharing — are the same principles that drive positive security outcomes," said Alanna Brown, senior director of community and developer relations at Puppet and author of the State of DevOps report. "Organizations that are serious about improving their security practices and posture should start by adopting DevOps practices."
Other findings included that security doesn't have to take a back seat to feature delivery; time to remediate vulnerabilities doesn't dramatically decrease at higher levels of security integration; and the more security is integrated, the more teams feel a shared responsibility.
"It was interesting to discover it doesn't matter how your teams are structured, so long as you have someone focused on security collaborating closely with development, test, and operations teams throughout the software delivery life cycle. You don't need to have purely autonomous project teams that report to the same person, or that are even in the same department. What matters most is everyone working together towards the common goal of making the software more secure," said Nigel Kersten, field CTO at Puppet. z
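The second practice above, wiring security tooling into the delivery pipeline, lends itself to a concrete illustration. The Python sketch below is not drawn from the report; it simply shows one way a pipeline step can turn scanner output into a hard gate. The scan-deps command and the JSON shape it emits are hypothetical stand-ins for whatever scanner a team actually runs.

```python
#!/usr/bin/env python3
"""CI gate: fail the build when a security scanner reports blocking findings.

Assumes a scanner (placeholder command `scan-deps --json`) that prints JSON
like: {"findings": [{"id": "...", "severity": "critical", ...}, ...]}.
"""
import json
import subprocess
import sys

BLOCKING_SEVERITIES = {"critical", "high"}  # tune to the team's risk tolerance


def main() -> int:
    # Run the (hypothetical) scanner; swap in whatever tool the pipeline uses.
    result = subprocess.run(
        ["scan-deps", "--json"], capture_output=True, text=True, check=False
    )
    if result.returncode not in (0, 1):  # assume 0 = clean, 1 = findings present
        print(result.stderr, file=sys.stderr)
        return result.returncode

    report = json.loads(result.stdout or '{"findings": []}')
    blocking = [
        f for f in report.get("findings", [])
        if f.get("severity", "").lower() in BLOCKING_SEVERITIES
    ]

    for finding in blocking:
        print(f"BLOCKING: {finding.get('id')} ({finding.get('severity')})")

    # A non-zero exit code fails the pipeline stage, so the problem is caught
    # before the change merges rather than after it ships.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(main())
```

Run as a stage in the pipeline, a non-zero exit fails the build, which is what lets engineers trust that known-critical problems are not sliding into the codebase.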
Don't do Agile, be Agile
BY CHRISTINA CARDOZA
Despite what you may have heard, Agile is not dead. A couple of years ago, Dave Thomas, one of the creators of the Agile Manifesto, declared that Agile was dead, but it wasn't the idea of Agile he was talking about. It was the word Agile itself. "The word 'agile' has been subverted to the point where it is effectively meaningless, and what passes for an agile community seems to be largely an arena for consultants and vendors to hawk services and products," he wrote in a post.
The core principles of Agile are still important for helping organizations deliver value more efficiently and effectively, but somewhere along the way the word Agile became so corrupted and devalued that it caused confusion across the industry, he explained. For instance, there are no Agile programmers or Agile teams; there are programmers and development teams who practice agility, and tools that help them increase that agility.
The industry is constantly looking for ways to build on its Agile successes and scale them throughout the rest of the organization, but if organizations don't understand the core principles of Agile in the first place, that can be nearly impossible to do.
"Agility requires more than just having a few roles, artifacts and events. It involves actually using each of those things for a specific advantage. Most organizations seem to be going through the motions rather than understanding what truly drives effective, sustainable agility," said Bob Hartman, founder of Agile for All, an Agile and Scrum consulting firm. "Agile is hard to do well because it requires changing the way we think and the way we do things. We do all that for a reason: to build better products or get better results. If we don't understand how Agile principles relate to the final results then we will be stuck doing the practices and basically having the old way of work done in short increments."
The problem is that Agile isn't something you do, it is something you are. It's a mindset, according to Steve Denning, an Agile thought leader and author of the book The Age of Agile. "It is a shift in mindset from a top-down bureaucratic hierarchical approach to a very different way of thinking about and acting in organizations," he said. "If you don't have the Agile mindset you are going to get it wrong."
Denning explained that there are three components of an Agile mindset:
1. An obsession with delivering value to customers
2. Descaling work into small items that can be handled by self-organizing teams
3. Joining those teams together in a network that is interactive and can flow information horizontally as easily as up and down
Most organizations that say they are Agile usually only possess the second characteristic through Scrum, Kanban, or DevOps, but
they miss the obsession with adding value and the network component. Without all three components, it is very likely that the organization will revert back to bureaucratic hierarchical top down practices, according to Denning. “You can’t scale mindsets. You either have them or you don’t, but obviously everyone in the organization has to have the same mindset. If you don’t
have people in the organization with that mindset you are going to run into massive conflicts," said Denning.

Scaling Agile to the enterprise
Most organizations are still in the early stages of scaling Agile. "A decade ago many enterprises thought Agile was only for small teams doing small projects. Now it is being recognized that larger projects can be done using Agile and the results will be far superior to what was done in the past," said Agile for All's Hartman.
Steve Elliot, head of Jira Align at Atlassian, used the analogy of crawling, sitting, walking, running and flying when talking about the evolution of Agile within a large organization. "When you really get to where you are running, flying and innovating more like a startup does is where you are really driving heavy outcomes and you are not letting the bureaucracy and red tape that comes with a large organization get in your way as much as it typically does," Elliot said. "The reason this is still such a hot topic is we haven't completely solved it yet, especially at the enterprise level."
He explained scaling Agile really means getting to a place where functions are repeatable, predictable and measurable at the team level and then
bringing it into the rest of the business. "The market and technology moves too fast to work in the old way and so [the business] knows they need to learn a new way of working. I think they are all kind of watching other industry leaders and figuring out how fast they need to get there," said Elliot.
One of the biggest barriers keeping organizations from running or flying is just being able to align the entire organization. "This concept of taking Agile principles and applying them across thousands or even hundreds of thousands of people is just not an easy thing to do quickly. It really is just a transformational challenge to get that many people on the same page," said Elliot.
According to Elliot, the ones who are the best at being Agile are the ones who have executive alignment. The executives across different business units, portfolios and teams are all aligned and trying to solve one problem. They are willing to stick their necks out and work on the problem directly and be owners of the transformation, he explained.
But getting everyone in the organization on the same page and the same way of looking at things can take up to five to 10 years in medium-to-large organizations, according to Denning, which can deter Agile initiatives. "Things can start very small, but they can have fantastic success. It just takes time. It is not going to happen overnight, but it does need courage and a deep understanding of the change involved," he said.
Both Denning and Elliot agree that executive buy-in is one of the biggest challenges when it comes to a new way of working, but as there are more Agile successes in the industry and executives are pressured to compete in the market, that challenge should go away with time, they said.
In addition to alignment, Flint Brenton, CEO of the enterprise Agile solution provider CollabNet VersionOne, explained communication and collaboration are key, and can help with the overall alignment. Once the overall leadership alignment is in place, Brenton explained a number of things have to happen: You have to commit as a company that you are going to change the way you build software and understand it takes time; you have to change the culture and empower teams; and you have to select the platform to support Agile.
"Large enterprises will take some time to scale horizontally across the organization to get teams straightened out and understand how to get predictable in a more Agile fashion as opposed to the traditional planning cycle," said Elliot. "If we are trying to measure results more ruthlessly and frequently, it ends up being an organization change, culture change and how you think about business in general, even all the way to funding. It just takes time."

Don't let scaled Agile mislead you
Despite all the conversations to scale Agile, Denning explained the word scaling puts businesses on the wrong track. Agile is about descaling things, and finding ways to simplify things into the smallest possible component, then connecting them together in a network, he said.
According to Hartman, instead of thinking about scaling, organizations should think about breaking projects down into smaller teams, understanding what is truly valuable and what can wait, and limiting the amount of work in progress. "One project at a time is optimal for a lot of reasons. This is a huge learning curve for most organizations. They want a lot of things in process, which slows it all down," he explained. "Organizations need to first recognize that scaling is not a requirement for success. Second they need to recognize that however they start with scaling, it needs to be inspected and adapted as they learn," added Agile for All's Hartman. "When they have projects that truly need scaling, the next step is understanding the need for small empowered teams that work together toward a common goal. Scaling agility requires agility — at all levels of the organization. It can't be something where a bunch of teams work together and nothing else changes. Additionally, organizations need to recognize and understand how scheduling, planning and leading all change.
"Tradeoffs need to be made and they need to be made with agility in mind. When we focus on being more Agile, we tend to get results. It feels more risky, but in the end it is actually less risky than all the assumptions we put in place that make us feel better about most projects," said Hartman. "There have been many studies done on Agile and the results achieved. When small, empowered teams work together toward a common goal, the results are amazing. When small teams are simply told what to do and there is no flexibility in what is created then the results don't tend to be much better than prior to the agile implementation. In fact, things can sometimes get worse as people simply think they are in more meetings that serve no purpose."

Separating fake Agile from real Agile
Somewhere along the Agile transformation companies hit roadblocks, start cutting corners, and they start just going through the motions. What they end up with is a fake Agile approach. They say they are going Agile, but they are not. "They are sort of like flamenco dancers who wear flamenco costumes and talk about flamenco but don't know how to actually dance flamenco. They don't have the spirit or sense of what it is to be a flamenco dancer," said Steve Denning, an Agile thought leader and author of the book The Age of Agile.
The Department of Defense (DoD) is uncovering the fakes with the release of its "Detecting Agile BS" guide. "Agile is a buzzword of software development, and so all DoD software development projects are, almost by default, now declared to be 'agile.' The purpose of this document is to provide guidance to DoD program executives and acquisition professionals on how to detect software projects that are really using agile development versus those that are simply waterfall or spiral development in agile clothing ('agile-scrum-fall')," the DoD wrote in the guide.
According to the DoD, a fake Agile project is one where:
● No one is talking with or observing the software users
● There is no continuous feedback loop between the users and the teams
● There is a strong emphasis on requirements
● Stakeholders are autonomous
● End users are not present in any of the development
● And manual processes are tolerated
If you are in fact practicing fake Agile, Denning said the first step is to take a deep breath and start to look for some genuine Agile within the organization. "You will find people who really understand what is going on and actually operating in this fashion, so find those teams that are genuinely on the right track. Find out what is getting in their way, what impediments they have, start removing those impediments, give encouragement to those teams, celebrate their successes, and encourage other parts of the organization to start embracing it," he explained.
If you are lower down in the company and at the team level, and don't really have the power to do any of that, Denning also suggested persuading management of the benefits by getting a small team or group of teams that are doing it right to achieve some success.
Other questions to ask your teams, according to the DoD, are:
● Are they working to deliver working software to real users every iteration and gathering feedback?
● Do all team members understand how they contribute to the overall mission and goals?
● Is feedback gathered and turned into concrete work items?
● Are teams empowered to change process or requirements based on feedback and continuous learning?
If the answer to those questions is no, then you have discovered fake Agile. z
Atlassian's Elliot predicts that in five years, Agile will be the safe choice among enterprises. "It is proven to be more effective. Hanging back with the old method is actually going to be the way where you are at more risk because you look like a dinosaur and you are not getting results."
It's also important to be careful of the tools you implement in a large Agile initiative. A lot of times, organizations turn to a scaling Agile framework like SAFe, LeSS or DAD to help; however, Denning believes that just like the term scaling, these frameworks lead people in the wrong direction. SAFe, for example, "is all about an internal system running in a top-down fashion and trying to create a hierarchy in a bureaucracy within which Agile teams can function, but once you have them locked in these compartments, you are not talking about Agile at all," he explained.
Hartman is also wary about recommending a framework because too often organizations look at them as a silver bullet. Instead, Hartman explained a good starting point is to understand issues and choose strategic tactics and practices based on needs.
Frameworks for scaling Agile
● Disciplined Agile Delivery (DAD): DAD aims to fill the Agile and Scrum gap with its Agile delivery solution. According to the team, Scrum only covers part of Agile delivery, but when organizations try to integrate other methods there is too much overlap and conflicting terminology. DAD supports a set of roles and delivery life cycles; offers a hybrid approach with strategies from Agile Modeling, Extreme Programming, Unified Process, Kanban, Lean Software Development, SAFe, LeSS and other frameworks; and takes a goal-driven approach to Agile.
● Large Scale Scrum (LeSS): LeSS includes two different large-scale, single-team Scrum frameworks: one for up to eight teams and one for up to a few thousand people. Instead of having teams focus on individual parts of a product, LeSS focuses on one product as a whole. By having a scaled-up version of one-team Scrum, users will benefit from only one product backlog, one definition of done, one potentially shippable product, one product owner and one sprint, the makers of LeSS explained.
● Scaled Agile Framework (SAFe): One of the most popular frameworks for scaling Lean, Agile, and DevOps across the enterprise. It aims to speed up time to market, increase productivity and quality and improve employee engagement. The framework covers leadership, thinking, product development flow, Scrum, Kanban and team building with a set of principles and practices designed to help businesses deliver more efficiently, effectively and continuously. z
Elliot agreed that frameworks can be tricky because people tend to live and die by the framework and start to forget about why they are doing it in the first place. But he also explained that without a framework it is hard to have a common language across the organization and understand what is or isn't working.
Value Stream Management comes to Agile A fundamental tenet of scaling Agile is linking the team-level activities with the business strategy, and the way to do that is through value stream management or mapping (VSM), according to Flint Brenton, CEO of the enterprise Agile solution provider CollabNet VersionOne. VSM takes fragmented tools and processes and puts them in one place so that teams can gain data-driven insights that measure, track, optimize and align their process, he explained. Agile is not the only mindset change that needs to happen within a business, Brenton explained. There also needs to be a shift in how our software and solutions are viewed. Today software is a significant part of business strategy and success, so as a result the business needs a better way to identify and manage their assets. Value stream management identifies and examines value streams so teams can provide process and flow improvements, increased collaboration and knowledge sharing, and alignment with the overall business strategy. To put it in football terms, “scaling Agile gets you in the red zone, but does not necessarily get you in the end zone so value stream management makes sure that not only do you get on the 20-yard line, but you are actually punting the football across the goal line. It allows you to successfully complete the process in an automated way. That is how value stream management takes Agile to another level,” said Brenton. z
"Organizations don't have to use an off-the-shelf framework. It can be a framework that the organization designs. But you need some level of structure and process to Agile. I don't think you can do it without [a framework]. There has to be some way above the teams to look across different products and look at what is happening with customers in a uniform way," he said.
However, CollabNet's Brenton believes a framework like SAFe can help provide a good pathway to Agile success. Organizations just need to have a conversation about how stringent and strict they want to be around the SAFe principles. For instance, organizations will take the SAFe principles and decide to only focus on a couple in order to get a baseline and then continue to work at taking Agile and SAFe to the next level.
"You have to do a self-assessment and determine what is your capability to make those changes and basically ramp the change," said Brenton. "You have to look at your own company and determine what is your ability to execute and then you adjust your expectations and your deployment model accordingly and that is the best way to ensure success." z
INDUSTRY SPOTLIGHT
Agile Costing and Capitalization – How to Work with Finance to Scale Agile
BY PATRICK TICKLE
Patrick Tickle is the chief product officer at Planview.
"Today every company is a technology company, no matter what product or service it provides." And to rapidly produce competitive products and services that customers love, your technology company is likely using Agile and scaling your Agile teams for faster product or software delivery.
While it's clear to many technology executives and development leaders that Agile is a major driver of future business success, historically, finance leaders aren't as easily convinced. Why? Generally, Agile development has uncertain accounting impacts and unfamiliar capitalization rules. Often finance believes they will be forced to expense all Agile software development costs. How many of you have hit a roadblock scaling Agile when trying to get the right headcount or dollars from finance to fund your teams?
The inability for finance to evolve financial reporting and governance practices constrains the ability to scale Agile by forcing development efforts into outdated capitalization methodologies that misinterpret Agile costs. Without an approach for how Agile work is costed and capitalized, growing Agile organizations will continue to face an uphill battle when securing budget and people for their Agile delivery teams. No one wants to learn their answer for increased value delivery, faster time to market, and better customer satisfaction is a pipe dream because of how finance views Agile development costs.
But when faced with ongoing financial governance headaches around headcount, budgeting, staffing, and funding, many leaders, like you, might end up questioning whether Agile is worth the hassle, despite the successes you've had to date. In reality, the customer value delivered through Agile development should outweigh finance's desire to "keep things as they've always been" — and shouldn't inhibit your organization from scaling the best methods for delivering customer value. It's time for Agile leaders and finance teams to discuss a joint solution for how to fund, budget, and manage the costs associated with Agile software development work or face the consequences of stalled Agile scaling efforts.
Unlike waterfall or milestone-driven work, Agile software development doesn't follow linear development processes or gates. Financial planners and accountants are often unsure of how to measure Agile costs and appropriately attribute them to the correct capital expense (CapEx) or operational expense (OpEx) categories, leading many to expense all development efforts up front. Finance does this to keep financial policies compliant with auditors. If the Agile work is all expensed, then there's no question regarding how or what to capitalize.
While the certainty of expensing Agile costs may be more straightforward to deal with, it can be detrimental to the business, as it can overstate costs and falsely cause Agile to appear expensive to the company and its investors. And in the world of costly investments, most companies will opt to defund and reduce headcount for initiatives that are "too expensive." Sound familiar? Has this happened to your Agile scaling initiative?
As technology organizations shift to accommodate market pivots and competitive threats, Agile practices often scale across software development and IT to keep pace. Costing and capitalizing their increased Agile efforts accurately becomes paramount to successful fiscal planning and the overall Agile transformation. Knowing what to capitalize and when versus what to expense impacts an organization's tax liabilities and profitability. Therefore, finance and Agile leaders must come together to discuss how Agile is costed and capitalized. If your organization wants to remain relevant to your existing customer base (or grab market share from less nimble organizations), understanding and driving profitability is critical.
Sounds straightforward, right? For many Agile leaders, financial topics are not typically in their wheelhouse. A common challenge for Agile leaders trying to accelerate Agile at scale is giving finance the data it needs for calculating costs. In a nutshell, Finance needs to understand all Agile costs and then capitalize Agile labor appropriately. To do this, they need visibility and transparency between what the Agile teams are creating and an
understanding of the effort (and hence cost) to do so. This is a critical step in capitalization of these costs. Traditionally, this “visibility” is delivered to Finance in a timesheet format. Time tracking is typically a manual process managed by the individuals doing the work (i.e. your software developers and engineers), and the time spent on development efforts is captured in a time tracking system (time sheets). Especially for the Agile team, time reporting is seen as a source of incremental overhead and waste, something that doesn’t produce end user or customer value (read: not at all Agile). Ideally, finance wants developers to track their time every day against the stories they’re working on to create the most accurate representation of their time as possible. But, let’s be honest; few software developers and engineers are doing this daily, relegating this task for 4 p.m. on a Friday, when Finance sends the email pleading with them to do so. And even in a perfect developer timetracking scenario, most organizations would love to recapture and trade
administrative time spent collating data for more value creation, productivity time. And what’s worse, due to the manual and human nature of the effort, time tracking is only as accurate as the timekeeper. It is estimated in a whitepaper from AffinityLive, a professional services management software company, that “people who track their time weekly are 47% accurate. Meanwhile, those who prepare their timesheets less than once per week are only 35% accurate.” So, how do we make both finance and Agile leaders happy? What if you could cost Agile without disrupting how the Agile teams deliver or how the finance team interprets the costs for capitalization? Cool, huh? Automating the capture of Agile costs helps remove the overhead of manual time tracking and provides finance with an auditable way to calculate and capitalize Agile software development costs. By utilizing a system that automatically tracks effort spent on a story, feature, and corresponding epic, organizations gain a realistic idea of the value delivery of their Agile teams. You
need a solution that takes in the work of disparate Agile teams, apportions their time accordingly, and then rolls up the data into a comprehensive solution. With this information, Agile and Finance leaders can better understand the true impact their Agile teams have on the bottom line and how to identify Agile software development costs to ensure proper CapEx categorization. This detail ensures the Agile teams get the right level of funding and budgeting support for future endeavors. And it ensures the organization's Agile efforts do not appear as expenses that negatively impact profitability.
The bottom line: Who doesn't want an easier and unobtrusive way to deliver important Agile costing information to Finance, while keeping their developers happy and advancing their Agile transformation? Seems like a win-win to me. To learn more about our approach to capitalizing Agile development costs, download The Challenges of Agile Software Development Costing and Capitalization eBook or visit us at planview.com/lean-agile-delivery. z
Is the party over for Hadoop?
Cloud and open-source have stolen most of its thunder, but there are opportunities in data management for hybrid and multi-cloud environments
BY DAVID RUBINSTEIN
Fifteen years ago, the Hadoop data management platform was created. This kicked off a land rush of companies looking to plant their flags in the market, and open-source projects began to spring up to extend what the platform was designed to do. As often happens with technology, it ages, and newer things emerge that either eclipse or consume those earlier works. And both of those things have impacted Hadoop: Cloud providers offered huge data storage that overtook HDFS and the proprietary MapR file system. But industry experts point to execution missteps by the Hadoop platform providers as being equally to blame for what appears to be the decline of these platforms.
Things looked bad for the big three in the market. Cloudera and Hortonworks merged to strengthen their offering and streamline operations, but fumbled the release and sales plan. MapR, which offered a leading file system for Hadoop projects, clung to life before finally being rescued — if that's the right word — by HPE, which has not had a great track record of reviving struggling software.
To get some perspective, it's important to define exactly what Hadoop is. And that's no simple task. It started out as a single open-source distributed data storage project to support the Big Data search tool Nutch, but since has grown into the stack that it is today, encompassing data streaming and processing, resource management, analytics and more.
Gartner analyst Merv Adrian said back when he started covering the space, the question was 'What is Hadoop?' Today, he said, it just might be what ISN'T Hadoop? "I had a conversation with a client that just finished a project where they used TensorFlow, a Google cloud thing for AI, and they used Spark and they used S3 storage, as it happens, because they were on Amazon but they liked the TensorFlow tool," Adrian recounted. "And they said, 'This is one of the best Hadoop projects we've done so far,' and I asked them, 'Why is this a Hadoop project?' And they said, 'Well, the Hadoop team built it, and we got the Spark from our [Cloudera] Hortonworks distribution.' It's some of the stuff we got with Hadoop plus some other stuff."
Factors impacting Hadoop
How did we get to this place, where something that seemed so transformational just a few years ago couldn't sustain itself? First and foremost, the Hadoop platform vendors simply missed the cloud. They were successfully helping companies with on-premises data centers implement distributed file systems and the rest of the stack, while Google, Amazon, Microsoft and — to a lesser degree Oracle — were building this out in the cloud. Further, open-source projects that extended or augmented the Hadoop platforms became viable options in their own right. This created complexity and some confusion. According to Monte Zweben, cofounder and CEO of data platform provider Splice Machine, the problems
were due to the growing number of components supporting Hadoop platforms, and from swelling lakes of uncurated data. "When Hadoop emerged, a mentality arose that was, to use a fancy word, specious. That mentality was that you could just dump data onto a distributed system in a fairly uncurated and sort of random way, and the users of that data will come. That has proven to not work. In the technical ranks, they call that 'schema on read,' meaning, 'Hey, don't worry about what these data elements look like, whether they're
numbers or strings. Just dump data out there in any random format and then whoever needs to build applications will make sense of it.’ And that turned out to be a disaster. And what happened with this data lake view is that people ended up with a data swamp.” Zweben went on to say that complex componentry created a sales problem, due to how complicated they made the Hadoop distributions. “You need a car but what you’re being sold is a suspension system, a fuel injector, some axles, and so on and so forth. It’s just way too
difficult. You don’t build your own cars, so why should you build your own distributed platform, and that’s what I think is at the heart of what’s gone sideways for the Hadoop community. Instead of making it easier for the community to implement applications, they just kept innovating with lots of new componentry.” The emergence of the public cloud, of course, has been cited as a major factor impacting Hadoop vendor platforms. But Scott Gnau, vice president of data platforms at Intersystems and former CTO at Hortonworks, sees it from two sides. “If you define Hadoop as HDFS, then the game is over … take your toys and go home,” Gnau said. “I don’t think that cloud has single-handedly caused the demise of or trouble for Hadoop vendors … The whole idea of having an open-source file system and a massively parallel compute paradigm — which was the original Hadoop stuff — has waned, but that doesn’t mean that there isn’t a lot of opportunity in the data management space, especially for open-source tools.” Those open-source projects also have hurt the Hadoop platform vendors, providing less expensive and just as capable substitutes. “There are about a dozen or so things that all distributors have,” Gartner’s Adrian explained. “Bear in mind that in every layer of this stack, there’s an alternative. You might be using HBase but you might be using Accumulo. You might be using Storm, but you might be using Spark back then. Already, by 2017, you could also add, you might be using HDFS or you might be using S3, or rather data lake storage, and that’s very prevalent now.”
Vendors still delivering value
Still, there is much life left in the space. Adrian provided a glimpse of the value remaining there. "Let's just take the dollars associated with what you could call the Hadoop players, even if they don't call themselves that. In 2018, if you took the dollars for Cloudera and MapR and Google and AWS Elastic MapReduce, we're talking about close to $2 billion in revenue representing
over 4.2% of the DBMS revenue as Gartner counts it. That makes it bigger than the sum by far of all of the pure-play non-relational vendors who weren't Hadoop. If you add up MarkLogic, MongoDB, Datastax and Kafka, those guys only add $600 million of revenue — that's less than a third of the Hadoop space. In 2018."
Going forward, a big future opportunity lies in helping organizations manage their data in hybrid and multicloud environments. Arun Murthy, chief product officer at Cloudera, explained, "Hadoop started off as one open-source project, and it's now become a movement — a distributed architecture running on commodity hardware, and cloud well fits this concept of commodity hardware. We want to make sure that we actually help customers manage that commodity hardware using open-source technologies. This is why Hadoop becomes an abstraction layer, if you will, and enterprises can use it to move data and workloads better if they choose, with consistent security and governance, and you can run multiple workloads on the same data set. That data can reside on-prem, in Amazon S3, or Microsoft [Azure Data Lake Storage], and you get a consistent one pane of glass, one set of experiences to run all the workloads."
To that end, Cloudera last month launched the Cloudera Data Platform, a native cloud service designed to manage data and workloads on any cloud, as well as on-premises. Murthy pointed out that enterprises are embracing the public cloud, and in many cases, more than one. They also are likely to have data they're retaining on private servers.
"IT is trying really hard to make sure they don't run afoul of regulations, while the line of business is moving really fast, and want to use data for their productions," he said. "This leads to inherent tension. Both sides are right. In that world, you want to make sure regardless of where you want to do this — on-prem, public cloud and the edge — today, more data is handled outside the data center than inside the data center. When you look at the use cases the line of business wants to solve — even something as prosaic as real-time billing — you want to lift your smartphone and see how much data you used. You need streaming, data transformation, reporting and machine learning."

DBMS Revenue in US$ Millions
Vendor        2018
Oracle        14,518.1
Microsoft     11,039.6
Amazon        6,319.1
IBM           4,867.1
SAP           3,227.3
Teradata      831.5
Cloudera      654.2
Source: Gartner

Another opportunity is for ISVs to play the multicloud game, according to Gartner's Adrian, who said containers are not going to do this. "Containers will let me pick something up and move it somewhere else and have it run, but it's not going to let me govern it, it's not going to let me manage security and policy consistently, from one place. That is one of the opportunities," he said.
"What Cloudera has ahead of them is a very good, relatively open field to continue to sell what we think of as Hadoop
on-premises," Adrian added, "people who already know what they're doing, and there are lots of successful use cases that are going to grow. They're going to sell more nodes for the people who want to be on-prem, and as for people who want to do on-prem, where else are they going to go to? They could cobble it together out of open-source pieces, which, if they haven't done it by now, they're not the early adopters with a strong engineering organization that's going to do that. They're going to want something packaged."
As the industry moves forward, the technologies that underlie Hadoop remain, even if it won't be known as Hadoop. "Far be it for me to guess what the marketing folks at these companies are going to come up with," Intersystems' Gnau said. "With all of the execution missteps by management teams and these companies recently, maybe they want to change their name, to protect the innocent," he added with a chuckle. "In the end, there is a demand out there for this kind of tack, and folks who are calling it over because of the execution missteps are being a bit short-sighted.
"I'm talking about the need in the marketplace," he continued. "I've got diverse sets of data created by systems or processes that are potentially outside of my control, but I want to capture and map that data into real-time decision-making. What are the tools I need to go do that? Well, provenance is one of the tools I need. Certainly, the ability to have flexibility and not require a schema for capturing, onboarding this data, because data that's created outside of my control is going to change, the schema's going to change, so there's an interesting space for the toolset, regardless of what it ends up being called."
So whatever its name will be, Hadoop technologies will continue to have a place in the market, no matter who's supplying it. "I think there is a use case and a relevance for that kind of product and that kind of company," Gnau said, "and I do think there's a lot of confusion based on failure to execute versus validity of technology." z
Buyers Guide
APMs are more important than ever for microservice-based architectures
pplication performance management (APM) solutions need to adapt now that the age of monolithic applications has evolved into microservice-based architectures, which are innately distributed and complex and therefore harder to monitor. Collecting vast troves of data on how apps are performing is no longer enough, and APM providers have been adding new ways to analyze that data that
will drive meaningful and hyperfast solutions to expose any bottlenecks or code dependencies. Whether that’s by adding AI, ML, new plugins or methods of monitoring, reliability and speed are on everyone’s mind. “It’s not just enough to monitor specific isolated metrics because it’s not enough to just detect that something’s wrong. You need to act fast because the environment is fast. The end continued on page 39 >
37
Full Page Ads_SDT029.qxp_Layout 1 10/23/19 12:19 PM Page 38
037-43_SDT029.qxp_Layout 1 10/23/19 2:15 PM Page 39
www.sdtimes.com
< continued from page 37
user reaction to degradation is catastrophic,” said Daniella Pontes, senior product marketing manager at InfluxData. “If you are in a big event day, you are talking about hundreds of thousands of dollars per minute or billions per day. So you can’t afford a degradation that cannot be quickly identified and, most importantly, fixed.” In 2017, The Economist reported that the world’s most valuable resource is no longer oil, but data. But data in application monitoring isn’t effective if it can’t be analyzed, which makes it all the more crucial to have easy-to-use and intuitive monitoring to transform that data into outcomes, Pontes added.
adequate capacity to support a load and to find potential bottlenecks. Service mesh is a relatively new method that aids APM in microservices. “Instead of using an API gateway which can be challenging, service meshes are a very new modern way that we can concentrate, be a proxy, and provide a point that all microservices can report to,” said Charley Rich, senior director analyst at Gartner. “And then a monitoring tool can inquire to the service mesh to capture the collection of data. So it can act as a collection point and you can help in terms of ease of deployment and potentially performance.” Another trend is the use of OpenTracing. OpenTracing is a CNCF proj-
Another major change in who uses the APMs in an organization has occurred, moving more towards the developers. Most commonly, teams use APM tools when they find out that their app is running slow, according to Denny LeCompte, general manager of application management at SolarWinds. “You’re then trying to find out as rapidly as you can, is it the code? Is it the infrastructure? Is it the network? Is it the database? You’re trying to figure out where in the stack it is. If you can provide an application team a way to reduce the meantime to resolution or meantime to innocence, that’s it,” LeCompte explained. APM solutions leverage data that is collected through API gateways, service mesh, business transaction tracking, log analytics and container APIs to determine both the performance experienced by end users of an application and to measure the computational resources to see whether there is an
ect that includes a set of vendor-neutral APIs and instrumentation that is used for distributed tracing. “OpenTracing, census telemetry, service mesh and others need to be explored and utilized,” Rich said. “We’re moving from an era where the monitoring solutions go out and collect the data they need to an area where the infrastructure and applications are reporting back that information.” Another major change in who uses the APMs in an organization has occurred, moving more towards the developers, according to LeCompte. “Ten years ago the app dev guys would not have cared. That was not their problem. Whereas now, they’re definitely more involved and when there is a problem, they are more likely to go into the tool and expect the monitor tool to help them understand,”
November 2019
SD Times
LeCompte said. “It’s getting to the point where any sort of application team would feel naked without a tool to provide them with visibility.” Meanwhile, Pontes said APM solutions have evolved to a point where all parts of a team are using it. The developers are using APM to understand how fragmented code performs before moving forward with it in the production environment. The CI/CD teams are using it to understand what kind of impact that change can do and IT teams are using it to make sure everything stays as it should. What used to be one slowly changing monolith is now all of a sudden dozens of quickly changing microservices that get changed on a weekly or even daily cadence, according to Ivo Mägi, CEO of Plumbr. “Every change is risky by nature so you need to keep a closer eye on your microservices-based architecture because errors are just more likely to happen in situations where you have really agile release cadences,” Mägi said. He added that APM helps users with availability metrics so that whenever those metrics drop below tolerable levels, the teams are aware of the issues emerging. Another important aspect is the distributed tracing throughout all the microservices in the back end that allows one to zoom in to the exact service failing and, better yet, into the single line of source code in a particular service failing. These functionalities cut down the time to resolution for every incident. “Technical monitoring solutions like APMs are similar to sport watches in the sense that through some sensors they gather data and turn it into information. It would be like monitoring the heart rate or steps done during the day. Now if I just see that I did 3000 steps during the day, I don’t know whether I just broke the world record or I am the laziest guy in the world.. I actually haven’t changed my habits nor really gained anything It’s just a distraction after a while,” Mägi explained. “But if I know that 10,000 steps a day keeps the doctor away and that coupling this with continued on page 43 >
39
Full Page Ads_SDT029.qxp_Layout 1 10/23/19 12:19 PM Page 40
037-43_SDT029.qxp_Layout 1 10/23/19 3:44 PM Page 41
www.sdtimes.com
November 2019
SD Times
How these companies can help you monitor your applications Daniella Pontes, senior product marketing manager, InfluxData InfluxDB time series database offers a platform where today’s highly complex application environments (ephemeral containers, distributed apps in hybrid and multi-cloud, mobile applications, expanding APIs and such) can be effectively monitored. InfluxDB provides a scalable time series store that ingests data types (application metrics, logs, tracing and more) together with a real-time analytical engine that can process these data in complex ways. InfluxDB’s data enrichment and advanced operations takes high-volume raw data from multiple sources and delivers information as
needed to be presented to the various audiences in order to be actionable. When early signs of degradation or anomalies are detected in important KPIs monitored with InfluxDB platform, the diagnosis analysis comes into place and integration with an existing bytecode APM solution could provide quick access to what could be a root-cause in the application code. This way customers can use application performance monitoring and instrumentation in a more effective, focused and manageable manner.
Ivo Mägi, CEO of Plumbr Most organizations today are facing operating in DevOps, dynamic infrastructure or microservices environments. Against these institutional changes most monitoring tools fail by just working in the background to protect brands against outages in service. An APM fit for the era of digital transformation, though, guides you in proactively helping improve app performance with a focus on increasing user engagement and transactions, as well as ensuring that engineers are adding value rather than fighting fires. Plumbr APM simplifies the act of discovering, verifying, fixing and actively preventing issues through four key features:
• Actionable alerts based on user data. • Distributed tracing for all and every request in your stack. • Root-cause detection that, unlike other APMs, dives to the exact line of code that needs fixing. • Impact analysis to show were the biggest ROI lies for connected business outcomes. This allows users to trace every user interaction done in the UI throughout distributed traces in your back end monitored by the APM down to the bottleneck or error that’s actually hindering the user experience, and then being able to rank those bottlenecks and error based on the impact they have on users.
Denny LeCompte, general manager, application management, SolarWinds The main thing is that what we do see customers looking for is an integrated solution. So we think that the advantage that we’ve got is that we have best-of-breed products. They were all built standalone to either solve performance, logging or APM. And then we’re bringing them together so that they can work in a tightly integrated fashion so that they’re all mature enough. What we try to do is not necessarily have the most features or the lowest price. What we shoot for is for every product to be the best value in the market so we have all the features that almost everybody needs. One of the things we want to bring is an affordability element to the whole equation that lets you either save money or
stretch your monitoring farther so that you can monitor more of your vertical apps. We could see that a lot of customers didn’t want to have to go and instrument an application to be monitored. They really just want to use it out-ofthe-box. That’s been important for all of our SolarWinds products. We do not think that a monitoring product should require some third-party to go spend a bunch of time and money to make it work. It should all be a sort of automatic out-of-box. The SolarWinds APM Suite includes Pingdom, AppOptics and Loggly which combine user experience monitoring with custom metrics, code analysis, distributed tracing, log analytics and log management. z
41
037-43_SDT029.qxp_Layout 1 10/23/19 2:16 PM Page 42
42
SD Times
November 2019
www.sdtimes.com
A guide to APM tools n AppDynamics: The AppDynamics Application Intelligence Platform provides a real-time, end-to-end view of application performance and its impact on digital customer experience, from end-user devices through the back-end ecosystem — lines of code, infrastructure, user sessions and business transactions. The platform was built to handle the most complex, heterogeneous, distributed application environments; to support rapid identification and resolution of application issues before they impact users; and to deliver real-time insights into the correlation between application and business performance. n Catchpoint Systems: Catchpoint offers innovative, real-time analytics across its Synthetic Monitoring and Real User Measurement (RUM) tools. Both solutions work in tandem to give a clear assessment of performance, with Synthetic allowing testing from outside of data centers with expansive global nodes, and RUM allowing a clearer view of end-user experiences. n Dynatrace provides software intelligence to simplify enterprise cloud complexity and accelerate digital transformation. With AI and complete automation, our all-in-one platform provides answers, not just data, about the performance of applications, the underlying infrastructure and the experience of all users. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native AIOps. n Instana is a fully automatic APM solution that makes it easy to visualize and manage the performance of your business applications and services. The only APM solution built specifically for cloud-native microservice architectures, Instana leverages automation and AI to deliver immediate actionable information to DevOps. For developers, Instana’s AutoTrace technology automatically captures context, mapping all your applications and microservices without continuous additional engineering.
n
FEATURED PROVIDERS n
n InfluxData: APM can be performed using InfluxData’s platform InfluxDB. InfluxDB is a purpose-built time series database, real-time analytics engine and visualization pane. It is a central platform where all metrics, events, logs and tracing data can be integrated and centrally monitored. InfluxDB also comes built-in with Flux: a scripting and query language for complex operations across measurements. n Plumbr: Plumbr is a modern monitoring solution designed to be used in microservice-ready environments. Using Plumbr, engineering teams can govern microservice application quality by using data from web application performance monitoring. Plumbr unifies the data from infrastructure, applications, and clients to expose the experience of a user. This makes it possible to discover, verify, fix and prevent issues. Plumbr puts engineering-driven organizations firmly on the path to providing a faster and more reliable digital experience for their users. n SolarWinds: The SolarWinds APM Suite — Pingdom, AppOptics, and Loggly — combines user experience monitoring with custom metrics, code analysis, distributed tracing, log analytics, and log management to provide proactive visibility into modern applications. All major types of data are collected, including logs, traces, metrics, and both synthetic and real enduser experience data, enabling proactive problem avoidance and rapid root cause troubleshooting. The suite works across all major application development architectures: monolithic, n-tier SOA, and microservices. n LightStep’s mission is to deliver insights that put organizations back in control of their complex software applications. Its first product, LightStep [x]PM, is reinventing application performance management. It provides an accurate, detailed snapshot of the entire software system at any point in time, enabling organizations to identify bottlenecks and resolve incidents rapidly. n New Relic: New Relic’s comprehensive SaaS-based New Relic Software Analytics Cloud provides a single powerful platform to get answers about application performance, customer experience, and business success for web, mobile and back-end applications. New Relic delivers code-level visibility for applications in production that cross six languages — Java, .NET, Ruby, Python, PHP and Node.js — and supporting more than 70 frameworks. New Relic Insights is embedded in the
platform, enabling customers to do detailed, ad hoc queries for real-time analytics across New Relic’s APM, Mobile, Browser and Synthetics products. n Oracle: Oracle provides a complete end-to-end application performance management solution for custom and Oracle applications. Oracle Enterprise Manager is designed for both cloud and on-premises deployments; it isolates and diagnoses problems fast, and reduces downtime, providing end-toend visibility through real user monitoring; log monitoring; synthetic transaction monitoring; business transaction management and business metrics. n OverOps captures code-level insight about application quality in real-time to help DevOps teams deliver reliable software. Operating in any environment, OverOps employs both static and dynamic code analysis to collect unique data about every error and exception
— both caught and uncaught — as well as performance slowdowns. This deep visibility into an application’s functional quality not only helps developers more effectively identify the true root cause of an issue, but also empowers ITOps to detect anomalies and improve overall reliability.
● Pepperdata: With proven products, operational experience, and deep expertise, Pepperdata provides enterprises with predictable performance, empowered users, managed costs and managed growth for their big data investments, both on-premise and in the cloud. Pepperdata enables enterprises to manage and improve the performance of their big data infrastructures by troubleshooting problems, maximizing cluster utilization, and enforcing policies to support multi-tenancy.
● Riverbed recognizes the need to maximize digital performance and is uniquely positioned to provide organizations with a Digital Performance Platform that delivers superior digital experiences and accelerates performance, allowing our customers to rethink what is possible. Riverbed application performance solutions provide superior levels of visibility into cloud-native applications — from end users, to microservices, to containers, to infrastructure — to help you dramatically accelerate the application lifecycle from DevOps through production.
● SmartBear: AlertSite’s global network of more than 340 monitoring nodes helps monitor availability and performance of applications and APIs, and find issues before they hit end consumers. The Web transaction recorder DejaClick helps record complex user transactions and turn them into monitors, without requiring any coding.
● SOASTA: The SOASTA platform enables digital business owners to gain continuous performance insights into their real-user experience on mobile and Web devices — in real time and at scale. z
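As a concrete illustration of the query language mentioned in the InfluxData entry above, the following is a minimal, hypothetical sketch of pulling APM-style metrics out of an InfluxDB 2.x instance with the official Python client library. The bucket name, measurement, field and connection details are assumptions made for the example, not taken from any vendor documentation.

# Minimal sketch: average request latency per service over one-minute windows.
# Assumptions: a local InfluxDB 2.x instance, a bucket named "apm", and a
# measurement "http_requests" carrying a "duration_ms" field.
from influxdb_client import InfluxDBClient

flux_query = '''
from(bucket: "apm")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "http_requests" and r._field == "duration_ms")
  |> group(columns: ["service"])
  |> aggregateWindow(every: 1m, fn: mean)
'''

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
for table in client.query_api().query(flux_query):
    for record in table.records:
        print(record.values.get("service"), record.get_time(), record.get_value())
client.close()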
< continued from page 38
an actual action and doing the remaining 7,000 steps, I have gained quality in my life. And to me this is really similar to what APMs are able to do. If you understand how and why performance and availability can impact your business, and know when to respond, then you can actually have a significant impact on your business.”
However, despite all of its benefits, creating an effective APM solution comes with a set of challenges. According to Rich, the biggest challenge when monitoring microservices is their ephemerality, and APM vendors have to adapt to work with it. “Usually agents for most cases are specific, so that’s problematic for a lot of vendors. To package agents in the containers, I need to know in advance what’s going to go into a container image. That’s a lot of work. And it also makes me more static when I’m trying to be agile,” Rich said. “They’re just there for moments, then gone and somewhere else, which makes monitoring challenging. That’s different from the traditional approaches to monitoring within an enterprise in a cloud.”
Another challenge, according to a Gartner report, is that many organizations don’t provide production visibility for the application development and DevOps teams that build microservice-based applications, resulting in isolation from the IT teams that are responsible for operational deployment. To fix these problems, Gartner recommends companies adopt a coordinated monitoring strategy between operations, developers and DevOps teams, enabling service discovery by using the API gateway layer, leveraging service mesh and maintaining up-to-date service metrics; the sketch below shows one common way a service can expose such metrics.
Rich said companies that are undergoing digital transformation are the primary candidates for using APM solutions. Mode 2 applications that emphasize agility and speed need to be monitored the most because these are the ones that change frequently. Sometimes changes occur several times a day; therefore, protecting the moneymaking applications is most critical.
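For instance, the following is a minimal, illustrative sketch (not any vendor’s agent) of how a microservice might expose up-to-date request metrics for a monitoring or APM layer to scrape, using the open-source Prometheus client library for Python. The metric names, endpoint label and port are assumptions for the example.

# Illustrative only: expose request counts and latency so a monitoring/APM
# layer can scrape them. Metric names, labels and port are example assumptions.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["endpoint"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds", ["endpoint"])

def handle_checkout():
    REQUESTS.labels(endpoint="/checkout").inc()
    with LATENCY.labels(endpoint="/checkout").time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real request handling

if __name__ == "__main__":
    start_http_server(8000)  # metrics are served at http://localhost:8000/metrics
    while True:
        handle_checkout()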
“Anything that’s built now really does need some sort of APM. I don’t really think there’s an application in modern times that doesn’t do better with some level of monitoring,” SolarWinds’ LeCompte said. “Lots of customers only monitor the most mission-critical things, but if you built it and it’s running part of your business, then if you’re not monitoring it, you’re just going to be surprised.”
LeCompte said this includes things many people would not immediately regard as an application, such as websites. Yet web dev and web operations teams are constantly monitoring how different users are perceiving them. He added that users expect an APM solution to work out of the box and to automate agent deployment. “Customers don’t want to have to spend weeks rolling this thing out. We do not think that a modern product should require some third party to go spend a bunch of time and money to make it work. It should all be sort of automatic out of the box,” LeCompte said.
Increasing automation to keep up with continuous deployment
In order to keep up with the rapid pace of monitoring, many APM solutions are adding AI and ML capabilities. Manual APMs are no longer equipped to deal with the dynamism and the scale that microservices require, said Pontes. “You need to feed the data into artificial intelligence and machine learning frameworks to start automating certain aspects of the workflow. Because the human factor is actually the bottleneck,” Pontes said.
These machine learning additions perform correlation and analysis to reduce the volume of alerts and prevent alert storms, reduce false alarms, detect anomalies and find unusual values, correlate them, and predict the potential impact, Rich added. “Machine learning has been embedded in many APM solutions, not necessarily to do anything new but to do what they did before much better,” Rich said. z
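None of the vendors detail their models publicly, but as a rough, purely illustrative sketch of the baselining idea behind anomaly detection and alert reduction, the toy detector below flags a latency sample only when it deviates sharply from a rolling baseline rather than on every fixed-threshold breach. The window size and threshold are arbitrary assumptions; production APM and AIOps systems use far more sophisticated techniques.

# Toy rolling z-score detector: alert only on sharp deviations from a rolling
# baseline instead of on every threshold breach. Window and threshold are
# arbitrary assumptions for illustration.
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 60, threshold: float = 3.0):
    history = deque(maxlen=window)

    def is_anomaly(value: float) -> bool:
        anomalous = False
        if len(history) >= 10:  # wait for a minimal baseline before judging
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) > threshold * sigma
        history.append(value)
        return anomalous

    return is_anomaly

detect = make_detector()
for latency_ms in [120, 118, 125, 119, 122, 121, 117, 123, 120, 119, 480]:
    if detect(latency_ms):
        print(f"anomalous latency: {latency_ms} ms")  # only the 480 ms spike fires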
Guest View BY TOBI KNAUP
Why Day 2 is critical in DevOps
Tobi Knaup is CTO at D2iQ, formerly Mesosphere.
Ready. Set. Deploy. The excitement and anxious energy developers feel the first time a new application runs in a production environment is probably a much smaller scale version of what NASA engineers felt when Apollo 11 landed on the moon. And while there are no space travel celebrations until the astronauts safely return home, organizations have a tendency to prematurely celebrate the deployment of cloud-native applications. Organizations must develop a holistic DevOps strategy that focuses on Day 2 operations to avoid surprises when new applications are deployed in production environments.
Day 2 is a DevOps concept that has been around for some time, referring to the phase of the development life cycle that follows initial deployment, where the real application demands exist. On Day 2, organizations begin to place stringent requirements on applications such as resilience, scale, agility, security, governance and compliance. Day 2 is when an application moves from a development project to a strategic advantage to the business. When developing cloud-native applications, there are a number of common challenges that organizations must navigate to reach Day 2 operational success.
● Timing: While one of the goals of cloud-native application deployments is agility and speed, getting there should never be a blind sprint. Rushing to the finish line can cause productivity disruptions, lost revenue, employee turnover, security deficiencies, and more.
● Open Source Maturity and Interoperability: The state of each open-source technology varies greatly, so getting a clear understanding of maturity levels is a challenge. In addition, most open-source technologies were not built to ensure interoperability, so jamming together multiple disparate solutions from the cloud-native landscape is often complex and labor intensive.
● Limited Experience and Expertise: Building cloud-native applications, particularly for enterprises not born in the cloud, can be a challenging proposition. Most of the open-source
technologies that power cloud-native journeys are new, meaning there is a very limited talent pool with the right experience.
● Complexity: The biggest overall challenge to cloud-native application development, deployment and ongoing operational success is complexity. The need to simultaneously manage technologies, people and processes requires a culture that has entirely bought into the organization’s DevOps strategy.
Blueprint for successful Day 2 operations
Day 2 operations success begins by thinking holistically beyond implementation to how the application will perform in production environments over time. When defining the holistic approach for their organization, development teams should focus on the following aspects of the cloud-native journey:
1. Enterprise-grade scalability — Scalability, security and compliance must be taken into account from the earliest stages of a cloud-native development project. Organizations must learn how to balance these priorities as the business requirements and demands on the application, such as scale, change over time.
2. Infrastructure flexibility — The beauty of cloud-native applications is the ability to evolve and adapt quickly over time, empowering organizations to react to timely opportunities, competitive threats and market shifts. This agility and flexibility require an understanding of how different infrastructures, such as the cloud, the data center or the edge, can impact the application.
3. Data-driven architecture — To ensure that the most sophisticated applications can operate at significant scale when dealing with massive data needs, organizations must consider the capabilities, and limits, of the selected open-source data technologies, such as Cassandra, Kafka and Spark.
4. Testing and training — Create complex testing stacks to train development teams, determine the limits of the applications under various scenarios, and ensure that maintenance can be performed without causing downtime.
By focusing on the pillars above, organizations can safely accelerate the development life cycle to create cloud-native applications that operate as intended over time. z
Analyst View BY MICHAEL AZOFF
Cloud native means Kubernetes
Attending the Pivotal SpringOne conference last month hit home how important the alignment around Kubernetes is in the cloud-native technology world. This event is a developer conference for the popular Java web framework Spring — Pivotal was keen to quote from the recent JetBrains survey in which the two most popular offerings in this category are Spring Boot (56%) and Spring MVC (43%); the next most popular stood at 6%. This report reflects my experiences at the event wearing Kubernetes-tinted glasses.
The reason for this is that to play in the cloud-native world today you need to be part of the Kubernetes ecosystem. Kubernetes has emerged as a de facto standard in cloud-native computing, and it has achieved that because it is open source and vendor neutral, and its timing was perfect in solving the need to manage containers. Originated by Google, the open-source project today is owned by the Cloud Native Computing Foundation (CNCF), a non-profit organization in turn owned by the Linux Foundation.
VMware CEO Pat Gelsinger was invited on the opening keynote stage and talked about the company’s acquisition of Pivotal. VMware gave birth to Pivotal as an external partnership with Dell and EMC (now part of Dell), and bringing Pivotal inside VMware is a strategic move that is all about Kubernetes. VMware recently announced a Kubernetes-native vSphere (project Pacific), and project Tanzu — a build, run, and manage offering for cloud-native applications, again with Kubernetes at the center. This shows the pieces of VMware’s strategy falling into place. All the public cloud players want to facilitate Kubernetes-based cloud-native applications, and with VMware playing as the middleware for the cloud, it can benefit in multiple ways: Pivotal gives it the grassroots developers, and its VM infrastructure stack attracts its enterprise customer base to the public cloud players who want to run those enterprise workloads.
The cloud is also a strategic play for Google, and it has played a benign role, supporting the open-source community. So it came as a surprise that two important open-source projects in the Kubernetes ecosystem, Knative and Istio, which were expected to join the CNCF, will remain managed by Google. While there is nothing wrong with that, given how much
investment is flowing into the Kubernetes world, there will be suspicion that Google will steer these projects towards serving the Google Cloud better than rival clouds. It remains to be seen whether Google can convince the community of the wisdom of controlling these projects or whether the community decides it is better to create a fork.
Moving to the cloud, and re-architecting applications to run optimally on the cloud, is at the heart of digital transformation. Pivotal provides cloud technology, but to many of its target large enterprises, the re-architecting element is a huge undertaking. Helping to enable this is Pivotal Labs, the consulting body that helps enterprises master agile and DevOps and enter the brave new world of containers, microservices and more. Large enterprises carry a huge amount of legacy code (‘heritage’ is a nice term I heard used), and to help transform this, Pivotal Labs has created agile techniques for large system modernization. This is a combination of methodologies and tools, such as the Boris method — a process for mapping architecture components (named after The Who song, “Boris the Spider”), and the SNAP (Snap Not Analysis Paralysis) method, which uses a Pivotal tool called App Analyzer that automatically analyzes code and provides complexity analytics, such as the degree of monolithic coupling across components.
The Labs concept is a huge success, and key competitor IBM Red Hat has emulated it, but it is difficult to scale in large multinationals. The path taken by two customers, Dell and a large telco, is to create an internal version of Labs and have Pivotal train the trainers.
Finally, a major announcement at the event was the partnership with Microsoft on Azure Spring Cloud. In development since February 2019 and going live in early 2020, the arrangement gives developers a seamless and effortless move to the Azure public cloud via Spring; the platform takes care of all infrastructure concerns, providing ‘Spring Cloud as a Service,’ and is a managed service by both companies. Platforms like Spring carry significant enterprise workloads, and as observed above, public cloud providers want large enterprises as customers, so expect to see interest from other public clouds. z
Michael Azoff is a principal analyst for Ovum’s IT infrastructure solutions group.
Industry Watch BY DAVID RUBINSTEIN
Identifying, and winning, with unicorns
David Rubinstein is editor-in-chief of SD Times.
You’re working on a project that’s months late and millions over budget. Your developers move at a snail’s pace, being held up by unfilled requests for server environments, authorizations to access needed data and even the codebase. You’re frustrated, your managers are mad and worried for their job security, and execs are jumping ship — either of their own volition, or pushed overboard. What do you do?
In a delightful follow-up to his seminal DevOps work The Phoenix Project, author Gene Kim revisits the fictional car parts company that suffered from a payroll delivery failure and was wilting under the pressure to complete a digital transformation, in his newest work, The Unicorn Project. It follows Maxine, a developer par excellence, who — after taking the fall as the “human error” in the payroll debacle — is reassigned to the Phoenix project, where she is greeted with everything described in this column’s opening paragraph.
“The reason that really motivated me to write the book was, even if organizations did all the things that got prescribed in The Phoenix Project ... there’s still this problem where there’s still all these invisible structures that are needed to make developers productive,” Kim told me in a recent interview.
“One of the main reasons that I wanted to retell The Phoenix Project was to show the equally heroic journey that happened not from the Ops perspective but from the Dev perspective and show just how awful life is and how great it can be,” he added with a laugh.
The book is also a story of contrasts. Maxine “knows what awesome looks like, but wherever she looks, she’s surrounded by mediocrity … not just mediocrity, but horrible,” Kim said.
In The Phoenix Project, the protagonist, a character named Bill Palmer, was the VP of operations. The book looked at things, as Kim put it in Star Trek terms, from the bridge, whereas The Unicorn Project is “a novel about the redshirts trapped in the engine room. I think what I explore and posit in the book is the daily work of developers; if the future depends on developer productivity and software, then how that daily work feels for that hands-on-keyboard developer. It’s probably one of the most
important things that leaders should care about.”
Underlying the story are what Kim calls five ideals: Locality and simplicity; focus, flow and joy; improvement of daily work; psychological safety; and laser focus on the customer. Organizations need to ensure developers are able to independently develop, deploy, test and create value for customers, and not require the entire company to do this, with meetings and approvals that can add delays for the business and negative feelings for the team. These five ideals, he said, are meant to describe the necessary conditions for a software organization “to really kick ass and win in the marketplace.”
However, in many organizations, the best developers work on features. In places like Facebook, Amazon, Netflix, Google and Microsoft, 3 to 5 percent of the best developers are working to improve the developer experience. In other enterprises, the people responsible for elevating developer productivity — those who work on the builds, the CI pipelines and infrastructure improvement — are usually interns, or the developers not good enough to work on features, Kim said. That, he noted, is completely inverted.
One of Kim’s goals with the book is to try to define the qualities of leadership that organizations should model, and those qualities that should not be modeled. For example, Chris — the VP of development — is actually quite a weak character. “He’s the one,” Kim said, “who says to Maxine, ‘Stay under the radar. Don’t rock the boat. We need a fall guy to take the blame for the payroll outage, and that’s you, Maxine.’ He’s the one who doesn’t want to put his neck on the line. He’s too frightened to be the leader of the rebellion, which is actually two levels down.”
That’s where Maxine thrives, reaching out to people within the development organization and beyond until she finds people willing to do what it takes to advance the work, without requiring meetings. She joins, then leads, a team of like-minded devs and IT pros — a “team of teams” whose goal is bigger than that of their siloed team or department.
Kim’s hope with the book is that it might create some uncomfortable conversations about an individual’s role in the organization. “Am I really committed to the rebellion or am I against the rebellion?” he asked. “Am I one of the good guys, or one of the ones in the way?” z
Bad address data costs you money, customers and insight. Melissa’s 30+ years of domain experience in address management, patented fuzzy matching and multi-sourced reference datasets power the global data quality tools you need to keep addresses clean, correct and current. The result? Trusted information that improves customer communication, fraud prevention, predictive analytics, and the bottom line.
• Global Address Verification
• Digital Identity Verification
• Email & Phone Verification
• Location Intelligence
• Single Customer View
See the Elephant in Your Business -
Name it and Tame it!
www.Melissa.com | 1-800-MELISSA
Free API Trials, Data Quality Audit & Professional Services.