SD Times January 2020


JANUARY 2020 • VOL. 2, ISSUE 31 • $9.95 • www.sdtimes.com





www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com Jakub Lewkowicz jlewkowicz@d2emerge.com ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz


CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum

ADVERTISING SALES


PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com SALES MANAGER Jon Sawyer jsawyer@d2emerge.com CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com LIST SERVICES Jourdan Pedone jpedone@d2emerge.com


REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com


PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com



Contents

VOLUME 2, ISSUE 31 • JANUARY 2020

NEWS
6   News Watch
12  ‘API First’ paves the way for agile integration
14  Around the industry: Predictions for 2020
25  JetBrains introduces new developer collaboration tool
29  Challenges to effective DevOps collaboration

FEATURES
8   2020: The year of integration
18  Crunch culture can destroy development teams
26  Productivity tools are crucial in the current development landscape
31  CI/CD pipelines are expanding
36  The realities of running an open-source community
43  BUYERS GUIDE: Getting the most value out of your value streams

COLUMNS
52  GUEST VIEW by Matt Chotin: Embracing a DevOps culture
53  ANALYST VIEW by Charles Araujo: IT predictions, or parlor tricks?
54  INDUSTRY WATCH by David Rubinstein: The little dirty data secret

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Berners-Lee launches Contract for the Web
The Contract for the Web is officially coming to life. The contract was created by web inventor Sir Tim Berners-Lee as a global plan of action to make the online world safer and accessible to everyone. A core facet of the Contract for the Web is that users must be able to control their lives online by making choices to protect their data and privacy. In its 76 clauses, the contract states that governments and companies should ensure everyone can connect to the Internet; keep all of the Internet available, affordable and accessible all of the time; and respect and protect people’s fundamental online privacy and data rights. The contract also states that governments must make connectivity affordable and accessible to everyone.

GitHub launches lab for open-source security
GitHub wants to help protect the open-source ecosystem with the announcement of the GitHub Security Lab. The lab is designed to bring together security researchers, maintainers and companies who are dedicated to open-source security. In addition, the company will provide tools, resources, bounties and hours of security research.

Microsoft’s tips on developing for dual-screen devices
As part of its entry into the foldable device market, Microsoft is providing guidelines and advice for developers looking to develop for these types of devices. According to Microsoft, there are two stages for optimizing apps for dual-screen devices: 1) making sure that websites and apps work, and 2) embracing dual-screen experiences. Microsoft stated that developers will not have to start from scratch on these devices. “Our goal is to make it as easy as possible for your existing websites and apps to work well on dual-screen devices,” Kevin Gallo, corporate vice president of the Windows Developer Platform, wrote in a post. Read the tips at bit.ly/2Puu7j5

Facebook, Microsoft team up on remote development
Facebook announced that Visual Studio Code is now the default development environment at the company. It is also teaming up with Microsoft to enhance Visual Studio Code’s remote development extensions to support remote development at scale. “Given the scale of development at Facebook, supporting the efficiency and productivity of our engineers is key. Constant work is being done to enable Visual Studio Code to be the IDE of choice inside the company, whether by building extensions or enhancing our current technologies to better support it,” Joel Marcey, developer advocate at Facebook, wrote in a post.

WhiteSource acquires Renovate
Open-source security specialist WhiteSource has announced that it is acquiring Renovate. According to WhiteSource, Renovate is an open-source dependency update solution. “Renovate was developed because running user-facing applications with outdated dependencies is not a serious option for software projects today,” said Rhys Arkins, founder of Renovate. “It increases the likelihood of unfixed bugs and increases the quantity and impact of security vulnerabilities within software applications. With Renovate, you can automatically and efficiently keep dependencies up to date, integrating this process into any DevOps workflow.”

Altova MobileTogether 6.0 adds control templates
Data management solution provider Altova announced the release of MobileTogether 6.0. The update is designed to bring new functionality to low-code programming and to speed up mobile app development. New features include control templates, which let developers define and group multiple controls so that the group is easily reusable on multiple pages, as well as a Placeholder Control that can position a Control Template at a desired location.

Linux Foundation’s JDF to standardize data interoperability
The Linux Foundation’s Joint Development Foundation (JDF) is teaming up with AWS, Genesys and Salesforce to create an open-source data model that standardizes data interoperability across cloud applications. They’re calling it the Cloud Information Model (CIM). The CIM is meant to tackle the challenge that cloud computing creates for data models. The foundation explained that disparate data models force developers to build, test and manage custom code in order to translate data across systems.

People on the move

■ AppDynamics has announced Vipul Shah as its new vice president of product management. Shah is an industry veteran who has spent his career working at top brands such as Microsoft, Oracle and, more recently, VMware, where he was the senior director of product management and multi-cloud management SaaS.

■ Former Appthority and FlawCheck founder and CEO Anthony Bettini is joining WhiteHat Security as chief technology officer. Bettini has more than 20 years of cybersecurity experience. At WhiteHat, he will be responsible for leading the company’s technology and engineering teams as well as developing, implementing, managing and evaluating its technological resources.

■ Francisco D’Souza is joining MongoDB’s board of directors. D’Souza is a veteran entrepreneur and tech executive who co-founded and was the CEO of Cognizant, an IT services company.

■ Postman has announced Nick Tran as its new vice president of marketing and Kin Lane as chief evangelist. Tran has more than 20 years of experience in technology, marketing and software. He will be in charge of Postman’s worldwide marketing. Lane is an API evangelist who will help Postman lead the next generation of APIs.



IBM addresses cloud security
As organizations start to move to the cloud and adopt multi-cloud and hybrid cloud environments, IBM wants to ensure data stays secure. The company announced Cloud Pak for Security, a new solution that connects security tools, cloud and on-premises systems without having to move data. Cloud Pak for Security includes the ability to hunt threats using automation; is pre-integrated with Red Hat OpenShift; connects data sources to find hidden threats and support better risk-based decisions; connects security workflows in a single interface; and offers a model to help Managed Security Service Providers operate at scale, address silos and streamline processes.

Tech companies eye WebAssembly
In an effort to make WebAssembly a cross-platform, cross-device computing runtime, Mozilla, Fastly, Intel and Red Hat have announced the Bytecode Alliance. The new alliance is an open-source community that will focus on creating a runtime environment and associated language toolchains that provide security, efficiency and modularity across a wide range of devices and architectures. The alliance will build on existing standards such as WebAssembly and the WebAssembly System Interface. The idea is to provide “a secure-by-default WebAssembly ecosystem for all platforms,” the alliance explained.

Fighting open-source patent trolls
The Open Invention Network (OIN) is strengthening its fight against patent trolls. The organization has announced it is partnering with IBM, Microsoft and the Linux Foundation to protect open-source software from Patent Assertion Entities (PAEs), or patent trolls. The OIN was created to provide a patent non-aggression cross-license in the “Linux System.” As part of the partnership, the companies will support Unified Patents’ Open Source Zone and provide an annual subscription.

Sentry brings mobile application error monitoring to Android
Sentry has announced a new software development kit and native development kit for Android developers. The new kits aim to bring mobile application error monitoring to the Android operating system. “App crashes cause more than 70% of uninstalls, and Google ranking algorithms now downrank apps with stability problems, so uptime and performance are critical for companies to remain successful and competitive in the mobile world,” said David Cramer, co-founder and CEO of Sentry. “The challenge for mobile application developers is the lack of visibility and control over the devices. And on Android, flaws in your native code, third-party dependencies or, in rare cases, even in the system libraries, can bring down the entire application.”

ActiveState adds Python packages
ActiveState announced that it has added more than 50,000 package versions covering the most popular Python 2 and 3 packages to its ActiveState Platform. “In order to ensure our customers can automatically build all Python packages, even those that contain C code, we’re designing systems to vet the code and metadata for every package in PyPI,” said Jeff Rouse, vice president of product management at ActiveState.

Developers can automatically build open-source language runtimes from source, automatically resolve all dependencies, and then certify them against compliance and security criteria within a few minutes.

Amazon Aurora makes ML more accessible
AWS is trying to make it easier for developers to leverage machine learning with new integrations with Amazon Aurora. According to AWS, in order to use machine learning on data in a relational database, you would need to create a custom application that reads the data from the database and then applies the machine learning model. This can be impractical for organizations because developing the application requires a mix of skills, and once created, the application needs to be managed for performance, availability and security, the company explained. To alleviate some of that burden, AWS is integrating Amazon Aurora with Amazon SageMaker and Amazon Comprehend. This new integration will allow developers to use a SQL function to apply machine learning models to data. ❚
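The item above describes invoking models through SQL; the sketch below shows roughly what that looks like from Python, assuming the Aurora PostgreSQL flavor of the integration. The cluster endpoint, credentials and product_reviews table are hypothetical, and the aws_comprehend.detect_sentiment() function name and signature should be checked against current AWS documentation rather than taken from this sketch.

```python
# Hypothetical sketch: sentiment scoring inside an Aurora PostgreSQL query.
import psycopg2  # assumes: pip install psycopg2-binary

conn = psycopg2.connect(
    host="my-aurora-cluster.example.com",  # hypothetical cluster endpoint
    dbname="app", user="report_user", password="example-password",
)

with conn, conn.cursor() as cur:
    # The model is applied inside the query itself; no separate application
    # has to read rows out of the database and call the model one by one.
    cur.execute("""
        SELECT review_id,
               s.sentiment,
               s.confidence
        FROM   product_reviews,
               aws_comprehend.detect_sentiment(review_text, 'en') AS s
        LIMIT  100;
    """)
    for review_id, sentiment, confidence in cur.fetchall():
        print(review_id, sentiment, round(confidence, 3))
```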


2020: The year of integration
BY DAVID RUBINSTEIN

Software development has changed, moving from monolithic code blocks to a cobbling together of open source and services. Delivery has changed, as organizations moved from on-premises servers to the cloud, and endpoints such as smartphones and all manner of IoT devices have become ubiquitous. How data is distributed and consumed has changed, as containers may need just a piece of data to operate, but that data has to scale massively. With all the wiring required to keep these systems up and running, while remaining highly performant, SD Times has recognized 2020 as the Year of Integration.

It is the scope of what needs to be integrated, and the scope of the kinds of systems involved, that is becoming greater. Because of that, it’s getting harder to differentiate what is integration and what is just regular application development, or even data science and analytics. As Matt Brasier, an analyst on the application architecture and platform team with Gartner for Technical Professionals, explained, “Nobody really writes systems anymore that just sit on their own and never talk to anything else. Everything has to be a part of this greater whole.”

In software development, the key technology for integration is the API, a long-used, well-understood way to bring data, functionality and services into your application. APIs and the services behind them change, however, so managing the APIs you create internally and those you rely on externally is important to ensure your application remains functional. “That really comes down to a discipline we call full life cycle API management,” Brasier said. “The idea that the API, the interface in which you’re interacting, should be separate and have a separate life cycle from the back-end service implementation.”

Data analysis: Batch or event-driven?
Organizations are looking to collect and analyze data faster, and closer to real time as well. So, does moving to an event-driven approach to data improve on batch processing? Gartner analyst Matt Brasier said, “Rather than having your sales figures move in a batch overnight and then having a reporting tool that runs on that and generates a report and the sales manager reads it in the morning, you can have events like every time something sells, you generate real-time event reporting in a live dashboard for that. There’s definitely a trend of going in that direction, having access to real-time data via events, which is an expensive thing to do.”

He cautioned that instead of merely thinking real-time is better than batch, organizations should be clear about how and where they expect to get the return on their investment in real-time processing. “Some clients say we don’t want any batch anywhere in our modern architecture because it’s an outdated approach,” Brasier said. “That’s not true; it’s efficient and it’s very good at managing relationships and data consistency. Events are not good at relationships, and they’re not good at data consistency. They are good at real time, but you need to have return on investment. There’s no point in having this live, to-the-second sales dashboard if you still put together your sales targets on a monthly basis.” ❚
— David Rubinstein
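To make the sidebar’s trade-off concrete, here is a toy sketch contrasting the two styles. Everything in it (the record shapes, the region field) is a hypothetical stand-in, not any vendor’s API; a production event feed would arrive via a message broker rather than a Python list.

```python
from collections import defaultdict

# Batch: run once overnight over the complete day's records.
def batch_report(sales_records):
    totals = defaultdict(float)
    for sale in sales_records:                    # full scan; every record is
        totals[sale["region"]] += sale["amount"]  # seen exactly once, so the
    return dict(totals)                           # report is internally consistent

# Event-driven: update a live figure as each sale happens.
class LiveDashboard:
    def __init__(self):
        self.totals = defaultdict(float)

    def on_sale(self, event):
        # Cheap per-event update, but late, duplicate or out-of-order events
        # are now the dashboard's problem; the consistency that batch gives
        # for free has to be engineered back in.
        self.totals[event["region"]] += event["amount"]

sales = [{"region": "EMEA", "amount": 120.0}, {"region": "APAC", "amount": 80.0}]
print(batch_report(sales))        # the overnight answer

dash = LiveDashboard()
for event in sales:
    dash.on_sale(event)           # the to-the-second answer
print(dict(dash.totals))
```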

Ani Pandi, director of solution engineering at integration platform provider MuleSoft, said, “That talks to best practices about how you do design of the APIs. For example, the UX/UI of an application is driven by consumer behavior. The fields that are identified in that particular UI are not exactly the behavior the back-end service is giving us. So we struggle with that design aspect, and then what happens is, we go back into this whole cycle of versioning, change management, where new versions of the API are created, and it’s not a very collaborative process. But if we start doing a design-first approach where the whole idea is to take design thinking practices to the API life cycle, then you have the ability to look at the UX and the experience you want to deliver to your customer and take that and be able to create a model of your API that reflects that. From there, you figure out what you want to implement from your API... is it orchestration, is it modernization of a back end, is it validation and enrichment and aggregation of certain information. Then you start really defining the API that is less susceptible to change.”

It is important to have consistency, and to do that requires a good versioning strategy for the API. Pandi noted that the consistency standard should be accessible to everyone, whether it’s human resources or finance departments defining their own APIs. If they’re using different versioning strategies, then you have inconsistency; you need to be able to define the strategy for everyone.
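As a concrete illustration of the design-first separation Pandi and Brasier describe, the sketch below keeps two published API contracts alive over one evolving back-end record. The resource shape, field names and route table are all hypothetical; this is a pattern sketch, not MuleSoft’s or any vendor’s implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerV1:              # the contract that v1 consumers depend on
    id: str
    full_name: str

@dataclass(frozen=True)
class CustomerV2:              # v2 reshapes the contract for a new UX
    id: str                    # without breaking v1 consumers
    given_name: str
    family_name: str

def backend_fetch(customer_id: str) -> dict:
    # The back-end record evolves on its own life cycle and is never
    # exposed directly; each API version maps it into its own contract.
    return {"id": customer_id, "given": "Ada", "family": "Lovelace"}

def get_customer_v1(customer_id: str) -> CustomerV1:
    row = backend_fetch(customer_id)
    return CustomerV1(id=row["id"], full_name=f"{row['given']} {row['family']}")

def get_customer_v2(customer_id: str) -> CustomerV2:
    row = backend_fetch(customer_id)
    return CustomerV2(id=row["id"], given_name=row["given"], family_name=row["family"])

# Both versions stay published until v1 consumers have been notified and
# migrated: one versioning strategy, applied the same way everywhere.
ROUTES = {
    "GET /v1/customers/{id}": get_customer_v1,
    "GET /v2/customers/{id}": get_customer_v2,
}

print(ROUTES["GET /v1/customers/{id}"]("42"))
```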



Further, Pandi pointed out, organizations need “a strong sense of how you do dependency management.” He explained that means when an API provider decides it has to version or create a new capability for the API, you have to be able to notify consumers of the API, both upstream and downstream. This comes from having an API life cycle management capability, and from that, effective communication and Agile development practices follow. “But today,” he said, “we don’t have sophistication in a lot of organizations to do that. The concept is to move away from a thinking of API management to a concept of API life cycle management, and that’s what we’re heading towards.”

But more than just joining services, APIs can be used by the business for competitive advantage. To do that successfully, though, requires a strategy. “Just creating an API and putting it out there, there’s not much value in that,” said Pandi. “The value is, how do you take that to your partnership.” He gave the example of a bank that started partnering with retail property platforms and embedded its API in its partners’ platforms, so consumers could not only review the homes they want to buy but also apply for a mortgage and quickly get approved for it. “That’s the experience where the bank has literally embedded itself in the customer,” Pandi said. “And that kind of integration, being able to build the building blocks and unlock data internally, and taking that experience outward through an API economy, is what we’re seeing as a differentiator and a capability that organizations are focusing on.”

One trend that is helping organizations deliver value to businesses through integration is the democratization of integration, according to Gartner’s Brasier. In today’s world, specialist teams of integrators are giving way to what Gartner is calling Integration Strategy Empowerment teams, which are creating best practices and platforms that enable non-specialists to create the integration flows they need. This is being enabled by platform providers offering integration tools with simpler user interfaces that don’t require weeks of training to understand.

Matthew Scullion, CEO of data transformation software provider Matillion, agreed that empowering “citizen data professionals” is where data integration is heading. “In the prior generation, the data warehouse was a specific team and a specific part of the IT department. Increasingly today, it’s the citizen data professional doing this stuff on behalf of the business. It kind of has to be. IT departments are turning into the service providers who are providing those citizen data professionals with the tools, and the citizen data professionals are actually doing the innovation with data.

“The interesting byproduct of that,” he continued, “is they still have to load data, they still have to transform it, they still have to embellish it, because those things are computer science, and you still have to do that stuff, it doesn’t magically go away. You need to make the tools consumer-like.”


Is there still a role for integration specialists?
According to Gartner Technical Professionals analyst Matt Brasier, the answer is a definite ‘yes.’ He explained: “First of all, integration specialists will still need to do all of the hard bits of integration; the bits at the back end where you don’t have a REST API exposed by the system because it’s a 15-year-old ERP system that only communicates with CSV files over FTP, or something like that. There will be legacy systems that you won’t be getting rid of in the short term that need to be integrated. There will also be a whole set of best practices and governance that is needed. One of the models I recommend to clients when they’re looking to do this Agile integration is that they create the role of an integration architect, who works with an Agile delivery team in the same way that a security architect or a data architect would. They’re there to provide best practices, to peer review what’s going on and to provide governance as an internal part of that team, if only part-time, based on best practices from their peer group of integration specialists, rather than it being an external review board that you submit things to.”

He went on to say that specialists are still needed to provide best practices, governance, oversight and training to assist ad-hoc integrators, who are not as well-versed in integrations. Brasier said, “Maybe they don’t know patterns like circuit breakers; maybe they are not aware of the finer details of some of the SOAP specifications and things like that, where there still might be a need to help.” ❚
— David Rubinstein
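Since Brasier name-checks the circuit-breaker pattern as something ad-hoc integrators may not know, here is a minimal, framework-free sketch of it. The thresholds and the flaky legacy call are illustrative only.

```python
import time

class CircuitBreaker:
    """Stop hammering a failing system; allow a retry after a cool-down."""
    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after   # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: skipping call to failing system")
            self.opened_at = None        # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0                # a success closes the circuit again
        return result

def flaky_legacy_export():
    # Stand-in for the 15-year-old system that talks CSV over FTP.
    raise IOError("FTP server not responding")

breaker = CircuitBreaker()
for _ in range(5):
    try:
        breaker.call(flaky_legacy_export)
    except Exception as exc:
        print(exc)   # first three calls fail; the next two are skipped fast
```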

Alan Jacobson, chief data and analytics officer at Alteryx, agreed that this democratization is allowing people across different disciplines to do things “they’ve never really been able to do in the past.” He went on to say that the convergence of compute power becoming incredibly potent, data becoming much more available, people becoming much more data literate and technology becoming more accessible is having a dramatic effect on what’s available to people throughout an organization.

“There are a lot of different ways to think about data integration, but the one that resonates best with a data scientist is that the most challenging problems in the world — and usually the most valuable solutions — frequently come when that data scientist takes data from a myriad of systems, not from a single system,” Jacobson said. “When you really want to optimize the business, you need some financial data mixed in with some customer data, maybe mixed in with some logistics data, and you need to blend all that data together to fully optimize the equation. That is a challenging data integration problem. And these systems historically were frequently built by very different areas of the business to solve very different problems, and they don’t naturally always key together. And figuring out how to prep that data, blend it together and get to insight can be challenging.”

As organizations move more of their workloads to the cloud, and look to use cloud-native solutions, there are opportunities to be had, but challenges to overcome. Matillion’s Scullion said the cloud offers companies the ability to compete using their data more quickly, at more scale and more cost-effectively than ever before. “Companies don’t just want to compete using data anymore, they have to, as a competitive imperative. That drives a need to move apace. And if you want to move apace, you can’t rely on small numbers of individuals exercising high-end coding skills, because the availability of skills and the ability to innovate apace just aren’t there. You can’t get away from the raw computer science. Data is coming from different systems, it is in different shapes and sizes, it doesn’t necessarily have all the business rules built into it, and so you still have to do all that stuff we used to call ETL. And in our case, we call it ETL again. The need for cloud-native ETL is actually more present today than it ever was.”
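A small sketch of the blending problem Jacobson describes: three systems that “don’t naturally always key together” joined into one analysis-ready frame. The frames and the key-cleanup rules are hypothetical; the point is the prep-then-blend shape of the work, the part Scullion says “doesn’t magically go away.”

```python
import pandas as pd

# Three systems built by different parts of the business...
finance   = pd.DataFrame({"acct":       ["A-001", "A-002"], "revenue":        [1200.0, 400.0]})
crm       = pd.DataFrame({"account_id": ["a001",  "a002"],  "segment":        ["enterprise", "smb"]})
logistics = pd.DataFrame({"acct_no":    ["A-001", "A-002"], "late_shipments": [0, 3]})

# ...each spelling the account key differently, so prep before blending.
finance["key"]   = finance["acct"].str.replace("-", "").str.lower()
crm["key"]       = crm["account_id"].str.lower()
logistics["key"] = logistics["acct_no"].str.replace("-", "").str.lower()

blended = (finance.merge(crm, on="key")
                  .merge(logistics, on="key")
                  [["key", "segment", "revenue", "late_shipments"]])
print(blended)   # one frame, ready for the optimization question
```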

have all the business rules built into it, and so you still have to do all that stuff we used to call ETL. And in our case, we call it ETL again. The need for cloud-native ETL is actually more present today than it ever was.” In the cloud, ETL must work alongside data warehouses and data lakes to maximize the ability to transform and use data for competitive advantage. MuleSoft’s Pandi sees the need to build integrations that are flexible and don’t require teams to create new projects when things need to be changed. “I think we are at a juncture where organizations are starting to think that, we need to consider integrations to be a very strategic capability within the organization,” he said. Gartner’s Brasier said the research firm is seeing a trend towards what it calls the hybrid integration platform. It’s a concept, he said, that explains that organizations will need more than one integration technology to solve all of their integration use cases. “You will have a mixture of specialist integrators and these ad-hoc integrators — developers and data scientists. Then you’re going to have a mixture of data integration and application integration and event integration,” he said. “You’re going to have all of these different use cases and there’s going to be no one tool that will solve all of these for you, so what you need to do is manage and give advice on a portfolio of tools, each of which is clearly defined for a specific use case.” Mattilion’s Scullion said the point of data integration is delivering value. “The vast majority of business value being created in the cloud as regards data analytics isn’t migration and modernization projects. It’s net new questions being asked and answered in businesses in the cloud... questions a company wasn’t asking of itself five years ago but is now. Thinking about the citizen data professional, and these businesses not just wanting to but having to compete using data, faster than they ever have been able to before, fueled by the cloud, that’s most of what we see happening, and why data integration tools are really important.” ❚



INDUSTRY SPOTLIGHT

‘API First’ paves the way for agile integration
BY JEFFREY SCHWARTZ

Until now, organizations have addressed the connectivity issue with middleware or enterprise application integration infrastructure, with some form of enterprise service bus at the core. However, today’s cloud-native applications may require new types of platforms and data sources, which is why Red Hat delivered a unified cloud-native integration platform in February 2018. Red Hat Integration has enabled, among other things, organizations to transform their traditional middleware architectures into a more agile framework. Sameer Parulkar, Red Hat’s product marketing director for middleware integration, spoke with SD Times, underscoring that he believes organizations looking to roll out modern, cloud-native applications should embrace agile integration as a core component of their application architectures.

SDT: What is agile integration?
Sameer Parulkar: We started talking about agile integration at Red Hat Summit in 2017. We were looking at the integration space and the capabilities that we offer, as well as some of the challenges from the customer perspective of adopting these integration capabilities and providing faster and competitive solutions. And then we spoke with a lot of our customers, and there was consensus that integration should be more agile and align with DevOps. One of our key motivations with agile integration was to essentially position integration as a key business capability, enabling differentiated services for customers.

In what way does Red Hat Integration make integration agile?
Agile integration is based on three key capabilities: distributed integration, the ability to distribute your integrations across your applications; APIs, essentially using APIs to create your integration and having an API-first approach; and leveraging technologies like containers to essentially host and connect your integrated applications. As customers are adopting microservices, DevOps and cloud, and they are developing continuous services, integration capabilities should be just part of that.

How is the architecture different from your typical enterprise application integration or middleware platforms?
When you look at technologies like enterprise service bus or traditional integration technologies, they have been here for 10-plus years in the industry. I used to be an architect before moving to Red Hat eight years back, and I implemented several of those technologies. There are incredible benefits to those architectures, and proven technologies for traditional challenges.

So, middleware is no longer suited for today’s environments?
Frankly, middleware technologies and how customers deploy them have evolved, especially with the adoption and popularity of cloud-based services. From a technology perspective, there are benefits to traditional enterprise service bus technology, and one of those benefits was the ability to control the environment. All the integrations are done using the enterprise service bus that is deployed in one place. Everything connects to that one thing. What happens is you get control, efficiency of solution, and you know what’s going on. The challenge is that this enterprise service bus technology has become a monolithic application by itself.

When you say monolithic, do you mean legacy?
What I mean by that is, if I’m deploying an enterprise service bus, depending upon how it was done and deployed, that itself became a monolithic application. Now, with the adoption of cloud, API-based digital solutions, more customer-centric solutions, more event-driven solutions, and with the necessity today of doing things much more quickly, like with DevOps or microservices, it is incredibly difficult to retool your centralized ESB-based services to be part of DevOps processes, or to connect as well as create microservices. From an organizational perspective, oftentimes the ESBs were being developed and managed by centralized integration teams, which are often called ICCs, or integration competency centers. Those are typically the integration specialists. Now, specialists bring a lot of expertise in terms of integration — they’ve been doing this for several years. And the integration oftentimes is hard, but they know how to do it. We believe the ICC teams should be enablers of innovation and work with the application development teams. But they need tools and technologies that can adapt to fast-changing requirements as well as leverage existing assets: technologies that allow them to deploy capabilities embedded or distributed with applications.

Have you found that organizations are willing to move away from this centralized focus to the more distributed approach?
Since we introduced it back in 2017, a lot of things have changed, especially with the adoption of containers. Customers are adopting containers to create and scale their environments and adopt hybrid cloud architectures. Now, with an approach like that, which is one of the key capabilities of agile integration, we are seeing much more adoption of this type of technology within the marketplace.

Are companies actually implementing this approach?
We have several customers, like UPS, who have been longtime users of our integration technology. And now they’ve started to adopt more of a DevOps approach, a continuous approach, with their application development, and adapted their integrations to the new architecture. Another one is the Government of Canada’s Innovation, Science and Economic Development (ISED) department. One of their innovation departments started using APIs so they can share government services among different departments to provide citizen services. They have adopted our integration technologies, like our 3scale API management, with container technology. Swiss Federal Railways is adopting API management capabilities with our OpenShift Container Platform to bring their applications to market faster.

What are the biggest drivers of this architectural change?
Providing a differentiated customer experience, which is sometimes called digital experience, is becoming extremely important. That is driving a lot of this innovation, and they want to do this faster, with more agility. To deliver that, they are adopting different approaches. And an API approach helps because if you think about your services as APIs, that in turn helps to create those differentiated services, to connect with customers, suppliers and partners, by essentially creating an API platform or API type of services. Not every customer is doing that yet, but it’s something we’ve seen changing over the last few years.

Where does introducing API management fit into this?
As APIs become important, managing, securing, scaling and measuring those APIs becomes even more important. That is API management. When adopting an API management model, an important aspect is the gateway, which controls access to the APIs. The next challenge is how you want to deploy that capability. A common approach we often see is when you configure your API for security and policies, you can deploy your gateway across your enterprise. New technologies like service mesh provide service-to-service communication, especially for microservices. API management capabilities are complementary to service mesh.

With the recent release of OpenShift 4, how has that advanced your agile integration and API-first approach?
A key focus is on a capability we call Operators, which, in a container environment, allows you to build and manage Kubernetes-native applications. It provides an automated way in which you can deploy your services into that container environment. You can actually define it as a service and then use tools to deploy those services directly into your environment. The advantage is that you can have an operator for different sets of capabilities. For example, you may have an operator for API management capability, or an operator for your messaging capability, or an operator for data streaming. Then you can just use the operator to define how you want to configure and install that capability in your container environment. Once you configure it, it’s easier to automate those deployments across your environment. Operators provide a mechanism to automate deployment of integration services, making it easier for them to be part of the DevOps process.
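Parulkar’s description of Operators boils down to a reconcile loop: drive what is observed in the cluster toward what a declarative definition asks for. The toy below shows only that idea in plain Python; the APIManager resource shape is hypothetical, and a real Operator would watch the Kubernetes API rather than a dictionary.

```python
desired = {               # what a team declares, e.g. in a custom resource
    "kind": "APIManager",
    "name": "storefront-gateway",
    "replicas": 3,
}

cluster = {"storefront-gateway": {"replicas": 1}}   # current observed state

def reconcile(desired_state, observed):
    """Drive observed state toward the declared definition."""
    name = desired_state["name"]
    current = observed.setdefault(name, {"replicas": 0})
    if current["replicas"] != desired_state["replicas"]:
        print(f"scaling {name}: {current['replicas']} -> {desired_state['replicas']}")
        current["replicas"] = desired_state["replicas"]   # stand-in for an API call
    else:
        print(f"{name} already matches its definition; nothing to do")

reconcile(desired, cluster)   # run on every change event
reconcile(desired, cluster)   # second pass is a no-op: reconciliation is idempotent
```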

Would you explain how the whole notion of an API-first approach changes the integration process?
When you look at an API-first approach, it’s about the services that you deliver. It’s about providing your services or functionality as APIs. What it forces you to do is think about how you can actually share those services with your other customers and partners. This enables you to create much more differentiated services.

To what extent are organizations applying real-time messaging in these new scenarios?
Real-time messaging is a key requirement, as it directly helps access to the most updated data and just-in-time data. Now we are also seeing more data streaming capability. The idea behind that is, with technologies like Apache Kafka, you can get streams of data from one place to the other and share them among different entities. We can analyze the streams of data to make more real-time decisions and do further processing.

Are most people starting out with a pure cloud environment, or are they doing it in a more hybrid way?
Quite frankly, I would say they are typically more in a hybrid environment, because they have their existing assets and those aren’t going away, and then there are a lot of assets that are born in the cloud. And so agile integration actually becomes a real bridge for them to adopt hybrid cloud technologies.

How do you see that changing?
We are seeing that progressing much more now, because when you look at it from a container adoption perspective, containers enable you to create a hybrid cloud deployment architecture. When you say hybrid cloud or multi-cloud, containers provide you the layer to deploy your applications on-premises, in a private cloud or in a hybrid cloud. Plus, you may have procured multiple SaaS apps. You need to connect all of this together and manage all of this together. So I think adoption of hybrid cloud is essentially contributing to customers requiring agile integration. ❚


a!"#$% &'( )$%#*&!+

,!(%)-&)"$* ."! /0/0

Adam Scroggin, CEO of CardBoard
DevOps will continue to be key as we move toward 2020. Software teams will notice more and more that once a product is released, it is not done. Software products are never done. We have begun to see more applications moving to mobile and web, which allows software teams to instrument their product to learn if customers are using what they released and how much value they are getting from it. Not all ideas are good ones, but getting out there and testing them before scaling will be vital for the next decade. Good DevOps practices have paved the road for ideas to move into production quickly.

Monte Zweben, CEO of Splice Machine
“Cloud disillusionment” blossoms because the meter is always running. Companies that rushed to the cloud finish their first phase of projects and realize that they have the same applications they had running before, which do not take advantage of new data sources to make them supercharged with AI. In fact, their operating expenses actually have increased because the savings in human operators were completely overwhelmed by the cost of the cloud compute resources for applications that are always on. Ouch. These resources were capitalized on-premises but now hit the P&L.

Antony Edwards, COO of Eggplant
Technology is going to become increasingly regulated across the globe. Testing will not escape this, and by 2025 AI algos will need government certification. Testing will need to be able to guarantee that the system is safe to release, delivers the desired experience and is ethically sound. In the 2020s, testers will become software optimizers. They will focus on utilizing intelligent technology to help digital businesses continually improve.

Scott Johnston, CEO of Docker
Containers pave the way to new application trends. Now that containers are typically considered a common deployment mechanism, the conversation will evolve from the packaging of individual containers to the packaging of the entire application (which is becoming increasingly diverse and distributed). Organizations will increasingly look for guidance and solutions that help them unify how they build and manage their entire application portfolio, no matter the environment (on-premises, hybrid/multi-cloud, edge, etc.).

Tatianna Flores, head of Atos North America’s AI Lab
In 2020, AI product companies will incorporate elements of reinforcement learning and wide-scale data sharing to remain competitive. 2019 revealed that highly specialized applications of AI geared toward industry-specific problems are hot commodities. Tesla acquired a company that focuses exclusively on object recognition, and McDonald’s acquired a speech recognition company focused on languages. In the coming year, we’ll see even greater competition to improve performance in these popular and specialized applications of AI. Products will need to integrate reinforcement learning to constantly improve deep learning applications and stay ahead of the competition. Also, movement toward wide-scale data sharing will occur more rapidly.

Tim Tully, CTO of Splunk
2020 will be the year of the indulgent user experience, and that doesn’t bode well for the holdouts. Even as enterprise and industrial applications evolve, they’re not yet consumer-friendly enough for daily users. Enterprise software companies that are still producing dull user experiences will find it harder to keep their users loyal, and will be even more vulnerable to disruption. When it comes to enterprise UX, the companies that will succeed are the visionaries that design software to make people’s entire experience better.

John Pocknell, senior solutions product manager for Quest Software’s information management business unit
NoSQL will gain momentum. NoSQL hasn’t seen a huge amount of movement in recent years, but I believe we’ll see it pick up more next year, especially as people move toward fresher and newer data needs. While relational databases are good for traditional workloads like OLTP applications and business analytics (OLAP), for more complex OLTP workloads that include low-latency applications, NoSQL is better (versatility, agility, scalability). Ultimately, it’s a matter of getting the right database to suit the workloads of the organization, especially with the variety of structured and unstructured data in use.

Srinath Perera, vice president of research at WSO2
Cloud APIs will democratize AI. To date, custom AI model building has been limited to large organizations with the resources to tackle the complexity of AI deployment and management, not to mention the scarcity of experts and data. But now, cloud APIs make it possible for a few organizations to concentrate on providing the expertise and data required to solve a given problem, and then share or market the AI models they build. In this way, cloud APIs hold the promise to solve many AI use cases in 2020 by letting organizations of all sizes gain access to AI models provided by data experts.



Chris Patterson, senior director of product management at Navisite
Big data democratization will make everyone data analysts. Big data has been a buzzword for so long, it has lost value. But, in 2020 and beyond, we’ll see it begin to provide real, tangible results. One reason for this is that data warehousing tools have improved and are no longer inhibitors to accessing enterprise insights in real time. Going forward, employees and stakeholders — from IT to the Board of Directors — will be able to more easily tap into the data well and become analysts themselves. And, with the democratization of data, the focus will shift from how to access data to: 1) asking the right questions of data, and 2) identifying who within your company is best positioned to analyze and glean answers from that data.

Prince Kohli, CTO of Automation Anywhere
RPA will play a pivotal role in global data privacy and governance initiatives. The 2020s are shaping up to be the decade defined by big data — with the advent of 5G and the explosion of connected devices. In this new era, we’ll see even more pressure on companies to be fully transparent about the information they collect and how it’s used, with legislation like GDPR and the upcoming California Consumer Privacy Act (CCPA) representing only the tip of the data governance iceberg. Additionally, as malware increasingly becomes enhanced with artificial intelligence (AI) to identify network vulnerabilities, intelligent, secure bots will be a critical line of defense against data breaches.

Matthew Halliday, co-founder and VP of product for Incorta
Quantum computing applications will take off in 2020. Quantum computing remains in the most nascent stages of development, but the possibilities are fascinating — quantum computing unlocks a new world of use cases that were previously impossible. While we may still be years away from widespread use cases, the number of initial applications will skyrocket in 2020, as companies like Google and IBM join smaller outfits like Quantum Thought in beginning to commercialize their quantum abilities. As a result, 2020 will bring heavy investments in quantum computing applications from venture capitalists and major enterprises alike — the upside is simply too great to ignore.

David Cramer, co-founder and CEO of Sentry
Tool and framework frenzy will continue; fatigue will worsen. The plethora of tools, languages, and frameworks are adding massive complexity to the application development ecosystem. IT teams are challenged to interconnect these disparate languages and platforms to build applications that are the lifeblood of business in today’s digital economy. And while conference halls echo with cries of tool and framework fatigue, there will not be a clear resolution in 2020. In fact, there will likely be more disruption. Although it seems React.js is approaching victory for front-end development, there are still a number of viable competitors ready to shake things up. On the back end, there is still no standardization, in spite of significant innovation in recent years. PHP, Ruby, Python, Node.js, Java, and .NET are all in use — but there is no clear winner and that won’t change in 2020. As teams struggle to connect it all, even more tools — many of which will be open source — will emerge to integrate technologies, but the challenges of complexity and control will get worse before they get better.

Kirit Basu, VP of products for StreamSets
DataOps will gain recognition in 2020. As organizations begin to scale in 2020 and beyond — and as their analytic ambitions grow — DataOps will be recognized as a concrete practice for overcoming the speed, fragmentation and pace of change associated with analyzing modern data. Already, the number of searches on Gartner for “DataOps” has tripled in 2019. In addition, StreamSets has recognized a critical mass of its users embracing DataOps practices. Vendors are entering the space with DataOps offerings, and a number of vendors are acquiring smaller companies to build out a discipline around data management. Finally, we’re seeing a number of DataOps job postings starting to pop up. All point to an emerging understanding of “DataOps” and recognition of its nomenclature, leading to the practice becoming something that data-driven organizations refer to by name.

Dr. Jans Aasman, CEO of Franz, Inc.
Digital immortality will emerge. We will see digital immortality emerge in 2020 in the form of AI digital personas for public figures. The combination of artificial intelligence and semantic knowledge graphs will be used to transform the works of scientists, technologists, politicians and scholars into an interactive response system that uses the person’s actual voice to answer questions. AI digital personas will dynamically link information from various sources — such as books, research papers and media interviews — and turn the disparate information into a knowledge system that people can interact with digitally. These AI digital personas could also be used while the person is still alive to broaden the accessibility of their expertise.

Michael Morris, CEO of Topcoder
So what’s the future of work? It’s the passion economy. Forget the set-schedule work week — the future of work will be driven by the “passion economy,” especially in the tech world. As the prevalence of open workforce models grows, freelance designers, developers and data scientists will shift loyalties to the work that’s out there, rather than a specific company. In order to recruit and retain people with coveted tech skills, companies will need to provide interesting projects for the freelance community that challenge and inspire them.


Adam Famularo, CEO of ERwin
Data finds a soul. Highly regulated industries will begin to change their philosophies, embracing data ethics as part of their overall business strategy and not just a matter of regulatory compliance. In addition, ethical artificial intelligence (AI) and machine learning (ML) applications will be used by organizations to ensure their training data sets are well-defined, consistent and of high quality.

Avon Puri, CIO of Rubrik
Data privacy takes the next step. It used to be that organizations had to spend millions of dollars on consultants to find out where PII (sensitive) data lived, but today there are a number of data privacy and governance technologies that can bolster security and data practices. Next year will see an inflection point in organizations finally understanding more about their data, which will be critical to improving data privacy standards as an industry.

Vanessa Pegueros, chief trust and security officer at OneLogin
With the convenience the iPhone has brought to the masses with facial recognition, end users will continue to expect similar offerings from most if not all applications in 2020. Although facial recognition has its flaws, the convenience outweighs the concerns for users.

Maty Siman, founder and CTO at Checkmarx
Open-source vulnerability. With organizations increasingly leveraging open-source software in their applications, next year we’ll see an uptick in cybercriminals infiltrating open-source projects. Expect to see attackers “contributing” to open-source communities more frequently by injecting malicious payloads directly into open-source packages, with the goal of having developers and organizations leverage this tainted code in their applications.

Alan Jacobson, chief data and analytics officer, Alteryx
The CDO role is evolving. The CAO is the new breed. The role of the data chief is changing, as is their title. The chief data officer needs to progress, and in 2020 the chief analytics officer title will really rocket upwards. It’s a manifestation that at last the role, and the projects managed within business, are less about data and more about what businesses are doing with it. The CAO is now a type of digital transformation officer — and in fact could just be termed a transformation officer — a sign that those in the role are becoming more tightly focused on what business success is really about.

READ MORE IN ANALYST VIEW: IT predictions, or parlor tricks? Page 53

Oskar Sevel Konstantyner, product owner and team lead at Templafy
In 2020 we’ll see enterprises ensuring that their choice of cloud doesn’t limit their agility and performance. While AWS, Azure and Google Cloud look very much alike, they do have specific distinguishing features. Enterprises are moving toward multi-cloud computing so as not to limit themselves to the features of a single cloud. Initiatives like Azure Arc, where it’s possible to deploy Azure technology on Amazon servers, clearly show how cloud vendors support this journey. 2020 will be less about retaining customers by locking them to a single cloud vendor, and more about convincing them to stay by being the best in some areas — and admitting that other vendors might offer better services in other areas.

George Gallegos, CEO of Jitterbit
2020 will be a test for the integration market. The integration market is one of the hottest markets today, and we don’t expect demand to slow. But integration comes in many flavors, and while traditional integration offerings may work well for a small subset of businesses, the biggest impact and growth will occur in enterprises undergoing digital transformation and relying heavily on comprehensive connectivity strategies. The past year was marked by several acquisitions and partnerships as integration and API vendors scrambled to expand capabilities to support enterprise-class needs. 2020 will be a test to see which bets worked, and I suspect only a handful of vendors are well equipped to address all aspects of enterprise-class iPaaS; for those who are not, the gap will become even more stark.

Steve Burton, DevOps evangelist at Harness
DevOps teams will continue to replace Jenkins. There will be a new breed of CI/CD solution where engineers won’t write a single script, update a single plug-in, restart a single slave, or work late nights or weekends debugging their failed deployments. Instead, engineers will adopt Continuous Delivery as-a-Service, where deployment pipelines auto-verify and roll back code, thus allowing engineers to get their lives back after 6 and spend weekends with their family and kids. ❚


Crunch culture can destroy development teams

BY JENNA SARGENT

While it may seem like there are short-term benefits to longer work weeks, it’s not sustainable and takes a heavy toll in the long run

O

ver the past few years, as work/life balance has become more of a priority for developers, the notion of a “crunch culture” has been the subject of much discussion. Crunch has been especially prevalent in the game development industry, where game developers have come to accept that crunch is just a part of the job. According to a 2019 study by Take This, 53% of game developers reported that “crunch” is an expected component of their employment. According to the survey, crunch is defined as working more than 40 hours per week for an extended period of time. Crunch is identified by “emotional exhaustion, reduced personal accom-

plishment, and feelings of hopelessness,” said Take This. But crunch isn’t just something that happens in the game development industry. Enterprise software developers also experience crunch cultures at work. No matter the industry, crunch is often caused by unrealistic deadlines, a lack of communication, and a lack of stability, said Vlad A. Ionescu, chief architect at ShiftLeft. Chris Nicholson, CEO of Skymind believes that crunch is more prevalent in startups than in established companies. This is because in a startup, they’re essentially trying to find a profitable business model before they run out of money. “Anybody who joins a startup in


018-22_SDT031.qxp_Layout 1 12/19/19 4:30 PM Page 19

the early stages really signs up for that pressure,” he said. “They sign up for the pressure, some of the sacrifices. There are things nobody should have to sacrifice, but when a lot of young people sign up for startups, they’re probably putting family on hold for a while, and they’re probably not going to be seeing their friends as often. That’s kind of how it goes. That’s the deal.” According to Ionescu, there is always a long-term cost to crunch. It may seem good in the short-term to have that increased productivity, but the effects of crunch will almost always catch up to a company. “You cannot really sustain it,” he said. “Either your productivity goes down in the end or your employees start to leave. You get all these side effects

that happen long-term.” To a certain extent, people are paid in prestige just as much as they get paid in salary, Nicholson explained. This would mean employees are more willing to put up with crunch if they’re working for a company they view as prestigious. But according to Ionescu, this form of payment really only matters for a few years. “I think overall this may work for a few years maybe, but in the end companies like Facebook and Google and so on, they want to make a name for themselves as a good employer, where it’s fun and enjoyable to work for them,” said Ionescu. “And in fact, this is probably much more important to them than getting the short-term productivity out of people. And

“And in fact, this is probably much more important to them than getting the short-term productivity out of people. And although people might put up with it for a while, it’s never sustainable. And I think prestige comes as a form of motivator for people for sure.”

Phil Alves, CEO of Devsquad, added that the turnover that occurs in crunch environments actually causes more delays than if employees had a proper work/life balance and teams were continuously evolving. “With proper time to rest, employees will be more productive and stick around longer, enabling them to accomplish more in the long run,” he said.

Crunch leads to burnout



Crunch also leads to burnout, which can have severe consequences, both for the individual and the team as a whole. “This is something serious and it always destroys either the team or the individual,” said Ionescu. “Burnout is no joke. Burnout can happen to anyone, and the outcome can be loss of productivity at best, to things more serious like depression and anxiety and panic attacks and so on. It really takes a toll on the individual, creates tension in the team. It’s a thing you really, really want to avoid. And I’ve seen it first-hand. Sometimes people around me are getting burned out and I think as thought leaders of our industry, we should reach out and help out people who are in this situation and talk to them and understand them. In most cases, people are burning out, but they’re not realizing themselves that they’re burning out.”

Burnout can happen in any industry, but it is especially common in the tech industry. At KubeCon + CloudNativeCon, which took place in San Diego in November, Jennifer Akullian, founder of Growth Coaching Institute, an organization that provides coaching to tech executives and organizations, gave a talk on mental health and burnout in tech. According to a 2016 study by Open Sourcing Mental Illness (OSMI), 51% of tech workers have been diagnosed with a mental health condition by a medical professional. 73% have a mood disorder (like depression or bipolar), 61% have an anxiety disorder (such as generalized anxiety, social anxiety, or a phobia), 19% have ADHD, 12% have PTSD, and 8% have OCD. According to Akullian, these percentages don’t add up to 100% because it is very common for someone to have more than one diagnosis at a time. OSMI surveyed 1,570 tech employees as part of its research.

“[According to the research], 80% of people endorse that [their mental illness] does interfere with their work,” said Akullian. “So we have a community of high-functioning tech professionals, around 50% of whom are diagnosed with a mental illness. Eighty percent of those people are not functioning to their full capacity.”

According to Akullian, there are some common reasons for burnout, such as not feeling valued at work, not getting recognition for the work you do, and not doing something that is enjoyable. In the tech industry there are additional factors that contribute to burnout, like performing at a very high cognitive level, being physically isolated from teammates, and working on high-stakes projects that often fail. “Putting your heart into something, doing the best you can, exerting yourself using all of your energy and resources, and then it failing, and then it fails again and again and again, and that’s kind of your job — we’re in technology and we’re innovative and in order to move forward and create new things you have to fail a bunch of times first. That takes a mental toll on you as a professional,” said Akullian.

Akullian explained that your brain chemistry when you’re burned out often resembles the brain of someone suffering from a mental illness. When you’re burned out, areas of cognition, processing speed, working memory, and problem solving all tend to be impacted.

It’s hard to shake the crunch mindset, even once you’re out of that environment

Devsquad’s Alves said that many of the company’s employees come from crunch environments. He said he’s noticed certain differences between these employees and ones who haven’t been exposed to crunch culture. One difference is that employees coming from a company where crunch was present work fast and are more results-oriented. This is a good thing, but it can also make them more likely to make mistakes since they are working too quickly, he explained. In addition, those employees tend to stay highly productive even once they’re in a more sustainable environment, and they handle stressful situations well because they are used to working under pressure 24/7, he explained. Alves added that it takes some time for those employees to actually adapt to the more sustainable work schedule. “It’s difficult for them to register that they won’t be fired if they want to go home and spend time with family and friends,” he said. “When you say that they won’t be penalized for working a normal 40 hours per week, they don’t believe you. Most people work 60 hours a week when they first start and slowly start to reduce as we keep enforcing that we don’t require that, and that we think it’s important to take care of our body and social life. Eventually reality sets in and they start to adapt.”

Crunch in video game development



Crunch is especially prevalent in the video game development industry. As more horror stories have emerged from game developers, gamers have had to reconcile their favorite games with the “crunch culture” that went into developing them. Crunch culture has been permeating the game development industry for over a decade. For example, in 2004, LiveJournal user “ea_spouse” wrote a post revealing that their partner, who worked at EA, gradually started working longer hours until ultimately he was working mandatory 13-hour days, seven days a week — about 85 hours per week — for no additional compensation. Several other exposés have been published in the past few years, from Kotaku sharing the experiences of developers on Rockstar Games’ “Red Dead Redemption 2” to Polygon reporting that some developers on Epic Games’ “Fortnite” had worked 100-hour weeks at points to get “Fortnite” updates out.

The history of crunch in video game development

Initially, crunch was the result of a Box Product mentality that created intense time constraints, explained Virginia McArthur, consulting executive producer at game studio Endless Studios. Crunch was especially common as game developers worked to get physical game discs ready for the holiday time frame, which meant completing a game by August.

In the early 2000s, there was a mind shift and platform shift that allowed teams to deliver faster. Beta releases opened up games to early feedback and allowed players to experience a game before its release, which made them feel like a part of the development process, McArthur explained. Digital games also allowed for more flexibility on ship dates. They also allowed game studios to build R&D teams that could take their time to develop games that would get tested and played before they were green-lit into production.

Three steps to reduce burnout

Jennifer Akullian, founder of Growth Coaching Institute, an organization that provides coaching to tech executives and organizations, offered three steps that people can take to help alleviate or reduce burnout:

1. SLEEP: Sleep deprivation mimics a mental illness. According to Akullian, for most people, sleep lets us process thoughts in the absence of norepinephrine, which is a stress-inducing hormone. “The expression ‘sleep on it’ is recognized across dozens of different languages because there is this universal understanding of the problem-solving benefits of sleep,” she said. “If you’ve ever struggled a lot on a problem, gotten sleep, woken up and it’s just come to you, that’s kind of what I’m talking about. So the sleep has an impact on the brain that allows you to cognitively move forward in a way where you might have been stuck before having that sleep.”

2. UNPLUG: Take a break from technology, and step outside. According to Akullian, there is significant research around mental health, the brain and being in nature. Fifteen minutes outside can significantly reduce cortisol (stress hormone) levels, and 45 minutes can significantly increase cognitive performance, she explained. “There’s also a piece around attention, where because nature has fewer things to attend to, if you go outside and put yourself in that environment, when you return back to work it sort of resets your brain and helps you with attention moving forward,” Akullian said.

3. TALK: According to Akullian, the act of talking can reduce stress or feelings of anxiety. When a person talks, the neural activity shifts from the amygdala, which is the emotional center of the brain, into the pre-frontal cortex, which is where rational thinking and problem-solving occur. “There is a word for the relief that comes when you talk. It’s called catharsis. Talking is cathartic,” said Akullian. “There is a whole field of therapy that is based on research and science around the act of talking. It doesn’t make the problems go away, but it takes a little bit of the sting out and makes it easier for you to move forward and address it.”

OSMI has a number of resources on its website to help tech employees deal with burnout or mental illness. ❚


“With this new approach, once in production, teams could evaluate the right features and team size needed against the budget and funding available and then pick a ship date,” said McArthur. “Larger companies had the luxury of doing this with new titles, but for older franchise titles, risk for crunch still existed. You had to get products out to get sales and, importantly, keep the team.”

According to McArthur, when free-to-play games eventually started entering the market, it led to a degradation in gameplay quality and “low-budget titles that could be made with smaller teams for less, resulting in more intense crunch and ridiculous timelines.”

McArthur added that in the early days of gaming, developers were relatively young and more willing to work longer hours. When these developers aged and started families, these crunch schedules were no longer viewed as a badge of honor. “Now, working smarter and efficiently has become the mode of choice,” said McArthur. “Agile still exists, but it’s important to work with your community, focus test early and use digital and social distribution channels to stay on top of what players want and develop better games.”


Nicholson also believes that crunch is so prevalent in games because, unlike a piece of software that gets regular updates, games are typically only played through once. If a game is buggy, it ruins the player’s one-time experience of it. This puts a lot of pressure on game developers to get things right. It’s not as important at a SaaS company to get things right on the first release, Nicholson explained.

“It’s a piece of software that runs in the cloud,” said Nicholson. “Let’s say I host it. I can update that five times a day. And in five times in a single day I can improve my users’ experience of it … Deadlines don’t have the same meanings for web-hosted SaaS companies because they know that their errors are not ruining their users’ unique experience and they know that they’re going to have a chance to recuperate.”

The future of crunch culture

Fortunately, Ionescu believes there is a growing understanding that crunch is bad. More and more, thought leaders in the industry are talking about how it is unsustainable and how people are more motivated by positive forces than negative ones.

Alves isn’t so sure that crunch will ever be completely eliminated. He believes that the only way to change this is to have different KPIs in place. “We need to reward good leaders,” Alves said. “Unfortunately, bad leaders are rewarded for ‘quick wins’ when they are actually causing huge internal and future damage. To be honest, I don’t think it will change anytime soon.”

Nicholson also doesn’t think crunch will get better in the games industry. “I don’t see how they can fix the fundamental structure of their industry, which is people play a game once, I need to provide a bugless experience and deliver on a deadline like Hollywood. They have a release date — they’re a cross between Silicon Valley and Hollywood,” said Nicholson.

McArthur believes that a shift has already occurred in the games industry, which may mean that these structural changes aren’t that far-fetched.

Crunch doesn’t have to happen — How can it be eliminated?

There are many ways that crunch can be reduced or even eliminated. The method for eliminating crunch depends on what the cause of it is. Sometimes crunch can be caused by unrealistic timelines or a lack of stability that forces teams to be constantly fixing things that are breaking, Ionescu explained.

“I think both of these cases have their own situations and ways to deal with it, but overall I think there has to be some kind of honest conversation at the management level,” said Vlad A. Ionescu, chief architect at ShiftLeft. “You can’t have someone ask for things that are not realistic and then maybe the engineering groups are running with it. The engineering teams need to take ownership of this crunch problem and say the way it is there’s no way to build this in X amount of time. You have to take into account what’s actually realistic. And you cannot take shortcuts. You can’t say ‘Okay, we’re not going to do QA or good engineering practices just to save time.’ It looks like it’s saving time in the short run, but in the long run it’ll burn people out and cause much more work than initially necessary.”

Virginia McArthur, consulting executive producer at Endless Studios, agrees that communication is key to preventing crunch. “Engineering should work closely with the PR and marketing teams to set realistic communication targets, review what the market demands and build a schedule accordingly in order to develop realistic, healthy timelines,” said McArthur. “Today, decisions are no longer siloed at the executive staff level, and smaller teams have equal input on what is realistically doable.”

Another way to reduce crunch is to “release early and often,” McArthur explained. For example, at Endless, which is a game studio, they put more of a focus on smaller titles, and make sure to include time for QA and community feedback. “We also strategically release many different game options, let our players decide which ones they like the best and then evolve the title,” she said. “This alleviates a crunch to develop a perfect game, while also revealing what users really want to play.”

Phil Alves, CEO of Devsquad, also believes that crunch is often the result of weak leadership and a lack of planning. By managing stakeholder expectations and training leaders, crunch can be avoided. ❚

“Because the way we distribute and consume games has evolved over time, executives have been forced to change their expectations,” said McArthur. “There’s been a shift in a positive way to healthier work environments; employees are leading more well-rounded lives and valuing quality time with family and friends over an unhealthy crunch to get an extra week on Steam or in app stores.”

For example, Nintendo had planned to release “Animal Crossing: New Horizons” for the Nintendo Switch in 2019, but at E3 that year, the company revealed that it was pushing back the release to March 2020. “For us, one of our key tenets is that we bring smiles to people’s faces, and we talk about that all the time,” Doug Bowser, president of Nintendo of America, told IGN at E3. “It’s our vision. Or our mission, I should say. For us, that applies to our own employees. We need to make sure that our employees have good work-life balance. We will not bring a game to market before it’s ready. We just talked about one example [with ‘Animal Crossing’]. It’s really important that we have that balance in our world. It’s actually something we’re proud of.” ❚



DEVOPS WATCH

JetBrains introduces new developer collaboration tool

BY CHRISTINA CARDOZA

JetBrains has announced the launch of Space, an integrated team environment for creative teams. The announcement came at the company’s KotlinConf, its conference for the Kotlin programming language.

“In Space, the concept of a team is a first-class citizen. When you join a team, you are automatically included in everything related to it, be it meetings, blogs, source control, calendars, vacations, etc. This eliminates the need for creating concepts such as groups and then making sure that every team member is also part of the corresponding group,” Natasha Katson, team tools product marketing manager at JetBrains, wrote in a blog.

According to JetBrains, while many teams are switching from spreadsheets to project management tools to manage their agile projects, the tools still aren’t integrated with the entire creative workflow process. Space is designed to combine DevOps, communication, team and project management into a single solution.

The code review feature in Space tracks discussions and has a transparent system for accepting changes and resolving concerns.

It will feature resources such as internal blogs, meeting scheduling and collaboration tools.

“Most digital collaboration environments are in fact a mixed bag of solutions tackling different problems, from development tools to task management ones. This leaves people switching tools and tabs, manually copying information, and generally losing time and creative flow,” JetBrains CEO Maxim Shafirov said. “JetBrains Space is changing this — and thus changing the foundation of creative work, software development included.”

“Space fills a crucial gap in creative workflows, which is why we’ve been using it internally for two years now. We are thrilled that other teams can now benefit from this environment as well,” Shafirov added.

In addition, Space covers the software development process with the ability to store data in one place and integrations with tools for source code management; code review and browsing; continuous integration, delivery and deployment; package repositories; issue tracking; planning; and project documentation. Space is available as an early-access program with a freemium starting tier. According to the company, the ultimate goal of Space is to expand to more teams, such as designers, marketers, sales and accounting. JetBrains also plans to add a knowledge base, CI/CD pipeline automation, and personal to-do lists and notification management to Space. ❚

Sonatype gets new strategic partner to help scale DevOps

BY CHRISTINA CARDOZA

Vista Equity Partners has announced it is acquiring a majority interest in Sonatype. Going forward, the companies will work together to continue to provide DevOps automation and open-source governance solutions.

“At Sonatype, our vision, strategy, and execution have long been focused on helping software engineering teams scale DevOps by automatically harnessing all of the good that open source has to offer, while minimizing the inherent risks. Vista, perhaps more than anyone else in the world, understands our vision and appreciates the strategic importance of our mission,” Sonatype’s CEO Wayne Jackson and CTO Brian Fox wrote in a post.

Sonatype will continue to operate as normal. With Vista as a new partner, the company explained, it will be able to continue to focus on and foster its Nexus solutions and community with Nexus Repository Manager, Nexus Lifecycle, Nexus Firewall, Nexus Auditor, and the full Nexus Platform. Sonatype will also continue to invest in the Central Repository and Nexus Repository Manager OSS.

“And you should expect that we will continue to contribute new innovations like Sonatype Nancy, Sonatype Goalie, Sonatype DepShield (a GitHub integration), and OSS Index for purposes of helping front-line developers build more secure code, with less hassle,” Jackson and Fox wrote.

“Open source tools are an invaluable resource that enable companies and developers to keep up with the demand to deliver software applications at a rapidly accelerating pace,” said Patrick Severson, principal at Vista Equity Partners. “Wayne and his team have built an impressive business and an innovative portfolio of products that empower software development teams to continuously innovate responsibly and with the highest quality and most secure open source across every stage of the digital supply chain. We are pleased to partner with Sonatype as they continue to grow their company in the large and rapidly expanding DevOps market.” ❚


Productivity tools are crucial in the current development landscape

BY JENNA SARGENT

As development teams work to ship code faster and faster, streamlining development workflows is crucial. Developers don’t just write code all day. There are other tasks developers spend time on that may be slowing them down and preventing them from doing the work that adds value to a company. According to ActiveState’s 2019 Open Source Runtime Pains Developer Survey, 36.8% of developers spend two to four hours per day coding, and only 10.56% spend all of their day coding. Non-coding time is typically spent on tasks such as software design or attending meetings.

As the need to be more productive has grown, so has the availability of solutions to help developers be more productive and collaborate more easily. These tools come in all shapes and sizes. There are tools designed specifically with productivity and collaboration in mind, such as Slack, Microsoft Teams, and Trello. But it is also common to find development tools with productivity features baked in, such as IntelliJ IDEA, CodeStream, or ZenHub.

Developers often need to work together on projects, and having a tool that makes communication easier and more transparent is beneficial. According to Mike Ammerlaan, director of Office & SharePoint ecosystem marketing at Microsoft, email isn’t a great system for this for several reasons, including that it doesn’t provide a way for people to specify how they want to be notified in threads and it isn’t a great format for recording knowledge.

Another advantage of communication tools like Slack and Microsoft Teams is the ability to create custom channels. “With the custom channels, what you can do is you can further subdivide the processes within your developer teams,” said Ammerlaan. “So for example, [you can] have a channel for people to come by and report bugs or a channel for having post-mortem conversations after an incident, or dealing with incident response.”

These platforms also offer a number of different integrations targeted at developers so that different aspects of the development life cycle can be tied back into the chosen communication platform. “The idea is that you can bring those applications into Microsoft Teams and connect them into the conversations, connect them into the workflows, and sort of weave them into a tailored space that fits the engineering teams.”

Microsoft is also continuing to innovate the platform and extend the surfaces that developers can customize, Ammerlaan explained. For example, in November at Microsoft Ignite, the company added capabilities such as secure private channels, multiwindow chats and meetings, pinned channels, and task integration with To Do and Planner.

Apart from communication platforms, project management tools are also heavily used by developers to keep projects on track. Trello is especially popular because it essentially allows for a personal Kanban workflow that is completely customizable based on a person’s working preferences. “Trello has become my most favorite to tackle all my pending tasks in a day while maintaining a normal routine,” said Mehul Rajput, CEO and co-founder at Mindinventory. “I can classify my tasks as ‘to do,’ ‘in process’ and ‘done’ categories.”

Peter Wilfahrt, chief digital officer and co-founder of German e-commerce agency Versandgigant, also praised Trello for its ability to help keep track of project goals. To keep himself organized, he has organized his board into “Projects,” “Next Actions,” “Waiting For,” and “Someday/Maybe.” He also connected Integromat to Trello, which adds items to Trello, even further increasing productivity.

Tools like this can end up being central to a person’s or an entire team’s workflow, too. For example, Scott Kurtzeborn, an engineering manager on the Microsoft Azure team, explained that his team has been using ZenHub to track all of their work since the team was formed. “[Without ZenHub], it would just be a mess,” said Kurtzeborn. “I can’t imagine not managing our backlog the way we are … How has it affected our productivity? It’s kind of at the center of it, in terms of how we track our work.”

Other project tracking and management software includes Atlassian Jira, Anaxi and Clubhouse, each geared to making developers more productive.

There are also things that one might not immediately think of as a productivity tool, but that boost productivity in other ways. For example, Rajput uses an app called F.lux, which automatically changes the color of his computer screen based on the current time and his location. “As a developer, I need to spend so much time with my eyes on the screen. And for me, working with tired and dry eyes is a real obstacle. So, I use F.lux to help myself remain relaxed … Warm colors help to work longer by making the work environment more pleasant and natural to my eyes.”

IDEs also fall into this category because of helpful features like autocomplete and syntax highlighting. “Most of that streamlining occurs by virtue of replacing mouse operations with keyed commands that are faster to execute,” said Jeff Langr, owner of training and consulting company Langr Software Solutions.

No-code and low-code tools also help with productivity. These tools typically feature drag-and-drop components, which allow developers or business users to easily create simple applications. Low-code and no-code are often marketed to business users, or “citizen developers,” allowing non-developers to create business applications without having to know how to write code. But these solutions can also be strategically used by developers to cut down on production time.

In fact, low-code provider Altova offers several developer-focused features in its low-code platform, MobileTogether. Its “program once, run everywhere” environment provides developers with an abstraction layer between them and the native SDKs and APIs. This helps developers cut down on time and effort by reducing the amount of code they need to write, Altova’s CEO Alexander Falk explained to SD Times.

According to Langr, productivity tools can help developers spend more time on the things that matter. “It may seem silly to worry about such small amounts of time, but they really do add up.” Rajput recalls many days where he and his team were in the right frame of mind to do their best work, but small hindrances got in the way.

In addition to the added productivity, these tools may have added benefits, such as documentation that might not otherwise have existed. For example, Wilfahrt keeps track of his work in Trello, and by forwarding every request into a single source of truth, he can document change requests and keep track of improvements or delays. “A long-term record will allow you to re-evaluate predictions (how long will it take to implement X?).”

What do developers look for from a productivity solution?

When searching for new tools, Langr finds that tools that help with small tasks are more helpful than ones trying to help with big-picture challenges. He also seeks out tools that don’t force him into a certain workflow. Views like this explain Slack’s popularity among developers. In the Slack App Directory, you can find a very specific app for accomplishing a specific task, but that app will still be grouped into a single platform. According to Bear Douglas, director of developer relations at Slack, the platform just surpassed 2,000 apps in its app directory, and over 500,000 custom integrations are actively used on a weekly basis.


Slack custom bots helped streamline Color’s lab

Slack has become a place where developers can find an app for handling almost any task, or they can build their own easily. Bear Douglas, director of developer relations at Slack, has noticed that more and more, developers are no longer just building tools for themselves, but recognizing the value they’ve received from these tools and asking what value they could provide to their whole team.

To make building apps even easier, Slack recently released Workflow Builder, which is a WYSIWYG editor that allows users to create custom workflows to automate routine tasks. “We’ve seen that totally take off with users, and the greatest thing with that as far as I’m concerned is that it means that this value that was essentially locked in our platform for developers, because you had to be conversant with how to use an API, is now starting to become something that everyone can tap into.”

One company that has created a custom bot in Slack to significantly increase productivity is Color, a genetics testing company. It has a fully automated lab, and developed a bot that alerts lab workers when samples have been processed. According to Justin Lock, head of R&D at Color, a traditional lab is made up of a number of robots, with humans completing various tasks between them. But this isn’t a particularly effective way of getting things done, Lock explained.

There were two steps Color needed to take to improve its lab. First, it brought the robots into a small envelope, allowing them to pass things (both information and physical objects) among each other. Second, the robots needed to communicate to the humans what part of the process they were at, so the humans would know what needed to happen next. To accomplish this, they turned to Slack.

“We were using Slack within the company to communicate between each other and so we thought, you know, we’re always on Slack anyway, is it possible to get these robots to just ping us and tell us when they’re done via Slack,” said Lock.

Lock explained that Slack’s website has a number of resources, like comprehensive instructions on how to build different bots and how to integrate with the Slack infrastructure. He added that the person on their team who created the bot wasn’t even a software engineer; he was a mechanical engineer. “He was able to write the executable, design the kill script, and within probably two days of trial development, we were able to start pinging each other via robots in the lab.”

Lock believes that the products companies build are usually a reflection of their company culture. For Color, the goal is to provide high-quality clinical genomics and affordable, efficient, high-throughput testing. “When you think about delivering health care at scale, ensuring high quality, it becomes really important to think about how you’re actually building your infrastructure from the ground up so that it’s efficient and scalable,” said Lock. “And I think historically we’ve seen that’s something that hasn’t been prioritized by a lot of organizations. And so Color has spent a lot of time and energy really thinking deeply about each step in our lab process and subsequent downstream processes to make sure that the data we’re generating is as high quality as possible and also as efficient as possible. And just generally being able to use software tools like Slack and others, just really enabled us to scale.” ❚
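Color’s bot itself isn’t public, but the heart of a “robot pings the team” integration is little more than a single call to Slack’s Web API. Here is a minimal Python sketch, assuming a bot token in a SLACK_BOT_TOKEN environment variable and a hypothetical #lab-status channel; both are placeholders, not details from Color:

```python
import os
from slack_sdk import WebClient  # pip install slack_sdk

# Token and channel are placeholders; a real bot would use credentials
# issued for your own workspace.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def notify_sample_processed(robot_id: str, batch: str) -> None:
    """Post a message to the lab channel when a robot finishes a batch."""
    client.chat_postMessage(
        channel="#lab-status",
        text=f"Robot {robot_id} finished processing batch {batch}. "
             "Samples are ready for the next station.",
    )

if __name__ == "__main__":
    notify_sample_processed("liquid-handler-2", "B-1042")
```

A script like this can be wrapped around any instrument that can run an executable, which is consistent with Lock’s point that a mechanical engineer was able to build Color’s version in a couple of days.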

“When you think about what it’s like in your everyday workday, I would guess that you probably use upwards of a dozen just between communication with people, writing things, your code editor, etc.,” said Douglas. “It adds up quickly, and I think that one of the things that has made Slack so powerful is that we’ve been the communication hub and the nexus for all of these different services.”

Wilfahrt has a similar view when looking for tools. “I love productivity tools that are easy to use and don’t require [you] to change your current way of working ... The interface and usage has to be simple but powerful,” he said. In addition, because Wilfahrt’s goal with these types of tools is to clear his mind and limit distractions, the ability to keep track of meeting outcomes and add new data points, such as requests, improvements, critical issues and reprioritization, is a must for him.

Wilfahrt also recommends developers find one system and stick to it. “We software developers are happy to explore the newest hype and the newest promise,” he said. But changing your productivity system every month is brutal, he explained. “Implement your vision in one system, adjust it to your needs and stick to it until you find one that can do everything that your current system can plus your new requirements. Only if you can check off all those boxes are you allowed to initiate a change.”

Perhaps more important than finding the right tools is mastering them, Langr explained. “Master the wonderful tools that Unix distributions provide, but particularly master your editor, whether it be emacs, vim, or bare-bones editing in IDEA,” Langr said. “I’m always amazed at how inefficient too many veteran developers are when it comes to editing their code ... Master your tools. It’s well worth the investment.”

He added that once he masters a tool, the tool itself moves out of the way. “I can think about my real goals instead of the rote steps needed to accomplish them,” said Langr. By implementing and then mastering these tools, developers can significantly reduce the amount of unnecessary work that interferes with their day-to-day work, and focus instead on creating value for the business. ❚



INDUSTRY SPOTLIGHT

Challenges to Effective DevOps Collaboration

Developer productivity is impacted by many factors, including an organization’s tools, practices, policies, and individual preferences. As you evaluate your current collaboration landscape, consider how the following may be holding your organization back from optimizing DevOps team efficiency.

Teams tend to organically develop their own ways of working together, using their favorite communications tools and methods to power workflows and day-to-day teamwork. It gets complicated when a team needs to collaborate with others across the organization and externally. When everyone is using different tools, it creates communication silos that become hurdles to effective cross-team and cross-functional collaboration. The added friction makes it harder, and slower, for your organization to get things done.

Limited, frustrating legacy messaging tools

Many products on the market offer enterprise messaging capabilities. Legacy platforms, like Jabber, Lync, Yammer, or Microsoft Teams, are focused on communications and social networking, providing lightweight feature sets for enabling group chat, audio and video conferencing, or file sharing. However, for development teams, simple chat is not enough. They need a modern solution that supports the complexity and dependencies of DevOps teamwork. Teams need to communicate, but they also need to collaborate, with efficiency and speed, connecting to all of the disparate tools and systems that help them get the job done.

While tools like MS Teams provide basic chat capabilities, a truly modern solution also provides features like robust search, granular user permissions, and text and code formatting. In addition, modern tools not only provide chat features, but also enable integrations and automated workflows connected to code repositories, CI/CD systems, and other mission-critical systems developers use every day.

Too many manual tasks and workflows

Development teams depend on automation to help them deliver higher-quality software faster. For many workflows, such as CI/CD or incident response, messaging and automation go hand in hand — humans, bots, and systems need to exchange critical information to keep everything moving smoothly. Yet most messaging solutions do not integrate with DevOps tools and systems, such as Jenkins, GitLab or Jira. People are forced to manually log into numerous dashboards to accomplish simple or repetitive tasks, or rely on others to share information.

This context-switching not only slows an engineer’s pace, but also breaks their focus and flow, making it harder to write clean code or resolve high-priority bugs.
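As a concrete illustration of the kind of integration described above, here is a minimal Python sketch that pushes a CI build result into a team channel through an incoming webhook, so engineers don’t have to poll a dashboard. The webhook URL, job name and build number are hypothetical placeholders:

```python
import requests

# Placeholder URL; an incoming webhook is generated by your messaging
# platform (Slack, Mattermost, etc.) for a specific channel.
WEBHOOK_URL = "https://chat.example.com/hooks/abc123"

def post_build_status(job: str, build: int, passed: bool) -> None:
    """Push a CI result into the team channel."""
    status = "passed" if passed else "FAILED"
    resp = requests.post(
        WEBHOOK_URL,
        json={"text": f"{job} build #{build} {status}."},
        timeout=10,
    )
    resp.raise_for_status()  # surface delivery problems loudly

# e.g., called from a post-build hook:
post_build_status("checkout-service", 512, passed=False)
```

The same pattern works in the other direction, with chat commands triggering pipeline actions, which is the basis of the bot-driven workflows mentioned earlier.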

Lack of control over data security and compliance

Data security is critical to any organization, and for those with strict compliance requirements, it is paramount. IT teams must maintain a strong security stance over all company systems and data processes, including messaging. However, since technology evolves at an increasing pace, development teams also need the autonomy to work with the tools that best fit their needs. So organizations are faced with the challenge of providing developers with the freedom to use the best tool for a specific use case, while still maintaining control and ownership over data. If developers aren’t given the tools they need, IT may face a proliferation of unauthorized tools outside of its control, leaving the company at risk of a data breach. ❚


BY LISA MORGAN

CI/CD pipelines are evolving as organizations identify opportunities to improve release velocity and as the industry considers what CI/CD pipelines should look like in the first place.

Amalgam Insights recently released “The 2020 Guide to Continuous Integration and Continuous Delivery: Process, Projects and Products.” In it, report author and research fellow Tom Petrocelli explains what basic and extended CI/CD pipelines involve.

The “basic” CI/CD pipeline includes five processes: merge, build, test, package and deploy. All of these are individually defined so readers have a common reference point. The basic pipeline includes sub-pipelines associated with each step, such as moving artifacts from a build into a repository. An extended pipeline has twice as many processes (10 total): plan, code, merge, build, test, security scan, package, artifact repository, deploy, and monitor. Although the flow in both diagrams appears linear (and the report notes that the term “pipeline” suggests a linear process), loopbacks to earlier points in the workflow can occur when failures happen or a security scan indicates a threat.

“The standard pipeline is just the basic stuff. Once my code has been written, how do I get it to release? There are other non-tooling stops along the way, because some pipelines will deposit you in UAT or some kind of production staging area, but overall the idea is to get from point A to point B,” said Petrocelli.
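To make that linear flow (and its loopbacks) concrete, here is a toy orchestration sketch in Python of the report’s five basic stages. The shell commands are invented placeholders, not something prescribed by the report:

```python
import subprocess

# The five stages of the "basic" pipeline, modeled as shell commands.
# Real pipelines would invoke your own build system, test runner and
# deploy tooling here.
STAGES = [
    ("merge",   "git merge --ff-only origin/main"),
    ("build",   "make build"),
    ("test",    "make test"),
    ("package", "make package"),
    ("deploy",  "make deploy"),
]

def run_pipeline() -> bool:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            # A failure stops the linear flow; in practice this is the
            # "loopback" point where work returns to an earlier stage.
            print(f"{name} failed; looping back to the developer.")
            return False
    return True

if __name__ == "__main__":
    run_pipeline()
```

A real CI server adds triggers, isolated build agents and artifact storage on top of this, but the stop-on-failure loop is the essence of the flow.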

“When you think about an extended tool chain — which is where a lot of the vendors are going — you start to bring in other things like monitoring.” By extending the pipeline, teams are able to connect and automate more of the total software development process, though there may still be manual steps along the way. “Right now, CI/CD really addresses one small piece of the total software development process, but there’s other stuff that needs to be integrated into it,” said Petrocelli. Hasan Yasar has a similar view. He’s technical manager of the Secure Lifecycle Solutions group in the CERT Division of the Software Engineering Institute at Carnegie Mellon University. “The point of a pipeline is basically to help build an application, test the application, deliver the application, and deploy it into production. A pipeline is not just having continuous integration servers or deployment tools and techniques, but covering the full life cycle,” said Yasar. By full life cycle he means from the inception phase through build, test, delivery and deployment, as well as monitoring the production environment.

As far as tooling, Yasar said a pipeline should include an issue tracking system, a build system, an integration environment, a communications system such as chat, and monitoring systems that can monitor the progress of various development environments. In the center of all that should be a repository management system. That system should be connected to all the other systems and include all artifacts including infrastructure as code. The central repository helps facilitate communication and collaboration and ensures that all stakeholders have access to the artifacts. All the tools should be integrated, of course.

Whose tools?

End-to-end solutions are emerging alongside the point solutions. The Amalgam Insights report includes a representative sample of some of the vendors moving toward end-to-end solutions. They include Anaxi, Atlassian, AWS CodePipeline, CircleCI, CloudBees, GitHub, GitLab, Google, Oracle, Pivotal and Red Hat. The end-to-end solutions are integrated, which saves customers the hassle of cobbling together point solutions.


The Continuous Delivery Foundation advances CI/CD

BY LISA MORGAN

More organizations have matured from CI to CI/CD, but their paths differ, as do their pipelines and results. Most enterprises are implementing a mix of open-source, commercial and even home-grown tools, and they’re looking for answers. One place to look is the Continuous Delivery Foundation (CDF), which is home to many of the fastest-growing CI/CD open-source projects. The CDF fosters vendor-neutral collaboration among developers, end users and vendors to further best practices and industry specifications. Tracy Ragan, DeployHub CEO and co-founder, who serves as the CDF general membership board representative, provides additional insight in this Q&A.

SD Times: Why was the CD Foundation formed?
Ragan: We are on the top of a swell that is turning into a massive tsunami when it comes to software development. According to “The Future of Employment: How Susceptible Are Jobs to Computerisation?” by Carl Benedikt Frey and Michael A. Osborne, 47% of U.S. jobs may be replaced by AI. If this is true, we have lots of software to develop over the course of the next 20 years, and our current CI/CD process will not be able to sustain this massive amount of software development and management. For most organizations, CI/CD is a workflow orchestration tool that calls scripts. In most cases, the scripts do the heavy lifting of the movement of code through the process. In order to build the software of tomorrow, CI/CD must do better, be more automated and include more than just “check-in and build.” And we are moving from monolithic software to microservices. This directly impacts the CI/CD process. The time is now for the CI/CD community to have a conversation about what CI/CD looks like today (terms and definitions) and where it is headed tomorrow (Kubernetes pipelines, AI/MLOps).

What does the organization hope to accomplish?
We have 9 strategic goals:
1) Drive adoption
2) Cultivate growth of open-source projects
3) Foster tool interoperability
4) Champion diversity and inclusion
5) Foster community relationships
6) Grow the member base
7) Create value for all members
8) Promote security as a first-class CI/CD citizen
9) Expand into emerging tech areas

In what ways will the CD Foundation help organizations improve the efficiencies and effectiveness of their CI/CD practices?
The CD Foundation provides a platform and thought leadership community for driving CI/CD to the next level.

The CD Foundation’s job is not to produce a CI/CD “stack” or best-practices guidelines. CD is too broad to have a single solution for all. The CD Foundation’s job is to bring members together to achieve our 9 strategic goals and provide a vendor-neutral platform for open-source tools that fit into the CI/CD landscape.

What impact might the initial and future projects have on CI/CD tools and tool chains going forward?
While it’s hard to predict the impact today, we are beginning to see more productive discussions between projects. Jenkins and Spinnaker are working to define how they interoperate. There is community discussion around Tekton and Jenkins X. Most organizations will have a variety of different pipeline orchestration tools (Jenkins, Tekton, CircleCI, Bamboo, Spinnaker, etc.). And most companies will allow individual teams to decide on what those orchestration tools will call. Not all teams will look the same. A mix of open source and commercial will continue to be the way new tools are adopted and implemented. What the CD Foundation can offer is a platform for managing the open-source tools and fostering discussion between the teams. Our third most important goal is to “foster tool interoperability.” If we are successful, we will have achieved the ability for one orchestration tool to call another. Developers might use Jenkins while Production Control is using Spinnaker.

What does the future of CI/CD look like?
In the future, CI/CD will grow in importance as developers are pushed to create more software.



We now talk about moving software through the lifecycle faster; tomorrow’s discussion will be around sustaining the number of new applications hitting the market. So fast is important, and CI/CD is a key player in ‘faster’ and ‘more.’

From a lower level, if we look at microservices, our CI/CD process changes. First off, version control becomes less critical. We will not have source code that is several thousand lines long and requires branching and merging to allow multiple developers to work on it at the same time. We will instead have code that is hundreds of lines long at most. When you think of microservices, you must think functions.

Second, compiling code may not be required. While Go is compiled, Python is not. And even if the code is compiled, it will be quick — no more long builds and no linking.

Third, microservices are “loosely coupled.” This means no linking at build time. APIs at runtime replace a build “link” step. The concept of an “application” built all at once goes away. Our CI/CD pipelines will be managing thousands of individual microservice workflows. Today we generally have one workflow per application; tomorrow we will have one workflow per microservice. Deployments will always be incremental. Deploying a full application will no longer happen. A new microservice will be updated, creating a new version of the “logical” application. An application is just a collection of services.

And finally, configuration management will be needed to map a collection of microservice versions back to a logical view of an application version and to track which applications are using which service. So yes, lots of changes are coming our way, and the CD Foundation is here just in time to help lead the discussion, manage new open-source projects and inspire tool interoperability. ❚
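Ragan’s last point, mapping microservice versions back to a logical application version, can be pictured with a toy data structure. A minimal sketch, with invented service names and version numbers:

```python
# Toy illustration of the configuration-management idea: a "logical"
# application version is just a named collection of microservice
# versions. All names and versions here are invented.
app_versions = {
    "storefront 4.2": {"cart": "1.7", "pricing": "2.3", "auth": "3.1"},
    "storefront 4.3": {"cart": "1.8", "pricing": "2.3", "auth": "3.1"},
}

def apps_using(service: str, version: str) -> list[str]:
    """Reverse lookup: which application versions consume this service?"""
    return [
        app for app, services in app_versions.items()
        if services.get(service) == version
    ]

# Updating one microservice (cart 1.7 -> 1.8) created the new logical
# version "storefront 4.3" without redeploying the whole application.
print(apps_using("pricing", "2.3"))  # ['storefront 4.2', 'storefront 4.3']
```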


“Complexity works against you when you want to achieve higher velocity,” said Amalgam Insights’ Petrocelli. “But the broader effect is, without these tools it’s going to be hard to have a microservices architecture. You’re not going to get what you want to get out of it, which is faster and more frequent updates. You’re not going to be able to do those continuous deployments without automated tools.”

“Microservices” is the operative word driving the need for CI/CD. Without that, high levels of automation may not be necessary, particularly if the application doesn’t require frequent changes.

The age-old choice between point solutions and end-to-end solutions is the same for CI/CD as it has been for other types of tools: depth of features versus breadth of features. The latter is easier to implement because all of the pieces are designed to work together. However, depending on the complexity of the application and the environment, it may be preferable to implement “best of breed” point solutions.


Whatever the application mix is in an organization, it’s wise to procure CI/CD tooling with the future in mind so it can support the breadth of what exists, including traditional and new types of applications. “What you don’t want to do is create tensions in your own organization because some people are getting the advantages of automated tools and others aren’t,” said Petrocelli.

DevOps and CI/CD: Real or not?

“DevOps” and “CI/CD” mean different things to different people. Many organizations claim to be doing one, the other, or both, which is not always the case.

“A lot of teams call themselves ‘DevOps’ when they’re really doing a tool chain. Many of them aren’t running integrated DevOps teams,” said Petrocelli. “If you’re not doing the management piece, you’re not doing DevOps.”

The Amalgam Insights report includes a section entitled “DevOps vs CI/CD” to help alleviate some of the confusion between the two. DevOps is deemed a project and organizational strategy. Since it’s conceptual, there’s no fixed implementation. CI/CD is defined as process automation, specifically “the process of taking raw developer code and other build artifacts and turning them into running applications and services.”

“When it comes down to implementation, DevOps is management and organization. It’s not tools. There’s no such thing as a DevOps tool chain,” said Petrocelli. “You can do DevOps and not have CI/CD tool chains in place, and you can have CI/CD tool chains in place and not be doing real DevOps or DevOps at all.”

The report also explains the individual elements of CI/CD: continuous integration, continuous delivery and continuous deployment. Most people associate continuous deployment with the digital-native disrupters that are doing hundreds or thousands of releases per day. Continuous delivery is what’s practical for everyone else. Two things that distinguish continuous delivery from continuous deployment are release velocity and the level of automation, which go hand in hand. According to the Amalgam Insights report, continuous delivery involves a manual deployment step, while continuous deployment automates deployment.

What organizations call their practices matters less than the results they’re deriving from them. However, it may be harder to achieve the desired results when it’s unclear what the organization is trying to achieve in the first place. ❚
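As a schematic footnote to the delivery-vs.-deployment distinction drawn above, here is a minimal Python sketch; the deploy() helper is hypothetical, and the only structural difference between the two practices is whether a human approves the final step:

```python
def deploy(artifact: str) -> None:
    # Placeholder for the real deployment tooling.
    print(f"deploying {artifact}")

def continuous_delivery(artifact: str) -> None:
    # Every change is production-ready, but a human pulls the trigger.
    if input(f"Release {artifact} to production? [y/N] ").strip() == "y":
        deploy(artifact)

def continuous_deployment(artifact: str) -> None:
    # No gate: every change that passes the pipeline ships automatically.
    deploy(artifact)
```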


How some organizations are implementing CI/CD

BY LISA MORGAN

CI/CD implementations, and the impetus for them, vary among companies, but everyone wants to ensure faster delivery of high-quality software. Following are three examples of companies that have adopted CI/CD in their own way.

Lucidchart improves productivity

About four years ago, diagramming solution provider Lucidchart had 30 developers. The team was “running into issues” because not everyone was aware of the changes going into production. “We often spent a lot of time tracking down the individual who made code changes that went out with a given release so that we could try to piece together what happened and how best to fix it,” said David Torgerson, director of DevOps at Lucidchart. “Our QA team was having trouble scaling the testing that needed to be done because of all of the different combinations of manual tests that needed to be done.”

The goal was to achieve a regular release cycle, but because they were unable to track down which change caused a problem, releases were “terrifying.” So, they had a meeting following one release to discuss what went wrong. Out of that meeting came the idea of “emergency releases” (aka bug fixes). However, the result was meetings about the emergency releases they had to do because of the problems with the original release.

Worse, because there was no classification for a business-related change, they created a third category of release called “business release,” which led to meetings before release meetings just to determine whether the release was an emergency release or a business release.

Eventually, Torgerson and other technical leads realized that the time spent in meetings would be better spent on iterations, so they started moving toward a CI/CD process, albeit slowly, because everyone wasn’t ready for the “mindset shift” required. That first meeting devolved from a discussion into four concurrent arguments. At the second meeting, they agreed that their definition of continuous deployment was “as frequently as possible.”

“If we were looking strictly to be a purist CI/CD shop and appear as a DevOps shop, we would probably do things differently, but our approach has been to improve developer productivity and increase production stability.”

The QA team hated the idea of CI/CD because they didn’t want the manual regression tests they were doing automated. CI/CD seemed like an existential threat. Torgerson said they’re now the biggest CI/CD proponents because they’re having more fun trying to break the product than they ever had doing manual regression tests. ❚

Negotiatus perfects manual processes, then automates

Spend management software provider Negotiatus has grown from 2 to 45 employees since it was founded in 2016. It wanted to improve developer speed by streamlining the onboarding process and improving the reliability of code. It also wanted to improve the speed and accuracy of the deployment process, because it had become clear that when the deployment process is manual, it’s easy to forget to run a specific command. What it didn’t want to do was automate processes that weren’t optimized to begin with.

“We try to solve as much as we can manually first, but in an extremely structured way. We basically document a process, run the same exact steps every single time, and ensure people are following them so we can identify any gaps in the process and fill them in,” said Negotiatus CTO Tom Jaklitsch. “The whole thing would run for a month or two and we’d iterate on it.”

Another area targeted for improvement was test coverage, because it was less than 30% on a monolithic application at the time. Now, it’s more than 75%. They also made sure all new services were above 95%. Developers are now required to run their own QA, which has improved code quality and reduced the amount of back-and-forth between developers and QA.

Deployment now happens about 15 times a day, with each deployment taking about 25 minutes. Of the 25 minutes, developers spend one or two minutes at the halfway point making sure everything looks fine. Then, the developer clicks a button and the rest is automated. “It’s better to be safe than sorry versus automating every step of the process,” said Jaklitsch. “I’d love to get the process down to 10 to 15 minutes, but as of right now, once you kick off and deploy, it kicks off all these automatic processes.”

The deploy ticket queue is populated and managed in Slack, with the last person adding their release to the end of the list. That process is being optimized now, Jaklitsch said. ❚

Stealth Communications modernizes its application and processes

Internet service provider Stealth Communications was saddled with a legacy business application built in 2009 that was written in C++ and PHP. It had 1.6 million lines of code and took two years to build. The application's architecture made it extremely difficult to implement CI/CD, so about a year ago, the development team started rewriting it as a microservices application so they could implement CI/CD. More importantly, the new architecture would enable easier upgrades to the UI and easier implementation of new features without massive, infrequent overhauls.

"We want to release features in a real-time fashion without having the user reinstall it or do these upgrade paths that might be intensive or error prone," said Shrihari Pandit, CEO of Stealth Communications. "Instead of a single stack application that controls everything, we have mini applications that plug in so we can easily take advantage of them. CI/CD is extremely crucial because that's how you're going to react to moving markets." ❚



CI/CD success requires a sound approach

There's considerable confusion about "the best way" to approach CI/CD when no single path exists. There are, however, important considerations organizations should weigh to avoid wasting time and money that could have been spent making progress. "One of the first things an organization should do is understand what their needs are [in terms of] the business, their application and the application environment," said Alan Zucker, founding principal of training and consulting firm Project Management Essentials. "The other thing you should do when looking at your CI/CD pipeline needs is figure out what you're going to get the biggest benefit from, your automation or integration work or where your pain points are. Then start stitching things together."

Clearly, progress is always the goal, but sometimes organizations come up short. Following are some of the "gotchas" that can get in the way.

■ COMPLEXITY
CI/CD tool chains tend to be complex because CI/CD involves so many processes and associated types of tools. The resulting complexity can be difficult, time-consuming and expensive to maintain over time.

"Now you've got a whole group of people whose only job is to maintain the tool chain, which was supposed to keep you from having all these people in your organization," said Tom Petrocelli, research fellow for DevOps and collaboration at Amalgam Insights. "If you're a midsize company, you're not going to have the budget to hire all the engineers to do that and maintain those engineers over time. The result is modest tool chains that don't have the impact the organization expected, or the cost prevents them from doing anything."

The emerging end-to-end solutions alleviate the tool management overhead since all elements are integrated. However, highly complex applications running in highly complex environments may require point solutions that have a greater depth of features.

"I think a successful pipeline really simplifies complex operations. Your pipeline may grow, so it becomes more complex in all of the different aspects of what it's doing, but it needs to be simple, straightforward and easy to work with," said Daniel Ritchie, distinguished engineer at Broadridge Financial Solutions.

■ SECURITY
Security needs to be part of a CI/CD pipeline; however, a lot of times "security" is limited to just a quick CVE scan. Alternatively, a security scan may be left until the end. If a vulnerability is found, developers may be more inclined to file an exception request so they can fix it in the next release rather than spending the time to fix it in the current release.

"Security should be everywhere, not just before or after integration. A lot of organizations think of security as just static analysis or dynamic analysis," said Hasan Yasar, technical manager of the Secure Lifecycle Solutions group in the CERT Division of the Software Engineering Institute at Carnegie Mellon University. "Everywhere" means various types of security testing throughout the SDLC, beginning with defining security requirements along with functional requirements. Yasar recommends including tools like Fortify or SonarQube as part of the CI server. Depending on the application, other security considerations may include authentication, network segmentation, regulatory compliance (such as PII usage) and the like, so security testing happens at both the code level and the architectural level.

"Almost every organization is struggling with false positives or how much security testing they need to do because static and dynamic analysis such as pen testing take time," said Yasar. "If the organization is only doing static and dynamic analysis without tying them into the requirements in the beginning, they don't know what they're testing or they don't understand the value of the testing they've done. Security should have threat modeling or understanding of security requirements at the beginning of feature development."

Don't think of security as a step, but rather as a process that should be integrated throughout the pipeline.

■ CULTURE
Some people are really excited about CI/CD because they want to get more done faster, but not everyone is. CI/CD automates a lot of manual tasks that have been associated with job descriptions.

"Organization and culture are the biggest barriers. You should have accountability at the lowest responsible level so people are taking ownership for their work and people have that end-to-end responsibility," said Project Management Essentials' Zucker. "[Companies are] trying to acquire new methodologies and processes and spend a lot of money training people in the same old tools, but then they still behave the same old way and they don't get the results they want."

Having the people affected by CI/CD involved in its planning, problem-solving and implementation can help break down barriers to adoption.

■ UNREALISTIC EXPECTATIONS
It's easy to expect too much too soon. Buying tools alone won't help, nor will automating processes that weren't designed well in the first place. The space itself is evolving, so organizations are wise to approach CI/CD from the perspective of continuous improvement, since they'll discover many things that could be improved along the way anyway.

Trying to do too much too quickly can backfire. "I would not counsel anybody to go from a well-controlled annual process to push-button full release management. You're talking about a several-year journey with a lot of fits and starts," said Zucker. ❚
—Lisa Morgan
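One way to keep a scan from becoming a deferred exception request, as described above, is to make it a blocking pipeline step. The sketch below is illustrative only; the JSON report shape is hypothetical, since real scanners such as Fortify and SonarQube each have their own report formats and quality-gate mechanisms.

    # scan_gate.py -- illustrative sketch of a blocking security step in CI.
    # The JSON shape here is hypothetical; real scanners (Fortify, SonarQube,
    # CVE scanners) each expose their own report formats and gates.
    import json
    import sys

    BLOCKING = {"CRITICAL", "HIGH"}

    def blocking_findings(report_path: str) -> list:
        with open(report_path) as f:
            findings = json.load(f)  # assumed: a list of {"id", "severity", ...}
        return [x for x in findings if x.get("severity", "").upper() in BLOCKING]

    if __name__ == "__main__":
        bad = blocking_findings(sys.argv[1])
        for x in bad:
            print(f'{x.get("severity")}: {x.get("id")}')
        # Failing the pipeline forces the fix into the current release,
        # instead of an exception request that defers it to the next one.
        sys.exit(1 if bad else 0)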



The realities of running an open-source community
BY CHRISTINA CARDOZA

There is no question that open source is the backbone of software today. Mike Milinkovich, executive director of the Eclipse Foundation, explained that about 80% of all software written is open-source software. The benefits of using open-source software are immeasurable, but it's not the code itself that makes open-source software an invaluable resource.

According to Ben Balter, senior product manager of community and safety at GitHub, software and technology is the easy part. The hard part is creating and fostering a culture around an open-source project. "The superficial promise of open source is that if you publish your code, others will make it better. Strictly speaking, that's not the case. To reap the benefits of open source, maintainers must seek to grow communities around their project, a step often overlooked when 'publishing' code," Balter wrote in a blog post.

Open-source project creators and maintainers take on a difficult role when they decide to release an open-source project. Balter told SD Times that maintainers should think of themselves as managers rather than engineers. Their primary contribution to the project often won't be in the form of code, but in terms of community management, marketing, recruitment, evangelism, automation, tooling and support.

"You often start a project to solve a specific technical problem, but as the community grows, in order to scale your own efforts and to have the biggest impact on your project, your role often shifts to solving the human and the workflow side of open source, rather than the technical," said Balter.

What it takes to run an open-source project

GitHub's Balter explained that open-source creators and maintainers should think of projects as a distributed dinner party. "Just as you would at a dinner party, as the host, you want to welcome guests as they arrive, take their coat, offer them refreshments, and introduce them to other partygoers to ensure they have a good time. Open source is no different, except instead of taking coats or offering hors d'oeuvres you're offering documentation and your responsiveness," he said.

When starting a project or releasing code into open source, software owners should think about the developer experience of their project much like developers think about the user experience of an application, according to Balter. "How can you make it easier for developers to contribute? This includes documentation, setting up their local environment, writing tests, and following style guides to get their code included in the project," he said.

Once you have a plan or process set, the next step is to let developers know you want them to contribute. "There are a lot of projects on GitHub that might just be published source code, so distinguish your project from the others by letting potential contributors know that you're looking to start an open-source project and welcome their contributions," Balter explained.

In addition, project creators and maintainers should be open and welcoming to new contributors. It is helpful to take the extra time to welcome developers to the community and thank them for their contribution, Balter explained.
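Much of that developer experience is visible right in the repository. As a rough self-audit, a maintainer could check for the usual signposts; the sketch below uses a file list that is common convention, not an official requirement.

    # community_audit.py -- rough self-audit of a project's contributor signposts.
    # The file list is a common convention, not an official GitHub requirement.
    from pathlib import Path

    SIGNPOSTS = [
        "README.md",           # what the project is and how to get started
        "CONTRIBUTING.md",     # local setup, tests, style guides
        "CODE_OF_CONDUCT.md",  # expectations for community behavior
        "LICENSE",             # terms under which the code may be used
    ]

    def audit(repo_root: str = ".") -> None:
        root = Path(repo_root)
        for name in SIGNPOSTS:
            status = "ok " if (root / name).exists() else "MISSING"
            print(f"[{status}] {name}")

    if __name__ == "__main__":
        audit()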




According to Eclipse’s Milinkovich, part of the secret sauce at the Eclipse Foundation is that it has an open collaboration model that allows some of the largest companies in the world to work together with individual developers who are just interested in the technology. “Our ability to weave together contributions from many different people and organizations and in many cases direct competitors into something that delivers great value to the industry is definitely part of our success,” said Milinkovich. The project should also be sustainable. Milinkovich explained that even though open-source software is free to use, there is still a risk for organizations adopting open-source technology because if a piece of software is not sus-

tainable in the long term, the users will be forced to switch up their application. “There is real business value in enterprises putting their logo to a project or community to say yes we are using this stuff, it is very important to us. We really want to help support its sustainability and in addition to that, if they actually put some developers in to participate then that is actually a better path to sustainability,” Milinkovich said. Once project adoption and contribution starts to pick up, it is important to focus on changes, Milinkovich explained. “Let’s say you built some great software, you are getting a lot of attention and you are suddenly getting an influx of contributions... you really have to focus on making the path to contribution as easy as possible. Maybe

this started out as a one-person project, but now you want to start taking some of these contributions and turning them into committers and maintainers so you can grow the team a little bit.” Mike McQuaid, software engineer at GitHub and open-source project Homebrew maintainer, explained most maintainers start out as a contributor and user, and should continue to be a user “to maintain context, passion and empathy.” If a project gets very widespread adoption, or commercial adoption, project owners now have to think in terms of the stakeholders as well as the consumers. Each person is going to have different concerns about the provisioning of the code, licensing, support and maintenance. Balter explained the definition of stakeholders should be continually expanded and include non-technical, non-users, potential users, veteran users, subject matter experts, technical users, active developers and potential developers. “Think about an in-person community you’re part of. It could be the neighborhood or town you live in, the congregation at your place of worship, or your local bowling league. Communities are about groups of people coming together to solve a shared challenge (having a nice place to live, practicing one’s beliefs, or socializing). Each community has its leaders (elected officials, clergy, team captains) and some form of codified ideals (legal code, religious teachings, league regulations). When you move that community online, the social norms that build a sense of comradeship also follow,” he said.

Overcoming the challenges

It's not only about maintaining good relationships with developers and providing an open space to collaborate. Project creators and maintainers also have a number of different challenges they will have to deal with on a daily basis.


The ethical side of open source
BY CHRISTINA CARDOZA

When developers contribute, collaborate, or obtain open-source code, they look at how the code will help bolster their other projects, as well as ensure they are complying with any open-source licenses. One thing that doesn't get enough attention is the ethics of that open-source project, according to Heikki Nousiainen, CTO and co-founder at Aiven, an IT service management company. "Some of the ethical considerations one needs to take when using open-source code are checking for bias or exclusion, accuracy, crediting your collaborators and sharing code or finished projects in return," he said.

Over the summer, Facebook's open-source JavaScript library React was under fire after racism and harassment were discovered within its community. The incident is known as #Reactgate, and it ended with the designer Tatiana Mac, who raised awareness of some of the issues, resigning from the industry, and React software engineer Dan Abramov and library author Ken Wheeler deactivating their Twitter accounts temporarily.

According to reports, the drama unfolded after a talk Mac gave at Clarity Conf about the broader impacts designing systems can have and how to design in a more ethical and inclusive way. After the talk, users commented that she was talking at a social justice conference, not a tech conference, and another user tweeted that React developers were into weights, Trump and guns. Things spiraled from there.

"People care more about protecting the reputation of a **framework** than listening to **multiply marginalised** people that you have actual **white supremacists** in your niche community and our broader community," Mac tweeted in response to the backlash.

Abramov deactivated his account, stating "Hey all. I'm fine, and I plan to be back soon. This isn't a 'shut a door in your face' kind of situation. The real answer is that I've bit off more social media than I can chew. I've been feeling anxious for the past few days and I need a clean break from checking it every ten minutes. Deactivating is a barrier to logging in that I needed. I plan to be back soon." When he returned to Twitter, he said deactivating his account was "desperate and petty."

Wheeler also returned to Twitter, stating, "Moving forward, I will be working to do better. To educate myself. To lift up minoritized folks. And to be a better member of the community. And if you are out there attacking and harassing people, you are not on my side."

As a result, Facebook has adopted a new code of conduct and vowed to combat online harassment. The code of conduct states, "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation."

According to Nousiainen, other ethical issues in the open-source community include using code for profitable reasons without contributing in return. "But this is true in any online group, unfortunately, and I think the issues are limited considering the size of the open-source movement. However, businesses and developers should always be ensuring that they're following the code of conduct for the community and playing fair," he said.

In order to ensure an open-source project promotes innovation while balancing ethics, Nousiainen explained that ethics should be ingrained into projects and initiatives from the beginning. "By understanding the open-source community's code of conduct and implementing best ethical practices throughout the entire project, ethical considerations won't be compromised in the name of innovation. In this way, the hope is that breaches of conduct or unethical acts are not suddenly revealed later down the line, but prevented beforehand."

"Our role is to empower maintainers to grow healthy and welcoming communities around their open-source projects. The goal isn't just to prevent or reduce the visibility of disruptive behavior (blocking users, hiding content, etc.), but to actively encourage maintainers to adopt inclusive behaviors, even if they don't have previous community management expertise," added Ben Balter, senior product manager of community and safety at GitHub. "We want to encourage users to be good 'online citizens,' and can do that by either adding friction to disruptive behavior or reducing friction for inclusive behavior, with friction being how easy or hard it is to do something on the platform." ❚



Scarce resources. Starting a project or open-source community can be hard, especially when you don't have backing from a company or organization, so resources are limited. Eclipse's Milinkovich believes maintainers should focus on a couple of areas where the project shows energy and forward motion. In terms of prioritization, projects and features should be grouped together into programs.

Security. Security will always be an issue in open-source software, so it is important to have a repeatable build process where you can demonstrate that the code being delivered is derived from the code being published, according to Milinkovich. "People want to ensure they are getting the real thing when they are downloading code," he said. In addition, project maintainers should follow up with patches to make sure users are getting the latest and greatest stuff.

Burnout. It can be a challenge for developers to keep up with the demands of the community when they are responsible for maintaining code, moving the platform forward, keeping up with the release cycle and dealing with feedback constantly, according to Milinkovich. To avoid burnout, GitHub's Balter suggested keeping the community informed, setting expectations, taking a break or finding someone to help. "You may find that finding ways to monetize your efforts through sponsorships, premium features, or support may help you to find that spark once again," he said.

The bus factor. "How many developers need to win the lottery tomorrow (or tragically get hit by a bus) for the project to fail? If it's just you, that number is one. As a project grows and matures, you want that number to be as high as possible. Humans get sick or take vacations or get locked out of their account, and a project shouldn't grind to a halt as a result. It can be hard to know when a project goes from 'my' project to a community project, but as early as possible, move the project to a dedicated organization and empower contributors you trust to take on additional responsibility," said Balter.

Attracting and retaining talent. "Just as you might think of a sales funnel in terms of marketing to potential customers, engaging prospects, etc., the idea is to convert users to contributors and contributors to maintainers by lowering the activation energy required at each phase and thus growing your project by attracting and nurturing users, contributors, and eventually fellow maintainers," said Balter. ❚
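A rough bus-factor estimate can be pulled straight from version-control history: count how few committers account for, say, half of all commits. The sketch below is illustrative and assumes git is on the PATH and the script is run inside a repository clone.

    # bus_factor.py -- rough bus-factor estimate from git history (illustrative).
    # Assumes git is on PATH and the script runs inside a repository clone.
    import subprocess
    from collections import Counter

    def bus_factor(coverage: float = 0.5) -> int:
        emails = subprocess.run(
            ["git", "log", "--format=%ae"],  # one author email per commit
            capture_output=True, text=True, check=True,
        ).stdout.split()
        counts = Counter(emails)
        total = sum(counts.values())
        covered, authors = 0, 0
        for _, n in counts.most_common():
            covered += n
            authors += 1
            if covered >= coverage * total:
                break
        return authors  # how few people account for most of the work

    if __name__ == "__main__":
        print(f"bus factor (50% of commits): {bus_factor()}")

A result of 1 or 2 on a widely used project is the warning sign Balter describes: time to move the project to a dedicated organization and spread responsibility.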

Organization-backed projects vs. smaller, independent projects

Open-source projects come in all different shapes and sizes. Some projects have just the project creator and maintainer, while other projects have thousands of developers. Additionally, some projects are independently managed and other projects are backed by large organizations. While open-source projects are meant to promote innovation, how each project goes about it will be different.

"For one, smaller, independent projects don't need sophisticated workflows or community management practices at the onset, and often, that premature optimization can stifle community growth. We think of project growth through a 'community maturity model.' Projects should often wait to establish formal or documented processes as they mature, and not before they need them," said Ben Balter, senior product manager of community and safety at GitHub.

Balter explained that individual developers prototyping a new library don't need a code of conduct or forms for bug reports; however, once the first outside pull request arrives, the individual developer might want to start looking deeper at their documentation and start to formalize contribution and review processes to get ready for additional contributors.

"That's not necessarily true for organization-backed open-source projects that can either anticipate the success of a project or have teams dedicated to establishing cross-project practices," he explained. "If you're Facebook or Google and you're starting an open-source project, it may make sense to include a standardized code of conduct or contributing guidelines for all your projects on day one to start things off on the right foot and set yourself up for success as the project grows."

Open-source projects also differ in their willingness to invest in community infrastructure, according to Balter. For instance, it may not make sense for an individual developer maintaining an open-source project to provide technical support, but an organization-backed project with lots of developers can easily add new channels and categories that foster community engagement.

Additionally, Balter suggested corporate-backed open-source projects have an internal developer advocate. "If your corporate lawyers are asking each developer to print, sign, and fax an agreement before they can contribute, it's unlikely your project will gain many contributors. Similarly, if you can showcase contributors in your corporate communication, appreciation goes a long way when it comes to open source," he said. ❚
—Christina Cardoza


Founding organizations: Creating companies that sustain our open-source community

Sam Bhagwat is the cofounder and COO of Gatsby.

Open source has a sustainability problem. A question that's frequently discussed in the web development community is how to make open source maintainable. As one of many examples, Henry Zhu, the lead maintainer of Babel, one of the most depended-on projects in the JavaScript ecosystem, was until 2017 working on Babel in his free time while holding a full-time job. Open source is key infrastructure: for comparison, imagine if the lead mechanic on the Brooklyn Bridge had to work on it in his spare time, or hustle for contracts!

One route to sustainability

Creating sustainable open-source communities isn't simply a problem for community maintainers; it's a problem for everyone. When a project is created outside an established company, an increasingly popular way of sustaining it is to form a commercial organization around it, raise venture capital, hire maintainers, and create hosted services to pay the bills. We'll term these "founding organizations": companies founded by the creator or key contributors of an open-source project to support the project. A growing number of projects are now sustained in this way.

Public goods for communities

As Dries Buytaert, the founder of Drupal and Acquia, put it: "In economics, the concepts of public goods [is] decades old… for example, everyone can benefit from fishing grounds, whether they contribute to their maintenance or not. Simply put, public goods have open access. Open Source projects are public goods: everyone can use Open Source software and someone using an Open Source project doesn't prevent someone else from using it."

Great founding organizations keep their community's technology up to date in a changing world. The most important public good founding organizations and other governing bodies provide is technological parity with up-and-coming tools.

Technologies can catch up to the state of the art. In recent years, JavaScript, distrusted for browser incompatibility issues and awkward syntax, received a huge facelift after significant investments by major browser vendors and the TC39 community, culminating in ES5, ES6, ES7, etc. MongoDB was famously mocked in 2010 for not being durable or scalable, only to years later print official "MongoDB is web scale" t-shirts after they had solved these problems.

Other technologies can fall behind. In the website world, both WordPress and Drupal lack core support for version control and modern front-end tooling. In the infrastructure world, Docker exists and is widely used; however, as a project it has been more or less subsumed by Kubernetes, and today most development happens on Kubernetes rather than the container runtime.

We believe founding organizations' responsibility to the community extends to a sort of social contract. Its social contract is fulfilled when technology in the community stays current. Conversely, when a technology begins to fall behind the times, fails to invest in important public goods or keep up with changing development standards, the community should hold the founding organization largely responsible.

Investment critical to stay 'cutting edge'

Keeping an open-source tool on the cutting edge doesn't typically require technological breakthroughs. More often, it requires steady, methodical, persistent engineering work, and lots of it. Much of this work, especially in systems with plug-in architectures, can come from the community. Both Drupal and Magento, among others, have done excellent work crediting community contributors and commercial entities for work they've done or sponsored.

But there are almost always key changes, such as fundamental re-architectures and foundational stability and scalability work, that should be done by a centralized entity. This requires sufficient economic investment from founding organizations to hire enough talented engineers, engineering managers, and product managers to move the work forward. To invest the requisite amount, a founding organization must be both able and willing to sufficiently invest in its community.

To analyze the level of investment a founding organization makes, we must ask two questions.

1. Is the company effective in creating a sustainable business around the community? Founding organizations have a responsibility to build strong businesses in order to sustain continued investment into the community, and must determine the appropriate metrics to do so.

2. Is the company investing product and engineering resources into open source? What's a good baseline for how much a founding organization should invest in open source? A good comparable here is governments, which almost always spend 15-20+% of GDP on public goods.

We're deeply committed to open-source technology and to the community, and wanted to share our experiences fostering sustainability while building one of the most popular open-source frameworks on the web. ❚



Buyers Guide

Getting the most value out of your value streams

BY CHRISTINA CARDOZA

Organizations cannot ignore today's ongoing digital transformation. With every company now becoming a software company, industry pundits argue, if software isn't being created correctly and quickly enough, companies are going to find it difficult to stay alive. "There is a digital transformation going on and if you are not changing the way you are doing things in order to deliver software better and quicker, you are going to lose competitive advantage and become irrelevant as a company," said Lance Knight, COO of ConnectALL, a value stream integration company.

While all these methodologies, like Agile and DevOps, have been applied to creating better software faster, organizations are quickly realizing they are not enough. "All these things you need to be able to deliver software quicker, but just because you can program it and have it ready doesn't mean you are getting it to production quicker," said Knight.

What Agile and DevOps have enabled teams to do is start to think end to end, but there has been a piece missing to achieving clearer results, according to Mik Kersten, CEO of Tasktop, a software tool integration company. "We somehow ended up pigeonholing everything we measure and everything we focus on. It is siloed thinking. Agile was never meant to be a silo. DevOps was never meant to be a silo. They were meant to be around value streams," he said.

A value stream is the flow of work throughout the delivery process. "If I don't look at the system from one point to the other, it doesn't matter how quickly I can deploy. I can deploy every minute, but if I am not getting something new every minute does that really matter?" asked Knight.

Value stream aligns DevOps and Agile transformations with the business in order to uncover areas of opportunity for improvement, according to Eric Robertson, VP of product marketing and management at CollabNet VersionOne, an Agile planning, DevOps and VSM software provider. "Agile and DevOps produced great technical outcomes that efficiently got out products and produced output, but the question is, was that output or release meaningful to the business? Did it drive any meaning or help achieve the type of outcome the business needs in order to be successful?" Robertson said.

"It is great to deliver products sooner... but in the end if it is not meeting the customer need or the business objective it doesn't matter. You can't say you are delivering value. You are delivering output, you are delivering something. But you are not driving value," he continued.

The value stream lays out how everyone in the business is delivering value and what their role is overall, explained Robertson.

"Customers want speed with direction, and that is really what value stream management provides. Agile brought a lot to the equation in terms of optimizing the development cycle, but it was pretty narrow. DevOps expanded that point of view, but the focus for DevOps ended up being around automation," added Brian Muskoff, director of DevOps strategy at HCL Technologies, an IT and digital solutions provider.

Additionally, there have been many shifts happening in terms of architecture approaches, such as microservices and the move to the cloud. In order to properly track features, epics and progress, teams need to understand if they are improving development and delivery velocity.

"We demand visibility and transparency from every other part of the organization; value stream management lets us put a methodology and system in place that gives us transparency, visibility and governance independent of the level of automation and independent of the tooling that is there to help us start improving work that is going on in these teams," said Jeff Keyes, director of product marketing for software company Plutora.

Getting the value out of value stream

In most organizations, software delivery value streams are grown organically. Throughout the years, organizations have documented and changed up their workflows. However, their organic value streams weren't created or crafted with the notion of how things can flow easily, nor grown with efficiency in mind, according to ConnectALL's Knight.


The emerging role of value stream managers in software delivery

Managing the value stream has to be a human process, according to Lance Knight, COO of ConnectALL, a value stream integration company. Part of a successful value stream is having an analysis in place where you map, measure, and look at how things flow throughout delivery. "You can do a lot of stuff with a tool, but the tool isn't going to do software or value stream analysis for you. It is a human you need to go in and look at it. There is no tool that is going to go out and look at all the things you do in your value stream and tell you what to do," Knight explained.

Value stream manager is an emerging role in software delivery that aims to add the human element to value stream management. The value stream manager is in charge of making sure everything can flow, removing impediments, and getting releases out the door. According to Knight, a value stream manager has more of a product owner or project manager background and will look at how work gets done, and try to make it more effective.

Brian Muskoff, director of DevOps strategy at HCL Technologies, an IT and digital solutions provider, said a value stream manager also needs to come from a technical background, like a developer or tester, as well as have general management skills. "They need to be a good communicator, organized, have attention to detail, have the ability to take a current situation and identify areas of improvement, and actually deliver on those improvements," he said.

HCL has been experimenting with its own value stream manager internally, who they say is essential to their process. The HCL value stream manager takes on the role of Scrum master or release manager, being in charge of teams to get product shipped out the door, but also focuses on continuous improvement or process improvement. "The value stream manager is very much a necessity," Muskoff said. "We are evolving. We are recognizing the importance of process improvement and formalizing it." ❚
—Christina Cardoza

Knight explained that in order to understand the value stream, it starts with education. "If I were going to start today, I would really try to get as much education about value stream mapping, lean principles, waste and study systems thinking, so I can look at these things and do an optimization exercise, analyze the value stream, map it out and decide what tools I want to put in place based on that," he said.

Once there is comprehensive understanding, HCL's Muskoff explained, the best practice for getting started is to start small. "It is pretty much a best IT practice in general. You want to start small, get some wins and establish a benchmark. Then, try to make some improvements day over day. That is a great place to start."

Improvements should continue to be applied as an organization journeys down the value stream. "The team has to want to get better, and the only way to do that is to take an honest look at your practices and identify areas of improvement, focus on bottlenecks, and exploit those bottlenecks to increase flow," Muskoff added.

CollabNet's Robertson said beginning with objectives and key results (OKRs) can also help companies quickly identify outcome hypotheses and provide more insight into what they are trying to achieve down the line. "That input of taking, identifying and prioritizing those business opportunities will help you start planning, start creating, start delivering and also be able to pivot and learn," he explained.

Additionally, businesses should be able to break down those outcomes and objectives and individually map them through the value stream. Value stream mapping enables users to understand what to look at, find bottlenecks, look into why things are taking so long, and remove waste, according to ConnectALL's Knight.

According to Tasktop's Kersten, in order to get the most value out of the value stream, you should start with the customer. While everyone is jumping on the notion of value stream and how important it is, the way it is being translated into customer results and thought about with customers within a large organization is "completely rife with confusion," he explained. The problem is that organizations don't know where a value stream starts. "You need to be aligned with a customer goal. If your value stream doesn't start with a customer, you are doing something wrong. You have to measure what the customer is seeing in terms of delivery," he said.

It is not only external customers that organizations need to look at, Kersten explained. There are many internal value streams and internal customers that need to be considered also. If you are not treating your internal services as their own value streams, "with its own roadmap, with its own resources, instead of teams behind it, you are not going to help those customer-facing apps. If you are not treating your value stream network, your toolchain itself, as a product, you are also not going to get the kind of results you are seeking," he said.

And don't boil the ocean with millions of metrics, HCL's Muskoff warned. DevOps Research and Assessment (DORA) has provided four key metrics organizations can look at to help bring bottlenecks and waste to light: lead time, deployment frequency, mean time to repair, and change fail rate. Muskoff does note that each organization is going to have its own key performance indicators that are important to it, but the four DORA metrics are a good place to start.

The right tools can help provide that visibility, but they need to be able to see what is happening across the entire portfolio, Plutora's Keyes explained. "If you can't see what is happening in one pipeline, then you can't see what is happening across the portfolio. You need to have visibility into dependencies and related activities," he said.
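Once deployment and incident records live in one place, the four DORA measures reduce to simple arithmetic. A minimal sketch over invented records follows; the record shape and the data are illustrative, not from any real pipeline.

    # dora_metrics.py -- the four DORA measures over hypothetical deploy records.
    # Each record: when the change was committed, when it deployed, whether it
    # failed in production, and how long any resulting incident took to restore.
    from datetime import datetime, timedelta
    from statistics import median

    deploys = [  # illustrative data, not from any real pipeline
        {"committed": datetime(2020, 1, 6, 9), "deployed": datetime(2020, 1, 6, 15),
         "failed": False, "restore_hours": 0},
        {"committed": datetime(2020, 1, 7, 10), "deployed": datetime(2020, 1, 8, 11),
         "failed": True, "restore_hours": 2},
    ]

    lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                  for d in deploys]
    days = (max(d["deployed"] for d in deploys)
            - min(d["deployed"] for d in deploys)) / timedelta(days=1) or 1
    failures = [d for d in deploys if d["failed"]]

    print(f"median lead time:      {median(lead_times):.1f} h")
    print(f"deployment frequency:  {len(deploys) / days:.1f} per day")
    print(f"change failure rate:   {len(failures) / len(deploys):.0%}")
    print(f"mean time to restore:  "
          f"{sum(d['restore_hours'] for d in failures) / max(len(failures), 1):.1f} h")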



Tools should also be able to take into account heterogeneous methodologies, because teams will be doing a mixture of Agile, DevOps and even waterfall. It also shouldn't matter what type of tool a team is using. Data should be standardized across the value stream so you can understand how to relate it all and see what's going on, Keyes said.
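Standardizing data across tools usually means mapping each tool's records onto one common shape before any cross-pipeline reporting happens. A minimal sketch of such a normalization layer follows; the field names on both sides are invented for illustration, since real connectors would speak each tool's actual API.

    # normalize.py -- map tool-specific records onto one common work-item shape.
    # Field names on both sides are invented for illustration; real connectors
    # would speak each tool's actual API.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class WorkItem:            # the one shape all reporting is written against
        source_tool: str
        external_id: str
        title: str
        state: str             # normalized: "todo" | "in_progress" | "done"
        opened: datetime
        closed: Optional[datetime]

    def from_tracker_a(raw: dict) -> WorkItem:
        # hypothetical tracker that reports "Open"/"Closed" statuses
        return WorkItem(
            source_tool="tracker_a",
            external_id=str(raw["key"]),
            title=raw["summary"],
            state="done" if raw["status"] == "Closed" else "in_progress",
            opened=datetime.fromisoformat(raw["created"]),
            closed=datetime.fromisoformat(raw["resolved"]) if raw.get("resolved") else None,
        )

With every tool mapped onto one shape like this, cross-tool reports such as cycle time or counts by state only have to be written once.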

Value stream in 2020

As value stream moves into 2020, more tool vendors are going to take interest and figure out where they fit in the overall pipeline. "2020 will be the year where all these tools start to realize I have to be hooked into the overall toolchain, not from a toolchain perspective but from value stream management," said Plutora's Keyes.

2020 will also be the year organizations start to see more results, according to Tasktop's Kersten. SD Times declared 2019 the year of the value stream, as organizations started to turn their focus to how to better drive value, and a majority of the year was spent getting through the hype and finding clarity around what value streams really were and how they were going to have an impact, HCL's Muskoff explained. Now that there has been more research and understanding in the space, in 2020 "confusions will begin to get unwound, and some of these practices, definitions, and the way to apply value streams will become clear. It will go from this interest and start of the hype to actual company results," said Kersten.

These understandings will come from experimentation and more experience with the value stream. According to ConnectALL's Knight, most companies are not currently at a place where they are getting value stream right. There is going to be some trial and error before they get to a place where it is working, and companies will need to turn to consultants to help them improve.


Putting the value stream together

There are many different components that make up the value stream process. Businesses need to be able to map their value stream, analyze it and manage it.

Value stream mapping refers to the practice of looking at all activities throughout the delivery life cycle and mapping them out, according to Mik Kersten, CEO of Tasktop, a software tool integration company. "The approach we have taken with value stream management is implementing and measuring those product value streams and doing so end-to-end, making that a core part of our management model, operating model and toolchain," he said.

Value stream management refers to the process of managing the value stream. "Watching things flow, looking for areas to remove impediments and actually move things through and managing your value stream is a human process. Part of managing your value stream is you want to do things like value stream analysis. Value stream analysis actually includes doing a value stream map, measuring it, and looking how things flow through your system of delivery. The exercise for all that I call value stream optimization," said Lance Knight, COO at ConnectALL.

Jeff Keyes of Plutora explained that value stream management is not monitoring, it is not a feature management system, it is not a bug management system and it is not a build system. "Value stream management sits as a way to interconnect the entire toolchain under one umbrella to basically create a framework or an ecosystem that the tools can plug into to make them all work together. It serves to interconnect disparate tools, creating scenarios from teams that don't normally talk. For instance the help desk doesn't normally talk to development directly, but using value stream management they can, should, and do," said Keyes. "Helping bring alignment from the business to what is happening in the development world is where value stream management really brings this all together." ❚
—Christina Cardoza
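The measurement step of the value stream analysis Knight describes starts from timestamps: how long work sits in each state between transitions. A minimal sketch over a hypothetical ticket-transition log follows; the longest-lived states are the likely wait states.

    # time_in_state.py -- how long work sits in each state (illustrative).
    # The transition log format is hypothetical: (ticket, new_state, timestamp).
    from collections import defaultdict
    from datetime import datetime

    transitions = [
        ("T-1", "in_progress", datetime(2020, 1, 6, 9)),
        ("T-1", "waiting_review", datetime(2020, 1, 6, 12)),
        ("T-1", "done", datetime(2020, 1, 8, 12)),
    ]

    hours_in_state = defaultdict(float)
    last = {}  # ticket -> (state, entered_at)
    for ticket, state, ts in sorted(transitions, key=lambda t: (t[0], t[2])):
        if ticket in last:
            prev_state, entered = last[ticket]
            hours_in_state[prev_state] += (ts - entered).total_seconds() / 3600
        last[ticket] = (state, ts)

    for state, hrs in sorted(hours_in_state.items(), key=lambda kv: -kv[1]):
        print(f"{state:>15}: {hrs:5.1f} h")  # the big numbers are the wait states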

"What we are going to see is value stream managers start to come to fruition a little more in particular companies. We are going to watch companies look at how their value stream grew organically and they are going to try to improve that. We are going to see a bunch of niche businesses that are keen on value stream management come out and help companies achieve better flow and deliver software quicker," said Knight.

Also expected from value stream management over the next year is the emergence of artificial intelligence and machine learning. According to CollabNet VersionOne's Robertson, the value stream is increasing the amount of data businesses are receiving, and they are starting to get good at assessing the value they are delivering, but the next step is to really understand what that value is. For example, when trying to optimize the backlog and decide what to work on first, an intelligent layer can be added to the backlog to understand and assign business value levels or point to work that needs to be done and created. "Being able to utilize machine learning to help make work not only easier as far as the development and delivery aspect, but also ensuring that what is delivered is optimal for the business," said Robertson.

HCL's Muskoff agreed that machine learning and artificial intelligence will be the next step in value stream. He predicts ML and AI are going to be able to go even deeper into solutions and uncover even more bottlenecks. "We are at the stage with the technology where dashboarding and KPIs are important, but to really get to where we need to go the tool needs to tell you where to focus," he said. Areas where he believes machine learning and AI will be applied are bottleneck detection, planning, and predictive analytics around delivery time.

"More organizations will realize this isn't optional. This isn't just for technology visionaries. This is survival. If you don't know your product value streams then you have no place in the age of software," said Kersten. ❚


How can organizations successfully navigate the value stream with your solution?

Brian Muskoff, director of DevOps strategy at HCL Technologies, an IT and digital solutions provider:
We believe value stream management is an everyday solution to improve collaboration, delivery flow and business results. With HCL UrbanCode Velocity, teams can immediately change the nature of their Agile + DevOps practices with our value stream visualization that we call Dots. This real-time, end-to-end view enables teams to quickly answer many questions, like what are our bottlenecks, are we on track, and what should we work on next? Stand-ups, playbacks and retrospectives will take on a new, better shape.

At the portfolio level, our insights capability provides metrics focused on speed, security and quality over time. Identify your high-performing teams, what they are doing differently, and how you can lift up lower-performing teams. You can't manage what you don't measure, and we make it easy by tapping into the data from your existing tools and creating relationships from end to end.

Mik Kersten, CEO of Tasktop, a software tool integration company:
Value streams begin and end with customers, but what happens in between is often a mystery. That's where Tasktop comes in. Tasktop products and services are primarily focused on flow, which is essential to value stream optimization and management. To understand value stream flow, the teams that plan, build, deliver and support software need a single source of truth into the flow of events, from the earliest stages of product ideation through production, including customer feedback. While a product life cycle seems like a continuous flow conceptually, Tasktop reveals the otherwise hidden wait-states that interfere with value delivery.

Value stream management is not just about delivering value faster; it's about protecting business value by helping IT work collaboratively with the business so they can be more responsive to the market and disruption. Tasktop helps customers navigate this with products and services that provide visibility into how business value flows across product value streams to meet business outcomes. Using Flow Metrics, global enterprises can measure what matters (in real time) to move to a product-centric operating model that changes the perception of IT from a set of projects working as a cost center to a continuous profit generator that truly helps them transform.

Jeff Keyes, director of product marketing at value stream management platform provider Plutora:
Plutora offers a full-stack VSM solution for the enterprise. Its integrations alleviate the need to acquire an entire prescriptive toolchain to bring useful, actionable intelligence to software management. And Plutora is tool agnostic, providing the ability to deliver software with any tool of choice.

Plutora has built a management system for the software delivery process, appropriate for all types of development methodologies and proven at enterprise scale. This approach enables the solution to extend from agile management, discovery and design through to delivery and production. Because of that, Plutora customers benefit from how seamless the VSM platform is across the entire toolchain, all while providing them with an unmatched advantage.

Lance Knight, COO of ConnectALL, a value stream integration company:
We help enterprises of all sizes connect, visualize, and measure software delivery value streams. We connect, integrate, and capture data from all the tools in your software delivery value stream. Over the past decade, there has been an explosion of tools, specifically in the DevOps space, including tools on the right side of your value stream: the tools that verify, package, secure, release, configure and monitor a company's software.

In order to optimize the value stream more effectively and capture data from this explosion of tools, ConnectALL has recently announced its Universal Adapter, which will connect to any solution or tool you use as part of your software delivery value stream, even the ones that haven't been created yet. This ability allows our customers to automate and amplify feedback loops from their DevOps tools to their agile team backlogs. The new Universal Adapter allows ConnectALL to be your value stream control center and helps manage the flow of work through your software delivery organization. ConnectALL can see a developer complete a user story in your planning tool, and at that point, ConnectALL can instruct your code verification tool to analyze the code that was just submitted to it and return details back to your planning tool of any discovered issues.

Eric Robertson, VP of product marketing and management at CollabNet VersionOne:
We are very unique in this space. There are other vendors that come in from different types of spaces. You have vendors from the traditional Agile planning side, project portfolio management, traditional ARA, CD and more. But because we have merged together with VersionOne, we bring a lot of those tools and aspects together under our platform. Together, we have the capability to do value stream mapping and value stream integration.

We have a full enterprise planning and delivery toolset to help users with their enterprise planning, up to the strategic teams and portfolio level, and down to when you start breaking down into features, epics and planning. We have the capability to bring in that data from that right-hand side and map it directly to my strategic themes and objectives and key results that drive business outcomes. We can actually show you that. We can map it full end-to-end. That is our unique value proposition. ❚


A guide to value stream management tools

FEATURED PROVIDERS

■ CollabNet VersionOne is a leading platform provider for Value Stream Management, Agile planning, DevOps and source code management. Its offerings provide global enterprise and government industry leaders a cohesive solution that enables them to ideate, create and orchestrate the flow of value through continuous delivery pipelines with measurable business outcomes.

■ ConnectALL is a company dedicated to helping its customers achieve higher levels of agility, velocity and predictability. Teams from software development and delivery, IT and business units across large and small enterprises worldwide use ConnectALL's value stream integration platform to connect people, processes, and tools from multiple ALM and DevOps providers, such as Atlassian, Microfocus, Microsoft, IBM, Salesforce, BMC, ServiceNow, and more. Designed to break down barriers to continuous delivery, ConnectALL helps companies rapidly create business value by bringing software innovation to market faster and increasing productivity through cross-team collaboration.

■ HCL UrbanCode Velocity is a value stream management platform that integrates with all of your tools, bringing your DevOps data together once and for all. HCL UrbanCode Velocity makes multi-tool data accessible and actionable using a powerful new DevOps Query Language, and a unique "dots" view to quickly spot bottlenecks. HCL UrbanCode is part of HCL Software DevOps, a solutions group that provides enterprise-level security, testing, and continuous delivery software.

■ Plutora provides value stream management solutions for enterprise IT, improving the transparency, speed and quality of software development and delivery by correlating data from across the toolchains and analyzing critical indicators of every aspect of the delivery process. Acting as the "catwalk above the factory floor," Plutora ensures organizational alignment between software development and business strategy and provides visibility, analytics and insights into the entire value stream. This approach guides continuous improvement and digital transformation progress through the measured outcomes of each effort. Plutora ensures governance and management across the entire portfolio by orchestrating release pipelines, managing hybrid test environments, and orchestrating complex application deployments, all independent of methodology, team structure, technology, and level of automation.

■ Tasktop is the only Value Stream Management company that takes a strategic approach to enterprise toolchain integration, connecting the complex network of best-of-breed tools used for planning, building and delivering software at an enterprise level. The backbone of the most impactful Agile and DevOps transformations, Tasktop is an easy-to-use, scalable and reliable tool integration infrastructure that connects, visualizes and measures software delivery value streams to accelerate the time to value of software products and services.

■ CA Technologies: Disparate tools may help an individual or a team do their job, but they impede the progress of the larger organization. With tools that span the application life cycle for planning, build, test, release and putting into production, CA (now a Broadcom company) provides an end-to-end view into the processes and products that deliver value for customers and bring efficiencies to the business.

■ CloudBees Flow, the industry's first unified Application Release Orchestration (ARO) platform built for DevOps at enterprise scale, helps drive IT efficiency by automating and orchestrating software releases, pipelines and deployments with the analytics and insight to measure, track and improve results. The latest update, version 9.1, adds a series of enhancements that make it easier than ever to eliminate release anxiety across the entire software delivery chain.

■ GitLab is a DevOps platform built from the ground up as a single application for all stages of the DevOps lifecycle, enabling Product, Design, Development, QA, Security, and Operations teams to work concurrently on the same project. GitLab provides teams a single data store, one user interface, and one permission model across the DevOps lifecycle, allowing teams to collaborate and work on a project from a single conversation.

■ Intland: codeBeamer ALM is a holistically integrated Application Lifecycle Management tool that facilitates collaboration, increases transparency, and helps align software development processes with your strategic business objectives.

■ Jama Software centralizes upstream planning and requirements management in the software development process with its solution, Jama Connect. Product planning and engineering teams can collaborate quickly while building out traceable requirements and test cases to ensure development stays aligned to customer needs and compliance throughout the process. With integrations to task management and test automation solutions, development teams can centralize their process, mitigate risk, and have unparalleled visibility into what they're building and why.

■ Micro Focus helps organizations run and transform their business through four core areas of digital transformation: enterprise DevOps, hybrid IT management, predictive analytics and security, risk and governance. Driven by customer-centric innovation, our software provides the critical tools they need to build, operate, secure, and analyze the enterprise. By design, these tools bridge the gap between existing and emerging technologies, enabling faster innovation, with less risk, in the race to digital transformation.

■ Panaya: Value Stream Management is about linking economic value to technical outcomes. Though not unique to the enterprise, large organizations have specific challenges and needs: siloed teams, waterfall or hybrid operational modes, as well as many non-technical stakeholders. Panaya Release Dynamix links IT and business teams with an intuitive tool that strategically aligns demand streams with the overall business strategy. ❚



Guest View BY MATT CHOTIN

Embracing a DevOps culture

Matt Chotin is senior director of technology strategy at AppDynamics.

DevOps, which refers to the increased communication and collaboration between development and IT operations, is an ever-changing, sometimes complicated term. While "dev" and "ops" were once siloed with separate philosophies, practices, tools and workflows, they are merging into one. The result? A more efficient, reliable process and product that is helping organizations create stronger ties between all stakeholders throughout the development lifecycle, so it's no surprise that DevOps is rapidly gaining popularity around the world. In my experience, organizations that fail to embrace DevOps do so at their own considerable risk.

Not too long ago, a major real estate developer was looking to solve a critical problem. The company's application kept crashing, and it couldn't figure out why. Essentially, the company's .NET installation was having problems with a third-party web asset management library, which had specific write-to-disk configuration requirements. These requirements were configured properly in the development environment, but not in production. Because developers were siloed from production, with no process for keeping these environments in sync, the company was unaware of the oversight. The end result? The company encountered ongoing performance problems in production and was unable to identify the root cause of application instance crashes that were masked by auto-restart policies.

On a foundational level, a dysfunctional culture was largely responsible for the company's production mishaps, with code being "thrown over the wall" from development to production. Communication between these groups was so poor that a contractor was the primary liaison between developers, operations and management. Additionally, tribal knowledge was lost every time a technical practitioner left the organization. None of the devs knew anything about the troublesome third-party tool, nor how it was being used. Even the contractor, the sole link between the siloed factions, was unaware of the problematic utility and the critical role it played.
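None of this is specific to the company in the story, and a lightweight guardrail against this kind of configuration drift is to diff environment configurations automatically as part of the pipeline. Here is a minimal sketch in Python; the config.dev.json and config.prod.json files and the dotted key names are hypothetical, invented purely for illustration:

```python
import json
import sys

# Hypothetical settings that must match across environments; the dotted
# names here are invented for illustration, not taken from any real product.
REQUIRED_KEYS = {"asset_library.write_to_disk", "asset_library.cache_path"}

def flatten(cfg, prefix=""):
    """Flatten nested config dicts into dotted keys for easy comparison."""
    flat = {}
    for key, value in cfg.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

def main():
    with open("config.dev.json") as dev_file, open("config.prod.json") as prod_file:
        dev = flatten(json.load(dev_file))
        prod = flatten(json.load(prod_file))
    drift = [key for key in sorted(REQUIRED_KEYS) if dev.get(key) != prod.get(key)]
    for key in drift:
        print(f"DRIFT {key}: dev={dev.get(key)!r} prod={prod.get(key)!r}")
    return 1 if drift else 0  # a nonzero exit code fails the pipeline run

if __name__ == "__main__":
    sys.exit(main())
```

A check this simple, run on every build, would have surfaced the write-to-disk mismatch long before it crashed production.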


Embracing a new culture

As we enter a new era of DevOps that takes advantage of collaboration, it is imperative that IT leaders look at the current state of their infrastructure and consider not only the technologies that will further enhance their application environments but also the cultural changes that might be necessary. For starters, communication and knowledge transfer between teams are critical. Agile development practices tend to come with methods of communication available to the whole team, be they daily standups, Scrum or Kanban boards, or narrow Slack channels. The modern DevOps organization should include representatives from all teams in these channels so that everyone can participate and be aware of what is happening, and when.

Embracing automation

The number one lesson from DevOps and Agile is the need for automation. The first automation tool that most organizations adopt is continuous integration (CI), so that code is built early and often. To make this work well, organizations will also standardize environments. Each environment (development, production and so on) should be as similar as possible, paving the way for continuous integration and, eventually, continuous deployment (CD).

Once code is being built regularly, we want to improve testing so that code is of the highest quality before it is deployed. 2019 saw some major innovations in automated testing tools, moving beyond unit tests and making it much easier for organizations to build the functional tests that reflect how users actually engage with applications. Cloud computing has made it possible for organizations to run thousands of functional tests automatically in a short period of time. New analytics tools help organizations understand what code is changing and what needs to be tested, which allows this process to be optimized even further.

Finally, more organizations are embracing modern application monitoring tools that allow both devs and ops to understand how applications are working in production. Overall, this means everyone is contributing to the success of the application, which leads to end-user happiness and better business outcomes.
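To ground that point, here is a minimal sketch of such a functional test, written in Python with pytest and requests; the staging URL, endpoint and response fields are hypothetical stand-ins, not a reference to any product named in this issue:

```python
import pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging environment

@pytest.fixture(scope="session")
def session():
    """One authenticated session shared across the functional suite."""
    s = requests.Session()
    s.headers["Authorization"] = "Bearer TEST_TOKEN"  # placeholder credential
    yield s
    s.close()

def test_user_can_view_order_history(session):
    """Walk the same path a real user takes: load the order history."""
    resp = session.get(f"{BASE_URL}/api/orders", timeout=5)
    assert resp.status_code == 200
    orders = resp.json()
    # A functional test asserts on behavior the user sees, not internals.
    assert isinstance(orders, list)
    for order in orders:
        assert "id" in order and "status" in order
```

The design point is that the test drives the application through its public interface, the same way a user's client would, so it keeps passing (or meaningfully failing) even as the internals change.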



Analyst View BY CHARLES ARAUJO

IT predictions, or parlor tricks?

Charles Araujo is Principal Analyst at Intellyx.

I think it's about time that we addressed the elephant in the room: predictions are a bit of a parlor trick! The problem is that we humans are horrible at predicting the future. There are several reasons for this, including challenges such as optimism bias, the curse of knowledge and distinction bias.

As analysts, we're always focused on what's coming next. That means we're always looking to the future, and the future is changing very quickly! But things develop on the ground much more slowly, and that makes the game of predictions challenging. After all, who wants to predict that things will be much the same as this year, just a little bit different?

With that elephant now gloriously on display, you may wonder why we even bother doing these sorts of prediction pieces if we're sure that they'll be (mostly) wrong. That said, I see three macro trends converging and beginning to settle into some semblance of reality in 2020:

Digital transformation loses its mojo

First, I think that we're going to see the term 'digital transformation' lose its cachet. We're already seeing this start to happen (makes it easy to predict, right?) as tech marketers begin to look for new terms to use in their marketing salvos. A bit like Dr. Seuss's star-bellied Sneetches, no one wants to be talking about digital transformation when everyone is talking about it.

Ironically, however, this is probably going to be a good thing. As leaders get over the hype of the term, they'll start getting down to the real business of actually transforming. The fundamental drivers that have always been at the heart of digital transformation are more real now than ever before, so this is a story that is just getting started.

The customer experience finds its place

One of those fundamental digital transformation drivers is the now-critical importance of the customer experience. The challenge is that most of the industry has taken the term to be synonymous with the sales experience. This view is finally starting to shift (you see what I'm doing here, right?).

As digital transformation starts to get real (see prediction #1), organizations are getting their heads around the fact that the customer experience actually represents the totality of the customer journey — and that it’s at the center of real transformation. As a result, I believe that we’ll see a fundamental shift in 2020 as organizations start to dig into what it will really take to create a differentiated customer experience throughout the entire customer lifecycle. And this reckoning will lead to my third prediction.


Technology evolution gives way to business evolution

Up to now, talk about transformation (digital or otherwise) has really been a conversation about technology evolution. What has been fascinating and gratifying over the last several months, however, is the number of tech companies that are finally moving past the hype and hyperbole and acknowledging both the real role, and the limits, of their given software when it comes to real transformation.

I'm not quite sure if this is because enterprise leaders have finally wised up or if tech company leaders have finally realized that they don't need to oversell anything (we need their solutions!). But whichever it is, it's been a nice change of pace that we're actually talking about the need for business transformation rather than just another technology project. I believe that in 2020 we'll see this come to fruition, as enterprise leaders start to fully embrace the need for business transformation and see technology in its rightful role: as part of the rapid business evolution that they require.

We’re going to see the term ‘digital transformation’ lose its cachet.

The Intellyx take: It’s about time While prediction pieces may be an end-of-year tradition, I’m genuinely excited about what I see happening this time around. Paraphrasing futurist Daniel Burrus, it’s easy to predict the future as long as you see what’s already happening and follow it to its natural conclusion. That’s what I’m doing here this year. These predictions are things that are already starting to happen. The critical question, however, is what it will mean to you. Here’s to a transformative 2020! ❚


Industry Watch BY DAVID RUBINSTEIN

The little dirty data secret

David Rubinstein is editor-in-chief of SD Times.

Our industry has a dirty little secret. Come closer, I'll whisper it to you.

(Much of the data held in organizational databases, warehouses, lakes and stores is not very good.)

There, I’ve said it. Data quality remains a persistent problem for enterprises, and there are many reasons as to why. It could be that fields were filled out incorrectly, or that differences between what things are called are pervasive. Or, calculations that were done and stored have grown out of date or were incorrect to begin with. Do you live on Main St. or Main Street? Is your job software engineer, or just developer, or — as has been seen — code ninja? How do you know if they’re the same thing or not? Or, is 555-555-5555 an actual phone number? It looks like one. It has 10 digits. But is it valid? Is ‘mickemouse@noaddress.org’ an actual email address? It looks like one. But will anything mailed to this address go through or get bounced? What if your company relies on data for annual financial predictions, but the underlying data is fraught with errors? Or what if your company is in trucking, and can’t maximize the amount of goods a truck can hold because the data regarding box sizes is incorrect? Data quality is “something everyone struggles with, just keeping it clean at point of entry and enriched and useful for business purposes,” Greg Brown of data quality company Melissa told me at the company’s offices in Rancho Santa Margarita (mas tequila, por favor!), California. And, because few companies want to talk about the issue, Brown said, “That prevents other people from really knowing not only how pervasive the problem is, but that there are lots of solutions out there.” “A position I was in at another company before this, we’d just get return mail all the time and it was just the cost of doing business,” he continued. “We were marketers but we weren’t really direct mailers. We had enough knowledge to get something out, but we didn’t really know all of the parameters on how to process it to prevent undeliverable mail, and it was just really the cost of doing business. We’d get these big cartons of mail back, and we’d just dump them, and nobody would say anything.” Do people not talk about because they don’t

Do people not talk about it because they don't want their customers to know just how bad their data problem is? "Oh yeah, we have that all the time," Brown said. "It's almost impossible for us to get case studies. The first thing they do with us is slap on an NDA, before we even get to look under the kimono, as Hunter Biden would say. Before we see their dirty laundry, they definitely want the NDA in place. We'll help them out, and they'll never, ever admit to how bad it was."

In some organizations, one team is in charge of making the data available to developers and different departments, and another team makes sure it's replicated across multiple servers. But who's really in charge of making sure it's accurate? "That's where we struggle with talking about the stewards," Brown said. "Do they really care how accurate it is if they're not the end consumer of the data?"

Throwing another wrench into all of this are the data matching rules outlined in the European Union's General Data Protection Regulation, which Brown said seem to contradict well-understood data management practices. "One of the things those guys are saying is typical MDM logic is 180 degrees from what GDPR is recommending," Brown said. "Traditionally, MDM, if it has a doubt about whether or not two records are identical or duplicates, is going to err on the side that they're not. The lost opportunity cost associated with merging them was greater than sending a couple of duplicate catalogs before you could ascertain that. GDPR says, if there's almost any doubt that Julia Verella and Julio Verella are actually the same person, you've got to err on the side that they are the same person. So the logic behind the matching algorithms is completely different."
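To see why that flip matters, here is a toy sketch of the two regimes; the similarity function is a crude stand-in for real MDM matching algorithms, and the thresholds are invented for illustration:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude stand-in for a real record-matching score in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(a, b, regime):
    score = similarity(a, b)
    if score >= 0.95:   # clearly the same record under either regime
        return True
    if score < 0.5:     # clearly different under either regime
        return False
    # The ambiguous middle is where the two philosophies diverge:
    # traditional MDM keeps doubtful records separate, while a
    # GDPR-minded match treats them as the same data subject.
    return regime == "gdpr"

pair = ("Julia Verella", "Julio Verella")
print(f"score = {similarity(*pair):.2f}")
print("MDM merges: ", same_person(*pair, regime="mdm"))
print("GDPR merges:", same_person(*pair, regime="gdpr"))
```

The clear cases behave identically under both regimes; only the doubtful middle flips, which is exactly where Brown says the logic runs 180 degrees apart.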


In software development, we hear all the time that organizations should "shift left" practices such as testing and security. So why don't we, as an industry, shift data quality left? I get that it is a massive undertaking in and of itself, and organizations are making progress in this regard with anomaly detection and more machine learning. But if, as everyone says, data is the lifeblood of business going forward, then data scientists must be open to talking about solutions. Because if your outcome fails because data you're connecting to from an outside source is incorrect or invalid, you, your partners and your customers suffer.