SD Times September 2019






Contents

VOLUME 2, ISSUE 27 • SEPTEMBER 2019

NEWS
• News Watch
• A managed approach can improve the health of open source supply chains
• Progress releases Web Accessibility Guidebook for developers
• Open-source Big Data processing at massive scale and warp speed
• CloudBees announces vision for managing software delivery

FEATURES
• Be (AI) smarter about your digital transformation
• Legacy assets gain new life with low-code integrations (the third of three parts)
• The next wave of API management

COLUMNS
• ANALYST VIEW by Jason English: Is open source the great equalizer?
• INDUSTRY WATCH by David Rubinstein: The winding road to software failures

DEVOPS SHOWCASE
• Going ‘lights out’ with DevOps
• Redgate: Starting a DevOps initiative requires cultural and technology shifts
• Tasktop Illuminates the Value Stream
• Broadcom: Your DevOps Initiatives Are Failing — Here’s How to Win
• Scaled Agile: The Most Important Tool in DevOps — Value Stream Mapping
• CircleCI: Avoiding The Hidden Costs of Continuous Integration
• Bringing Rich Communication Experiences Where They Mattermost
• Instana Monitoring at DevOps Speed

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.




www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein (drubinstein@d2emerge.com)
NEWS EDITOR: Christina Cardoza (ccardoza@d2emerge.com)


SOCIAL MEDIA AND ONLINE EDITORS: Jenna Sargent (jsargent@d2emerge.com), Jakub Lewkowicz (jlewkowicz@d2emerge.com)
ART DIRECTOR: Mara Leonardi (mleonardi@d2emerge.com)
CONTRIBUTING WRITERS: Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz


CONTRIBUTING ANALYSTS: Enderle Group, Gartner, IDC, Intellyx, Ovum

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351 (dlyman@d2emerge.com)
SALES MANAGER: Jon Sawyer (jsawyer@d2emerge.com)

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi (adtraffic@d2emerge.com)
LIST SERVICES: Jourdan Pedone (jpedone@d2emerge.com)


REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

PRESIDENT &amp; CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com



NEWS WATCH

GitHub update adds CI/CD capabilities
GitHub is updating its automation and customization workflow solution to include CI/CD capabilities. GitHub Actions is a community-powered platform; its API enables developers to create tasks and share them with the GitHub community. The new capabilities include:
• Matrix builds to test multiple versions in parallel
• Live logs that show real-time feedback
• The ability to write, edit, reuse, share and fork actions and workflows like code
• The ability to automate common developer workflow tasks such as triaging and managing issues, automated releases, and collaborating with users
• The ability to publish and consume packages from container registries, and access to the GitHub Package Registry before general availability
• Suggested workflows that help developers get started with CI/CD on Actions
• An ecosystem of partners such as LaunchDarkly, mabl, Code Climate, GitKraken, and CircleCI

Infragistics reveals new embedded BI platform
Reveal is an embedded analytics/dashboard platform that aims to reduce the time and money spent on embedding business analytics into applications by letting developers use pre-built components. According to the company, Reveal can reduce development time by 85 percent and cut costs by as much as $350,000. Through the new platform, enterprises can embed the dashboard/analytics engine into their SaaS and on-premise apps with containerized deployment and a microservice architecture. The solution also includes data connectors that allow developers to view insights in real time. Other features include the ability to share dashboards, annotate them, and export them to common formats such as PDFs and PowerPoint.

Businesses struggle to obtain the benefits of AI
Despite the promises of artificial intelligence, companies are still trying to figure out how to stabilize and scale their AI initiatives. A newly released report revealed that while 63.2 percent of businesses are investing between $500,000 and $10 million in AI efforts, 60.6 percent of respondents continue to experience a variety of operational challenges. According to the report, the top reasons for implementing AI initiatives included efficiency gains, growth initiatives and digital transformation. The top issues data science and machine learning teams faced after implementing an initiative included duplicated work, having to rewrite models after a team member has left, justifying the value of the project, and slow and unpredictable AI projects.

SuperGLUE benchmark looks to advance NLP
Artificial intelligence researchers are looking to advance natural language processing with the release of SuperGLUE. SuperGLUE builds off of the previous General Language Understanding Evaluation (GLUE) benchmark, but aims to provide more difficult language understanding tasks and a new public leaderboard. The benchmark was developed by AI researchers from Facebook, Google DeepMind, New York University and the University of Washington. “By releasing new standards for measuring progress, introducing new methods for semi-supervised and self-supervised learning, and training over ever-larger scales of data, we hope to inspire the next generation of innovation. By challenging one another to go further, the NLP research community will continue to build stronger language processing systems,” the researchers wrote.

Samsung Galaxy Fold ready for relaunch
Samsung is finally giving more details about its foldable device, the Galaxy Fold. The company announced plans to launch an improved Galaxy Fold this month, and promises it has fixed the display issues that prevented the device from being released earlier this year. The phone is designed to unfold like a book and accommodates a screen that spans nearly the entire inner surface area (with the exception of a bezel for the camera), and also sports a much smaller 4.6-inch display on the outside to be used for basic mobile tasks, the company explained.

Contract for the Web is becoming a reality
Thirty years after the creation of the web, its creator has released the first draft of his Contract for the Web to gain feedback from web users around the globe and finalize the list of guiding principles in the document. Sir Tim Berners-Lee has called it many things since he announced it at the 2018 Web Summit: a contract, a “magna carta” and a Bill of Rights. However, he says it revolves around one main goal — to bring governments, companies and citizens together around a shared set of commitments to build a better web and save it from abuse.

Stackery releases local Lambda development tool
Stackery is enabling developers to locally debug and develop any Lambda function in any language or framework. The serverless solution provider announced cloudlocal for all, a new capability designed to speed up serverless development. According to the company, the tool is framework-independent, enabling developers to develop and debug any Lambda function they are able to access in AWS. In addition, it invokes any of the 86 AWS CloudFormation resources and can connect to other cloud resources. Developers are not required to have a Stackery account in order to take advantage of its capabilities.



PyTorch 1.2 improves upon TorchScript environment
The open-source machine learning framework PyTorch is tackling production usage in its latest release. PyTorch 1.2 features an update to the TorchScript environment. TorchScript enables users to create serializable models from PyTorch code that can be saved from a Python process. The new improvements are designed to make it easier to ship production models, expand support for ONNX-formatted models and enhance support for Transformers. Improvements include support for a subset of Python in PyTorch models, and a new API for compiling models to TorchScript.
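To make the TorchScript workflow concrete, here is a minimal sketch that compiles a small model with torch.jit.script and saves it for serving outside a Python process; the module and tensor shapes are hypothetical, not from the release notes:

```python
import torch

class TinyClassifier(torch.nn.Module):
    """Hypothetical module; its data-dependent branch is preserved by scripting."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        # Control flow like this is captured by torch.jit.script
        if x.sum() > 0:
            return self.linear(x)
        return -self.linear(x)

# Compile the module to TorchScript and serialize it
scripted = torch.jit.script(TinyClassifier())
scripted.save("tiny_classifier.pt")

# The saved program can be reloaded without the original Python class,
# including from processes that have no Python dependency (e.g. C++)
loaded = torch.jit.load("tiny_classifier.pt")
print(loaded(torch.randn(3, 4)).shape)  # torch.Size([3, 2])
```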

IntelliJ IDEA 2019.2 prepares for the release of Java 13
JetBrains has released a major update to its Java IDE. IntelliJ IDEA 2019.2 is getting ready for the September release of Java 13 with support for the Switch Expressions preview feature and its new syntax, as well as the Text Blocks preview feature. Other Java-related features include the ability to perform the Inline method refactoring, the new find cause action, improved code duplication detection, and code completion that now understands typos.

Microsoft launches new security lab
Microsoft is boosting its efforts to make Azure more secure with the launch of the Azure Security Lab, a set of dedicated cloud hosts for security researchers to test attacks against IaaS scenarios. In addition, the cloud giant is doubling the top bounty reward for Azure vulnerabilities to $40,000. Because the Azure Security Lab is isolated, researchers can attempt to exploit vulnerabilities directly, and scenario-based challenges offer rewards of up to $300,000. The company is also formalizing its two-decade commitment to the principle of Safe Harbor, which protects researchers who report vulnerabilities from legal repercussions and ensures they receive proper recognition. Applications to join the Azure Security Lab are now open.

CROKAGE helps developers find Stack Overflow answers
A group of researchers has developed the Crowd Knowledge Answer Generator (CROKAGE), a new solution designed to help developers easily find relevant information and explanations on Stack Overflow. “Developers often search for relevant code examples on the web for their programming tasks. Unfortunately, they face two major problems. First, the search is impaired due to a lexical gap between their query (task description) and the information associated with the solution. Second, the retrieved solution may not be comprehensive, i.e., the code segment might miss a succinct explanation. These problems make the developers browse dozens of documents in order to synthesize an appropriate solution,” the researchers wrote in a paper. To address this, CROKAGE aims to take the description of a programming task as a query and then provide the relevant code snippets and explanations so that developers can easily use the code in their projects.

IBM releases open-source AI project
IBM is releasing a new open-source project designed to help users understand how machine learning models make predictions, as well as advance the responsibility and trustworthiness of AI. IBM’s AI Explainability 360 project is an open-source toolkit of algorithms that support the interpretability and explainability of machine learning models. According to the company, machine learning models are not often easily understood by the people who interact with them, which is why the project aims to provide users with insight into a machine’s decision-making process.

People on the move

• The provider of the Scaled Agile Framework, Scaled Agile, Inc., announced a new CEO to lead the company through its next phase. Chris James first joined the company in 2015 as president and chief operating officer. “As president and chief operating officer, he successfully transitioned Scaled Agile from a startup to an international enterprise with customer operations in the US, Europe, Latin America, and Asia. With Chris at the helm, Scaled Agile is poised for even greater success,” said Dean Leffingwell, creator of SAFe and chief methodologist at Scaled Agile.

• Shape Security is adding programming language expert Gilad Bracha to its team as a distinguished engineer. Bracha is the creator of the Newspeak programming language, and co-author of a number of books including the Java Language and Virtual Machine Specifications and The Dart Programming Language. He was awarded the 2017 senior Dahl-Nygaard Prize, which is awarded annually by the European Conference on Object-Oriented Programming for outstanding work in the field of software engineering.

• Digital software engineering solution provider Exadel is restructuring its leadership with Igor Landes as its new chief technology officer and Mikhail Andrushkevich as its new vice president of engineering. As CTO, Landes will be responsible for identifying technology trends, defining and implementing technology innovation strategy, and supporting solutions marketing and verticalization. As VP of engineering, Andrushkevich will be responsible for the delivery of Exadel solutions, improving the quality of solutions, enhancing the software development process and developing new leaders within the company.


Be (AI) smarter about your digital transformation
BY CHRISTINA CARDOZA

“Successful digital transformation is like a caterpillar turning into a butterfly. It’s still the same organism, but it now has superpowers.” George Westerman, principal research scientist at the MIT Initiative on the Digital Economy, first used the now-popular analogy two years ago to explain the changes businesses were going through. But over the last few years we haven’t seen many butterflies, and that is because companies still have the mindset of caterpillars. “It’s hard to keep up with your competitors if you’re crawling ahead while they can fly,” according to Westerman.

Digital transformation is just another means to compete in the marketplace. It can be viewed as an overhaul of how the business operates, but the perceived outcomes of a digital transformation are what make it desirable. According to a recent report from SnapLogic, 98 percent of IT decision-makers are committed to digital transformation because they want to increase revenue, market share, and business speed; reduce operating costs and development time; and improve customer satisfaction.

However, the expectations may be too high. The report goes on to reveal that 40 percent of enterprises are behind schedule when it comes to meeting their expectations. Additionally, 69 percent are re-evaluating their digital transformation strategy and 59 percent would do it differently if they had the chance. This unexpected slowdown is causing enterprises to find new ways to get back on track. The report also found 68 percent of respondents are turning to artificial intelligence or machine learning to help speed up digital transformation.

“Digital transformation doesn’t happen overnight, and there’s no silver bullet for success. To succeed with digital transformation, organizations must first take the time to get the right strategy and plans in place, appoint senior-level leadership and ensure the whole of the organization is on board and understands their respective roles, and embrace smart technology. In particular, enterprises must identify where they can put new AI or machine learning technologies to work; if done right, this will be a powerful accelerant to their digital transformation success,” said Gaurav Dhillon, CEO of SnapLogic, an integration platform-as-a-service provider.



One way AI is being used to drive the digital transformation is to sort through the mounds of data coming in from all areas of the business and provide insights into that data in real time. “The sheer amount of data collected every day isn’t usable as an effective tool unless it’s in real time,” said John McDonald, CEO of digital transformation company ClearObject. “There is simply too much data to ingest and analyze. So, training machine learning models to efficiently analyze the data and provide predictive and prescriptive solutions is vital to organizations that want to remain competitive in their industries.”

For example, the self-driving car industry needs to use machine learning to access and analyze data on the fly and make changes that allow the car to maneuver, brake or stop at any given moment. A trucking company can use machine learning to monitor engine performance in real time and be alerted to any issues that may cause it to have to shift plans around, McDonald explained. “Machine learning will continue to advance alongside the ever-expanding digital transformation because they go hand-in-hand in order to be truly effective for data analysis,” he said.

Within development teams, machine learning is being used for predictive maintenance, quality and customer sentiment analysis. According to Kevin Surace, CEO of the AI-driven software testing provider Appvance, machine learning algorithms can study patterns on a network and learn from its behaviors. When something outside the set of natural behaviors happens, it can alert the proper teams. It can also indicate where or if there are things that are out of tolerance, things that are taking too long, and predict whether or not there will be anomalies in the finished product and insert fixes, McDonald added.
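As a rough illustration of the pattern-learning described above, here is a minimal, hypothetical sketch using scikit-learn's IsolationForest; the telemetry features and values are invented for the example and are not from any vendor's product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn a baseline from historical "normal" telemetry
# (illustrative features: requests per minute, average latency in seconds)
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.normal(100, 10, 1000),    # requests per minute
    rng.normal(0.2, 0.03, 1000),  # average latency (s)
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score fresh observations; -1 marks behavior outside the learned patterns
fresh = np.array([[103, 0.21],    # looks normal
                  [480, 1.90]])   # far outside the baseline: alert the team
print(detector.predict(fresh))    # expected: [ 1 -1]
```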

For customer sentiment analysis, machine learning algorithms take in data points and infer how customers might behave based on those data points in order to better serve customers and improve the odds of making a sale.

AI can also be used to help a business be more digital in general. For instance, if a business still operates with paper documents for things like audits, contracts and even lawsuits, it can result in slow, manual and mundane tasks, a recent report from Conga revealed. “Not only is [digital transformation] fundamentally changing how we do things, but it is also changing customer expectations of how we will interact. These rising customer expectations matter more than ever, since the new, connected economy makes it easier to take one’s business elsewhere,” the report stated.

Here, AI is being used to automate the manual document process and analyze the data within the document to provide more insight and help businesses and customers make smarter decisions. “Digital transformation is at the heart of every company’s move to a better bottom line today,” said Surace. “ ‘Every company is a software company’ as you may have heard, and that really means that’s where the new profits must come from, that is, the productivity from letting your software systems drive the bottom line.”

Automating your way through your digital transformation
At the end of last year, research firm Forrester predicted that digital transformation would become more pragmatic in 2019. The organization found that 2018 was full of failure, with 50 percent of digital transformation efforts stalling due to the challenging and costly changes businesses need to overcome. With more understanding and organizational readiness in 2019, Forrester envisioned more tangible efforts such as launching digital products, monetizing data assets and lower-cost digital channels.


“Digital transformation is not just about technology. It’s the necessary but challenging journey of operating digital-first with the speed and nimbleness to change rapidly, exploit technology to create lean operations, and free people to do more complex tasks,” Forrester explained in a post.

Now that we are more than halfway through the year, the organization is seeing just how businesses are taking a pragmatic approach to their digital efforts, and it involves a shift in the conversation. According to Forrester principal analyst Craig Le Clair, instead of digital transformation, businesses are talking about automation. “Along comes this notion of doing rapid digitization through automation,” or robotic process automation (RPA), he explained. RPA is the automation of manual tasks. The reason more organizations are turning to this trend is that it is a better entry point, or “alternative,” for digital transformation.

“The ability to integrate legacy systems is the key driver for RPA projects. By using this technology, organizations can quickly accelerate their digital transformation initiatives, while unlocking the value associated with past technology investments,” said Fabrizio Biscotti, research vice president at Gartner.

Along their journeys, companies are figuring out that digital transformation requires big structural and operational changes that they are not ready for or are struggling to make. With RPA, businesses can start using it in their existing UIs and legacy systems without having to change a whole lot, and begin to move to a more modern way of working that can continue to evolve as time goes on. “All of a sudden you have a digital process without touching those systems because you are operating against the applications that exist on the desktop,” said Le Clair.

Appvance’s Surace added that RPA is coming into play because of its ability to automate repetitive tasks that can then be used later to help the system learn and do more. Le Clair went on to explain that RPA is already being widely used in finance and accounting departments, where repetitive tasks like cutting and pasting spreadsheet fields occupy a lot of a person’s time. “A bot can come in and do the same repetitive task in a fraction of the time,” he explained.

To take RPA even further, businesses are starting to apply machine learning and AI to make repetitive tasks a little more intelligent. Le Clair explained that conversational intelligence can be added to bots in order to interact with customers, understand human intent and perform an action.

Another area where AI can help RPA is in dealing with unstructured content. RPA can only deal with structured or tagged fields, so if you put a layer of text analytics or machine learning on top to be able to decipher the data, RPA bots can take actions based on that data, according to Le Clair.
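A rough sketch of that layering, with invented data and queue names rather than any particular RPA product's API: a small text classifier turns free-form content into the tagged field a bot can act on.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: free-text requests mapped to work queues
texts = [
    "please refund my order",
    "update my mailing address",
    "refund for a duplicate charge",
    "change the address on my account",
]
queues = ["refunds", "account", "refunds", "account"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, queues)

# The predicted label is the structured "tag" handed off to the RPA bot
print(classifier.predict(["I was charged twice, please refund me"]))  # ['refunds']
```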
“In the end, the application of machine learning in the coming years will be a centerpiece of digital transformation. At every company. Because we have decades of data and knowledge which can be learned from, every company can benefit and drive far better productivity,” said Appvance’s Surace.

The changing workforce
All of these new ways of automating manual tasks and removing bottlenecks worry some in the industry that AI and machine learning will take over jobs. According to Forrester’s Le Clair, it is a real fear. “The robots that are going to restructure the workforce are the ones you can’t see. The invisible robots,” he said. These “invisible robots” are in the form of virtual agent software, RPA, bots and machine learning, and while they won’t impact all jobs initially, they will slowly begin to replace some, Le Clair explained.

However, ClearObject’s McDonald explained that it isn’t something to panic about, because that “has been the entire point of technology since the beginning of technology. What is different about machine learning is the rate, not the dynamic that it is happening.” For instance, when cavemen created fire, they didn’t stop figuring things out. It freed them from figuring out how to heat their food, caves or homes and stay alive in the winter so they could turn their concern and focus to other things, McDonald explained.

It’s not about whether AI is going to replace some jobs; it is about redesigning the education system to train and retrain people for other job roles, he explained. “We have an education system that was designed to teach you what you need to know for a career that at the time it was expected to be what you would do for life. That is not the case anymore in a society where technology is rapidly taking over a lot of those jobs. You are going to need to be retrained for different jobs multiple times in your life,” he said. “The speed and lack of a system to retrain people multiple times throughout their life is actually the root of the problem and the fear that people have. Not that technology takes away jobs, because it always has. It is, ‘What am I going to do because there isn’t an education system to help me or relieve me of that pressure, pain or concern.’ ”

However, Forrester’s Le Clair doesn’t believe that the traditional education system will be able to help prepare workers and companies for the different processes and programs they will work with in the future. Instead, he stated more education and certification needs to come from the private sector itself.



Legacy assets gain new life with low-code integrations
The third of three parts
BY LISA MORGAN

The maturity of an organization and rapid changes in technology have dovetailed to result in legacy assets that continue to drive value for some time. One of those technology advances has been in low-code/no-code solutions, which offer enterprises the ability to modernize their older applications through integrations. While old-school, hand-coded integrations with legacy systems are still necessary in some cases, low-code/no-code platforms tend to provide a variety of connectors out of the box.

“There’s an important distinction between low-code tools and enterprise low-code tools,” said Jason Bloomberg, founder and president of industry analyst firm Intellyx. “The enterprise [platforms] have more sophisticated integration capabilities and more enterprise-centric security and regulatory compliance as well.”

Enterprise low-code tools also enable developers to build custom connectors. In fact, building connectors is one of the main reasons developers still drop down into code when using a low-code tool.

“Even with one of the lower-end low-code products, if you want to integrate with something and the tool doesn’t do it out of the box, you can do it with a traditional integration. You can build to the API, or if there is no API, you could leverage a legacy integration tool to expose the API and hand-code a connector so you can work in the no-code environment, but that’s a lot of trouble,” said Bloomberg. “It’s easier if the tool gives you a low-code way of doing that, but that’s difficult for the tool vendors to create.”

Building a custom connector is fairly straightforward. However, legacy data integration can be a challenge given different data formats and contexts. It’s important to understand the metadata associated with the data schema, which some low-code/no-code tools do.
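To illustrate what hand-coding a connector can involve, here is a minimal sketch in Python; the endpoint, field names and payload shape are hypothetical, and a production connector would also need paging, retries and error handling:

```python
import requests

class LegacyOrdersConnector:
    """Hypothetical hand-coded connector for a legacy REST API.

    Normalizes the legacy payload into the flat records a
    low-code/no-code tool expects from a custom connector.
    """

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def get_orders(self, since: str) -> list:
        resp = self.session.get(f"{self.base_url}/orders", params={"since": since})
        resp.raise_for_status()
        # Map legacy column names to the schema the low-code tool consumes
        return [
            {"id": o["ORD_NO"], "customer": o["CUST_NM"], "total": float(o["TOT_AMT"])}
            for o in resp.json().get("orders", [])
        ]

# Usage (hypothetical endpoint and key):
# connector = LegacyOrdersConnector("https://legacy.example.com/api", "secret-key")
# print(connector.get_orders(since="2019-01-01"))
```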

“Some products on the market are getting more sophisticated with that, using the UI to interpret what the metadata tell you about various integration challenges,” said Bloomberg. “In many cases, you have some sort of legacy application that has a database that supports it. You could bypass the application and send a SQL query to the database, but you often don’t want to, because you lose the application’s business logic. When the application is maintaining consistency between tables, going directly to the database can cause problems.”

Meanwhile, low-code/no-code development is extending out beyond applications to include robotic process automation (RPA), which makes sense since low-code/no-code is often used (particularly by business users) to improve workflows and processes. Some of those workflows and processes involve legacy assets.
“RPA tools are in large part glorified screen-scraping tools. They’re particularly useful where you have an application interface where there isn’t an API you can easily program to,” said Bloomberg. “You want to mimic what a user would do, which is a kind of screen-scraping. Generally speaking, low-code players partner with the RPA players to offer this consolidated low-code environment that can essentially script interactions with legacy user interfaces.”
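In code, that kind of screen-level mimicry looks roughly like the sketch below, assuming the pyautogui package; the coordinates and field values are illustrative, and real RPA products add recorders, selectors and recovery logic on top of this idea:

```python
import pyautogui  # assumes the pyautogui package is installed

# Mimic a user keying an invoice into a legacy desktop app that has no API.
# Screen coordinates are illustrative; a real bot would locate controls robustly.
pyautogui.click(240, 180)          # focus the "Invoice No." field
pyautogui.typewrite("INV-10042")   # type the value a human would
pyautogui.press("tab")             # move to the next field
pyautogui.typewrite("1499.00")     # enter the amount
pyautogui.click(420, 520)          # click the legacy app's Save button
```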

The impact on shadow IT
On one hand, it could be argued that low-code/no-code platforms reduce shadow IT because line-of-business (LOB) users can build their own applications. While low-code/no-code may help reduce shadow IT, it won’t eliminate it completely. For one thing, shadow IT dates back at least to the 1980s. The modern workforce is more tech-savvy than its predecessors, and as a result, today’s end users have definite opinions about user experiences. If they don’t like the tech available at work, they’ll find another solution, whether it’s devices, applications or low-code/no-code platforms. Also, because LOBs now tend to have their own IT budgets, they’re empowered to procure their own tech, including low-code/no-code platforms.

The proliferation of low-code tools, and subsequently low-code applications, is often nowhere on IT’s radar until an issue arises, so in effect, IT lacks complete visibility into the enterprise’s application portfolio. In addition, a patchwork of low-code/no-code solutions could mean that different departments are duplicating development efforts, building functionality or optimizing tasks and workflows that could be reusable, with or without modifications. Security issues can also arise.


For example, a recent Oracle/KPMG study found that 93 percent of respondents are dealing with shadow IT. Half cited a lack of security controls and misconfigurations as common reasons for fraud and data exposures. Meanwhile, 26 percent said cloud services are their biggest cybersecurity challenge.

Some organizations are attempting to manage risks and improve operational efficiency by taking a platform approach to self-service, including low-code/no-code development. That way, the appropriate enterprise controls are in place while end users or departmental IT have the freedom to build their own low-code/no-code applications.

Fintech company NES Financial standardized on the OutSystems low-code platform because it wanted an enterprise-class tool capable of supporting regulatory compliance. IT maintains control of enterprise data and then provides APIs that can be leveraged by departmental applications.

“The big benefit of it is now all the shadow IT organizations use the same tools, so you get the same branding, styling and cross-utilization of capabilities that one department develops — tools, technologies, dashboards, all sorts of things,” said Izak Joubert, CTO at NES Financial. “In the bigger scheme of things, low-code platforms probably mitigate risk and actually enhance our capability to deliver on the vision… whereas more point solutions in my estimation probably hurt an organization in terms of shadow IT because it’s not well-controlled. There’s no one that centrally looks after the company and prevents people from running with scissors.”

Large enterprises have increasingly adopted a hub-and-spoke IT model so there is a center of excellence and dedicated, department-specific IT or software development resources. That way, departments get what they need while the core enterprise assets remain protected and managed by a central function. Centralized governance is necessary to ensure security, compliance, data integrity, etc.


At the same time, centralized governance should not interfere with user experience, particularly if citizen developers are creating applications. If the user experience becomes awkward, sluggish or unresponsive, users will be inclined to find another alternative.

“If you can get in the ideal world where you have a big enough organization with these departmental IT organizations and you train them up to use the same low-code platform to develop their applications, now you have a relatively well-controlled processing place for a shadow IT organization, because IT can still protect the data and implement corporate policies and you only expose those APIs in a controlled fashion to the shadow IT organizations,” said Joubert.

Is low-code/no-code conducive to CI/CD?
Organizations are maturing from Agile to DevOps and CI/CD with the goal of releasing higher-quality software faster. However, the need for speed has not necessarily been reflected in a shift from hand-coding to low-code development. “DevOps has always been focused on this hand-coding mentality,” said Jason Bloomberg, founder and president of industry analyst firm Intellyx. “It’s increasingly difficult to do DevOps without low-code because the hand-coding is the bottleneck.”

As with all other tools, low-code/no-code platforms are adding machine learning and AI capabilities to increase coding efficiency, but they still aren’t capable of handling every nuance enterprise application development and delivery entails. Generally speaking, organizations are happy to talk about their CI/CD efforts or their low-code/no-code development efforts, but it’s a rare occurrence for organizations to talk about both simultaneously, at least for now.

“A lot of times the tools to continuously deliver are built into the product. But those tools become inefficient when you try to do something that the tool isn’t expressly designed to do,” said Justin Rodenbostel, VP of delivery at digital transformation agency SPR. “As long as you can run it on the command line, you can automate it and you can schedule it to run continuously or based on some hook. Doing the same thing with low-code/no-code tools is totally dependent on the capabilities of the individual tool, so you’re coupling your CI/CD capabilities with the low-code/no-code tools that you’re using, whereas in the custom dev world, those two things can be totally separate. You can use the same CD tools to deploy everything or you can have everything use their own CI/CD tool. There’s a limitless number of combinations there.”

In many cases, low-code/no-code tools are being used at small companies that have few, if any, development resources. Generally speaking, they’re not doing DevOps or CI/CD. They’re just happy building custom applications quickly without development resources or with very limited resources. “When things become too complicated or when you need to grow out of the low-code/no-code solution and replace it with something custom, that’s when [low-code/no-code] tools lose their value proposition extremely quickly,” said Rodenbostel. “If you have a complex environment to deploy to, you’re going to run into roadblocks. If you have heavy regulatory constraints, SEC or the Department of Energy, you’re going to run into big problems. Anytime you have to do something out of the ordinary, buyer beware.”
— Lisa Morgan

Low-code/no-code as shadow IT
On the other hand, it can also be argued that low-code/no-code tools are a form of shadow IT because LOBs are circumventing IT when they procure such tools. Going around IT is the very definition of shadow IT.

“In the no-code space, the risk of exacerbating shadow IT is greater [than low-code] because the no-code space is a next generation of tools like Access and Lotus Notes, where users could build applications and the IT organization wasn’t aware of them, and couldn’t ensure they were secure. They might be redundant applications, so there are all these shadow IT issues,” said Intellyx’s Bloomberg. “A lot of these tools are trying to be proactive so you can build applications but someone will put governance in that will allow IT to manage the quality of the applications or ensure that security policies are being enforced.”

Whether low-code/no-code tools eliminate or exacerbate shadow IT has less to do with the tools and more to do with the way companies operate. An important aspect of that is the relationship between IT and lines of business – whether they’re in the habit of working in partnership or whether IT is seen as a bottleneck or obstacle to what LOBs want to achieve. Another factor is marketing. No-code tools in particular are often touted as requiring no IT involvement, which may not be the case, especially when connecting to legacy systems and data sources.



Why family businesses choose low-code/no-code

BY LISA MORGAN

The typical family-owned business tends not to be known for its software development prowess. Instead, it tends to be known for serving a specific customer need such as manufacturing embroidered patches or distributing food supplies. Such are the core competencies of embroidered patch manufacturer A-B Emblem and food service distributor Flanagan Foodservice. Both companies are using low-code/no-code tools to modernize their businesses.

Flanagan Foodservice wants flexibility
Flanagan Foodservice is the largest family-owned company in Canada. It distributes food items to restaurants, hotels, institutions and care homes in Ontario. Flanagan used the WaveMaker low-code platform to develop an e-commerce application that replaces what it had built previously using the Oracle platform. Oracle remains the go-to database, but the company wanted a solution that enables application development in the browser that’s deployed to the browser. “In some cases, what you see is not what you get. We were also trying to decouple ourselves from certain homogeneous-type environments so we could connect to multiple back-end databases,” said Jerry Braga, senior programmer analyst at Flanagan Foodservice.

Had Oracle offered the capabilities it does today, Flanagan might have kept its application development on the Oracle platform rather than moving to low-code. At the time, the Flanagan development team felt constrained by Oracle’s capabilities and wanted greater flexibility moving forward. “Every application needs its own business logic that has to be integrated into it, but for a pure front-facing thing, [low-code] was the quickest way to give us the most current type of user experience without having to do a lot of plumbing,” said Braga.

The development team lacked significant experience with Angular, CSS and HTML, but according to Braga they didn’t need it. Using the low-code platform, they were able to take advantage of those technologies without having to learn them first. Interestingly, the more they used the low-code platform, the better they understood the underlying technologies. Braga said another attractive benefit was the ability to build an application once and deploy it to different devices, including tablets and phones. Developers are still needed to code the minor percentage of functionality that the low-code platform does not provide. Of the 10 people Flanagan has in IT, three are developers.

Flanagan’s e-commerce application design is more like a wrapper because the business logic still resides in the database, like the original application. The business logic could have been extracted from the database, but developers didn’t want to reengineer the application architecture. “Later, if we want to move the database platform to another vendor, then we’ll revisit how we replace the things that are being called directly in the database,” said Braga.



For now, Flanagan’s development team is content with the low-code application’s performance, availability and scalability.

A-B Emblem ditches management by spreadsheets
Management by spreadsheets is an unwieldy process. Yet some organizations still do it. One of them used to be A-B Emblem, which produces embroidered patches for NASA, Boy Scouts, Girl Scouts and thousands of other organizations. A-B Emblem used the Kintone low-code/no-code platform to create its own supply chain management system and to digitize some of its business processes.

Leading low-code/no-code development is compliance officer Heather Johnson, who originally joined the company as an IT intern. Since then, A-B Emblem has outsourced most of its IT services and Johnson moved into her compliance officer role, which now includes low-code/no-code development for the entire company. “Even with my degree in computer IT, I was not familiar with low-code,” said Johnson. “I helped our purchasing manager with a very specific problem. That’s how I became introduced to the low-code/no-code movement.”

A self-professed “software junkie,” Johnson had a habit of downloading software that she tried to break. Eventually, she stumbled upon the low-code/no-code platform, which she tried on a trial basis. Within the 30-day trial period, she was able to build the supply chain solution. The solution was implemented after signing the contract with the low-code/no-code vendor. Johnson later trained A-B Emblem suppliers on its use. Now, Johnson is in the process of building a custom low-code ERP solution. Since a formal IT function no longer exists in the company, application developers tend to be citizen developers.

Both A-B Emblem and Flanagan Foodservice have required the assistance of their respective low-code/no-code vendors to meet their goals, since neither company has significant in-house developer resources. Flanagan needs help coding what can’t be done with the platform; A-B Emblem needs application development experience it lacks. Johnson said A-B Emblem’s vendor has helped expand the scope of low-code/no-code use cases. For example, the Microsoft Excel-based order system was replaced with a custom low-code application.

“Our entire quoting system was in Excel forms originally with a sick amount of VBA code. It was a beast,” said Johnson. “When we started putting things in [the low-code/no-code platform] we found that we had all these extra check points because we didn’t trust the system we’d built [in Excel]. Instead of having thousands of rows in one database, [we had] thousands of databases in one row, which is ludicrous, and they weren’t even connected.”

A-B Emblem maintains very little stock on hand because most customer orders are very small one-time orders that can be unprofitable. The low-code approach has enabled the company to adopt a Vistaprint-type ordering model in which customers can upload designs and order as many patches as they need. Within the company, low-code/no-code development has enabled the rapid implementation of application features and enhancements. According to Johnson, most requests can be handled on the spot in a matter of minutes as opposed to days or weeks.

The business process improvements enabled by low-code mean fewer humans are required to do the same amount of work. However, as a family-owned business, A-B Emblem may be more sensitive to the personal impact of change on its employees, so it has made a point of moving displaced workers into new roles. “One of the things I’m adamant about is not to forget the people element of this,” said Johnson.

Low-code/no-code isn’t perfect
It’s important to understand the capabilities and limitations of a low-code/no-code platform before adopting it. While business users and developers can accomplish a lot using such tools, traditional development expertise is typically necessary to deal with customizations.

“Low-code/no-code can be customized only to some extent, and legacy systems written with code, libraries and APIs more often than not have complex business rules that are hard to port to no-code platforms,” said Sebastian Dolber, CEO and founder of software development service provider Astor Software. “Another big limitation/red flag with these new platforms is vendor lock-in. Once you develop your app, it’s almost impossible to move to another platform/provider.”

Although there are low-code/no-code tools aimed specifically at professional developers, web developers and LOB users, the vendors are actively looking to address other markets. That means the high-end tool providers are offering (or may plan to offer) simpler tools for web developers and/or LOB users. Meanwhile, no-code (simpler to use) tool providers are targeting web developers and/or professional developers.

While low-code/no-code tools aren’t perfect, they help enable faster time to market. Moreover, with digital transformation, software requirements are outpacing the capabilities of traditional software teams because there just aren’t enough programmers to handle all the work. Low-code/no-code tools are seen as a solution to the problem, but it’s important to approach democratized application development in a sound manner that manages potential risks while enabling new opportunities. Generally speaking, low-code/no-code development must be supplemented with traditional coding expertise to achieve its goals. Moving forward, developers may find themselves under pressure to utilize low-code tools if hand-coded software development is viewed as a bottleneck to delivering business value.


INDUSTRY SPOTLIGHT

A managed approach can improve the health of open source supply chains
BY JEFFREY SCHWARTZ

The rise in attacks against the software supply chain is one outgrowth of vulnerabilities in open-source code that go unnoticed and therefore unpatched, a problem that has escalated despite the best efforts of enterprise development teams. As many recent high-profile breaches have underscored, it takes little for an overlooked patch to wreak havoc. Even organizations that follow recommended secure development life cycle processes are finding themselves overwhelmed by the complexity of keeping up to date their modern, business-critical systems and applications built with open-source components from a broad array of providers. Accordingly, this raises the risk of vulnerabilities compromising the software supply chain. Supply chain compromises increased 78 percent last year, according to the most recent Symantec 2019 Internet Security Threat Report.

The attacks on the software supply chain haven’t let up. Last autumn, users of the event-stream JavaScript library, employed by large open source projects and commercial codebases all over the world, discovered a vulnerability in the package caused by a malicious actor who had taken over as the project maintainer and was trying to steal Bitcoin. This year, compromised projects included Webmin, where thousands of public-facing web servers were potentially impacted, and a set of eleven Ruby libraries, where code was inserted to mine for cryptocurrency.

Often under the radar when it comes to these software supply chain vulnerabilities are the limitations in managing these open-source components. Donald Fischer, co-founder and CEO of Tidelift, a startup that removes the burden of managing open source dependencies from dev teams, said that 70 percent of the software in modern enterprise-developed applications and their underlying systems now consists of open-source components from various package repositories and community-led efforts rather than big projects that have direct corporate backing.

Many of the maintainers behind these projects aren’t paid to keep them up to date, so they lack the incentive to add new functionality over time or even to patch them when vulnerabilities are discovered. Consequently, tracking patches from the disparate communities that build and maintain these components, or forking and patching projects themselves, can take up to 20 percent of developers’ workdays, according to research conducted by Tidelift.

Managing open-source packages requires developers to properly choose the right packages in the first place. It also requires that dev teams understand how well these packages are being maintained and who is doing the maintenance work. While some might come from established providers, many open-source components are from a single maintainer.

“Maintenance is probably for many the second biggest chunk of time developers spend in their day, and a lot of that is open-source related maintenance,” Fischer told SD Times. “If you could take that work off of these people’s plates, pay the people who actually are in the best position to do the maintenance and give developers that time back, imagine what great things could be done.”

While this is not a new problem, Fischer says some companies are now responding with managed open source solutions that support open-source components to commercial standards. Tidelift’s subscription-based service lets dev teams effectively offload the licensing, security and maintenance aspects of open-source components.

Led by open-source industry veterans, many of whom were on the original Red Hat Enterprise Linux team—including Fischer—Tidelift has partnered with a network of developers who typically are the original creators and maintainers of open-source components. Maintainers collaborating with Tidelift, or “lifters,” are compensated to deliver vetted updates as they’re released, and then Tidelift delivers them to its subscribers. As part of the service, Tidelift helps organizations select and identify all the components within an environment. The service also draws on knowledge from Tidelift’s database of information on 3.3 million open-source packages.

Tidelift connects to a customer’s software development lifecycle by linking to their code collaboration environment, such as GitHub, Atlassian Bitbucket or other source code control or development tools, including the Microsoft tool suites. “We help ensure that with the open-source software that they’re consuming, they don’t pull in anything with a known security vulnerability,” Fischer says. “When they pull a new package into their application, we ensure that the open source licenses are clearly articulated, and in compliance with their organization’s policies. And we ensure that when they add a package to their application, it works and somebody’s on the hook to keep it working in the future.”
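As a toy illustration of the kind of check such services automate, the sketch below compares pinned dependencies against a tiny advisory table; the data is made up, loosely echoing the incidents mentioned above, and real services maintain curated databases far beyond this:

```python
# Toy advisory feed: (package, version) -> known problem. Entries are
# illustrative, based loosely on the incidents described in this article.
advisories = {
    ("event-stream", "3.3.6"): "malicious release from a rogue maintainer",
    ("rest-client", "1.6.13"): "compromised release inserting a backdoor",
}

# A project's pinned dependencies (hypothetical manifest)
pinned = [("event-stream", "3.3.6"), ("left-pad", "1.3.0")]

for name, version in pinned:
    problem = advisories.get((name, version))
    if problem:
        print(f"ALERT {name}=={version}: {problem}")
```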


Progress releases Web Accessibility Guidebook for developers
BY CHRISTINA CARDOZA

There are a number of things developers have to consider when developing for the web, but one thing that may not get enough attention is the accessibility of their application. The notion of web accessibility is not new, with the World Wide Web Consortium (W3C) publishing the first Web Content Accessibility Guidelines in 1999, but it is still something not enough developers are putting effort into. As a result, Progress has released new guidelines to help developers better understand web accessibility and how they can develop their apps to adhere to it.

“The web is the ultimate equalizer. If you have access to an internet connection, you have access to essentially unlimited knowledge at your fingertips, no matter who you are or where in the world you may be from. Unfortunately, this is not actually true for folks that need assistance to access the Internet. While accessibility standards have been around for some time, it hasn’t been until recently that accessibility has become a focus for websites and applications,” said Carl Bergenhem, product manager for Kendo UI at Progress.

Additionally, it’s hard to build something that developers don’t understand, it takes a lot of work to maintain, and a majority of developers’ clients aren’t necessarily going to be affected by a disability, Bergenhem explained. “Developers are struggling with implementing accessibility compliance because application requirements are becoming more complex while deadlines stay the same or get even tighter. Ensuring accessibility compliance takes time and dedication. Unfortunately, as projects fall behind and deadlines loom, accessibility is one of the first things to go,” he said.

The Progress Web Accessibility Guidebook for Developers is designed to make web accessibility as much of a priority as any feature or bug fix. The guidebook covers why accessibility is important, current legislation that is working to make accessibility a mandatory feature, types of disabilities and accessibility best practices, and an introduction to assistive technology.

The latest version of W3C’s Web Content Accessibility Guidelines adds new criteria for low-vision requirements and improves guidelines around cognitive, language and learning disabilities. It also goes over four principles of accessibility: perceivable, operable, understandable, and robust.

“While it is nice to read over and discuss the standards and how to follow them, we cannot improve our knowledge around accessibility without actually attempting to implement it. Learning by doing is the only true way to improve. So, attempting to follow accessibility standards, testing the result to see what feedback is provided, and implementing the suggested improvements will help any developer become more familiar with accessibility,” said Bergenhem.

Other resources:
• Understanding the Web Content Accessibility Guidelines by Mozilla
• A free web accessibility course by Google
• WebAIM (Web Accessibility In Mind), an organization dedicated to accessibility education

Report: Majority of websites are inaccessible to blind users
Despite efforts to make the web more accessible for people with disabilities, cognitive impairments and vision/hearing difficulties, there is still a digital divide. A new report from accessibility software company Deque Systems, conducted by Nucleus Research, found that websites in certain industries are largely inaccessible to people who have trouble with vision. After interviewing 73 blind adults, the researchers found that around 70 percent of sites in the e-commerce, news and information, and government categories had significant accessibility issues, prompting users to take their business to rival sites.

The research also found that internet users who are blind abandon two internet transactions a month because of inaccessibility, call a company’s customer service department once a week to navigate around accessibility issues, and that fewer than one in three websites had clear contact information or a means for a consumer who is blind to report accessibility challenges or request assistance.

According to Deque, this divide results in a $6.9 billion missed market opportunity. While many people with these disabilities use screen readers or screen magnifiers to navigate websites, many websites are not built with accessibility in mind and aren’t optimized to work with those tools, the company explained. The report did find that well-known companies such as Amazon, Best Buy and Target excelled in fixing accessibility issues in the e-commerce space.

“A focus on accessibility needs to be a core part of the website design and development process,” said Preety Kumar, CEO of Deque Systems. “Considering accessibility as early as the conception phase, and proactively building and testing sites for accessibility as they move towards production, is significantly more effective than remediating it later, helping organizations save significant time and resources while avoiding unnecessary customer grievances.”

“Besides the moral dilemma and legal risk, businesses with inaccessible websites are missing a huge revenue opportunity by ignoring an untapped market,” said Kumar.
— Jakub Lewkowicz


INDUSTRY SPOTLIGHT

Open-source big data processing at massive scale and warp speed
BY ALYSON BEHR

HPCC Systems (High Performance Computing Cluster), a dba of LexisNexis Risk Solutions, is an open-source big-data computing platform. Flavio Villanustre, vice president of technology and CISO at LexisNexis Risk Solutions, explained that HPCC Systems evolved out of necessity.

"In 2000 we were getting into data analytics, using the platforms, databases, and data integration tools that were available at the time. None of these tools would scale to handle the quantity of data and complexity of processes that we were doing," he said. "That drove us to create our own platform, now known as HPCC Systems, a completely free, end-to-end big data platform."

According to Villanustre, Accurint was the first product to utilize the platform. Accurint began as a data lookup service that took large amounts of data from numerous data sets and provided basic search capabilities to other companies and organizations. Today, Accurint has evolved and developed capabilities to help detect fraud and verify identities.

The open-source HPCC Systems platform is a programmable Big Data store made up of two major cluster processing environments: Thor and ROXIE. Each can be used independently, but the real power of the system comes when they are used together, seamlessly allowing data analysts and data scientists to fulfill the overall data lifecycle, from acquisition to delivery, in one homogeneous and consistent platform.

Thor, an homage to the Norse god of thunder, is the data refinery on the left side of the data pipeline, able to ingest massive amounts of data. Its job is the general processing of large volumes of any type of raw data: performing ETL (extract, transform, load), data cleansing, normalization and hygiene, and the data integration process, either rules-based or probabilistic. Challenges that this part of the data pipeline can pose include timely processing of huge data volumes, non-stop operation, expressiveness for extremely complex transformations, and managing the complexities of parallel processing tasks.

ROXIE, the other parallel data processing component in the HPCC Systems platform, provides rapid data delivery capabilities for online applications through web services, using a distributed indexed file system. It functions similarly to Hadoop with HBase and Hive added, but is significantly faster, according to Villanustre. Security, fast response time and scalability to huge numbers of clients are just a few of the data delivery challenges it addresses.

Villanustre said, "Thor is the data refinery engine of HPCC Systems that performs massive batch-oriented data processing. It can easily profile, clean, enhance, transform and analyze mixed-schema data. All of the data and models are then taken into the data delivery system called ROXIE, which can provide high performance, highly concurrent, highly reliable data querying and delivery strategies in massive data stores."

Both platform components use the HPCC Systems Enterprise Control Language (ECL). "HPCC Systems leverages several strengths that are inherent to ECL," Villanustre said. "ECL is a declarative dataflow programming language that disallows side effects in functions, ensuring referential transparency. This fact, combined with a number of other capabilities, allows your ECL work to compile into a highly efficient machine code version of that program and ensures that the parallel process across the platform will run as fast as possible. In lower-level programming languages like Java for Hadoop, or programming languages that are more imperative in nature, it can be quite challenging to implement a program efficiently in a distributed platform where processing close to where the data resides (data locality) is key to the overall performance of the system."

There are several benefits for developers using HPCC Systems. Villanustre pointed out a key difference: "The HPCC Systems platform gives you a single homogeneous data pipeline. This significantly reduces the effort necessary to install and manage the platform. Above and foremost, this eliminates that dependency nightmare that people managing other open source big data platforms usually suffer when patching and upgrading their systems." He also said that ECL helps data analysts, data programmers and data scientists focus on solving the problem at hand rather than dealing with the underlying platform details.

HPCC Systems Community Edition, as a platform, is completely open source and licensed under the Apache 2.0 license. As such it's free to download, use and change, and it gives users the power to "look under the hood" to understand how things are done. z
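No ECL code appears in the article, but the declarative, side-effect-free style Villanustre describes can be loosely illustrated in Python. This is an analogy of our own, not HPCC Systems' API: each step is a pure function over records, so an engine (as the ECL compiler does on HPCC) is free to optimize the composition.

    from functools import reduce

    def clean(records):   # normalization / hygiene
        return [{**r, "name": r["name"].strip().title()} for r in records]

    def dedupe(records):  # a crude stand-in for the integration step
        return list({r["name"]: r for r in records}.values())

    def enrich(records):  # add a derived field
        return [{**r, "name_length": len(r["name"])} for r in records]

    def pipeline(records, steps):
        # Pure steps with no side effects: an optimizing engine could fuse,
        # reorder or parallelize this composition without changing the result.
        return reduce(lambda data, step: step(data), steps, records)

    raw = [{"name": "  ada lovelace "}, {"name": "Ada Lovelace"}]
    print(pipeline(raw, [clean, dedupe, enrich]))
    # -> [{'name': 'Ada Lovelace', 'name_length': 12}]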


DEVOPS WATCH

CloudBees announces vision for managing software delivery
BY CHRISTINA CARDOZA

CloudBees announced its vision for software delivery management (SDM) at its annual DevOps World | Jenkins World conference in San Francisco last month. SDM is an ongoing trend that aims to help organizations connect their entire business through delivery, teams, tools and technologies.

"What we are observing in organizations is a lot of them have acquired a lot of different systems to develop and deliver better, and different teams have different needs, and that leads to a lot of silos among those teams and even within teams," said Sacha Labourey, CEO and founder of CloudBees.

As a result, the company announced the early preview version of its CloudBees SDM Platform. The platform is designed to tie together all of the artifacts, data and events within an organization's DevOps toolchain and bring them together in a unified system of record.

"There are many things you want to know about your organization. Sometimes you want to know why it is not working. Why is it not working fast enough? Where are the bottlenecks? How can you do things better? We are building this SDM that is essentially a data back end," said Labourey.

Bringing all the data together will make it possible to unlock value for the business, seeing where the bottlenecks are and understanding why you are not getting the outcomes you are looking for, Labourey explained. In addition, Labourey said the SDM platform is not just a dashboard for viewing everything in one place; it also helps connect common processes and data within the software delivery life cycle. Features include a product hub, policy engine, efficiency dashboard, contributions dashboard, real-time value stream management and integrated feature flag management. z
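CloudBees did not detail the SDM data model, but the idea of a unified system of record can be sketched: normalize events from different tools into one schema, then ask cross-tool questions. Everything below (tool names, event kinds, timestamps) is hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Event:
        tool: str       # "git", "jenkins", "jira" -- illustrative names only
        change_id: str  # the artifact the event concerns
        kind: str       # "commit", "build", "deploy", ...
        at: datetime

    events = [
        Event("git",     "PR-101", "commit", datetime(2019, 9, 2, 9, 0)),
        Event("jenkins", "PR-101", "build",  datetime(2019, 9, 2, 9, 20)),
        Event("jenkins", "PR-101", "deploy", datetime(2019, 9, 4, 16, 0)),
    ]

    def lead_time_hours(change_id):
        """Hours from first commit to deploy for one change."""
        times = {e.kind: e.at for e in events if e.change_id == change_id}
        return (times["deploy"] - times["commit"]).total_seconds() / 3600

    print(f"PR-101: {lead_time_hours('PR-101'):.0f} hours from commit to deploy")

Once every tool's events land in one store like this, "where is the bottleneck" becomes a query rather than a guess.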

XebiaLabs DevOps Platform 9.0 comes with new push-button audit reporting
BY JAKUB LEWKOWICZ

XebiaLabs has updated its DevOps Platform to provide compliance and visibility across the entire software delivery pipeline. Version 9.0 of the platform comes with a new release audit report that covers the entire release cycle. According to the company, it enables users to see "what happened, when it happened, where it happened, and who made it happen."

"Until now, collecting and analyzing that proof and providing it in a format auditors can use has been almost impossible," said Derek Langone, the CEO of XebiaLabs. According to the company, auditing requires constant exchanges between security, compliance, development and DevOps teams.

Other features of the audit report include the ability to visualize and monitor the software chain of custody, verify security and compliance, drill down into the chain of custody for any release and any task, understand risks, and easily identify bottlenecks. The DevOps platform has also been updated to improve configuration management, integrate with secrets management solutions, automatically start releases based on events, and add new integrations with Compuware Topaz for Total Test and the Delphix Dynamic Data Platform. z
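Conceptually, a chain-of-custody report rests on an append-only audit trail capturing the four facts XebiaLabs lists. A toy illustration (our own, not the XebiaLabs format):

    import json
    from datetime import datetime, timezone

    AUDIT_LOG = "release_audit.jsonl"  # append-only: entries are never rewritten

    def record(what, where, who):
        entry = {
            "what": what,    # e.g. "approved release 9.0"
            "when": datetime.now(timezone.utc).isoformat(),
            "where": where,  # e.g. "production"
            "who": who,      # e.g. "release-bot"
        }
        with open(AUDIT_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")

    record("approved release 9.0", "production gate", "security-team")
    record("deployed release 9.0", "production", "release-bot")

Because entries are only ever appended, the log itself becomes the proof auditors ask for.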

In other DevOps news…

• CircleCI announced that it is releasing orbs that allow users to easily add integrations with tools and services that address security best practices for CI/CD. According to the company, orbs are reusable, shareable open-source packages of CircleCI configuration that enable the integration of services across three important categories of security for CI/CD: securing the pipeline configuration, securing code and Git history analysis, and enforcing security policy.

• The DevOps Institute announced its Ambassador Program, a volunteer-based program that connects industry leaders with DevOps Institute community members through the SKIL Framework. Ambassadors can offer contributed content, participate in forums and online groups, organize local community events known as SKILups, and create other avenues to engage learning pathways within the community.

• SmartBear revealed TestEngine, a new solution designed to automate test execution in CI/CD environments. According to the company, users can now execute ReadyAPI, SoapUI Pro and SoapUI Open Source tests simultaneously on a central source that's integrated into their development processes. This tackles challenges that Agile and DevOps teams face, such as complex deployments, large regression suites, and global development teams. "Coordinating and managing test execution and reporting are a hassle for Agile and DevOps teams. They're hampered by complex deployments, large regression suites, and global development teams. It's hard to efficiently run tests, not to mention effectively manage all of the organization's growing testing needs. TestEngine fixes all of this, and it's a package that can empower the most efficient software teams, even those distributed across the globe," said Gail Shlansky, director of products at SmartBear. z



LAS VEGAS

DON’T START YOUR DEVOPS JOURNEY ALONE.

REGISTER NOW itrevolution.com/sdtimes



BY NATE BERENT-SPILLSON

People sometimes describe DevOps as a factory. It's a good analogy. Like a factory, code goes in one end of the DevOps line. Finished software comes out the other.

I'd take the idea one step further. In its highest form, DevOps is not just any factory, but a 'lights-out' factory. Also called a "dark factory," a lights-out factory is one so automated it can perform most tasks in the dark, needing only a small team of supervisors to keep an eye on things in the control room. That's the level of automation DevOps should strive for.

In a lights-out DevOps factory, submitted code is automatically reviewed for adherence to coding standards, static analysis, security vulnerabilities and automated test coverage. After making it through the first pass, the code gets put through its paces with automated integration, performance, load and end-to-end tests. Only then, after completing all those tests, is it ready for deployment to an approved environment.

As for those environments, the lights-out DevOps factory automatically sets them up, provisions them, deploys to them and tears them down as needed. All software configuration, secrets, certificates, networks and so forth spring into being at deploy time, requiring no manual fidgeting with the settings. Application health is monitored down to a fine-grained level, and the actual production runtime performance is visible through intuitive dashboards and queryable operator consoles (the DevOps version of the factory control room). When needed, the system can self-heal as issues are detected.

This might sound like something out of science fiction, but it's as real as an actual, full-fledged lights-out factory. Which is to say, "real, but rare." Many automated factories approach lights-out status, but few go all the way. The same could be said of DevOps. The good news is that you can design a basic factory line that delivers most of the benefits of a "lights-out" operation and isn't too hard to create. You'll get most of the ROI just by creating a DevOps dark factory between production and test.

Here is a checklist for putting together your own "almost lights-out" DevOps solution. Don't worry. None of these decisions are irreversible. You can always change your mind. It will just take some rework.

1. IaaS or PaaS or containers: I recommend PaaS or containers. I'm a big fan of PaaS because you get a nice price point and just the right amount of configurability, without the added complexity of full specification. Containers are a nice middle ground. The spend for a container cluster is still there, but if you're managing a large ecosystem, the orchestration capabilities of containers could become the deciding factor.

2. Public cloud or on-premises cloud: I recommend public cloud. Going back to our factory analogy, a hundred years ago factories generated their own power, but that meant they also had to own the power infrastructure and keep people on staff to manage it. Eventually centralized power production became the norm. Utility companies specialized in generating and distributing power, and companies went back to focusing on manufacturing. The same thing is happening with compute infrastructure and the cloud providers. The likes of Google, Amazon and Microsoft have taken the place of the power companies, having developed the specialized services and skills needed to run large data centers. I say let them own the problem while you pay for the service.

There are situations where a private cloud can make sense, but it's largely a function of organizational size. If you're already running a lot of large data centers, you may have enough core infrastructure and competency in place to make the shift to private cloud. If you decide to go that route, you absolutely must commit to a true DevOps approach. I've seen several organizations say they're doing "private cloud" when in reality they're doing business as usual and don't understand why they're not getting any of the temporal or financial benefits of DevOps. If you find yourself in this situation, do a quick value-stream analysis of your development process, compare it to a lights-out process, and you'll see nothing's changed from your old Ops model.

3. Durable storage for databases, queues, etc.: I recommend using a DB service from the cloud provider. Similar to the decision between IaaS and PaaS, I'd rather pay someone else to own the problem. Making any service resilient means having to worry about redundancy and disk management. With a database, queue, or messaging service, you'll need a durable store for the runtime service. Then, over time, you'll not only have to patch the service but take down and reattach the storage to the runtime system. This is largely a solved problem from a technological standpoint, but it's just more complexity to manage. Add in the need for service and storage redundancy and backup and disaster recovery, and the equation gets even more complex.

4. SQL vs. NoSQL: Many organizations are still relational database-centric, as they were in the '90s and '00s, with the RDBMS the center of the enterprise universe. Relational still has its place, but cloud-native storage options like table, document, and blob provide super-cheap, high-performance options. I've seen many organizations that basically applied their old standards to the cloud, and said, "Well, you can't use blob storage because it's not an approved technology," or "You can't use serverless because it's an 'unbounded' resource." That's the wrong way to do it. You need to re-examine your application strategy to use the best approach for the price point.

5. Mobile: Mobile builds are one of the things that can throw you for a loop. Android is easy; Mac is a little more complicated. You'll either need a physical Mac for builds, or if you go with Azure DevOps, you can have it run on a Microsoft Mac instance in Azure. Some organizations still haven't figured out that they need a Mac compute strategy. I once had a team so hamstrung by corporate policy, they were literally trying to figure out how to build a "hackintosh" because the business wanted to build an iOS app but corporate IT shot down buying any Macs. Once we informed them we couldn't legally develop on a "hackintosh," they killed the project instead of trying to convince IT to use Mac infrastructure. Yes, they abandoned a project with a real business case and positive ROI because IT was too rigid.

6. DB versioning: Use a tool like Liquibase or Flyway. Your process can only run as fast as your rate-limiting step, and if you're still versioning your database by hand, you'll never go faster than your DBAs can execute scripts. Besides, they have more important things to do.

7. Artifact management, security scanning, log aggregation, monitoring: Don't get hung up on this stuff. You can figure it out as you go. Get items in your backlog for each of these activities and have a more junior DevOps resource ripple each extension through to the process as it's developed.

8. Code promotion: Lay out your strategy to go from Dev to Test to Stage to Prod, and replace any manual setup like networking, certificates and gateways with automated scripts.

9. Secrets: Decide on a basic toolchain for secrets management, even if it's really basic. There's just no excuse for storing secrets with the source control. There are even tools like git-secret, black-box, and git-crypt that provide simple tooling and patterns for storing secrets encrypted.

10. CI: Set up and configure your CI tool, including a backup/restore process. When you get more sophisticated, you'll actually want to apply DevOps to your DevOps, but for now just make sure you can stand up your CI tool in a reasonable amount of time, repeatedly, with backup.

Now that you've made some initial technology decisions and established your baseline infrastructure, make sure you have at least one solid reference project. This is a project you keep evergreen and use to develop new extensions and capabilities to your pipelines. You should have an example for each type of application in your ecosystem. This is the project people should refer to when they want to know how to do something. As you evolve your pipelines, update this project with the latest and greatest features and steps.

For each type of deployment — database, API, front end and mobile — you'll want to start with a basic assembly line. The key elements to your line will be Build, Unit Testing, Reporting, and Artifact Creation. Once you have those, you'll need to design a process for deploying an artifact into an environment (i.e. deploying to Test, Stage, Prod) with its runtime configuration. From there, keep adding components to your factory. Choose projects in the order that gets you the most ROI, either by eliminating a constraint or reducing wait time. At each stage, try to make "everything as code." Always create both a deployment and a rollback, and exercise the heck out of them all the time. When it comes to tooling, there are more than enough good open-source options to get you started.

To sum up, going lights-out means committing to making everything code, automated, and tested. z

Nate Berent-Spillson is a technology principal at software services provider Nexient.
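To make the "first pass" of a lights-out line concrete, here is a minimal sketch of such a gate in Python. It is illustrative only: the check commands and the deploy/rollback scripts are hypothetical placeholders for whatever standards checkers, scanners and test suites your own line enforces.

    import subprocess

    # Hypothetical check commands -- substitute your own tools.
    # Each must exit non-zero on failure.
    CHECKS = [
        ["flake8", "src"],                            # coding standards / static analysis
        ["bandit", "-r", "src"],                      # security vulnerabilities
        ["pytest", "--cov", "--cov-fail-under=80"],   # automated test coverage gate
    ]

    def gate():
        """Run every check in the dark; any failure stops the line."""
        return all(subprocess.run(cmd).returncode == 0 for cmd in CHECKS)

    def ship():
        if not gate():
            raise SystemExit("Gate failed: the code stays off the factory floor.")
        deploy = subprocess.run(["./deploy.sh", "staging"])    # placeholder script
        if deploy.returncode != 0:
            subprocess.run(["./rollback.sh", "staging"])       # the rollback you exercise constantly
            raise SystemExit("Deploy failed: rolled back automatically.")

    if __name__ == "__main__":
        ship()

No human touches the happy path; people only get involved when the supervisors' dashboards say something the automation could not heal.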



Starting a DevOps initiative requires cultural and technology shifts


Redgate Software, which builds tools for developers and data professionals, is about to celebrate the 20th anniversary of its SQL Compare tool. "This product has evolved quite a lot over the last 20 years, but the technology is still at the heart of a lot of our products because a lot of what SQL Compare originally did was just help people compare databases," said Kendra Little, DevOps advocate at Redgate. "We want to help people develop in a simple, intuitive way, and know how to adapt this tooling with other tools as much as possible to create a solution that helps people avoid manual work and create quality code for databases."

GDPR has had a big impact on the solutions in the space. The company has created technology that helps people implement guardrails for their databases around sensitive data. "Guardrails have become more common in this area since GDPR started because we need to be able to foster innovation more than ever, but we can't just let the data be at risk," Little pointed out. "We have to protect it by design. In the software development lifecycle, we enable ways for people to use realistic data that has been masked so that you can do things like on demand, create environments, provision databases, but you're not just copying around risky data."

The mission of delivering a compliant database service for Redgate is guided by the philosophy to meet people where they work and support them throughout their DevOps journey. The company knows starting a DevOps initiative is tough, and there are a lot of cultural and technology changes that have to happen. Little explained, "We want to help people continue to use familiar tools that they like, and we want our solution to map into that. We also recognize that the way they work is going to evolve over their journey."

The cultural changes that have to happen for people to start DevOps are significant. Developers can be resistant to DevOps, even though it's a very developer-focused discipline. The database area is particularly siloed. It's common for database specialists to be in a gatekeeper position for production, and for developers to try to throw changes over the wall. The cultural changes in DevOps require shifting this relationship dramatically and finding ways to bring these specialists into the process early in the development lifecycle. Little said, "A lot of what we help folks with is to identify the places in their process where they can bring these people in, and what's the most effective way to make the best use of everyone's time. You can't have your specialist just attend six or seven meetings every day. That doesn't work."

Redgate's Compliant Database DevOps stands out from its competitors because it does not require developers to learn a new development environment. It creates extensions that hook into environments and tools that its customers already use. The automation components enable people to work with scripting, and it provides graphical extensions for tools like Octopus Deploy or Azure DevOps so developers can use their orchestration functionality. "One of the latest things we're working on, that's in our early access program right now, is a way that developers who prefer to use Microsoft Visual Studio can work in Visual Studio and collaborate on the same project with database administrators who are using SQL Server Management Studio," added Little.

Developers tend to be quite nervous about changes to a database, and one of the philosophies of DevOps is that if something causes you pain, you should do it more so you get used to it. Little said that the guardrails in Compliant Database DevOps reduce risk and allow developers to keep all of the benefits of the practices they've been using to produce quality code for applications for years, but now they have the ability to do that for databases as well.

Little attended a Gartner architecture conference and found that people were surprised that you can do DevOps to the database. She believes this is an area where Agile has been taking over. The problem she sees is that when people implement Agile they tend to implement the lowest-hanging fruit first, and at the application level only, without realizing there are other areas where they can implement it. "So essentially the central problem that people hit even after doing an Agile initiative is they end up having what we call a two-speed culture, where they can deploy these application changes really fast but as soon as they have to touch the database, everything slows down. When you explain you can apply Agile methodologies to the database, and you can do DevOps to the database, people are amazed."

Tough problems can now be solved using Agile with DevOps. There's been such a fixed mindset about this that even explaining to people that, "Yes, you really can do this successfully!" is absolutely a mind-blowing, mind-opening thing. z
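The masking idea Little describes, realistic but non-sensitive development data, can be illustrated in a few lines. This sketch is conceptual, not Redgate's SQL Provision; the column names and formats are made up.

    import hashlib

    SENSITIVE = {"email", "ssn"}  # hypothetical column names

    def mask_value(column, value):
        # Deterministic masking preserves referential integrity: the same
        # production value always maps to the same fake value in every table.
        digest = hashlib.sha256(value.encode()).hexdigest()[:8]
        if column == "email":
            return f"user_{digest}@example.com"
        return f"XXX-{digest[:4]}"

    def mask_row(row):
        return {c: mask_value(c, v) if c in SENSITIVE else v
                for c, v in row.items()}

    prod_row = {"name": "Ada", "email": "ada@corp.com", "ssn": "123-45-6789"}
    print(mask_row(prod_row))  # realistic-looking, but not the real data

A masked copy like this can be provisioned to every development environment without copying around risky data.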



Compliant Database DevOps Deliver value quicker while keeping your data safe

Redgate's Compliant Database DevOps solution gives you an end-to-end framework for extending DevOps to your database while complying with regulations and protecting your data.

Find out more at www.red-gate.com/DevOps



CI/CD is just the beginning.

DevOps accelerates time from code commit to deploy. But what about everything that happens before and after? Connect your DevOps tools and teams to all work upstream. Automate the flow of work from ideation to operation and back. Visualize your end-to-end workflow to spot bottlenecks. Accelerate the value delivery of the awesome products you build. No more duplicate entry, no more waste — just value.

Coming 2020: Finally, a business metrics tool for software delivery. Tasktop.com



Tasktop Illuminates the Value Stream

Software delivery teams are delivering software faster with the help of Agile and DevOps, but are they delivering value faster? Although release frequency is increasing, most teams lack the visibility across the product lifecycle — and the associated toolchain — to help identify the obstacles to value stream flow.

"Value stream management is not just about delivery; it's also about protecting business value," said Carmen DeArdo, senior VSM strategist at Tasktop. "Enterprises need to have a 'True North,' and challenge how they can work more closely with the business so they can be more responsive to the market and disruption. We help customers navigate that, which also means moving from a project to a product model to change the perception of IT as a cost center, rather than a revenue driver."

Understanding the Value Stream

Flow is essential to value stream optimization and management. To understand value stream flow, the teams that plan, build and deliver software need a single source of truth into the flow of events, from the earliest stages of product ideation through production — including customer feedback. While a product life cycle seems like a continuous flow conceptually, Tasktop reveals the otherwise hidden wait-state points that interfere with value delivery.

"High-performing companies are focused on products because products are sustainable. Projects come and go," said DeArdo. "A product model prioritizes work, ensuring you have the right distribution (across all work), as well as paying close attention to technical debt, which is critical. Debt is what causes companies to go under, hurting their ability to compete with disruptive forces in the marketplace."

Value streams begin and end with customers, but what happens in between is often a mystery. Although specialist teams have insight into artifacts within the context of a particular tool, they tend to lack visibility across the artifacts in other tools in the value stream to understand where the obstacles to value delivery reside.

"Enterprises spend a lot of time assuming they're improving the way we work, but how do we really know?" said DeArdo. "It's hard to determine how long it takes for a feature or defect to go through the value stream if you lack the data to quantify it. Quite often, I talk to teams that are focused on delivery speed, but they tend not to think about the toolchain as a product that is intentionally architected for speed of delivery."

Tasktop enables teams to measure work artifacts in real time across 58 of the most popular tools that plan, build and deliver software. They're also able to visualize value stream flow to pinpoint bottlenecks. "It's about making work visible," said DeArdo. "It's not just the Scrum team's board, it's how you manage the entire value stream — features, defects, risks, debt. When you understand the value stream, you're in a better position to prioritize work and optimize its flow."

Optimize Value Stream Flow

Understanding value stream flow is crucial. Start small, with a few teams that might represent a slice of some given product and that have a supportive IT leader and a business leader. Then, observe the flow, considering the following five elements (illustrated in the sketch after this article):
• Flow load — the number of flow items (features, risks, defects, debt) in progress
• Flow time — the time elapsed from when a flow item is created to delivery
• Flow velocity — the number of flow items completed in a given time
• Flow distribution — the allocation of completed flow items across all four flow item types
• Flow efficiency — the proportion of time flow items are actively worked, compared to the total time elapsed

"If you ask a developer or operations what's slowing them down, they will typically say they are waiting — for work to flow, for infrastructure, for approval," said DeArdo. "Those holdups are what you want to discuss and get metrics around. Pick an experiment, run it, learn from it and then use what you learn as a model. That model can be used to scale and sustain what you're doing across the organization."

Tasktop helps enterprises extract technical data from the toolchains that underpin product value streams and translates it into Flow Metrics, presenting them in a common language and context that the business can understand. This view includes the value delivered, cost, quality and the team's happiness, since a lack of happiness is an indicator that something is amiss — e.g., too much debt that's impeding a team's ability to work. This helps the teams doing the work to better understand the value stream flow and the business outcomes they are aiming to achieve. Conversations can center around these metrics to support strategic investment decisions in IT.

Learn more at www.tasktop.com. z
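As a rough illustration of the five flow measures above (the sample data, field names and simplified math are hypothetical, not Tasktop's Flow Metrics implementation):

    from datetime import date

    items = [  # hypothetical flow items for one period
        {"type": "feature", "created": date(2019, 8, 1), "done": date(2019, 8, 15), "active_days": 4},
        {"type": "defect",  "created": date(2019, 8, 5), "done": date(2019, 8, 9),  "active_days": 2},
        {"type": "debt",    "created": date(2019, 8, 7), "done": None,              "active_days": 1},
    ]
    done = [i for i in items if i["done"]]

    flow_load = sum(1 for i in items if i["done"] is None)          # still in progress
    flow_times = [(i["done"] - i["created"]).days for i in done]    # created -> delivered
    flow_velocity = len(done)                                       # completed this period
    flow_distribution = {t: sum(1 for i in done if i["type"] == t)
                         for t in ("feature", "defect", "risk", "debt")}
    flow_efficiency = sum(i["active_days"] for i in done) / sum(flow_times)

    print(flow_load, flow_times, flow_velocity, flow_distribution)
    print(f"flow efficiency: {flow_efficiency:.0%}")  # active vs. elapsed time

Even on this toy data, flow efficiency comes out around 33 percent, which makes DeArdo's point: most of a flow item's life is spent waiting, not being worked.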



Your DevOps Initiatives Are Failing: Here’s How to Win

Everyone wants to do DevOps, but not everyone understands how to do it. While most organizations understand the benefits successful DevOps brings to the business, they don't understand how to get there. Harvard Business Review recently found that 86 percent of respondents want to build and deploy software more quickly, but only 10 percent are successful at it. Additionally, Gartner predicted that by 2023, 90 percent of DevOps initiatives will fail to meet expectations. So despite the desire to do DevOps, there is a disconnect between wanting to do it and actually doing it. Businesses that don't want to become one of these statistics need to understand why they are not meeting expectations and how to address the problem.

Change is hard

According to Stephen Feloney, head of products for continuous testing at Broadcom, it all comes down to the culture, skills and tools. When businesses say they want to do DevOps, they mean they want to release applications faster with higher quality. "When we say these DevOps initiatives fail, they fail because they are failing to meet the needs of the customers," he said. "They are failing to meet the goal of giving their customers, their users, what they want in a timely fashion. They might provide the feature customers want, but the quality is poor, or they take too long and the feature isn't differentiating.

"DevOps is a big culture change, and if everyone is not on the same page, things can get lost in translation. It is not that people don't want to change; they don't know how to change," Feloney explained. "So even if a DevOps initiative is being mandated from the top, businesses need to provide the necessary resources and training to deploy it."

Different teams use different tools

In addition to culture and skills, businesses have to take into account that, historically, developers and testers use different tools. According to Feloney, you don't want to make it harder on teams by forcing them to use tools that don't suit the way they work. Agile testers are going to lean toward tools that work "as code" in the IDE and are compatible with open source. Traditional testers tend to prefer tools with a UI. The key is to let teams work the way they want to work, but ideally with a single platform that allows teams to collaborate; share aspects like tests, virtual services, analytics, and reports; and ultimately break down the silos so your company no longer has Dev + Ops, but truly has DevOps.

Broadcom recently announced the BlazeMeter Continuous Testing Platform, a shift-left testing solution that handles GUI functional, performance, unit testing, 360° API testing, and mock services. It delivers everything you need in a single UI, so organizations can address problems much faster. With support for mock services, developers can write their own virtual, or mock, services and deploy a number of tests to a service without having to change the application. The solution will store those requests and responses so a tester can go in, see what happened, and enhance it. BlazeMeter CT also supports popular open-source tools developers are accustomed to using, such as Apache JMeter, Selenium and Grinder.

"You are learning from what the developers have done as opposed to reinventing the wheel," said Feloney. "BlazeMeter CT enables development and Agile teams to get the testing they need to get done much easier and much faster while allowing the collaboration that DevOps requires."

There is no insight into what is happening

As mentioned earlier, DevOps activities often fail because they take too much time. If teams are unable to find where the bottlenecks are, they are unable to come up with a solution. Having insight into what is happening and how it affects the overall success of the application and business is crucial, according to Uri Scheiner, senior product line manager for Automic Continuous Delivery Director. To this end, Automic Continuous Delivery Director provides a real-time workflow for monitoring and managing features and fixes throughout your entire pipeline. Teams have full visibility into app progress, can easily manage multi-app dependencies, map development efforts to business requirements, and more. Automic Continuous Delivery Director is more than release planning. It's end-to-end orchestration and pipeline optimization that empowers a culture of shared ownership and helps you deliver higher-quality applications faster.

Businesses are impatient

"You can't go into DevOps saying it is all or nothing," Feloney explained. DevOps happens in baby steps and with the understanding that it is going to take time to learn and build upon that learning.

"A lot of companies are impatient," Feloney said. "You have to do this in bite sizes and find a team that is willing to do it, work through the problems, know you are going to fail and work on those failures. Once you get that success, and it will be a success if you have the right mentality, then you can share that — make that team and that project the example — and grow that success across your company."

Learn more at www.blazemeter.com and www.cddirector.io. z
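Feloney's description of mock services, canned request/response pairs stored for later inspection, can be sketched in a few lines. The following is an illustrative stand-in of our own, not BlazeMeter's implementation; the endpoint and payload are hypothetical.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = {"/api/price": {"sku": "123", "price": 9.99}}  # hypothetical endpoint
    RECORDED = []  # request/response pairs a tester can inspect and enhance

    class MockService(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CANNED.get(self.path, {"error": "not virtualized"})
            RECORDED.append({"request": self.path, "response": body})
            self.send_response(200 if self.path in CANNED else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(body).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), MockService).serve_forever()

Tests point at the stub instead of the real dependency, and the recorded traffic shows testers exactly what happened so they can refine the virtual service over time.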



The Most Important Tool in DevOps: Value Stream Mapping


"We want to change the conversation about tooling in DevOps. Everyone is 'doing' DevOps, but only a handful are getting the value they expected. Why? They're using the wrong tools or applying tools in the wrong ways. The solution is to apply the right tools at the right times, and for the right reasons," said Marc Rix, SAFe Fellow and Curriculum Product Manager at Scaled Agile.

To help organizations achieve the best customer-centric results, Scaled Agile now urges teams to start out with Value Stream mapping in order to get both the business and tech sides fully involved in DevOps. While some engineering teams are achieving technically impressive results doing "pieces" of DevOps, DevOps is about much more than just "Dev" and "Ops," according to Rix, who joined Scaled Agile in January 2019 after five years of leading large-scale Agile and DevOps transformations.

The Scaled Agile Framework (SAFe) is a knowledge base of proven, integrated principles and practices for Lean, Agile, and DevOps, aimed at helping organizations deliver high-quality value to customers in the shortest sustainable lead time. In addition to providing SAFe free of charge, Scaled Agile offers guidance services for implementation and licenses its courseware directly and through third-party partners for coaching customers in SAFe. The SAFe DevOps course also prepares participants for the SAFe DevOps Practitioner (SDP) certification exam.

"DevOps technology is really cool. But it's not for winning science-fair prizes," quipped Rix. "DevOps is for solving real business problems. It's our mission to help everyone investing in DevOps to achieve the culture of continuous delivery they're looking for so they can win in their markets."

Scaled Agile frequently advises customers licensing its courseware to kick things off by using the Value Stream mapping within SAFe DevOps, focusing on the following three learning objectives.

“How you get work into the deployment pipeline is equally as important as how you move work through the pipeline,” said the Scaled Agile product manager.

Mindset over practices

As Gene Kim and his co-authors pointed out in The DevOps Handbook, "In DevOps, we define the value stream as the process required to turn a business hypothesis into a technology-enabled service that provides value to the customer." Business value is the ultimate goal of DevOps, and value begins and ends with the customer. DevOps needs to optimize the entire system, not just parts of it. Flow should be Lean across the entire organization, and Value Stream mapping is a Lean tool, said Rix. "DevOps is the result of applying Lean principles to the technology value stream," attested The DevOps Handbook.

Everyone is essential

"If someone touches the product or influences product delivery in any way, they are involved," according to Rix. Participation in DevOps shouldn't be offered on an opt-in/opt-out basis, he added. DevOps must involve both IT leaders and business leaders such as corporate executives, line managers, and department heads, Rix said. Non-technical participants should also include product managers, product owners, program managers, analysts, and Scrum Masters, for example. Technical folks should include testers, architects, and info-security specialists, along with developers and operations engineers. "An IT team could be deploying a hundred times per day, but if their work intake is not connected to the business, the results will not materialize," observed Mik Kersten in the book Project to Product.

Plan the work, work the plan (together)

By embracing DevOps, organization-wide teams need to face the realities of the current system. Teams should avoid simply "automating for automation's sake," or automating a broken system. By mapping and baselining the current system, team members can "think outside the box" and discover the true bottlenecks. Then, they can work together on designing the target-state Value Stream, re-engineering the current system based on business needs, and quantifying the expected benefits. "DevOps then evolves incrementally and systematically, with everyone committed, participating, and learning as one team," Rix maintained.

Applying Value Stream mapping

The Value Stream mapping exercises in SAFe DevOps facilitate all three learning objectives. Initially, they should be applied to fully understand the current situation from the customer point of view, align on the problem across all roles in the organization, and identify the right solutions and metrics, Rix said. In the SAFe DevOps experiential class, attendees from throughout the organization use Value Stream mapping to visualize their end-to-end delivery process, pinpoint systemic bottlenecks, and build an action plan around the top three improvement items that will provide the best results in their environment.

Learn more at www.scaledagile.com/devops/ z
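As a back-of-the-envelope illustration of what a current-state map can quantify (the stage names and numbers below are hypothetical, not SAFe artifacts):

    # Hypothetical current-state map: hands-on (process) time vs. wait time,
    # in hours. The biggest wait, not the biggest task, is the bottleneck.
    steps = [
        {"step": "analysis",    "process": 8,  "wait": 40},
        {"step": "development", "process": 24, "wait": 16},
        {"step": "testing",     "process": 8,  "wait": 80},
        {"step": "deployment",  "process": 2,  "wait": 24},
    ]

    touch = sum(s["process"] for s in steps)
    lead = sum(s["process"] + s["wait"] for s in steps)
    bottleneck = max(steps, key=lambda s: s["wait"])

    print(f"lead time {lead}h, touch time {touch}h, "
          f"flow efficiency {touch / lead:.0%}")
    print(f"attack first: {bottleneck['step']} ({bottleneck['wait']}h of waiting)")

Numbers like these are what move the conversation from "automate everything" to "fix the longest wait first," which is the point of baselining before re-engineering.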



Is Your Value Stream Optimized?

Achieve customer centricity and fast flow with SAFe® DevOps

Mindset over practices: Business value is the ultimate goal, and value starts and ends with the customer.

Everyone is essential: Technical, non-technical, and leadership roles come together to optimize the end-to-end Value Stream.

Relentlessly improve: Understand your real workflow and bottlenecks. Then design your target-state Value Streams.

Learn more at scaledagile.com/devops

© Scaled Agile, Inc.



Avoiding The Hidden Costs of Continuous Integration

In 2018, 38 percent of infrastructure decision-makers that implemented DevOps and automated their continuous deployment and release automation efforts saw revenue growth of 10 percent or more from the prior year. In contrast, only 25 percent of those that had not adopted DevOps reported comparable growth, according to Forrester Research, Inc.

Continuous integration and continuous delivery provide teams with faster feedback, higher confidence in code, and the agility that can give them the competitive advantage to win. The overlooked reality is that implementing and managing a CI/CD platform for any reasonably sized organization can tally up to huge expenses in terms of training, operations, and rollout. These costs exist whether you're leveraging a SaaS or running it yourself.

There are several key factors to CI/CD implementation and management that will keep costs in check and help teams optimize their software delivery, dramatically increasing the value they get out of CI. There are also some key strategies that will shorten the time to value for teams embarking on new or updated implementations of CI/CD pipelines. Here are the hidden costs you need to watch out for, some tradeoffs worth thinking about, and how to mitigate common pitfalls to optimize your CI/CD expenditure.

Reduce people spend

DevOps teams are expensive, and often include some of the most knowledgeable people in a company when it comes to the intricacies of your company's software and infrastructure. Despite this wealth of knowledge, many of these specialists spend their days configuring delivery pipelines for other developers on the team to use. "When we're talking about people spend, that's obviously a very sensitive subject, and so I think the point here is really that it's lovely to reallocate your people spend to the areas where it can have the most impact," says Edward Webb, director of solutions engineering at CircleCI. Find a CI/CD vendor that can abstract common operational concerns, so that these folks can be freed to work on higher-leverage projects. Although your team might only have a few designated DevOps engineers, consider the opportunity costs of what they could be working on instead.

Reduce infrastructure spend

According to Webb, "Servers individually don't cost a ton of money, but when you look at them in aggregate, having a large number of servers that run day in and day out over the course of days, months, and years, those actually add up to be pretty substantial." If you have teams writing in multiple languages, or different versions of the same language, a common practice is to maintain separate servers running each language or language version. The result is that maintaining a large fleet of heterogeneous CI agents can be prohibitively expensive from a pure infrastructure cost perspective.

Instead, consider running jobs within isolated containers, leveraging commodity compute power. This gives teams the ability to define exactly what languages and frameworks they need, as they need them, without carrying the overhead of pre-provisioning the maximum number and variety of machines. On top of the effort saved from maintaining a huge fleet of designated servers, you'll also be able to run more work on fewer servers total, saving infrastructure costs. (A back-of-the-envelope comparison follows this article.)

Leverage SaaS intelligently

People talk about SaaS offerings and working in the cloud as being much more cost-effective, but that isn't automatically the case. Webb points out, "Teams don't end up seeing cost savings when they try to follow this paradigm of lift and shift. You can't continue doing things the same way you've always done them, in the cloud, and save money. That means, in order to achieve the cost savings, you need to find ways to reinvest the people." Organizations transitioning to the cloud should seize the opportunity to take a close look at their pipeline for wholesale opportunities for improvement, rather than just migrating a legacy setup to new infrastructure.

Increase agility and speed

By working with systems that shift the configuration tasks and provisioning away from specialized DevOps engineers to the developers working closest to the code, you'll save infrastructure cost and optimize your people spend. This shift can be a huge cost savings, but it means that less specialized team members are taking on responsibility for operations. It will require a CI/CD platform that is intuitive to your application teams, and that limits the amount of product- or domain-specific knowledge they must absorb to get started. Finding a platform that leverages a simple configuration format like YAML or HCL over a custom DSL can reduce this burden. Bonus points if you find a provider that lets teams replicate best practices and patterns through shareable configuration.

Once teams have the autonomy to update pipelines on their own, you will start to see a multiplier effect of increased agility, iteration, and responsiveness across the entire organization. You will have removed the bottlenecks to experimentation and making change, which means the team will learn faster in service of gaining a crucial competitive advantage.

While getting started with CI/CD can seem daunting, the benefit of implementing it is worth the effort. Ultimately, the biggest cost is the cost of not taking action; not doing anything at all. z
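Webb's aggregate-cost point is easy to put numbers on. The figures below are hypothetical, chosen only to show the shape of the arithmetic.

    # All numbers are hypothetical, chosen only to illustrate the arithmetic.
    LANG_VERSIONS = 12        # e.g. 3 languages x 4 supported versions each
    SERVER_MONTHLY = 150.0    # cost of one always-on build server, per month

    dedicated_fleet = LANG_VERSIONS * SERVER_MONTHLY      # one server per stack

    PEAK_CONCURRENT_JOBS = 5  # containers sized to real demand, not to variety
    shared_pool = PEAK_CONCURRENT_JOBS * SERVER_MONTHLY

    print(f"dedicated fleet: ${dedicated_fleet:,.0f}/month")
    print(f"shared container pool: ${shared_pool:,.0f}/month")
    print(f"difference: ${dedicated_fleet - shared_pool:,.0f}/month")

The per-server number is small either way; it is the multiplication by language variety, running every hour of every month, that makes the dedicated fleet expensive.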




Bringing Rich Communication Experiences Where They Mattermost


One of the basic premises of DevOps is to break down barriers between development and IT teams that historically functioned in relative silos. Collaboration is the core tenet that makes DevOps work. As teams start to collaborate in real time, the idea is they will see fewer errors and more opportunities for innovation.

Getting teams to work together is entirely different from getting teams to simply share status updates with each other. Corey Hulen, CTO and co-founder of the open source messaging and collaboration platform Mattermost, explained that developer and operations teams have silos of information where they are looking at different metrics, reports and log files, and sometimes monitoring completely different things. In order to really gain the true value of DevOps, they need a common space where they can not only easily connect with each other (chat, video and so on), but also share files, systems and workflows.

Mattermost provides a central communications hub where everyone in an organization can come together, share updates and critical messages, work together to resolve incidents and outages, integrate DevOps tools and create a single shared view of 'all the things.' Its notifications hub keeps everyone updated and on the same page, and social coding features enable teams to collaborate on code snippets. Integrations with popular tools like Jenkins, Jira, GitHub, Bitbucket, GitLab and PagerDuty allow teams to see all of the real-time notifications from across the development lifecycle — all without having to log in to each of the systems. The information gets placed in private or shared channels (depending on a team's security needs) where developers, operations and even non-technical team members can participate in the conversation and work together — everyone having the same access to information.

Sometimes, what ends up happening is there will be a team collaborating on a performance issue or system outage in the "war room channel," and someone unexpected is listening in who has a solution to the problem, Hulen explained. "I always describe it as a cooperative board game. You are all cooperatively trying to solve this problem, and every person brings a unique piece to the puzzle to solve it," he said. "This really can only happen when everyone can see all of the conversations, files and data that are relevant to the issue."

Making sure the conversation isn't too loud

Starting out, the ability to freely communicate is a very valuable experience, but as the conversations continue to build up it can start to feel like information overload. Mattermost enables bot integrations so teams can work better and faster. Some of the bots include the ability to monitor and debug clusters, receive best practices, respond to messages, and send notifications. Webhooks can be added to post messages to public, private and direct channels (a minimal webhook example follows this article). Additionally, organizations can remove some of the cruft by removing certain bots and webhooks that they find aren't providing valuable information over time, or by moving them to another channel so they don't block productivity. Teams can also use multiple channels, putting all the webhook and bot information in one channel and keeping a secondary channel to interact with all that information.

"Mattermost has taken an approach where we've built a rock-solid platform and we give developers the API and many integration options so they can extend the platform to fit their needs. We've found developers, in particular, really appreciate the ability to customize their collaboration tools," Hulen said.

Staying connected

Remote work is another reason why a central communication hub is essential to a business. According to Hulen, Mattermost itself is a remote-first company, and having the messaging platform enables employees to stay connected and feel part of the team. Video conferencing plugins for Zoom, Skype, BigBlueButton and other popular services enable teams to have face-to-face meetings. Mattermost also supports voice and screen sharing capabilities to expand remote teams' ability to work together. In addition, channel-based or topic-based communication features become an essential communication tool.

"When you are in a remote environment, those types of channel-based systems really take off and lend to that experience. You may have really needy channels where people are doing real work or monitoring outages, but you can also have some fun social channels," Hulen said. "Minor features" like emoji reactions or Giphy integration enable team members to convey emotion and have some fun with their posts. "Those things really make remote culture thrive," said Hulen. "It is about keeping that human connection beyond work."

Learn more at https://mattermost.com/ z
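As a small example of the webhook integration described above, posting to a Mattermost incoming webhook is a single JSON POST. The URL and channel below are placeholders for values generated by your own Mattermost instance.

    import requests  # pip install requests

    # Placeholder URL: each incoming webhook gets its own generated path.
    WEBHOOK_URL = "https://mattermost.example.com/hooks/your-webhook-id"

    def notify(text, channel="build-status"):
        """Post a message into a channel via an incoming webhook."""
        resp = requests.post(WEBHOOK_URL,
                             json={"channel": channel, "text": text},
                             timeout=10)
        resp.raise_for_status()

    notify(":red_circle: api-service build #214 failed on master -- "
           "logs: https://ci.example.com/builds/214")

Pointing a CI job at a hook like this is how build, deploy and alert traffic ends up in the shared channels everyone can see.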





Automated APM for DevOps. Turbocharge your CI/CD pipeline.

Try it for yourself: free trial at www.instana.com/sdtimes



Instana Monitoring at DevOps Speed

There is no one-size-fits-all approach when it comes to successfully implementing DevOps, but there are some concrete methods you need in place to help get you there. The "2019 Accelerate State of DevOps" report found that efforts like automation and monitoring can help organizations improve the speed of software delivery and create value. "For example, teams can monitor their own code, but will not see full benefits if both application and infrastructure are not monitored and used to make decisions," the report stated.

Kevin Crawley, DevOps evangelist for the APM company Instana, added that deployment frequency, lead time for changes, time to restore services, and change fail rate are leading indicators of how mature a given company's software development life cycle is. Crawley explained that successful CI/CD or DevOps cannot happen without monitoring and observability in place. Without them, rapid introduction of performance problems and errors, new endpoints causing monitoring issues, and lengthy root cause analysis will occur as the number of services expands.

"In order to successfully have a true continuous integration pipeline where you are continuously deploying changes, you have to have solid monitoring around those systems so you can understand when you do push a change that does break something, you have immediate feedback," he said in a recent webinar.

The problem is that most traditional monitoring tools require manual effort for many tasks, such as:
• writing and configuring data collectors
• instrumenting code for tracing (see the sketch after this article)
• discovering dependencies
• deciding how to create data
• building dashboards to visualize correlation
• configuring alerting rules and thresholds
• building data collection to store metrics

All of this can be quite a bit of work and very time-consuming, Crawley explained. The loop of CI/CD and DevOps is a never-ending delivery process, and any manual steps will slow teams down. "Without monitoring, the SRE or DevOps teams really have no visibility into operations and how the application is performing in production," he said. "In this world where we talk about continuous integration and continuous deployments, manual steps really prevent the velocity and the speed that your organization needs to get software out the door."

When evaluating how an organization's monitoring solutions are working, some of the questions to ask are: how many services are they capable of monitoring, and what are they collecting from that monitoring? "There are a lot of questions we can ask here that will give you an idea of how much value Instana can bring to your operation teams," he added. "How much effort are the engineers spending to build this visibility, and if you are using an APM tool, what are you using and does it help you automate some of these steps?"

Crawley went on to say, "If you don't have automation in your monitoring, you likely won't have good visibility, and therefore you won't have the velocity needed to get new services and new changes confidently out the door. What this ends up resulting in is unhappy customers and loss of revenue. Without having an automated monitoring solution, you are left only with limited visibility and turtle-pace speed."

When looking for a tool to automate as much as possible, organizations should look for automated monitoring solutions that provide zero or minimal configuration for the automatic discovery of infrastructure and software components, automatic instrumentation and tracing of every component, pre-existing alerts, and high-resolution metrics and analytics.

"With our solution, all you will need to do is install a single agent per virtual host and Instana will continually discover every technology. It will automatically collect the metrics and the traces for every app request, and will automatically map all the dependencies so that when an issue does occur we can correlate that issue back to a root cause or a service which initiated that issue," said Crawley. "At the end of the day what we determined is dynamic applications need automatic monitoring, and what that ultimately translates to is that we need the ability to automatically detect technology as it is deployed and/or scaled. We need to automatically capture time-series metrics and automatically capture distributed traces between your services, and then we also need to utilize machine learning to analyze all that data and give you actionable insights for your environment."

Instana can also help DevOps teams use AI techniques to identify and resolve performance issues, achieve zero-effort monitoring for every service's health and availability, and accelerate delivery through automatic observability and analysis. "We don't want to slow you down; we want to let the Instana robot do all the work," said Crawley.

Learn more at www.instana.com. z
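To see what "instrumenting code for tracing" involves when done by hand (the manual work an automatic agent is meant to eliminate), consider this minimal, illustrative span-emitting decorator. It is a sketch of the concept, not Instana's SDK.

    import functools
    import time
    import uuid

    def traced(name):
        """Wrap a function so every call emits a timed span."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                span = {"span_id": uuid.uuid4().hex[:8], "name": name}
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    span["duration_ms"] = (time.perf_counter() - start) * 1000
                    print(span)  # a real tracer ships spans to a backend instead
            return inner
        return wrap

    @traced("checkout.total")
    def total(prices):
        return sum(prices)

    total([9.99, 4.50])

Multiply that wrapping across every function, service and framework in a system and the appeal of automatic instrumentation becomes obvious.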



FEATURED COMPANIES

n Broadcom: With an integrated portfolio spanning the complete DevOps toolchain from planning to performance, Broadcom delivers the tools and expertise to help companies achieve DevOps success on platforms from mobile to mainframe. We are driving innovation with the BlazeMeter Continuous Testing Platform, Intelligent Pipeline from Automic, Mainframe DevOps with Zowe, and more.

n CircleCI: The company offers a continuous integration and continuous delivery platform that helps software teams work smarter, faster. CircleCI helps teams shorten feedback loops, and gives them the confidence to iterate, automate, and ship often without breaking anything. CircleCI builds world-class CI/CD so teams can focus on what matters: building great products and services.

n Instana: Agile continuous deployment practices create constant change. Instana automatically and continuously aligns to every change. Instana’s APM platform delivers actionable information in seconds, not minutes, allowing you to operate at the speed of CI/CD. AI-powered APM delivers the intelligent analysis and actionable information required to keep your applications healthy.

n Mattermost: The open-source messaging platform built for DevOps teams. Its on-premises and private cloud deployment provides the autonomy and control teams need to be more productive while meeting the requirements of IT and security. Organizations use Mattermost to automate workflows, streamline coordination, and increase organizational agility. It maximizes efficiency by making information easier to find and increases the value of existing software and data by integrating with other tools and systems.

n Redgate: Its SQL Toolbelt integrates database development into DevOps software delivery, plugging into and integrating with the infrastructure already in place for applications. It helps companies take a compliant DevOps approach by standardizing team-based development, automating database deployments, and monitoring performance and availability. With data privacy concerns entering the picture, its SQL Provision solution also helps to mask and provision database copies for use in development so that data is preserved and protected in every environment.

n Scaled Agile: To compete, every organization needs to deliver valuable technology solutions. This requires a shared DevOps mindset among everyone needed to define, build, test, deploy, and release software-driven systems. SAFe DevOps helps people across technical, non-technical, and leadership roles work together to optimize their end-to-end value stream. Map your current-state value stream from concept to cash, identify major bottlenecks to flow, and build a plan that will accelerate the benefits of DevOps in your organization.

n Tasktop: Transforming the way software is built and delivered, Tasktop’s unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop’s pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.

n Atlassian: Atlassian offers cloud and on-premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. For cloud customers, Bitbucket Pipelines offers a modern Continuous Delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud.

n Appvance: The Appvance IQ solution is an AI-driven, unified test automation system designed to provide test creation and test execution capabilities. It plugs directly into popular DevOps tools such as Chef, CircleCI, Jenkins, and Bamboo.

n Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open-source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.

n CloudBees: CloudBees is the hub of enterprise Jenkins and DevOps. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support. The company also provides CloudBees DevOptics for visibility and insights into the software delivery pipeline.

n CollabNet VersionOne: CollabNet VersionOne’s Continuum product brings automation to DevOps with performance management, value stream orchestration, release automation, and compliance and audit capabilities. In addition, users can connect to DevOps tools such as Jenkins, AWS, Chef, Selenium, Subversion, Jira and Docker.

n Compuware: Our products fit into a unified DevOps toolchain, enabling cross-platform teams to manage mainframe applications, data and operations with one process, one culture and leading tools of choice. With a mainstreamed mainframe, any developer can build, analyze, test, deploy and manage COBOL applications.

n Datical: Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process.

n Dynatrace: Dynatrace provides the industry’s only AI-powered application monitoring. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.

n GitLab: GitLab aims to tackle the entire DevOps lifecycle by enabling Concurrent DevOps. Concurrent DevOps is a new vision for how we think about creating and shipping software. It unlocks organizations from the constraints of the toolchain and allows for better visibility, opportunities to contribute earlier, and the freedom to work asynchronously.

n JFrog: JFrog’s four products: JFrog Artifactory, the Universal Artifact Repository; JFrog Bintray, the Universal Distribution Platform; JFrog Mission Control, for Universal DevOps Flow Management; and JFrog Xray, the Universal Component Analyzer. All are available as open-source, on-premises and SaaS cloud solutions.

n JetBrains: TeamCity is a Continuous Integration and Delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services.

n Micro Focus: Continuous Delivery and Deployment are essential elements of the company’s DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model.

n Microsoft: Microsoft Azure DevOps is a suite of DevOps tools that help teams collaborate to deliver high-quality solutions faster. The solution features Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping.

n Neotys: Neotys is the leading innovator in Continuous Performance Validation for Web and mobile applications. Neotys load testing (NeoLoad) and performance-monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders.

n New Relic: Its comprehensive SaaS-based solution provides one powerful interface for Web and native mobile applications, and it consolidates the performance-monitoring data for any chosen technology in your environment. It offers code-level visibility for applications in production across six languages (Java, .NET, Ruby, Python, PHP and Node.js), and more than 60 frameworks are supported.

n OpenMake: OpenMake builds scalable Agile DevOps solutions to help solve continuous delivery problems. DeployHub Pro tackles traditional software deployment challenges with safe, agentless software release automation to help users realize the full benefits of Agile DevOps and CD. Meister build automation accelerates compilation of binaries to match the iterative and adaptive methods of Agile DevOps.

n Perfecto: A Perforce company, Perfecto enables exceptional digital experiences and helps you strengthen every interaction with a quality-first approach for web and native apps through a cloud-based test environment called the Smart Testing Lab. The lab is composed of real devices and real end-user conditions, giving you the truest test environment available.

n Puppet: Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what’s happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.

n Rogue Wave Software by Perforce: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk.

n Sauce Labs: Sauce Labs provides the world’s largest cloud-based platform for automated testing of Web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.

n SOASTA: SOASTA, now part of Akamai, is the leader in performance measurement and analytics. The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices, in real time and at scale.

n TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite includes these technologies.

n Tricentis: Tricentis Tosca is a Continuous Testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry’s most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+) — enabling them to deliver the fast feedback required for Agile and DevOps.

n XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases. z



Buyers Guide

BY CHRISTINA CARDOZA

Modern-day software development has made APIs more important than ever. Whether it is microservices, Agile or digital transformation, developers need APIs to connect data, apps and devices. To deploy, manage, and run all the APIs necessary for their solutions, they implement API management strategies to make sure everything goes smoothly. This has been the API status quo for the last couple of years, and APIs and API management have been steadily moving along.

“API management is a pretty mature discipline now. When API management companies like 3scale were conceived 10-12 years ago, that was really a response to a real need from Agile developers who were saying our interoperability needs are not met by the ESB (enterprise service bus) model that had dominated for 20 years up until that point,” said David Codelli, senior principal product marketing manager for the open-source software company Red Hat. “Today API management is doing an outstanding job of allowing microservices teams to get the interoperability and the self-service they need. It is a well-established business for mature companies.”

But as time has proven again and again, most things in software development don’t stay the same for long. The advent of these modern software techniques has spurred new technologies to support them, and API management will have to evolve to continue to meet the needs and expectations of users.

API management as code

The next big thing after service mesh for APIs will be APIs as products, or API management as code, according to Red Hat’s senior principal product marketing manager David Codelli. “There has been a lot of buzz around infrastructure as code, where you can program your information technology landscape, and we are starting to embrace that in API management efforts so that users have a development pipeline that orchestrates their hardware and software resources as well as API artifacts,” said Codelli.

Codelli went on to explain that things like an OpenAPI contract, mock services, service metadata, configuration of policies, and configuration of other API management aspects such as security will start to be managed units that undergo the same scrutiny and testing as the code for APIs. “This will save overhead, ensuring predictable and lower-risk deployment, streamlining deployment cycles, being able to provide better resolution. Those are all the same things we got with infrastructure as code, and we will get even more as we do API management as code,” he said. “APIs are extremely important and the benefits of instrumenting them as code are going to be extremely important.”
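As a sketch of what “the same scrutiny and testing as code” could mean for an API artifact, the check below lints a tiny OpenAPI-style contract in a CI step. The contract and the two rules are hypothetical illustrations, not Red Hat tooling.

    # A tiny OpenAPI-style contract, inlined for the example; in a real
    # pipeline it would be loaded from the API's repository.
    contract = {
        "openapi": "3.0.0",
        "paths": {
            "/orders": {
                "get": {
                    "security": [{"api_key": []}],
                    "responses": {"200": {"description": "OK"}},
                },
                "post": {
                    "responses": {"201": {"description": "Created"}},
                },
            },
        },
    }

    def check_contract(contract):
        # Fail the build if any operation lacks a security policy or
        # documented responses: policy-as-code for the API artifact.
        failures = []
        for path, operations in contract["paths"].items():
            for verb, op in operations.items():
                if not op.get("security"):
                    failures.append("%s %s: no security policy" % (verb.upper(), path))
                if not op.get("responses"):
                    failures.append("%s %s: no documented responses" % (verb.upper(), path))
        return failures

    for failure in check_contract(contract):
        print("CONTRACT CHECK FAILED:", failure)

A pipeline that runs a gate like this alongside the service’s unit tests is what turns contracts, policies and metadata into the managed units Codelli describes.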

API management and the service mesh

The microservice architecture has introduced a new concept over the last couple of years to help deal with the overall visibility and insight into microservices. A service mesh is “a way to control how different parts of an application share data with one another,” according to Red Hat. While microservices enable developers to easily make changes to their services, a service mesh is used to handle the service-to-service communication.

According to Kevin Matheny, a senior director analyst for Gartner technical professionals, service meshes and API management are related, but also very different. Over time, developers are going to start, and some have already started, to combine service meshes into their API management initiatives.

“Our customers are engaging with us to try to sort out the landscape and figure out what is complementary and what is overlapping. What are the ways they can build a plan to capitalize on both: advancements in service mesh and advancements in API management,” said Red Hat’s Codelli.

Matheny explained that since this is a newly emerging space, a lot of users are having trouble understanding how to bring the two together. “API management is about gaining access to the APIs that are exposed by an application. Service mesh is about the peer-to-peer connectivity, the API connectivity inside of an application. Many organizations think because they have a service mesh, they don’t need API management, and that is not the case,” he explained.


A service mesh is necessary to handle the service-to-service communication within independently deployable pieces of software that are loosely coupled. However, a service mesh does not provide the same set of functionality that an API gateway does. API management is necessary for any internally or externally exposed apps, and a service mesh is necessary to handle the side-to-side communications, Matheny explained.

“The way to think about it is east-west versus north-south communication. Your north-south communications are API gateway-based. Someone from outside the organization wants to get something. In the case of a microservice-based application, it is another application that wants to get something from this application. Then that is mediated by an API gateway. But inside the boundary of your microservices cluster, the peer-to-peer connections — the east-west connections — are handled using the service mesh,” Matheny explained.

The confusion comes from traditional architectures such as monolithic applications, where peer-to-peer communications are also handled by an API gateway or micro gateway. “You’re not taking advantage of the greater scope of functionality that API management platforms offer. Monolithic applications really narrow it down to just the gateway portion, so that tends to wind up with people saying this gateway portion can handle service-to-service communications between applications and a service mesh handles that. The implementations are different and the scope of use is different,” said Matheny.
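To make the split concrete, here is a minimal sketch of the two traffic paths. The API keys, limits and service names are hypothetical, and a real deployment would use a gateway product and mesh sidecars rather than inline code.

    RATE_LIMIT_PER_MINUTE = 100
    API_KEYS = {"partner-123"}   # hypothetical keys issued via a developer portal
    calls_this_minute = {}

    def north_south(api_key, service):
        # External (north-south) traffic enters through the API gateway,
        # which applies authentication, rate limiting and usage analytics.
        if api_key not in API_KEYS:
            return "401 Unauthorized"
        calls_this_minute[api_key] = calls_this_minute.get(api_key, 0) + 1
        if calls_this_minute[api_key] > RATE_LIMIT_PER_MINUTE:
            return "429 Too Many Requests"
        return east_west("edge", service)  # hand off to in-cluster routing

    def east_west(caller, service):
        # Inside the cluster, peer-to-peer (east-west) calls go through the
        # mesh sidecars, which handle discovery, mTLS and retries; no gateway.
        return "200 OK (%s -> %s via mesh)" % (caller, service)

    print(north_south("partner-123", "orders-service"))
    print(north_south("unknown-key", "orders-service"))

The gateway function owns the concerns Matheny assigns to API management, while the mesh function stands in for the sidecar-handled east-west hop.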

What should you look for in an API management solution?

A common mistake organizations make when evaluating API management solutions is requiring monetization capabilities, according to Kevin Matheny, a senior director analyst for Gartner technical professionals. “Very few organizations I speak to are actually directly monetizing their APIs. If you are not going to do that, or don’t have a real plan to do that, don’t put that as one of the criteria you need,” he explained. “You may wind up selecting a product that doesn’t meet your other needs because you are valuing something you are not going to use.”

When choosing an API management solution, organizations should start with a baseline of an API gateway, according to Matheny. API gateways are an important piece of the puzzle because they take API requests and determine the services necessary to carry out those requests. Organizations are working in many different environments, some on-premises, in the cloud, in multiple clouds or a mix of both. An API gateway should have the ability to be deployed when and where you need it, he explained. The problem, however, is that most organizations will need multiple deployed API gateways, and that is not something a lot of vendors are currently able to provide, according to Matheny.

This is one area, however, that David Codelli, senior principal product marketing manager for the software company Red Hat, said the company took into consideration from the very beginning. Red Hat’s 3scale API Management solution provides hybrid cloud support across all components, enabling users to design APIs for on-premises, in the cloud, or any combination of the two, he explained. According to Codelli, this is possible through Red Hat Integration, which is an “end-to-end experience for receiving, building, implementing, deploying and even retiring APIs,” he said. “What is different about Red Hat Integration than what we have done before is the hybrid cloud is the platform from the beginning.”

The company has also made a number of investments in Kubernetes to enable its API management solutions to run on-premises, or on private or public cloud, and capitalize on the high availability and stability Kubernetes offers. “You have this seamless experience. This unified identity management for all classes of users, and anything we do is based on deployment by the containers and targets the hybrid cloud by targeting the state-of-the-art container management system, which today is Kubernetes,” said Codelli.

Red Hat also takes the end-to-end user experience into account to separate itself from the rest of the API management market. “You can design your contract first. You can deliver that contract to different partners on the consumption side and the delivery side so you can test in parallel. You have built-in mock testing. You have sophisticated tools for implementing those services in a user-friendly canvas,” he said. “We have some very complicated business challenges that our customers are facing and they want a productive canvas for implementing that complexity. So they have the fulfillment tools, the design tools, collaboration tools, and these are all built on open standards for CI and CD that are essentially demanded by Agile developers today.” z

One way Red Hat tackled bringing the two together was by adding an adapter in the Istio service mesh to provide API management capabilities through Red Hat Integration. These capabilities include developer self-service and on-boarding, API documentation, monetization, and usage analytics.

“We also want to make sure our customers understand that the higher levels of API management, like billing, rate limiting, analytics and developer portals, are not addressed by Istio, so we encourage customers to look at the whole picture in planning out their API strategy,” said Red Hat’s Codelli.

Codelli noted this space is still in its very early days and it will be at least a year until real product manifestations come to fruition. z



A guide to API management tools

FEATURED PROVIDER

n Red Hat: 3scale API Management is an award-winning platform that gives control, visibility and flexibility to organizations seeking to create and deploy an API program. It features the comprehensive security, monetization, rate limiting, and community features that businesses seek, backed by Red Hat’s solid scalability and performance.

n Apigee is an API management platform for modernizing IT infrastructure, building microservices and managing applications. The platform was acquired by Google in 2016 and added to the Google Cloud. It includes gateway, security, analytics, developer portal, and operations capabilities.

n Akana by Perforce provides an end-to-end API management solution for designing, implementing, securing, managing, monitoring, and publishing APIs. The Akana API Platform helps you create and publish secure, reliable APIs that are elegant, easy to consume, built the right way, and running as they should be to improve the customer experience and drive growth in your business.

n CA Technologies, a Broadcom company, helps customers create an agile business by modernizing application architectures with APIs and microservices. Layer7 API Management includes the industry’s most innovative solution for microservices, and provides the most trusted and complete capabilities across the API lifecycle for development, orchestration, security, management, monitoring, deployment, discovery and consumption.

n Cloud Elements delivers an API integration platform that allows APIs to work uniformly across hundreds of applications while sharing common data models. “Elements” unify APIs with enhanced capabilities for authentication, discovery, search, error handling and API maintenance. “Formulas” combine those Elements to automate business processes across applications. “Virtual Data Hubs” provide a normalized view of data objects, such as “accounts” or “payments.” All can be shared, modified and re-used.

n Dell Boomi’s API management solution provides a unified, scalable, cloud-based platform to centrally manage and enrich API interactions through their entire lifecycle. With Boomi, users can rapidly configure any endpoint as an API, publish APIs on-premises or in the cloud, and manage APIs with traffic control and usage dashboards.

n IBM’s API Connect is designed for organizations looking to streamline and accelerate their journey into digital transformation; API Connect on IBM Cloud is an API lifecycle management offering which allows any organization to secure, manage and share APIs across cloud environments — including multi-cloud and hybrid environments. This makes API Connect an ideal, scalable solution for those that have, and need to expose, APIs without fear of cloud-specific vendor lock-in.

n Kong delivers a next-generation API and service lifecycle management platform designed for modern architectures, including microservices, containers, cloud and serverless. Offering high flexibility, scalability, speed and performance, Kong enables developers and Global 5000 enterprises to reliably secure, connect and orchestrate microservice APIs for modern applications. Kong is building the future of service control platforms to intelligently broker information across services.

n Microsoft’s Azure API Management solution enables users to publish, manage, secure and analyze APIs in minutes. It features the ability to quickly create an API gateway and developer portal, manage all APIs in one place, gain insights into APIs, and connect to back-end services.

n As part of MuleSoft’s Anypoint Platform, MuleSoft’s Anypoint API Manager is designed to help users manage, monitor, analyze and secure APIs in a few simple steps. The manager enables users to proxy existing services or secure APIs with an API management gateway; add or remove pre-built or custom policies; deliver access management; provision access; and set alerts so users can respond proactively.

n Nevatech Sentinet is an enterprise-class API management platform written in .NET that is available for on-premises, cloud and hybrid environments. Sentinet supports industry SOAP and REST standards as well as Microsoft-specific technologies, and includes an API Repository for API governance, API versioning, auto-discovery, description, publishing and lifecycle management.

n Oracle’s API Platform Cloud Service provides an end-to-end service for designing, prototyping, documenting, testing and managing the proliferation of critical APIs.

n Postman is the leading collaboration platform for API development, used by more than 7 million developers and 300,000+ companies worldwide. Postman’s native apps for macOS, Windows, and Linux provide advanced features and a variety of tools that can be used to extend Postman, including Newman, Postman’s command-line tool, the Postman API, the API Network, and integrations.

n SmartBear Software empowers users to thrive in the API economy with tools to accelerate every phase of the API lifecycle. SmartBear is behind some of the biggest names in the API market, including Swagger, SoapUI and ServiceV. With Swagger’s easy-to-use API development tools, SoapUI’s automated testing proficiency, AlertSite’s API monitoring and ServiceV’s mocking and virtualization capabilities, users can build, test, share and manage the best performing APIs.

n TIBCO Cloud Mashery is a cloud-native API management platform that can be deployed anywhere, either as a SaaS service or containerized in cloud-native and on-premises environments. Mashery delivers market-leading full lifecycle API management capabilities for enterprises adopting cloud-native development and deployment practices, such as DevOps, microservices, and containers. Its capabilities include API creation, productization, security, and analytics of an API program and community of developers. z


Analyst View BY JASON ENGLISH

Is open source the great equalizer?

Jason English is principal analyst and CMO at Intellyx.

If you had told me 25 years ago that open source would be the predominant force in software development, I would’ve laughed. Back then, at my industrial software gig, we were encouraged to patent as much IP as possible, even processes that seemed like common-sense business practices, or generally useful capabilities for any software developer. If you didn’t, your nearest competitor would surely come out with their own patent claims, or inevitable patent trolls would show up demanding fees for any uncovered bit of code.

We did have this one developer who was constantly talking about fiddling with his Linux kernel at home, on his personal time. Interesting hobby.

It is hard to fathom how much the whole Linux open-source franchise is worth today when you look at all of the Apache servers and related open Linux-related projects that are in production now in huge enterprises, across so many different technology spaces. In 2015 a Linux Foundation study estimated the total development contribution value to the codebase at US$5 billion, or 41,192.25 collaborative years of work!


From proprietary to commercial open source

With each new ring added to the trunk of this open-source tree comes a unique vendor. Commercial open-source companies like CloudBees and Pivotal build their reputation by supporting open-source developers seeking to push the boundaries of Agile development and continuous delivery. Success in commercial open source requires a careful balance of contribution and evangelism to the ecosystem (which may contain direct competitors who leverage the code themselves) combined with the ability to upsell related tools and services.

What matters is the open source ecosystem. Almost nothing is proprietary anymore, so value comes from net adoption. So whether you are SmartBear contributing to Swagger for APIs, or MongoDB, or Chef opening up its stack and making IaC recipes available to all on GitHub, there’s a reinvention afoot for many established vendors. Big companies have an increased appetite for compliance, and they are willing to pay vendors handsomely for enterprise-level support, certified builds and regular updates. They can realize the benefits of open-source software with far less risk.

Taking down barriers to entry

Is open source leveling the playing field for startups? We are seeing small enterprises go from zero to delivering enterprise-grade tools and platforms within 12-18 months, utilizing Kubernetes clusters atop hybrid IT infrastructure, running in ready-made templates in elastic cloud services or on bare metal in data centers. The new open startup doesn’t see the need to defend patents; it can literally give away the code to its software in exchange for the opportunity to have it catch on with other developers, trusting that paying enterprises or SaaS subscribers will pay for its support of that code in the future and build upon it.

But wait: if small startups can benefit from being able to incorporate so much open R&D innovation right out of the box, could that open IP help more established companies differentiate even more significantly?

Large vendors can afford to take a longer view

It’s not just small companies getting in on the open source game, which was once famously dubbed a ‘cancer’ by former Microsoft head Steve Ballmer. Major technology players are competing to dominate the open source lane. Think back 15 years ago: who’d have believed IBM would invest $1 billion in an IDE called Eclipse? Or that Microsoft would return to trump them 10 years later with Visual Studio Code, a completely open editor that 70% of the world’s developers now use? Or that IBM would spend $34 billion on Red Hat, a company that distributed its own approved versions of Linux and OpenStack software?

Look to job satisfaction

If open source causes businesses to become more transparent and contribute to the greater good, it may not equalize every other industry, but it has certainly made software development a better one to be in. z



Industry Watch BY DAVID RUBINSTEIN

The winding road to software failures

David Rubinstein is editor-in-chief of SD Times.

What began as a conversation about high-profile application failures and outages this summer took a winding road toward the root of what ails software development teams.

I was speaking with Mehdi Daoudi, who runs the experience monitoring solution provider Catchpoint, to try to better understand why website and web application failures continue. He spoke about such things as the complexity of modern computing systems, which makes those systems more prone to failure. He went on to say that the more complex systems are, the harder it is to document everything. On top of that, the industry is now in constant release mode. “Everyone wants to push features as quickly as possible, but you cannot do that without things breaking, right?” he said. “Some of these eggs are going to be made into an omelet.”

We talked a bit more, and the discussion turned to the consumerization of software, and giving employees the same great experiences with their work tools as they expect from the applications they enjoy in their non-work lives. While organizations can control this when the applications are running in their own data centers, what can they do when they have to rely on cloud providers that they have no control over?

Daoudi said, “What we’re starting to see in the workplace is a consumer mentality happen, where people — at the beginning it was BYOD, bring your own device — but now it’s literally the freedom of using whichever tool you want to get your job done. CIOs don’t have as much control anymore over what productivity suite is used, so we’re seeing this shift. People used to call it shadow IT, but in my opinion this is ‘whatever it takes to get the job done IT.’ Even at Catchpoint, we have 200 employees; believe it or not, we use 120 SaaS applications. When I saw the audit I was scared, because now you have to take into account security and all that stuff. But it is what it is. You can’t go back to five applications; you can’t force people.”
With the proliferation of cloud-based productivity tools — Salesforce, Microsoft Office 365, Slack, Gmail and many others — when an AWS or Azure region goes down, your workforce is completely unproductive, and the costs of that are enormous. This, he said, is especially problematic in today’s world in which more and more people are working remotely, and using cloud services to get their jobs done. “But when you have that, you cannot tell people if you’re in headquarters you’re going to have super-fast access to X, Y and Z, but if you’re at home, or work at WeWork, or Starbucks, well, good luck, it’s on you. Everybody deserves a first-class experience, and that’s the mentality we’re seeing.”

From there, we talked about service level agreements, and writing them in such a way that they have real teeth and can hold cloud providers’ feet to the fire. The onus is on them to provide that great user experience, and if they cannot, they should be held accountable. In fact, Daoudi said one of his customers, Autodesk, was able to claim back millions of dollars from a vendor that failed to provide service at the agreed-to level.

Monitoring, Daoudi said, is built on four pillars: reachability, availability, performance and reliability. “Companies try to do four things when it comes to monitoring in general. There is the reachability aspect: Can I get to Point B from Point A, and once I get there, is it up or is it down? Is it fast or is it slow, and then how reliable is it?”

Finally, Daoudi touched on what could be the root cause of software failures and systems outages. He put it plainly: People are tired. “People are literally exhausted. It’s this constant firefighting, putting out fires; at some point, you let your guard down. You become a little bit weak. In our SRE survey of 2019, one of the topics that came up is this fatigue, the stress. Some site reliability engineers even talk about PTSD to some degree. It’s just that people burn.”

So as systems grow in complexity, and organizations drive people harder and harder to release more features faster than ever (“Do more with less” is the mantra of the day), they risk burning out their development teams. That will put their systems and applications at risk, which in turn will put the organization at risk. z
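Daoudi’s four pillars map naturally onto a synthetic probe. The sketch below is generic Python against a placeholder URL, not Catchpoint’s product:

    import time
    import urllib.error
    import urllib.request

    URL = "https://example.com/"   # placeholder endpoint to probe

    def probe(url, timeout=5.0):
        # One synthetic check covering reachability, availability and
        # performance; reliability falls out of repeating the probe.
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                latency_ms = round((time.monotonic() - start) * 1000)
                return {"reachable": True,
                        "available": response.status == 200,
                        "latency_ms": latency_ms}
        except (urllib.error.URLError, OSError):
            return {"reachable": False, "available": False, "latency_ms": None}

    results = [probe(URL) for _ in range(5)]
    reliability = sum(r["available"] for r in results) / len(results)
    print(results[-1], "reliability=%.0f%%" % (reliability * 100))

Reachability and availability come from whether the request completes and what status it returns, performance from the measured latency, and reliability from repeating the probe and tracking the success rate over time.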

