APRIL 2019 • VOL. 2, ISSUE 022 • $9.95 • www.sdtimes.com
Contents
VOLUME 2, ISSUE 22 • APRIL 2019

NEWS
News Watch (page 6)

FEATURES
With microservices, more isn’t always better (page 8)
Making project management easier… for developers (page 14)
Why continuous testing is so important (page 23)
AI and ML is the future of RPA — but don’t forget the people (page 30)
Drive more data warehouse insights (page 39)

BUYERS GUIDE
Is DataOps the Next Big Thing? (page 34)

ATLASSIAN SHOWCASE
Atlassian eyes future of development (page 17)
Testim enables intelligent testing (page 19)

COLUMNS (pages 40-42)
GUEST VIEW by Gabrielle Gasse: Developers need to focus on ‘Code UX’
ANALYST VIEW by Jason Wong and Elizabeth Golluscio: Invest in your ‘cool factor’
INDUSTRY WATCH by I.B. Phoolen: Crumbs for cupcake-native development

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza, ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITOR: Jenna Sargent, jsargent@d2emerge.com
ASSOCIATE EDITOR: Ian Schafer, ischafer@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS: Cambashi, Enderle Group, Gartner, IDC, Ovum
ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com
SALES MANAGER: Jon Sawyer, jsawyer@d2emerge.com
CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, adtraffic@d2emerge.com
LIST SERVICES: Jourdan Pedone, jpedone@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH

The Linux Foundation forms new JavaScript community
The Linux Foundation announced the formation of the OpenJS Foundation. The new foundation is a result of a merger between the Node.js Foundation and the JS Foundation. The OpenJS Foundation will aim to provide a neutral location for hosting, sustaining and funding projects and activities that benefit the entire ecosystem. It is currently made up of 31 open-source projects such as the popular Appium, Dojo, jQuery, Node.js and webpack projects. In addition, the foundation will extend support to non-hosted projects.
“This is an exciting step forward for the entire open source JavaScript community, as it strengthens the impact of our collective efforts under one united Foundation,” said Dave Methvin, technical advisory committee chair of the JS Foundation. “A new merged Foundation is able to better serve the community and members to grow the JavaScript ecosystem from a technology and standards perspective.”
WebAuthn becomes official recommended web standard
The Internet is one step closer to a passwordless future. The World Wide Web Consortium (W3C), along with the FIDO Alliance, announced that the Web Authentication (WebAuthn) specification is now an official web standard. WebAuthn is a core component of the FIDO Alliance’s FIDO2 set of specifications, which aims to provide easier authentication services to mobile and desktop environments. According to the alliance, WebAuthn enables online services to leverage FIDO Authentication through a standard web API that can be used in browsers and other web platform infrastructure. The W3C’s WebAuthn Recommendation is currently supported in Windows 10, Android, Google Chrome, Mozilla Firefox, Microsoft Edge and Apple Safari.

The Continuous Delivery Foundation to serve as new home for open-source CI/CD projects
The Linux Foundation announced at the Open Source Leadership Summit that it is launching a new foundation for the CI/CD space. The Continuous Delivery Foundation (CDF) will be a vendor-neutral home for open-source CI/CD projects and will foster collaboration between developers, end users, and vendors. The project is launching with four initial projects: Jenkins, Jenkins X, Tekton, and Spinnaker. The CDF will soon be forming a Technical Oversight Committee, which will enable more projects to join. According to the Linux Foundation, the CDF will have an open governance model that will encourage participation and contribution while also providing a framework for long-term stewardship and sustainability.
TensorFlow Dev Summit 2019 brings new AI capabilities
The open-source machine learning platform TensorFlow is expanding its AI capabilities in mobile, IoT, and JavaScript. The team announced the alpha version of TensorFlow 2.0, TensorFlow.js 1.0, TensorFlow Lite 1.0 and TensorFlow Extended at its TensorFlow Developer Summit in Sunnyvale, CA last month. TensorFlow 2.0 is available as a developer preview, and comes with a focus on making the library and APIs more accessible to beginner and expert machine learning researchers.
For production, the team announced new features for its TensorFlow Extended platform for creating and managing a production machine learning pipeline. The biggest update is new orchestration support for managing components and artifacts, and for enabling things like experiment tracking and model comparison. TensorFlow Lite 1.0 is designed for mobile and embedded systems, and comes with new improvements for training smaller models. Other announcements included TensorFlow Datasets for importing common standard datasets; Swift for TensorFlow; TensorFlow Federated for experimenting with machine learning on decentralized data; and TensorFlow Privacy for training machine learning models with privacy in mind.
JFrog to acquire Shippable for CI/CD capabilities
DevOps company JFrog has announced its intention to acquire Shippable for its cloud-native and Kubernetes-ready CI/CD capabilities. Shippable is a DevOps and CI automation solution provider that offers an assembly platform for shipping software faster. JFrog plans on incorporating Shippable’s solutions into its platform to create a comprehensive DevOps pipeline solution, the company explained. Shippable’s technology will enable JFrog customers to more completely automate their development processes.
Progress’ OpenEdge gets milestone release
Progress announced a threefold throughput performance improvement with the latest release of its application development platform. According to the company, Progress OpenEdge 12 features a 200 percent improvement in database throughput performance, responsiveness and scalability. In addition, the release comes with new advanced features and functionality to give modern apps always-on availability and enhanced agility.
“Many enterprises around the globe are operating with business-critical applications that were developed years ago, and struggle to continuously deliver application functionality that will help them, and their customers, evolve to meet increasing business demands,” said John Ainsworth, SVP of core products at Progress. “Designed to support the needs of business-critical application delivery and deployment, OpenEdge 12 is the highest performing, highest quality, most secure and productive version of OpenEdge ever released.”
Postman 7.0 released with extended roles and permissions
API development solution provider Postman announced the latest version of its platform with new ways for developers and teams to manage access control. Postman 7.0 includes extended roles and permissions, allowing users to manage teammates on collection, team and workspace levels. According to the company, this is a major update that gives team leaders the ability to assign permissions at every level. In order to take advantage of this new feature, users will have to migrate to the latest version of Postman. Additionally, the company announced its API development platform is expected to come to beta this spring.
Java 12 is now available
Oracle has released the latest version of Java, which is the first of two major releases for the programming language this year. According to Oracle, Java 12 will receive at least two more updates before Java 13 is released in September. New features in Java 12 include a new low-pause-time garbage collector, a microbenchmark suite, switch expressions, the JVM Constants API, a single AArch64 port, default CDS archives, abortable mixed collections for G1, and the ability to return unused committed memory from G1.
According to Oracle, the rate of change between updates has drastically improved since the company switched to releasing updates every six months. This is because instead of making tens of thousands of fixes available in a release every few years, enhancements can be made on a more manageable and predictable schedule, Oracle explained. There were 1,919 JIRA issues marked as fixed in Java 12, of which 1,433 were completed by Oracle employees and 486 by individual developers.
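Among those features, switch expressions shipped in Java 12 as a preview feature, letting a switch yield a value with arrow labels and no fall-through. A minimal sketch of the syntax:

public class SwitchDemo {
    enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    public static void main(String[] args) {
        Day day = Day.WEDNESDAY;

        // Java 12 preview syntax: the switch is an expression, arrow labels
        // replace break statements, and multiple labels share one arm.
        int letters = switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };

        System.out.println(day + " has " + letters + " letters");
    }
}

Under Java 12 this compiles only with javac --enable-preview --release 12; the syntax was finalized in a later release.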
Google unveils the next version of Android
The upcoming version of Google’s mobile operating system Android is now available for early adopters and developers. Early adopters can access Beta 1 of Android Q by enrolling any Pixel device, and developers will have access through a preview SDK. According to Google, the release continues to focus on security and privacy by including work from Google Play Protect and runtime permissions as well as new privacy and security features, new APIs, camera capabilities, faster app startup, and enhancements for innovations like foldable phones. Other features for the new Android will include more control over location for users; more updates to transparency and personal data; limited access to non-resettable device identifiers; new ways to engage users such as foldables and new screens; the ability to share shortcuts; and improved peer-to-peer and Internet connectivity.

Atlassian to acquire AgileCraft for Agile planning
Atlassian has announced that it is acquiring Agile planning software provider AgileCraft. According to Atlassian, AgileCraft enables organizations to create a ‘master plan’ for their strategic projects and workstreams. Atlassian tools are already focused on enabling Agile transformation, and AgileCraft will help connect the work of those tools to the business objectives and strategic outcomes of the enterprise. The acquisition is valued at $166 million and is expected to close in April.
People on the move

Docker has added four new members to its leadership team:
• GitHub executive Papi Menon as VP of product. Menon will drive product strategy for the Docker container platform.
• Docker EVP Brian Camposano as chief financial officer. Camposano will oversee the company’s strategic finance, accounting, HR, legal, IT and corporate development teams as well as scale Docker’s presence globally.
• Former Dell executive Victor Raisys as EVP and GM of new markets. Raisys will lead the company’s development and delivery of products for developers through new channels.
• Intel’s Debbie Anderson-Brooke as SVP of corporate marketing. Anderson-Brooke will manage brand strategy, corporate communications and community for the company.

Vince Steckler has announced his resignation as CEO of cybersecurity company Avast after 10 years at the company. Ondrej Vlcek, the president of Consumer Business, will take his place. Maggie Chan Jones and Tamara Minick-Scokalo have joined the Board.

Sharon Hagi has joined Silicon Labs as its new Chief Security Officer, which is a new position within the company. In the past, Hagi served as vice president of security at Ethoca and chief technology strategist at IBM Security.
With microservices, more isn’t always better
BY CHRISTINA CARDOZA
The benefits of microservices are undeniable. Software development companies want to be able to deliver software rapidly, frequently and reliably — and microservices are a means to that end. A recent O’Reilly Media report found that more than 50 percent of software projects are currently using microservices. Of those surveyed, 86 percent have found at least partial success with microservices while only 15 percent have seen massive success. The problem is that, like with any new trend, businesses tend to jump on the bandwagon too quickly, according
to Michael Hamrah, chief architect at the HR software company Namely. “You think just by having it that you unlock the value proposition, but like anything there is a learning experience,” he said. A common mistake Hamrah sees happening when businesses move to microservices without a clear understanding or intent is they end up with what some know as Frankenstein microservices, microliths or monoservices, where you end up with an inability to manage services, set service boundaries or provide good reliability. “Organizations need to assess if
microservices are a right fit for them and adapt their organizational structure and culture before embarking on a microservices adventure,” said Carlos Sanchez, principal software engineer at continuous delivery software provider CloudBees. “Microservice architectures can start with good intentions, but can go wrong and end up with Frankenstein microservices or distributed monoliths that manage to get the worst of microservices and monoliths: slow development velocity, more defects and increased complexity.” However, since microservices are a completely different way of working
and there is no framework out there to tell you what to do, it can be hard to tell whether or not you are on the right path before it is too late, Hamrah explained. You need to ask yourself, are you releasing to production and continuing to release to production? “If you are doing that and you are feeling really good about your ability to develop features and work on features then that is the most important thing no matter how you are doing it,” he said. If you are struggling to manage infrastructure you probably need to rethink your architecture.
Hamrah provides four considerations to keep in mind when creating a microservice:
1. Think about what the boundaries are and the APIs. Hamrah explained a service “must be the definitive source of truth for data and functionality it is intended to cover.”
2. Microservices must promote loose coupling so operations are simplified and services can evolve independently of one another.
3. Microservices should create opportunities and add value. “You should really be thinking about what new service can you leverage in various ways. What new data can you provide? What new functionality can you enhance through your product?” he asked. “And then going back to your first two principles, are you able to do that independently and just focus on that piece?”
4. A service must be reliable, so it is important to think about uptime and usage, according to Hamrah. This includes properly monitoring it and having observability into the service.
Another reason companies are having trouble successfully implementing microservices is because they get too caught up in the term microservice itself, according to Chris Richardson, microservices consultant and founder of the transactional microservices startup Eventuate. While a common belief around microservices is that they should be small, single-function services, Richardson believes that is a nebulous idea and it is better to create cohesive services with functions that
belong together. “The big idea of microservices is functional decomposition to accelerate development,” he explained. “If a team, one of many teams developing an application, is in production with a single service, then why split it, since more services equals more complexity.” Richardson explained micro tends to imply that things have to be small, when really it is an architectural style that should imply loosely coupled services within an application. Organizations often have the misconception that more services are better. “This pushes people down the route of creating or defining too many services,” he said. “I always try to tell people to aim for as few services as possible and only if there is a problem should you start to split and introduce more services.” Your services should implement meaningful chunks of business functionality, he added. Richardson’s characteristics for a microservice are that they are easy to maintain, evolve, test and deploy.
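To make those ideas concrete, here is a minimal, hypothetical sketch of one cohesive service that is the source of truth for its data and exposes a narrow API, rather than several tiny services sharing a database. The order domain and every name below are invented for illustration, not drawn from any company quoted here.

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// One cohesive "orders" service: placement and status live together because
// they change together, and callers go through the interface, never the storage.
interface OrderService {
    String placeOrder(String customerId, String sku, int quantity);
    Optional<OrderStatus> status(String orderId);   // narrow, stable contract
}

enum OrderStatus { PLACED, SHIPPED, DELIVERED }

class InMemoryOrderService implements OrderService {
    // Only this service touches its own storage (a map standing in for the
    // service's private database).
    private final Map<String, OrderStatus> orders = new ConcurrentHashMap<>();

    @Override
    public String placeOrder(String customerId, String sku, int quantity) {
        String orderId = customerId + "-" + System.nanoTime();
        orders.put(orderId, OrderStatus.PLACED);
        return orderId;
    }

    @Override
    public Optional<OrderStatus> status(String orderId) {
        return Optional.ofNullable(orders.get(orderId));
    }
}

class OrderServiceDemo {
    public static void main(String[] args) {
        OrderService service = new InMemoryOrderService();
        String id = service.placeOrder("cust-42", "SKU-1", 2);
        System.out.println(id + " -> " + service.status(id).orElseThrow());
    }
}

Keeping placement and status in one deployable follows the advice to aim for as few services as possible; splitting them would add operational cost without adding independence.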
If you find yourself in the trough of disillusionment with microservices, Hamrah said it is important not to get discouraged. “There is initial excitement when people go in, and maybe they go in too fast, but you learn from that and hopefully you are constantly learning, iterating and improving,” he said.
When the computer software company Nuance recently went through a microservices transformation, the team found it was hard to get the balance right. “What service should be responsible for what, and avoiding creating a new monolith in the process by trying to put too much into one place, was really hard,” said Tom Coates, senior principal architect for Nuance. “We had a couple false starts that were way too big and too much like the old system, but we kept refining and breaking it up until we got to a place where we were comfortable.”
Hamrah added, “If you can move forward to solve your immediate problem and you have a very healthy culture of refactoring, improving and iterating — I think you are going to work through these common early mistakes and get to a point where you fully understand and adopt patterns of a healthy microservice ecosystem.”

Managing the complexity
The benefits of microservices can overshadow an important fact of moving to this architecture: it’s sometimes more complicated moving to microservices because data is distributed, there are many moving parts, and there is a lot more to manage and monitor, according to Eventuate’s Richardson. It is a big change going from having to manage one codebase to multiple and even hundreds of codebases and services; and sometimes it’s not always the right choice.
“Companies jump too early into microservices. I don’t think there is anything wrong with monoliths. There is nothing wrong with starting out early products or even projects in large organizations in a monolith way,” said Hamrah. “You really want to be focused on where you are spending your effort, and if you are spending too much effort dealing with bugs, not being able to release your code because things are too tightly coupled, or not being able to make safe refactoring choices, you probably need to move to microservices.”
If your application is not large or complex and you don’t need to deliver it rapidly, then you are probably better
continued on page 12 >
Once you hit a stride with microservices and you are able to iterate more quickly, find and fix bugs faster, and introduce new features rapidly, it is crucial not to go overboard. You may want to try to start moving all your pieces of infrastructure to a microservice architecture, but as one company found out, not all monoliths are worth changing.
Customer data infrastructure company Segment’s infrastructure is currently made up of hundreds of different microservices, but there is one piece of infrastructure where the company took it too far. According to Calvin French-Owen, CTO and co-founder of Segment, the company decided to move to microservices because the architecture tends to allow more people to work on different parts of different codebases independently. However, Segment decided to try to split up a piece of its infrastructure based on where data was being sent and not based upon the individual teams using or making changes to that service. For instance, Google Analytics had its own service, Salesforce had its own service, Optimizely had its own service, and so on with each data destination Segment provided.
The problem here, however, is that there are more than 100 types of destinations like this. So, instead of being able to scale the product, the company was creating more friction, more operational overhead and more operational load every time a new integration was added on, explained French-Owen.
Alexandra Noonan, software engineer for Segment, explained in a blog post: “In early 2017 we reached a tipping point with a core piece of Segment’s product. It seemed as if we were falling from the microservices tree, hitting every branch on the way down. Instead of enabling us to move faster, the small team found themselves mired in exploding complexity. Essential benefits of this architecture became burdens. As our velocity plummeted, our defect rate exploded.”
“It got to a point where we were no longer making progress. Our backlog was building up with things we needed to fix. We couldn’t add any new destinations anymore because these teams were just struggling to keep the system alive,” said Noonan. “We knew something had to change if we wanted to be able to scale without just throwing more engineers on the team.”
That change came in the form of going back to a monolith, but it wasn’t easy to bring all these microservices into one service again because each service had its own individual queue, so the team had to rethink an entirely new way to bring everything back together. The company developed an entirely new product called Centrifuge to replace the individual queues and be solely responsible for sending events to the monolith. Overall, the company was operating these different data integrations for about three years until it reached an inflection point where it realized things were becoming too difficult to manage. Centrifuge was built over the course of six to nine months, and it wasn’t until that piece of infrastructure was built that the company was able to put the codebase back together again in a way that was easier to work with.
“With microservices, you have a tradeoff where the nice reason to adopt them is it allows more people to work on different parts of the codebase independently, but in our case it requires more work operationally and more operational overhead to monitor multiple services instead of just one,” said French-Owen. “In the beginning, that tradeoff made sense and we were willing to accept it, but as we evolved we realized there were only a couple of teams contributing to the service and there were just way too many copies of it running around.”
“Now, we rarely get paged to scale up that service. It scales on its own. We have everything in one monolith that is able to absorb these spikes and load, and handle them much easier,” Noonan added.
Microservices gone wrong
Noonan explained that in addition to the overhead that came with adding new integrations, there was just a ton of overhead having to maintain the current ones. Among the problems the team encountered was that while the services shared libraries, the team didn’t have a good way of testing changes to that shared library across all the services. “What happened is we would release an update to this library, and then one service would use the new version and now all of a sudden all these other services were using an older version, and we had to try to keep track of which service was using what version of the library,” Noonan told SD Times. Other problems included trying to autoscale these services when each service had a radically different load pattern in terms of how much data it was processing, and trying to decrease the amount of differences between all the services.
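Segment has not published Centrifuge’s internals here, but the consolidation the team describes can be illustrated with a purely hypothetical sketch: a single service dispatches every event to a per-destination handler, instead of running a separate service and queue for each destination. All names below are invented.

import java.util.Map;

// Hypothetical illustration of the single-service approach: one dispatcher,
// many destination handlers, shared operational machinery around them.
interface DestinationHandler {
    void deliver(Map<String, Object> event);
}

class DestinationDispatcher {
    private final Map<String, DestinationHandler> handlers;

    DestinationDispatcher(Map<String, DestinationHandler> handlers) {
        this.handlers = handlers;
    }

    void process(String destination, Map<String, Object> event) {
        DestinationHandler handler = handlers.get(destination);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown destination: " + destination);
        }
        // Retry, batching and monitoring logic would wrap this one call, so it
        // is written and operated once rather than once per destination service.
        handler.deliver(event);
    }

    public static void main(String[] args) {
        DestinationDispatcher dispatcher = new DestinationDispatcher(Map.of(
                "analytics", event -> System.out.println("to analytics: " + event),
                "crm", event -> System.out.println("to crm: " + event)));
        dispatcher.process("crm", Map.of("userId", 42, "event", "signup"));
    }
}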
< continued from page 9
off with a monolith, explained Richardson. “You shouldn’t do microservices until you need them. You shouldn’t use them for small applications, or in small organizations that don’t have any of the problems that microservices are trying to solve,” said CloudBees’ Sanchez.
If you do decide a microservice architecture is the best fit for your organization and teams, Nuance’s Coates said from experience it is best not to try and go in halfway. “It is an all-or-nothing proposition in my opinion,” he said. “Unless you have a system that already has some clearly defined interfaces then maybe you can try to piecemeal it. Our first attempts were, let’s do
a little here and there, and that just doesn’t work. Since it is such a fundamental shift, it’s tough to make it play nicely with other legacy systems. If you are going to migrate from a classic architecture to a microservice architecture, you have to more or less greenfield the entire thing and start from ground zero.”
To do this, you need tooling and processes in place to ease the complexity, such as service meshes or server monitoring and distributed tracing tools, according to Sanchez. “You need an automated pipeline from development to production, as you can’t manually scale the monolith architecture processes to dozens or hundreds of microservices. You also need to take advantage of DevOps and progressive delivery methodologies, like blue-green or canary,” Sanchez added.
Additionally, APM-based tools will help teams get the right observability capabilities in place, according to Namely’s Hamrah. Some tools Hamrah recommended include the open-source framework gRPC for defining services and Istio for monitoring traffic. But when it comes to picking tools and addressing challenges, Hamrah explained teams should be aware of whether they are actually struggling with managing their infrastructure or struggling with a particular technology they are using to accomplish their goals.
Calvin French-Owen, CTO and co-founder of the data infrastructure company Segment, focuses on three different pieces when it comes to building microservices: making sure they are easy to build in the first place by providing a service template and the scaffolding necessary to get users going; making sure they are easy to deploy and to spin up the infrastructure for; and making sure it is easy to monitor and understand what is going on in production.
In order to tell if you are actually on the right path and improving, some key metrics you should be looking at are the time it takes from when a developer commits or checks in a change until that change is deployed into production, and how frequently you are releasing changes to production, Eventuate’s Richardson explained. “Improvement efforts should be centered around improving those metrics. If you are adopting microservices but not seeing those metrics improve, then something isn’t right,” he said.
Lastly, Nuance’s Coates added having architectural guidelines in place can help teams understand what a service should and shouldn’t do. “Each service has a purpose and should be able to stand on its own, describe what it is for, and how someone might use it. No matter what microservice you are looking at, you know your way around it because it is packaged and laid out similar to other ones.”

Microservices anti-patterns
It can be easy to fall into bad patterns when moving to microservices, according to Chris Richardson, microservices consultant and founder of the transactional microservices startup Eventuate. The important thing is to recognize the mistakes you are making and address them. Some common mistakes or anti-patterns Richardson sees organizations fall into are:
Distribution is free: Developers treat their services like programming language-level modules, which results in high latency and reduced availability. Instead, developers should pay attention to how their services interact with each other and ensure they can handle requests without relying on another service.
Shared databases: According to Richardson, developers have a hard time wrapping their heads around the fact that there is no longer a single database. Databases should not be accessed by multiple services in a microservice architecture because it requires too much coordination across teams. Instead, tables should be accessed by a single service and services should collaborate through APIs.
Unencapsulated service: This creates a large API that doesn’t encapsulate a whole lot and basically negates the whole idea of a microservice, Richardson explained. The solution is to have an encapsulated service that hides as much implementation as possible and enables loosely coupled teams. For instance, an iceberg service encapsulates significant business functionality, is small, and bundles data and behavior.
Distributed monolith services: A distributed monolith pattern creates two or more services that implement the same concept. Richardson explained that things that change together should be packaged together.
On the non-technical side of things, some problems Richardson sees are that organizations:
• Believe microservices will solve all their problems instead of using microservices to solve a particular problem, like accelerating development.
• Set a goal of adopting microservices when accelerating software delivery is the real goal.
• Attempt microservices when they are not ready with the necessary skills such as clean code, object-oriented design and automated testing.
• Create a bunch of services just for the sake of having a large number of services, which creates unnecessary complexity.
• Adopt microservices without addressing process, policies and culture such as siloed teams, manual testing and monthly deployments.
—Christina Cardoza
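Stepping back to the yardstick Richardson offers earlier in this piece, commit-to-production lead time and release frequency can be tracked from a simple log of deployments. The following is a minimal sketch; the class and field names are hypothetical, not part of any vendor's tooling.

import java.time.Duration;
import java.time.Instant;
import java.util.List;

class Deployment {
    final Instant committedAt;   // when the change was committed
    final Instant deployedAt;    // when it reached production

    Deployment(Instant committedAt, Instant deployedAt) {
        this.committedAt = committedAt;
        this.deployedAt = deployedAt;
    }
}

class DeliveryMetrics {
    // Average commit-to-production lead time across deployments.
    static Duration averageLeadTime(List<Deployment> deployments) {
        long avgSeconds = (long) deployments.stream()
                .mapToLong(d -> Duration.between(d.committedAt, d.deployedAt).getSeconds())
                .average()
                .orElse(0);
        return Duration.ofSeconds(avgSeconds);
    }

    // Deployments per day over an observation window.
    static double deploymentsPerDay(List<Deployment> deployments, Duration window) {
        return deployments.size() / (double) Math.max(1, window.toDays());
    }

    public static void main(String[] args) {
        List<Deployment> deploys = List.of(
                new Deployment(Instant.parse("2019-03-01T09:00:00Z"),
                               Instant.parse("2019-03-01T15:00:00Z")),
                new Deployment(Instant.parse("2019-03-04T10:00:00Z"),
                               Instant.parse("2019-03-05T10:00:00Z")));
        System.out.println("Average lead time: " + averageLeadTime(deploys));
        System.out.println("Deploys per day (30-day window): "
                + deploymentsPerDay(deploys, Duration.ofDays(30)));
    }
}

If those numbers do not improve as services are added, that is the signal Richardson describes that something is not right.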
Making project management easier… for developers
BY DAVID RUBINSTEIN
In an effort to make it easier for developers to “work where they are” yet still get a bigger picture of enterprise projects and what’s coming down the pike, several project management tool providers are bringing their solutions down to the engineering level. These tools have been important for project managers, but three relatively new companies — Anaxi, Clubhouse and ZenHub — think it’s equally important for developers to have the broad view, while still being able to see their assignments and provide a view into their work without interruption. Meanwhile, Atlassian — whose Jira solution dominates the market — has put integrations in place to remove what has essentially been a tax on developers to manually update their status in the tool, which takes them out of their development environment and costs them in terms of time and having to refocus on the work they were doing.
Aaron Upright, co-founder of ZenHub, said his company’s offering is designed “to reduce the context-switching, and the inefficiencies that creates,” when developers have to leave their workspace to update the project management tools. In fact, Sean Regan, head of developer growth at Atlassian, cited a statistic that showed 56 percent of developers say they have to update tools more than once per day, and need to check 3.3 tools for the status of a project.
A project management tool, Regan said, should be smart enough to bring context to everyone automatically, so that when an issue comes in and developers work in the UI, that task automatically updates so designers can update the UI. “They all can do their jobs and not do project management.”
APIs are the order of the day, as each solution relies on integrations so users can remain in the tools they prefer, but gives visibility into the bigger plan and the work that will be necessary
to complete. “When you have APIs, you need fewer project managers and fewer updates to the tools,” Regan said.
Kurt Schrader, co-founder of the project management platform Clubhouse, explained that his company built out its solution to work well for software and hardware teams from day one. “We’ve tried to build something that lets teams manage their tasks, manage their work like a lot of traditional software project management tools let you do, but also to pull back and see things from a high level, see what the priorities are for the company, what are the five or six big things you want to build this quarter,” he said. “I think we’ll see a movement away from structured, monthly road maps to sort of a continuous flow of information, your big things, and we want to enable that so organizations can move quickly, have their work in there but be able to pull back so everyone that needs to participate can get the next feature, the next value out the door, and still work together in sync.”
People like to work in the tools they’re comfortable with, and that work for them. A developer working in GitHub, for instance, can link their work to a Clubhouse task — it calls them Actions — that they’re performing and just work. There’s no need to go back into the management tool and do updates. “We’ve tried to lower the bar of overhead as much as possible, so that what you see in Clubhouse really reflects the actual truth of the world,” Schrader said. The Anaxi and ZenHub offerings also work with GitHub.
Atlassian, of course, is the acknowledged big gorilla in the project management space, run by a big number of big organizations. Clubhouse entered the market targeting companies of 50 to 100 people, and is now looking to scale upward, while retaining its core mission of making developer lives easier. “To me, the goal of Jira has always been that everybody works together, to build things, and track the work that’s going on,” he said. “But because of the way it’s grown over the years, you‘ll see a situation where there’s a program manager or a project manager basically walking around asking, ‘Is this up to date?’ We did things from the very beginning to say, ‘This is the thing you’re going to be building for the next week, or choose the thing you’re going to build the next week.’ Where do you want to work? GitHub? So you can hook your GitHub branch to it, and as you build things in GitHub, as you open a pull request, as you branch, it automatically moves things for you. You don’t have to come into Clubhouse to move things around, you just have to be
in there so if somebody has a question, you can respond through email, respond through Slack… wherever you want to work, let’s make it as easy as possible to have this reflect reality without someone wandering around asking, ‘Is this up to date? What’s going on with this?’”
The company, though, is quite aware that organizational teams need to collaborate, and it says that Clubhouse can bridge communication gaps between teams. “The beauty of what we’ve created is we’ve kept it really simple and intuitive so teams outside of engineering and product can communicate and collaborate to ship products,” said Mitch Wainer, CMO at Clubhouse and co-founder of cloud provider DigitalOcean. “I experienced this pain at DigitalOcean... What we saw is that it siloed teams, because engineering teams were using Jira and marketing was using Trello and Airtable, depending upon the team. You see a lot of fragmentation. Clubhouse has struck the balance between simplicity and flexibility, so you’re able to bridge that gap of communication in the organization. If you’re a CTO or leading the organization, it’s nice to have a bird’s-eye view to see the status of a project, its lifecycle from start to finish.”
Part of what makes project management so complex is the mix of people involved in seeing work through to its completion, and overcoming the reality that each stakeholder has a preferred tool for work, communication and collaboration. “Development is a team sport,” said ZenHub’s Upright. “If developers are in their dev environments committing code all day, they could be blind to other aspects of the project. Instead of
developers being pulled into meetings with project managers who will ask for their status, it’s better to have that conversation asynchronously in the tool.” Upright said that developers working in GitHub today have no way to get insights into how work is piling up, which areas of the work board are increasing or decreasing, or how issues are being handled in the pipeline. To help with that, ZenHub last month released new reporting capabilities in its platform. “There are so many personas around project management, so how do we serve them a little better with data and insights we can generate? Our reporting suite is a manifestation of that.” ZenHub takes advantage of the underlying structure GitHub provides. “We use GitHub issues, or pull requests, and display them on a board and integrate that into our reports,” Upright explained. “We don’t need to create our own system.”
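ZenHub’s reports and Anaxi’s threads both lean on the host systems’ existing APIs rather than copying data into a parallel store. As a rough illustration of that integration style (not any vendor’s actual code), a client can read a repository’s open issues straight from GitHub’s public REST API and leave GitHub as the system of record:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: fetch open issues for a repository from GitHub's REST API
// without keeping a local copy of the data.
public class GitHubIssuesClient {
    public static void main(String[] args) throws Exception {
        String repo = "octocat/Hello-World";   // example public repository
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/" + repo + "/issues?state=open"))
                .header("Accept", "application/vnd.github.v3+json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A real integration would parse the JSON and render it on a board or
        // in a report; here we just print the raw payload.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}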
Anaxi, a startup that literally is just getting started, provides an interface for Jira and GitHub that enables users to create project threads similar to Slack channels, but never extracts data, leaving it more secure and as the single source of truth. The mobile web application, made publicly available in December for iOS, gives users an organized stack on top of Jira and
GitHub for tickets, pull requests and conversation. It can also be filtered to give the user insights into only the things he or she needs to see. Marc Verstaen, co-founder and CEO at Anaxi, explained that Anaxi does not have access to its customers’ data; it uses IDs to query the back-end servers and presents the answers to the users. “All the work is done by the Jira server, or GitHub,” he said, noting that method keeps the data secure — especially an organization’s issues. “Issues are something very personal for a company, even more than the source code,” he said. “In source code, you wouldn’t know the next features. If you get access to the issues that Apple is managing, you’d know the problems that will not be fixed and why, you’d have vision of the next release, ones being considered for future, etc. So gating access to issues is much more personal.”
Meanwhile, Atlassian is signaling a strategic shift after adding integrations over time with developer tools that let developers have more autonomy over their work while still giving managers the views into project progress they need. “There is no common way of doing work; there are too many tools,” said Atlassian’s Regan. “The software world needs a protocol for work.” Atlassian sees Jira as that protocol, a bus beneath all the tools in use, Regan said. Atlassian too wants developers to work where they are, saying the company can bring JQL to Microsoft Excel or Google Sheets. “People are working in an IDE, in Git, Sketch, or doing CI/CD and deploy,” he said. “They all want to work in the tools that are best for their job. You need a common protocol underneath, and that’s what we’re doing with Jira.”
ATLASSIAN SHOWCASE
Atlassian eyes future of development
BY DAVID RUBINSTEIN
DevOps has moved from philosophy to practice. Development tools are moving to the cloud, but there are hurdles. The industry needs a benevolent center of gravity, through which data from numerous tools can flow, for analysis and action. These are some of the trends software toolmaker Atlassian is seeing in the development market, and some of the topics it will be discussing at the upcoming Atlassian Summit 2019.
Modern development methodologies have created new ways of working for development teams and broader organizations as a whole, but in many cases tooling has not kept up with this pace of change. Take DevOps for example. Advanced a decade ago as a way to extend the quickening pace of code development into application updates and deployment, it has seen wide adoption in large enterprises where the competitive advantages of faster software releases have been realized.
“What dev and ops had to figure out — this concept of fast iteration, of moving value quickly, working together
across org structures — whole companies are now having to figure out,” explained Sean Regan, head of growth for software teams at Atlassian. “The best example of this is 100 years ago, in the industrial economy, your product was made out of atoms, and it was stored in warehouses and moved along train tracks. Your ability to be agile was limited by the physical forces of the earth, so to speak. Your product today can change as fast as your dev and ops teams can collaborate on the idea and get it to production. So we’ve moved from years or months, to days or weeks. For the C suite, that is a lesson that they’re still trying to learn. How do you run a company that used to be agile at the annual calendar level that can now be agile in a two-week sprint?”
Regan said he’s seeing developers moving code into production, and building the pipelines themselves, yet the culture of shared responsibility remains in play. Tools such as Bitbucket allow for visibility into the code being written and the app being built, into the YAML configuration files, as well as for versioning
the code and config file and sharing with other members of the team. “It’s not the old world of developers doing things in a silo; they’re actually working with ops and IT to help build these pipelines in a more stable way,” he said. But Regan believes pipelines for CI/CD do not stand on their own. He believes pipelines are an extension of the code repository, which is why Atlassian developed Bitbucket Pipelines as a feature of the code repository and not as a separate product.
Moving to the cloud
Software tool providers want to make it easy for developers to create cloud-native applications, and are doing so by putting their tools into the cloud. This, of course, has the benefits of application scaling, access to services and the latest technologies, as well as worldwide reach and ease of deployments. But Regan described a couple of factors that are working against this movement, including issues with security and performance. Many organizations, he said, still have questions about their
continued on page 20 >
On W e & Mo b bile
Visit Us bit.ly/Testim-Free-Trial
019_SDT022.qxp_Layout 1 3/25/19 12:15 PM Page 19
Testim enables intelligent testing
Agile and DevOps are all about managing change through the continuous improvements of people, processes and technologies. Their common goal is to deliver high quality software as fast as possible. However, software testing processes slow release cycles. In fact, according to the Capgemini 2017-18 World Quality Report, Agile adoption is now widespread across 96% of organizations, but only 16% of them are automating test activities. Given so many technological advances over the years, why is testing still so challenging?
“Today software development is much more complex,” said Ron Shoshani, VP of R&amp;D for Testim. “The pressure from trying to keep up with customer demands while navigating competition, coupled with the need to support so many different types of operating systems, browsers, devices and security measures creates a web of challenges for software teams.”
That’s just the front end. When you consider the back end and infrastructure including cloud, containers, microservices and integrations, a lot of moving parts are required to maintain app performance, end user experience and cross-platform compatibility. As teams push app changes more frequently, these factors often cause tests to break.
Testim keeps pace with development
Testim.io leverages AI for the authoring, execution and maintenance of automated tests. Its flexibility allows record and playback or the use of custom code to fully control the tests’ actions, so anyone on the greater project team can get involved in the testing process. From developers and testers to product or project managers, analysts and even marketers and salespeople, Testim helps teams make quality an organizational initiative.
The average Testim customer increases test coverage by 30% in three months. For example, leading account-based marketing provider Engagio created 40 tests in a one-day blitz and increased test coverage by more than 40% in three months, while reducing maintenance by 90%. Globally, the number one B2B marketplace, used by millions of people and transacting billions of dollars every year, was able to move from manual to automated testing in under five months, growing its repository to more than 40,000 test executions.

Support for software teams of any size at any stage
“Agile and DevOps is a journey which will continue to evolve,” Shoshani said. “Some teams are much further along in their journey and have very mature automated processes and tests. We understand that and built Testim in a way that will support any size team and development methodology regardless of where they are in their automation efforts, giving them the flexibility to change as their processes evolve over time.”
Testim makes automating, running and capturing tests fast and easy. The branching capabilities allow users to change the tests, then merge them back to the master branch. The automated tests are so stable, they won’t break even when an application changes. The AI analyzes thousands of parameters for each element, weights each based on their overall reliability and ranks them accordingly. Essentially, with each execution, users are teaching the AI how the software works and what breaks it, so Testim can isolate the issue. Integrating the platform into the CI/CD pipeline is as simple as invoking a command. Additionally, users can create rich, detailed bug reports, including video, automated tests and step-by-step, annotated screenshots so QA and developers can communicate faster, more effectively and more efficiently. Rather than arguing about whether a bug is reproducible, a developer can click on an automated test to automatically reproduce the bug in a browser. Once the bug is fixed, the developer can add it to the regression test suite to ensure everything works well, even after updates.
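Testim has not published its model, but the weighted-attribute idea can be illustrated with a toy sketch: score each candidate element against the attributes recorded at authoring time, weight the historically reliable attributes more heavily, and pick the best match even when some attributes have changed. The attribute names and weights below are invented for illustration.

import java.util.List;
import java.util.Map;

// Toy illustration of weighted element matching (not Testim's algorithm):
// each attribute carries a weight reflecting how reliable it has proven to be,
// and the candidate element with the highest total score is chosen.
public class ElementMatcher {
    private static final Map<String, Double> WEIGHTS = Map.of(
            "id", 0.9,          // ids tend to be stable
            "text", 0.6,
            "cssClass", 0.3,    // styling classes change often
            "tag", 0.2);

    static double score(Map<String, String> recorded, Map<String, String> candidate) {
        double total = 0;
        for (Map.Entry<String, Double> w : WEIGHTS.entrySet()) {
            String expected = recorded.get(w.getKey());
            if (expected != null && expected.equals(candidate.get(w.getKey()))) {
                total += w.getValue();
            }
        }
        return total;
    }

    static Map<String, String> bestMatch(Map<String, String> recorded,
                                         List<Map<String, String>> candidates) {
        return candidates.stream()
                .max((a, b) -> Double.compare(score(recorded, a), score(recorded, b)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        Map<String, String> recorded = Map.of(
                "id", "checkout", "text", "Buy now", "cssClass", "btn-old", "tag", "button");
        List<Map<String, String>> onPage = List.of(
                Map.of("id", "checkout", "text", "Buy now", "cssClass", "btn-new", "tag", "button"),
                Map.of("id", "promo", "text", "Learn more", "cssClass", "btn-old", "tag", "a"));
        // The first candidate still wins despite the changed CSS class.
        System.out.println(bestMatch(recorded, onPage));
    }
}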
The Future of Test Automation
Eventually, Testim’s AI capabilities will be built out further to enable autonomous testing. Similar to Google Analytics, a snippet of code can be placed on the application to automatically collect all the user scenarios in real time, create tests in the background and run them, making real-time continuous testing a reality. The data captured will also reveal what area of an application is most actively used so the tests can be prioritized for supplemental manual, exploratory and ad hoc testing. Edge case testing will be practical and affordable.
Try Testim for free. Learn more at www.testim.io.
< continued from page 17
code being safe in the cloud, even though they use Gmail or Office 365, or Salesforce, which Regan said “has the most intimate details of their customer relationships.” He went on to talk about blockers, or the perceived inhibitors to cloud computing. “One of the blockers is, is the cloud safe for our source code? Bitbucket Cloud is now the only major Git repository that offers SOC 2 Type 2 compliance,” Regan said. “I know that sounds like a small, esoteric nerdy topic, but SOC 2 Type 2 gets into the details of whether or not the provider is delivering the availability, security, the privacy, the uptime and the confidentiality that meets the standards.”
Performance is another blocker to the cloud. When an application sits on a server in a close-by server room or even under a developer’s desk, performance is snappy. Yet when an application needs to call a service running on a server halfway around the world, performance can degrade. That is why Atlassian has invested in four new data centers around the world. It also is transforming its Bitbucket and Jira clouds into single-page apps. “These apps update the contents on the page without refreshing the whole page,” he pointed out. “We’re doing the exact same thing for Bitbucket Cloud and Jira Software Cloud, so that you get that same snappy feel.”
Software development has gone from one team shipping once or twice a year to multiple teams shipping every few weeks or months. The advent of tools as cloud services has made it easier for developers to work in the IDE they want, and for others involved with ideation, design and ultimately deployment to work in the tools they prefer. This also adds complexity around managing the process. “I think the industry has to ask itself a question,” Regan said, “which is, now that there’s been this explosion of SaaS apps, and everyone can have the tool that they want, how do we keep it all together? How do we do all this work independently, with
these semi-autonomous teams, and not spiral out of control?”
Atlassian sees this problem as a way to frame the company’s promotion of Jira as a center of gravity for the software industry. Regan said, “I think the idea of a center of gravity is interesting, because in the past, to get coordination, you sort of had to lock things down. Jira brings a loose coordination. It’s kind of three things: It’s open, it’s structured and it’s everywhere. It’s open enough that you can use almost any tool you want and it’ll integrate with Jira. You can share information back and forth, share the Jira issue keys. It’s also structured enough that all of these teams that work independently can still work together.”
About a month ago, Atlassian started to share the idea of Jira Everywhere. People, and work, are everywhere, in different geographic zones and using different tools. Some of the work is in an email inbox, or in the InVision design tool, or in an IDE or analytics tool. With Jira in each of those tools, these people could be reminded of work they have to do, or start work that other people are giving them, and it all can be tracked through Jira. In fact, Atlassian kicked this off by putting the Jira UI in Bitbucket. Now, the company is doing the same thing with InVision and Microsoft VS Code. “Think about VS Code; that’s where developers spend their time,” Regan said. “What do developers hate doing? Developers hate bouncing out of their code to update a project management tool. It drives them bonkers.
“What we’ve done is, the developer can see their issues right in VS Code,” he continued. “They can assign the issues, they can comment, they can change them to in-progress, and it’s automatically reflected back in Jira. No tab, no new browser, no log-in, no email to update... it just happens. We’ve automated some of the most tedious aspects of incident management with this new idea of Jira as this center of gravity.”
VS Code is a particularly good example. Imagine a developer doing work in his IDE without telling anyone, or changing code and moving things into production. If they don’t jump out of their IDE to update Jira, there’s no way to track those changes should an incident occur. “That’s when the stuff hits the fan,” Regan said, “and there is no way for your SREs or incident management teams to trace back cleanly to understand who made the change, who pushed the code, who was working on this pull request, right? It’s kind of unclear, and so by making Jira part of that IDE experience we’re going to get a better-quality incident response. A Jira issue key is just like a UPS tracking code. You can look at a Jira issue key and see who did what work where and in what tool when.”
The idea of cloud-based tools is a good one because those tools are changing so fast. Three years ago, people couldn’t stop talking about Puppet or Chef, Regan said, and now people can’t stop talking about Kubernetes. “The pace of change is insane,” he said. “And this kind of underpins our strategy at a macro level. You think about there’s a left and a right. On the left, you have bring your own tools, design your own way of working, do whatever you want, just build a great toolchain to build great software. On the other side, you have some vendors saying I’m going to build an all-in-one. I’m going to build every development tool you need and I’m going to sell it to you in one package. I think that’s completely insane given how fast software development is changing, and how fast the way people work and the process and the flow changes. Atlassian believes there’s a third way that’s not complete entropy and it’s not a guy in a suit selling you a suite. That’s a very 1990s approach. There’s a middle ground which is, we’ll give you a benevolent center of gravity that can connect your team and share data and share information, but it’s loose enough that you can go and design your own workflow; you can design your own toolchain, you can design your own way of working. Atlassian runs this third option that didn’t exist back in the era of suites.”
Atlassian, he said, can be friends with everyone.
The tool ecosystem l AgileCraft, recently acquired by Atlassian, delivers the most comprehensive software solution available for scaling agile to the enterprise. The AgileCraft platform combines sophisticated planning, analysis, forecasting and visualization with robust, multi-level collaboration and management. Designed to be open, the AgileCraft platform compliments and extends existing agile tools, methods and processes and can be deployed through the cloud or on-premises l AppDynamics: Our intelligent, highly efficient APM monitors every line of code, and immediately provides the relevant information our customers need to quickly resolve issues, make user experience improvements, and ensure that apps always meet employee or customer performance expectations. l Appfire: Our portfolio includes dozens of favorites like Create on Transition, Advanced Tables, and the powerful Command Line Interface (CLI) family of tools from Bob Swift — as well as administrator favorites like Delegated Project Admin Pro, Delegated Project Creator, and the Announcer series from Wittified. Our latest cloud release is Smart Queues for Jira Service Desk. l ConnectALL, an Orasi software company, is dedicated to helping companies achieve higher levels of agility and velocity. ConnectALL’s Integration Platform helps with achieving effective Value Stream Management by connecting the applications used to collaborate, drive decisions and manage artifacts used during the software delivery process, like ALM, Agile and DevOps. l Datadog is a SaaS-based monitoring and analytics platform for complex cloud environments. With 250+ integrations and advanced features like distributed tracing and log management, Datadog provides deep visibility into the health and performance of your applications and infrastructure. l draw.io supports the creation of and collaboration on professional diagrams directly within Confluence pages. From mind-mapping for executive planning, modeling relationships in a business model, designing layouts for software interfaces, to creating process diagrams to optimize business procedures, and map-
ping factory layouts for manufacturing: draw.io offers a powerful and easy-to-use solution that everyone in an organization can take advantage of. draw.io has been available in the Atlassian Marketplace since 2012.
FEATURED l Testim.io uses machine learning to speed the authoring, execution, and maintenance of automated tests. A developer can author a test case in minutes and execute them on multiple web and mobile platforms. We learn from every execution, self-improving the stability of test cases, resulting in a test suite that doesn’t break on every code change. We analyze hundreds of attributes in realtime to identify each element vs. static locators. Little effort, if any, is then required to maintain these test cases yet they are stable and trustworthy. l Gliffy offers two solutions that help teams improve communication and collaboration by using visuals to explain complex concepts and help teams prioritize tasks and build better products. Our diagramming and visual project management software kindles creative thinking and clear communication — two crucial ingredients for innovation. Gliffy Diagram provides users a tool to add diagrams directly to a Confluence workspace or Jira ticket and leverage visuals to provide more context into a project. In addition, the newly released Gliffy Project for Jira enables teams to plan releases and development workloads by combining visual workflows like whiteboards, flowcharts, and wireframes, with Jira. l Jama Software provides the leading platform for requirements, risk and test management. With the Jama Product Development Platform, teams building complex products, systems and software improve cycle times, increase quality, reduce rework and minimize effort proving compliance. l Optimizely is the world's leader in customer experience optimization, allowing businesses to dramatically drive up the value of their digital products, commerce and campaigns through its best in class experimentation software platform. By replacing
digital guesswork with evidence-based results, Optimizely enables product and marketing professionals to accelerate innovation, lower the risk of new features, and drive up the return on investment from digital by up to 10X. l Perforce: From requirements to testing to version control, our DevOps-proven solutions let teams use the Atlassian tools they love, while adding more productivity, visibility, traceability, and security at each phase of the product development lifecycle. l SmartBear is behind the software that empowers developers, testers, and operations engineers. More than 6 million people use our tools to build, test, and monitor great software, faster. l SmartDraw works to expand the ways in which people communicate so that we can clearly understand each other, make informed decisions, and work together to improve our businesses and the world. We accomplish this by creating software and services that make it possible for people to capture and present information as pictures, while being a pleasure to use. l SonarSource provides world-class solutions for continuous code quality. Our open-source and commercial products (SonarLint, SonarCloud, SonarQube) help developers and organizations of all sizes to manage the quality & security of their code, and ultimately deliver better software. SonarSource solutions support development in 25+ programming languages such as Java, C#, JavaScript, TypeScript, C/C++, COBOL and many more. l Synopsys helps development teams build secure, high-quality software, minimizing risks while maximizing speed and productivity. Synopsys, a recognized leader in static analysis, software composition analysis, and application security testing, is uniquely positioned to apply best practices across proprietary code, open source, and the runtime environment. l Tricentis is recognized for reinventing software testing for DevOps. Through agile test management and advanced test automation, we provide automated insight into the business risks of your software releases — transforming testing from a roadblock to a catalyst for innovation. Customers rely on Tricentis to achieve and sustain test automation rates of over 90 percent — increasing risk coverage while accelerating testing to keep pace with Agile and DevOps. z
Why Continuous Testing Is So Important
BY LISA MORGAN
Organizations continue to modernize their software development and delivery practices to minimize the impact of business disruption and stay competitive. Many of them have adopted continuous integration (CI) and continuous delivery (CD), but continuous testing (CT) tends to be missing. When CT is absent, software delivery speed and code quality suffer. In short, CT is the missing link required to achieve an end-to-end continuous process.
“Most companies say they are Agile or want to be Agile, so they’re doing development at the speed of whatever Agile practice they’re using, but QA [gets] in the way,” said Manish Mathuria, CTO and founder of digital strategy and services company Infostretch. “If you want to test within the sprint’s boundary, certifying it and managing the business around it, continuous testing is the only way to go.” Part of the problem has to do with the process-related designations the software industry has chosen, according to Nancy Kastl, executive director of testing at digital transformation agency SPR.
THE FIRST OF THREE PARTS “DevOps should have been called DevTestOps [and] CI/CD should have been called CI/CT/CD,” said Kastl. “In order to achieve that accelerated theme, you need to do continuous testing in parallel with coding. Otherwise, you’re going to be deploying poor quality software faster.” Although many types of testing have shifted left, CT has not yet become an integral part of an end-toend, continuous process. When CT is added to CI and CD, companies see improvements in speed and quality.
Automation is key Test automation is necessary for CT; however, the two are not synonymous. In fact, organizations should update their automated testing strategy to accommodate the end-to-end nature of continuous processes. “Automated testing is a part of the CI/CD pipeline,” said Vishnu Nallani Chekravarthula, VP and head of innovation at software development and quality assurance consultancy Qentelli. “All the quality check gates should be automated to ensure there is no manual intervention for code promotion between [the] different stages.” That’s not to say that manual testing is dead. The critical question is whether manual testing or automated testing is more efficient based on the situation. “You have to do a cost/benefit analysis [because] the automation has to pay for itself versus ongoing manual execution of the script,” said Neil PriceJones, president of software testing and QA consul-
tancy NVP Testing. “The best thing I heard was someone who said he could automate a test in twice the time it would take you to write it.” CT is not about automating everything all the time because that’s impractical. “Automated testing is often misunderstood as CT, but the key difference is that automated testing is not in-sprint most times [and] CT is always done in-sprint,” Qentelli’s Chekravarthula said. Deciding what to automate and what not to automate depends on what’s critical and what’s not. Mush Honda, vice president of testing at KMS Technology, considers the business priority issues first. “From a strategy perspective, you have to consider the ROI,” said Honda. “[During a] project I worked on, we realized that one area of the system was used only two percent of the total time so a business decision was made that only three cases in that two percent can be the descriptors, so we automated those.” In addition to doing a cost/benefit analysis, it’s also important to consider the risks, such as the business impact if a given functionality or capability were to fail. “The whole philosophy between CI and CD is you want to have the code coming from multiple developers that will get integrated. You want to make sure that code works itself as well as its interdependencies,” said SPR’s Kastl.
Tests that should be included
CT involves shifting many types of tests left, including integration, API, performance and security, in addition to unit tests. All of those tests also apply to microservices. “For modern technology architectures such as microservices, automated API testing and contracts testing are important to ensure that services are able to communicate with each other in a much quicker way than compared to integration tests,” said Chekravarthula. “At various stages in the CI/CD pipeline, different types of tests are executed to ensure the code passes the quality check gates.” Importantly, CT occurs throughout the entire SDLC as orchestrated by a test strategy and tools that facilitate the progression of code. “The tests executed should ensure that the features that are part of a particular commit/release are covered,” said Chekravarthula. “Apart from the in-sprint tests, regression tests and non-functional tests should be executed to ensure that the application that goes into production does not cause failures. At each stage of the CI/CD pipeline, the quality check gates should also reach a specified pass percentage threshold for the build to qualify for promotion to the next stage.”
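As an illustration of the kind of quality check gate Chekravarthula describes — a pass-percentage threshold a build must clear before promotion — here is a minimal sketch in Python. It is an assumption-laden example, not a prescribed implementation: it assumes a JUnit-style XML report, and the file name and the 90 percent default are illustrative only.

    import sys
    import xml.etree.ElementTree as ET

    def pass_rate(junit_xml_path: str) -> float:
        """Compute the pass percentage from a JUnit-style XML report."""
        root = ET.parse(junit_xml_path).getroot()
        suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
        total = failed = 0
        for suite in suites:
            total += int(suite.get("tests", 0))
            failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        return 100.0 if total == 0 else 100.0 * (total - failed) / total

    if __name__ == "__main__":
        report = sys.argv[1] if len(sys.argv) > 1 else "results.xml"
        threshold = float(sys.argv[2]) if len(sys.argv) > 2 else 90.0
        rate = pass_rate(report)
        print(f"pass rate {rate:.1f}% (threshold {threshold}%)")
        # A non-zero exit code stops the pipeline, so the build is not promoted.
        sys.exit(0 if rate >= threshold else 1)

In practice the same idea is usually expressed as a gate inside the CI/CD tool itself; the point is simply that promotion is blocked automatically rather than by someone eyeballing a report.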
Since every nuance of a microservices application is decomposed into a microservice, each can be tested at the unit level and certified there and then in integration tests created above them. “Functional, integration, end-to-end, security and performance tests [are] incorporated in your continuous testing plan,” said Infostretch’s Mathuria. “In addition, if your microservices-based architecture is to be deployed in a public or private cloud, that brings different or unique nuances of testing for identity management, testing for security. Your static testing could be different, and your deployment testing is definitely different so those are the new aspects that microservices-based architecture has to reflect.”
KMS Technology’s Honda breaks testing into two categories — pre-production and production. In pre-production, he emphasizes contracts in testing, which includes looking at all the API documentation for the various APIs, verifying that the APIs perform as advertised, and checking to see if there is any linguistic contract behavior within the documentation or otherwise that the microservices leverage. He also considers the different active end points of all the APIs, including their security aspects and how they perform. Then, he considers how to get the testing processes and the automated test scripts in the build and deployment processes facilitated by the tool he’s using. Other pre-production tests include how an API behaves when the data has mutated and how APIs behave from a performance perspective alone and when interacting with other APIs. He also does considerable component integration testing.
“I call out pre-prod and prod buckets specifically instead of calling out dev, QA, staging, UAT, all of those because of the nature of continuous delivery and deployment,” said Honda. “It’s so much simpler to keep them at these two high levels.”
His test automation emphasizes business-critical workflows including what would be chaotic or catastrophic if those functions didn’t work well in the application (which ties in with performance and load testing). “A lot of times what happens is performance and load are kept almost in their own silo, even with microservices and things of that nature. I would encourage the inclusion of load and performance testing to be done just like you do functional validation,” said Honda.
On the production side, he stressed the need for testers to have access to data such as for monitoring and profiling. “With the shift in the role that testing plays, testers and the team in general should get access to that information [because it] allows us to align with the true business cases that are being applied to the application when it’s live,” said Honda. “How or what areas of the system are being hit based on a particular event in the particular domain or socially? [That] also gives us the ability to get a good understanding of the user profile and what a typically useful style of architecture is. Using all of that data is really good for test scenarios and [for setting] our test strategy.”
He also recommends exploratory testing in production such as A/B testing and canary deployments to ensure the code is stable.
“The key thing that makes all of this testing possible is definitely test data,” said Honda. “When you test with APIs, the three key sorts of test data that I keep in mind are how do you successfully leverage stubs when there’s almost canned responses coming back with more logic necessarily built into those? How do you leverage fakes whereby you are simulating APIs and leveraging anything that’s exposed by the owner of any dependent services? Also, creating mocks and virtualization where we need to make sure that any of the mocks are invoked in a certain manner [which allows] you to focus on component interaction between services
continued on page 28 >
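A small sketch may help make Honda’s distinction between stubs, fakes and mocks concrete. The example below is an illustration rather than anything from KMS Technology: it uses Python’s standard unittest.mock to stand in for a dependent pricing service, and the service and function names are hypothetical.

    from unittest.mock import Mock

    def checkout_total(cart: list, pricing_service) -> float:
        """Component under test: totals a cart using a dependent pricing service."""
        return sum(pricing_service.price_for(item) for item in cart)

    def test_checkout_total_with_mocked_pricing_service():
        # The mock plays the dependent service: canned responses plus call verification.
        pricing_service = Mock()
        pricing_service.price_for.side_effect = lambda item: {"book": 12.0, "pen": 2.5}[item]

        assert checkout_total(["book", "pen"], pricing_service) == 14.5

        # Mock-style assertions: how the dependency was invoked is part of the contract.
        assert pricing_service.price_for.call_count == 2
        pricing_service.price_for.assert_any_call("book")

A stub would stop at the canned responses; the assertions at the end about how the dependency was invoked are what let the test focus on component interaction, which is the point Honda makes about services.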
Century-Old Company Prioritizes CT
Companies in just about every industry are being disrupted by digital natives. To stay competitive, old companies must adapt, which can be a painful transformation. Moving to Agile is tough enough. Now, more businesses have adopted DevOps, continuous integration (CI) and continuous deployment (CD), albeit at different speeds. Software testing at Lincoln Financial Group is progressive by design because software delivery speed and quality differentiate the company, so much so that Michelle DeCarlo, SVP of IT Enterprise Delivery, leads the company’s technical transformational initiatives. “DevOps, cloud, continuous testing all feed into our larger journey which is the digital transformation effort,” said DeCarlo. “At the core, it’s changing the way we work. So, when we talk about continuous testing, it’s how do we get faster, leaner, and eliminate friction.” Like many mature companies, Lincoln Financial had waterfall-type practices that involved a series of hand-offs. In addition, a lot of the testing was manual. The company is now using continuous processes, including CT to get to market faster. In fact, DeCarlo said that continuous testing has improved Lincoln Financial’s delivery speed by 30%. “Continuous testing is one of those levers that we view as being critically important and fundamental to our strategy,” she said. “Testing used to be viewed as a cost because we were always waiting. Now it’s what differentiates us and helps us get to market faster.” Of course, getting there wasn’t all about the CT process. In addition to changing the way people work, talent had to be upskilled and new tools were required. “There are a few things we do differently with CT compared to how it used to work. One of those things is recognizing that the advanced engineering practices that are important to learn, like test-driven development and service virtualization, are table stakes now,” said DeCarlo. “These skills allow us to accelerate delivery cycles and the test engineers are embedded in the delivery cycle.” Lincoln Financial automates many types of tests, including performance, security, regression, feature and smoke testing. “When you drop code, you need to determine in seconds whether that code is valuable. If it fails, it stops your production cycle,” said DeCarlo. “You have to think about it as one continuous flow
because if you have a weak link, it halts your production and you lose hours. You cannot advance your practice unless you have automation.” The company is also taking advantage of microservices and containers, which enable a flexible and scalable application architecture. “The ability to decompose your code base and test at the most granular level is how firms get an advantage,” said DeCarlo. “When you test monolithic systems in totality it takes a long time, so we now decompose code and features, and test at the smallest level possible. Minimum viable product is part of our DNA now.”
API testing is critical “The focus on service testing is huge,” said DeCarlo. “In the past, people were worried about testing the front-end, which is still helpful and necessary, but the majority of your effort should be focused on the service level because that’s where you need to make sure everything works.”
Data drives insights
Lincoln Financial also makes a point of using encrypted test data that includes the scenarios it wants to test so it can very quickly test multiple parameters and scenarios against whatever use cases apply. “It’s critical to ensure that testers have all the data needed as opposed to waiting on other groups,” said DeCarlo. “We have figured out how to get the data that might be upstream, downstream, in the middle, or in our service architecture. Our strategy is to make data [available] on demand, which will allow us to respond at the same pace as other firms.”
Continuous testing means continuous learning Lincoln Financial has a mantra when it comes to software: build, measure, learn. The philosophy maps to developing the software, getting the data ready, testing it, going through the continuous delivery process, and measuring it. “You need to have performance dashboards that are available to all your stakeholders, whether it’s a developer, tester, or product owner, and then you have to make modifications and improvements because what worked yesterday might work today but likely will not work tomorrow,” said DeCarlo. “You have to embed this continuous improvement mindset and not look at it as failures, but learning.” z
< continued from page 25
or microservices.” In addition to API testing, some testing can be done via APIs, which SPR’s Kastl leverages. “If I test a service that is there to add a new customer into a database, I can test that the new customer can be added to the service without having a UI screen to collect all the data and add that customer,” said Kastl. “The API services-level testing winds up being much more efficient and with automation, it’s a lot more stable [because] the API services are not going to change as quickly as your UI is changing.”
Jim Scheibmeir, associate principal analyst at Gartner, underscored the need for authentication and entitlements. “[Those are] really important when we talk about services: who can get to the data, how much they can get to, is there a threshold to usage or availability,” said Scheibmeir. “Performance is also key, and because this is a complex environment, I want to test integrations, even degrading them, to understand how my composite application works when dependencies are down, or how fast my self-healing infrastructure really is.”

Ensuring the right combination of tests
When deciding which tests to run, KMS Technology’s Honda considers three things: business priority, the data, and test execution time.
Business priority triages test cases based on the business objective. Honda recommends getting business buy-in and developer buy-in when explaining which tests will be included as part of the automation effort, since the business may have thought of something the testers didn’t consider.
Second, the data collected by monitoring tools and other capabilities of the operations team provide insight into application usage, the target, production defects, how the production defects are being classified, how critical they are from a business severity perspective, and the typical workflows that are being used. He also pays attention to execution speed since dependencies can result in unwanted delays.
“There should be some agreed-upon execution SLA and some agreed-upon independent factor that says our test cases are not necessarily depending on one another. It increases the concept of good scalability as your system matures,” said Honda. “One of the routine mistakes I’ve seen a lot of teams make is saying I’m going to execute six test cases, all of which are interdependent. [For example,] I can’t execute script number three unless tests one and two have been executed. [Test independence] becomes critical as your system matures and you have to execute 500 or 50,000 test cases in the same amount of time.”
As his teams write tests, they include metadata such as the importance of the test, its priority, the associated user story, the associated release, and what specific modules, component, or code file the test is executed against. That way, using the metadata, it is possible to choose tests based on goals.
“The type of test and the test’s purpose both have to be incorporated in your decision-making about what tests to run,” said Honda. “You don’t write your tests thinking, ‘I want to test this,’ you write your test and incorporate a lot of metadata in it, then you can use the tests for a specific goal like putting quality into your CD process or doing a sanity test so you can promote it to the next stage. Then you can mix and match the metadata to arrive at the right suite of tests at runtime without hard-coding that suite.”
Performance and security testing should be an integral part of CT given user experience expectations and the growing need for code-related risk management.
“What goes into the right mix of CT tests are going to be those tests that are going to test that functionality you’re about ready to deploy, as well as critical business functionality that could be impacted by your change plus some performance and security tests,” said SPR’s Kastl. “Taking a risk-based approach is what we do normally in testing and it’s even more important when it comes to continuous testing.”
Past application issues are also an indicator of which tests should be prioritized. While test coverage is always important, it’s not just about test coverage percentages per se. Acceptable test coverage depends on risk, value, cost effectiveness and the impact on a problem-prone area.
Gartner’s Scheibmeir said another way to ensure the right mix of tests is being used is to benchmark against oneself over time, measuring such things as lead time, Net Promoter Score, or the business value of software and releases. z
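Two of the practices described above — Kastl’s service-level check that a new customer can be added through the API alone, and Honda’s metadata-driven selection of tests at runtime — can be sketched together in a few lines. The fragment below is illustrative only, not code from SPR or KMS Technology; the endpoint, payload and marker names are assumptions, and custom pytest markers like these would normally be registered in pytest.ini.

    import pytest
    import requests

    BASE_URL = "https://api.example.internal"  # hypothetical service under test

    # Metadata lives on the test itself, so a pipeline stage can select by goal later.
    @pytest.mark.critical
    @pytest.mark.customer_service
    @pytest.mark.user_story("CRM-1042")
    def test_new_customer_can_be_added_via_api():
        """Service-level check: a customer is created without driving any UI."""
        payload = {"name": "Pat Example", "email": "pat@example.com"}
        response = requests.post(f"{BASE_URL}/customers", json=payload, timeout=5)
        assert response.status_code == 201
        assert response.json()["email"] == payload["email"]

A sanity gate before promotion might then run only the critical subset (pytest -m critical), while a nightly stage runs everything tagged for a component (pytest -m customer_service) — mixing and matching the metadata rather than hard-coding a suite.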
AI and ML is the future of RPA — but don’t forget the people
BY IAN C. SCHAFER
Over the past year, the adoption of robotic process automation — essentially advanced macros or “robotic workers” meant to automate the most mundane, repetitive and time-costly tasks — has seen major growth. As the technology matures alongside machine learning and artificial intelligence, Forrester chief analyst Craig Le Clair said that the most promising future for knowledge workers is one where the ease of setup of RPA and the raw power of machine learning combine to create more efficient, more intelligent robotic workers. Though Le Clair said that analyst predictions are often “bullish,” growth in the RPA space has greatly outstripped Forrester’s predictions from even a year ago. Forrester has been tracking the space for around three years and has spotted some trends that have contributed to the broader adoption of RPA. “What we’ve seen is the three top companies are now valued at probably in excess of $10 billion, which is pretty interesting given that most of these platforms were unheard of companies doing $10 million a year in places like Romania, the UK and India,” Le Clair said. “The category has been around for
a while, but it was associated with very basic automations that leverage old technologies like screen scraping and so forth. And two things changed to really accelerate it, in my view.” Through those first three years, Le Clair said the first of these accelerants was the sheer convenience of the RPA model which, unlike other steps that a company might make towards a digital transformation, requires no reconfiguration of core systems or processes. Joe Blue, director of global data science at data company MapR, shared the conclusion that the pure convenience is key. He described his experience helping customers set up their
own RPA processes. “I think one of the keys for adoption is you don’t want to burden people with a lot of new tools and allow their environments to learn,” Blue said. “In every case, what we try to do is, whatever UI they’re working in, maybe we create a widget, maybe there’s a dashboard that we can add a panel to that that would contain the information that’s needed… Adding to their current UI or adding a step above that routes things to the right person so that they don’t even see 80 percent of the cases that they would have seen because they were automatically delegated and never got there. So it really depends on the process.” Alongside the success brought by that freedom to place RPA into essentially any workflow and see results, Le Clair said that a new system of central oversight was integral in RPA’s expansion. “In the early days of the category, bots just ran autonomously on desktops,” Le Clair said. “But the notion of having central management, a central repository where you store the automations for which you could build control tower capability and monitor the activity of the bots and build centralized analytics to dispatch them in an intelligent fashion, that really created more of an enterprise capability. That was sort of the thing that got the category moving and started to experience very high ROI, and an ROI that could be readily understood. You could build an automation, it might cost you $40,000, it might cost you $15,000 a year to maintain a bot, but it was lifting multiple [full time employees] out of an operation. So when you compared it to even the cost of an outsourced capability and a labor arbitrage offshore, the bot was economical.” But Le Clair attributes the past year’s surprising growth to a very unsurprising factor — the confluence of RPA with AI and ML. Before their integration, RPAs were entirely user-specified to repeat a task as it was performed by the user. “There’s no learning capability, there’s no AI in the bots that you build,” Le Clair said. “You’re simply mimicking the keystrokes of the human — the
mouse clicks, the movements. You’re opening up five or six apps. You’re swiveling between them. The bot’s doing that work that the human is doing. But there’s a perceived limitation in that the set of processes that it was good for was relatively narrow because between programming a lot of decisions in the bots, practicing too many applications and handling exceptions, you had to have very structured, repetitive processes. So a lot of that activity, that could be robotized. But when you start to combine it with machine learning, you start to be able to handle exceptions that aren’t on this highly repetitive, happy path.” The eventual outcome of this, Le Clair says, is a scenario where every worker has their own robotic assistant to help them out.
“Now, that’s in the future, but those are the kind of messages and direction that’s been driving a lot of value in the category,” Le Clair said. The present of AI and ML integration in RPA has only scratched the surface, Jason Trent, senior director of product strategy at process automation company K2, said. “I think if you were to grade this on a maturity scale, at the most mature, organizations are going to start to apply the new learning innovations in things like AI and deep learning to say ‘We’ve gotten and we’ve analyzed all of the work that the bots have done, and we had control groups and we figured out the variances and we’re saving all of this time and money on the bot side,’ ” Trent said. continued on page 32 >
What about the humans? The professionals responsible for pushing RPA forward aren’t blind to the fact that there are workers being displaced. Right now, Forrester chief analyst Craig Le Clair said, that there are around 26 million employees whose roles lie in that perfect ‘sweet spot’ for RPA. “One of the things [Forrester has] done is develop a set of 12 generic work personas. And we’ve mapped all of the occupations, the 980 occupations into the 12. And then looked at the progression of automation and the number of automation dividends and deficits and the net result in each category. And it turns out that the most affected personas are what we call ‘cubicle workers,’ and those are the jobs that are the sweet spot for RPA. So those are back-office workers in finance and accounting and HR and in supply chain and in line-of-business operations groups and telecommunications.” Because many businesses don’t have a good view of the future of work automation, Le Clair says that Forrester developed those 12 personas to make it easier for them to break down which roles will inevitably be trained into RPA models. Over the next 10 years, Le Clair says that it’s around 60 to 70 percent of those 26 million that Forrester expects to find themselves out of a position. But many of them are preparing for it. “They are being asked to train the bots in some cases, so they kind of see that,“ Le Clair said. “So some set of those workers will have what we call constructive ambition and will be able to move on to more human or customer facing, maybe more analytic tasks and roles, but many of them will not and they will at some point, of course, enter the talent economy, enter the gig economy, or end up elsewhere.”
< continued from page 31
In the meantime, as AI and ML as part of an RPA stack grows in complexity, both Forrester’s Le Clair and Trent say that companies should learn to treat robotic workers in a similar way to human workers, with proper checks, oversight and managerial accountability, very much in line with Forrester’s assessment of the value of centralized management in RPA. “The models are inherently built by us,” Trent said. “We haven’t quite gotten to the world where models are building models as effectively. And we see that in all kinds of different allocations of technology, but at the end of the day, we have to use technology to enhance and benefit, not necessarily to take over. In the AI world, they have the big concept called ‘human in the loop,’ and it’s the same thing for RPA. Those robots still need to have managers and you still need to have quality control and checks in place.”
Forrester’s Le Clair elaborated on the human manager’s role in relation to their robotic workers. “You want to know who developed the bot and when,” Le Clair said. “You want to understand the bot’s manager. And that manager is accountable for the credentials that the bot has in the same way that a human is responsible to a manager for any nefarious behavior. That’s where the analogy to a human makes a lot of sense, because these are bots that are replacing pieces of what humans are doing. They’re replacing human elements, and they have a lot of the same attendant risks that humans have, and they introduce new ones.”
This transition is also already being preempted in the hiring process. While menial tasks are cleared from workers’ schedules, leaving existing employees with more free time, it leaves hiring professionals with something new to think about when looking at resumes. MapR’s Blue said that in many cases, the automation of some tasks has had the effect of increasing specialization across the automation space.
“When I was recruiting to build a team, I was also getting a lot of unsolicited emails from recruiters directly to me and some of those would say MLP, machine learning engineer, deep learning, data scientist, so even within the field of the people creating the RPAs, there seems to be specialization of tasks, so three years ago, it would have just been ‘data scientist,’ ‘commutative scientist,’ but now we’re starting to get into that specialization of ‘yeah, I need a computer imaging data scientist, I need an autonomous car data scientist,’ so we are starting to get to that next level of specialization, even among people that are trying to affect change with RPA,” said Blue.
And while changes will be felt throughout a number of industries (the medical, retail and financial sectors have been quite receptive to RPA, all sources agreed) as automation progresses, Forrester’s Le Clair reiterated that there’s still quite a ways to go before the bots can be left to their own devices. In “The Forrester Wave: Robotic Process Automation, Q2 2018” report, Le Clair and his analysis team broke down all of the major vendors in the RPA space, and what they were looking for most in a successful RPA product was the proper integration of the RPA with aforementioned human oversight.
“When you build these bots and eliminate the people, you’re also eliminating a lot of process knowledge,” Le Clair said. “So there’s a particular focus on understanding the nuances of the process and documenting that in a way that’s not documented in the coding of the bots. It’s like if you go from making bread from natural ingredients to using a mix and you lose the mix. You have no one that understands the original process.
“What I looked for in the platforms,” Le Clair continued, “was: ‘Did they have mid-cycle auditing?’ ‘Did they have a concept of lifecycle understanding of a bot?’ ‘Did they treat it as a digital worker?’ ‘Did they have the ability in their technology stack to partition the design and the control and the various components so that each department that they had could use the same stack in kind of a multi-tenant approach?’ ‘Did they have a credential vault to manage the credentials securely?’ ‘Did they have an automation center or operating model that allowed the automation to be designed and built in the business, but had enough control centrally and technology management to assure that things would work and work reliably?’ These are things that all of the platforms are striving for. Put the analytics or advanced analytics aside, for the basic automation capability, you need these things. And these providers were in the mode of trying to make that happen, some were further along than others.” z

Is smarter RPA on the horizon?
BY CARLOS MELENDEZ
Carlos Meléndez is COO and co-founder of artificial intelligence and software engineering services firm, Wovenware.
While Robotic Process Automation (RPA) was originally developed to automate time-consuming or tedious manual tasks performed by humans, today it is helping organizations significantly reduce human errors and increase productivity across key markets, while helping companies cut costs. But there’s one thing missing in RPA, and that’s the ability to be cognitively aware. That’s where Artificial Intelligence (AI) comes in.
There’s a lot of interest in RPA and AI thanks to the tremendous business benefits and competitive advantage that they deliver. While each are powerful tools in their own right, when they work together, they can provide complementary synergies. To take advantage of this relationship, it helps to understand what each technology can offer.
While RPA is often talked about in the context of AI, it is not really AI. It is designed to automate repeatable, manual tasks, such as claims, payroll or forms processing. In these cases, the RPA system would “read” paper-based forms and enter the information in the correct fields on an automated system. In medical claim records processing, for example, RPA can be used to correct a recurring error that appears in many records. Companies are turning to RPA because it frees up staff to focus on other activities. As a rules-based system, RPA also increases accuracy and helps ensure compliance. It’s becoming so popular that the market is expected to grow to more than $3 billion by 2025.
AI, on the other hand, handles tasks that require intelligence, such as speech or image recognition, decision-making or behavior prediction. It is so much in demand that McKinsey projects that it could deliver between $3.5 and $5.8 trillion in value each year. Data scientists develop algorithms based on a huge amount of data to “teach” these AI programs, and the apps learn by finding patterns in the data. As the data scientists continually refine the algorithms, the AI programs continually improve, or become smarter.

RPA can help AI apps learn
AI apps can only tackle more complex programs and achieve a higher level of intelligence by having data scientists code more complex rules into the software. RPA can be used to collect the critical data that these AI apps depend on. For example, it can conduct screen scraping to automate the collection of data from websites — such as aggregating data from the web on all the laws passed in a particular year — that can then be used in algorithms to train AI programs. Since one of the biggest challenges of building effective AI algorithms is having lots of data, by helping to gather the tremendous amount of data that is required for AI, RPA can help speed up AI development and free up data scientists to focus on building the algorithms and training the apps.

AI can make RPA act more intelligently
Conversely, AI can be paired with RPA to enable it to provide higher functionality. RPA can evolve from collecting information and sharing that with other systems, including AI; gathering information and using rules to make decisions; to querying an AI system to determine a course of action. Take an Interactive Voice Response (IVR) system in a bank, for example. If someone calls to find out the balance on an account, an RPA system can retrieve that information. However, if an AI system is paired with it, it can tell the RPA system the steps that should be taken in handling the call. The RPA program can use information from the AI program to determine the optimal offer for a particular individual or the price point to offer a customer based on previous transactions and other relevant information, or to predict behavior.

Development issues to consider
While it makes a lot of sense to develop applications that combine RPA and AI because of the business and technology benefits they deliver, developing software for these applications creates several issues for software engineers. Because the joint development efforts require multiple teams handling RPA and AI functionality, there has to be good coordination between them. Planning and communication are critical; every action should be logged and communicated on dashboards that are available to everyone. Software developers and QA professionals need to pay specific attention to RPA, and manage it carefully because of its speed in processing information. If there is an error in the software, it could create a lot of problems very quickly.
The results of creating applications that combine RPA and AI are very compelling — enhancing the best capabilities of both applications. This allows organizations to benefit from smarter RPA programs that can handle customer interactions or predict behaviors, and enables data scientists to get the data they need to program AI apps to handle more complex problems faster, and speed up development cycles. While the joint development process will require greater communication and collaboration from multiple development teams, it opens up new opportunities for software engineering. A new, powerful class of smarter RPA applications is on the horizon and we are just beginning to see what is possible. z
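To make the pattern Meléndez describes more concrete — an RPA step that, instead of following only hard-coded rules, asks an AI service which action to take — here is a minimal, hypothetical sketch in Python. The endpoint URL, field names and action labels are assumptions for illustration only; they are not part of any particular RPA or AI product.

    import requests

    DECISION_SERVICE_URL = "https://ai.example.internal/next-action"  # hypothetical endpoint

    def handle_ivr_call(account_id: str, balance: float, recent_transactions: list) -> str:
        """RPA step: retrieve the data, then ask the AI service how to handle the call."""
        payload = {
            "account_id": account_id,
            "balance": balance,
            "recent_transactions": recent_transactions,
        }
        # The AI service scores the caller and returns a suggested next action.
        response = requests.post(DECISION_SERVICE_URL, json=payload, timeout=5)
        response.raise_for_status()
        action = response.json().get("action", "route_to_agent")

        # The RPA side stays rules-based: it simply executes whichever action was suggested.
        if action == "offer_upgrade":
            return "Play the pre-approved upgrade offer script."
        if action == "read_balance":
            return f"Your current balance is ${balance:,.2f}."
        return "Transferring you to an agent."

    # Example call (commented out because the decision service above is hypothetical):
    # print(handle_ivr_call("12345", 2450.10, [{"amount": -120.0, "type": "withdrawal"}]))

Keeping the decision in a separate service also makes it natural to log every request and response to the shared dashboards the column recommends, so both the RPA and AI teams can see what the combined system is doing.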
Buyers Guide
Is DataOps the next big thing?
BY JENNA SARGENT
After watching application teams, security teams and operations teams get the -Ops treatment, data engineering teams are now getting their own process ending in -Ops. While still in its very early days, data engineers are beginning to embrace DataOps practices. Gartner defines DataOps as “a collaborative data manager practice, really focused on improving communication, integration, and automation of data flow between managers and consumers of data within an organization,” explained Nick Heudecker, an analyst at Gartner and lead author of Gartner’s Innovation Insight piece on DataOps. DataOps is first and foremost a people-driven practice, rather than a technology-oriented one. “You cannot buy your way into DataOps,” Heudecker said. Michele Goetz, a principal analyst at the research firm Forrester, explained that DataOps is like the DevOps version of anything to do with data engineering. “Anything that requires somebody with data expertise from a technical perspective falls into this DataOps category,” she said. “Why we say it’s like a facet of DevOps is because it operates under the same model as continuous development using agile methods, just like DevOps does.” DataOps aims to eliminate some of the problems caused by miscommunications between developers and stakeholders. Often, when someone in an organization makes a request for a new data set or new report, there is a lack of communication between the person requesting and whoever will follow through on that request. For example, someone may make a request, an engineer will deliver what they believe is what is needed, and when the requester receives it they are disappointed that it’s not what they asked for, Heudecker
explained. This can result in increased frustration and missed deadlines, he explained. By getting stakeholders involved throughout the process, some of those headaches may be avoided. “[CIOs] really want to figure out how do they get less friction in their companies around data, which everybody’s asking for today,” said Heudecker. Another potential benefit of DataOps is improved data utilization, Heudecker explained. According to Heudecker, these are some of the questions that organizations may start to ask themselves: l “Can I use the data that’s coming into my organization faster? l Are things less brittle? l Can things be more reliable? l Can I react to changes in data schemas faster? l Is there a better understanding of what data represents and what data means? l Can I get faster time to market for the data assets I have? l Can I govern things more adequately within my company because there’s a better understanding of what that data actually represents?” According to Goetz, for companies that have been journeying down the path of “tightening the bolts” of what is needed from a data perspective and how that supports digital and other advanced analytics strategies, it is clear that they need an operating model that allows development around data to fit into their existing solution development track. This enables them to have data experts on the same team as the rest of the DevOps Scrum teams, she explained. Organizations that are less mature in their data
operations tend to still think in terms of executing on data from a data architecture perspective. In addition, a lot of those less mature companies do not handle data in-house, but will outsource it to systems integrators and will take a project-oriented waterfall approach, Goetz explained. The companies that are already getting DataOps right are typically going to be the ones that already
have a DevOps practice in place for their solution development, whether it’s on the application or automation side, Goetz explained. Those more advanced companies also tend to have a model for portfolio management and business architecture that aligns to continuous development. “They’re recognizing there is an opportunity to better fit into the way that you operate around development with those teams so that data doesn’t get left behind and isn’t building up technical debt,” she said. According to Goetz, this doesn’t just apply to data systems; it encompasses data governance, which traditionally has been the “final bastion of anything anyone wanted to do with the data. It was always playing cleanup,” she said.
“It’s really fascinating to see how organizations act when the lightbulb goes off and they make the equivalency between DataOps and DevOps,” said Goetz. “It’s like all those barriers start to fall away because they typically have something that’s been in place that they’re able to now fit into instead of fight against.”

Having a DevOps structure in place can ensure DataOps success
According to Goetz, companies that have not at least gone through or adopted some Agile methodologies will have a hard time adopting DataOps. Goetz explained that over the years, she has seen companies evolve and try to switch from waterfall to Agile. They tend to struggle and make mistakes along the way, at least at first. Unless a company has some of those competencies, they will likely struggle. “So I think there’s definitely some foundations that make it easier to get started in one end of the company,” said Goetz.

DataOps is probably here to stay, though it will be a while before it is widely adopted
DataOps is still in the very early stages, so it’s hard to predict where it will go in the future, or even if it will reach wide adoption or fizzle out, Heudecker explained. However, even if DataOps isn’t here to stay, it will still have some positive lasting effects, Heudecker said. “If it gets companies thinking differently about how they collaborate around data, that’s a good thing,” said Heudecker. “Even if it is a short-term hype and then it kind of fizzles out after a while, companies internalize some of the principles or ideas around the topic, and that’s good.” Goetz doesn’t see DataOps going away anytime soon. In fact, she said that it is actually accelerating in terms of interest and adoption. The level of interest will vary from company to company, but the groundswell is definitely there, she explained. In fact, a 2018 survey from data company Nexla and research firm Pulse Q&A revealed that 73 percent of organizations were investing in DataOps last year. The reason she doesn’t see it going away is that one of the catalysts for DataOps is that organizations are recognizing that they don’t just need to build technical capabilities and install applications anymore. In today’s world, organizations are building their own digital foundations, products, and digital business. According to Goetz, those things require a different way of development and going to market. continued on page 36 >
The DataOps Manifesto
Though it is still in its early days, DataOps already has its own manifesto, similar to the Agile Manifesto.
The DataOps Manifesto places value in:
l “Individuals and interactions over processes and tools
l Working analytics over comprehensive documentation l Customer collaboration over contract negotiation
l Experimentation, iteration, and feedback over extensive upfront design l Cross-functional ownership of operations over siloed responsibilities”
Other principles of DataOps that it lists include continually satisfying customers, valuing working analytics, embracing change, having daily interactions, self-organizing, and more. z
< continued from page 35
“[Those companies] looked at where DevOps came from,” Goetz said. “It came from the product companies, particularly the technology product companies. And they have been successful. And you also see integrators redesigning their development practices around DevOps. So there’s just so much momentum behind it. And there’s better results coming out of these practices in general that I don’t see it going away.”
It may be too early to make any predictions around DataOps
Even though it’s still too early to start seeing any obvious trends, Heudecker has still seen a lot of interest in the topic. Right now it is very vendor-led, he said, but there has been a lot of interest from organizations, too. In particular, companies are interested in learning exactly what it is and whether or not it will benefit them. Going forward, it will probably be the organizations themselves, not vendors, who will define the best practices, Heudecker explained. Organizations trying DataOps out are going to be “leading on what those best practices are and how you create a center of excellence around that,” said Goetz.
One trend that Goetz has already seen is that companies are approaching DataOps from the AI side of things. Algorithms have advanced and a lot of the existing AI models have gotten quite good at classifying, categorizing, and doing other data preparation work. And data scientists have gotten good at finding analytics functions and machine learning to execute on their data. They don’t even necessarily have to be data scientists because they don’t have to manipulate the model to optimize it. Things are a bit more premade, and vendor tooling is enabling the citizen data scientist. “You don’t always need to have data science skills to take advantage of a data science model or machine learning model,” Goetz explained.
Another trend she has seen is that the role of architects will likely change in DataOps structures. Architects have historically been ignored because developers don’t want someone telling them what to develop; they just want to sit down and make it. Often, architects are seen as something that will slow teams down and push them into more of a waterfall structure. But according to Goetz, in stronger Agile practices, architecture actually plays a significant role because it helps define the vision and patterns.

The role of data governance
Many of the regulations that are popping up around governance, such as Europe’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act, make handling information and governing it mandatory requirements for what you are going to develop, Goetz explained. As a result of these new regulations, we are going to start to see that privacy and security from a governance perspective aren’t just going to be handled at the CISO level or in data governance teams. These regulations are causing there to be a stronger working relationship between those stewardship teams and data engineering teams, she said. “It is required to infuse governance capabilities into every aspect of data development or data design,” said Goetz. “That can’t be lost… there’s a symbiotic relationship that is developing, in DataOps specifically, where what you do from a data management and architecture perspective, what you do from a delivery perspective, and what you do for a governance perspective, those are no longer three different silos. It is one single organization, and if there’s only one benefit to going down the route of adopting DataOps, it is that you have a better operating model for data in general, regardless. You will build a better data lake. You will build better pipelines. You will build more secure environments. You will tune your data to business needs better, just by that symbiotic relationship. And I think that that’s the accelerator to not failing in your digital capabilities when data is at the core.” z
A guide to DataOps tools n Ascend empowers everyone to create smarter prod-
n
ucts. Ascend provides a fully-managed platform for data analysts, data scientists, and analytics/BI engineers to create Autonomous Data Pipelines that fuel analytics and machine learning applications. Leveraging the platform, these teams can collaborate and adopt DataOps best practices as they self-serve and iterate with data and create reusable, self-healing pipelines on massive data sets in hours, instead of the weeks or months. n Attunity enables organizations to gain more value from their data while also saving time and money. Its software portfolio accelerates data delivery and availability, automates data readiness, and intelligently optimizes data management.
FEATURED PROVIDER
n HPCC Systems:
the big data platform that enables you to spend less time formatting data and more time analyzing it. This truly open source solution allows you to quickly process, analyze, and understand large data sets, even data stored in massive, mixed schema data lakes. Designed by data scientists, HPCC Systems is a complete, integrated solution from data ingestion and data processing to data delivery. Connectivity modules and third-party tools, a Machine Learning Library, and a robust developer community help you get up and running quickly.
n Composable Analytics is an enterprise-grade DataOps platform that is designed for business users wishing to create data intelligence solutions and data-driven products.
n MapR is a data platform that combines AI and analytics. Its DataOps Governance Framework offers a blend of technology options that can provide an enterprisewide management solution that can help them govern data.
n DataKitchen’s DataOps platform provides users with previously unavailable insights by allowing for the development and deployment of innovative and iterative data analytic pipelines.
n
Delphix offers a dynamic data platform that connects data with the people who need it most. It reduces data friction by providing a collaborative platform for data operator and consumers. This ensures that sensitive data is secured and the right data is made available to the right people. n Devo is a full-stack, multi-tenant, distributed data analytics platform that scales to petabyte data volumes and collects, stores, and analyzes real-time and historical data. Devo collects terabytes of data per day, enabling enterprises to leverage data from IT, operational and security sources. Devo reduces direct operational costs and resources while ensuring visibility across the enterprise’s data landscape, delivering performance up to 50x faster than competing solutions using 75% less infrastructure. n
Infoworks’ platform automates the operationalization and governance of end-to-end data engineering and DataOps processes. It also provides role-based access controls so that administrators can control which users have access to certain data sets.
n Kinaesis are a leading financial services data consultancy focusing on Data Strategy and Execution through their DataOps methodology. They provide DataOps accelerators and consultancy and partner with leading technology vendors to maximize ROI. n Lenses.io is a DataOps platform for streaming technolo-
gies like Apache Kafka. Lenses enables a seamless experience for running your Data Platform on-prem, cloud or hybrid and put dataOps in the heart of your business operations. Provides self-service data-in-motion control, build
Its DataOps Governance Framework offers a blend of technology options that can provide an enterprisewide management solution that can help them govern data. n Nexla is a data platform that is hoping to be “the new
standard in Data Operations.” It offers data ingestion and integration at scale, Flex API technology, the ability to connect to almost any format, the ability to create inter-company feeds, and provides your data the way you want it. n Qubole is a cloud-native data platform for self-service
AI, machine learning, and analytics. It provides end-toend big data processing that will enable users to more efficiently conduct ETL, analytics, and AI/ML workloads. n Redgate Software: The increasing desire to include database development in DevOps practices like continuous integration and continuous delivery has to be balanced against the need to keep data safe. Hence the rise in database management tools which help to introduce compliance by default, yet also speed up development while protecting personal data. Redgate’s portfolio of SQL Server tools span the whole database development process, from version control to data masking, and also plug into the same infrastructure already used for application development, so the database can be developed alongside the application. n
StreamSets is a data integration engine for flowing data from streaming source to modern analytics platforms. It offers a collaborative pipeline design, and the ability to deploy and scale on-edge, on-prem, or in the cloud, map and monitor dataflows for end-to-end visibility, and enforce data SLAs.
n
Tamr offers a new approach to data integration. It solutions make it easy to use machine learning to unify data silos. z
INDUSTRY SPOTLIGHT
Drive more data warehouse insights

Data warehousing is one of the core sources of enterprise information, but most organizations are still unable to unlock the potential value of their investments. For one thing, traditional data warehouses require significant domain experience and manually configured rules to enable the extraction of useful data. Modern data warehouses add machine learning, AI and deep learning capabilities to surface insights that exceed the capabilities of traditional data warehouses.

“Machine learning and deep learning add a different dimension to data warehouses,” said Flavio Villanustre, CISO and VP, Technology at LexisNexis Risk Solutions. “For example, you may be able to pinpoint anomalies without even looking for them, such as why a loyal customer’s behavior has changed.”

Data scientists know this. However, most software developers, business analysts and IT professionals haven’t learned the capabilities and limitations of machine learning, AI and deep learning yet because their positions didn’t require such knowledge in the past. Given the rapidly growing popularity of machine intelligence and its associated use cases, just about everyone must acquire new knowledge and skills so they can drive new forms of value.

‘In a modern data warehouse that has a deep learning capability with anomaly detection, you also get new insights that could have a profound effect on your company.’ —Flavio Villanustre

Work smarter, not harder
Traditional data warehouse administrators work with the business to define a set of rules that transform vast amounts of data into reports. As organizations add machine learning and deep learning capabilities, they may continue to update the old hard-coded business rules and they may even encode new ones, but the insights they get are no longer limited to pre-programmed rules. Instead, businesses get the dual benefit of traditional reporting and advanced analytics.

“If you’re at the earliest stage of maturity, you’re used to asking questions of an SQL or NoSQL database or data lake in the form of reports,” said Villanustre. “In a modern data warehouse that has a deep learning capability with anomaly detection, you also get new insights that could have a profound effect on your company and customers such as a security breach, other crimes in progress, the early warning signs of a disease outbreak or fraud.”

Unlike pre-programmed rules that identify the “known knowns” in data, deep learning can identify the “unknown unknowns,” which come in the form of risks and opportunities. To help democratize the use of machine learning and deep learning, HPCC Systems provides a consistent data-centric programming language, two processing platforms and an end-to-end architecture for effective processing. With it, developers can design Big Data-powered applications that improve the quality of business decisions and accelerate time to results.

In the absence of HPCC Systems, organizations can add basic machine learning capabilities on top of their data warehouse, which requires specific domain knowledge, time and investment. To take advantage of that, they also need labeled data for training purposes. With the right expertise, data governance and maintenance, the system may yield the desired results. If it does, the organization will add deep learning capabilities next, which require yet more specific knowledge, skills and investments.

By comparison, each HPCC Systems instance includes a traditional data warehouse system, including a data lake and strong integration with deep learning frameworks including TensorFlow, so organizations can mature at their own pace without unnecessary expenses and friction. “If you use HPCC Systems, you can leverage its strong data management capability to build your data warehouse, data lake and all the analytics you need,” said Villanustre. “You can also leverage TensorFlow to build deep learning models on top of your data, which is something that’s hard to get from the other platforms.”

HPCC Systems also does not require data duplication, which further lowers costs and risks. “If you’ve got two copies of data, keeping them synchronized is a big challenge,” said Villanustre. “If the data changes and the import didn’t work as required, you have to reimport the data in both locations. That’s unnecessarily expensive and time-consuming.”
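To make the anomaly-detection idea concrete, here is a minimal sketch of the kind of model a TensorFlow-based deep learning integration could train: a small autoencoder that flags the records it reconstructs poorly. The feature count, the stand-in data, and the 99th-percentile cutoff are assumptions made for the example; this is not HPCC Systems-specific code.

# Illustrative sketch only: a tiny Keras autoencoder for anomaly detection.
# Feature layout, synthetic data, and the cutoff rule are assumptions for
# the example, not part of any particular data warehouse product.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Assume `features` is a (records, features) array of normalized warehouse
# metrics (e.g., per-customer activity counts) exported from upstream jobs.
rng = np.random.default_rng(42)
features = rng.normal(size=(10_000, 16)).astype("float32")  # stand-in data

model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu"),   # bottleneck
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(16, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(features, features, epochs=5, batch_size=256, verbose=0)

# Records the model reconstructs poorly are candidate anomalies -- the
# "unknown unknowns" a fixed rules-based report would never surface.
reconstruction = model.predict(features, verbose=0)
errors = np.mean((features - reconstruction) ** 2, axis=1)
threshold = np.percentile(errors, 99)           # assumed cutoff: worst 1%
anomalies = np.where(errors > threshold)[0]
print(f"Flagged {len(anomalies)} records for review")

In practice the flagged records would be routed back into the warehouse or a review queue rather than printed; the point is only that the anomaly signal comes from the model, not from hand-written rules.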
Get open source flexibility
HPCC Systems is an open-source platform, so there are no software licensing fees. It integrates with Google TensorFlow and all TensorFlow-compatible components, including the Keras API for building and training deep learning models. “With HPCC Systems, you have the flexibility to extend the system as much as you want using proprietary or open source components,” said Villanustre. “Our decision tools combine public and industry-specific content with advanced technology and analytics so customers can evaluate and predict risk and enhance operational efficiency.” Learn more at www.hpccsystems.com. z
Guest View BY GABRIELLE GASSE
Developers need to focus on “Code UX”
Gabrielle Gasse is a Java Developer at xMatters.
One of the chief concerns in software design and development is to create an intuitive user experience. However, developers often forget that they actually have two sets of users to consider: the end-user consuming the product, and the other developers using and working on the code itself. Not upholding good “Code UX” affects the maintainability of applications, decreases team productivity, and ultimately, slows down the speed of development.
In his book on usability, “Don’t Make Me Think,” Steve Krug presents three compelling principles for building and presenting websites:
1. Don’t make me think.
2. It doesn’t matter how many times I have to click, as long as each click is a mindless, unambiguous choice.
3. Get rid of half the words on each page, then get rid of half of what is left.
With simple tweaks, Krug’s laws can be applied to other publication modalities and programming best practices — including source code:
1. Don’t make me think.
2. Finding my way around code should be trivial and unambiguous.
3. Always look for ways to reduce the complexity of the code base.
Not upholding good “Code UX” affects the maintainability of applications.
1. Don’t make me think.
As Krug writes, always make the intent and usage of your code “self-evident, obvious, and self-explanatory.” Even a junior developer without any experience with your application should be able to skim through and have a pretty good idea of what it does. Remember that every question a developer has to ask while reading your code adds to his or her cognitive workload. Practices like the SOLID principles and choosing meaningful names for namespaces, classes, methods, and variables can go a long way towards making your code more expressive. Keeping classes and methods concise also helps with readability. In-code documentation should be used consistently. Modern IDEs display the header comment of classes and methods as we use them in different parts of the software, and when they’re well-written, they help others avoid sinking time into navigating to different parts of the application to figure out what happens.
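As an illustration — a hedged sketch in Python, with an invented order-filtering domain and names that are not drawn from any particular codebase — compare a cryptic helper with one whose intent is self-evident:

# Opaque version: the reader has to stop and decode what d, t and x[2] mean.
def proc(d, t):
    return [x for x in d if x[2] > t]

# Self-evident version: the names and the docstring answer the reader's
# questions before they are asked.
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: str
    sku: str
    total: float

def orders_above_minimum(orders: list[Order], minimum_total: float) -> list[Order]:
    """Return only the orders whose total exceeds the given minimum."""
    return [order for order in orders if order.total > minimum_total]

The second version is a few lines longer, but a reader can skim it and move on without reverse-engineering positional indexes.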
2. Finding your way should be trivial, unambiguous.
Carefully chosen naming conventions and in-code comments not only help other developers understand the intent of your code, but act as valuable signposts. Developers should be able to pinpoint specific namespaces, classes, and method names in the same way we’d be able to locate a specific topic from a book’s table of contents. Removing repetition will also make your code more navigable. Specifically, you can apply the SPOT (single point of truth) rule for a single, unambiguous, and authoritative representation of the different pieces of knowledge within your application. Following the SPOT rule and the SOLID principles helps eliminate side effects between different system components; they make the code more predictable and isolate the logic between tasks.
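Here is a minimal sketch of the SPOT rule in practice; the loyalty-discount policy is an invented example, not taken from any real system. One authoritative definition of a business rule replaces copies that would otherwise drift apart.

# Before: the "loyal customer" rule is written out twice, so the two
# copies can silently diverge when the policy changes.
def apply_discount_v1(order_total, years_as_customer):
    if years_as_customer >= 5:              # copy #1 of the rule
        return order_total * 0.90
    return order_total

def should_send_gift_v1(years_as_customer):
    return years_as_customer >= 5           # copy #2 of the same rule

# After: a single, unambiguous, authoritative representation of the knowledge.
LOYALTY_YEARS_THRESHOLD = 5
LOYALTY_DISCOUNT = 0.10

def is_loyal_customer(years_as_customer: int) -> bool:
    return years_as_customer >= LOYALTY_YEARS_THRESHOLD

def apply_discount(order_total: float, years_as_customer: int) -> float:
    if is_loyal_customer(years_as_customer):
        return order_total * (1 - LOYALTY_DISCOUNT)
    return order_total

def should_send_gift(years_as_customer: int) -> bool:
    return is_loyal_customer(years_as_customer)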
3. Always look for ways to reduce complexity.
Complexity increases with each line of code we add, which in turn also adds to our cognitive workload. In his article on YAGNI (“you aren’t gonna need it”), Martin Fowler warns against implementing functionality too early in a system. It can be tempting to do so in anticipation of a feature that’s on the roadmap. However, the cost is unnecessary complexity in your systems, complexity that you’re then forced to maintain until the feature is fully developed and released. With ever-changing software requirements, it’s highly probable that your future feature will be abandoned or change so dramatically in scope that it will require substantial modifications.
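To make the YAGNI point concrete, here is a small invented report-export example: the speculative version builds a plugin registry and hooks for formats nobody has asked for, while the YAGNI-compliant version implements only what the roadmap needs today.

# Speculative: a registry and hooks for exporters that may never exist.
# Every line of it must be understood and maintained in the meantime.
class ExporterRegistry:
    def __init__(self):
        self._exporters = {}

    def register(self, fmt, exporter, *, pre_hooks=None, post_hooks=None):
        self._exporters[fmt] = (exporter, pre_hooks or [], post_hooks or [])

    def export(self, fmt, rows):
        exporter, pre_hooks, post_hooks = self._exporters[fmt]
        for hook in pre_hooks:
            rows = hook(rows)
        result = exporter(rows)
        for hook in post_hooks:
            result = hook(result)
        return result

# YAGNI-compliant: the one CSV export the current feature actually requires.
import csv
import io

def export_csv(rows: list[dict]) -> str:
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()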
And finally, putting it all together...
If we assume that the effort to understand and use disorganized code is about the same as the effort to write code that’s easy to understand, then I’d argue the latter is more valuable: the time spent writing good code is spent once, whereas the time required to decipher messy code is expended each and every time a programmer works on that software. That sounds like an expensive and frustrating way to develop software. z
Analyst View BY JASON WONG AND ELIZABETH GOLLUSCIO
Invest in your ‘cool factor’
Jason Wong is a research vice president and key initiative leader for Application Development and Platforms research at Gartner. Elizabeth Golluscio is a managing vice president at Gartner, leading the Application Design and Development team.

Application organizations have morphed into digital organisms. They evolve through changes in their people, practices and technology. Digital disruptions, like artificial intelligence (AI), provide new and immediate opportunities, forcing application organizations to change faster than ever before. As Darwin taught us, the fastest-adapting organisms will survive.
Gartner finds that top-performing organizations expect to develop 40% of their new critical solutions in-house. Application leaders responsible for digital development strategies need to invest in novel ways of working internally that fuse with existing development activities to rapidly effect positive change. Becoming a top performer means securing the “cool” factor and making your application development organization the “place to be” for developers. To do so, we recommend investing in your people, practices and technology.

Invest in people: Developers want to build cool software
Top developers want to work for organizations that allow them to build purposeful, cool software. As such, application leaders must devote more attention to internal development in order to differentiate themselves and close the gap between their current state and leading industry performers. Hiring or building new competencies, for example in user experience (UX) architecture and API product management, will create an innovative culture that’s cool again. Developers want to know they are using the latest techniques and best practices to enable innovation and agility in their organization.

Invest in practices: Cool organizations are moving to product thinking
In 2018, 73% of companies moved to IT product management, up from the mid-50% range in 2017. This product-focused way to work means that application leaders need to introduce digital entry into development processes that supports product owners, thus allowing for continuous product improvement. Lean and agile delivery teams need effective product management as much as product management needs agile delivery. It is essential that organizations move their practices toward those used by leading application development teams, for instance agile and DevOps, which subsequently empower developers to do what they truly want to do and more quickly respond to business needs. Gartner predicts that by 2020, product-oriented organizations will deliver better customer satisfaction and business results.
Invest in technology: Multiexperience development has the cool factor
Top developers want to work for organizations that allow them to build purposeful, cool software.
Attracting top talent to develop applications that have the cool factor means investing in new multiexperience development technology that maximizes the mesh app and service architecture (MASA).
• Multiexperience development: Involves creating fit-for-purpose apps based on touchpoint-specific modalities (e.g., touch, voice, gesture), while ensuring a consistent user experience across web, mobile, wearables, and conversational and immersive touchpoints.
• MASA: An overall architecture for building modern apps and services that replace client/server architecture. It is becoming a baseline for new trends in application development.
Top performers are using new UX design constructs and multiexperience development technologies to support MASA, simplify the development of front-end apps and increase agility with back-end services. The best application development strategies will shape people, practices and technology investments to modernize and create innovative app experiences for digital business transformation. Most critically, leaders must adopt a growth mindset to create a top-performing development team. z
Industry Watch BY I.B. PHOOLEN
Crumbs for cupcake-native development I.B. Phoolen writes about all manner of baked goods while enjoying his Dunkin’ coffee. He was introduced to the pages of SD Times years ago by founding editor Alan Zeichick, and his opinions and insights have only appeared in the April 1 editions.
Crumbs! Yes, there are crumbs a-plenty in the realm of cupcake-native software development. Let I.B. explain.
Cloud computing is a cupcake. Or rather, it’s like a cupcake. I get confused about metaphors and similes, just like everyone gets confused about cupcakes and muffins. Cupcakes have papers, but muffins don’t. Cupcakes have frosting, yet muffins don’t, except when cupcakes don’t have frosting. Sprinkles? Cupcakes. Cranberries? Usually but not always muffins.
Cloud computing is the way of the future (rather like cupcakes). Instead of a single large multi-layer cake, cupcakes can be consumed in small, discrete units (aka, a cupcake). You need more food? Point, click, and another cupcake is provisioned. Starting a diet? Point, click, and a cupcake is deallocated, and that vital nutritional resource is made available to another consumer. Easy-squeezy.
When most organizations begin working with cupcake computing, they start with recipes that involve traditional programming architectures. The finished applications could run on a server in a local data center, but instead are coded, compiled, tested, and deployed into a tasty cloud cupcake. This offers some of the benefits of cloud computing — no need for capital costs, no need for footprint in a local data center, no need for power, bandwidth, cooling. In other words, it’s someone else’s job to supply the oven, the cupcake baking pan, even the little paper wrapping, and then to clean up the mess afterwards. But you’re not gaining the full potential benefits of the cloud platform. That requires true cupcake-native development.
“Cloud” is a five-letter word meaning “someone else’s computer.” Someone else’s computer, however, is different than your own computers, because of the (somewhat) infinitely large pool of resources, and also because you can use microservices. Or, as I.B. prefers to think of them, crumbs.
It’s someone else’s job to supply the oven, the cupcake baking pan, even the little paper wrapping, and then to clean up the mess afterwards.
The website microservices.io describes the benefits of crumbs thusly: “The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.” However, you are dutifully warned, “The microservice architecture is not a silver bullet. It has several drawbacks. Moreover, when using this architecture there are numerous issues that you must address.” I.B. leaves the exploration of these crummy drawbacks as an exercise for the reader.
The key point is, you can define reusable services, and then link those services together, as needed, while the cloud platform determines the most efficient way to serve up those crumbs on-demand.
To be clear, there are still lots of opportunities for traditional-style development for the cloud. As I.B. mentioned above, you will realize benefits, even when doing what is referred to as “ancient, archaic, Luddite development.” To fully maximize the power of the cloud, though, be cloud-native, for which microservices are only one of the tools. So are functions, aka “open-source container-native serverless platforms.” So are the copious use of APIs, and a full reliance upon Kubernetes. Get with the program, everyone!
What concerns I.B. — and should concern you too — is lock-in. There are a lot of cloud platforms, some big, some small, all of which pledge some type of fealty to open source, to open standards, to open borders, to open doors… you get the idea.
Cloud computing is not as open as cupcake manufacturing. You can take a recipe from Good Housekeeping or Fanny Farmer and make it in any brand of cupcake pan, and bake it in an oven from Westinghouse, Hotpoint, Electrolux, or even Bosch. Sure, you may have to make a few slight adjustments to cooking time and temperature depending on whether the oven is gas or electric, induction or convection — but as long as it’s not a microwave oven, your cupcake will be yummy. (Never bake a cupcake in a microwave.)
The same is not true of cupcake-native crumb computing. Each vendor’s cloud is different, and indeed, many vendors offer a range of infrastructures and architectures, as well as methods for provisioning, securing, blah, blah, blah. Even when the clouds all support the same open-source standards and protocols, you can’t plug-and-play. You’re going to be locked in in a way you won’t be if you build, test, and deploy applications in, say, VMware muffins. You have been warned.
Crumbs! Bon appétit! z
Bad address data costs you money, customers and insight. Melissa’s 30+ years of domain experience in address management, patented fuzzy matching and multi-sourced reference datasets power the global data quality tools you need to keep addresses clean, correct and current. The result? Trusted information that improves customer communication, fraud prevention, predictive analytics, and the bottom line. • Global Address Verification • Digital Identity Verification • Email & Phone Verification • Location Intelligence • Single Customer View See the Elephant in Your Business -
Name it and Tame it!
www.Melissa.com | 1-800-MELISSA
Free Trials, Free Data Quality Audit & Professional Services.