MAY 2019 • VOL. 2, ISSUE 023 • $9.95 • www.sdtimes.com
Contents

VOLUME 2, ISSUE 23 • MAY 2019

NEWS
6   News Watch
8   Serverless moves the responsibility of performance monitoring to developers

FEATURES
13  What is Rapid Software Testing?
16  How to successfully apply DevOps in your CX development
21  Using version control to automate DevOps

BUYERS GUIDE (page 31)
23  Continuous testing at every step (THE SECOND OF THREE PARTS)

COLUMNS
44  GUEST VIEW by Mark Troester: The new language of high-productivity development platforms
45  ANALYST VIEW by Michael Azoff: Managing machine learning
46  INDUSTRY WATCH by David Rubinstein: Processing changes in process

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com Jakub Lewkowicz lewkowicz@d2emerge.com ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com CONTRIBUTING WRITERS
Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com SALES MANAGER Jon Sawyer jsawyer@d2emerge.com
CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com LIST SERVICES Jourdan Pedone jpedone@d2emerge.com
REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH

Applitools released Ultrafast Visual Grid for continuous UI QA
Visual testing solution provider Applitools released Ultrafast Visual Grid, software for managing an application's functional and visual quality. Before Applitools, UI testing was done manually and serially, as pages were examined and changes compared one page at a time. The company said Visual Grid farms out screenshot jobs to a grid of browsers in the cloud to generate images of web pages on all browser types, viewport sizes and emulated devices the tester requests. Further, Visual Grid takes advantage of the AI functionality built into the company's Applitools Eyes visual testing and monitoring tool to validate the elements on those screens, doing away with the need to maintain what the company called brittle test code and bloated functional test scripts.

Google previews new plug-ins for IDEs to ease cloud-native app development
Google released previews of a set of new plug-ins for integrated development environments (IDEs) that will generate cloud-native code for deployment into Kubernetes-based clusters. The Cloud Code plug-ins are available for IntelliJ-based IDEs and for Microsoft's Visual Studio Code. Developers who typically use IDEs on local client devices may lack the experience of building cloud-native code, a consideration Cloud Code addresses, according to Google. Using Cloud Build, a developer can run a pull request or commit to automatically build, test and deploy the application. Google also launched a custom worker feature that adds a CI/CD function for the company's new Anthos hybrid cloud software.

Harness joins vendor-neutral Continuous Delivery Foundation
Continuous delivery-as-a-service provider Harness joined the Linux Foundation's Continuous Delivery Foundation, a position the company says it will use to foster collaboration and tech evangelism. The CDF was founded in March with the goal of providing a vendor-neutral space for open-source CI/CD projects and to promote collaboration between developers, end users, and vendors. Harness, in particular, provides a DevOps-focused continuous delivery platform with automation and machine learning implementations.

Visual Studio 2019 improves project, code management
Visual Studio 2019 is now generally available for Windows and Mac. Microsoft says that updates to the IDE improve on source control, starting up new projects, code navigation, debugging and AI-assisted code completion. Also incorporated into Visual Studio 2019 is Visual Studio Live Share, a real-time collaboration environment. The tool's feature set is based on community feedback and includes a read-only mode, support for C++ and Python, and guest debugging sessions. Updates that are new specifically to the macOS version of the IDE include a new C# editor and a port of the Unity tools previously available on Windows.

Apple releases Swift 5 with library and language changes
Version 5 of Apple's Swift introduced improvements to application size and performance and a number of language and library changes based on suggestions from the Swift Evolution process. In addition, the Standard Library's support for raw text in string literals has been improved, the Result and SIMD vector types have been implemented, and String interpolation and the Dictionary and Set types have received enhancements. Alongside those, Swift 5 implements 16 other proposals from Swift Evolution.

Elastic enables consistent data modeling
Elastic has released version 1.0 of its Elastic Common Schema (ECS) specification. Initially announced in February, ECS is an open-source specification that "provides a consistent and customizable way for users to structure their event data in Elasticsearch," according to the company's website. In addition, ECS will simplify the process of creating new searches and dashboards. Every time a data source with a new format is added, users will be able to keep leveraging their existing searches and dashboards, the company explained.

Redis Labs introduces new database models
Redis Labs introduced two new data models and a new data programmability paradigm for multi-model operations. RedisTimeSeries collects and stores high-volume and high-velocity data, and organizes that data by time intervals. The solution also allows organizations to point out specific data points using capabilities such as downsampling, aggregation, and compression. According to Redis Labs, this model will allow organizations to query and extract data in real time, enabling rapid analytics. A new in-database serverless engine, RedisGears, allows for nearly infinite programmability options to support event-driven or transaction-based operations.

GitLab 11.9 includes secrets detection
GitLab 11.9 introduces secrets detection in its Static Application Security Testing (SAST) feature. GitLab's latest release is making it easier for security teams to discover if secrets have leaked. With this release, every commit will be scanned to ensure it doesn't contain secrets, and if it does, the developer is alerted in the merge request. The Code Owners feature will be integrated in this new set of rules, which will make it easier for developers to find the people that need to approve, GitLab explained.

IEEE releases ethics guidelines for automation systems
The Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA) released a new set of guidelines committed to the ethical use of automation and intelligent systems. In addition to professional recommendations for those in the world of developing and implementing automation, including technologists and teachers, the document also provides guidelines for policymakers and regulators. The organization said that it spent three years to nail down a version of the guidelines that would be accepted by government-affiliated, educational and industry bodies.

Salesforce releases Einstein services for custom AI
Salesforce, which specializes in customer relationship management (CRM), released Einstein Services, enabling admins and developers to build custom AI by using low-code or simple "point-and-click" formulas. The custom AI can then be embedded into Salesforce or any external app. The Einstein Platform Services include Einstein Translation, which will automatically translate any Salesforce object or field into the native language of the service agents, and Einstein Optical Character Recognition (OCR), which uses computer vision to extract relevant information. In addition, the service can generate AI-powered predictions pertaining to business and customer outcomes. This includes Einstein Prediction Builder, which can predict the outcome of any Salesforce field or object, and Einstein Predictions Service, which can embed AI-powered analytics into any third-party system.

CloudBees acquires Electric Cloud
CloudBees, continuous economy proponent and provider of enterprise support around Jenkins, has acquired Electric Cloud. The acquired solutions include Electric Cloud's DevOps-centric ElectricFlow, which allows teams to take on releases at any scale through quick implementation and sharing of secure, repeatable and adaptable pipelines, and also ElectricAccelerator, which expedites build and test times through intelligent and automatic parallelization of software tasks across physical or cloud CPUs. z

Atlassian announces Jira Align and Opsgenie updates
Atlassian announced product enhancements and rebranding at its 2019 Summit in Las Vegas. AgileCraft, the agile project management company that Atlassian acquired in March, is being rebranded as Jira Align, with the goal of bringing together AgileCraft's agile-at-scale solution with Jira so no matter where users are on their Agile path, they can benefit from using Jira in their development. The company also has updated Opsgenie, the incident management solution it acquired in September, with new features, including a new incident timeline for tracking key events and response activities, and postmortems that help teams find root causes, track efforts to remediate issues and learn from those incidents.

People on the move

• Andrew Fuqua has joined ConnectALL as its new VP of Products. The company has stated that this appointment is a reflection of its commitment to strengthening its value stream integration offerings. Andrew was previously an Enterprise Transformation Consultant at LeadingAgile and has over 30 years of experience in management, product management, and development.

• Dan Streetman has been appointed as TIBCO's new CEO. The company has stated that this appointment will ensure the company will continue to expand the reach and scale of its Connected Intelligence Cloud to deliver game-changing innovation to the market. Dan was previously the executive VP of worldwide sales and marketing at BMC Software.

• ThoughtSpot has welcomed Cindi Howson as its new Chief Data Strategy Officer. Formerly the VP of Research at Gartner, Cindi has 17 years of experience as an analyst for topics including visual data discovery, cloud BI, and mobile BI. At ThoughtSpot, Cindi will focus on helping customers understand how to leverage analytics and AI, and will continue working on data and AI for good, women in tech, and AI ethics.

• SecureLink has welcomed Tony Howlett as chief information security officer. The company stated that this appointment is part of an overall plan to accelerate its presence in the cybersecurity space and to deepen connections with customers, particularly those in highly regulated industries. Tony previously worked as chief technology security and privacy officer at Codero.
Serverless moves the responsibility of performance monitoring to developers
BY JENNA SARGENT

Serverless computing puts more power in the hands of the developer. Rather than developing an application and sending it off to IT for deployment, developers can deploy a serverless application themselves, without having to wait on IT operations or their fellow developers.

But serverless as a term can be kind of vague. When people refer to serverless, they're usually referring to functions-as-a-service, explained Ben Sigelman, CEO and co-founder of APM company LightStep. Popular examples of serverless providers include AWS Lambda, Google Cloud Functions and Microsoft Azure Functions. Functions are an individual piece of code or programming logic.

According to Sigelman, the basic driver for serverless comes from management. Companies are recognizing that if they are to exist and succeed, they have to be built around software. "There's certainly a lot of evidence for what happens to companies that don't become digital companies," Sigelman said. "If you're going to develop software that involves hundreds of developers, which I think is also going to be necessary if you want to actually win in that sort of digital economy, you cannot have hundreds of developers working on one piece of software. They have to be split up into smaller pieces. That's where the desire for microservices and serverless comes from."

A benefit of serverless is that it allows developers to deploy code separately, which results in faster time-to-market. "The move to serverless is kind of an extreme way of allowing individual developers to deploy their code separately from each other, and the units of deployment are very, very small because a serverless function is almost the smallest possible thing that you could deploy separately," said Sigelman. The functions are also, by design, stateless. This means that if they're deployed correctly, they won't disrupt your database as they're being deployed, said Sigelman. Another benefit that comes from developers deploying code separate from one another is that they don't have to waste time worrying about other developers' breakages as they're trying to deploy their software.
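To make that "smallest possible thing that you could deploy separately" concrete, here is a minimal sketch of a function-as-a-service handler, written against the AWS Lambda Python handler convention; the event field and response shape are illustrative assumptions rather than anything taken from the vendors quoted here.

```python
import json

# A minimal Lambda-style handler: the platform invokes this function once
# per event, and no state survives between invocations.
def handler(event, context):
    # 'event' carries the trigger payload; the 'orderId' field is hypothetical.
    order_id = event.get("orderId", "unknown")

    # Do the one small piece of work this function owns.
    result = {"orderId": order_id, "status": "processed"}

    # Return an API Gateway-style response. Anything that must outlive the
    # invocation would go to an external database or queue, because the
    # function itself is stateless.
    return {"statusCode": 200, "body": json.dumps(result)}
```

Deployed behind an HTTP or queue trigger, a function of roughly this size is the unit of deployment Sigelman is describing.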
This is another reason why monoliths are slow to deploy; everyone in the organization needs to have a clean codebase before something can be deployed, Sigelman explained. But for all the benefits of having code broken up into small functions, this also causes one big problem: it's hard to get a global view of the system. Looking at the functions on their own doesn't tell you much about how the whole system is behaving, Sigelman explained. "You've intentionally designed your architecture to allow for individual people to work autonomously and to abstract away the rest of the organization, but then for things like monitoring and security, you do need to have a global picture in order to make sense of what's happening in production and serverless has actually made that a lot harder... You should begin the transition, not finish the transition, with some kind of monitoring solution that's able to understand the big picture of how transactions actually behave in your application."
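As one illustration of what that "big picture" tooling can look like in practice, the sketch below wraps a function's work in a distributed-tracing span using the OpenTracing Python API; the tracer setup, operation name and tag are assumptions made for the example, not a prescribed configuration from any vendor mentioned in this article.

```python
import opentracing

def handler(event, context):
    # Assumes an OpenTracing-compatible tracer (for example, one provided by
    # an APM vendor) was registered as the global tracer during cold start.
    tracer = opentracing.global_tracer()

    # Each invocation becomes a span, so a tracing backend can stitch this
    # function into the end-to-end transaction that crossed other services.
    with tracer.start_active_span("handle-request") as scope:
        scope.span.set_tag("faas.trigger", event.get("source", "unknown"))
        # ... business logic goes here ...
        return {"statusCode": 200}
```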
According to Sigelman, this need for a global view is why there has been a lot of disruption in the monitoring and security of serverless functions. The use cases for serverless tend to be very diverse, he said, with developers using serverless to achieve a wide range of functionality. For example, serverless can be used to create ETL (Extract, Transform, and Load) applications, which are applications that take in external data for use in their own applications. Another popular use case is building latency-sensitive applications, where a user will tap something in their mobile device, which invokes various serverless functions and then returns that information to the user, Sigelman explained. Since serverless functions are so diverse, their monitoring strategies are also diverse, Sigelman said. "They're being used for a lot of different things and depending on the application, the monitoring strategies are really quite different."

Serverless may result in better code

Unlike in traditional monolithic software, where a developer writes code and then hands it off to IT operations, in serverless, developers are responsible for the code after it's deployed. That means they are the ones doing the monitoring, typically. That added responsibility on developers tends to result in cleaner code, Sigelman explained. "When developers move to serverless, they have a natural incentive to build things that are reliable," said Sigelman. This results in higher-quality software, Sigelman explained. According to Sigelman, this is the mentality that all software developers should have, whether they are writing serverless applications or are working on monolithic software. "I don't think it's a good idea, serverless or otherwise, to have complex software written in a way where people who write software aren't on the hook for its reliability," Sigelman said.
Differences for monitoring serverless apps

According to Ory Segal, co-founder and CTO of PureSec, there are two different types of monitoring strategies for serverless applications: performance and security. On the performance side, developers need to make sure that their functions are running and healthy so that they're not wasting money if functions get held up or aren't running properly. With monolithic software, this isn't quite so important because the server is running whether the application is working properly or not, so there is no added cost. But with serverless, where you are paying for your usage, it's important that applications are always running as they should be. The second piece of monitoring is for security. Developers also have to monitor applications to make sure that they are secure and to be aware of when there are incidents, Segal explained. Traditionally, security monitoring was handled by the IT security group. But in organizations with a mature DevOps model, the IT security side of monitoring is moving back into the development organization, Segal explained. "In general, when you're looking at cloud-native, serverless applications, there's a trend coming from the development organization, where dev organizations want to be able to deploy faster and not rely on the IT teams to host the systems." According to Sigelman, the main challenge with handing serverless applications off to IT operations is that it's hard to make the IT team mirror the development organization in a way where each service has its own dedicated IT Ops team to go with the development team. "In a more monolithic era, it was easier to train up a separate function to operate your monolith from the people developing it," said Sigelman. "But since these things are getting split into little pieces, and furthermore, those little pieces are getting [reorganized] and moved around and refactored all the time, you'd have to be doing that operation in parallel in two different organizations — one in development and one in operations — because you can't really expect a single operations team to properly operate all of these distinct services. So I think that's the organizational driver for the move to a single role that does both
development and operations."

One of the main differences between serverless and more traditional methods of development and deployment is that with serverless, you are running software on infrastructure that you have no control over, Segal said. Many monitoring tools are dependent on your ability to deploy things on the infrastructure. "In cloud-native environments, and specifically in serverless, you have no access to the underlying infrastructure," said Segal. "You're basically deploying code and configurations and that's the only thing you control, that's the only thing under your responsibility. The cloud provider is now responsible for the underlying infrastructure." According to Segal, a lot of the traditional monitoring vendors are now beginning to understand that you can't deploy solutions into the runtime environment, and that you have to inject monitoring into the code itself.

With IT removed, governance is key

Since the developers are the ones responsible for writing and monitoring their software, theoretically, a developer could deploy something that IT operations or the company doesn't know about. Therefore, it's necessary to have some sort of governance plan in place. "If things are not running on infrastructure that you own, it's harder for you to know what you have, where you have it and how it is behaving," said Segal. With serverless, things are running outside of your perimeter, and so they can often be outside of your control. Sigelman recommends not relying on person management alone for governance. He believes that part of the checklist for deploying a serverless application is making sure that it is discoverable by monitoring solutions. "Even at Google where I think things were pretty buttoned up in terms of this type of process, the only way we could get serverless running in production was by using our monitoring system to do that discovery for us," Sigelman said. "It wasn't sufficient to expect people to remember to follow some kind of procedure and keep a system diagram up to date."

"The best way to ensure that your organization knows what's running is to require that whatever is running in production is discoverable by some sort of monitoring solution," Sigelman explained. "And then you can look at your monitoring tools to look at what's running in production. And that's how operations teams would discover that some team five years ago deployed a serverless application. I don't think it's reasonable to expect it to happen any other way, in fact."

Having knowledge of what is deployed is not just important from an IT operations point-of-view, but also from a security standpoint. "If you don't know what are the assets that you have deployed, you can't really protect them properly," said Segal. Segal predicts that there will be a way for security teams to have insight into functions being deployed. "I believe that in the near future we will see an effort in helping the security team inside companies, the IT security teams, to be able to gain visibility and monitoring, security posture management, to know what it is that they have, where they have it, who's maintaining it, whether or not it has vulnerabilities, so they will be able to get themselves back into the loop."

12 things to watch out for

Earlier this year, PureSec released a list of the 12 most critical risks of serverless applications for 2019.

1. Function event-data injection: Injection flaws happen when untrusted input is sent to an interpreter before it has been executed or evaluated. In serverless architectures, this is not limited to direct-user input; many types of event sources, such as cloud storage events, NoSQL database events, or HTTP API calls, can trigger execution of serverless functions.

2. Broken authentication: Applying a complex authentication scheme that provides access control and protection to relevant functions, event types, and triggers can be a huge undertaking that can be catastrophic if done incorrectly. For example, a serverless application that exposes a set of public APIs that do enforce proper authentication may read content from a cloud storage system. If there is not proper authentication on that cloud storage, the system may reveal an unauthenticated entry point.

3. Insecure serverless deployment configuration: Settings provided by a serverless vendor might not be sufficient for a particular application's needs.

4. Over-privileged function permissions and roles: Serverless functions should only have the privileges needed to perform their specific tasks, an idea also known as "least privilege."

5. Inadequate function monitoring and logging: Logs in their out-of-the-box configuration are not typically well-suited to providing a security event audit trail. In order to achieve adequate coverage, developers have to string together logging logic to fit their needs. Attackers can exploit this lack of proper application-layer logging and remain undetected.

6. Insecure third-party dependencies: Third-party packages and modules used when developing serverless functions often contain vulnerabilities.

7. Insecure application secrets storage: A common mistake that developers make when storing secrets is storing them in a plain text configuration file, which means that any user with "read" privileges on that file can gain access. Developers may also store secrets in plain text as environment variables, which can leak.

8. Denial of service and financial resource exhaustion: Denial-of-service (DoS) attacks have skyrocketed in popularity over the past decade. For example, in 2018, a Node NPM package, AWS-Lambda-Multipart-Parser, was vulnerable to regular expression denial-of-service attack vectors and gave attackers the ability to time-out AWS Lambda functions.

9. Serverless business logic manipulation: In serverless systems with multiple functions, the order in which functions are invoked may be important for achieving the desired logic. Attackers can exploit bad designs or inject malicious code during execution.

10. Improper exception handling and verbose error messages: Developers tend to use verbose error messaging, which helps with debugging of environment variables, but then forget to clean code when moving it to a production environment.

11. Legacy / Unused functions and cloud resources: Obsolete functions are targets for abuse and should be looked for and deleted every so often.

12. Cross-execution data persistency: Serverless developers often reuse execution environments, and sensitive data may be left behind and exposed. z
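As a small illustration of the secrets-storage risk (No. 7 above), the following sketch reads a credential from a dedicated secrets store at runtime instead of a plain-text configuration file or environment variable; it assumes AWS Secrets Manager via the boto3 client, and the secret name and field are hypothetical.

```python
import json
import boto3

# Fetch a credential at runtime from a secrets manager, rather than baking it
# into a plain-text config file or environment variable the function carries.
def get_db_password():
    client = boto3.client("secretsmanager")
    # "prod/orders/db" is a hypothetical secret name.
    response = client.get_secret_value(SecretId="prod/orders/db")
    secret = json.loads(response["SecretString"])
    return secret["password"]
```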
The transition to serverless

When people move to serverless, there are two important transitions and questions they need to address. The first is the move from monolithic software and the second is the decision to choose serverless or microservices. The transition tends to look similar regardless of what an organization is transitioning from, said Sigelman. This is because serverless is so different from other architectures, he said. "You have to rethink the way you build your software in general," Sigelman said. The desire to move to serverless is often due to organizations wanting to have their software development team iterating faster and spending less time on things considered as a "tax," such as human communication and slow release velocity. With monoliths it's difficult, or even impossible, to do experiments when you're having to wait a month between releases. As a result of this, a lot of other efforts suffer. The second thing organizations need to consider is whether to choose microservices or serverless. Sigelman believes that organizations need to first understand what their performance requirements are. "If you can tolerate some of the performance implications of serverless, I think you can't beat the simplicity or the elegance of the serverless architecture," Sigelman said. According to Sigelman, there is actually quite a large tradeoff in terms of performance that comes from all of the encapsulation that happens in serverless computing. Joseph Hellerstein and other researchers from UC
Berkeley wrote a paper on serverless titled “Serverless Computing: One Step Forward, Two Steps Back.” In the paper, the researchers discuss the downfalls of serverless, calling it a “bad fit for cloud innovation and particularly bad for data systems innovation.” In the paper they discuss challenges that must be overcome in order to truly unlock the potential of the cloud.
"They basically document that all of this operational simplicity and elegance comes at a price in terms of the sort of latency you can expect to see out of serverless deployment," Sigelman said of that paper. "I think serverless is an incredibly friendly architecture to develop on, and with the right tools, a friendly architecture to observe and monitor as well... But it's still important to choose the applications that are right-sized for current data serverless platforms." It's important to understand what types of applications work well with serverless and which don't. For example, according to Sigelman, it's not always appropriate to build user-facing applications on top of serverless. He believes that most organizations figure this out before deploying software, but they may figure it out the hard way. For those that can't tolerate the performance implications of serverless, microservices is probably the way to go. Interestingly, Sigelman noted that while he's seen more marketing hype over serverless than there is around microservices, he sees more microservices than he does serverless deployments. "They both are receiving a lot of attention from a marketing standpoint, but I think serverless has a louder voice proportionally to usage," he said.

'If you're going to develop software that involves hundreds of developers...you cannot have hundreds of developers working on one piece of software.'
—Ben Sigelman

Sometimes developers will want to know what a certain program will look like written as a serverless application, and they end up liking it and realizing that it is better, Sigelman said. But they might see these positives and not consider the fact that they may be introducing all of this potential latency by splitting up functions, he explained. "There's a need to move to serverless with your eyes open, both as a developer and from a monitoring standpoint, in order to avoid an overcorrection and a move too far in the direction of serverless."
According to Sigelman, it’s important to make the transition to serverless gradually and with tooling that can help you understand and track performance. He believes that this is something that is a real challenge for large serverless applications. It’s also important that when making the transition, you don’t completely segregate developers and IT operations teams, Sigelman said. “I think that it’s important to do that transition along with the cultural shift, where your developers are hopefully responsible for what happens to their code after it gets deployed.” “Now, because of the way serverless works you have new things that you need to be looking at,” said Segal. “When you ran an application on your own server, you didn’t really care about the cost. The server is running anyway. Suddenly you have something where if you screwed up and somebody managed to find a way to exploit it, you find yourself with a denial of service with the purpose of inflicting financial damage, which is something very unique to serverless.” z
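One hedged example of a guardrail against the financial-damage scenario Segal describes is capping how far a single function can scale. The sketch below assumes AWS Lambda and the boto3 client; the function name and limit are placeholders, not a recommendation.

```python
import boto3

# Reserve a fixed concurrency ceiling for one function so runaway or hostile
# traffic cannot fan out into thousands of parallel, billable executions.
lambda_client = boto3.client("lambda")
lambda_client.put_function_concurrency(
    FunctionName="orders-handler",      # hypothetical function name
    ReservedConcurrentExecutions=50,    # hard cap on parallel invocations
)
```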
What is Rapid Software Testing?

James Bach, co-creator of the Rapid Software Testing methodology, explains the ins and outs of RST
BY JENNA SARGENT
James Bach, co-author of the Rapid Software Testing (RST) methodology, recently spoke with SD Times about the practice, what benefits can be derived from it, and how organizations can adopt it for their own use.

SD Times: What is Rapid Software Testing?
James Bach: Rapid Software Testing is a methodology for the responsible testing of software. But it is not the kind of methodology that comes encased in rules and templates. It is rather a mindset (a way of thinking, an ethics, and an ontology of testing) and a skill set (things that you know how to do, such as performing a heuristic risk analysis).

What sets Rapid Software Testing apart from other testing methodologies?
RST is humanist. It focuses on people who do testing (whether or not they are full-time testers) and the mission they pursue. It puts the tester in complete control of the testing process. Other methodologies focus on artifacts. For us, artifacts are a side effect of being a responsible tester. A tester practicing RST is always ready to explain, defend, and otherwise be fully accountable for his work. That's a big part of this methodology, whereas it seems to me that most testers using other methodologies have no idea how to respond to criticism of the practices they follow.

RST has a theory of learning built into it. It's an inherently exploratory approach to testing, based on the tester's emerging mental model of the product, user, and risks. By contrast, look at the V Model: no learning there. If you use the V Model for testing you are assumed to already know everything at the start of the project. That's a fairy tale.

RST is "ownable." When you practice RST, you are practicing your version of RST, which you can change or extend however you like. Whether your version of RST is "really RST" or not is something that emerges from community discussion. Having said that, there are heuristic models within RST that inform specific ways of testing and specifically what to test. I think RST is the only testing methodology which explicitly addresses heuristics as well as carving out a specific role for tacit knowledge.

What are the benefits of Rapid Software Testing?
It allows you to test honestly, responsibly, and be accountable for your work. Also, I call it "rapid" mainly because it encourages NOT doing things that waste time. It's light on paperwork (unless paperwork is really needed). We are skeptical of any practice that is performed just because "some expert said so."

Are there any tools needed for Rapid Software Testing, or is it more of a process/mindset?
I use all kinds of tools. But there are no specific tools required for it. I suppose the most popular tool we use is a mindmapper.

What sort of organizations or groups would be most well-suited to successfully implement Rapid Software Testing? In other words, is it easier to implement this if an organization already has a certain type of technology or process in place?
Success in RST requires personal and corporate responsibility. In other words: a craftsmanship culture. It also requires the organization to care about testing. These turn out to be rare situations. Many companies seem happy to have people doing testing who can't or don't answer for their work. In many cases, startups really don't have to care much about testing, so doing it in a shallow fashion with simplistic "unit tests" or other automation seems good enough. Testing shares the same kind of
problem that preventative health care and fire safety professionals have; if there is no huge disaster for a while, people stop putting energy into systems that keep disasters from happening. Testing is very much like insurance. You don't buy insurance because you want some sort of profit; you buy it because you want to prevent a loss. It's defensive thinking, and many people in the technology business would rather speculate than be safe and responsible. Thus RST requires a sense of corporate and technical stewardship. Without that culture, why work so hard to be a good tester? Why not write some test cases and call it a day?

Are there any challenges organizations will face when transitioning to Rapid Software Testing? How can they overcome those challenges?
Changing to Rapid Software Testing usually means de-emphasizing and de-fetishizing artifacts. Stop counting test cases! Stop graphing test cases! Test cases are not the point. Instead, ask "Who is responsible for testing this?" (If the answer is "everyone," then I predict the truth is that no one is taking responsibility.) Then ask for a test report. The test report must be made in a fashion that bears upon the needs of the business. This brings up another challenge. Although RST is a personal discipline, to implement on a corporate level, it requires corporate leadership. Management must insist on literate test reporting, and that means they must know how to listen to and interpret a test report. So there is management training that will be needed.

How does Rapid Software Testing fit in with the more modern development methodologies that are focused on iterating faster and faster, and which often try to squeeze testing out of the process?
The grumpy dad answer to that is Rapid Software Testing doesn’t fit in with irresponsibility and fantasy logic. I am a tester, so my job is to be honest. The 737 Max is grounded right now and Boeing will suffer billions in losses at least partly because no responsible adult gave the right kind of warning to
management. Or maybe they did give it and it was ignored. Why didn’t the right thing happen? Perhaps because they wanted to “fit in” to the reckless plan for fast-track certification. Maybe fitting in is not the most important thing. Agile and DevOps were not created by people who sought to fit in, either. HOWEVER, while fitting in to unreasonable practices and schedules is not our goal, we believe that the fastest good testing — the kind that would fit in to an aggressive production schedule — rests on skills, tools and testability. The RST answer is to develop all those things. We think this calls for at least SOME fairly sophisticated testing people, rather than enthusiastic part-timers who just write masses of automated test cases and hope for the best. Modern software development is under the control of people. They can and should do whatever they think is right. RST is a mindset and skillset so that these people can think straight and take responsibility for whatever testing they do. z
How to successfully apply DevOps in your CX development

BY AMY HUDSON
Amy Hudson is global head of discovery and enablement at Cyara, where CX meets Agile and DevOps.

When businesses embrace a "customer-first" mentality, they become more reliant on technology than ever before. As these enterprises race to transform themselves into digital companies, the need for constant innovation of customer experience (CX) capabilities through software development comes into focus. For customer experience products and services, the need for quality is paramount because there are no second chances with customers. Today, customers are empowered in ways never seen before: their switching costs are low and social media stories are powerful ways to amplify their dissatisfaction. Rapid innovation is a priority, but quality cannot be compromised. Companies need flawless customer experience execution from Day One. To drive digital transformation, innovate CX rapidly and assure quality, many development teams have turned to DevOps.

Digital transformation requires a different model of software development. It requires a model of "perpetual evolution," as McKinsey calls it, with many IT groups pressured to deliver ten significant projects each year, with category leaders far exceeding that. Agile and DevOps practices enable companies to create small experiments to learn which products and experiences are embraced by customers, learn rapidly from shorter feedback cycles between development and operations, and iterate as fast as the market and their customers are moving. The practices also add agility and resiliency to digital transformation projects. When the DevOps software methodology is applied to CX projects, Cyara's CX Assurance experts highlight four important considerations:

1. Quality is imperative for customer experience success. Customers have higher expectations than ever before and their experience feedback travels fast. Research by Customer Contact Week revealed that 54 percent of customers that had a bad customer experience considered switching companies, and 50 percent told friends, family, or coworkers about that issue.

2. The customer's perspective is paramount. As CX experts, we want every interaction a customer has with our company to be delightful and memorable. Therefore, the software must be designed with the customer's end goal in mind, rigorously tested across the different channels, leveraging realistic journeys and then monitored in real time to identify potential struggles before a customer experiences them. Only then can we be certain that we're ensuring an excellent CX.

3. A single customer journey involves many complex technologies. Customers demand omnichannel journeys where they can interact with a company's website, chatbot, live chat, interactive voice response (IVR), live
voice agent, email, SMS, or other channels. They expect seamless journeys where each channel understands their unique context and history. The technology infrastructure required to connect siloed channels and pass customer data between channels is extremely complex. For example, the IVR channel requires not just an IVR voice portal, but also VoiceXML applications, speech recognition, text-to-speech, and IP telephony (and that's just the voice channel!). Connecting an IVR to another channel often requires a connection to a CRM system, computer telephony integration, an ecommerce application, and others — all in the cloud. Many of these systems are supported by legacy and/or homegrown technologies that can be fragile and difficult to evolve.

4. The DevOps solution set is different for customer experience. DevOps for CX engenders unique requirements, and so there are purpose-built solutions that address these. Generally speaking, these solutions increase automation and facilitate an Agile approach to CX design and management. CX applications frequently involve voice interfaces, which demand specialized testing and monitoring to support them. And, for complex contact center software, you may need purpose-built technology to facilitate configuration management.

Anthem puts the customer first in its development projects

So, how does this work in practice? To illustrate this, I recently interviewed Anil Ravula, who heads up development of Anthem's vast network of customer service contact centers. With more than 73 million people served by its affiliated companies, including nearly 40 million within its family of health plans, Anthem is one of the nation's leading health-benefits companies. As Anthem's customer base has grown, so too has the challenge of ensuring its contact center operations serve the needs of millions of members and providers nationwide. Their experience in applying DevOps methodologies to CX development is an excellent example of how to align Agile and DevOps with a customer-first approach.

Until recently, Anthem was barely managing to deliver weekly updates of its contact center system, a massively complex task that typically commenced at 4pm, ran overnight, and involved multiple teams in different locations. This multi-step process included build, integration testing, deployment validation, and final rollout — and was almost entirely manual. Several challenges were associated with this approach:

1. It precluded continuous integration/continuous deployment (CI/CD).
2. It required coordination of different groups across different locations.
3. Builds did not always incorporate the most important features.
4. There was no automated regression testing to validate build and deployment.
5. The lead time to implement new features was measured in months.

In 2017, as part of a companywide adoption of an Agile/DevOps approach, Anthem transitioned contact center applications development to a DevOps-driven CI/CD approach. The overarching objective, from a development perspective, was to automate the build and deployment of the IVR system — the frontline service for Anthem's interaction with customers. As part of its transition to Agile, Anthem adopted a sprint-based approach with a heightened focus on customer experience. "To enable faster innovation of our IVR systems, we embraced DevOps concepts and with that came a whole set of new tools," said Anil Ravula, staff vice president at Anthem. "This was also a major shift of our mindset — we started to think in terms of user stories and to assign developers well-defined initiatives that focused on specific customer outcomes."

Anthem's new approach to CX system development automates the entire process and, most importantly, enables developers to develop, build, and test small, user-focused improvements before publishing these as deliverables to an enterprise artifact repository server. From there, deployment and testing are also automated. Most importantly, testing is now more rigorous, with broader coverage able to explore a huge number of potential cases and be performed on what's actually been deployed — with automated feedback of errors and other anomalies to the development team. "Whereas before we would run a series of defined tests manually, we can now use a fully automated approach using Cyara's comprehensive set of IVR testing protocols," said Ravula. "We also added a Lighthouse Dashboard to measure our build quality based on real-world testing and to provide visual reinforcement that we're hitting our quality goals."

The combination of investing in cultural change and the right technology has yielded the results Anthem was hoping for. Before the transformation, it took five to eight months to implement new features. Anthem can now innovate faster with smaller, more surefooted steps — and derive meaningful business value from new features almost immediately, with weekly builds and new features deployed twice each month. And while the benefits to Anthem's customers are improved systems to get their questions resolved, there's also been a quantifiable improvement for the development team, says Ravula: "What's really great is that the team now has more reasons to celebrate their efforts because they see the success of delivering valuable features to our customers within weeks, not months." z
DEVOPS WATCH
Using version control to automate DevOps

BY CHRISTINA CARDOZA

While the industry is trying to speed up software development and deployment with methodologies like Agile, DevOps and CI/CD, one company believes that the tools and solutions that go along with these approaches aren't changing fast enough. "Version control is so structured and complex, it takes a lot of time from the software development team," said Jordi Mon Companys, product manager at Codice Software, makers of the full-stack version control system Plastic SCM. "If we abstract that pain away, we can liberate a lot of that time and allow developers to shorten the release cycle and focus on more creative things."

To combat this problem, Codice Software has created mergebots, or event-driven automated integrators, for its Plastic SCM system. Mergebots are designed to automatically merge branches once they are reviewed, validated and the tests pass. Pablo Santos, CEO and founder of Codice, compares mergebots to robotic automation, but for the integration process. According to the team, mergebots are independent programs that connect to Plastic SCM through WebSockets.

"Releases became an event, in the bad sense. 'Hey JM, please, do the integration tomorrow because these two important tasks still need review and we really need them in the next release.' Sound familiar? You end up delaying the release until tomorrow. Creating new releases was an event. But, if the step was fully automated (this is what DevOps is all about) then a release (or version) would be created today, and a new one tomorrow or in a few hours with the two tasks you were concerned about. No need to wait, no delays, just a continuous flow," Santos wrote in a post.

Codice has created two types of mergebots: Trunkbot and Conflictsbot. Trunkbot enforces trunk-based development to poll issue trackers for permission to launch builds and run tests, while Conflictsbot is a DevOps remediation bot that notifies users if Plastic SCM is unable to merge branches. According to Companys, Plastic SCM's mergebots are able to detect and alert about merges through the platform's built-in semantic merge technology. In addition, users can create custom mergebots for their specific needs. "A mergebot is the logic that drives your daily workflow, so while you can stick to one of the standard ones we'll be publishing, chances are you'll need some variations. Things like: once a new release is created, you want a Tweet to be sent automatically to announce it, or you want to automatically create the release notes getting a given Jira field from each task, etc.," Santos wrote.

However, using version control to automate the DevOps pipeline is not a new phenomenon, according to GitLab's head of product Mark Pundsack; it is often known as GitOps. "The biggest benefits of using version control to automate your DevOps pipeline are permissions and tracking. For example, you can control who can push to certain branches, and thus control who can trigger pipelines, and you have a record of not only who caused those changes, but a history of everything that has ever been deployed and when," said Pundsack. GitLab enables version control and the CI/CD pipeline to work together. CI/CD pipelines are used for various version control events like pushing a commit, merging to the master branch, adding a tag or creating a merge request, according to Pundsack. "CI is a great place to start for automation. You start by automating your testing, and then move into automating your deployments."

Similarly, GitHub has a solution in beta called GitHub Actions, which enables developers to implement custom logic to perform specific tasks. "You can combine GitHub Actions to create workflows using an action defined in your repository, a public repository on GitHub, or a published Docker container image. GitHub Actions are customizable and can use the GitHub API and any publicly available third-party APIs to interact with a repository," according to the company.

"Using version control for a DevOps pipeline doesn't have to be an all-or-nothing affair. You can start slowly and build up, when your team is ready. Start by making your deployments repeatable, even if they're manually controlled. Then maybe automatically deploy to staging. Then add automation to deploy to production, but only for specific branches, tags, or after manual actions," said Pundsack. "When you're ready, dive into full continuous delivery and have 'master' deploy to production on every push. It might seem scary from where you're at now, but it's wonderful once you get there." z
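As a rough sketch of the incremental approach Pundsack describes (automate testing on every push first, then gate deployment on specific branches), a GitLab CI configuration along these lines is one way to start; the job names, image and commands are placeholders, not a recommended setup.

```yaml
# .gitlab-ci.yml — a minimal sketch; job names and commands are placeholders.
stages:
  - test
  - deploy

run_tests:
  stage: test
  image: python:3.7              # assumed runtime image
  script:
    - pip install -r requirements.txt
    - pytest                     # automate testing on every push first

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production     # hypothetical deployment script
  only:
    - master                     # then gate deployment on a specific branch
```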
SD Times
Continuous Testing at every step
C
ontinuous integration (CI), continuous testing (CT) and continuous delivery (CD) should go hand-in-hand, but CT is still missing from the CI/CD workflow in most organizations. As a result, software teams eventually reach an impasse when they attempt to accelerate release cycles further with CI/CD. What they need to get to from both a mindset and process standpoint is a continuous, end-to-end workflow that includes CT. While CT requires test automation in order to meet time-to-market mandates, the two are not synonymous. A common misconception is that CT means automating every test, which isn’t necessarily practical or prudent. Instead, the decision to automate tests should be viewed from a number of perspectives including time and cost savings.
BY LISA MORGAN
How to set up continuous testing Like CI, CD, DevOps and Agile, the purpose of CT is to accelerate the release of quality software. To enable a continuous end-to-end workflow, one should understand how CT fits into the CI/CD pipeline and how it can be used to drive higher levels of efficiency and effectiveness. “The key thing is prioritizing,” said Mush Honda, VP of testing at software development, testing services and consulting company KMS Technology. “If you are in a state where you don’t have a live system, it’s easier to go into a mindset of automation first. I still believe not everything can be automated in most cases. For those things that you are trying to migrate off of manual testing and add a component of auto-
mated testing with a system that’s already live or near going live, I would attack it with business priorities in mind.” Automated testing should occur often enough to avoid system disruption and ensure that business-critical functionality is not adversely impacted. To prioritize test automation, consider the business severity of defects, manual tests that take a lot of time to set up, and whether the tests that have already been automated still make sense. Also, make a point of understanding what the definition of CT is in your organization so you can set goals accordingly. “You need to understand what you’re going to achieve [by] doing CT in measurable terms and how that translates to your application or softcontinued on page 24 >
< continued from page 23
ware project,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “Beyond that, then it depends on having the right strategy for automation. Automation is key, so [you need] good buy-in on what layers you’re going to automate, the quality gates you’re going to put on each of these types of tests for static analysis, what you are going to stop at for unit tests, what kind of pass rates you’re going to achieve. It goes upstream from there.”

Each type of automated test should be well-planned. The rest is engineering, and the hard part may be getting everyone to buy into the continuous testing process.

“Continuous testing is designed to mature within your CI/CD process,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Without having testing as part of the build, integrate, deploy [process], all you’re doing is deploying potentially bad code quicker.”

The CT process spans from development to deployment including:
• Unit tests that ensure a piece of functionality works the way it is intended to work
• Integration tests that verify the pieces of code that collectively enable a piece of functionality are working as intended together
• Regression testing to ensure the new code doesn’t break what exists
• API testing to ensure that APIs meet expectations
• End-to-end tests that verify workflow
• Performance tests that ensure the code meets performance criteria
• Security testing to identify vulnerabilities
• Logging and monitoring to pinpoint errors occurring in production

Implementing CT may require adjusting internal testing processes to achieve the stated goals. For example, Lincoln Financial developers used to follow a waterfall methodology in which developers met with a business user or analyst to understand requirements. Then, the developer would write code and send it off for testing. The company now does Test-Driven Development (TDD), which juxtaposes testing and development. Test scripts are written and automated based on a user story before code is written. In addition, acceptance testers have been placed in development.

“When the code passes the test, you know you’ve achieved the outcome of your user story,” said Michelle DeCarlo, senior VP of technology engineering and enterprise delivery practices at Lincoln Financial.
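As a generic illustration of that test-first flow, consider the hypothetical sketch below, shown as two small Python files using pytest. It is not Lincoln Financial's actual code or tooling; the module, function names and the business rule are invented for the example. The automated test is written first, directly from a user story, and fails until the code that satisfies it exists.

# file: test_quote.py -- hypothetical acceptance test written first, from a user story:
# "As an applicant under 25, I pay a 10 percent surcharge on my monthly premium."
import pytest

from quote import monthly_premium  # this module does not exist yet when the test is written


def test_under_25_applicant_pays_surcharge():
    # A base premium of $100.00 should become $110.00 for a 24-year-old.
    assert monthly_premium(base=100.00, age=24) == pytest.approx(110.00)


def test_25_and_over_pays_base_premium():
    assert monthly_premium(base=100.00, age=25) == pytest.approx(100.00)


# file: quote.py -- written afterward, only to make the tests above pass.
def monthly_premium(base: float, age: int) -> float:
    """Return the monthly premium, applying a 10 percent surcharge to applicants under 25."""
    surcharge = 0.10 if age < 25 else 0.0
    return round(base * (1.0 + surcharge), 2)

When the tests pass, the user story is, by the team's own definition, done; when the requirement changes, the test changes first and the code follows.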
Managing change

When code changes, the associated tests may fail. According to SPR’s Kastl, that outcome should not happen in a CT process since developers and testers should be working together from day one.

“Communication and collaboration are really key when it comes to managing changes,” said Kastl. “As part of Agile methods, your team includes software engineers and test engineers, and
the test engineers need to know equally what is being changed by the software engineers and then make the changes at the same time your software engineers are changing the application.” To improve testing efficiency, Lincoln Financial uses tools to isolate software changes and has quality checks built into its process. The quality checks are performed with different types of resources to lessen the likelihood that a change may go unnoticed. “We try to isolate when an asset changes [so we can] make sure that we’re testing for those changes. Quite frankly, nothing is foolproof,” said Lincoln Financial’s DeCarlo. “After we’ve released to production, we also do sampling and examine the code as it works in production.” While it’s probably safe to say no organization has achieved a zero-defect rate in production, Lincoln Financial tries to minimize issues by performing different types of scans, including listening to customer feedback via various channels, including social media, so that feedback can be integrated into the delivery stream. Generally speaking, it’s important to understand what these software changes impact so the relevant tests can be adjusted accordingly. If a traditional automation script fails, the defect may be traceable back to the build process. If that’s the case, one can determine what has changed and what specific code caused the failure. Nevertheless, it’s also important to have confidence in the test scripts themselves. “If you don’t have high confidence in the scripts that are traditionally run, that sort of spirals into the question of what you should do next,” said KMS Technology’s Honda. “You don’t know whether it was a problem with the way the script was written or the data it was using, or if it was genuinely a point of failure. Being able to have high confidence in the script I created is what becomes a key component of how I know something did go wrong with the system.” Issue tracking tools like Jira help because they provide traceability from continued on page 27 >
< continued from page 24
the user story on. Without that, it’s harder to pinpoint exactly what went wrong.

Some tools now use AI to enable model-driven testing. Specifically, the AI instance analyzes application code and then automatically generates automated tests. In addition, such tools use other data, such as the data that resides in other tools, to understand such things as what happens in the software development process, where defects have arisen, and why tests have failed. Based on that information, the AI instance can predict and determine the risks and impacts of defects module by module.

“Model-based testing is essentially about not writing tests by a human being. What you do is create the model and let the model create tests for you, so when a change happens you are changing something a lot more upstream versus changing the underlying test cases,” said KMS Technology’s Honda. “Likewise, when a test is written and automated [by the AI instance], if certain GUI widgets change or my user interaction changes, since I did not automate the test in the first place, my AI-driven program would automatically try to define automated tests based on the manual test case. Predictive QA is more resilient to change, [which is important because] brittleness is the biggest challenge for continuous testing and automation.”
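The core idea of model-based testing can be pictured with a small, generic sketch: describe the workflow as a model once, and derive the test cases from the model instead of writing each one by hand. The Python example below is a hypothetical, hand-rolled illustration, not KMS Technology's tooling or any vendor's AI engine; the states, actions and depth limit are invented for the example.

# A toy model of a login workflow: states and the transitions allowed from each.
# Changing the model (for example, adding an "mfa_challenge" state) regenerates
# the derived test paths below without editing any individual test case.
MODEL = {
    "start":     [("enter_valid_credentials", "logged_in"),
                  ("enter_invalid_credentials", "error")],
    "error":     [("retry", "start")],
    "logged_in": [("logout", "start")],
}


def generate_paths(model, start="start", max_depth=4):
    """Enumerate transition sequences (candidate test cases) up to max_depth steps."""
    paths = []

    def walk(state, path):
        transitions = model.get(state, [])
        if len(path) == max_depth or not transitions:
            paths.append(path)
            return
        for action, next_state in transitions:
            walk(next_state, path + [(action, next_state)])

    walk(start, [])
    return paths


if __name__ == "__main__":
    for i, path in enumerate(generate_paths(MODEL), start=1):
        steps = " -> ".join(f"{action} [{state}]" for action, state in path)
        print(f"test case {i}: {steps}")

Because the tests are derived rather than hand-written, a change to the workflow is made once, in the model, which is the resilience to change Honda describes.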
How to tell if your CT effort is succeeding

The general mandate is to get to market faster with higher quality software. CT is a means of doing that. In terms of speed, tests should be running within the timeframe necessary to keep pace with the CI/CD process, which tends to mean minutes or hours versus days. In terms of quality, CT identifies defects earlier in the life cycle, which minimizes the number of issues that make it into production.

Another measure of CT success is a cultural one in which developers change their definition of “done” from continued on page 28 >
Automating tests when change is the norm Continuous testing requires automated testing to help speed the CI/CD process. The trick is to constantly expedite the delivery of code in an era when software change is not only a constant, but a constant that continues to occur at an ever-accelerating rate. In today’s competitive business environment, customers are won and lost based on code quality and the value the software provides. Rather than packing applications with a dizzying number of features (a mere fraction of which users actually utilize) the model has shifted to continuous improvement, which requires a much better understanding of customer expectations in real time, an unprecedented level of agility and a means of ensuring software quality practices are both time-efficient and cost-effective despite software changes. “You used to automate everything thinking you’re going to get an overall lift,” said Michelle DeCarlo, senior VP, Technology Engineering, Enterprise Delivery Practice at Lincoln Financial. “While that was true, there’s also a maintenance cost that can break you because there’s the cost of keeping things current. Now [we have] a lot more precision upfront in the cycle to identify where we should automate and where we’re going to get that return.” Rather than simply automating more tests because that’s what seems to facilitate a shift to CT, it’s wise to have a testing strategy that prioritizes tests and distinguishes between tests that should and should not be automated based on time savings and cost efficiency. “Before people were implementing DevOps, [they] used to say if you needed a stable application you should have one round of manual testing before you could venture into automation. Once people started implementing DevOps,
testing had to happen with development,” said Vishnu Nallani Chekravarthula, VP and head of innovation at software development and quality assurance consultancy Qentelli. “One of the approaches that we have found to be successful is writing the tests before you write the code, and then write the code to ensure that the tests pass.” While test-driven development (TDD) isn’t a new concept, it’s a common practice among organizations that have adopted CT. Whether TDD enables CT or the other way around depends on the unique starting point of an organization. With TDD and CT, automation isn’t an afterthought, it’s something that’s top of mind from the earliest stages of a project.
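One way to make the automate-or-not decision concrete is simple break-even arithmetic: compare the ongoing cost of keeping a test manual with the cost of building and maintaining an automated version. The Python sketch below is a hypothetical back-of-the-envelope model; every number in it is an invented assumption, not data from the companies quoted here.

# Back-of-the-envelope automation ROI: hours spent if a test stays manual vs.
# hours to automate it and keep the script maintained. All figures are illustrative.
def manual_cost(runs_per_year, years, minutes_per_run):
    """Total hours of manual execution over the period."""
    return runs_per_year * years * minutes_per_run / 60.0


def automated_cost(build_hours, yearly_maintenance_hours, years):
    """One-time scripting effort plus ongoing script maintenance, in hours."""
    return build_hours + yearly_maintenance_hours * years


manual = manual_cost(runs_per_year=26, years=3, minutes_per_run=45)        # biweekly regression pass
automated = automated_cost(build_hours=16, yearly_maintenance_hours=6, years=3)

print(f"manual effort over 3 years:    {manual:.1f} h")
print(f"automated effort over 3 years: {automated:.1f} h")
print("automate" if automated < manual else "keep manual (for now)")

Run the same arithmetic per test, and the tests that never pay back their automation and maintenance cost are the ones to leave manual.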
Adapting to constant change

While applications have always been subject to change, change is no longer an event, it’s a constant. That’s why more organizations are going to market with a minimum viable product (MVP) and improving it or enhancing it over time based on customer behavior and feedback.

Since development and delivery practices have had to evolve with the times, so must testing. Specifically, testing cycles must keep pace with CI/CD without increasing the risks of software failures. In addition, testers have to be involved in and have visibility into everything from the development of a user story to production.

“You’re always able to analyze and do a sort of an impact analysis from user stories [so if] we change these areas or these features are changing, [you can come up with a] list of tests that we typically no longer need, that would have to be abated to reflect the new feature set,” said Mush Honda, VP of testing at software development, testing services and consulting company KMS Technology. “So, it follows that the involvement and the engagement of the tester as part of the continued on page 28 >
< continued from page 27
bigger team definitely needs to be a core component.” While it’s always been a tester’s responsibility to understand the features and functionality of the software they’re testing, they now have to understand what’s being built earlier in the life cycle, so tests can be written before code is written or tests can be written in parallel with code. The earlier-stage involvement saves time because testers have insight into what’s being built. Also, the code can better align with the user story. It’s also more apparent what should be automated to drive better returns on investment.
Be careful what you automate
A common mistake is to focus automation efforts on the UI. The problem with that is the UI tends to change more often than the back end, which makes those automated tests brittle. The frequency of UI change tends to be driven by the business, because when they see what’s built, they realize they’d prefer a UI element change, such as moving the location of a button.

“If you have a car and if the car breaks down, you don’t just look at the steering wheel and dashboard, so unless you have tests and sensors at individual parts of the car, you can’t really tell why it has broken down. The same idea applies to software testing,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “In order to write tests that are less brittle, you have to test it from the bottom up and then as things change, you have to change tests at the individual layers.”

Using a layered approach to testing, errors can be identified and addressed where they actually reside, which may not be at the UI level. A layered approach to testing also helps shift mindsets away from overreliance on automated UI tests.

“If you’re in a situation where you have the type of application that is driven by a lot of business needs and a lot is changing, from a technical perspective, you don’t want to automate at the UI level,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Instead you want to automate at the unit level or the API services level.”

“Think of applying for a bank loan. In the loan origination process you go through screen after screen. [As a tester,] you don’t want to involve the whole workflow because if something changes in one part, your tests are going to have to change throughout,” said Kastl.

The concept is akin to building microservices applications that use small, self-contained pieces of code versus a long string of code. Like microservices, the small, automated tests can be assembled into a string of tests, yet a change to one small test does not necessarily require changes to all other tests in the string.

“We need to think like programmers because if something changes, I’ve got one script to change and everything else fits together,” said Neil Price-Jones, president of software testing and quality assurance consultancy NVP Testing.

However, test automation can only do so much. If change is the norm because of ad hoc development practices that aren’t aligned with the business’ expectations in the first place, then test automation will never work, according to SPR’s Kastl. Fix the way you develop software first, then you’ll be able to get test automation to work. z
< continued from page 27
the delivery of code to the delivery of tested code.

“You need the cultural belief that developers can’t say something is done until it’s been tested. Another key success indicator is when all your testing is completed in the same Agile sprint,” said SPR’s Kastl. “It’s not saying ‘I’m going to do some testing in the sprint based on the amount of time I have so I’m going to automate regression in the next sprint.’ You should not be a sprint behind. The way to make sure in-sprint testing is being done as part of a CT process is developers are merging their code and it’s ready to test on an hourly or daily basis, so testers can do their work.”

For Infostretch’s Mathuria, the high-level indicator of CT success is data that proves a build or release is certified in an automated way. A lower-level indicator is that decisions are not being made about software promotion at any predefined level, such as this much functional testing is enough or that much security testing is enough. Instead, what qualifies as “enough” is determined by the CT process an organization has established. “Only exceptions are managed by people and not the base level workflow,” said Mathuria. “Once you achieve that then you see the right value from continuous testing.”

And don’t forget metrics, because success needs to be measured. If speed is the goal, what kind of speed improvement are you trying to achieve? Define that and work backward to figure out what’s necessary to not only meet the delivery target but also be confident that the release is of an acceptable quality level.

“You also need to think about skill sets. Are they able to adopt the tools necessary or not? Do they understand automation or not? Do they understand the continuous testing strategy or not?” said Honda. “If you want to get to continuous anything, there has to be a timeline and a goal you have to measure up against, which ultimately defines whether you’re successful, not successful or facing roadblocks.” z
Buyers Guide

Organizations are dipping their toes into the water, as understanding of how manufacturing methodology applies to software development grows

BY DAVID RUBINSTEIN
Value stream has become a part of the application development lexicon, having first gained uptake as a manufacturing exercise and now being applied to the process of delivering value to customers through software. It’s been evaluated by analysts and increasingly written about in the industry press — including the SD Times Jan. 1 cover declaring this The Year of the Value Stream — so people are starting to wrap their heads around the concept of finding where bottlenecks in your process are, and finding which bit of work is just not productive and eliminating it.

But creating a value stream takes work. Although there are tools that can help organizations map their own value streams and gain visibility into their processes, there is no cookie-cutter approach to getting it right, and no silver-bullet tool to do the heavy lifting for you. That being said, the rewards a well-structured value stream management program can provide include the frequent delivery of software that your customers like and use, and the elimination of waste from your workflow that can save money and time. It’s a way to ensure you are always getting better at what you do.

“We’ve been using the terminology, but it’s just now starting to take hold,” said Lance Knight, senior vice president at ConnectALL, an Orasi company. “I can go in and solve my release process and improve my release time, but if I don’t look at how long it takes for that idea to go all the way through, then I miss time to market. Time to market and visual transformations right now is very, very important.”

As organizations should with any relatively new concept, we’ll step back to the beginning, and start with the question: What exactly IS value stream management?

Eric Robertson, vice president of product management and strategy execution at CollabNet VersionOne, defines it this way: “Value stream management is an improvement strategy that links the needs of top management with the needs of the operations group. It is a combination of people, process and technology. It is mapping, measuring, visualizing, and then being able to understand from your planning, your epics, your stories, your work items, through these heterogeneous tools all continued on page 32 >
< continued from page 31
the way through your enterprise software delivery lines, being able to understand that what you’re delivering aligns with the business objectives, and you’re being effective there.”

For sure, there is still some misunderstanding. ConnectALL’s Knight said, “Go to a conference and say, do you know what value stream is? I bet you most of them aren’t really going to understand, because value stream management is, it’s about all that Lean [manufacturing] stuff. There’s something that I think is missing in aligning knowledge and people. Value stream mapping is a tool to identify waste. But software itself could be part of the tools you use to identify waste. There’s no one tool that’s going to map and remove all your waste. So, that’s the reality, where people are taking point tools, and doing point things, but they’re not taking a holistic value stream look, end to end.”

On a manufacturing floor, where the production design and method of a particular widget doesn’t change often, it’s easier to understand the process and product flow. The challenge of bringing value streams to software development is that software is a constantly changing intangible. “You have to account for that when you’re looking at value stream and software,” Knight said.
“What’s really important when you look at it as a company, and why value stream should be implemented, is you need to handle two major forces. That is, time to market needs to be better than your competitors, bar none, and a good customer experience at the local branch of the bank isn’t it anymore; it is all digital focused.”

Alex Tacho, product manager at CloudBees, said, “The manufacturing process is pretty linear, with clear dependencies — steps that have to be completed before moving that product or widget upstream to the next phase. This is where the value stream concept in software development is interesting to define, since the steps to produce a feature or even fix a bug may not have clear dependencies. … However, like manufacturing, software delivery and continuous delivery is made up of linear processes, too, with dependencies that must be passed before software is deployed to production. What’s really amazing, when you think about it, is that we haven't been as organized around value streams and automation as manufacturing has been for decades, now — maybe as much as a century — value streams really are a concept that also applies to software. Identifying waste in the process and streamlining the flow of software through to production is all good.”

Where to begin?

So you’ve done your evaluation, decided creating value streams will greatly benefit your company, and you’re ready to go. But where do you start? As always, opinions vary. Some say it is important to define what you want to accomplish. Others say a value stream begins at the ideation stage, when products are conceived. Still others say the first step is to create a value stream map, which looks at all continued on page 35 >

Why now?

A few years ago, when the concept of value stream was first being discussed in development organizations, the response often was a quizzical look. But the conversation was about a way to describe end-to-end development and planning all the way — what CollabNet’s Eric Robertson calls “that concept to cash aspect.”

“In the Lean/Agile type of world, folks were utilizing methodologies like Agile, Kanban, Scrum, and so they started applying these Lean concepts and techniques around development, and that offshooted into DevOps, because delivery had to be more agile to catch up to development and accommodate that methodology,” Robertson said. “But it was still very technical-centric. It was about your DevOps tool chains, and we saw customers that invested in a lot of the Agile and DevOps tools — CI/CD — and they were about to automate things around workflows. But it was still very disconnected; they weren’t able to track that work in progress, all the activities and touch points. They could do it technically, so they can tell you, ‘We delivered five releases this week,’ but really the business was saying, ‘What does that really mean to us? What does that mean to my initiatives and objectives that I’m trying to drive?’ My objective is enhanced digital footprint, one of my initiatives is support for credit-card processing, and in that last release, what capabilities did we deliver to align with that? That was the gap. They couldn’t really master that question. They could tell you all the technical aspects around it, but really around how this matches back up to the business, and the value you’re delivering, was that missing gap.”

Matching up development to company objectives and initiatives has been difficult to do. Plutora’s chief marketing officer Bob Davis said, “We get the value of Agile and DevOps, but look what’s happened. We have distributed organizations, it’s difficult to collaborate, to sync up across multiple methodologies. I’ve got a mobile banking application, and I’ve got a new feature coming out that depends on something going on in a more mainframe-oriented, waterfall-oriented methodology, and how do I make sure those get synced up and the dependencies are met, and things are tested correctly with the proper builds and the proper features? All that stuff is what we’re seeing now.”

And that’s what value stream management is trying to provide. z
—David Rubinstein

CloudBees describes the benefits of value stream management in its ability to help organizations:
• Gain better visibility by connecting teams, tools and applications across the entire software delivery process to track the flow of value.
• Find areas of waste by identifying where the blockages and bottlenecks occur in order to reduce waste and increase efficiency.
• Measure and manage performance by benchmarking and tracking DevOps performance based on industry standard indicators related to throughput (Deployment Frequency, Mean Lead Time) and stability (Mean Time To Recover, Change Failure Rate).
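The four indicators in the box above are the commonly cited DORA metrics. As a rough illustration of how they can be computed from deployment records, here is a hypothetical Python sketch; it is not CloudBees DevOptics code, and the record fields and sample values are invented for the example.

# Hypothetical deployment records for one value stream over a 30-day window.
# Field names and values are invented for this sketch; a real tool would pull
# them from CI/CD and incident-tracking systems.
deployments = [
    {"lead_time_hours": 30, "failed_in_prod": False, "restore_hours": 0},
    {"lead_time_hours": 22, "failed_in_prod": True,  "restore_hours": 4},
    {"lead_time_hours": 18, "failed_in_prod": False, "restore_hours": 0},
]


def dora_metrics(records, period_days):
    n = len(records)
    failures = [r for r in records if r["failed_in_prod"]]
    return {
        # Throughput
        "deployment_frequency_per_week": n / (period_days / 7),
        "mean_lead_time_hours": sum(r["lead_time_hours"] for r in records) / n,
        # Stability
        "change_failure_rate": len(failures) / n,
        "mean_time_to_recover_hours":
            sum(r["restore_hours"] for r in failures) / len(failures) if failures else 0.0,
    }


print(dora_metrics(deployments, period_days=30))

The usefulness of the numbers comes from tracking them over time against the "true north" an organization has set, not from any single snapshot.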
< continued from page 32

assets and properties to help organizations see where bottlenecks and waste are in their processes. Yet all agree it is a journey of continuous improvement that never ends.

Carmen DeArdo, senior VSM strategist at Tasktop, said value stream creation and management starts in the ideate space. He explained, “We set up teams and we say, ‘How do things start? Let’s just talk about features and defects.’ Features don’t just start in Jira, or whatever your Agile management tool of choice is to understand really the wait states, because almost all of flow time is around wait states, and chances are it’s not in creative release. It’s probably in connecting ideate to create, or connecting operate to create, but almost everyone in the industry is doubling down on creative release, which is fine, you have to start somewhere. But it’s just the beginning of the journey and you have to continue that journey.”

Once your organization has bought in, it is important to first understand what your current operation looks like, and then to map that, to make your processes as visible as possible.

“The first step in value stream mapping is really about understanding your current state, and then really start looking at removing waste, bottlenecks, and understanding the activities that are being performed,” CollabNet VersionOne’s Robertson said. “And then, how can I streamline that, because you can’t improve what you don’t know. It’s really about understanding that current state, that process, that flow of value, understanding how it’s being delivered, and then looking at how that can be optimized.”

He said it is not uncommon for organizations to lack an understanding of all the activities that are involved in how products and services are created and delivered. “There’s a big disconnect there,” Robertson said. “Understanding state, your process, how people interact with that, and the tooling, and then going from there.”

Jeff Keyes, director of product marketing at Plutora, agreed that value stream mapping is the place to start, to gain that end-to-end visibility into the operation.

“The first thing that you want to do is to map your value stream. In whatever methodology you want to talk, you have to understand what you’re starting with, so that you can understand what needs to be improved and where. It might make the most sense to automate; that might be your very next step, because you can improve quality and so forth. It might make more sense to figure out a better way of handling governance, because that’s where your bottleneck is. It might make more sense to become more product-oriented versus project-oriented. All those things and those discoveries will come out of the process of evaluating the value stream as a system.

“The whole point is, do value stream first. Map your value stream so you at least understand what you’re doing. If you do Agile, what you’re saying is, great, let me break down my features into smaller buckets. That’s good. But it may not be where your constraints are. If you’re doing DevOps, you’re saying continued on page 38 >

Project to Product... What does that even mean??

In value stream management, a change in perspective is required. Proponents of the methodology say you have to take a product view, rather than a project view, of the work you’re doing. Isn’t a project merely a part of a product? It is, but there is so much more to a software product than the code. Tasktop founder and CEO Mik Kersten titled his book on value stream management ‘Project to Product.’

As Tasktop’s senior strategist Carmen DeArdo explained: “I was at Bell Labs before I went to Nationwide Insurance. We were in a product model but I didn’t really know it. There are characteristics of a model of a product that I think lack in terms of a project. Projects are temporal, projects come and go. Products are things that you live with. Products are the things that continue to sustain you.

“When you’re looking at it from a product perspective, you’re looking at it across its entire life cycle and cost of ownership, and you’re looking at all aspects of work. Projects almost always focus on features. Project managers are focused on features. If it’s defects, it’s defects that are affecting the scope of the project. If it’s risk, it’s risk that is affecting the scope of the project. When you’re dealing with a product, you’re talking about everything that’s affecting that. If you have a vulnerability in a Struts library that may have nothing to do with a specific feature, you’re going to consider that as part of your product because it’s one of the systems that’s supporting your product. You don’t do that in a project model. You don’t look at flow distribution, you don’t have conversations around how much do you want to invest in features, defects and risk, and you almost never talk about debt, because debt is not something that’s going to help you now; it’s an investment in the future. It makes the next thing go faster. None of those things are in play in a project world.”

DeArdo also pointed out that the amount of churn you’re doing internally should be proportional to how much your company is changing. “The world doesn’t end on December 31st and recreate itself on January 1st,” he said. “I used to say, if you look at a company on January 1st, they’re selling pretty much the same thing they sold on December 31st. But if you look at what happened internally, they probably have a whole new set of projects, a whole new set of project teams, a whole new set of activities... what benefit are you getting from that? What benefit are you getting from doing all that reorganization? Your company may have some strategic changes; you may be launching a new line. BMW may be coming up with a new car. But fundamentally most of your products aren’t changing. That gets completely lost in this whole translation. I just think it’s a fundamentally different way of how work gets done and managed.” z
—David Rubinstein
< continued from page 35

let me automate as fast as possible from check-in on to deployment. That’s great, but that may not be where your bottlenecks are.”

CloudBees’ Tacho said it is important for organizations to first define what they want to accomplish — whether that’s improving quality, delivering software faster, working more efficiently, or some other goal — what some are calling the “true north” of the organization. After defining the goal, he suggests finding the start and end points, and defining what it is you want to measure and improve. Next, he said, you must assemble the team of all the roles that are involved in delivering a new feature: developers, UX, operations, security, testers and so forth.

“The important thing is the people making up the team should be relevant to the end goal and empowered to act on the steps in the value stream to move the value (product, feature) to the next phase and ultimately to completion,” Tacho said. “But they should also be able to make changes to the process in their domain to make it more efficient or to fill a gap — which is the whole idea of this exercise in the first place.”

Once the correct team is assembled, Tacho said it should walk through the flow as it exists today, by analyzing the steps and documenting them in continued on page 41 >

Losing command and control…and living with it!

There’s a saying that goes, “Developers don’t want to miss the boat, and operations don’t want to sink the ship.” But for organizations to create value streams to observe their operations and eliminate waste while driving efficiencies, they might have to yield some control.

ConnectALL’s Lance Knight recalled his time as a Novell network administrator, who required requests in triplicate — from the requester, his boss and HIS boss — before granting access to files. “That was a whole other time of IT command and control that kind of took over for a while, and you’ve got to get rid of that from an IT operations perspective, and think about how you’re trying to support the business, not control it as much.

“IT thinks of themselves as process enforcers, rather than the enablers,” he continued. “If you didn’t groom the backlog right, we’re not doing anything with it. That kind of stuff. I remember being that guy, that IT enforcer. You guys are spending the wrong time on what you’re supposed to do on this PC out on the shop floor, so I’m going to turn it off at your lunch hour from now on. Right? An enforcer. They’re used to having that power. They have the passwords, security. ‘I have the admin passwords for the HR system, I know what all you guys make, I am special!’ ”

Plutora’s chief marketing officer Bob Davis said, “As you go from command and control, old-school process to the newfangled world of Agile and smaller bites released more frequently, you have to be able to collaborate. And one of the things developers like to do less of is collaborate. If the system can provide the KPIs, if the system can allow them to collaborate silently, in effect, by alerting dependent systems, dependent development processes, etc., automatically, because the system is plugged in and understands the relationships and notifies or alerts the relevant parties … anything that’s happening, to the extent that can be automated and served up into a system, the better you are.

“I think that that’s the promise,” Davis continued, “and as we go down the future and say OK, what happens next, you start to get better machine learning and the processes get even more intelligent relative to how to weave in security, how to weave in compliance, in a more automated kind of parallel process way, without having that ‘oh shit’ moment where you go, ‘I didn’t do that.’ Any of those things that are made possible are real advances in the world of software development, and that’s what value stream promises.”

The key question is, how do you bridge the gap from command and control and the highly autonomous, self-led teams? In other words, ‘How do you get to Amazon?’ According to Plutora’s Jeff Keyes, the answer is value stream management.

“You have to go through the process of understanding, here’s our value stream from beginning to end. Then you start to break that down, you have to start integrating your tools and bringing them together. Third, you’ve got to add a layer of orchestration across it so you can incorporate these things, because that reduces your time to delivery. Fourth, as you’re measuring the time that things are going, now that you’re orchestrating it, you can start to see a path of, ‘well, these checks, I can automate these, because that will improve my delivery performance,’ and it brings the team in so that they’re bought in. Do you still have command and control? Not in the same fashion, but they’re acting as coaches in compliance and ensuring that the right things happen, even in these automated pipelines. What about audit? How do you make that happen? Well because it’s all happening there, and all that data is rolling back into a value stream management platform, audit is easy. You can verify that the right things happen.”

Tasktop’s Carmen DeArdo said control gets baked into our objectives and our incentives. “I work with people whose complete incentive was around stability of production. They had no skin in the game to go faster but you’d think they could think a little more broadly about, OK, well, we’ll just never release another feature. We won’t be in business very long, but we’ll protect production, maybe. That’s what led to DevOps. Everybody has to have skin in the game to be aligned with goals. It’s not just deliver business value, but protect it and improve it. The product model is better at protecting because it elevates risk to be a first-class citizen rather than everything being subservient to features, and it’s the same with process.” z
—David Rubinstein
How does your solution help organizations on their value stream journey? Alex Tacho Director of product management, CloudBees CloudBees DevOptics solves the challenges of measuring and managing DevOps results by being the only solution purpose-built to provide visibility into collaborative delivery and DevOps performance. CloudBees DevOptics lets you map and visualize end-to-end software value streams with actionable insights to measure, manage and optimize software delivery across teams, improve DevOps performance and drive more value through faster business innovation. You get realtime value stream insights that automatically collects and analyzes up-to-theminute data across value streams, allowing you to break down silos, detect patterns and identify bottlenecks. CloudBees DevOptics provides that single view of the delivery process with key DevOps performance metrics including: Deployment Frequency (DF), Mean Lead Time (MLT), Mean Time To Recover (MTTR) and Change Failure Rate (CFR).
Eric Robertson, Vice president of product management and strategy execution, CollabNet VersionOne
The Enterprise Value Stream Management solution from CollabNet VersionOne provides a holistic approach to application development and delivery by applying the principles of Agile-plus-DevOps to the entire product delivery pipeline. As a result, organizations achieve:
• Process and flow improvements
• Increased management visibility
• Compatible data and measurements across tools
• Increased collaboration and knowledge sharing
• Decreased deployment delays, inefficiencies and errors
• Alignment with business strategy
These benefits apply to all stakeholders across the enterprise — from portfolio to program, release and team. The practice of Enterprise Value
Stream Management has had a significant impact on some of the world’s largest organizations and with the help of CollabNet VersionOne, brands are able to transform the trajectory of their business all by aligning software development and delivery with business objectives.
Lance Knight Senior vice president, general manager, ConnectALL ConnectALL’s Value Stream Integration solution helps enterprises of all sizes to connect, visualize, and measure end-to-end software delivery value streams. This holistic approach ensures greater velocity and predictability. ConnectALL wants companies to discover the benefits of value stream management with the understanding of how business value flows across an organization — connecting people, processes, and tools. We want to help companies understand the importance of integration in optimizing the way software is delivered. ConnectALL will help companies with a consultancy around value stream design — taking an end-to-end approach in value stream management from ideation to implementation to make the improvements to gain velocity and make processes more predictable. ConnectALL with its integration platform will span everything that production teams are supposed to do. The ConnectALL Value Stream Integration Platform integrates applications from the world’s leading vendors including Atlassian, Micro Focus, Microsoft, IBM, Salesforce, BMC, CA, Perforce, and more. Your teams can continue to use the best tools for the job while ConnectALL optimizes your Value Stream and seamlessly integrates the data between teams and applications.
Jeff Keyes Director of product marketing, Plutora “You can’t manage what you don’t measure.” Plutora creates a baseline of
the current state of value streams by pulling data from existing toolchains providing key metrics highlighting constraints for every product team across the portfolio regardless of their level of Agile and DevOps maturity. Normalizing the data allows for unified visibility across diverse methodologies, technologies and toolsets. Plutora then enables teams to create new “what-if scenarios” of development and delivery integrating those practices into the Plutora Platform’s management and governance capabilities. End-to-end release pipelines with associated scope are defined with phases of delivery and criteria gates and are integrated back into each development team’s tools. Environment requests and provisioning are centralized using Plutora’s environment management ensuring complete control and efficiency of pre-production IT environments. Plutora deployment management mixes existing automation with planning, approval, and execution control over cutover activities. The visibility and transparency created by Plutora creates collaboration and efficiencies between teams resulting in an enterprise system of insight measuring outcomes of each effort. It aggregates release, quality, and deployment data to transform the way application delivery teams solve problems, make decisions, and measure results. Data analytics and visualization turn structured data into rich, contextual insights.
Carmen DeArdo Senior Value Stream Management strategist, Tasktop In the Age of Digital Disruption, enterprises are quickly shifting to accelerate the delivery of business value. This shift requires that IT leaders be able to apply systems thinking to answer the question, “Where’s the bottleneck in my software value stream?” The first step to move to a value stream model is having end-to-end visibility across the flow of work performed (features, defects, risks and debt). Tasktop Integration Hub automates and visucontinued on page 43 >
chronological order. “Now,” he said, “you have the beginnings of a value stream.” In order to be successful, though, Plutora’s Keyes said, “they’re going to have to address some of the culture and the governance from the command and control, and how to incorporate all that governance, and audit and compliance and security requirements all as part of the life cycle, even though it may not all be automated. Oh.. Scary thought. Wait a minute, we’re talking about Agile and DevOps without a hundred percent automation? Yeah, you still can do that really effectively, and the way to get there, people are starting to look at value stream management to shine light on things and have orchestration to move things along faster.”
The people factor

An important step in getting to value stream thinking is changing the culture at work. Teams have to be given more autonomy to innovate and create, and organizations have to understand the role of leaders in their organizations. Tasktop’s DeArdo acknowledged that many in the organization will be cynical about value stream — especially those who have been working for many years and have heard many promises regarding how things can be made bet-
ter, only to see them fail. But he said culture change begins with stories of success that workers can relate to.

“What’s powerful in a culture are the stories that people can relate to that are happening at their company. Not at Netflix, not at Amazon. What really changes is when people in the company had success and they could tell their own story. That’s what motivated people. If you can take a team that’s notorious, and that has the street credibility and you can turn them into an advocate for what you’re doing, you know you’re on the right track. That’s going to drive more loyalty than anything else ... people wanting to be a part of something.”

Another factor for successfully implementing value stream methodologies is making sure the leaders understand their roles in making it happen. Knight sees leadership as being responsible for visibility into the system, continued on page 43 >

Creating a value stream map

[Example of software delivery value stream mapping. Source: Plutora]

Have you ever tried to explain to someone new to your hometown how to get to your favorite restaurant? Doing that verbatim is quite challenging; using a map helps you point to and show what you are talking about. You can point to where to make that left turn, after the right turn at the traffic light. It lets others understand where they are relative to their overall journey and ask for clarification and feedback. It streamlines communications and helps you plan and communicate on how to get where you want to be.

So how do you do that? Start simple. Pick one, non-complex development process and draw it out on a whiteboard. Define the phases of the value stream and the three or four gates that mark tasks all along the way from the starting point, to middle steps and to the finish. It’s good to note that in software development, each gate can be a small pipeline of substeps that need to be completed before the gate itself is complete and value can be moved downstream.

Managing your value streams lets you focus on delivering value to the customer. To do this, you need to connect multiple teams, tools and applications to gain clear visibility and insight into how the value flows through the software delivery process. This also means having a way to access metrics on DevOps performance across the teams, and tracking throughput and stability related to product releases. The goal is to deliver quality software at the speed that your customers want it, in order to drive value back to the organization.
—Alex Tacho and Michael Baldani, CloudBees
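As a toy companion to the mapping exercise described above, the hypothetical Python sketch below models a value stream as ordered phases, each with hands-on time and wait time, and reports flow efficiency and the largest wait state. The phase names and hours are invented for the illustration and are not drawn from any vendor's tool.

# A value stream modeled as ordered phases, each with hands-on ("active") time
# and wait time, in hours. Flow efficiency = active time / total elapsed time;
# the largest wait is usually the first bottleneck to investigate.
phases = [
    ("ideate / backlog", {"active": 4,  "wait": 72}),
    ("develop",          {"active": 40, "wait": 16}),
    ("test",             {"active": 12, "wait": 24}),
    ("release / deploy", {"active": 2,  "wait": 48}),
]

active_total = sum(p["active"] for _, p in phases)
wait_total = sum(p["wait"] for _, p in phases)
elapsed = active_total + wait_total

print(f"total elapsed time: {elapsed} h")
print(f"flow efficiency: {active_total / elapsed:.0%}")

bottleneck, data = max(phases, key=lambda item: item[1]["wait"])
print(f"largest wait state: {bottleneck} ({data['wait']} h idle)")

In practice this bookkeeping is what a value stream management platform automates by pulling timestamps from the connected tools; the point of the sketch is only that the map makes the wait states, and therefore the first bottleneck to attack, explicit.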
A guide to VSM tools

FEATURED PROVIDERS

• CloudBees: The CloudBees Suite builds on emerging DevOps practices and continuous integration (CI) and continuous delivery (CD) automation by adding a layer of governance, visibility and insights necessary to achieve optimum efficiency and control new risks. Since every company in the world is now a software company, this new automated software delivery system is becoming the most mission-critical business system in the modern enterprise. As today’s clear leader in CI/CD, CloudBees is uniquely positioned to define and lead this new category. CloudBees puts companies on the fastest path to transforming great ideas into great software and returning value to the business more quickly.

• CollabNet VersionOne: CollabNet VersionOne is the Enterprise Value Stream Management leader that accelerates high value software development and delivery, while improving quality and reducing risk. Our offerings provide global enterprise and government market leaders a cohesive solution, spanning idea through delivery, that enable them to capture, create, deliver and measure the flow of business value throughout their application development lifecycles.

• Plutora: Plutora provides value stream management solutions for enterprise IT, improving the transparency, speed and quality of software development and delivery by correlating data from across the toolchains and analyzing critical indicators of every aspect of the delivery process. Acting as the “catwalk above the factory floor”, Plutora ensures organizational alignment between software development with business strategy and provides visibility, analytics and insights into the entire value stream. This approach guides continuous improvement and digital transformation progress through the measured outcomes of each effort. Plutora ensures governance and management across the entire portfolio by orchestrating release pipelines, managing hybrid test environments, and orchestrating complex application deployments — all independent of methodology, team structure, technology, and level of automation.

• Tasktop: Tasktop Integration Hub connects the network of best-of-breed tools used to plan, build, and deliver software at an enterprise-level. As the backbone for the most impactful Agile and DevOps transformations, Tasktop enables organizations to define their software delivery value stream, and enables end-to-end visibility, traceability and governance over the whole process. Tasktop is an easy-to-use, scalable and reliable integration infrastructure that automates the flow of product-critical information across tools to optimize productivity, collaboration and adaptability in an unpredictable and fast-paced digital world.

• ConnectALL: ConnectALL, an Orasi company, is dedicated to helping companies achieve higher levels of agility and velocity. The company’s enterprise-level application integration platform — ConnectALL Integration Platform — helps with achieving effective Value Stream Management by connecting the applications used to collaborate, drive decisions, and manage artifacts used during the software delivery process, like ALM, Agile, and DevOps. With the ConnectALL Integration Platform, IT companies can accelerate software development and enhance collaboration.

• CA: Disparate tools may help an individual or a team do their job, but they impede the progress of the larger organization. With tools that span the application life cycle for planning, build, test, release and putting into production, CA (now a Broadcom company) provides an end-to-end view into the processes and products that deliver value for customers and bring efficiencies to the business.

• Electric Cloud: ElectricFlow provides teams with pipeline and environment management to create an executable value stream by connecting stacks, clouds and DevOps toolchains together. During pipeline execution, automated data collection and analytics connects metrics and performance back to the milestones and business value (features, user stories) being delivered in every release. Our out-of-the-box Release Command Center dashboard displays toolchain information in a consolidated, easy-to-interpret view, enabling teams to instantly review the quality, health, dependencies, pending approvals, test results, progress, and status of a release.

• GitLab: GitLab is a single application built from the ground up for all stages of the DevOps lifecycle for Product, Development, QA, Security, and Operations teams to work concurrently on the same project. GitLab provides teams a single data store, one user interface, and one permission model across the DevOps lifecycle allowing teams to collaborate and work on a project from a single conversation, significantly reducing cycle time and focus exclusively on building great software quickly.

• IBM: UrbanCode Velocity is built for Day 2 DevOps. Today, organizations have DevOps toolchains with numerous tools and struggle with different teams using different toolchains. Velocity provides a consistent view across the toolchains so you can easily see where work is, and where your bottlenecks are. Not just another dashboard, Velocity directs your automation. It triggers pipelines, enforces quality gates and coordinates your release efforts — all while making data visible and actionable.

• Intland: codeBeamer ALM is a holistically integrated Application Lifecycle Management tool that facilitates collaboration, increases transparency, and helps align software development processes with your strategic business objectives.

• Jama Software: Jama Software centralizes upstream planning and requirements management in the software development process with its solution, Jama Connect. Product planning and engineering teams can collaborate quickly while building out traceable requirements and test cases to ensure development stays aligned to customer needs and compliance throughout the process. With integrations to task management and test automation solutions, development teams can centralize their process, mitigate risk, and have unparalleled visibility into what they’re building and why.

• Micro Focus: Micro Focus helps organizations run and transform their business through four core areas of digital transformation: Enterprise DevOps, Hybrid IT Management, Predictive Analytics and Security, Risk and Governance. Driven by customer-centric innovation, our software provides the critical tools they need to build, operate, secure, and analyze the enterprise. By design, these tools bridge the gap between existing and emerging technologies — enabling faster innovation, with less risk, in the race to digital transformation.

• Panaya: Value Stream Management is about linking economic value to technical outcomes. Though not unique to the Enterprise, large organizations have specific challenges and needs: siloed teams, waterfall or hybrid operational modes, as well as many non-technical stakeholders. Panaya Release Dynamix links IT and business teams with an intuitive tool that strategically aligns demand streams with the overall business strategy.

• Targetprocess: To connect portfolio, products and teams, Targetprocess offers a visual platform to help you adopt and scale Agile across your enterprise. Use SAFe, LeSS or implement your own framework to achieve business agility and see the value flow through the entire organization.

• XebiaLabs: The XebiaLabs DevOps Platform provides the backbone for comprehensive release orchestration, managing, controlling, and offering full visibility into the end-to-end DevOps pipeline. It allows both business and technical teams to easily spot bottlenecks and analyze inefficiencies in their processes, so they can optimize the entire software delivery value stream. z

< continued from page 38

alizes the flow of work items across the various specialized tools used by different teams and departments to deliver software. The next step is having analytics that allow companies to understand where work is flowing and where it’s slowing down across a value stream. Tasktop helps enterprises answer this question by providing a comprehensive set of Flow Metrics based on the Flow Framework created by Dr. Mik Kersten. The Flow Metrics capture the elements of velocity, time, load, distribution and efficiency for all the work done in a product value stream, by utilizing data from
the work item artifacts in the enterprise's delivery toolchain. These metrics, when combined with the product business results (value, cost, quality, happiness), provide a comprehensive view, which can be used by business and IT leaders as part of their continuous improvement process to identify bottlenecks, allocate investment, and determine actions to optimize the flow of business value. Moving to a product value stream approach is a journey. Tasktop supports companies on this journey with the ability to start with a single value stream and develop a model that can then be applied and scaled across the enterprise in a sustainable way. z

< continued from page 41

and gaining knowledge from that visibility. "Too often, the CIO will go, 'The business has asked for this. When can you give it to me?'" Knight explained. "And the next thing you know, four years down the road, they say, 'You promised me this a year and a half ago.' And that happens a lot in IT. If we frame this about being more predictable, increasing velocity and actually knowing when you put something into the system you're going to get it, that is the message everyone will align to."
Yet companies always want to measure themselves against competitors in their industry, to see where they fall on the path. This is especially difficult in value stream management, because each organization's "true north" and values are different. So how can an organization tell if it's being successful in implementing and utilizing a value stream?
CloudBees' Tacho said it doesn't matter how organizations define value. "If that value is driving revenue, solves customer or user needs, creates efficiencies or saves on costs, and has a number of process steps that can be visualized, then it can be modeled into a value stream. Benchmarks available as a baseline to start measuring performance against, and where to improve, will be driven based on the industry you are in. But from a software delivery and DevOps perspective, we can tap into several years of research by the DevOps Research and Assessment (DORA) group that shows statistical data on how organizations have been progressing towards DevOps performance for software delivery."
According to CollabNet VersionOne's Robertson, "The measurement component is where you differentiate from value stream mapping and head into management. When you start to understand metrics, when you start measuring material movement and flows of information, and start looking at metrics like duration and delay against activities … these are very key metrics that are utilized to be able to understand and very quickly identify value vs. non-value-add activities — waste — in terms of delay, including handoffs to a point, and all those kinds of things. So having a clear understanding and a good foundation around these measurements and KPIs, that is very reusable, and that's where a lot of the folks are struggling."
For organizations on a value stream journey, it is important to know that it is a winding path that crosses other methodology streams along the way. As Robertson explained, "It is evolving. But you're not going to see a value stream transformation. You'll see a DevOps transformation, you'll see an Agile transformation, you'll see business agility transformation. You won't see a value stream initiative on the books. But what they'll learn very quickly is you need value streams in order to be able to enable that business agility, to enable that successful DevOps transformation, to enable that scaling Agile initiative that you're undertaking." z
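The flow metrics and the duration and delay measurements discussed above are easier to picture with a small example. The sketch below is purely illustrative; it is not Tasktop's Flow Framework implementation or any vendor's API, and it assumes a hypothetical work-item record pulled from a delivery toolchain. It shows how flow velocity, flow time, flow load, flow distribution and flow efficiency could be derived from the timestamps on delivered work items.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class WorkItem:
    # Hypothetical work-item record; field names are illustrative, not a vendor schema.
    kind: str                      # "feature", "defect", "risk" or "debt"
    started: Optional[datetime]    # when active work on the item began
    finished: Optional[datetime]   # when the item was delivered
    active_days: float = 0.0       # days of hands-on work, as opposed to waiting

def flow_metrics(items: List[WorkItem], window_start: datetime, window_end: datetime) -> dict:
    """Summarize flow velocity, time, load, distribution and efficiency for one window."""
    done = [i for i in items
            if i.started and i.finished and window_start <= i.finished <= window_end]
    in_progress = [i for i in items if i.started and not i.finished]

    # Elapsed calendar days from start to finish for each completed item (minimum one day).
    durations = [max((i.finished - i.started).days, 1) for i in done]
    total_elapsed = sum(durations)

    return {
        "velocity": len(done),                               # items completed in the window
        "time": total_elapsed / len(done) if done else 0.0,  # average flow time, in days
        "load": len(in_progress),                            # work in progress right now
        "distribution": dict(Counter(i.kind for i in done)), # mix of work types delivered
        # Share of elapsed time spent actively working rather than waiting.
        "efficiency": sum(i.active_days for i in done) / total_elapsed if total_elapsed else 0.0,
    }
```

In practice the hard part is upstream of this calculation: normalizing artifacts from many specialized tools into one common work-item model so that the numbers are comparable across a value stream.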
Guest View BY MARK TROESTER
The new language of high productivity
Mark Troester is VP of Strategy at Progress.
Each new — and not so new — technology trend brings its own language, complete with acronyms, jargon and marketing-speak. Consulting services website Connet lists more than 3,000 computer acronyms. Here we are going to focus on one of tech's current hot topics: low-code, high-productivity application development platforms. Acronyms are flying: RMAD, RADP, LCDP, MADP, hpaPaaS and more.
Some of the acronyms may be new, and marketers are spinning the terms "low-code" and "high productivity" in new ways, but these technologies have been around. For a long time. You could say that almost every innovation in software is about "low code" and "high productivity." Even going all the way back to 1972, when the programming language C came on the scene, it was a radical departure from predecessors COBOL and FORTRAN in terms of the amount of code required and readability. Another significant breakthrough in usability came in 1981, when James Martin coined the term 4GL (Fourth Generation Programming Language) in his aptly titled book, Application Development Without Programmers. 4GLs aimed to enhance programmer efficiency with a more natural language syntax and tooling that utilized GUIs (Graphical User Interfaces). Some even argue that modern low-code tools are simply the current evolution of 4GLs.
So why is the tech community so focused on these platforms now? With the amount of data growing exponentially, and with virtually everyone carrying a digital device, the pressure to develop applications to put all that data to good use has never been greater. At the same time, while demand for developers keeps growing, the size of the talent pool isn't keeping pace with that demand. Recent research by The App Association estimates that nearly 250,000 developer jobs remain unfilled today in the U.S. alone, with that number expected to triple in the next three years.
Progress conducted a survey of more than 5,500 web and mobile application developers to gauge their workloads and sentiment about low-code, high-productivity platforms on the market today. With 39% of developers expected to build 2 to 4
new apps in the next 12 months, and 14% expected to build 5 to 10 new apps over the same period, the survey validated that developers indeed face greater pressure than ever to produce apps — fast. That same survey also showed some skepticism among developers about these platforms. Two-thirds (66%) had negative feelings about them, mostly due to loss of control of their code. But with market researchers expecting explosive growth in this area over the next few years, a lot more developers are going to be using these platforms — like it or not.
Sorting through the hype is no easy challenge, as even the definitions of "low-code" and "high productivity" themselves seem muddled. Although the term "low-code app development" is fairly new, as mentioned before, the concept is not. Rapid Application Development (RAD) has been around for well over a decade, and Business Process Management (BPM), the ongoing methodology to automate all ad hoc business processes, first gained popularity in the 1990s. Today's low-code solutions aim to extend the benefits of Application Platform-as-a-Service (aPaaS) tools, accelerating app delivery with templates and drag-and-drop prebuilt elements and objects that spare developers from manually coding, from scratch, apps made up of common features and components. And then there are no-code platforms, gaining in power and popularity and blurring the line with low-code.
The premise is simple — give developers the tools they need to quickly create, as well as run, applications — but the language surrounding it is anything but. Take hpaPaaS (High-Productivity Application Platform-as-a-Service), a visual, model-driven approach that enables a broad range of individuals, including citizen developers, to build and deploy apps. Not to be outdone, hcaPaaS (High-Control Application Platform-as-a-Service) platforms are geared to professional developers, giving them more control over their work.
From 4GLs to BPM to the drag-and-drop world of MADP (Mobile Application Development Platforms) through the multitude of "as a Service" acronyms, low-code platforms — and their acronyms and marketing-speak — are here to stay, and they appear poised to shorten the time it takes to build and deploy powerful, modern apps. z
Analyst View BY MICHAEL AZOFF
Managing machine learning
In July 2018 I wrote here about the next evolution of application life-cycle management (ALM), which is extending its reach into the space of DevOps continuous delivery management, helping to deliver full traceability from requirements to deployed code. ALM tools, and the art and science of software engineering supporting them, have much to teach the emerging space of machine learning (ML) life-cycle management.
Artificial intelligence (AI) has emerged in recent years from a period of reduced research funding, called an AI winter, that started in the late 1990s. The most exciting technology that caused this healthy resurgence is deep learning (DL), a branch of ML, which is itself a branch of AI. To be clear, we are still talking about advanced "signal processing" (terminology that electrical engineers will recognize) with a degree of intelligence, what Ovum calls machine intelligence, a phase in the evolution of AI that is not quite narrow AI and still a long way away from general AI (the point where intelligent machines can match human intelligence). Despite this limited scope, DL systems are useful: They can perform functions superior to human performance in a range of activities, making them ready for real-world applications. The car industry, for example, is spending millions on autonomous driving research based on machine intelligence, and the fruits of this research are already feeding into advanced driver-assistance systems.
Some of the industries most impacted by ML today include finance, where algorithmic trading has transformed the investment industry; health, where doctors have assistants for image analysis and research mining, and the pharmaceutical industry is accelerating drug discovery; customer service, with AI-powered virtual assistants and front-line customer support; and telecommunications, in both the customer care business and in network engineering. ML applications are also set to expand, driven by the rollout of mobile 5G and next-generation technologies, from hyperconverged infrastructure to cloud-native computing. These in turn will grow edge computing and the Internet of Things, creating opportunities for ML applications at the edge as compute power increases and costs reduce.
Against this surge of ML activity, enterprises looking to deploy such applications are finding there is a serious cultural gap in how the
deployment is managed. Data science and data engineering are relatively young fields, and for many years they have operated largely in research mode. What has changed is that enterprises are now releasing multiple ML applications into production and finding that, while traditional software applications have ALM tools to support development and deployment, the need for such support for ML applications is only beginning to be appreciated.
From a life-cycle management viewpoint, ML applications equate most closely with software applications that have complex database activities. The data dimension in ML applications is as important as the algorithmic dimension. Managing data is a hugely complex task that needs to be supported in training ML applications and then supported at scale in production (inference mode). Enterprises serious about the deployment of ML applications will need ML life-cycle management tools — this will become a hot space. Some of the players/products include CognitiveScale, DataKitchen, DataRobot, MLflow, ParallelM MLOps, and Valohai. In addition, Google's Kubeflow open-source project is complementary to this community, as it focuses on creating a platform for running containerized ML components managed by Kubernetes.
The key challenge for the data science/data engineering community is a cultural one. Adopting ML life-cycle concepts is a matter of process maturity, and the community is discovering this need the hard way, through mistakes. But we expect this will change: running data science projects in silos with small-scale production requirements is rather different from embedding ML components in business-critical systems at scale. For example, take one aspect, version control — a discipline well-drilled into software engineers. For data scientists and engineers there is the need to version control the data sets used in training, testing, and validation, plus configuration files, hyperparameter sets and algorithm versions. To reproduce results, everything needs to be version controlled. As data science and engineering mature, recognition of the role of ML life-cycle management will also grow. z
Michael Azoff is a Distinguished Analyst at research and consulting company Ovum.
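To make the column's version-control point concrete, here is a minimal sketch of recording everything that shapes a training run. It is purely illustrative: the helper names and manifest fields are hypothetical, it assumes the code lives in a local Git checkout, and it is not the API of MLflow, Kubeflow or any other product named above.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def file_fingerprint(path: str) -> str:
    """Content hash of a data set or config file, so the exact bytes can be identified later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_experiment(train_set: str, test_set: str, config_file: str,
                      hyperparams: dict, out_dir: str = "runs") -> Path:
    """Write a manifest that pins code, data, config and hyperparameters for one training run."""
    # Algorithm version: the commit of the code that produced the run (assumes a Git checkout).
    code_version = subprocess.run(["git", "rev-parse", "HEAD"],
                                  capture_output=True, text=True).stdout.strip()
    manifest = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "code_version": code_version,
        "train_set_sha256": file_fingerprint(train_set),
        "test_set_sha256": file_fingerprint(test_set),
        "config_sha256": file_fingerprint(config_file),
        "hyperparameters": hyperparams,   # learning rate, depth, random seeds, and so on
    }
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    run_id = manifest["timestamp"].replace(":", "-")
    manifest_path = out / f"manifest_{run_id}.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path
```

A manifest like this is only the minimum needed to reproduce a result; the life-cycle tools named in the column add the harder parts, such as artifact storage, lineage across runs, and promotion of models into production.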
Industry Watch BY DAVID RUBINSTEIN
Processing changes in process
David Rubinstein is editor-in-chief of SD Times.
I find that I'm writing an awful lot about process these days, and I have figured out why. It's because, if your company is like one of my old ones, your processes are awful.
It's not that all the processes themselves are awful, though some are truly poorly thought-out. A big part of the process problem is that there are too many of them, some conflicting, some vague, some enforced, some not. If there's one thing all workers can agree on, it's that there's nothing worse than a process that's not hard and fast. For more on this, you can look to Microsoft, famous back in the day for offering developers nine different ways to do the same thing. Developers don't want choice (at least not in a process); they want to know the best way to do something and then to do it. Black and white. Remove the gray area. Gray areas slow things down.
I once worked for an organization that had five different communications channels — email, AIM Messenger, an internal messaging system tied to its production software, printed data sheets that would be updated a number of times throughout the day, and — of course — getting up and actually talking to someone you needed something from. Upper management decided this was untenable, that too many emails were going to too many people only marginally invested in the message. So it came to pass that the company would henceforth use Slack as its lone communications channel.
Friends still working there report in frustration that the company now has SIX different communications channels. Older workers didn't want to learn Slack, so they stayed on email and getting up to talk. Younger workers would send Slack messages, but had to resort to getting up when the older workers didn't look at the messages. Slack did a nice job of mostly eliminating the printed data sheets, and the internal messaging system has fallen largely into disuse, but email remains the top way to communicate — a constant source of annoyance to those only marginally invested in the emails.
Process change requires follow-up. It's not enough for management to identify a problem,
come up with a process to solve it, and then — after much back-patting — move on to the next thing. Management needs to check in, see if the new process is being used, and ask how it's impacting performance — helping, or hurting?
Then, there is the question of 'How much process is enough, and how much is too much?' As part of my interviewing process for the Value Stream Management buyers guide in this month's edition, I spoke with Tasktop senior strategist Carmen DeArdo about process in software development. He had this to say:
"I think the people doing the work know the best about how to improve. Jonathan Smart [the former Head of Ways of Working at Barclays] said, you want to provide guardrails to keep them on the road, but inside that, you want to give them room to move around freely and innovate. As much as possible, you need to let the team take control of their own journey and allow them within those guardrails to go with it. I try to get people to think about the nouns in their artifacts, things like initiatives and features and stories, and not get hung up on the tools. Don't get hung up on the tools. Tools come and go. You should have an exit strategy for your tools before you implement them. Don't fall in love with a tool or a process; fall in love with getting better.
"Companies have people with great ideas … much better ideas than the leaders have. They're going to work every day. And yet we don't harness that. We could have never guessed on some of the ideas these teams would come up with once we unleashed that. If we're trying to hire the best talent we can, let's listen to them, let's utilize them. I don't think we have that right for the most part. We tend to over-govern, over-control. Why in the heck are we making our process so confining? Why do we need it? The process should serve us; we shouldn't serve the process."
That's it right there. We've become slaves to process. That stems from the command-and-control background the industry grew out of. Organizations feared the chaos that would ensue if they loosened up on the processes. It's important for business leaders to develop trust that the people they're hiring know best how to complete their tasks and advance the business mission.
Now process THAT! z