AUGUST 2018 • VOL. 2, ISSUE 14 • $9.95 • www.sdtimes.com
Contents
VOLUME 2, ISSUE 14 • AUGUST 2018
FEATURES
8   The reality of augmented reality
24  Rethinking the way you build software with serverless

NEWS
6   News Watch
12  The Internet never forgets a face
15  How to organize a UX team to complement your development team
16  Kotlin gains ground in Android
18  DARPA to explore the "third wave" of artificial intelligence
20  Gadgets and gizmos aplenty
22  GitLab 11.0 released with Auto DevOps

COLUMNS
44  GUEST VIEW by Matt Ellis: In praise of open source
45  ANALYST VIEW by Peter Thorne: Scope, silos stifle software innovation
46  INDUSTRY WATCH by David Rubinstein: Flowing value into your transformation

AGILE SHOWCASE
30  The many faces of Agile
32  CA Technologies: Moving the needle for Agile teams
35  Enabling Agile on the enterprise mainframe
36  Micro Focus ALM Octane platform enables enterprise Agile collaboration
40  Variations on test-driven development

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza, ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITOR: Jenna Sargent, jsargent@d2emerge.com
INTERN: Ian Schafer, ischafer@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS: Cambashi, Enderle Group, Gartner, IDC, Ovum

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, adtraffic@d2emerge.com
LIST SERVICES: Shauna Koehler, skoehler@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com

PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com
Statement of Ownership, Management, and Circulation for SD Times as required by 39 U.S.C. 3685 PS Form 3526; SD Times, publication number 0019-625, filed July 17, 2018, to publish 12 monthly issues each year for an annual subscription price of $179. The mailing address of the office of publication, the headquarters of business of David Lyman, President and Publisher; David Rubinstein, Editor-in-Chief, is 2 Roberts Lane, Newburyport, MA 01950. The owner is D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Known bondholders, mortgagees, and other security holders owning or holding 1% or more of total amount of bonds, mortgages, or other securities are David Lyman, 2 Roberts Lane, Newburyport, MA 01950, and David Rubinstein, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Tax status has not changed during the preceding 12 months. The average number of copies of each issue published during the twelve months preceding the filing date includes: total number of copies (net press run): 18,992; paid and/or requested outside-county mail subscriptions: 17,450; paid/requested in-county subscriptions: 2; sales through dealers, carriers, and other paid or requested distribution outside USPS: 0; requested copies distributed by other mail classes through the USPS: 0; total paid and/or requested circulation: 17,450; outside-county nonrequested copies stated on PS form 3541: 1,395; in-county nonrequested copies stated on PS form 3541: 0; nonrequested copies distributed through the USPS by other classes of mail: 0; nonrequested copies distributed outside the mail: 0; total non-requested distribution: 1,395; total distribution: 18,992; copies not distributed: 37; for a total of 19,029 copies. The percent of paid and/or requested circulation is 91.88%. The actual number of copies of the July 1, 2018 issue includes: total number of copies (net press run): 18,005; paid and/or requested outside-county mail subscriptions: 17,002; paid/requested in-county subscriptions: 0; sales through dealers, carriers, and other paid or requested distribution outside USPS: 0; requested copies distributed by other mail classes through the USPS: 0; total paid and/or requested circulation: 17,002 outside-county nonrequested copies stated on PS form 3541: 1,003; in-county nonrequested copies stated on PS form 3541: 0; nonrequested copies distributed through the USPS by other classes of mail: 0; nonrequested copies distributed outside the mail: 0; total non-requested distribution: 1,003; total distribution: 18,005; copies not distributed: 37; for a total of 18,042 copies. The percent of paid and/or requested circulation was 94.42%. I certify that all information furnished on this form is true and complete. –David Lyman, Publisher.
NEWS WATCH

WSO2 introduces an integration agile approach to microservices
WSO2 announced the Summer 2018 release of its agile integration platform with a new approach to implementing microservices at WSO2Con US 2018 in San Francisco last month. According to the company, while microservices are the software architecture of choice, agile development is being hindered by legacy technology. The Summer 2018 release aims to address this with new product developments and offerings designed to support microservices architectures.
"To successfully build modern architectures, we have to free developers from the waterfall orientation of traditional integration and empower development teams to autonomously operate by becoming integration agile," said WSO2 CEO Tyler Jewell. "Our Summer 2018 Release advances this evolution with open, integrated products and services that enable enterprises to capitalize on microservices to innovate new digital products that operate throughout their organizations and across ecosystems."
Facebook, Google, Microsoft, Twitter launch data project
Facebook, Google, Microsoft and Twitter have officially launched the Data Transfer Project, an open-source initiative designed to enhance the data portability ecosystem. As part of the project, organizations will utilize open-source code to provide a common framework that enables portability and interoperability of data.
"Moving your data between any two services can be complicated because every service is built differently and uses different types of data that may require unique privacy controls and settings. For example, you might use an app where you share photos publicly, a social networking app where you share updates with friends, and a fitness app for tracking your workouts. People increasingly want to be able to move their data among different kinds of services like these, but they expect that the companies that help them do that will also protect their data," Steve Satterfield, privacy and public policy director for Facebook, wrote in a post.
The project's principles include:
• Building for users
• Privacy and security
• Reciprocity
• Focusing on user data
• Respecting everyone
Broadcom to acquire CA for $18.9 billion
The infrastructure technology management software solution provider CA Technologies has announced it is entering into a definitive agreement to be acquired by the semiconductor company Broadcom. Broadcom is acquiring the company for $18.9 billion. CA shareholders will receive $44.50 per share in cash.
"We are excited to have reached this definitive agreement with Broadcom," said Mike Gregoire, CA Technologies CEO. "This combination aligns our expertise in software with Broadcom's leadership in the semiconductor industry. The benefits of this agreement extend to our shareholders who will receive a significant and immediate premium for their shares, as well as our employees who will join an organization that shares our values of innovation, collaboration and engineering excellence. We look forward to completing the transaction and ensuring a smooth transition."

Rollout.io previews access into real-time feature deployment
The organic feature delivery company Rollout.io unveiled the preview release of Rollout Visibility, a new solution designed to give organizations access to the status of feature deployment in real-time. According to the company, this will allow them to make smarter business and technical decisions.
Rollout Visibility is able to provide customers with complete visibility, flexibility, and agility for monitoring all aspects of feature deployments to ensure that it meets KPIs. This results in "organic feature delivery," which is when features behave like dynamic organisms, evolving independently, the company explained.
Google releases Jib for containerizing Java applications
Google introduced a new open-source project for Java developers. Jib is a Java containerizer designed to help Java developers build containers using the tools they already know. The company explained that while containers can make developing Java workflows easier, Java developers are often not container experts, which can make the process of containerizing their apps difficult.
Jib addresses this by handling all of the steps of packaging an application into a container image, according to Google. It is directly integrated into both Maven and Gradle, and does not require the developer to have Docker installed or write a Dockerfile. Because of this tight integration, Google explained, the project has access to all of the information it needs to package applications. Variations in Java builds will be automatically picked up during subsequent container builds.
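As a rough illustration of that Gradle integration, a build that applies the Jib plugin needs little more than a target image name. This is a minimal sketch, not code from the article: the plugin id is Jib's published Gradle plugin, but the version number and registry path shown are placeholders you would replace with your own.

```kotlin
// build.gradle.kts — a minimal sketch of containerizing a Java project with Jib
plugins {
    java
    // Jib's Gradle plugin; pin whichever version your project has verified
    id("com.google.cloud.tools.jib") version "0.9.10"
}

jib {
    to {
        // Hypothetical registry path; replace with your own project/repository
        image = "gcr.io/my-project/my-app"
    }
}

// Running `./gradlew jib` builds and pushes the image without a local Docker
// daemon; `./gradlew jibDockerBuild` builds into the local daemon instead.
```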
GPL Cooperation Commitment gets more support
Red Hat announced its open source license enforcement initiative is making new strides. As part of the GPL Cooperation Commitment, 14 new companies have joined the effort to promote greater predictability for GPLv2 and LGPLv2.x licenses. The new companies are Amazon, Arm, Canonical, GitLab, Intel, Liferay, Linaro, MariaDB, NEC, Pivotal, Royal Philips, SaS, Toyota, and VMware. Other existing members include: Red Hat, Facebook, Google, IBM, CA Technologies, Cisco, HPE, Microsoft, SAP and SUSE.
According to Red Hat, by joining the GPL Cooperation Commitment, these companies reject harsh tactics in open-source licenses and support the idea that personal or corporate gain is not appropriate in open source.
Columbia University, IBM to create new blockchain center
IBM and Columbia University are teaming up to launch a new center dedicated to researching, educating and innovating blockchain technology and data transparency. This is just one of the recent steps IBM has made to accelerate the adoption of blockchain.
The Columbia-IBM Center for Blockchain and Data Transparency will focus on:
• Researching areas of data transparency and blockchain across industries
• Building new technology capabilities to apply blockchain in new ways
• Participating in emerging policy and regulation related to blockchain and data transparency
• Providing new ways to balance regulatory and data ownership issues
• Strengthening and expanding professional blockchain and data transparency skills
• Supporting startups
IBM continues to bet on Java with WebSphere Liberty
IBM has announced the latest release of WebSphere Liberty, its Java application server solution. The company says WebSphere Liberty 18.0.0.2 is the most significant functional release in years, and comes with the first compliant Java Enterprise Edition (EE) 8 runtime.
"Over the past 22 years, Java has remained a top programming language, and it continues to rapidly evolve for the cloud-native era. IBM is committed to staying at the forefront of Java development so that our clients benefit from the very latest Java EE and Spring technology updates," Denis Kennelly, GM of cloud integration at IBM, wrote in a post.
The latest release features the most recent Java EE 8 technologies and MicroProfile features. "We are also announcing that we have become the first vendor to pass the Java EE 8 compatibility tests, certifying WebSphere Liberty ahead of anyone else," Kennelly explained.
CA automation platform to be more developer-centric
CA Technologies released version 12.2 of its CA Automic One Automation platform with a focus on making the tools within the suite more developer-centric. Updates were made to the CA Automic Workload Automation, Continuous Delivery Automation and Automic Service Orchestration components of the platform.
With the updates, "We allow developers to define a workflow or continuous delivery pipeline without having to leave their development environment," Gwyn Clay, VP of product management for CA Automation, told SD Times. "Developers can interact with the systems from a code perspective, while operations can interact via the visual workbench with operations processes and business processes."
This is done via automation-as-code features that let developers work on customer experience rather than operational tasks, the company said in a statement announcing the release.
GitLab moves from Azure to Google Cloud Platform
As part of its plan to improve the performance and reliability of GitLab, the company has announced it is migrating the site from Microsoft Azure to Google Cloud Platform. According to the company, it wanted to move to GCP so that it could run GitLab on Kubernetes. Earlier this year, GitLab shipped native integration with Google Kubernetes Engine (GKE), which has the most robust and mature Kubernetes support available. Moving to GCP was the next step in the plan.
Though this plan has been
in the works for months, this announcement comes shortly after Microsoft announced its acquisition of GitHub. In a statement given to SD Times reacting to the acquisition, Sid Sijbrandij, CEO of GitLab, said, "Microsoft likely acquired GitHub so it could more closely integrate it with Microsoft Visual Studio Team Services (VSTS) and ultimately help drive compute usage for Azure."
Atlassian: Feature flagging now in Jira
Atlassian is bringing new feature flagging integration into Jira Software. The company announced integration for LaunchDarkly and Rollout. "We believe bringing additional context about flags into Jira will help improve team coordination and collaboration around product releases and generally make the practice of feature flagging more effective for both developers and the rest of the team," the company wrote in a blog post.
LaunchDarkly is a feature management platform designed to eliminate risks. "As companies transition to a world built on software, there is an increasing requirement to move quickly—but that often comes with the desire to maintain control. LaunchDarkly is the feature management platform that enables dev and ops teams to control the whole feature lifecycle, from concept to launch to value," the team wrote on its website.
Rollout is a feature management platform for the enterprise designed to minimize risks and accelerate development with feature flags and controlled rollouts.
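In practice, a feature flag is just a runtime switch that decouples deploying code from releasing it. The sketch below is a generic, invented illustration of that idea in Kotlin, not LaunchDarkly's or Rollout's actual API; the flag name and in-memory store exist only for the example, whereas real platforms evaluate flags per user or segment and update them remotely.

```kotlin
// A minimal, vendor-neutral sketch of gating a code path behind a feature flag.
class FeatureFlags {
    private val enabled = mutableMapOf<String, Boolean>()
    fun isEnabled(flag: String): Boolean = enabled[flag] ?: false
    fun set(flag: String, value: Boolean) { enabled[flag] = value }
}

fun renderCheckout(flags: FeatureFlags): String =
    if (flags.isEnabled("new-checkout-flow")) "new checkout UI" else "legacy checkout UI"

fun main() {
    val flags = FeatureFlags()
    println(renderCheckout(flags))        // legacy checkout UI
    flags.set("new-checkout-flow", true)  // controlled rollout: flip without redeploying
    println(renderCheckout(flags))        // new checkout UI
}
```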
The reality of augmented reality
No longer just for games, AR/VR is giving businesses informational visualization
BY IAN C. SCHAFER
While the first thing that might come to mind when picturing the near-future applications of augmented reality is snazzier Snapchat filters, for major players in AR development, it has serious potential in business environments, with a definite roadmap, and the use cases have already been illustrated.
Hardware manufacturer Stanley Black & Decker, for instance, has made use of Meta's proprietary "immersive AR" headset for training and to display schematics of high-value equipment right before their engineers' eyes during repairs. This sort of real-time 3D informational visualization is a major focus for developers of enterprise AR who hope to bring the technology into their workplace as a common tool, Meta's chief revenue officer Joe Mikhail said.
"We've been at it for several years, and generation over generation, we have identified the general productivity needs in the workforce so that every professional in the office can use [AR], which is 3D presentation," Mikhail said. "So we can all tell stories and transfer knowledge by actually building experiences that are immersive and interactive, so you and I across the world can have a full-blown experience where we share ideas on other products
or concepts and we can both manipulate and touch them and have a real conversation in real time. That is our first-party generalizable application, which we’re seeing a lot of engagement from the market around.” Mikhail says the company has whittled down the use-cases that appear to be the most sought-after by enterprises looking to insert AR experiences into their work environment.
Three key use cases
"The first of three use-cases we see that are general for the entire market are sales and marketing presentations," Mikhail said. "So it changes how we sell products and it improves conversion rates — that's what our customers are saying — as well as to possibly sell
products that are expensive to build at a lower cost by using a lot of heavy prototypes. With Stanley Black & Decker, in addition to consumer toolsets, they build industrial tooling solutions for railroads, maintenance and things like that, multi-million-dollar pieces of equipment, $500,000 prototypes, and can't even ship it around the world to show customers, so they go back to blueprints and slides. This changes the entire experience when you can see it at 1:1 to scale in context, and it helps them sell more."
The second use case is design review, he continued. "If we can shorten the design cycle by not waiting for 3D prints, lowering the costs, again having decision makers be able to see a high-fidelity prototype virtually across the world and save the cost and the time for
design reviews, that's a huge use-case."
The third use case is corporate training in pre-production, when the manufacturing line is being built and a staff is being created before the project is launched. "On day one, they're productive as they've trained in full around the project virtually."
For Paul Reynolds, founder of augmented and virtual reality prototyping platform Torch3D, these sorts of applications of the technology will be invaluable and eventually ubiquitous, just as other ways to visually share information have become mundane through advancement.
"If you think about how our modern-day smartphones have changed how we work and how we communicate with each other, even in an internal, profes-
sional setting, snapping a photo of a whiteboard is kind of a no-brainer these days, as is showing a video you recorded on your phone as a way of helping people see what you’re trying to communicate to them. “If you think about that metaphor and you think about 3D,” he added, “my ability to put a virtual 3D object in the meeting room, or my ability to quickly put together a spatially arranged concept in collaboration with my stakeholders — if you think about how much conversation time that saves, it’s pretty huge ... it could be a physical spatial thing where maybe you and I are brainstorming a new retail display or a new warehouse layout.” So while the ideas are fully formed and the first steps have been made
toward bringing AR into the workplace, industry professionals say that there are a few reasons that despite rapid advancements in AR technology, broad adoption of AR in workplaces outside of heavy industry using proprietary headset technology is a few years away. The first is a major skill gap that needs to be bridged before advancement moves at a pace rapid enough for enterprise applications that utilize AR. While many skilled developers are ready for the rapid delivery and turnaround required for enterprise development, they're missing a key ingredient that will rocket AR development forward — the ability to think in three dimensions. That's where expertise from the video game industry comes in
handy, said Tim Huckaby, founder of Xenoholographics, a subsidiary of his company Interknowlogy, a Microsoft partner, which is currently developing AI-powered applications for Microsoft’s Hololens augmented reality platform and mobile devices. That effort is bouncing off of the company’s work creating applications for Microsoft’s Kinect motion sensor device for the Xbox line. “An enterprise developer, typically, would be very good at the software architecture for CRUD applications, but this 3D world is a world that most engineers and most developers have never lived in,” Huckaby said. “They don’t have any formal training in it, they didn’t go to school for it.”
Gaming leads the way
It doesn't help, Huckaby said, that the primary tools for developing any kind of augmented or virtual (XR) applications are video game engines, especially C#-based Unity, that those outside of game development have little to no experience in. The only answer, in Huckaby's mind, is collaboration between the companies responsible for developing these environments.
"Game engines have facilitated [AR development], but we're now going to need that type of high-level runtime engine for augmented reality so that the common enterprise developer doesn't have to have a math background or a 3D background," Huckaby said. "They essentially can program with all of the tools forming the platform, like they have right now in Visual Studio and some of the other lifecycle tools we have, but there's a huge, huge gap between enterprise programming right now in Visual Studio and lifecycle tools — and frankly on the other platforms too — and what Unity is, which is specifically designed for games and 3D creations. There's a giant grand canyon between the two and they barely talk. And I don't know how it's going to be fixed unless Microsoft and Unity do a joint venture — or anything — together."
The idea of game developers leading the charge in XR design isn't unique to Huckaby's experience, as Reynolds echoed many of his points, elaborating that the skill gap was actually a bit of a surprise for XR frontrunners like Reynolds, who comes from a long career in AAA game development and advising at companies like Electronic Arts and Atari, Inc., and who got his start in the XR development world with the Magic Leap platform in 2014, where his role evolved from game developer to non-game application development lead.
"We all just kind of assumed everyone was going to use game engines and didn't really pay attention to the fact that that's a difficult learning curve and probably not necessary for most people and the ideas that they want to build," Reynolds explained. "The deepest well of interactive 3D experience we have is in the gaming world. There are some things that make sense between making entertainment and game experiences and making 3D applications. I came from the same mentality — 'we'll just support popular game engines for the MagicLeap — we'll support Unity and Unreal.' And the mentality for a lot of people was, well, Unity is the easiest version of true game development that there's been, but that doesn't mean it's actually easy to use. It's just the easiest version of game development and, in particular, we watched a lot of talented user experience people and designers who just couldn't iterate with the technology to help us figure out what is this next generation of interaction that makes sense for the most people."
Huckaby thinks this leaves a big opening for another company to step in. While he mused that it could be Epic Games with its C++-based Unreal Engine, Amazon has already taken steps to make XR development more friendly to those with a different background and even those with little-to-no coding experience with their browser-based Amazon Sumerian utility running on Amazon Web Services. But those proprietary solutions come with their own set of hurdles, Reynolds said.
"If an engineer is fully committed and comfortable with the AWS ecosystem, then Sumerian could be a viable alternative to game engines like Unity or Unreal," Reynolds said. "It does require interacting with AWS administration tools which makes it even less accessible to creatives and developers not familiar with the AWS Management Console."
But, Reynolds explained, "We have found that a lot of these 'easy to jump into' 3D development environments and frameworks are fine for simple experiences, but they fall down pretty quickly if you're building more sophisticated applications."
Outside of the simple knowledge of how to develop 3D experiences, there's also a technological hurdle that needs to be faced. While plenty powerful for novelty AR experiences, everyone agreed that consumer smartphones and even specialized AR headsets like the Hololens or Meta won't be able to deliver the most exciting, impressive or truly useful AR experiences on their own hardware for quite a while. That's why much of the XR industry is banking on the recently finalized 5G standard for delivering its content from the cloud.
"Generations forward, especially with 5G of course, will be more distributed architecture or cloud OS concepts, where really all of the encryption, authentication, reconciliation of who's talking to whom and where the content resides and who's looking at what from what angle, all is operated in the cloud," Meta's Mikhail said. "Because if we all believe, which we do, that this technology is headed towards a pair of glasses, not a head mounted display per se, it would have to be a very thin client. 5G-enabled solves one part of it and then doing all of the heavy compute on the cloud is the other part of it and then you're dealing with an optical engine really and a communication module on your head."
Meta is making steps towards this goal with its pending Meta 3 model, which will enable real-time sharing of remotely delivered visualizations between two remote Meta 3 headsets, but for now, the technology is still in its beginning stages.
Visual presentations can help sales and marketing efforts by bringing potential customers into the experience.

Innovations in headsets
"It's happening now in different flavors," Mikhail said. "Last year, we announced a partnership with Zoom where you can take the feed from the Meta 2 headset and share what you're looking at in terms of the digital holograms, models, etc., to anyone on a Zoom call. So today you and I can be on a Zoom call, I can put on a Meta headset, and I can load up a model and instead of looking at a 2D slide, you'll see a 3D hologram, you'll see my hands around it and I can explain things in 3D. But the consumption is in 2D. You're still consuming the content on a 2D screen, whether it's your laptop or a conference room screen. Moving forward, the next generation of this technology will be two users in the Meta headset seeing everything in 3D and then further, this is the longer-term vision, is going to be telepresence where I see you as a hologram in front of me and the content and vice versa. Today you're getting 3D content shared and consumed on 2D screens."
In the near future, he continued, "meetings are happening where we're collaborating around a 3D model for board meetings, executive meetings, design reviews, sales and training all happening in this. Short-term, I'd say two or three quarters out."
Huckaby says that Xenoholographic has much the same aim for remotely delivered content on the platforms they develop for.
"We've built over the last nine months a cloud-based AR system in Azure for the lightweight conversion of 3D and holographic content up in the cloud so it can be delivered to the device — and the device is either a smartphone, or the device is a Hololens. That's what we came out this year on the market with. And obviously if you're wearing the glasses, you just get this incredible immersive experience where a teleconference with people all over the world totally makes sense.
"The cool things about this tech and I think what you're going to see in innovation over the next couple of years, and frankly where we have both of our patents pending, is being able to deliver what is historically really heavyweight content."
But even the major leap in bandwidth awarded by a 5G connection doesn't fully solve the problem, Huckaby explained.
"In Hololens [and mobile] development, there's a huge issue where you basically have to compile the 3D objects and holographic content into the application itself," Huckaby said. "That's why in the past couple of years, the only AR apps that you see are pretty narrow in focus and pretty static in content.
"Even something like Pokemon Go
— every time they release a new thing to find, you essentially have to download an entire new app from the App Store, which is absolutely unacceptable," he added. "It's not enterprise software. I truly believe that this genesis we're going through in mixed reality and augmented reality for the business part of the world is building enterprise-level software as opposed to these one-off, garage-developer type things. The only way this is going to succeed in the broad business categories is to build the software like we build enterprise software. And that means with performance and scale and structure and easy obtainability and all of that stuff, which is not what's happening right now if you download an AR app on your iPhone."
So while the technology is rapidly approaching where it needs to be to deliver a useful experience in a work environment, there are still plenty of kinks to work out and still plenty of changes that need to be made to the tools and practices that will make AR or any XR technology friendly enough for the developer that they'd be a common part of a business's operations.
"The interesting part about these new 3D technologies is they give us this capability of communicating not only visually, but communicating at physical scale," Torch3D's Reynolds said. "I can see it becoming a fundamental way that we communicate concepts and ideas if we do it visually. There's a lot of interesting anecdotal data backing up that if I can show you what I'm talking about we can both be aligned much quicker and actually get to the meat of the conversation. When you talk about internal productivity, it's all about taking advantage of visuals and the scale and the spatial relationship that these new technologies provide us. I don't think we can overspeculate where all of these technologies are going to find their real utility and value. That timeline's a ways off, something as serious as a shareholder type of meeting, it's hard enough to do with video conferencing. We'll know the technology will have hit mainstream when it's so ubiquitous and reliable that you would actually use it for that particular type of use case."
The Internet never forgets a face
But Microsoft is leading the call for public regulation of facial recognition
BY JENNA SARGENT
Facial recognition has many beneficial uses for society, but it also has the potential to be misused and abused. Microsoft recognizes this and is laying out the steps it is taking to ensure facial recognition technology is used for good, as well as its recommendations to the government and the technology industry.
"Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad," wrote Brad Smith, president of Microsoft, in a post. For example, a missing child could be located by recognizing them as they walk down a street. On the other side, a government could track you everywhere you go without your knowledge or permission, Smith explained.
"The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today — a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission," wrote Smith.
Microsoft has created the following list of questions that should be addressed by government regulation:
• Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual's guilt or innocence of a crime?
• Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
• What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
• Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
• Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
• Should the law require that companies obtain prior consent before collecting individuals' images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
• Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
• Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?
Microsoft believes that these issues should be addressed by elected representatives. The company says it recognizes that many will question whether members of Congress will have the technical expertise to address these issues, but Microsoft has stated that they believe Congress will be able to address these issues effectively. The company also believes that the technology sector should be responsible for regulation.
Microsoft feels that there are still a lot of questions that need answers, but that the following conclusions have been made: The technology sector needs to continue to work to reduce the risk of bias in facial recognition. Microsoft feels that there needs to be a principled and transparent approach in developing and applying facial recognition technology. The company stated that moving forward it is committed to establishing a transparent set of principles for facial recognition technology. In addition, it believes the deployment of facial technology needs to slow down.
"'Move fast and break things' became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people's fundamental rights are being broken," said Smith.
"Microsoft is absolutely right that face recognition use by law enforcement must be fully analyzed and debated. Congress should take immediate action to put the brakes on this technology with a moratorium on its use by government, given that it has not been fully debated and its use has never been explicitly authorized," Neema Singh Guliani, legislative counsel for the ACLU, said in a statement.
How to organize a UX team to complement your development team
BY JENNA SARGENT
In order to provide customers with a good experience when using your product, UX needs to be a key consideration during development. UX serves two important purposes: helping you understand your customer and translating that understanding into business success, said Josh Ulm, group vice president of UX design, PaaS and IaaS at Oracle. “It’s simple really. Positive customer experiences frequently result in a positive bottom line,” he said. He compared the roles of designers, developers and product managers as the three legs of a stool. Equal collaboration and clear communication between the roles is crucial, Ulm said. According to Ulm, design has historically been a latecomer to the process of development, making designers feel as if they “don’t have a seat at the table.” Design isn’t always the only one left out of the process, he said. “Frequently, any one or two of the roles will move forward at the exclusion of the others.” For example, the design team and product management may meet and produce a product concept before bringing engineering into the conversation, later realizing that their decisions have big technical implications.
Another example is that engineering may overpower design and product management, which would result in a poor product experience. “While it can take time to establish, the three roles must have a shared respect for each other’s contribution and develop a positive shared process.” According to Ulm, the actual role of design within an organization depends on the company. Some companies have designers on another team, such as product, engineering or marketing, while other organizations have consolidated design organizations that work as a centralized design service. The right solution for an organization depends on the size of the company and the employees themselves, Ulm explained. Embedded designers are able to collaborate across teams in small organizations, but as companies scale, maintaining the process and culture of design requires more structure in order to be effective. Without that structure, designers may end up feeling isolated, overpowered, or lack the necessary resources to do work, said Ulm. On the flip side, consolidating design resources into a central group might seem daunting to organizations that are used to having designers
directly supporting developer requests. Company reorganization can affect employee morale, so big changes should be treated with caution. Ulm said that if design ends up becoming its own team, maintaining a close relationship with engineering will be crucial. Benefits for larger organizations that have a consolidated design team include “more efficient and aligned procedures, easier rebalancing of resources, design systems for consistency, unifying messaging for the brand and the customer, and support for a vibrant company culture of creativity,” said Ulm. In terms of staffing a design team, Ulm advises that you don’t use the same recruiting as you would for hiring developers. He said there are agencies that specialize in identifying design talent, with an eye for judging a portfolio. “Once you’ve landed your star talent you’ll need to retain them, and that means creating a space that is friendly to design,” says Ulm. “Designers want to be part of the solution and have a big impact. If you consider the culture, environment and practices you are building up to support your designer and their process, you not only will gain their loyalty, you’ll see them flourish.” z
Kotlin gains ground in Android
But no current threat to Java, experts say
BY LISA MORGAN
Kotlin continues to gain momentum among Android developers. In case you're not familiar with Kotlin, it's a statically typed, JVM-based language that's interoperable with Java. It was developed by IntelliJ IDE provider JetBrains, which introduced the language in 2011. Version 1.0, the first officially stable release, was introduced in February 2016. However, Kotlin's status was instantly elevated when Google deemed it a first-class citizen in the Android IDE at the May 2017 Google I/O event. By late 2017, software quality company TIOBE made waves and headlines when it predicted that Kotlin would replace Java for Android app development. Hype aside, following are a few facts.
Kotlin by the numbers
Mobile app development tool and platform provider Realm produces a report that covers trends and activity patterns gleaned from its global community of active mobile application developers. Its initial Realm Report, published in Q4 2017, showed that Kotlin adoption grew from 0 percent prior to its v1.0 launch to 4.28 percent by the May 2017 Google announcement. By September 2017, Kotlin use had increased to 7.54 percent while Java slipped from 50.66 percent to 46.23 percent in the same time frame.
Software development industry research firm RedMonk also reported aggressive Kotlin growth. In Q3 2017, Kotlin ascended from #65 to #46 on RedMonk's list of top 100 languages. By January 2018, Kotlin had jumped to #27, making it the fastest-growing language behind Swift. To make RedMonk's list at all, a language must be observable on both GitHub and Stack Overflow.
Meanwhile, TIOBE's June 2018 Top 100 most popular programming languages list ranked Kotlin at #49 based on an analysis of search engine results. "Usually programming language adoption is a slow process, but Kotlin's usage is [growing] fairly fast" among TIOBE's multinational customer base, Paul Jansen, managing director at TIOBE, said, though he was unable to provide data showing the actual adoption rate.
There is also the Popularity of Programming Languages Index, based on Google searches for tutorials. That list ranked Kotlin #16 on a list of 22 for June 2018. Java ranked #2. Interestingly, between June 2017 and June 2018, Kotlin's foothold increased by 0.6 percent while Java fell by the same amount. Specifically, the list shows that Kotlin had 0.93 percent market share in June 2018 compared to Java at 22.45 percent.
Most recently, communications and collaboration API provider Pusher announced the results of a survey of 2,744 developers, 60 percent of whom use Kotlin for work and personal projects.
All indications are that Kotlin is gaining ground at Java's expense. Still, Gartner Research VP Mark Driver said there are remarkably few Kotlin developers.
"It's of interest, it's growing, but there are far fewer Kotlin developers than the noise would warrant," said Driver. "The whole idea it's going to replace Java is overhyped. We don't see Kotlin stacks, we don't see things that are unique to Kotlin above a certain radar threshold."
Part of the problem is fragmentation. There are more languages competing for market share today than there were historically.
"Open source markets are typically flat," said Driver. "Kotlin only needs a couple hundred thousand developers to make it successful. It probably has that or more today. One can assume every line of code written in Kotlin is a line of Java that isn't being written or at least a good chunk of it is. Is it replacing Java in the big picture? Of course not."

What's to Like About Kotlin
Android Studio support makes for a more stable developer experience. It also helps that Android Studio is based on JetBrains' IntelliJ, giving Kotlin a unique advantage over other languages.
And since Android Studio ships with Kotlin, starting a Kotlin project is point-and-click simple — no need for a plugin as before. In addition, debugging Kotlin code is the same as debugging Java code.
"A lot of Android developers aren't necessarily Java developers, so they're not predisposed to demand Java," said Gartner's Driver. "Their last language was probably Objective-C or Swift because they were building an iOS app and now they're being asked to develop an Android app."
The Android Studio support also gave software development managers greater confidence about the language because they no longer had to worry about the language's longevity. "Anytime you ship a language with an IDE, it's more likely to be discovered," said Driver. "There's a stamp of approval on it out of the box."
Kotlin is interoperable with Java, so shops can migrate at their own pace instead of rewriting entire applications. The interoperability enables Kotlin code to be converted into Java code and vice versa. It's also possible to combine Kotlin and Java code within the same application. However, before doing a conversion, developers are wise to read the short interoperability guide because it will save time in the long run.
"There's no risk of incompatibility because it's running on the JVM," said Driver. "[However,] if you're not a Java developer, you're probably not looking at Kotlin."
Kotlin application performance is comparable to Java application performance, although Kotlin builds can take more or less time than Java builds, depending on the type of build that's executed. For example, a clean Kotlin build may take longer than a clean Java build, while an incremental Kotlin build may actually be faster. The differences in build times aren't substantial enough to impact Kotlin use, though.
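Because Kotlin runs on the JVM, Kotlin code can call existing Java classes directly, with no wrappers or conversion step. The snippet below is a small, invented sketch of that interoperability rather than code from the article; it uses only Java standard-library classes, and the numbers plugged in are the June 2018 market-share figures cited above.

```kotlin
// Kotlin using a Java standard-library class directly — no wrappers needed.
import java.util.concurrent.ConcurrentHashMap

fun main() {
    val share = ConcurrentHashMap<String, Double>()  // a plain Java class
    share["Kotlin"] = 0.93   // June 2018 PYPL figures cited in the article
    share["Java"] = 22.45

    // Kotlin's destructuring and string templates work over the Java map as-is.
    for ((language, pct) in share) {
        println("$language: $pct percent")
    }
}
```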
Developers also like Kotlin's concise syntax because it reduces the possibility of errors. "Significantly simplified syntax avoids a lot of scaffolding that you have to go through with Java to do casting," said Gartner's Driver. "You can accomplish many of the same things with fewer lines of code, so you're less likely to introduce bugs and therefore it's more likely to reduce the cost of maintaining the code, but there's nothing industry-changing about it at all. It's just small improvements and streamlined factors here and there. It's similar to what Apple did with Swift."
Kotlin's null safety is also attractive, especially given the angst null causes Java developers. For example, software analytics company OverOps discovered that out of one billion logged Java errors, 97 percent were caused by 10 unique errors, the most common of which were NullPointerException errors. A later study of 1,000 applications confirmed that finding, showing that such errors impacted 70 percent of production environments.
Kotlin's strong tooling is also an incentive. Unlike other languages, Kotlin was developed by an IDE provider who concurrently built Kotlin and first-class IDE support of Kotlin. "IntelliJ is an incredibly popular IDE," said Driver. "I think JetBrains saw the cumbersome relationship between Java and some of the IDEs and said we can build a language that's a little more streamlined and modernize it. Java was built 25 years
ago. If you built it today, you’d end up with something like Kotlin, taking advantage of what we’ve learned in the last 25 years.”
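To make the conciseness and null-safety points concrete, here is a small, invented Kotlin sketch (not code from the article): a one-line data class replaces the constructor, getters, equals and hashCode a comparable Java class would need, and a nullable type plus the safe-call and elvis operators force null handling at compile time instead of surfacing as a NullPointerException at runtime.

```kotlin
// A data class generates equals(), hashCode(), toString() and copy() for free.
data class Developer(val name: String, val favoriteLanguage: String? = null)

fun describe(dev: Developer?): String {
    // Safe-call (?.) and elvis (?:) make the "might be null" cases explicit,
    // so they are checked by the compiler rather than failing at runtime.
    val language = dev?.favoriteLanguage ?: "an undisclosed language"
    return "${dev?.name ?: "Someone"} codes in $language"
}

fun main() {
    println(describe(Developer("Priya", "Kotlin")))  // Priya codes in Kotlin
    println(describe(Developer("Sam")))              // Sam codes in an undisclosed language
    println(describe(null))                          // Someone codes in an undisclosed language
}
```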
What’s New At the 2018 Google I/O event, Google announced Android Jetpack, which is the latest generation of Android components. Jetpack provides backward compatibility and immediate updates to a larger set of components so developers can build higher-quality apps faster and cheaper. It also manages background tasks, navigation and lifecycle management. Included in Android Jetpack is Android KTX, which are Kotlin-specific components designed to improve the developer experience. Components include WorkManager, Paging, Navigation, and Slices. Google also improved the performance of the Android Runtime (ART) so Kotlin apps can run faster. Code snippets have been added to the official documentation and Google published a Kotlin version of the API reference documentation. The company also launched a Kotlin Bootcamp on Udacity and it now has a Kotlin specialization in the Google Developers Expert Program. Meanwhile, the language itself has been evolving. Kotlin 1.2, the last major release, enables code reuse between the JVM and JavaScript so developers can write an app’s business logic once and reuse it across the back end, browser front end and Android mobile apps. Version 1.2 also compiles 25 percent faster than v1.1. Kotlin 1.1 included a JavaScript target that allows developers to compile Kotlin code into Javascript that runs in a browser. Minor releases included support for Gradle build cache (v1.2.2), a number of bug fixes, JUnit 5 support and more. Kotlin Native also continues to mature. It was announced in November 2017 and is now at v. 0.7. It compiles Kotlin into machine code and produces executables that don’t require a virtual machine. z
DARPA to explore the "third wave" of artificial intelligence
BY CHRISTINA CARDOZA
Despite the advancements made in artificial intelligence so far, the Defense Advanced Research Projects Agency (DARPA) believes there is still more work to be done. DARPA is launching the Artificial Intelligence Exploration (AIE) program as part of its broader AI investment strategy. “DARPA has established a streamlined process to push the state of the art in AI through regular and relatively short-term technology development projects,” said Peter Highnam, DARPA’s deputy director. “The intent is to get researchers on contract quickly to test the value and feasibility of innovative concepts. Where we’re successful, individual projects could lead to larger research and development programs spurring major AI breakthroughs.” According to the agency, past investments have advanced the first and second wave of artificial intelligence. The first wave was focused on rule-based AI while the second wave focused on statistical learn-
ing-based AI technology. The AIE program is meant to advance and accelerate the third wave of AI, which will address challenges and limitations from the first two waves as well as developing new AI theory and applications. "We see this third wave is about contextual adaptation, and in this world we see that the systems themselves will over time build underlying explanatory models that allow them to characterize real-world phenomena," John Launchbury, director of DARPA's Information Innovation Office, said in a video. For instance, a second-wave AI system can provide image classification where it is given an image and it does calculations to detect what is in the image. However, Launchbury said the agency would prefer if the system could respond and not only say what the image is, but explain why it came to that conclusion. If the image is of a cat, the system not only knows there is a cat in the image, but it can detect that because the cat has fur,
whiskers, claws and other features. Going further, Launchbury explained in order to do things like image classification, we have to give those second-wave systems much training data and examples for them to learn and detect. In the third wave, DARPA would like to develop systems that think and reason much more like humans, and be able to understand what is going on based off of only a handful of data or examples. "If I had to teach my kid 50,000 times or a 100,000 times how to write something, I would get bored. Human beings are doing something different. We may only need one or two examples, and we are starting to see how to build systems that can be trained from one or two examples," Launchbury explained. "These are examples that led us to think that the third wave of AI will be built around contextual models where the system over time will learn about how that model should be structured. It will perceive the world in terms of that model. It will
be able to use that model to reason, to be able to make decisions about things and maybe even will start to be able to use that model to abstract and take data further, but there is a whole lot of work to be done to be able to build these systems," he added. According to the agency, AIE is based on the "Disruptioneering" fast-tracked solicitation process from the agency's defense science office. The process was created to accelerate scientific discovery. Similar to the Disruptioneering program, AIE will issue special notices or "AIE Opportunities" tied to specific interests and may award up to $1 million for each AIE Opportunity. "AIE will constitute a series of unique funding opportunities that use streamlined contracting procedures and funding mechanisms to achieve a start date within three months of an opportunity announcement. Researchers will then work to establish the feasibility of new AI concepts within 18 months of award. Through this nimble approach to exploring new AI concepts, DARPA aims to outpace competing, global AI science and technology discovery efforts," DARPA said in an announcement. Projects from AIE may include proofs of concept, pilots, novel apps of commercial technology for defense purposes, and the creation, design, development and demonstration of technical or operational utility, the agency explained.
Gadgets & gizmos aplenty BY JENNA SARGENT
The Internet of Things is growing, with more and more companies building connected gadgets to make our lives easier. Forty-five companies headed to New York City last month to showcase their latest gadgets and solutions at Pepcom's Digital Experience East. Here's a sampling of some of the products that stood out.
BACtrack: BACtrack revealed its latest product, the BACtrack C8, a breathalyzer for consumers that uses similar technology to what police use to tell you what your current blood-alcohol level is. BACtrack C8 features ZeroLine technology, which estimates how long it will take for your BAC to return to 0.00 percent.
Cemtrex SmartDesk: Cemtrex was there to show off its upcoming SmartDesk, which eliminates some of the problems of current workspaces, such as wires, clutter, and outdated technology. The desk features three high-resolution touch screen monitors totaling 72 inches of screen area; STARK gesture system for touch, touchless, and stylus control; a digital phone and webcam that will integrate with most VoIP providers; an integrated document scanner; wireless smartphone charging and connectivity; a built-in digital keyboard and multi-touch input trackpad; and only a single wire.

ivWatch: ivWatch is working to improve the safety and effectiveness of intravenous therapy. The device continuously monitors IVs for infiltration of drugs into surrounding areas of the body where they were not intended. According to a 2015 survey from the Journal of Infusion Nursing, over 50 percent of IVs fail and 20-23 percent of those failures are a result of infiltration. In addition to reducing patient harm, the ivWatch 400 reduces the costs associated with wasted medications.
Ooma: Ooma Home is a home security solution. It features a smart camera that uses AI to do facial and audio recognition. Over time, the camera will learn the faces of you, your family members, and friends, and then will only alert you when someone unrecognized appears. It includes geofencing capabilities that allow you to automatically arm and disarm the security system. Recently added features include a siren, smoke detector, and VTech garage door sensor to alert homeowners if they leave their garage door open.
Piper: Piper is a DIY computer kit that teaches kids how to code in a creative and fun way. The Piper Computer Kit contains all of the components needed to build a computer, including a display, breadboard, switches, mouse, and a special Raspberry Pi version of Minecraft.
Rocketbook: The Rocketbook Everlast and Everlast Mini are reusable notebooks that can be connected to some of the most popular cloud services. The notebook looks and feels like a real notebook, but the pages can be wiped clean with a damp towel after the information has been uploaded.
DEVOPS WATCH
GitLab 11.0 released with Auto DevOps
BY CHRISTINA CARDOZA
GitLab’s Auto DevOps vision aims to help developers deliver ideas to production faster by automatically detecting, building, testing, deploying and monitoring.
GitLab’s complete DevOps vision is becoming a reality in its latest 11.0 release. The company announced the general availability of Auto DevOps.
GitLab first announced Auto DevOps last year, with the hope that the concept would help developers deliver their ideas to production faster. The release of Auto DevOps focuses on accelerating enterprise DevOps adoption.
“GitLab is widely known for being a fully capable source code and lifecycle-management tool, but we’re now proving that GitLab is much more than that,” said Sid Sijbrandij, co-founder and CEO of GitLab. “With the release of GitLab 11.0 and power of Auto DevOps, we’re making it effortless for enterprises who haven’t yet transitioned to DevOps to effectively push an ‘easy button.’ This enables a fully functional delivery pipeline in just minutes.”
The release aims to remove bottlenecks, and features the ability to automatically guide code from verification to monitoring. According to the company, enterprises can speed up the DevOps lifecycle by 200 percent with Auto DevOps.
“Developers simply commit their code to GitLab, then Auto DevOps does the rest: building, testing, code quality scanning, security scanning, license scanning, packaging, performance testing, deploying, and monitoring their applications,” the company stated in the announcement. “Auto DevOps enables you to ship with confidence because critical security scanning is built in, including static and dynamic application security testing, dependency scanning, and container scanning. In doing so, developers can focus on what matters most to the organization — shipping code that adds value to the customer.”
The new solution leverages Kubernetes for deployments and integrates with Google Kubernetes Engine for accessing Kubernetes.
GitLab 11.0 is the 84th consecutive product release, according to the company. Other updates include built-in security scanning and support for .NET and Scala. z
CloudBees DevOptics adds new monitoring capabilities
BY JENNA SARGENT
CloudBees is giving DevOps teams real-time value stream visibility and insights for monitoring, measuring, and managing DevOps performance with new capabilities in its DevOptics solution. According to the company, these new capabilities will solve a big problem that organizations face when trying to adopt DevOps practices, which is that they invest in new ways to deliver software, but still struggle to understand the impact of that investment. CloudBees’ new solution also focuses on helping executives, managers, and practitioners understand DevOps performance and evaluate the impact of their DevOps investments.
The new monitoring capabilities and metrics are designed to enable teams to anticipate high activity times, uncover restrictions to improve feedback cycle times, balance cluster workloads, and identify low periods of activity that are ideal for maintenance or upgrade work.
“While organizations appreciate the value of DevOps in principle, they haven’t had a good mechanism to measure its efficiency,” said Ben Williams, senior director, product management, CloudBees. “CloudBees DevOptics collects, analyzes and presents important indicators of DevOps performance, giving organizations insights they can use to improve software delivery processes, drive more value and maximize returns.”
In addition, the updates tackle some of the key indicators for measuring value found in the annual DevOps Research and Assessment (DORA) State of DevOps Report. The four key metrics include: deployment frequency, mean lead time, mean time to recover and change failure rate.
“Identifying the right performance indicators and committing to a streamlined measurement process can make the difference between a successful and a less than optimal DevOps implementation,” said Nicole Forsgren, founder and CEO of DevOps Research and Assessment and an author of the State of DevOps Report. “Having the ability to track these metrics and make improvements is what DevOps is all about. The end result is improved software delivery across the organization.” z
BY CHRISTINA CARDOZA
The way software is built is constantly changing to meet the ongoing pressure of getting to market faster and keeping up with the competition. The software development industry has gone from Waterfall to Agile, from Agile to DevOps, from DevOps to DevSecOps, and from monolithic applications to microservices and containers. Today, a new approach is entering the arena and shifting the paradigm yet again. Serverless aims to capitalize on the need for velocity by taking the operational work out.
“Serverless has changed the game on the go-to-market aspect and has compressed out a lot of the steps that people never wanted to do in the first place and now don’t really have to do,” Tim Wagner, general manager for AWS Lambda and Amazon API Gateway, said in an interview with SD Times.
Amazon describes serverless as a way to “build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale and manage any servers. You can build [serverless
solutions] for nearly any type of application or back-end service, and everything required to run and scale your application with high availability is handled for you,” the company wrote on its website. The Cloud Native Computing Foundation (CNCF) and its Serverless Working Group define serverless as “the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are
uploaded to a platform and then executed, scaled and billed in response to the exact demand needed at the moment.”
Despite its name, the CNCF stated that serverless doesn’t mean developers no longer need servers to host and run code, and it also doesn’t mean that operation teams are no longer necessary. “Rather, it refers to the idea that consumers of serverless computing no longer need to spend time and resources on server provisioning, maintenance, updates, scaling and
capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams,” the CNCF wrote. This enables teams to worry about their code and applications business logic and operation engineers to focus more on critical business tasks.
Wagner explained this is a major benefit of serverless because most companies aren’t in the business of managing or provisioning servers. By being able to abstract the operational tasks, capacity planning, security patching and monitoring away, businesses can focus on providing value that matters to the customers. However, Wagner said that while serverless certainly eases up operational tasks, it doesn’t take operational teams out of the equation entirely. Applications and application logic still require monitoring and observability. “The serverless fleet portion goes away and that is the part that frankly was never a joy for the operation team or DevOps team to deal with. Now they get to focus their activities on business logic, the piece that actually matters to the company,” he said.

The three revolutions of serverless
According to a recent report from cloud computing company DigitalOcean, while serverless is gaining traction, a majority of developers still don’t have a clear understanding of what it is. Hillel Solow, CTO of the serverless solution provider Protego Labs, explained that the meaning of serverless can be confusing because it has three different core values: serverless infrastructure, serverless architecture and serverless operations.
• Serverless infrastructure refers to how businesses consume and pay for cloud resources, Solow explained. “What are you renting from your cloud provider? This is about ‘scales to zero,’ ‘don’t pay for idle,’ ‘true auto-scaling,’ etc. The serverless infrastructure revolution proposes to stop leasing machines, and start paying for the actual consumption of resources,” he wrote in a post.
• Serverless architecture looks at “how software is architected to enable horizontal scaling.” As part of this, Solow says there are key design principles (a minimal code sketch of these principles appears just after this sidebar):
- Setting up serverless storage as file or data storage so that it can scale based on the application’s needs
- Moving all application state to a small number of serverless storages and databases
- Making sure compute is event-driven by external events like user input and API calls or internal events like time-based events or storage triggers
- Organizing compute into stateless microservices that are responsible for different parts of the application logic
• Serverless operations defines how you deploy and operate software. According to Solow, operations specifically looks at how cloud-native apps are orchestrated, deployed and monitored. “Cloud native means the cloud platform is the new operating system,” he said. “You are writing your application to run on this machine called AWS. Just as most developers don’t give much thought to the exact underlying processor architecture, and how many hyper-threaded cores they run on, when you go cloud native, you really want to stop thinking about the machines and you want to start thinking about the services. That’s how you write software for Android or Windows, and that’s how you should be writing software for the cloud.”
In addition, serverless is often referred to as Functions-as-a-Service or FaaS because it is an easier way to think about it, according to Red Hat’s senior director of product management Rich Sharples. FaaS is actually a subset of the broader term serverless, but it is an important part because it is “the glue that wires all these services together,” he explained.
“FaaS is a programming model that really speaks to having small granularity of deployable units, and the ability that comes from being able to separate and segregate that out as well as separate it from some of the operational pieces,” said Tim Wagner, general manager for AWS Lambda and Amazon API Gateway. “When I think of serverless, I usually mean a functions model, which is operated by a public cloud vendor, and offers the perception of unbounded amounts of scale and automated management.” z
—Christina Cardoza
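As a small illustration of the architecture principles in the sidebar above (event-driven compute, stateless functions, and durable state kept in external serverless storage), here is a hedged Python sketch of an AWS Lambda-style handler. The event shape loosely follows Amazon's S3 upload-notification format, but the bucket name, keys and processing step are hypothetical; this is an illustrative sketch, not a production function.

```python
import json
import urllib.parse

# A minimal Lambda-style handler. It is stateless: every invocation gets all the
# context it needs from the event, and any durable state would be written to a
# managed store (an object store or serverless database), never kept in memory.
def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # In a real function this is where the work would happen, for example
        # fetching the uploaded file and producing a thumbnail or transcoded copy.
        results.append({"bucket": bucket, "key": key, "status": "processed"})
    return {"statusCode": 200, "body": json.dumps(results)}

# Local invocation with a sample S3-style event, useful as a quick smoke test.
if __name__ == "__main__":
    sample_event = {
        "Records": [
            {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "photos/cat.jpg"}}}
        ]
    }
    print(handler(sample_event))
```

Because the function keeps no state between invocations, the platform can run as many copies of it as incoming events require, which is the horizontal-scaling property the sidebar describes.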
The top use cases for serverless
According to the CNCF, there are 10 top use cases for serverless technology:
1. Multimedia processing: The implementation of functions that execute a transformational process in response to a file upload
2. Database changes or change data capture: auditing or ensuring changes meet quality standards
3. IoT sensor input messages: The ability to respond to messages and scale in response
4. Stream processing at scale: processing data within a potentially infinite stream of messages
5. Chat bots: scaling automatically for peak demands
6. Batch jobs/scheduled tasks: Jobs that require intense parallel computation, IO or network access
7. HTTP REST APIs and web apps: traditional request and response workloads
8. Mobile backends: ability to build on the REST API backend workload above the BaaS APIs
9. Business logic: The orchestration of microservice workloads that execute a series of steps
10. Continuous integration pipeline: The ability to remove the need for pre-provisioned hosts
—Christina Cardoza

A successful transition to serverless
One of the first things you hear about when it comes to serverless is the cost savings. Serverless provides reduced operational costs and reduced development and scaling costs because you can outsource work and only pay for the compute you need.
“It allows applications to be built with much lower cost and because of that, enterprises are able to make and spend more time getting the applications they want. They can devote more time to the business value and the user experience than they were traditionally able to in the past,” said Mike Salinger, senior director of engineering for the application development software company Progress.
However, Nate Taggart, CEO of Stackery, the serverless solution provider for teams, said the cost-saving benefits are a bit of a red herring. The main benefit of serverless is velocity. “Every engineering team in the world is looking for ways to increase the speed in which they can create and release business value,” Taggart said.
Velocity is a major benefit of serverless, but achieving speed becomes difficult when you have multiple functions and try to transition a large monolithic, legacy application to serverless. Serverless, for the most part, has a low barrier for entry. It is really easy for a single developer to get one function up and running, according to Taggart, but it becomes more difficult when you try to use serverless as part of a team or professional setting.
To successfully deploy serverless across an application, Taggart explained teams need to utilize the microservices pattern. Microservices is an ongoing trend organizations have been leveraging to take their giant monolithic apps and break them out into different services. “You can’t just take an entire monolithic application and lift and shift to serverless. It is not interchangeable. If you have a big monolithic application chances are you are using VMs and containers, so transitioning to serverless becomes a lot trickier. We see microservices as one of the stepping stones into serverless,” he said.
When transitioning a monolithic application to serverless, Amazon’s Wagner suggested doing it in pieces. An entire application doesn’t have to move to serverless. Take the pieces that would benefit from serverless the most and transition those bits to optimize on cost and business results, he explained. According to Wagner, most enterprises already have systems that are hybrid at some level, so instead of having to decide between serverless, containers and microservices, you can combine the compute paradigms to your benefit.
In addition, professional engineering teams moving to serverless need to provide a consistent and reliable environment. In order to do that, Taggart said organizations need to put company-wide standards in place. “As an organization, you want to ensure that whoever modifies or ships the application does so in a way that is universal so that you can increase reliability and avoid the ‘it worked on my laptop’ problem. When an individual developer is shipping a serverless application, there’s a sort of default consistency,” he said. “When teams are working on serverless applications, and you have more than one developer involved, consistency and standardization become extremely important.”
At a basic level, consistency and reliability are achieved by having a centralized build process, standard instrumentation, a universal method for rolling
back apps, and visibility into the architecture and shared dependencies. More advanced methods include having centrally managed security keys, access roles and policies, and deployment environments, Taggart explained.
Amazon’s Wagner added that it is very important to limit the people who can call functions, and limit the rights and access capabilities to ensure the security of applications.
According to Progress’ Salinger, a best practice for transitioning applications to serverless is working in a way where your application is stateless. “Stateless applications are done in such a way that your components can be scaled up and down at any time. You have to make sure your application isn’t relying on a specific state to occur,” he said.
Another design principle is to develop your business logic and user experience first. A common pitfall is that developers think about building a serverless application instead of thinking about building your app and running a function in a way that it will scale out easily, Salinger noted.
“It is all about focusing on the user experience and the value of the application, and not having to worry about all the side stuff that is repeatable and less valuable for the developer and for their app,” Salinger said.

Solving for serverless security
Serverless is still an “immature” technology, which means that serverless security is even more immature, according to Guy Podjarny, CEO of the open-source security company Snyk. “The platforms themselves, such as Lambda and Azure Functions, are very secure, but both the tooling and best practices for securing the serverless applications themselves are lacking and poorly adopted,” Podjarny said.
While serverless doesn’t radically change security, some things become inherently difficult, according to Hillel Solow, CTO of the serverless solution provider Protego Labs. The top weaknesses of serverless include unnecessary permissions, vulnerable code, and wrong configurations, according to Solow.
In addition, Red Hat’s senior director of product management Rich Sharples said old application security risks become new again with serverless. Those risks include function event data injection, broken authentication, insecure serverless deployment configuration and inadequate function monitoring and logging.
Serverless security isn’t all complicated though, Solow explained. For instance, serverless requires teams to turn over ownership of the platform, operating system and runtime to the cloud provider such as Amazon, Microsoft and Google. “The cloud providers are almost always going to do a better job at patching and securing the service, so you don’t have to worry about your team dealing with the things,” he said.
The challenges arise when teams start thinking about how they are going to make sure their application does only what it is supposed to do. Solow explained where you put security and how you put security in place has to change.
In a recent report from Protego Labs, the company found 98 percent of serverless functions are at risk and 16 percent are considered serious. “When we analyze functions, we assign a risk score to each function. This is based on the posture weaknesses discovered, and factors in not only the nature of the weakness, but also the context within which it occurs,” explained Solow. “After scanning tens of thousands of functions in live applications, we found that most serverless applications are simply not being deployed as securely as they need to be to minimize risks.”
According to Podjarny, serverless shuffles security priorities and splits applications into many tiny pieces. “Threats such as unpatched servers and denial of service attacks are practically eliminated as they move to the platform, greatly improving the security posture out of the gate. This reality shifts attacker attention from the servers to the application, and so all aspects of application security increase in importance,” he said. “Each piece creates an attack surface that needs securing, creating a hundred times more opportunities for a weak link in the chain. Furthermore, now that the app is so fragmented, it’s hard to follow app-wide activities as they bounce from function to function, opening an opportunity for security gaps in the cross-function interaction.”
Red Hat’s Sharples added that
security teams should think about data in a serverless environment, think about least-privilege controls and fine-grained authorization, practice good software hygiene and remember data access is still their responsibility.
To successfully address the serverless security pains, Podjarny suggested good application security practices should be owned and operated by the development team and should be accompanied by heavy automation. In addition, Protego Labs’ Solow suggested embracing a more serverless model for security, which uses security at the places where your resources are.
“The good news is these are all mitigable issues,” said Solow. “Serverless applications enable you to configure security permissions on individual functions. This allows you to achieve more granular control than with traditional applications, significantly mitigating the risk if an attacker is able to get access. Serverless applications require far more policy decisions to be made optimally, which can be challenging without the right tools, but if done accurately, these decisions can make serverless applications far more secure than their non-serverless analogs.”
Other security best practices Solow suggests include the following (a brief, illustrative code sketch of the function-level perimeter idea follows the list):
• Mapping your app to see the complete picture and understand the potential risks
• Applying perimeter security at the function level
• Crafting minimal roles for each function
• Securing application dependencies
• Staying vigilant against bad code by applying code reviews and monitoring code and configuration
• Adding tests for service configuration to CI/CD
• Observing the flow of information to ensure it is going to the correct places
• Mitigating for Denial-of-Service and Denial-of-Wallet, where hackers can attack your app by “overwhelming” it, causing it to rack up expenses
• Considering strategies that limit the lifetime of a function instance. z
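As one way to picture the "perimeter security at the function level" and input-validation items above, here is a minimal, hypothetical Python sketch. The size limit, allowed actions and event fields are assumptions for illustration, not a pattern prescribed by Protego Labs or any cloud provider.

```python
import json

MAX_BODY_BYTES = 64 * 1024  # illustrative request-size cap to blunt "overwhelming" inputs
ALLOWED_ACTIONS = {"create_order", "cancel_order"}  # hypothetical whitelist for this one function

class RejectedEvent(Exception):
    """Raised when an incoming event fails perimeter checks."""

def validate_event(event):
    # Perimeter check 1: bound the payload size before doing any real work.
    body = event.get("body", "")
    if len(body.encode("utf-8")) > MAX_BODY_BYTES:
        raise RejectedEvent("payload too large")
    # Perimeter check 2: only accept well-formed JSON carrying an expected action.
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        raise RejectedEvent("body is not valid JSON")
    if payload.get("action") not in ALLOWED_ACTIONS:
        raise RejectedEvent("unknown action")
    return payload

def handler(event, context=None):
    # Each function validates its own inputs; its cloud role should likewise be
    # scoped to only the resources this specific action needs (least privilege).
    try:
        payload = validate_event(event)
    except RejectedEvent as err:
        return {"statusCode": 400, "body": str(err)}
    return {"statusCode": 200, "body": json.dumps({"accepted": payload["action"]})}

if __name__ == "__main__":
    print(handler({"body": json.dumps({"action": "create_order"})}))
    print(handler({"body": "not json"}))
```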
The Many Faces of Agile
BY JACQUELINE EMIGH
Agile is an umbrella term, not a monolithic entity. The Agile Alliance describes Agile as “the ability to create and respond to change in order to succeed in an uncertain and turbulent environment,” while giving a longer and much more specific definition for Agile software development. Yet you can find plenty of other definitions in other places, too. Beyond that, Agile is being implemented from a variety of perspectives. Here are four interesting Agile methodologies, approaches and strategies which haven’t flown as high on most radar screens as the widely heralded Scrum, Kanban, Test-Driven Development (TDD), and Extreme Programming (XP), for instance.
Mob Programming
Mob Programming first received its own formal description in an experience report at Agile2014. In the report, Woody Zuill discussed the experiences of his software development team at Hunter Industries. The team at Hunter planted the initial seeds for Mob Programming back in 2011 while practicing TDD and Coding Dojos to get up to speed on a project which had been placed on hold for several months. “A gradual evolution of practices as well as a daily inspect and adapt cycle [resulted] in the approach that is now known as Mob Programming,” according to the Alliance.
An extension of pair programming, in which two developers work together while sharing the same screen, Mob Programming requires all team members to collaborate continuously on a single computer. Yet although all of the production code and most of the team’s other work gets done on the “main” computer, developers can use their own laptops for “searching, trying things, reading documents, or whatever an individual would like,” Zuill wrote on the MobProgramming.org website. Multiple monitors or projectors can be used, too, making it easier to open and share several applications at the same time.
The customer is also part of the team, sometimes sitting in the same room as other team members and sometimes chiming in with feedback via remote screen sharing, instant messaging (IM), or phone.
‘Descaling’ the Organization
Ironically, descaling is a common practice for scaling up Agile development. “Scaling by descaling breaks down things into smaller bits: small autonomous teams, smaller product backlogs, etc. In practice it works better than complex processes,” maintained Patric Palm, CEO of Favro, in an interview with SD Times.
“Agile teams optimize flow by chopping everything into tiny pieces,” concurred Peter Merel, founder of the XSCALE Alliance, an Agile coaching organization. “Tiny plans, tiny meetings, tiny requirements, tests, sign-offs and retrospectives. Making these things tiny enables teams to review and refactor them in tiny ceremonies, boosting productivity through continuous negative feedback.”
Some Agile experts now advocate “descaling” the entire organization, as well. “Scaling is an anti-pattern. Big meetings, long loops, slow cadence, tight coupling, and deep hierarchies represent bottlenecks no matter how Agile an individual team may be. Descaling refactors your organization into self-managing streams of self-organizing teams working together like pods of dolphins,” contended Merel, writing on LinkedIn.
Australia-based Fairfax Media Group has transformed its Domain Group business into an Agile organization through descaling, illustrated Stuart Bargon, who served as Agile product lead at Domain.com.au.
“Descaling through organizational change” is actually the primary purpose of one Agile methodology, dubbed Large-Scale Scrum (LeSS). “Descaling the number of roles, organizational structures, dependencies, architectural complexity, management positions, sites, and number of people. LeSS is not about scaling one team into multiple teams. LeSS is about scaling up Scrum itself in order to achieve organizational descaling,” explained Viktor Grgic, a software developer, Agile coach, and certified LeSS trainer.
“Just like Scrum, LeSS removes your organizational band-aids that apparently solved the problems, but actually masked them and didn’t address the root causes,” Grgic wrote, in a blog post on the LeSS site.
Crystal Methodologies
Alistair Cockburn was one of the 17 software development luminaries who got together in Snow Bird, Utah in 2001 to forge the now famous Agile Manifesto. While working at IBM, Cockburn had developed Crystal, a family of methodologies for object-oriented programming. The most structured of these approaches, Crystal Diamond, is targeted at large projects, whereas Crystal Clear is a much more fluid methodology for teams of one to six developers.
Crystal methodologies revolve around seven properties: frequent delivery; reflective improvement; osmotic communications; personal safety; focus; easy access to expert users; and a technical environment with automated tests, configuration management, and frequent integration.
According to David Lowe of the Scrum & Kanban website, it’s quite apparent that Crystal had a profound impact on the resulting Manifesto. However, not all of Crystal’s terminology made it into the groundbreaking document, including reflective improvement. In reflective improvement, team members get together regularly to talk about how to improve a project — certainly not
a revolutionary notion today, but Crystal first saw the light of day in the 1990s. Crystal is still highly regarded by many Agile experts. Yet as Lowe suggests, its adoption might have been limited by the fact that the methodologies are tailored to Agile developers so advanced that they’re already writing their own rules. “It’s kind of a catch-22 designed to avoid followers. Evidently I’m no good at business models,” quipped Cockburn, who now teaches about Scrum as well as non-Scrum approaches.
Bell Curve-based Agile Delivery Model
The Bell Curve-based Agile Delivery Model stepped into the limelight earlier this summer in a post by Santosh Balan, technical projects manager at Fujitsu, published in Information Week. As proposed by Balan, Bell Curve-based Delivery enables teams to ease into projects gradually by considering technical complexity along with business value when organizing and tackling items in the product backlog.
By focusing first on tasks that are less complex, but which also deliver business value, an Agile team can gain early wins that build team morale and stakeholders’ confidence, even while the team is still working out the myriad intricacies of onboarding new members and embarking on the project. The most difficult and critical high-value tasks in the project can be deferred until the team has refined its practices well enough to reach peak performance. Finally, as teams begin to dissolve at the end of the project, members can wrap up by completing non-critical features.
In response, Palm characterized the Bell Curve-based model as a great “way of thinking about the Agile delivery model with group maturity in mind.” Bell Curve-based Delivery could be overlaid on to various Agile methodologies, such as Scrum and Kanban, Palm theorized. It might also make sense to go after tasks such as technical risk evaluations at the outset, “so that you don’t save the biggest risks until later on,” he told SD Times. z
CA Technologies: Moving the needle for Agile teams
Doing work for the sake of doing work is boring. Super boring. But give individuals who work at organizations a very clear picture of the purpose of the work that they’re doing, and they tend to stay longer and actually go the extra mile in order to make things happen. They have an understanding of how the work that they’re doing on a day-to-day basis is connected to the grand scheme, and how they can impact the success of that bigger picture.
Shannon Mason, VP of product management for CA Agile Central, said, “Everybody’s usually focused on straight business metrics, right? Did we move the bottom line? Are we making more money? There’s also an internal focused metric that we’re looking at. We want people to blow out their concept of what it means to be successful.”
CA Agile Central’s philosophy is that there are better ways of working to leverage all the Agile principles, whether a developer is practicing Scrum, Continuous Flow, or Kanban. The product is purpose-built to actively support this ideal. Agile Central, formerly Rally Software, started in 2001. Mason joined the company 10 years ago. She shares, “We fundamentally developed a product with the ideals and the output of the manifesto in mind. Everything that we do and have done in this system is with that backbone still in place.”
Focus on metrics, automated information sharing
According to Mason, one of Agile Central’s major benefits is that it decreases the amount of information sharing that has to go between people who “want to know and track things,” and the folks that “make the magic happen.” She says, “A developer can go inside of Agile Central and essentially leverage all the things around an integrative development lifecycle. They can move work through their system, through their flow, even if that work is connected to bigger pieces or bigger organizational strategic objectives. They don’t ever have to send out an email that says, ‘Hey, the status on this work is complete,’ or, ‘This work is still in progress.’ We automatically track those minute details that oftentimes get sent over email or stored in a spreadsheet, or that a stakeholder might get just in a hallway conversation update.”
The product also focuses on the analytics and the measurement component of a project. Teams that are practicing any sort of iterative development process are looking to optimize how they work and how they operate. Mason says, “We have tons of data inside the system from a cumulative flow perspective. It shows developers the way that they’re moving work through their system — whether there’s a bottleneck, or
there’s too much work in progress. It also shows the impact that it’s going to have on velocity.” This is performed not just at the team level, but also at the teams of teams of teams levels that are looking to coordinate with each other.
Fitting Hybrid IT into the picture
The Hybrid IT melting pot that blends legacy apps and hardware, development tools, developing for the cloud, and moving through different development platforms is part of the practical aspect that Mason says Agile Central embraces. “One of the things we’ve always wanted and sought to do is to be able to pull people into the system. I’ve had Waterfall teams inside of Agile Central be able to mock up their entire process. They go through all the checks and balances and are able to use all the analysis and kind of flow that we can see inside of the product, which then allows them to connect with their other teams that might be practicing, might be a little bit further along in their journey, and actually interconnect and work in an active way.”
According to her, what tends to happen in those scenarios is that those Waterfall teams end up going to Wagile, a combo of the two, and eventually end up seeing all of the productivity their peers have, and then moving over to a more adaptive way of thinking about it. Part of Agile Central’s DNA is believing that Agile is a great way to work. She qualifies that, “The other side is also being super aware of meeting people where they’re at, and that they might be in very different places. You limit the amount of guard-rails, and you put those guardrails in the right places.”
While the competitive landscape is stiff, CA differentiates itself in a number of ways. Mason describes a portion of her customer base as still having a very traditional PMO in place, and managing traditional projects, although she says she sees a gradual shift to more product focus. “You’ve got the programmatic understanding, or teams of teams understanding. You’re practicing Scaled Agile Framework, sometimes the value streams conceptualization of that. And then teams, and then how teams deliver work. For us, one of the big things within CA is that there’s something that we have in each one of those critical areas. We’re involved from the moment a company starts thinking about whether or not it wants to fund a particular improvement, or fund a new product, to the way that it thinks, then distributing and arranging and organizing all of that work, and then to tracking, whether or not the team is heading towards that particular goal, to the moment it gets deployed.”
The CA background provides broad support for enterprise
management. Many of its customers are in heavily regulated environments and have added security at each layer. Its interconnectivity is designed specifically with an eye towards managing complex planning and complex execution across thousands of people.
Mason points out another key difference between CA and its competitors: “A lot of our competitors have something that looks like a SaaS offering but often times it’s an ASP, so they’re providing a service. We have a true multi-tenant offering. It allows us to do very complex analytics against our customer’s data to provide them a lot of great insight into how they’re operating and to baseline their information. They get the availability and uptime that comes with being able to structure their system from that perspective. That’s not something that you see across all of our competitors.”
Change is hard, but go with the flow
The next stage, according to Mason, will be to embrace going back to the basics. She says, “So let’s remember why we went through this big change in the first place, right? It was to achieve high value for our customers — to create a really engaging environment for our employees and make sure that we were really aware of the problems that we were trying to solve, and to try that incremental solutioning process. What has happened over the last few years, frankly, is that people really focused on process and speed of delivery. I think we forgot a little bit about why we went through it in the first place, which was just to build better things.”
She describes “feature factories” as an element of the original focus that needs to be looked at closely, “Even here at Agile Central, we noticed this behavior is within ourselves. We would congratulate ourselves on the number of features that we would complete every single quarter and completely forget the other side of that question, which was, ‘Great. Are people using them? Do they like them?’”
The second part, aside from the back to basics, is achieving full organizational flow. Developers used to think about Agile as encapsulating many different components. “It wasn’t just that you did Scrum, Kanban or continuous flow was usually in there. Lean principles were in there, test-driven development and behavior-driven development were in there.” The groups that Mason’s working with that are really pushing the boundaries are focusing on what it looks like to have full organizational flow. She describes the historical play by play of developers build-
ing things faster, then settling on a DevOps strategy. “So you got the things out there faster, but then what are the things that are upstream that can also practice that same, very focused, day to day planning, pivoting, persevering mechanism that exists inside of most engineering organizations?” she asks and adds, “If I were to take a bet, it would be that at some point in the near future we’re going to be practicing flow versus practicing what is a more Scrum-like approach that’s currently practiced.”
Next-gen Agile focused on business outcomes
Next-gen Agile is on the horizon, and it will leverage new technologies like machine learning and predictive analytics to expand its focus from program management to business outcomes. Mason’s excited about Next-gen Agile because it’s making use of everything that developers currently use for better outcomes. She says, “That rotation around looking at very specific delivery metrics is just one facet of what it looks like to be successful. Then when you start to think about it within the concept of how we flow work through the entirety of the organization, from the moment that we think about ‘Maybe we’re gonna do something,’ to the moment that we actually put that out to production, or someone’s going to leverage it, we’re gathering all these data points.”
She suggests, “Right now most organizations track that sort of information, but they’re probably tracking more whether or not they got the feature done, and we’re trying to actually pull in not just that data point, but also the data
points that surround whether or not you achieved the objective that you were seeking to achieve, right? The company’s goal was to release a new piece of functionality that helps organizations understand how their plans change over time. We find that people are going to that page, and so they’re using it, which is fantastic. Those sorts of metrics, that sort of information, starts to build a more multifaceted understanding of how we’re operating as an organization, versus just focusing on one space.” That said, there’s some really interesting data that’s starting to come out around organizations that have clarity of purpose. In closing Mason points out, “We tend to think about purpose and clarity as things that are more altruistic, but if you look at businesses that are publicly traded, that have a clear, discernible idea of what it is that they want do for their customers, they tend to have a significantly higher return on investment. That doesn’t happen magically, that happens because everybody understands what it looks like to kick butt.” Learn more at https://cainc.to/ca-agile-central. z
AGILE SHOWCASE
Compuware: Enabling Agile on the enterprise mainframe
There’s a thought in the world that the mainframe is not part of the new enterprise IT landscape or can’t keep up. Compuware, as a company, is proving that the mainframe can compete. As the world’s only mainframe-dedicated software vendor, Compuware uses the same tools it provides to customers to build its own software. “We drink our own champagne. In other words, we’re embracing what we’ve done, and we do it every day,” says David Rizzo, vice president of product development for Compuware.
“The mainframe is a key part of the enterprise that will be used for the next 50 years by those companies that are using it today. It can fit into the modern processes and cycles that use the same tools being used in other areas of the enterprise,” according to Rizzo. Common enterprise tools like Jenkins, SonarSource, SonarLint and SonarQube, and XebiaLabs’ XL Release, that have been embraced on the non-mainframe side can, in fact, be integrated on the mainframe side, so it’s not an “either/or” decision for enterprise development managers.
Compuware’s solutions are designed to help make developers more productive and, as Agile has become the preferred method for development in most organizations, the company’s tools fit into that framework. They have been adapted in a way that helps them to work with the Agile process and associated iterative cycles. For example, Compuware Topaz is a modern Agile platform of mainframe development and testing tools that integrates into a DevOps toolchain. The Eclipse-based IDE enables enterprise users to visualize complex application logic and data relationships, make changes to code, test and debug, and tune for performance. Developers also have access to non-Compuware products and distributed solutions all in the same familiar environment.
ISPW, Compuware’s Agile source code management product for the mainframe, allows for multiple code streams and concurrent development on the same elements, which is a challenge that Compuware and its customers have had with some other source management systems. With ISPW, very little input from the developer is required after the initial code is done. Jenkins can manage code throughout the development life cycle by automating the steps of generate, promote, compile and deploy on the mainframe. Unit testing, a time-consuming and complicated practice traditionally shunned by developers, can now be easily done with Topaz for Total Test and newly acquired XaTester. Compuware Hiperstation complements Topaz by performing automated regression testing, system-level testing and component testing.
Embracing hybrid IT and the cloud has come naturally to Compuware. On a company level, everything runs on their mainframe or runs in the cloud. The company eliminated its x86 servers. If it’s core to their business, it runs on mainframe. If it’s not, the company uses cloud services. Compuware partnered with Amazon AWS so customers can deploy Topaz on AWS instead of installing it on individual workstations. With Topaz on AWS, users can get immediate access to new capabilities instead of waiting days or weeks for updates to be rolled out. The company’s strategy is to make its tools available through cloud services, to allow users to work in the same environment, no matter where they physically happen to be, through the web. Compuware differentiates from the competition by maintaining its focus solely on the mainframe. Rizzo says, “We innovate on the mainframe. Our competitors are all main-
frame, obviously, but they’re conflicted on what they do because they do distributed technology and mainframe technology. They’re not solely focused on it. And quite frankly, they do a limited amount of new innovation on the mainframe.” He adds, “We are very different in that we are still providing new innovation on the mainframe every quarter. We just released our 15th consecutive quarterly delivery with new feature functionality for the mainframe software and supporting mainframe developers, Agile and DevOps.” In large enterprises, Rizzo says Agile adherence is stronger on the non-mainframe side, but says, “We see on the mainframe side, it continues to become the preferred method and is growing. In a few years, it will be the de facto standard across all of enterprise IT. I believe that they’ll be doing Agile because it makes sense with doing smaller iterations and being able to be more responsive to the end users and customers.” Compuware’s customers see a necessity to do things differently than the way they’ve worked in the past. They realize they need to show progress as they go. They can’t wait weeks, months or possibly years for innovation. He affirms, “I think Agile will continue to grow and be embraced throughout the enterprise and Compuware will be there to support it.” z
AGILE SHOWCASE
Micro Focus ALM Octane platform enables enterprise Agile collaboration
Founded in 1976 by Brian Reynolds, Micro Focus has grown as an innovator in technology throughout the years. It has participated in several mergers and acquisitions, and in 2017, it merged with HPE Software to form what is now poised as one of the largest pure-play software companies ever. The company’s focus is on helping its enterprise customers to maximize their existing software investments and leverage the world of hybrid IT — whether it’s mainframe, mobile or cloud to drive business goals. Micro Focus offers solutions in critical areas including DevOps, hybrid IT, security, risk management and predictive analytics.
Silvia Davis, product marketing manager of lifecycle and portfolio management solutions at Micro Focus, says the ALM suite of solutions manages not only Agile teams but also Agile in the enterprise. If a developer runs ALM or Quality Center in parallel with ALM Octane, it provides optimal lifecycle management for their organization’s Waterfall, hybrid, or Agile projects. Octane gives teams real-time status into CI and CD ecosystems. They can view committed changes, identify possible root causes of failures and track commits associated with specific user stories and defects. She explains, “This means managing teams of teams. ALM consolidates non-Agile teams too, for example, Waterfall or hybrid teams and brings the information to executives to help them prioritize their investments based on the business needs.”
The solution provides collaboration at each point of the Agile process, as well as collaboration with different teams, both internal and external. It also handles governance, testing quality and management, and assists with both traceability and predictability. Micro Focus is committed to embracing enterprise hybrid IT and sees its ability to help its customers as one of the company’s strong points, linking the older, existing technology with the new mobile and cloud landscapes. Davis qualifies it into developer terms and says, “Basically when you think about Hybrid IT, it’s companies that are in transformation. There are companies that are moving away from Waterfall to Agile teams, and because we support Waterfall too, we are able to transform them to the new environment.”
The company synchronizes its entire solutions portfolio and embraces its customer’s tools, whatever they may be. It then integrates, consolidates the information and provides the customer with full visibility. It also allows customers to share licenses between its products. She adds, “Micro Focus was identified as a ‘strong player’ on
the hybrid IT scene according to a well-regarded analyst firm.”
Davis describes their competitive landscape as being mainly made up of point solutions and Agile-only solutions and says their strength is being able to provide more of an end-to-end solution. “An example of this is CA Technologies Agile Central. It is Agile-only and they don’t have the pipeline management capabilities with predictability that we have. There are others that focus on quality and they don’t do the Agile traceability, providing visibility into the entire pipeline. We differentiate ourselves because we have an integrated solution that manages Agile projects, testing, automation across the entire application delivery pipeline.”
Davis sees the state of Agile as being healthy and growing although she says they’re struggling a bit in the enterprise when it comes to meshing hybrid IT with Agile teams. As an example she points out, “Getting executive support and sponsorship to move the whole enterprise development environment from Waterfall to Agile is expensive, complex, and time-consuming.” Even with challenges, she says it is here to stay.
“Agile is going to bring flexibility to the development process because teams are getting more mature nowadays. They're developing faster. But this process needs to be improved in the enterprise. The application delivery pipeline needs to be analyzed to determine where the lead times are, the gaps and challenges are, and then improve upon that, make that faster.” To accomplish this, automation, collaboration and strong inside data gathering capabilities are required. With good inside data comes better predictability and more accurate product delivery planning. Gaining insight, which means understanding timelines and learning from what has happened in the past, whether it was an error or a breakthrough, goes a long way toward accelerating successful and rapid app development and delivery. “You want to make sure that you launch that product fast and be first before your competitors,” she says.
Enterprise Agile is only going to become stronger as it makes use of Agile, DevOps, machine learning and artificial intelligence, according to Davis. Micro Focus has plans to incorporate ML and AI into its products in the future. z
AGILE SHOWCASE
2018 Agile Showcase
FEATURED
• CA Technologies: CA Technologies creates software that fuels transformation for companies and enables them to seize the opportunities of the application economy. CA Agile Management solutions provide the fastest path for your organization to delight customers by helping you better plan, execute and service any business deliverable, while providing new ways to work and manage to help foster a customer-focused company.
• Compuware: Compuware is the only software company solely focused on mainframe innovation. The company leverages Agile development and DevOps best practices to accelerate customer collaboration and deliver meaningful innovations every 90 days. Compuware’s modern mainframe solutions integrate into the cross-platform, enterprise-DevOps toolchain so users can fully leverage their high-value mainframe investments with agility.
• Micro Focus: Micro Focus is a global infrastructure software company, committed to enabling customers to both embrace the latest technologies and maximize the value of their current IT investments. The company believes organizations don’t need to eliminate the past to make way for the future. Everything Micro Focus does is based on a simple idea: the fastest way to get results from new technology is to build on what you have — in essence, bridging the old and the new — to meet increasingly complex business demands.
• AgileCraft: AgileCraft delivers a comprehensive software solution available for scaling Agile to the enterprise. AgileCraft transforms the way organizations enable and manage Agile productivity across their enterprise, portfolios, programs and teams by aligning business strategy with technical execution. The AgileCraft platform combines sophisticated planning, analysis, forecasting and visualization with robust, multi-level collaboration and management.
• Atlassian: Atlassian wants to unleash the potential of every team. The company’s collaboration software is designed to help teams organize, discuss and complete shared work. Teams at more than 112,000 customers, across large and small organizations — including Citigroup, eBay, Coca-Cola, Visa, BMW and NASA — use Atlassian's project tracking, content creation and sharing, real-time communication and service management products to work better together and deliver quality results on time.
• CollabNet VersionOne: CollabNet is a global software and services company that allows leading enterprises and government organizations to deliver high-quality software at speed. The company offers a range of platforms and services for customers to develop and deploy applications by empowering their teams to scale enterprise-wide agility and DevOps across their software development lifecycle. With CollabNet, teams can work together to envision, build and deliver great software with confidence.
• cPrime: cPrime enhances the full lifecycle of product development with comprehensive solutions for adopting and scaling Agile methodologies and ALM software. The alignment of people, processes and technology solution sets build and optimize a lasting framework for client growth. cPrime provides training, consulting, and team augmentations to help teams and organizations adopt and scale Agile methodologies in addition to software licensing, software training, software migrations, hosting, support and more.
• Digité: Digité is an integrated project management company and provider of collaborative enterprise software and solutions for lean/Agile application lifecycle management and visual project management. Digité’s solutions are targeted towards technology organizations — such as Corporate IT, ISVs, IT services/Outsourcing and IT-Consulting companies, as well as general business functions like marketing, recruitment, HR, procurement, legal and many others.
• Klera: Klera is a new paradigm that transforms the application development and DevOps lifecycle by revolutionizing how disparate systems are accessed, insights are delivered and actions are taken across systems from a single interface. By dynamically discovering insights across systems, without any data movement, Klera gives users the power to make informed data-driven decisions and take actions faster than ever before.
• Planview: As the global leader in work and resource management, Planview makes it easier for all organizations to achieve their business goals. The company’s Lean and Agile delivery solution — which includes Planview LeanKit — empowers teams to deliver faster by visualizing value streams, optimizing the flow of work, and continuously improving their performance.
• Retrium: Retrium is built on the idea that organizations should be able to deliver value early and often to their customers. The company offers an enterprise-ready solution for Agile retrospectives. Using Retrium, users can run engaging and effective retrospectives with industry leading facilitation techniques, including Mad Sad Glad, Lean Coffee, and more.
• Scaled Agile: Scaled Agile, Inc., is the provider of SAFe, the framework for enterprise agility. Through learning and certification, a global partner network, and a growing community of over 200,000 trained professionals, Scaled Agile helps enterprises build better systems, increase employee engagement, and improve business outcomes.
• Zoho: Zoho creates software to solve business problems. More than 35 million users around the world rely on Zoho's platform to operate their business. A new addition to this platform is Zoho Sprints — an Agile project management tool. It is a simple tool for Scrum teams to plan work, keep track of progress, and build products that customers really want. z
GET TOGETHER. GO FASTER. Take your DevOps journey even further with three full days of immersive learning & transformational leadership stories.
2018 Featured Speakers Visit events.itrevolution.com/US for the full list of speakers
ANN BRADLEY Chief Privacy Officer and Global Counsel Nike Direct
PAULY COMTOIS Vice President, Development & Operations Hearst
DR. NICOLE FORSGREN CEO and Chief Scientist DevOps Research and Assessment (DORA)
COURTNEY KISSLER Vice President, Nike Digital Platform Engineering Nike
THOMAS LIMONCELLI SRE Manager Stack Overflow, Inc.
CHARITY MAJORS CEO and Co-Founder Honeycomb
DR. TOPO PAL Senior Director & Sr. Engineering Fellow Capital One
JEFFREY SNOVER Technical Fellow and Chief Architect for Azure Storage & Cloud Edge Microsoft
DR. STEVEN SPEAR Senior Lecturer, MIT Sloan School of Management & Principal, High Velocity Edge
REGISTER TODAY
www.devopsenterprise.io
For sponsorship inquiries, please contact Beth Breiten bethb@itrevolution.com
Variations on test-driven development
BY JENNA SARGENT
Despite the invention of so many different types of software testing methodologies, many in the industry are still sticking to the tried-and-true method of test-driven development, or at least some form of it.
"Like any other methodology, I think there seems to be some pretty significant variations between TDD as it's described in books and what's actually practiced," said Rex Black, president of RBCS, a software, hardware, and system testing consulting group.
According to Black, the textbook definition of TDD is that it is the process of writing small tests that express some sort of functionality that you expect the software to accomplish. You then run that test before writing code to confirm that it fails, then you write software to make the test pass. As soon as the test passes, you move on and repeat that step again, Black explained. That definition is the one advocated by Kent Beck and Martin Fowler, proponents of TDD.
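To make that red-green rhythm concrete, here is a minimal JUnit sketch. The PriceCalculator class and its discount rule are hypothetical, invented only for illustration and not drawn from any tool discussed in this article: the test is written first and fails, then just enough code is added to make it pass.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): write the smallest test that expresses the behavior we expect.
// Running it now fails, because PriceCalculator does not exist yet.
public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountAtOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.totalWithDiscount(100.0), 0.001);
    }
}

// Step 2 (green): write just enough production code to make the test pass,
// and nothing more. Polish and refactoring come after the bar is green.
class PriceCalculator {
    double totalWithDiscount(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}

Once the test passes, the cycle repeats with the next small slice of behavior, which is the step-by-step rhythm Beck and Fowler advocate.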
But not everyone who says they do TDD is doing it the textbook way, Black explained. "Some of my clients will talk about doing TDD and sometimes, the developers mean they are doing exactly what Beck and Fowler and some of the other advocates of TDD recognize as TDD. A lot of the time, they're doing things a little differently," Black said.
According to Black, another one of the variations in definition has to do with the size of the test. In Beck's definition, the tests are very small. But what people often do is write a test for a number of functions all at once. "I think what Beck and Fowler would say if they saw that would be 'no, that's not right. You should break it down into single tests or steps and write the tests and the code step-by-step,'" Black said.
Arthur Hicken, chief evangelist at Parasoft, explained that while TDD is often interpreted as code-based unit testing, it is also sometimes viewed as something akin to a service API in that you have something that does a service and you have a definition file. "Before you ever write code, you write this definition of what it's supposed to do, and then you can use tools against that definition to instantly generate tests," said Hicken.
Another variant that gets confused with true TDD is people who write unit tests after they write the code. Even if people are mislabeling something as TDD, maybe that's not such a bad thing, Black said. "From my point of view if I see developers producing automated unit tests and achieving 100 percent state and branch coverage and including those unit tests in their automated build and deployment systems, then I'm happy," said Black.
Hicken sees TDD as a broad spectrum. The pure side of the spectrum is where people are coming up with requirements first, coming up with a unit test, and writing exactly the right amount of code, he explained. On the other side of the spectrum are people who write unit tests at the same time as writing the code, said Hicken.
At the pure end of the spectrum, one of the benefits TDD provides is that it prevents people from over-writing code. "When you write code you're supposed to write exactly how much code you need to pass the test and not a single line more," said Hicken. "And so it keeps people from polishing the code or other things like that. You write what you're supposed to and you move on." According to Black, developers can go back in and polish the code after it has been written and the test has passed.
According to Hicken, another benefit of TDD is that it forces developers to think about the outcome before developing. The code ends up being more maintainable and testable because they are thinking about what is supposed to happen when writing the test. "It kind of changes the mindset of people because they have the end in mind when they're programming," he said.
Hicken said that TDD is most often compared to behavior-driven development. The main difference is that TDD focuses on developers, while BDD focuses on testers and requirements.
Like any new methodology or technology being brought into an organization, it needs to have buy-in from the top in order to work, Hicken explained. It's crucial to have management on board and to be able to train people to write good unit tests. "I'd argue that over the long haul, the cost of creating a unit test is much less than the cost of maintaining a unit test," Hicken said. "It's easy to create a unit test, but to create one that really answers the question, 'what does this code do? If the test passes, what does it mean? If it fails, what does it mean? And if it fails, what went wrong?'" Training developers to be able to answer these questions is key.
Even if people mislabel their efforts as TDD, that's not all bad, said Rex Black.
Arthur Hicken, chief evangelist at Parasoft: "There are a couple of big areas that we help at. And at the purest unit test level, we're looking at code unit testing. If people are doing new code, they won't use us as much as if they're looking at legacy code. Suddenly if I've got a TDD initiative, I've got some code that's new and I've written the test first, I can execute those tests inside of Parasoft's environment. But with that legacy code, we can use our unit test assistant to help create—very rapidly—unit tests that are actually good, maintainable, have a good set of stubs and mocks around them, are very easy to execute so they're meaningful and maintainable all at the same time. Now again as I mentioned earlier, I like to think about the service API as a unit, and in that case we can take our SOAtest product and read the definition and quickly produce a sequence of tests without any effort at all really. And then the developers can just take that and start coding until the APIs work. I would argue that as an API programmer, API is probably a more meaningful unit than a random individual file is. Those are probably the biggest benefits that we have there." —Jenna Sargent
Eggplant approaches testing from the user perspective for better business outcomes
What sets Eggplant apart from other testing companies is that it approaches software testing from the user's perspective. "Our goal has always been around trying to help people make software that delights their users," said Antony Edwards, CTO of Eggplant. Eggplant helps companies test all sorts of solutions, from point-of-sale terminals to vending machines to banking websites. "We test the whole user experience and we can do this in a nice SaaS platform so that you can scale up and down easily," said Edwards. Edwards believes that in true test-driven development, you cannot start with a testing approach that is centered on code rather than on the user. "What you should be validating at the end of your sprint is that you have delivered that benefit, that functionality, that story to the user. And the only way to do that properly is to come at it from the user perspective," said Edwards. Most testing solutions analyze code to ensure that it complies with the necessary specifications, Edwards explained. What Eggplant does is make certain that the software is behaving correctly and doing the things that will make the user happy. "I think testers need to start looking at the customers first and not really care that much if we comply with this or that little bit of the spec," said Edwards. "I definitely think that more testers need to be thinking, how can I drive the business outcome?" Edwards said. "And I have to say, as I talk to companies, I'm hearing more and more people thinking that way." He believes this should be the starting point for testing. The company's testing product, Eggplant AI, uses AI, deep learning, and analytics to accelerate the process of testing. "We've always had that advantage in that we're always testing from the user's perspective, but the other thing that other people have issues with in testing is the amount of effort that you have to put in." As teams move to DevOps and reduce the length of their project cycles, it is the testing aspect that falls behind and cannot keep up, Edwards explained. By bringing AI and analytics into the process, Eggplant is able to automatically generate test cases. It allows people to create
a lightweight model of their application, which can be used to create up to billions of different test cases that provide complete, comprehensive coverage of the application. Since a company cannot run billions of test cases every night, Eggplant AI also uses neural networks and deep learning to identify test cases most likely to help find defects and improve user experience. The company also recently acquired NCC Group's Web Performance division and its solution, which Edwards explained does real user and synthetic monitoring of systems. "They understand what your users are doing, how they're behaving, what journeys they're taking and the demographics of those people converting," said Edwards. They also understand the technical behavior of the website and are able to build a model that will show how that behavior influences the business outcome and customer satisfaction. "The reason we think it's a good match is that we then bring that back into the testing process. We're already focusing on the user experience and now we've got a lot more data to understand what really matters," said Edwards. According to Edwards there are three steps organizations can take to make their testing process more user-focused. First, use user analytics to focus testing efforts. For example, look at the parts of the applications being used to determine what users like and dislike. "Currently, this kind of information doesn't appear in any test strategy," said Edwards. "It should." Second, they should test from the user's perspective. The user only cares about what appears on the screen or what happens when they press a button. "Test through the eyes of the user," Edwards said. Finally, test objectives should be redefined as increasing user satisfaction and business outcomes instead of as covering requirements. "I'm seeing more and more testers saying, 'what can we really offer? We know what good looks like, we can get information about the customers, and we can be making sure that the new releases they're doing every week or two weeks are going to be a net benefit to customer satisfaction rather than a reduction,'" said Edwards. —Jenna Sargent
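For readers unfamiliar with model-based test generation, the sketch below shows the general idea in miniature: a small graph of screens and transitions stands in for the lightweight model, and random walks over it yield distinct test cases. The screens, transitions and selection logic are hypothetical; this is a generic illustration of the technique, not Eggplant AI's model format or algorithms.

import java.util.*;

// A generic, illustrative sketch of model-based test generation: a lightweight
// model of user journeys as a directed graph, from which many distinct test
// paths can be generated. Names and structure are hypothetical.
public class JourneyModel {
    static final Map<String, List<String>> MODEL = Map.of(
        "Home",     List.of("Search", "Login"),
        "Login",    List.of("Home", "Account"),
        "Search",   List.of("Product", "Home"),
        "Product",  List.of("Cart", "Search"),
        "Cart",     List.of("Checkout", "Search"),
        "Account",  List.of("Home"),
        "Checkout", List.of()                    // terminal screen
    );

    // Random walk over the model: each walk is one candidate test case.
    static List<String> generatePath(Random rng, int maxSteps) {
        List<String> path = new ArrayList<>(List.of("Home"));
        String state = "Home";
        for (int i = 0; i < maxSteps; i++) {
            List<String> next = MODEL.get(state);
            if (next.isEmpty()) break;           // reached a terminal screen
            state = next.get(rng.nextInt(next.size()));
            path.add(state);
        }
        return path;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 5; i++) {
            System.out.println("Test case " + (i + 1) + ": " + generatePath(rng, 8));
        }
    }
}

A real system would also score candidate paths, for example by likelihood of exposing a defect, rather than sampling them uniformly; that ranking step is where the neural networks and analytics described above come in.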
Intelligently Automate Your End-To-End Testing
Copyright © 2018. Eggplant and the Eggplant logo are trademarks of Eggplant in the United Kingdom, the United States, and other countries.
A guide to test-driven development tools
• Applause: Applause ensures digital experience quality for websites, mobile apps, IoT products and in-store interactions in a way no other approach can — through its crowdtesting technology platform and managed global community of over 300,000 professional and on-demand testers specializing in QA, usability, accessibility, security, automation, digital and more.
• CA Technologies: CA's comprehensive portfolio of continuous testing solutions, which includes CA Agile Requirements Designer, CA Test Data Management and CA BlazeMeter, provides the tools agile teams need to create the tests that will drive code development, ensure test data is available on-demand, automatically generate test scripts based on business requirements and automatically execute test cases to build better, higher quality apps, faster.
• CollabNet: CollabNet helps enterprises and government organizations develop and deliver high-quality software at speed. CollabNet was a Best in Show winner in the application lifecycle management and development tools category of the SD Times 100 for 14 consecutive years. CollabNet offers innovative solutions, consulting, and Agile training services.
• Micro Focus: Micro Focus' Functional Testing solutions help to deliver high-quality software while reducing the cost and complexity of functional testing. Micro Focus' solutions address the challenges of testing in agile and Continuous Integration scenarios, as well as hybrid applications, cloud and mobile platforms. ALM Octane provides insights into software, speeds up delivery, and ensures quality user experiences.
• QASymphony: QASymphony offers two integrated solutions built for TDD that help teams deliver high quality software at a rapid pace. qTest Scenario is a JIRA add-on with a Gherkin editor for collaboration around feature and scenario development. qTest Pulse is for enterprise BDD, storing your features and scenarios directly within your version control system (i.e. Git).
• Rogue Wave: The largest independent provider of cross-platform software development tools, components, and platforms in the world. With Rogue Wave Klocwork, detect security, safety, and reliability issues in real time by using this static code analysis toolkit that works alongside developers, finding issues as early as possible, and integrates with teams, supporting continuous integration and actionable reporting.
FEATURED PROVIDERS
• Eggplant: Eggplant's intelligent testing and performance suite empowers teams to continuously create amazing, user-centric digital experiences that drive positive business outcomes. Using artificial intelligence, machine learning, and analytics, Eggplant intelligent automation solutions hunt defects, and auto-generate test scripts to increase testing productivity, performance, efficiency, speed, and coverage. Eggplant solutions test the true UX, not the code, through intelligent image and text understanding, API automation, and WebDriver object automation — all within a single test.
• Parasoft: Parasoft provides innovative tools that automate time-consuming testing tasks and provide management with intelligent analytics necessary to focus on what matters. Parasoft's technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software, by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. Parasoft supports software organizations as they develop and deploy applications in the embedded, enterprise, and IoT markets. With developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today's most strategic development initiatives — agile, continuous testing, DevOps, and security.
• Sauce Labs: Sauce Labs provides the world's largest cloud-based testing platform for automated and manual testing of desktop and mobile websites and applications. Using open source frameworks such as Selenium and Appium, teams practicing TDD/BDD can test across hundreds of different browser and OS combinations on virtual machines, mobile emulators/simulators, and real mobile devices (native, hybrid and mobile web).
• SmartBear: TestComplete allows QA teams to easily create stable and maintainable automated UI tests. Access to a cloud device lab within TestComplete enables these teams to execute tests in over 1,500 environments. Other features of the tool include support for modern scripting languages, recording automated UI tests without scripting knowledge, data-driven testing, support for over 500 controls and frameworks, and out-of-the-box integration with continuous integration tools.
• TechExcel: DevTest is a sophisticated quality-management solution used by development and QA teams of all sizes to
manage every aspect of their testing processes from test case creation, planning and execution through defect submission and resolution. It aims to give teams control over product quality; enhance test standardization, reuse and revision; increase team productivity; and ensure ultimate accountability for all test phases.
• ThoroughTest: ThoroughTest offers a straightforward look at the what, how, and why of TDD. By following ThoroughTest's guide and completing the certification exam, developers can feel confident that they understand TDD all the way from acceptance criteria to a complete suite of tests.
• Tricentis: Whether your methodology calls for TDD, BDD, or ATDD, Tricentis Tosca helps you represent scenarios in a "given-when-then" style. With Tricentis Tosca's model-based test automation, you can create a concrete model, automate scenarios, scale test execution, and integrate testing into development — enabling you to deliver fast quality feedback.
• Zephyr: Project teams and enterprises use Zephyr's products to enable continuous testing throughout their software delivery pipeline to release higher quality software, faster. Zephyr's products include test management, automation integration, predictive analytics and DevOps insights. For more, please visit www.getzephyr.com.
Guest View BY MATT ELLIS
In praise of open source
Matt Ellis is a software architect at TIBCO.
When you think of little social movements that bring about big societal shifts, the first thing that comes to mind probably isn't open source. But maybe it should be. The technological revolution that is steadily digitalizing every nook and cranny of human activity obviously relies on code, and open-source code underpins much of the recent surge in innovation. Streaming movies? Digital assistants? Autonomous cars? All made possible to some degree by the open-source movement and the rapid evolution it has enabled. Access to open-source code lets us reversion, refine, enhance, and scale programs quickly and exponentially — it's a font of collective knowledge that fuels a whirlwind of computational advancement. But there was a time when code didn't flow so freely. A truncated history of open source might start in the late 1950s, when chip-based computing unleashed a cycle of invention that brought increasingly smaller, more accessible, and more useful computational equipment to government, industry, academia, and (eventually) the masses. Initially, code and hardware were pretty much inextricable, and programming knowledge was shared openly and enthusiastically amongst a relatively small circle of academics and inventive practitioners with access to the machines. As computers steadily grew in importance as business tools, the art of programming took on new meaning. With the great unbundling, "software" began to have value in its own right. (Note that prior to 1974, software did not qualify for copyright protection in the U.S.) To make a long story short, over the past 20 years, a rag-tag pocket of resistance to the likes of Big Blue and Redmond matured into a massive network of collaborative programmers with a deep-seated sharing ethos that has powered many of the transformational leaps in technology we've seen emerge in parallel. Open source is not just for academics and hobbyists (though it continues to serve both communities well). It's long since been
monetized by companies ranging from Red Hat to Docker and embraced by tech juggernauts like Google and Facebook. It's now foundational to modern programming. Heck, even former foes IBM and Microsoft are now heartily on board the open source bandwagon. GitHub proclaims that "open source software powers nearly all of our modern society and economy," and its latest user report hints at its scale: 25 million public repositories logging a billion public commits in the space of just one year. Recent surveys put business adoption of open source software at about 79%, and just last year, the White House released an official federal source code policy requiring agencies to release at least 20 percent of any new custom-developed code as open source. That's probably because collaboration is clearly a key driver of innovation. For example, open-source frameworks like TensorFlow, currently the "most-forked" project on GitHub, can claim parentage for the current boom in AI. They've helped to democratize machine learning by making it something that is implementable for numerous uses across industry and across borders. Open messaging systems such as Kafka and MQTT implementations have slowly become de facto standards. And no discussion of open source would be complete without mentioning Linux, the reigning king in production environments. Put another way, open source technologies are solving problems that proprietary software used to own — messaging, databases, frameworks, etc. This doesn't mean that all open source code and technologies are created equal: Just because something is open source doesn't mean it's a viable solution or even a good one. Plus, even when developers find worthy projects, their ability to contribute may be limited by their employers — which is something that needs to change. (GitHub's balanced IP agreement is a step in the right direction.) And even though it's now obvious that "open source won," there are those who argue that it hasn't changed the world as much as it should have by now. But I think it's been an impressive conquest. Try naming any significant computational marvel or new tech model of the past decade that hasn't been enabled by open source. You can't. For that, open source deserves our praise.
Analyst View BY PETER THORNE
Scope, silos stifle software innovation
Software innovation can be constrained by scope limitations, and the barriers created by artificial silos in existing systems and organizations. But help is at hand. This article looks at two very different approaches — the application of systems engineering, and customer-centric thinking.
Legacy sets limits
In the early days of a software development project, there are visionary ideas, novel approaches, and exciting goals. But the cold light of reality can set in at any moment — available resources and required delivery dates can be defined at a very early stage. The project adapts its thinking to meet these constraints. Both traditional and agile methods allow the project team to keep faith with their vision, and get on with the 'core' (to match the budget and schedule), while recording everything else on the WIBL (Wouldn't It Be Lovely…) list, which hopefully will feed into some later phase of the project. So the big idea which got things started isn't lost, it just gets a bit fragmented. Sometimes there are legacy issues which can be even more of a constraint — database schemas which no one wants to touch, or perceived turf wars, or tough technical mountains. These issues create even harder constraints than resources and timescale. Project teams adapt their thinking. It's not easy to think of what should go on the WIBL list. The easiest response is to redefine the project to stay entirely within the scope of control of the project team. The big idea can suffer.
Systems engineering
Systems engineering got bad press because in the early days it was a documentation-heavy methodology, which fitted a big-budget project and a waterfall approach to development. The systems engineers had to spend up-front time on extensive requirements and systems architecture documents. More "ready-aim-aim-aim-aim-…" than the agile "ready-aim-fire-what have we got-let's go again" approach. The early days are over, and systems engineering is back center stage. "Model-based systems engineering" (MBSE) replaces documentation with models. Tools can guarantee model consistency. Some models allow component performance estimates to be attached, enabling 'execution' of the model to simulate system performance and visualize some aspects of function. This approach fits agile projects. But in relation to innovation, the key role of systems engineers is in setting scope. As always, it remains the duty of the systems engineer to consider all the relevant domains which impact the system being worked on by the team. Systems engineers must optimize and prioritize requirements from all these domains. Of course, the central domain is the function and performance of this system itself. But, for a systems engineer, the domain of software development processes is also in scope, as are the domains of delivery, provisioning, deployment, operations and maintenance environments for this system. So the systems engineer has a chance to see and articulate the legacy and silo issues which constrain the innovation potential of the project. Of course this does not solve the problem, but at least it brings the issues to the surface for reasoned consideration.
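To return briefly to the idea of 'executing' an annotated model: the tiny sketch below, with hypothetical component names and numbers, shows the kind of early calculation such a model enables: per-element performance estimates accumulated along a path give a rough end-to-end figure before any code exists. Real MBSE tools go much further; this is only meant to illustrate why attaching estimates to model elements is useful.

import java.util.LinkedHashMap;
import java.util.Map;

// An illustrative sketch (not any particular MBSE tool) of "executing" a model:
// per-component latency estimates attached to model elements are summed along a
// request path to give a rough end-to-end estimate. Names and numbers are hypothetical.
public class ModelExecutionSketch {
    public static void main(String[] args) {
        Map<String, Double> latencyBudgetMs = new LinkedHashMap<>();
        latencyBudgetMs.put("API gateway", 5.0);
        latencyBudgetMs.put("Order service", 20.0);
        latencyBudgetMs.put("Inventory lookup", 35.0);
        latencyBudgetMs.put("Payment provider", 120.0);

        // "Executing" the model: walk the path and accumulate the estimates.
        double total = latencyBudgetMs.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        System.out.printf("Estimated end-to-end latency: %.1f ms%n", total);
    }
}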
Peter Thorne is director at analysis firm Cambashi.
Customer-centric thinking
Let's assume that the customer for the results of the development project will be a user who pays for the right to use the software. In this case, true customer-centric thinking means that the project team should consider the whole customer experience. This means checking for gaps in requirements by looking at the whole lifecycle experience for the customer — what will the customer see and do at every stage — discover, investigate, buy, deploy, use, maintain and upgrade the new software? This is quite a long list, especially when we all know that handling just 'use' can seem challenging. So, just like systems engineering, this doesn't solve the problem. But handling the other steps will probably mean talking to other groups in your own organization. These conversations will help identify the constraints that threaten the big idea. And who knows, your colleagues may find time, budget, resources and perhaps other ways to support the project's full innovation opportunity.
Industry Watch BY DAVID RUBINSTEIN
Flowing value into your transformation
David Rubinstein is editor-in-chief of SD Times.
Every hundred years or so, it seems, we enter some new industrial age, and the way business works must change along with it. The Industrial Revolution led to the Age of Oil, followed by the Atomic Age and now we're in the middle of a Digital Revolution. Change is always exciting, but painful, as companies that are not built to work in these new ways struggle to keep pace with the future. As project managers at last-revolution companies transition to this new digital world, the methods and processes they use must change. Mik Kersten, CEO at DevOps software tools company Tasktop, has written a book called "Project to Product: How to Survive and Thrive in the Age of Disruption with the Flow Framework." He says the key thing that organizations need to do is switch from focusing on projects to a product-oriented paradigm, and create an infrastructure to measure value against time spent creating the product and user satisfaction. "You have to look at business value from a customer point of view," he says. In the earlier days, a company could manufacture a product, or work 18 months on a software release, and that was measurable. In today's rapid delivery, cloud-based environment, he says, "You have to be able to build software sufficiently and at scale, and enterprise IT organizations haven't learned to do it." Part of the reason is that until now, the industry has not been able to agree on how to define a unit of productivity in software delivery. Metrics for measuring value delivery have been rooted in manufacturing, but don't apply to software development. "There's great ideas in manufacturing, great ideas in lean, but things are different in software. You're not producing the same widget over and over, and your product development — your creative cycle of design and entering the market and production cycle — aren't the same," he said. Furthermore, Kersten says that right now, organizations are using the wrong metrics: proxy metrics. "The common one in Agile is how Agile am I, how many people have been trained on Scrum. That tells you nothing. It's good to have
people trained on Scrum, but you don't know if you're delivering more value." Another problem is that to truly enhance flow, efforts must be made to clear bottlenecks, but those are often hard to identify in complex, large enterprises. Continuous integration and delivery are good ideas, but putting resources there when it's not a bottleneck is wasting company resources, he says. Organizations "assume if they hire more developers and implement Jenkins and Puppet, that they will have more throughput. But is that where the bottleneck is? Maybe it's having great screens and mobile apps. Maybe the bottleneck is they don't have enough designers." In the book, Kersten defines four areas of value: features, defects, risks and debts. "Each represents work that is being done at a coarse level… it might be architecture work, or API work. But at a business level, these things are just collectively exhaustive. All work is one of these, and they're mutually exclusive. If you're telling your team to focus on GDPR this quarter, you cannot expect the same rate of flow of features. Or you're trying to get to a 1.0 release, you're trading off technical debt that you'll have to pay down at some point." Companies like Facebook, Amazon, Apple, Netflix and Google (the FANGs, Kersten calls them) already get this. As an example, Kersten says Microsoft "has 3,500 people working in their development division making the tools for the network. Microsoft has its own file system for Git. The FANGs have already done this, but no one else can use it, and it's their competitive differentiator." So, he continues, "Either we can be happy giving them more and more of the world economy, or for the rest of these organizations, we need a way of creating that without having to invest the half-billion dollars a year in tooling development." Tasktop's Flow Framework was created to give a common language to technical and non-technical people to help organizations start with end-to-end flow and to start with thinking how it drives business results. Version 1.0 will be introduced at the DevOps Enterprise Summit in Las Vegas in October. All of this is just about business value from a customer point of view. They want the new features, they want a product that will keep their information secure. That's where the value lies.
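As a rough illustration of what counting those four flow items can look like, the sketch below tags each completed work item with exactly one type and reports the distribution for a period. The counts are hypothetical and the code is a toy model of the concept, not Tasktop's Flow Framework; it simply shows how a quarter dominated by a risk item such as GDPR surfaces as a smaller share of feature flow.

import java.util.*;

// A back-of-the-envelope sketch of the four flow item types: tag each completed
// work item with exactly one type and report the distribution for the period.
// Illustration of the concept only; item counts are hypothetical.
public class FlowDistributionSketch {
    enum FlowItem { FEATURE, DEFECT, RISK, DEBT }

    public static void main(String[] args) {
        // Work completed this quarter, e.g. while a GDPR (risk) push is under way.
        List<FlowItem> completed = new ArrayList<>();
        completed.addAll(Collections.nCopies(12, FlowItem.FEATURE));
        completed.addAll(Collections.nCopies(9,  FlowItem.DEFECT));
        completed.addAll(Collections.nCopies(20, FlowItem.RISK));
        completed.addAll(Collections.nCopies(4,  FlowItem.DEBT));

        Map<FlowItem, Long> counts = new EnumMap<>(FlowItem.class);
        for (FlowItem item : completed) {
            counts.merge(item, 1L, Long::sum);
        }
        for (FlowItem type : FlowItem.values()) {
            long n = counts.getOrDefault(type, 0L);
            System.out.printf("%-8s %3d items (%.0f%%)%n",
                    type, n, 100.0 * n / completed.size());
        }
    }
}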
Is bad data threatening your business?
CALL IN THE Fabulous
It's Clobberin' Time...with Data Verify Tools!
Address! Email! Phone! Name!
Visit Melissa Developer Portal to quickly combine our APIs (address, phone, email and name verification) and enhance ecommerce and mobile apps to prevent bad data from entering your systems. With our toolsets, you’ll be a data hero – preventing fraud, reducing costs, improving data for analytics, and increasing business efficiency. - Single Record & Batch Processing - Scalable Pricing - Flexible, Easy to Integrate Web APIs: REST, JSON & XML - Other APIs available: Identity, IP, Property & Business
Let’s Team Up to Fight Bad Data Today!
melissadeveloper.com 1-800-MELISSA
Discovery. Insight. Understanding.
SD Times offers in-depth features on the newest technologies, practices, and innovations affecting enterprise developers today — Containers, Microservices, DevOps, IoT, Artificial Intelligence, Machine Learning, Big Data and more. Find the latest news from software providers, industry consortia, open source projects and research institutions. Subscribe TODAY to keep up with everything happening in the ever-changing world of software development! Available in two formats — print or digital.
Sign up for FREE today at www.sdtimes.com.
Parasoft: Continuous Quality at Speed
Sparx Systems: An Essential Partner
Melissa Simplifies Data Verification
Tricentis Leads Continuous Testing
There’s a party going on right here
A celebration… of the best of the best in the software industry. The companies listed here are challenging the accepted methods of creating software with visionary ideas, creative solutions and best-of-breed tools that come raining down to us, like candy from a cracked piñata. This year's cornucopia of treats includes many of the classics, from testing tools and QA solutions to databases and libraries and frameworks. But the newer ideas are making their mark as well, from DevOps methodologies and low-code/no-code solutions to user experience and — perhaps most importantly with the cloud changing the game — security and performance. Of course, we list the companies we see as influencers on the industry. They're the ones with the big ideas, and the big money or big communities to implement those ideas. They're setting the pace and showing us where we're headed. In this special section of SD Times, we hand the stick over to some of the companies on the SD Times 100, to tell their stories in greater detail and show why they're considered leaders in the industry by the editors here at the magazine. Please take a moment to get to know them better.
Gold Sponsor – Testing
Parasoft: Continuous Quality at Speed
Many organizations have adopted Agile practices in hopes of delivering higher quality software faster, but product quality has not improved. Using Parasoft's robust portfolio of automated testing solutions, software teams can reduce effort to meet time-to-market and quality mandates simultaneously. "In the Agile world, Test has difficulty keeping up with development due to constant change. One small change often requires automated test scripts to be rewritten," said Mark Lambert, VP of Products at Parasoft. "We believe the key to creating a scalable and maintainable testing practice is to align with the testing pyramid: starting with a solid foundation of unit tests, broad functional coverage with API-level tests and focused application of end-to-end UI-driven tests." Parasoft is among the 2018 SD Times 100 for its outstanding contributions to the Testing category. Just recently, Forrester Research recognized Parasoft as a leader in The Forrester Wave: Omnichannel Functional Test Automation, Q3 2018.
Parasoft Jtest: A Solid Foundation
To ensure software quality, organizations need a multilayered automation approach that begins with developing quality code. Parasoft Jtest is an automated Java software testing and static analysis product that simplifies and accelerates the creation of JUnit test cases. It also analyzes the underlying code so developers can ensure the reliability and predictability of individual pieces of functionality. "If you want to avoid liability and security issues, you have to ensure that your code has a solid foundation that can support higher levels of test automation," said Lambert. When quality is built into code, test automation becomes test verification.
Parasoft SOAtest: Ensuring API Integrity
Parasoft SOAtest enables developers and testers to create functional API tests and understand how applications use APIs. It also improves functional test coverage by enabling the creation of tests from API contracts. "SOAtest goes beyond simple record-and-playback capabilities," said Lambert. "It uses embedded logic and AI to make sure that tests don't become more brittle as the application is changing." An important SOAtest feature is Change Advisor, which makes it easy to manage the impact of changes in the application and enables the seamless migration and refactoring of test cases as the underlying APIs change. "As the APIs evolve, changes might be relatively small," said Lambert. "But the impact on the test cases will be huge if someone has to manually change everything individually." With the ability to analyze API contracts, testers can rapidly pinpoint what has changed and seamlessly
migrate the changes to the next version. In May 2018, Parasoft announced the launch of the SOAtest Smart API Test Generator, which uses AI and machine learning to generate API tests and monitor applications as they use backend services. Lambert said AI and machine learning will play a larger role in multiple Parasoft products in the next year or so as “assistive” new product features. “We want to make traditionally human-centric processes more effective so people can quickly build maintainable test scenarios that apply intelligence beyond record and playback,” said Lambert. IoT security often fails because products have not been adequately secured and tested. While Parasoft SOAtest and Virtualize test the network layer and the system as a whole, Parasoft C/C++test ensures the integrity of embedded software.
Parasoft Virtualize: Eliminating Test Dependencies
Service virtualization has become increasingly popular for isolating the external dependencies of a test environment so
test automation can run continuously. As developers and testers move up the pyramid from unit tests to API tests to end-to-end Continuous Testing, the number of dependencies increases. "The dependencies may become unavailable or you can't get them in the state you need to test a corner case. Either way, you can't get them to give you the performance characteristics you need to emulate different performance barriers," said Lambert. "Service virtualization allows you to emulate the behavior, data characteristics and performance characteristics in a way that enables individual teams and the CI/CD pipeline." Before using Parasoft SOAtest and Virtualize, medical and dental insurance company CareFirst would freeze development whenever one of its third-party API partners announced a change. Otherwise, its software would break. Using SOAtest and Parasoft Virtualize, CareFirst has created a continuous test environment that operates despite third-party API changes, saving the company more than 9,000 hours of unplanned downtime per year. "Achieving an effective, scalable and maintainable testing strategy requires automation at every stage," said Lambert. "Parasoft's automated testing tools make each phase of testing easier and more efficient." There's a whole automated software testing world out there. Learn more at www.parasoft.com.
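The sketch below shows the core idea of service virtualization in its simplest possible form, using only the JDK's built-in HTTP server: a stand-in for an unavailable dependency that returns canned data with an artificial delay so tests can keep running. The endpoint and payload are hypothetical, and this is a generic illustration of the technique rather than how Parasoft Virtualize is implemented or configured.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A bare-bones sketch of the service virtualization idea: emulate an unavailable
// dependency with a canned response and artificial latency. Generic illustration
// only; endpoint and payload are hypothetical, not Parasoft Virtualize.
public class DependencyStub {
    public static void main(String[] args) throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(8089), 0);
        stub.createContext("/claims/status", exchange -> {
            try {
                Thread.sleep(250); // emulate the partner API's typical latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "{\"claimId\":\"C-123\",\"status\":\"APPROVED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        stub.start();
        System.out.println("Virtual service listening on http://localhost:8089/claims/status");
    }
}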
Gold Sponsor – Database and Database Management
Melissa Simplifies Data Verification
As applications become increasingly data-intensive, developers need faster, easier, more timely access to accurate customer data. Toward that end, Melissa, a leading provider of global contact data quality and identity verification solutions, recently introduced open-source low-code and commercial tools. It also created a new division that uses semantic technology and machine reasoning to identify relationships in data that were previously undiscoverable. "We support developers with tools that are easy to access and use," said Bud Walker, VP of Enterprise Sales at Melissa. "We have a smart, sharp tool approach so developers can just get the address verification they need or take advantage of our matching and merge/purge capabilities. There's no need to purchase a monolithic, enterprise-level MDM platform to take advantage of our capabilities." Melissa is among the 2018 SD Times 100 for its outstanding contributions to the Database and Data Management category.
Listware Desktop Provides an Open Source Option
Listware Desktop is an open-source low-code tool that enables developers to customize data cleaning workflows. Because it's open-source, developers can version it as their needs dictate or create their own version of the tool on GitHub. Also, instead of paying for a product up front, developers can use Listware Desktop free and only pay for individual transactions. "Listware Desktop houses a number of our web services within a software interface," said Walker. "Developers can use it for low-level data cleansing to see how our tools work before doing a more involved integration into their custom applications." With Listware Desktop, developers can quickly access Melissa Web services to validate and correct data, interpret result codes and understand how production data can be enhanced with data quality routines. The tool cleans and enriches people and business data by verifying, updating and standardizing global address, email, phone, and name data. Its color-coded reports make it easy to identify how many contact data elements were verified, corrected, or bad and what changes were made to the original data. Melissa's new Developer Portal also provides developers with easy access and onboarding of its Web APIs through Swagger UI. "Now our developer community can easily interact with many of our data quality and enrichment Web APIs for testing and easy implementation — and scale successfully as business needs grow and evolve," Walker said.
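For developers wondering what the on-ramp looks like, here is a deliberately generic sketch of calling an address-verification web API over HTTP. Every host, path, parameter and key in it is a placeholder invented for illustration; the actual Melissa Web API endpoints, parameters and result codes are documented in the Developer Portal rather than reproduced here.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// A hedged sketch of calling an address-verification web API from code. The host,
// path, parameters and license key are placeholders; see the vendor's Developer
// Portal and Swagger UI for the real contract.
public class AddressCheckSketch {
    public static void main(String[] args) throws Exception {
        String address = URLEncoder.encode("22382 Avenida Empresa", StandardCharsets.UTF_8);
        String url = "https://api.example.com/address/verify"   // placeholder host and path
                + "?license=YOUR_KEY&format=json&a1=" + address
                + "&city=Rancho+Santa+Margarita&state=CA";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // In a real integration the JSON would be parsed and the service's result
        // codes interpreted; here we simply print the raw payload.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}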
UNISON – Next Gen Data Quality Platform: Fast, Easy, Secure
UNISON consolidates all of Melissa's on-premises tools into a single platform so organizations can verify and correct data faster, easier and more securely. With it, users can perform complex
data quality tasks across multiple RDBMSs, schedule jobs, collaborate on projects and take advantage of powerful data visualizations to thoroughly analyze their datasets. UNISON reduces implementation time and completely eliminates development time. Administrators can simply install the product on a dedicated server and then integrate UNISON with the company's LDAP system to use preexisting logins or create UNISON account logins for data stewards to use. "UNISON is massively scalable," said Walker. "You can easily install it across multiple servers so you can scale horizontally and vertically." UNISON corrects and validates U.S. and Canadian addresses, appends latitude and longitude coordinates and Census data, validates and standardizes email addresses and phone numbers, and validates and parses full names. "We wanted to make something that was easy for database administrators to
use," said Walker. "A subscription includes access to the datasets you need, and it allows multiple users to script jobs that provide data access at different levels as required for different types of functionality. You can review individual projects, check their status and monitor how you're improving data quality over time." UNISON is valuable for regulated companies and other organizations that need to guard against sensitive data leakage.
Melissa Leverages Machine Reasoning
Melissa recently acquired semantic technology firm IO Informatics to improve data quality even further with machine reasoning. The resulting operating division, Melissa Informatics, uses semantic technology to uncover deeper data connections within complex, changing data. "Melissa Informatics provides a faster, easier path to data merging and compatibility with all types of standards," Walker said. "It harmonizes data to find patterns and relationships that were previously unrecognizable." Melissa Informatics' fuzzy matching and record linkage capability provides entirely new insights. It enables deep dives into CRM or MDM, or even compares multiple customer view platforms to reduce the possibility of false negatives. Melissa Informatics and all Melissa offerings conform to international laws and regulations including GDPR, so its customers can use whatever solution they choose with confidence. Learn more at www.melissa.com.
Gold Sponsor – Development Tools
Sparx Systems: An Essential Partner
Today's IT teams need to deliver business-relevant capabilities in Agile, scalable and reusable ways. Using Sparx Systems' tools and services, architects, solution developers, business analysts and others can translate business requirements into working systems that advance business objectives. "Teams need a single, holistic view of the enterprise and its endeavors," said Geoffrey Sparks, Sparx Systems founder and CEO. "Enterprise Architect provides that insight from a wide range of viewpoints including strategic, operational, development, and integration." Sparx Systems is among the 2018 SD Times 100 for its continued innovation in the Development Tools category. Enterprise Architect, Sparx Systems' flagship product, is a visual modeling and design platform used by more than 740,000 users worldwide, including 81% of Fortune 100 Global firms and more than 210 American government departments operating at the county, state and federal levels.
Sparx Services Provides Expert Guidance
Software is now an integral part of an organization's ecosystem, vision, strategy and goals. Sparx Services provides organizations with the tools, skills, training and consulting they need to become more productive and have greater impact. Sparx Services also leads the worldwide partner ecosystem of more than 160 companies. The regional service organizations partner with global engineering and support teams so customers always have access to the resources they need. "Sparx Services helps tailor Enterprise Architect so modeling teams can get the best benefit from the modeling approach," said Sparks. "We help you deal with the modeling standards your business needs and support you through the challenges." Customers taking advantage of Sparx Services are able to identify the proper model representation for their question so they can avoid over-engineering or under-engineering. "The challenges our customers face can't just be solved by a set of tools. Skills, training and consulting are part of the solution as well," said Sparks. "Sparx Services allow us to engage with customers more deeply. In fact, our employees spend 400 to 500 man-days per year at a customer site to help them find their modeling approach and drive as much value as they can with Enterprise Architect." For example, the European Union's General Data Protection Regulation (GDPR) has recently driven the need for Enterprise Asset Management (EAM). "A lot of companies have failed at EAM because they approached it as a project when it is really a journey for the entire company to produce only the information actually required to fulfill the company's needs," said Sparks. "Sparx Services and Enterprise Architect enable you to focus on what matters."
Align IT and the Business
Enterprise Architect has continued to evolve with software trends since its first commercial release in 2000. Organizations use the platform to model and visualize complex systems from the enterprise level down to the engineering and software levels. The latest version, Enterprise Architect 14, and Pro Cloud Server together enable the sharing of architectures, designs and modeled content with a broader audience than just IT. Pro Cloud Server is a central hub for connecting and synchronizing the views to a wide range of other tools including Jira, Doors, TF and more. “IT leaders have been talking about digital transformation over the past couple of years, and a big part of that is changing the way technologists work with other parts of the company,” said Sparks. “With Pro Cloud Server, the rich wealth of information created by architects and software development teams can be shared with non-technical audiences, enabling the company to leverage diverse ideas from across the organization.”
Enterprise Architect is an enterprise-grade platform that integrates vertically into PLM and horizontally into ALM. Its unique capabilities enable users to create their own Model Driven Generation (MDG) technology with a meta modeling standard, or even a combination of standards, to ensure complete modeling support throughout the enterprise. It can be tailored to support any kind of customer process and integrate specific customer rules and guidelines. "Enterprise Architect enables you to model the connective tissue of your organization," said Sparks. "In order to be Agile and move quickly, you need to have the architectures and roadmaps to guide you to where you want to go." Enterprise Architect supports many industry standards, including TOGAF, SysML, BPMN, BIZBOK, BABOK, DoDAF, NIEM, and others, each of which provides a framework software teams can use to accelerate delivery of business capabilities. Its powerful breadth and depth of capabilities can be adapted to suit the unique requirements of individual groups, simply by hiding features that are not relevant. Conversely, Enterprise Architect allows the addition of new functionality and windows, and new wizards capable of supporting any custom process. That way, when an event occurs, model changes can be validated, and users can follow a predefined process that avoids mistakes. Learn more at www.sparxsystems.com.
The Gartner Magic Quadrant for Software Test Automation is here! Download your copy now!
www.tricentis.com/GartnerSDTimes
Tricentis
Gartner recognizes Tricentis as a Continuous Testing Leader
Gold Sponsor – Testing
Tricentis Leads Continuous Testing
Agile and DevOps change the game for software testing. It's not just a matter of accelerating testing—it's also about fundamentally altering the way that quality is measured. Agile requires teams to test faster and earlier. And DevOps demands a more deep-seated shift. The test outcomes required to drive a fully-automated release pipeline are dramatically different than the ones that most teams measure today. "Sooner or later, Agile and DevOps teams realize they can't achieve continuous delivery or continuous deployment without continuous testing," said Wayne Ariola, chief marketing officer at Tricentis. "However, the path to that goal is often unclear—especially for enterprise organizations faced with complex legacy architectures, stringent compliance requirements and a long history of manual testing." To deliver the rapid feedback and proactive quality engineering that accelerates testing while protecting the business, the differences between automated testing and continuous testing must be understood and applied. As a continuous testing leader, Tricentis is among the 2018 SD Times 100 for its outstanding contributions to the Testing category.
How Automated Testing and Continuous Testing Differ
Continuous testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. "Test automation is designed to produce a set of pass/fail data points correlated to user stories or application requirements," said Ariola. "Continuous testing focuses on business risk and whether the software should be released. To successfully shift from test automation to continuous testing, we need to stop asking, 'Are we done testing?' and instead concentrate on whether the release candidate has an acceptable level of business risk." Of course, continuous testing takes advantage of automated testing, but test automation alone cannot keep pace with changing business and technology requirements. For example, unlike traditional test automation, continuous testing requires the alignment of testing with business risk. Continuous testing also applies service virtualization and stateful test data management to stabilize testing as necessary for continuous integration. In addition, continuous testing enables exploratory testing that exposes "big block" issues early in each iteration, which automated testing alone cannot do. "Continuous testing is not just about more or different tools, it transforms processes and the way people work," said Ariola.
How Automated and Continuous Testing Compare
Automated testing and continuous testing differ in three important ways: risk, breadth and time.
Business risk increases as organizations innovate faster for competitive advantage. The more functionality they expose to users, the greater the number, variety and complexity of potential failure points. Most automated tests provide low-level details about whether user stories correctly implement requirements. Continuous testing also ensures high-level assessments about the viability of a release candidate. The breadth of testing also differs. Just knowing that a unit test failed or a UI test passed doesn't reveal whether the overall user experience has been impacted by recent application changes. Continuous testing ensures that tests are broad enough to detect when an application change inadvertently impacts functionality on which users rely. Finally, time is of the essence. As software release cycles continue to diminish, fast feedback isn't enough. To minimize the risk of faulty software reaching an end user, software
teams must be able to process instantaneous feedback, which continuous testing enables. “Continuous testing offers five clear advantages over automated testing alone,” said Ariola. “It allows you to assess business risk coverage, helps you protect the user from software defects and provides a stable test environment that’s available on demand. Continuous testing also seamlessly integrates into the software delivery pipeline and DevOps toolchain and delivers actionable feedback appropriate for each stage of the delivery pipeline.”
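To make that contrast concrete, the sketch below shows one simplified way a pipeline stage could gate on business-risk coverage instead of raw test counts. The requirements, weights and threshold are hypothetical, and the code is a toy model of the idea Ariola describes, not a representation of how Tricentis Tosca computes risk coverage.

import java.util.List;

// An illustrative sketch of "risk coverage" as a release gate: each requirement
// carries a business-risk weight, and the gate asks what share of total risk is
// covered by passing tests, rather than how many test cases ran. Toy model only;
// requirements, weights and threshold are hypothetical.
public class RiskCoverageGate {
    static class Requirement {
        final String name;
        final double riskWeight;   // relative business impact x likelihood
        final boolean coveredByPassingTest;
        Requirement(String name, double riskWeight, boolean covered) {
            this.name = name;
            this.riskWeight = riskWeight;
            this.coveredByPassingTest = covered;
        }
    }

    public static void main(String[] args) {
        List<Requirement> reqs = List.of(
            new Requirement("Checkout payment", 40, true),
            new Requirement("Account login",    25, true),
            new Requirement("Order history",    10, false),
            new Requirement("Promo banners",     5, false));

        double total = reqs.stream().mapToDouble(r -> r.riskWeight).sum();
        double covered = reqs.stream().filter(r -> r.coveredByPassingTest)
                             .mapToDouble(r -> r.riskWeight).sum();
        double coverage = 100.0 * covered / total;

        System.out.printf("Business risk covered: %.0f%%%n", coverage);
        if (coverage < 80.0) {                      // threshold agreed with the business
            System.out.println("Gate result: DO NOT RELEASE");
        } else {
            System.out.println("Gate result: OK to promote");
        }
    }
}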
Why Move to Continuous Testing?
Reinventing testing is an untapped opportunity to accelerate the delivery of innovative software. Most organizations today have already invested considerable time and resources in reinventing their development and delivery processes. However, testing is commonly overlooked in these transformations. "Yes, this is a problem—but it's also an opportunity," explained Ariola. "You can further accelerate delivery speed by shifting your attention to testing. By transforming testing to an automated, continuous process, you better align your quality process with Agile, DevOps and other digital transformation initiatives. Plus, you also reduce business risk and free up significant budget that can be reallocated to underfunded innovation initiatives." Learn more at www.tricentis.com.
Silver Sponsor – DevOps
Free Continuous Deployments for All
DeployHub unleashes rapid and safe continuous deployments for agile teams with freely available open source software. Monolithic software deployments are an old habit based on waterfall practices. They create a barrier to achieving the full promise of agile, the point where software end users can safely receive new innovation on a high frequency basis. DeployHub breaks that barrier with iterative continuous deployments designed for high performing software development teams.
OpenMake Software democratized continuous deployments when it spun off DeployHub Inc. to support the DeployHub Open Source Project. "We saw the barrier to achieving the full promise of agile was the lack of open source or even affordable continuous deployment tooling," explains Tracy Ragan, CEO of DeployHub and OpenMake Software. "We responded by launching the DeployHub Open Source Project based on the core of our commercial DeployHub solution. DeployHub is committed to making sure every Agile development team can achieve continuous deployments, even when they have zero budget authority." In addition, a SaaS offering was just released so developers can begin using DeployHub Open Source for continuous deployments immediately.
The DeployHub Open Source Project is focused on features critical to microservices and container management with an eye on Kubernetes and Istio. Its agentless technology can easily support mixed environments from physical servers to cloud and containers.
DeployHub's most interesting feature is its ability to perform iterative deployments based on a unique back-end versioning engine. For each software release, DeployHub versions your entire software deployment stack including infrastructure, environment variables and database updates. Each time a deployment is executed, DeployHub only releases the changes. Rollbacks, roll forwards or even version jumps are done incrementally, creating a rapid and safe method for deploying new code to any environment. In addition, DeployHub includes plugins for common tools such as Jenkins, GitHub, Jira, Ansible and databases so Agile teams can mature their CI/CD process to full continuous deployment without sacrificing their favorite tools and existing work.
"Continuous deployment is the process of making innovation visible to users — right now," said Tracy Ragan. "One of our customers, a large government agency, reported that their releases went from seven to eight hours down to an easy five minutes with DeployHub."
OpenMake was selected as one of the 2018 SD Times 100 for its innovative contributions to the DevOps category. For more information go to www.DeployHub.com or www.DeployHubProject.io.
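As a rough sketch of what releasing only the changes can mean in practice, the example below diffs two versions of a deployment stack and applies just the entries that differ. The component names and versions are hypothetical, and this is a generic illustration of delta-based deployment rather than DeployHub's actual versioning engine or data model.

import java.util.LinkedHashMap;
import java.util.Map;

// A small sketch of the incremental idea: treat a release as a versioned map of
// everything in the deployment stack, and compute only what changed between two
// versions. Generic illustration; names and versions are hypothetical.
public class DeploymentDeltaSketch {
    public static void main(String[] args) {
        Map<String, String> v1 = new LinkedHashMap<>();
        v1.put("cart-service",   "2.3.0");
        v1.put("search-service", "1.8.1");
        v1.put("DB_SCHEMA",      "041");
        v1.put("FEATURE_FLAGS",  "recs=off");

        Map<String, String> v2 = new LinkedHashMap<>(v1);
        v2.put("cart-service",  "2.4.0");      // only these two entries changed
        v2.put("FEATURE_FLAGS", "recs=on");

        // The "deployment" for v2 is just the delta against what is already running.
        v2.forEach((key, value) -> {
            if (!value.equals(v1.get(key))) {
                System.out.println("apply " + key + " -> " + value);
            }
        });
    }
}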