MARCH 2020 • VOL. 2, ISSUE 033 • $9.95 • www.sdtimes.com
Instantly Search Terabytes

dtSearch's document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data

Over 25 search options, including:
• efficient multithreaded search
• easy multicolor hit highlighting
• forensics options like credit card search

Developers:
• SDKs for Windows, Linux, macOS
• Cross-platform APIs for C++, Java and .NET, with .NET Standard / .NET Core
• FAQs on faceted search, granular data classification, Azure, AWS and more

Visit dtSearch.com for:
• hundreds of reviews and case studies
• fully functional enterprise and developer evaluations

The Smart Choice for Text Retrieval® since 1991
dtSearch.com 1-800-IT-FINDS

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlewkowicz@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer jsawyer@d2emerge.com

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com
LIST SERVICES Jourdan Pedone jpedone@d2emerge.com
REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com
Contents
VOLUME 2, ISSUE 33 • MARCH 2020

NEWS
4   News Watch
6   Java celebrates 25th anniversary and release of Java 14
18  CollabNet VersionOne and XebiaLabs merge to form integrated Agile DevOps platform
18  Report uncovers value stream's impact on software delivery

FEATURES
page 10  To serve man: Machines that use augmented intelligence are a recipe to help, not replace, human workers
page 14  Software 3.0: Enterprise AI Systems and the Brave New Economy
page 20  How to get DevSecOps right (the last of three parts)
page 28  Cyber insurance: A crucial part of any cybersecurity strategy
page 30  Transitioning to SRE

COLUMNS
32  GUEST VIEW by Eric Naiburg: Don't use velocity as a weapon
33  ANALYST VIEW by Michael Azoff: The climb to quantum supremacy
34  INDUSTRY WATCH by George Tillmann: Planning for the perfect
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.
NEWS WATCH

Angular 9.0 released with Project Ivy
The Angular team has announced the latest major release of its web application framework. Angular 9.0 features updates to the framework, Angular Material and CLI. In addition, this release makes the Ivy compiler and runtime the default as well as improves testing components. Ivy is the framework's next-generation compilation and rendering pipeline. According to the team, the implementation of Ivy will significantly reduce the size of applications. In addition to switching apps to Ivy by default, the 9.0 release adds new bug fixes and improvements such as smaller bundle sizes, faster testing, better debugging, and improved CSS class and style binding, type checking, and build errors.
Kotlin becomes second most popular JVM language
Java still holds a large majority, with 86.9% of developers saying they use it as their main programming language for building JVM applications, compared to 5.5% with Kotlin. Kotlin overcame both Scala and Clojure, which were at 2.6% and 2.9%, respectively, to obtain the number two spot. According to Snyk, which released the annual JVM ecosystem report, the surge in Kotlin adoption is not surprising. Kotlin seamlessly integrates with Java, and the adoption of Kotlin in frameworks such as Spring Boot makes it easier to build production-grade systems.
Swift's roadmap to version 6
The Swift programming team wants to pursue new frontiers as it looks to version 6 of the programming language. According to the team, it has reached critical milestones of maturity over the last couple of versions, making it possible for users to invest in using Swift. For instance, the arrival of ABI and module stability has enabled the creation of stable binary frameworks, and the Swift Package Manager, which has integrated support both in Xcode and other IDEs, provides a cross-platform solution for building and distributing Swift libraries. The major areas that the development team is looking into include creating faster builds, more informative and accurate diagnostics, responsive code completion, and a reliable and fluid debugging experience.
Atlassian updates Jira's roadmap capabilities
Atlassian is making another major update to its issue and project tracking software Jira, tackling what it says is one of the most popular user features: the native roadmap. According to the company, when it launched the next-generation Jira Software experience last year, 45% of users were using the roadmap feature within a month of its launch — making it the most rapidly adopted feature in the solution's history. Some of the new features to the roadmap include a progress bar, dependency mapping, Confluence integration, hierarchy levels, and filters.
Flexera acquires Revulytics
In an effort to help software companies better understand how their products and solutions are being used, Flexera has acquired the software usage analytics provider Revulytics. In a recent report from Flexera, the company found that while companies who understand usage are more confident in the value they bring, only 35% of companies are able to obtain that usage data. Flexera hopes the addition of Revulytics will provide more insight into the usage of products with compliance data analytics, user behavior and telemetry, and in-app messaging. Key capabilities Revulytics will bring to Flexera include:
• Compliance intelligence: the ability to see where pirated versions of software are being used
• Usage intelligence: insight into feature usage, customer behavior analysis and dashboards with analytics and telemetry.

FSF calls on Microsoft to open source Windows 7
The Free Software Foundation (FSF) is calling on Microsoft to open source Windows 7 now that it has reached End of Life. Windows 7 reached End of Life on January 14, 2020. The foundation hopes that if Windows 7 is open sourced, it will give the community the opportunity to study, modify, and share it. It also stated two other demands of Microsoft: "We urge you to respect the freedom and privacy of your users — not simply strongarm them into the newest Windows version. We want more proof that you really respect users and user freedom, and aren't just using those concepts as marketing when convenient."
New open-source projects focus on Kubernetes security
Kubernetes security company Octarine has announced two new open-source projects designed to protect against cloud-native security vulnerabilities. The Kubernetes Common Configuration Scoring System (KCCSS) is a framework for rating security risks, and kube-scan is a workload risk assessment tool. According to the company, KCCSS is similar to the Common Vulnerability Scoring System. The difference is KCCSS focuses on configuration and security settings. Configurations can be insecure, neutral or critical for protection and remediation, Octarine explained. The project is designed to score risks and remediations and then calculate risk for every runtime setting from 0 (no risk) to 10 (high risk). In addition, KCCSS shows
the potential impact through confidentiality, integrity and availability. Kube-scan is based on KCCSS and analyzes more than 30 security settings and configurations including privilege levels, capabilities and Kubernetes policies.
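As a rough sketch of how that kind of per-setting scoring could work (the categories, weights and class names below are illustrative assumptions, not Octarine's published formula or API), each runtime setting gets a 0-to-10 score driven by whether it is insecure and which of confidentiality, integrity and availability it can affect:

```java
import java.util.List;

// Illustrative KCCSS-style scoring sketch. The categories, weights and names
// are assumptions for demonstration, not Octarine's actual formula or API.
// Requires Java 16+ for records.
public class RiskScoring {

    enum Category { INSECURE, NEUTRAL, CRITICAL_FOR_PROTECTION }

    // One runtime setting plus the security properties it can affect if misconfigured.
    record Setting(String name, Category category,
                   boolean confidentiality, boolean integrity, boolean availability) { }

    // Score a single setting from 0 (no risk) to 10 (high risk).
    static int score(Setting s) {
        if (s.category() != Category.INSECURE) {
            return 0; // neutral settings and protective remediations add no risk themselves
        }
        int impacted = (s.confidentiality() ? 1 : 0)
                     + (s.integrity() ? 1 : 0)
                     + (s.availability() ? 1 : 0);
        return Math.min(10, 4 + 2 * impacted); // more affected properties, higher score
    }

    public static void main(String[] args) {
        // A kube-scan-like tool would evaluate dozens of such settings per workload.
        List<Setting> workload = List.of(
            new Setting("privileged container", Category.INSECURE, true, true, true),
            new Setting("readOnlyRootFilesystem enabled", Category.CRITICAL_FOR_PROTECTION, false, true, false),
            new Setting("default service account mounted", Category.INSECURE, true, false, false));

        int worst = workload.stream().mapToInt(RiskScoring::score).max().orElse(0);
        System.out.println("Workload risk score (0-10): " + worst);
    }
}
```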
Go 1.15 planning is underway
Go 1.14 is on track to be released in February, but the Go team is already planning ahead to the next release. Go 1.15 is scheduled for August, and the team is currently considering what library and language changes to include. According to the team, the primary goals for Go are package and version management, better error handling support, and generics. The team also stated that module support is in good shape and constantly improving, as is its work on generics. They also revealed that they aren't pursuing changes to error handling at the moment.
GNU Guile 3.0.0 now available
GNU Guile, a programming and extension language for the GNU Project, is now available as version 3.0.0. According to the team, this is the first release in the stable 3.0 release series. The major new feature in this version is just-in-time (JIT) native code generation, which helps speed performance. In this release, microbenchmark performance is twice as good as the 2.2 release, and some individual benchmarks have seen improvements up to 32 times as fast, according to the project maintainers.
Microsoft releases Application Inspector source code analyzer
Microsoft announced a source code analyzer called Microsoft Application Inspector that can help developers identify "interesting" features and metadata. The tool is a cross-platform CLI that can produce output in multiple formats including JSON and interactive HTML. Application Inspector can help identify high-risk components and unexpected features that require additional scrutiny, under the theory that a vulnerability in a component that is involved in cryptography, authentication, or deserialization would likely have higher impact than others.
Google Cloud acquires AppSheet
Google Cloud is looking to make it easier for business users to create and improve applications without needing coding knowledge. To do so, the company is acquiring no-code platform AppSheet. Google believes that AppSheet will complement its strategy to "reimagine the application development space." Customers will now be able to create richer applications using Google technologies such as Sheets, Forms, Android, Maps, and Analytics. According to AppSheet, their services will continue to exist, but will grow once integrated with Google Cloud. Also, their team will join Google Cloud, the platform will stay cross-platform, and their core mission of democratizing app development remains the same, AppSheet's CEO Praveen Seshadri explained. z
People on the move
• Enterprise data cloud company Cloudera announced Robert Bearden as its new president and CEO. Bearden is an experienced enterprise software executive who co-founded and led Hortonworks, the company that merged with Cloudera in 2019. He also served as president and CEO of SpringSource until it was acquired by VMware in 2009.
• Dion Cornett has been named president of Datical. Cornett is a seasoned open source executive who has worked at Red Hat, MariaDB and ReachForce. "The company's ongoing efforts to support the open source community, namely through Liquibase and Liquibase Pro, are changing the game when it comes to how organizations are bringing DevOps to the database. I'm looking forward to seeing what this year brings in terms of new products and updates that will leverage the power of open source to make deployments easier for everyone involved in the database release process," said Cornett.
• The IBM board of directors has elected a new CEO to lead it into "the next era." The company's senior vice president for cloud and cognitive software, Arvind Krishna, will replace Ginni Rometty as CEO. The switch will become effective April 6, 2020. Rometty will continue as IBM's executive chairman of the board until the end of the year, at which point she will retire.
• Oracle has hired a prominent AWS executive as its new chief marketing officer. Ariel Kelman was previously Amazon's vice president of worldwide marketing, where he was responsible for the marketing of the company's cloud computing business. Prior to that, he was head of worldwide marketing at AWS, and vice president of platform product marketing at Salesforce.
• Sauce Labs has appointed Justin Dolly as chief security officer. Dolly is a security industry veteran with more than 20 years of experience. At Sauce Labs, he will work to develop, implement and enforce long-term security strategies as well as ensure its customers are protected.
• Stack Overflow is bolstering its leadership team with the addition of Teresa Dietrich. Dietrich will serve as the company's chief product officer. "I have long been fascinated and impressed by the community and collaboration platform that Stack Overflow has built for technologists," said Dietrich. "I am eager to leverage my passion and experience with the amazing team at Stack Overflow in the next stage of their journey. I am so excited and energized by Stack Overflow's huge potential to expand the scope and scale of their impact on technologists' careers, and champion community growth and inclusion due to the ever-increasing demand for technology talent." z
Java celebrates 25th anniversary following release of Java 14
BY JENNA SARGENT

The two major Java releases are often the biggest news for the Java community each year, but this year brings another thing for the Java community to celebrate. This month brings the latest Java release, JDK 14, but in May, the programming language will celebrate its 25th anniversary.

The first Java release was on May 23, 1995. It was initially developed at Sun Microsystems by James Gosling. According to Rich Sharples, senior director of product management at Red Hat, as with any successful technology, there was a lot of luck in timing that contributed to its success. The language emerged as the dot-com boom was starting, and compared to languages like C++ and C, it was a very well-designed language. Sharples said that early on, Java was an easy language to read, which makes it a safe choice since readability is important for long-term maintenance.

Its success during the dot-com boom can be attributed to the fact that it was built with the network in mind, Sharples explained. For example, it had native primitives for emerging internet protocols, such as HTTP. For these reasons, developers had an interest in Java early on, but there was interest in Java from an enterprise perspective as well. It had a strong compatibility guarantee, making it appealing to companies. Around the language sprang up a whole community of vendors offering commercial support for Java — another benefit for organizations.

While the language started off at Sun Microsystems, it has since been open-sourced. Sun Microsystems decided to open source Java in 2006, three years prior to the company being acquired by Oracle, who followed through on opening it up. According to Sharples, this gave the language a big boost because it allowed collaborators to come in and help build the language.

Sharples believes that Sun and Oracle both have very different approaches when it comes to openness. At Sun, Java was a standalone business and there weren't really any aspirations of making much money off it. They also provided developers with free access to the JVM and supported Java for four or five years without developers needing to buy support for it.
When Oracle took over, innovation in Java didn't slow down, but they did put more pressure on developers to find a company to support their use of Java. "That was one of the major changes I think was a little bit negative in the market," said Sharples. "I think in terms of innovation and how they manage the open source project, it's pretty much continued as it did under Sun Microsystems."

According to Sharples, another big boost for Java came from the fact that it was used for Android development. "They all of a sudden had lots more Java developers, developers learning Java because they want to code Android applications, so that really...kept the ecosystem growing at a time when it probably would have flattened out," said Sharples. "I think if you were to remove those two events, the open-sourcing and the use of Java in Android, we could be having a different conversation right now."

The most recent big change to Java happened in 2017 when Oracle decided to majorly change its release schedule. Instead of releasing a new version of Java every few years, they would now release a major version every six months. And every few years, one of those releases would be selected as a long-term support (LTS) release. The latest long-term support release was Java 11, which came out in September 2018.

Sharples believes this shorter release cycle has been a good thing, though it has taken developers some time to get used to this new release cadence. "There's a lot of change coming pretty much on a constant basis with the JDK updates. They're fairly frequent … People who are running serious applications on Java really do need to understand that only the LTS or long-term supported version are what they should be ultimately deploying on unless they're willing to change as rapidly as the JDK releases," said Sharples.

Even though Java is 25 years old, Sharples believes this is still pretty young, given that languages like Python, C++, and C are much older. And it's still one of the main languages being taught at universities and colleges globally, he said. "My guess is the last Java programmer probably hasn't even been born yet," said Sharples. z

Java 14
Java 14 features a number of changes to the language. Here is a breakdown of upcoming changes:
JEP 305: Pattern Matching for instanceof (Preview): Pattern matching allows common logic to be expressed "concisely and safely." According to OpenJDK documentation, the motivation for introducing this feature is that there are currently only ad-hoc solutions for pattern matching and they felt it was "time for Java to embrace pattern matching."
JEP 343: Packaging Tool (Incubator): This tool can be used to package self-contained Java applications.
JEP 345: NUMA-Aware Memory Allocation for G1: This will improve G1 performance on large machines.
JEP 349: JFR Event Streaming: This allows for continuous monitoring of JDK Flight Recorder data.
JEP 352: Non-Volatile Mapped Byte Buffers: This release adds new file mapping modes that allow the FileChannel API to be used to create MappedByteBuffer instances that refer to non-volatile memory.
JEP 358: Helpful NullPointerExceptions: Now NullPointerExceptions generated by the JVM will describe which variable was null.
JEP 359: Records (Preview): Records provide a syntax for declaring classes that act as transparent holders for immutable data. One of the main complaints about Java is that it is too verbose, especially when it comes to classes. According to OpenJDK documentation, developers sometimes try to cut corners, which results in issues down the line.
JEP 361: Switch Expressions: Now switch can be used either as a statement or an expression. This will simplify everyday use of Java, and lay the groundwork for pattern matching in switch. This feature was previously available as a preview in JDK 12 and JDK 13.
JEP 362: Deprecate the Solaris and SPARC Ports: These will be deprecated in this release, and in a future release will be removed completely.
JEP 363: Remove the Concurrent Mark Sweep (CMS) Garbage Collector (GC): The CMS Garbage Collector was deprecated over two years ago so that attention could be focused on improving other collectors. Since then, two new collectors have been introduced: ZGC and Shenandoah. The team believes that it is now safe to remove, and expects that future improvements to other garbage collectors will further reduce the need for CMS.
JEP 364 and 365: ZGC on macOS and Windows: The ZGC garbage collector has been ported to both macOS and Windows.
JEP 366: Deprecate the ParallelScavenge + SerialOld GC Combination: According to the team, this combination is rarely used, but requires significant maintenance. They believe it is only useful in deployments that combine a very large young generation GC and very small old generation GC.
JEP 367: Remove the Pack200 Tools and API: These tools were deprecated in Java SE 11.
JEP 368: Text Blocks (Second Preview): This will add text blocks, which are multi-line string literals that don't need escape sequences, automatically format strings in a predictable way, and give developers control over the format.
JEP 370: Foreign-Memory Access API (Incubator): This API will allow Java programs to safely and efficiently access foreign memory outside of the Java heap. z
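A small snippet shows how several of those changes look in practice. This sketch is not from the Java 14 release notes; it combines records (JEP 359), pattern matching for instanceof (JEP 305), switch expressions (JEP 361) and text blocks (JEP 368), and the preview features require compiling and running with the --enable-preview flag on JDK 14:

```java
// Compile and run on JDK 14 with:
//   javac --release 14 --enable-preview Shapes.java
//   java --enable-preview Shapes
public class Shapes {

    // JEP 359 (preview): a record is a transparent, immutable data carrier.
    record Point(int x, int y) { }

    static String describe(Object obj) {
        // JEP 305 (preview): instanceof with a binding variable, no explicit cast needed.
        if (obj instanceof Point p) {
            // JEP 361: switch used as an expression that yields a value.
            return switch (Integer.signum(p.x() * p.y())) {
                case 1  -> "x and y have the same sign";
                case -1 -> "x and y have opposite signs";
                default -> "the point lies on an axis";
            };
        }
        return "not a point";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Point(3, 4)));

        // JEP 368 (second preview): a text block keeps multi-line strings readable
        // without escape sequences.
        String json = """
                {
                  "release": "JDK 14",
                  "preview": true
                }
                """;
        System.out.println(json);
    }
}
```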
As Red Hat’s senior director of product management Rich Sharples mentioned, making Java more open was a big factor in Java’s success. In addition to open-sourcing the language, in 2017 Oracle also decided to move Java EE to the Eclipse Foundation, where it was renamed to Jakarta EE. Mark Little, VP of engineering in Red Hat’s Middleware division, suspects that Oracle’s decision to move Java EE had something to do with the success of Microprofile, which was an Eclipse Foundation project meant to help optimize Enterprise Java for microservices architectures. Back in 2016, people had been worried if Java EE even had a future because it wasn’t something Oracle was putting a lot of focus on, he explained. One of the driving factors behind Microprofile, according to Little, was to show that big vendors and communities still did have interest in enterprise Java. “So perhaps it was a combination of Microprofile and the fact [that] they realized there was a lot of interest still in Java EE that made them decide that they had to do something to get back on the agenda and open-sourcing Java EE is a very commendable thing for them to do in that regard,” said Little. Now that it is at the Eclipse Foundation, Jakarta EE has a strong focus on openness. Java EE changes used to be determined by the Java Community Process (JCP), but the Eclipse Foundation changed that process to be more vendor-neutral and open. A lot of the members involved with Java EE still remain, including IBM, Red Hat, and Oracle. “But by it now being out of the JCP, and like I say, within a vendor neutral organization, we all hope that it will encourage others to step up and get involved. And if you look at the Jakarta EE website you’ll see a number of vendors on that who have expressed support, who kind of didn’t do that when it was under the JCP...So this hopefully means that vendors can be a lot more involved than perhaps they were previously.” z —Jenna Sargent
Changing landscape for Java tools
Over the past 25 years, the Java tool landscape has changed quite a bit as well. As Java is such an integral part of many companies' development environments, there are a lot of vendors who provide support for Java in the form of tools for working with Java. For example, over the years a number of companies have been able to claim a hold on the Java IDE market, namely JetBrains' IntelliJ IDEA, Eclipse, Apache NetBeans, and Visual Studio Code. According to Java Magazine's Largest Survey Ever of Java Developers in 2018, IntelliJ IDEA is currently the most popular Java IDE, with 45% of respondents saying they use either the free or paid version. About 33% of developers use a paid version of IntelliJ IDEA, while about 12% use the free version. Following behind IntelliJ IDEA is the Eclipse IDE, with 38% of Java developers using it. Apache NetBeans sees a much lower proportion of users, at 11%. NetBeans was originally Sun Microsystems' IDE for Java, but now it is maintained by Apache. Even lower still are vi/vim/emacs (3%), Visual Studio Code (1%), and Oracle JDeveloper.
JetBrains' 2019 State of Developer Ecosystem survey shows even more distance between editors. It showed that 65% used IntelliJ IDEA (which is owned by JetBrains), 17% used Eclipse, 9% used Android Studio, and 4% used NetBeans. Java has also spawned projects like JRebel, which is a JVM plugin that allows developers to update code and then instantly see changes without having to restart the application server, the JRebel team explained. In addition, Java EE led to popular projects, such as the reference implementations Glassfish, Wildfly, and EAP. According to Mark Little, vice president of engineering in Red Hat's Middleware division, Jakarta EE no longer has reference implementations, though Glassfish does still remain. "Glassfish remains as *an* implementation but each specification is allowed to progress through the new Jakarta EE process as long as there's an open source implementation out there. It doesn't have to be Glassfish based these days," said Little. z — Jenna Sargent
GET THE
ELEPHANT OUT OF THE ROOM
Bad address and contact data that prevents effective engagement with customers via postal mail, email and phone is the elephant in the room for many companies. Melissa's 30+ years of domain experience in address management, patented fuzzy matching and multi-sourced reference datasets power the global data quality tools you need to keep customer data clean, correct and current. Tell bad data to vamoose, skedaddle, and Get the El out for good!
Data Quality APIs
Global Address Verification
Global Email
Identity Verification
Global Phone
Geocoding
Demographics/ Firmographics
U.S. Property
Matching/ Deduping
Activate a Demo Account and Get a Free Pair of Elephant Socks! i.Melissa.com/sdtimes
Integrations
www.Melissa.com | 1-800-MELISSA
To serve man
Machines that use augmented intelligence are a recipe to help, not replace, human workers
BY JAKUB LEWKOWICZ
Augmented Intelligence is growing as an approach to artificial intelligence, in a way that helps humans complete tasks faster, rather than being replaced by machines entirely. In an IBM report called "AI in 2020: From Experimentation to Adoption," 45% of respondents from large companies said they have adopted AI, while 29% of small and medium-sized businesses said they did the same. All of these companies are still in the early days of AI adoption, and are looking for ways to infuse it to bolster their workforce.

Ginni Rometty, the former CEO of IBM, said in a talk at the World Economic Forum that augmented intelligence is the preferable lens through which to look at AI in the future. "I actually don't like the word AI because cognitive is much more than AI. And so, AI says replacement of people, it carries some baggage with it and that's not what we're talking about," Rometty said. "By and large we see a world where this is a partnership between man and machine and that this is in fact going to make us better and allows us to do what the human condition is best able to do."

Augmented intelligence is a cognitive technology approach to AI adoption that focuses on the assistive role of AI. "I would explain augmented intelligence as something where you are augmenting a human being to do their tasks a lot better or more efficiently," said Dinesh Nirmal, the vice president of Data and AI Development at IBM. "You're not replacing anyone, but you are augmenting the skill set of that individual."

The choice of the word augmented, which means "to improve," reinforces the role human intelligence plays when using machine learning and deep learning algorithms to discover relationships and solve problems. While a sophisticated AI program is certainly capable of making a decision after analyzing patterns in large data sets, that decision is only as good as the data that human beings gave the system to use.
Full automation is a 'delusion'
"Full automation is a myth," said Svetlana Sicular, research vice president at Gartner. "There is quite a bit of delusion that everything can be automated. There are certain things that can be automated, but humans have to be in the loop." She added that the majority of situations for creating full automation are very expensive and very hard to reach. "Once in a while, AI will go berserk for multiple reasons simply because I'm using it at the edge. So if you consider my phone the edge, I might lose the connection or there might
be too many people in this area. Like in one instance, my navigation kept telling me on the highway to turn left while there was nowhere to turn left and I was actually thinking, 'What if I am in an autonomous car?'" Sicular said.

There are tasks that require expertise and decision-making that can only be accomplished by the essential creativity that only humans could bring to the table, according to David Schubmehl, research director of Cognitive/AI Systems at IDC. "AI is really an assistant to help you get done with the mundane and mindless tasks more quickly so you can focus on the more challenging creative aspects," Schubmehl said.

Autonomous AI is being used across organizations that typically require very repeatable tasks such as customer churn and telecommunications, recommendations in retail and supply chain, Sicular mentioned.

Since AI adoption is in its early stages, enterprises don't necessarily know whether adopting AI models would greatly expedite their efficiency. Sicular said that first there must be a lot of analysis as to whether AI is really worth adopting for certain use cases.

Sicular also said that there are two large trends happening in the world of AI: the industrialization of machine learning and AI to make them better for scaling, and also the democratization of AI to spread the benefits of the technology evenly.

AI's move to industries has led companies to look for an all-in-one solution. "Data scientists are far fewer compared to developers. That's why there's a big effort to try to deliver some kinds of AI in a box that could be slightly modified by developers, and another big effort is how to scale it," Sicular said.

Up until recently, all machine learning was done manually, which means there were PhDs who could develop their own custom algorithms. And most of those algorithms were not developed and deployed at scale, according to Sicular. "You buy a service off the shelf, you go through the crowd, you can get image recognition, speech recognition, forecasting, text analytics. All of this is available without having specialists in your organization or skilled people and so on," Sicular said. "But the question that's at the core of augmented intelligence is how this is being adopted and for what purposes."

Augmented intelligence can be seen in sales coaching in which inside sales employees are getting advice as they talk to customers. In healthcare, AI is used to help doctors or specialists find some of the things that they have missed rather than sift through hundreds of thousands of documents.

AI for IT is called AIOps, which is augmenting the workload of managers and IT workers by helping them detect anomalies or disruptions much faster. "A lot of customers are having trouble with the amount of data that's coming in," IBM's Nirmal said. "Let's say you have IoT data, transaction data, behavioral data, social data and other places, and the question is how do you do data discovery and its classification. At least every enterprise that we are working with, there's a lot of interest in adopting AI."

An example of augmenting would be a data engineer finding data that looks like code but was put into the zip code field. The data engineer can then determine whether the data makes sense in its place. If it's not a zip code for example, but instead a social security number, then the data engineer can go and change it. The machine learning model will then know that this number is not a zip code for next time.

Another big area of interest is in creating alerts that can detect anomalies and can be used in data centers, according to Nirmal.
A role in identity verification
Previous methods of identity verification aren't as efficient today when there is so much more data in circulation. Augmented intelligence has entered the arena to provide much more accurate solutions for ID verification. Previous methods just required basic information of where someone lives and the applications would then just check a database. One company called Jumio utilizes pictures of government-issued IDs to verify whether this is in fact the correct person, who then has to take a selfie that matches the original ID. AI can adjust and see the different security features that are particular to passports from certain areas to see if it's legitimate, whether that's a watermark or a photo, or the ID number.

Where AI falls down is where AI can't read the security feature, whether that's due to fluorescent lighting or glare found in the picture, said Dean Nicolls, the vice president of global marketing at Jumio. "When AI sees an ID and there's a lot of glare or a lot of blur on it, the first thing it's going to say is I can't read it," Nicolls said. "Solutions that are 100% automated that rely on AI completely are going to essentially say, sorry, I can't read the image and I'm going to return a nonrenewable."

"So that leads to a really bad customer experience," Nicolls continued. "And for our business customers, like in this case, the bank, now they're put in a very bad position because now they're being told that 30% of the transactions are unreadable."

Jumio's solution employs actual human agents who are trained to find out where the AI fell short and to then render a decision. The AI additionally tells the agent specifically where its capabilities fell short. For example, it says there was glare on the face due to the overhead lighting. So now the agent only needs to look at the image on the face, rather than the whole ID. "The augmented technology helps our agents, guides them and tells them where to look and what to review to ultimately render a yes or no verification decision," Nicolls said.

Another way in which humans help train the AI is by looking at the models created for learning to help determine where AI fell short, creating a real-time feedback loop built into the process, Nicolls explained. "Essentially the way AI works is you throw a bunch of data at it and then you let the data determine the algorithms. Not only do our agents decide by giving a yes or no decision, but they are letting us know where the AI fell short. And the fact that we have human agents that are looking at a portion of those that are instructing us where the algorithms fell short, that's also speeding up the learning process," Nicolls said.

Nicolls said that the goal is to eventually take AI algorithms from 20% of cases that are currently unreadable down to only 5% or 10%. However, the role of verification experts is still important in making that AI iteratively better over time. Augmented intelligence can prove useful in areas where the data isn't perfect. For example, in radiology there are perfect chest scans that augmented intelligence works off of and in that case it could almost replace the radiologist, according to a Stanford study. However, with things like ID verification a lot of the times the data isn't perfect. "People think of AI as being a panacea and they think that AI can solve all the world's problems, right? And in many cases it can solve a lot of problems when you have a lot of perfect data," Nicolls said.
"Most of the situations where I see AI applied, data isn't perfect and you deal with some uncleanliness and this is where augmented intelligence really, really helps." z
—Jakub Lewkowicz
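A minimal sketch of that kind of hybrid flow, using hypothetical class and method names rather than any vendor's real API, might route low-confidence automated checks to a human reviewer and feed the reviewer's verdict back as future training data:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical human-in-the-loop verification sketch. The model interface,
// confidence threshold and reviewer workflow are illustrative assumptions,
// not a real vendor API. Requires Java 16+ for records.
public class HybridVerification {

    record Verdict(String documentId, boolean approved, double confidence, String issue) { }

    interface Model {        // automated pass over the ID image
        Verdict evaluate(String documentId);
    }

    interface Reviewer {     // human agent checks only the region the model flagged
        boolean review(String documentId, String issue);
    }

    private final List<String> corrections = new ArrayList<>(); // fed back into retraining

    Verdict verify(String documentId, Model model, Reviewer reviewer) {
        Verdict auto = model.evaluate(documentId);
        if (auto.confidence() >= 0.9) {
            return auto; // the model is confident enough to decide on its own
        }
        // Low confidence: the model reports where it struggled (e.g. glare on the photo)
        // so the agent reviews just that area and renders the yes/no decision.
        boolean approved = reviewer.review(documentId, auto.issue());
        corrections.add(documentId + ": model=" + auto.approved()
                + " (" + auto.issue() + "), human=" + approved);
        return new Verdict(documentId, approved, 1.0, auto.issue());
    }

    List<String> correctionsForRetraining() {
        return List.copyOf(corrections);
    }

    public static void main(String[] args) {
        HybridVerification flow = new HybridVerification();
        Model model = id -> new Verdict(id, false, 0.62, "glare on the photo");
        Reviewer agent = (id, issue) -> true; // the human approves after checking the flagged area

        System.out.println(flow.verify("passport-123", model, agent));
        System.out.println(flow.correctionsForRetraining());
    }
}
```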
"Customers are always wanting to figure out a problem before it happens. So anomaly detection becomes pretty critical to make sure that you see a lot of alerts for our data coming or logs coming in," Nirmal said. "There are some tasks such as fraud detection in which AI tends to generate a lot of false alerts and humans with deep vertical knowledge need to then oversee the process."

Augmented intelligence can also refer to tools such as AI transcriptions from meetings or add-ons to PowerPoint that make recommendations on how to improve the slides as one goes through them. Developers also have access to tools that use AI to create more efficient code reviews to speed up the SDLC. For example, DeepCode shows errors in code based on troves of similar scenarios that occurred before, and then provides context on how to fix them.

"What Grammarly does for written texts, we do the exact same thing for developers," said Boris Paskalev, the co-founder and CEO of DeepCode. "There are many areas where I believe that augmented intelligence actually helped developers because now we have a machine learning representation for code. In the DeepCode platform, you can really add any service on top of it because you really have the knowledge of the global development community that you can index in real-time. So we can get the answers in seconds, which is quite exciting, considering these capabilities did not exist just a couple of years ago."

All in all, companies are growing their investments in AI and it is becoming a fundamental part of every industry's digital transformation, according to IDC's Schubmehl. "Amazon wouldn't be where it was without machine learning and AI, because it's a huge part of its offering," Schubmehl said. "We're really in the early days. We've finally gotten out of the prototyping phase and we're actually starting to put real AI and machine learning into production. I think we still have years and years to go before AI is what I would call fully mature." z
Force Multiplier \ˈfȯrs ˈməl-tə-ˌplī(-ə)r\ n: A tool that dramatically amplifies your effectiveness.
73% of customer-facing apps are highly dependent on the mainframe. Yet 2 out of 3 lost mainframe positions remain unfilled, putting quality, velocity and efficiency at risk. You need Compuware Topaz as your Force Multiplier to: • Build and deploy with agility • Understand complex applications and data • Drive continuous, automated testing Learn more at compuware.com/force-multiplier compuware.com | @compuware | linkedin.com/company/compuware
Software 3.0: Enterprise AI Systems and the Brave New Economy
How machine learning startups differ from "normal" startups
BY RAMNEEK GUPTA AND VISHY VENUGOPALAN
Machine learning (ML) and other artificial intelligence (AI) technologies are powerful tools with the potential to transform a wide range of processes for both consumers and companies. Though many of these technologies are still commercially nascent, a number of startups have emerged that provide ML-based software solutions to enterprises. We believe that these "enterprise AI systems" will need to be developed, deployed, and monetized differently from other modern enterprise software startups, most of which are based on the Software-as-a-Service (SaaS) paradigm. This new model, which we are calling "Software 3.0," encourages enterprise AI startups and their customers to collaborate on developing the software itself, and could therefore lead to a new form of upside-sharing between the parties. As investors in young software businesses, we believe that a brand-new model for enterprise software services may be on the horizon.

Ramneek Gupta is managing director and co-head of Venture Investing, Citi Ventures.
Vishy Venugopalan is senior vice president at Venture Investing, Citi Ventures.
The Road to Software 3.0
Application software for business use emerged as a standalone category in the mid-1980s. Since then, software businesses have evolved through different phases, each marked by major technological shifts and each with its own incentives for vendors and customers of enterprise software.

Software 1.0: Perpetually Licensed Software (c. 1985-2000). During the first phase of evolution, enterprise customers bought software from vendors and deployed it on premises. Contracts were financially structured as a large upfront payment to the vendor for a perpetual software license followed by a stream of smaller annual payments for maintenance and upgrades. This incentivized vendors to spend most of their energy acquiring new customers—once they collected a customer's upfront payment, they had little remaining reason to help that customer integrate the software into their environment and ultimately be successful. Thus, the customer bore most of the risk of a successful software deployment.

Software 2.0: Enterprise Software-as-a-Service (SaaS) (c. 2000-2015). As broadband internet connectivity became ubiquitous in the late 1990s, it paved the
way for the next phase of evolution in software businesses: SaaS. Instead of buying software and booking it as a capital expense, SaaS customers rented software hosted on the vendor's servers, using a subscription model composed of equal recurring payments typically priced on a monthly basis. Customers could stop using a SaaS product at will, and along with it end the stream of payments they made to its vendor. Thus incentivized to ensure that customers actually used their product on an ongoing basis, SaaS vendors were typically in close touch with their customers through their customer success function and kept a close eye on metrics such as customer churn.

Software 3.0: Enterprise AI Systems (2015-present). As computing and storage have grown cheaper and digitization has become ubiquitous, enterprises are creating ever-increasing amounts of digital data. The availability of this data, and of cost-effective ways to process it, has led to the emergence of enterprise AI systems, or "Software 3.0". Software 3.0 businesses create value for customers not by producing software code alone, but composite systems comprising code and data. In this phase, customers and vendors have a symbiotic relationship: Software 3.0 vendors leverage customer data to train their proprietary ML models and tune product experiences, while customers receive added value from the vendor through data network effects arising from the vendor's customer base. This symbiotic relationship sets up equitable risk- and reward-sharing between the customer and the vendor, but also requires a period of investment from both sides before the full rewards of this model can be reaped.

Figure 1 – Contrasting customer value over time in different phases of software. (The chart plots customer value against time, Year 1 through Year 4, with the Software 3.0 curve rising above Software 2.0 and Software 1.0 over time.)

How Software 3.0 Is Developed: The rising tide that lifts all boats
The most important way in which Software 3.0 differs from previous phases of evolution in the software industry is its dependency on data. Advanced ML techniques, including deep neural networks, lie at the heart of enterprise AI businesses. These models are trained using sample data points from the relevant problem domain then are deployed into production, where they make decisions for business use. Software 3.0 systems must therefore be built by cross-functional teams of data scientists, data engineers, software engineers, and IT operations staff, using a different development life cycle than that used for "pure" software. These systems also require more rigorous tracking: while traditional software development only tracks the provenance of code and configuration in a given software release, Software 3.0 development must also track the training data used to shape the model, using controls for quality and provenance.

The best enterprise AI systems are built using a general cross-customer model that specializes to customer-specific data over time. Take, for example, an enterprise AI system for mortgage lending. The vendor developing the system may build a deep neural network model that learns cues for creditworthiness, train it on thousands of past mortgage applications, then deploy it into production at dozens of mortgage lenders that will use it to adjudicate mortgage applications. Once the general creditworthiness model has been deployed across individual lenders, it can adapt over time to local market conditions and feed local insights back into improving the cross-customer model.

This is where the customer's investment in the system comes in. An individual lender using this system may have to contend with additional technical complexity, as well as the regulatory complexity of complying with fair lending, algorithmic accountability, and other AI-specific rules. Their data may also be commingled indirectly with data from other lenders to enrich the cross-customer model. In return for taking on this added complexity, the customer benefits from data network effects emerging from the system. The cross-customer model grows more accurate over time thanks to the cooperation of all customer deployments—thus, Software 3.0 systems are the rising tide that lifts all customer boats.
How Software 3.0 Is Deployed: Rewriting the software startup playbook
We have just seen how Software 3.0 systems reach scale by aggregating customer data. This dependency on customer data means that their journey to production deployment, and eventually to scale, looks different from earlier generations of software systems. Software 1.0 and 2.0 are code-only systems, built on processes and inputs that are entirely under the control of the vendor. Software 3.0 systems are different; the data that goes into building them resides within their prospective customers, and thus isn't available to vendors unless they have an existing business relationship with the customer. In other words, Software 1.0 and 2.0 systems can be built and then sold, but Software 3.0 systems need to be sold before they can be built.

How Software 3.0 Is Monetized: Sharing risk and reward
As mentioned previously, Software 3.0 vendors and customers have a symbiotic relationship: customers contribute data to enrich the vendors' models and receive value from data network effects. This interplay transforms a relatively clean vendor-customer relationship into a more deeply coupled partnership. Relative to how such relationships have functioned in a Software 2.0 world, expectations and definitions of success will need to be reset on both sides, because customers and vendors will need to share the risks and potential rewards of a Software 3.0 project.

The concept of a J-curve (Figure 1) features prominently in venture capital, private equity, and other asset classes with portfolios of illiquid investments. Shaped like the letter J, the curve illustrates returns over time from the perspective of investors in these funds—the early years are a period of negative returns, with most gains happening later in the life of the fund. Likewise, early customers of an enterprise AI system will find that they need to be patient with their vendor partnership as the system emerges. A Software 3.0 vendor has a lot riding on their earliest customers because there is a sustained period of co-creation, data cleansing, and model customization to which the customer will need to dedicate meaningful internal resources. The return on these investments will not become apparent to the customer until data network effects kick in for the vendor. Once a critical mass of data has accumulated, however, customer value ensuing from a Software 3.0 system should increase rapidly and may even exceed the eventual value delivered by a corresponding Software 2.0 application.

Understanding the dynamics of the J-curve for customer value is critical to managing customer expectations around project risk. To draw further inspiration from the world of venture capital, due to the disparity between the risk borne by earlier customers and later customers, a Software 3.0 vendor should consider providing different incentives to customers as they work through the gestation phase of their system. For Software 3.0 customers, the flipside of sharing risk is being able to participate equitably in both the value created by their bilateral relationship with the vendor and the value they bring to the vendor's network of customers. z

Advertising on the same path
To illustrate how different participants in the software value chain capture value and how that has evolved with the software industry, we turn to another industry that relies on scale and network effects: advertising. Software 1.0 customers had experiences analogous to advertisers working with ad agencies to create print and TV campaigns. Those advertisers paid significant money up-front for campaigns that were arduous and time-consuming to realize. Although advertisers derived some value from those campaigns, they were expensive and the value they drove was not particularly trackable. When online advertising emerged as an alternative, it enabled advertisers to launch campaigns quickly, control their daily budget with pay-per-click campaigns, and turn campaigns on and off based on granular information about which of them was working. This shift was similar to the emergence of Software 2.0 — SaaS applications that enabled customers to sign up quickly, pay a fixed price per month, and end the relationship when they stopped seeing value from their purchase.

As trackability and attribution technologies have progressed, many advertisers now run revenue-share or affiliate campaigns where they pay ad platforms a percentage of the revenue they accrue — even if it amounts to more than they would pay per click — because it means less up-front risk to them. Given Software 3.0's deeper customer-vendor partnership and the long-term benefits that can accrue to customers from data network effects, we believe that monetization models for enterprise AI systems should include scenarios in which customers and vendors share upside, in much the same way as they share risk in the earlier phases of their collaboration on the system. z
DEVOPS WATCH
CollabNet VersionOne and XebiaLabs merge to form integrated Agile DevOps platform BY JENNA SARGENT
Following on the heels of TPG Capital’s acquisition of CollabNet VersionOne last September, the investment company has announced that CollabNet VersionOne would be merging with XebiaLabs. Together the two companies will work to create an integrated Agile DevOps platform. When TPG Capital first announced it was acquiring CollabNet VersionOne, it had stated that its main strategy was to build a “leading, integrated, enterprise-focused DevOps platform company.” This merger marks a major step towards that goal, XebiaLabs stated. The merger will combine CollabNet VersionOne’s upstream and Agile planning and version control capabilities with XebiaLabs’ downstream release orches-
tration and deployment automation functionality, the companies explained. Ashok Reddy, previously a senior vice president at Broadcom, will join the company and serve as CEO. “We are on a mission to fundamentally transform how enterprise software development and delivery is done,” said Reddy. “The combination of CollabNet and XebiaLabs will provide a platform that enables digital transformation at scale with Agile and DevOps processes to continuously adapt, learn, and improve, especially in a world of AI-driven intelligent apps and experiences. I am thrilled to be joining at this critical juncture and look forward to working with the entire team to lead the combined company through its next chapter of growth.”
CollabNet VersionOne’s current CEO Flint Brenton will step aside to focus on family, while XebiaLabs’ current CEO Derek Langone will serve as president of the new company. “The combination of CollabNet and XebiaLabs will provide enterprise customers with the end-to-end visibility and management capabilities needed to develop software quickly, reliably, and securely, ultimately helping accelerate their digital transformation and drive business outcomes,” said Nehal Raj and Art Heidrich of TPG Capital. “We congratulate Flint on a successful tenure, and look forward to partnering with Ashok, Derek, and the broader team to further accelerate our strategy to create a leading enterprise DevOps platform company.” z
Report uncovers value stream’s impact on software delivery BY CHRISTINA CARDOZA
Value Stream Management (VSM) is quickly becoming the go-to approach as organizations look to fully unlock their software delivery life cycle. According to a recent report, 95% of enterprises either are interested, have plans to or already have adopted VSM. The report was commissioned by CollabNet VersionOne and conducted by Forrester. It looks at responses from more than 300 IT and business professionals to reveal VSM’s role in software development. “Connecting and measuring business value to software delivery pipelines is critical for success, however, it’s still a challenge for many organizations,” said Flint Brenton, CEO of CollabNet VersionOne. “This research reveals the positive impact that value stream management can have on an organization’s bottom line, and most importantly, how it
helps create delighted customers." The report found that the biggest challenges organizations face in their value delivery efforts include continuous improvement, automation, agility, visibility and collaboration. Less than 40% of respondents believe their organization excels in any one of those areas. Because of these challenges, organizations end up with poorly integrated toolchains, islands of automation, misalignment, and improper workstreams. When adopting VSM, respondents reported better collaboration across teams, ability to scale Agile and DevOps, better visibility across teams, aligned software delivery with business goals, and ability to measure value. "VSM users are twice as likely as nonusers to have a complete picture of software development/delivery work status at the product, portfolio, and enterprise levels," the report stated. "VSM users outperform their peers in their ability to map and analyze, visualize and govern value streams across the entirety of the software lifecycle." In order to unlock VSM's full potential, CollabNet recommends a dedicated strategy and solution. For instance, they can assign a champion, align data from customers with strategic priorities and map data flow throughout the entire toolchain. "Ultimately, the study recommends software organizations either start or grow a VSM initiative, use output and outcome metrics to measure progress and business value and to look for a holistic VSM solution. However, the study also cautions that VSM adopters should first implement a strategy before considering tooling," CollabNet stated in an announcement. z
Forget about opening the Black Box.
You just need a window with Tasktop Viz™.
Do you know why:
• you only delivered two features this month?
• …% of your wait time was in business analysis?
• it took 2x as long to deliver a new product this year?
• your development teams are overworked and falling behind?
Tasktop Viz provides the answers that matter through Flow Metrics. Find out if you qualify for our Flow Framework Starter Program and have your first peek through the window into your software delivery that will make your transformations count.
Tasktop.com
020-27_SDT033.qxp_Layout 1 2/20/20 2:18 PM Page 20
20
SD Times
March 2020
www.sdtimes.com
How to Get DevSecOps Right
THE LAST OF THREE PARTS
BY LISA MORGAN
DevOps and security teams are learning how to work together, albeit somewhat awkwardly in these early days of DevSecOps. One reason why it can be difficult to get the partnership "right" is that people define DevSecOps in different ways.
"If you asked a room of 10 people to define DevSecOps, you'd get 15 definitions. I think we're starting to coalesce, but I still think there are a lot of misconceptions and there are still a lot of people who are going about it the wrong way," said Caleb Queern, principal at KPMG Cyber Security.
One challenge is approaching DevSecOps from a tools-only perspective, without considering the impacts of change on processes and people. "A lot of buyers will procure something, put it in place in their organization and declare victory," said Queern. "It doesn't lead to the business outcomes that most folks are looking for."
In a December 2019 report, Gartner estimated that by 2021, DevSecOps practices will be embedded in 60% of rapid development teams, as opposed to 20% in 2019. To succeed, DevOps teams must prioritize security and security teams must adapt to the accelerating pace of development.
Is shifting left enough?
DevSecOps reduces the number of application vulnerabilities, but shifting left is not enough. "Shift left has truly moved the needle on capturing a lot of these early-stage, often ignored, vulnerabilities [but] it doesn't necessarily solve all of the security challenges for an organization," said Jake King, co-founder and CEO of Linux security platform provider Cmd. "We have to be vigilant, we have to be aware, we have to be continuously testing."
Application exploits are the most common way hackers infiltrate organizations. So, in addition to running a Common Vulnerabilities and Exposures (CVE) scan late in the SDLC, shifting security left adds another security point. However, security should be integrated into every aspect of the application life cycle because as a recent Trend Micro report states, "Attackers will find ways to take advantage of any weak link to compromise the DevOps pipeline."
"[Application] security has kind of been looking at the trees and missing [the] forest," said Zane Lackey, former chief information security officer (CISO) at ecommerce marketplace Etsy. "Trying to sprinkle a little bit of security dust at the end or at the beginning [means] you're missing the forest, which is that you need to be empowering teams to think about and have security capabilities from when they're designing new services to when they're implementing them to when they're actually deploying and running them in production."
Sandy Carielli, principal analyst at Forrester Research, agrees. "We need to make sure we're not just addressing security in the design and development phase, but also in the testing and deployment phases," said Carielli. "It absolutely needs to be addressed throughout the life cycle. The vulnerability is going to survive for a much shorter lifetime if there is regular scanning."
End-to-end application security results in more secure code. And, full life cycle visibility into exploits helps teams understand how and where their applications are being compromised.
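One practical way to integrate scanning throughout the life cycle without slowing developers down is a severity-based gate that runs on every build. The sketch below is only illustrative: the findings, severity names and thresholds are assumptions, and a real gate would parse the report produced by whatever scanner the team already uses rather than a hard-coded list.

```python
import sys

# Hypothetical scanner output; a real gate would parse the JSON or SARIF
# report produced by the team's SAST or dependency scanner.
findings = [
    {"id": "CVE-2019-0001", "severity": "critical"},
    {"id": "CVE-2019-0002", "severity": "medium"},
    {"id": "APPSEC-17",     "severity": "low"},
]

# Thresholds agreed between security and the delivery team: how many findings
# of each severity are tolerated before the build is blocked.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5, "low": 20}

def gate(findings, max_allowed):
    # Count findings per severity and flag any severity that exceeds its limit.
    counts = {}
    for f in findings:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    violations = {sev: n for sev, n in counts.items() if n > max_allowed.get(sev, 0)}
    return counts, violations

counts, violations = gate(findings, MAX_ALLOWED)
print("Findings by severity:", counts)
if violations:
    print("Build blocked, thresholds exceeded:", violations)
    sys.exit(1)
print("Security gate passed.")
```

The point of the thresholds is that the delivery team and security negotiate them together, so the gate enforces an agreement rather than acting as a surprise roadblock.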
"The way security was done historically frustrates developers. You wait until the very end and it takes a week for them to get back results. And a lot of the tools that we've used historically, things like static analysis, create an unacceptable [number] of false positives," said Neil MacDonald, vice president, distinguished analyst and Gartner fellow emeritus in Gartner Research. "It's the antithesis of what we want for DevOps. We're supposed to be enabling speed and agility and so security has to shift left, integrate natively into the developer's environment and catch these problems as soon as possible so the developer isn't slowed down at the end."
However, baking security into the entire SDLC takes time. "This isn't something that you finish, ever," said KPMG's Queern. For one thing, black hat tactics are constantly evolving. It's also important to pay attention to access rights.
"It's not so much about technology components, but how you're divvying up who does what," said Fernando Montenegro, principal analyst, information security at 451 Research. "If I have an artifact here, am I allowed to interact with it? Am I allowed to do something with it? Do I have the right credentials to do dev versus QA testing versus production etc.? And then at runtime, is this thing behaving the way it's supposed to behave and if something goes wrong? Do I have the infrastructure to understand what happened? You absolutely need that across the stack."

What the relationship with the security team should look like
There's a divergence of opinion about who should initiate the DevOps-security relationship. Should DevOps reach out to security or should security reach out to DevOps? There has been some friction in the relationship because security has served as a barrier to faster application releases.
"There's no time for old school security testing every release. Fundamentally, security should be something that's just like quality. It's one of the fundamental requirements of your code," said 451 Research's Fernando Montenegro. "The role of security is collaborating with that development team to let them know what it is that they need to do. Here are our corporate objectives around security and the kinds of things that we want you to be aware of. Here's how you threat-model your application to define what needs to happen where and then let the development team run with it."
Traditionally, security's first instinct is to prevent developers from introducing risks in the first place. For example, Etsy's former CISO Zane Lackey recently met with the CSO and CIO of a Fortune 100 company separately, about 30 minutes apart. When he asked each of them how they think about cloud and DevOps, the CSO said he wasn't allowing them to happen because they're not secure. The CIO subsequently said they had 50 cloud apps now. By the end of the year, they'll have 200 and in 2021, a couple thousand.
"I looked at him kind of funny, and he said, 'Oh, you just met with the CSO,'" said Lackey. "We don't tell them [anything] anymore because they try to say no to everything. If they want to enable us, then we'll incorporate that, but if they're just going to be gatekeepers and say no, we're not going to speak to them anymore."
Good progress is getting security and DevOps working together. Better progress is when security and DevOps cooperatively integrate security practices throughout the SDLC in a manner which doesn't create friction in the DevOps pipeline. z — Lisa Morgan

What the DevSecOps team should look like
Moving from DevOps to DevSecOps isn't a matter of putting a security team member on a DevOps team, because developers tend to outnumber security professionals by 100 to one. Since most developers aren't security experts and most security experts aren't developers, the two must learn to work together effectively. "Research shows that the places where we talk more about security, we're better at security," said Queern.
Queern and others interviewed for this story underscored the importance of "security champions" who serve as the security expert on a DevSecOps team. A security champion is usually a developer who volunteers for the role. In Queern's experience, the security champion participates in labs, demos or presentations that focus on security and application development best practices. While the application security team may host the trainings, the presentation duties are very quickly assigned to the security champions who will serve as security experts for their teams. Health management platform
provider Advantasure has a security liaison who works across more than 500 developers, according to Chief Information Security Officer (CISO) Wallace Darlymple. The security liaison is responsible for upskilling the developers who will become security champions. Specifically, the liaison oversees their e-learning training programs and is the first escalation point when a security-related question or issue arises.
The ROI on training dollars tends to be poor if the person attending training doesn't apply the newly acquired knowledge on a regular basis, however. "The best type of training is training in the moment. Having that training built into the development process a little more closely will allow the teams to start absorbing it and living it a lot more," said Forrester's Carielli.
Charles Betz, global DevOps lead serving infrastructure and operations professionals at Forrester, thinks it's important to provide DevSecOps teams with secure baseline infrastructure environments. "In many cases, people just don't know what needs to be secured," said Betz. "So, give them an image that's secured, give them an environment that's secure, give me acceptable patterns for using the database, using Apache Kafka, using a load balancer in the firewall. Then, have them focused on what they need to be focused on."

4 DevSecOps Mistakes to Avoid
DevSecOps isn't just a practice, it's a continuous learning experience. If you want to be successful faster, avoid these common misperceptions.
#1 Business as Usual Is Good Enough. Cybercriminals are constantly changing their tactics. If your organization's application security practices are static, they aren't as robust as they should be. "I think a lot of times, people think that things are going to continue as normal. That the same processes, the same organizational structure and the same way you've been doing things up until now are going to help you in the future," said Fernando Montenegro, principal analyst, information security at 451 Research.
#2 Failing to Monitor Developers' System Access. Jake King, co-founder and CEO of Linux security platform provider Cmd, said during the early stages of rolling out DevSecOps, organizations overlook the access developers have in the environment. "They'll grant developers a lot of trust and empower them to do their job as well, but at the same time, they're not keeping a close eye on what they are doing and how they are doing it – simply ignoring that people are doing very sensitive things," said King. "It's like having your CFO being able to process a wire transfer to a country you've never made a payment to, independently."
#3 Failing to Monitor Code Changes. Code is constantly changing, including new or changed configurations, patches and system maintenance, many of which are outside a DevSecOps team's control. The result is that no one is sure what exists in the environment. "What libraries and packages are out there, not only from a vulnerability perspective but also from an exposure perspective? When you deploy a library, how many supply chain components are you bringing into the fold? How many kinds of upstream vendors?" said Cmd's King.
#4 Trying to Force Traditional Security Methodologies on DevOps Teams. The dual-speed nature of DevOps and security can be problematic. If security imposes too much overhead on DevOps teams or suggests solutions that aren't practical, DevOps may well ignore security. "Everything — applications, new services, things being updated and shipped — is now moving orders of magnitude faster than they were 10 years ago. Yet a lot of the security mentality hasn't adapted," said Zane Lackey, former chief information security officer (CISO) at ecommerce marketplace Etsy. "That gatekeeper mentality ends up getting routed around. We need to shift them to enabling [DevOps] teams so they can really self-serve." z — Lisa Morgan
Supply chain security matters
Developers are using more open-source and third-party elements in their code than ever before to meet time-to-market mandates and focus on what matters most, such as competitively differentiating their applications. However, Gartner estimates that fewer than 30% of enterprise DevSecOps initiatives had incorporated automated security vulnerability and configuration scanning for open source and commercial packages in 2019. That number is expected to rise to more than 70% in 2023.
INDUSTRY SPOTLIGHT
Security – Just Another Aspect of Quality
These flaws don't get as much attention as performance bugs
Content provided by SD Times and HCL Software

Programmers err as much as any of us — between 15 and 50 errors per 1,000 lines of code, to be more exact. QA tests for these bugs, attempting to ensure that releases are as bug-free as possible. Customers who trust their operations to software won't tolerate poorly written code, and teams go out of their way to ensure that programs work the way they are supposed to. But there's another type of bug that often doesn't get the same attention — the security bug. Those bugs generally don't affect performance, at least right away, so teams tend to deprioritize them in favor of fixing functional bugs. But in reality, security bugs are no different, and will eventually cause an app to do something unintended. In fact, they have the potential to be far worse. A button that doesn't respond properly (a functional bug) is inconvenient and annoying, and can drive employees using an application batty. But a hackable module in the software (a security bug) could give hackers the keys to the corporate kingdom, providing them with access to employee accounts, data, and even money.
Part of the challenge is that security is never called out explicitly as an outcome. Companies very often do not have a rule book, best practices or policies that developers could follow. The main difference between security bugs and functional bugs is that for the latter there are usually policies in place and an explicit requirement for development. Security, by contrast, is rarely asked for explicitly, yet it is expected to be there when the app is ready for production deployment.
If averting security bugs hasn't been a priority for programmers, it certainly should be, and customers should demand testing for security bugs as part of quality control. In today's programming environment, that is likely to mean utilizing automated tools that test for security issues as program modules are developed — just as they are used for functional bug testing. With much development done in the cloud, for example, teams can use cloud-based tools to determine if they have been following best security practices. OWASP (the Open Web Application Security Project) provides a long list of automated security testing tools that can help developers detect vulnerabilities. For maximum security testing, teams can use newly developed tools that check code against security scenarios as
it is written and uploaded to repositories. Such tools can catch problems in specific modules before a security bug gets "buried" in the full application made up of dozens of modules — only to surface when a hacker gets wise to the bug and starts taking advantage of it. If developers "shift left" enough, seeking to resolve security issues as early as possible in the development process, we can squash security bugs as successfully as we do functional bugs.
In conclusion, when shifting work left, make sure to call out "better security" as something developers are expected to deliver, rather than just hoping for it.
For more information, go to www.hcltechsw.com/wps/portal/products/appscan/home. z
Focused on application vulnerabilities? You’re missing the bigger picture.
BY LISA MORGAN
In today’s era of digital transformation, every organization must focus on application security. However, focusing on security vulnerabilities alone is unwise because it’s nearly impossible to prioritize what needs to be done. “DevOps teams are sitting in front of a table with the keys to the kingdom on their computers,” said Jake King, cofounder and CEO of Linux security platform provider Cmd. “We need to think not only about the risks pertinent to the software that we build and the way in which we build that software, but also the way in which we access the systems that deploy that software. It becomes the weakest link in that chain.” At the time of this writing, there were 130,969 Common Vulnerabilities and Exposures (CVEs) listed in the MITRE Corporation database. That database is synchronized and used by the National Institute of Standards and Technology’s (NIST) National Vulnerability Database (NVD). NIST’s NVD ranks vulnerabilities using NIST’s Common Vulnerability Scoring System (CVSS) which is an open framework for communicating the characteristics and severity of
software vulnerabilities. CVSS v3.0 reflects five levels of severity (none, low, medium, high and critical) versus CVSS v2.0, which used three (low, medium, and high). Security professionals use the ratings to prioritize patching, and hackers know it. So, some hackers will exploit vulnerabilities that have lower ratings while security professionals are focusing on the high and critical vulnerabilities.
"I have a very strong allergy to any kind of risk management that relies on an ordinal scale, rate the risk from one to five," said Charles Betz, global DevOps lead serving infrastructure and operations professionals at Forrester Research. "I think that kind of stuff is worse than useless. It leads people to believe they're secure when they're not."
No organization has the money or human resources to manage the 131,000 published vulnerabilities (which does not represent the entire universe of vulnerabilities). So, there are two other things enterprise security professionals consider for prioritization purposes. One is the organization's attack surface (software and hardware assets ranging from traditional to IoT). From a software development perspective, application composition, including open-source, third-party software and other dependencies, must be considered. The reason it's important to understand the attack surface is that not all vulnerabilities apply to any one organization. If there were no Microsoft software anywhere (unlikely in an enterprise setting), then Microsoft-related vulnerabilities wouldn't apply. Similarly, if only certain Microsoft products or versions of products are used by the organization, then the products or versions not in use would not apply. Essentially, understanding the entire attack surface allows security professionals to focus on the relevant vulnerabilities, which is still an overwhelming number.
Therefore, it's important to also consider threats. Of all the vulnerabilities that apply to an organization's ecosystem or DevOps supply chain, which are hackers actively exploiting? What security mistakes are people in the organization making that cause
systems, software and data to become vulnerable? “We’re never going to get bugs to zero whether they’re in open source or our own code. We need to get visibility into how people are actually attacking and abusing our services,” said Zane Lackey, former chief information security officer (CISO) at Etsy. “The critical ones are easy [because] they jump out at you. Every security team on the planet is [grappling with] a mountain of medium severity bugs that never actually get fixed because they’re the same severity, the same risk.” Worse, security teams tend not to have visibility into what’s happening in production, so they don’t have a way to quantify or prioritize those risks. “There have been a lot of major breaches due to client-side protection issues, API issues and third-party open-source issues,” said Sandy Carielli, principal analyst at Forrester. “Application security is to some extent a third-party risk issue because you’re bringing in all these third-party components and container images.” z
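The prioritization approach described above, filtering published vulnerabilities down to the products actually present in the environment and then ranking them by severity, can be expressed in a few lines. The CVE records and asset inventory below are invented for illustration; in practice they would come from the NVD data feed and an asset or software composition inventory, and threat intelligence about active exploitation would be layered on top.

```python
# Hypothetical CVE records: id, affected product, CVSS v3 base score.
cves = [
    {"id": "CVE-2020-1111", "product": "apache-httpd", "cvss": 9.8},
    {"id": "CVE-2020-2222", "product": "some-crm",     "cvss": 8.1},
    {"id": "CVE-2020-3333", "product": "openssl",      "cvss": 5.3},
    {"id": "CVE-2020-4444", "product": "apache-httpd", "cvss": 4.0},
]

# Hypothetical attack surface: products the organization actually runs.
inventory = {"apache-httpd", "openssl", "postgresql"}

def severity_band(score):
    # CVSS v3.0 qualitative bands: none, low, medium, high, critical.
    if score == 0.0:
        return "none"
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    if score < 9.0:
        return "high"
    return "critical"

# Keep only CVEs that apply to something in the inventory, then rank by score.
relevant = [c for c in cves if c["product"] in inventory]
for cve in sorted(relevant, key=lambda c: c["cvss"], reverse=True):
    print(f'{cve["id"]} ({cve["product"]}): {cve["cvss"]} [{severity_band(cve["cvss"])}]')
```

Note that the filtering step does nothing about the article's other warning: mid-severity items that attackers are actively exploiting still deserve attention even though they sort below the critical ones.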
"Security needs to go all the way back to the point where they're downloading code from GitHub. Let's tell them this is a known vulnerable component. We're not going to download that because it contains a known set of critical vulnerabilities," said Gartner's MacDonald. "Let's tell them that before they download it, before they stitch it together in their application as they're writing code in their IDE."
Advantasure's decision to implement application security platform Veracode was made jointly between security and DevOps. "We can't dictate to the business, you must do it this way," said Advantasure's Darlymple. "It was really us reaching out to them and saying you're going down this path. How can we be leveraged in your environment without having to hire an army of application security architects just to support you?"
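MacDonald's point about warning developers before a vulnerable component ever enters the codebase amounts to a check at dependency-selection time. Here is a minimal, hypothetical sketch of that idea; the package names, versions and the deny-list are all made up, and a real implementation would query a vulnerability database or a software composition analysis service instead of a hard-coded dictionary.

```python
# Hypothetical deny-list: package name -> versions with known critical vulnerabilities.
KNOWN_VULNERABLE = {
    "example-xml-parser": {"1.0.0", "1.0.1"},
    "example-web-framework": {"2.3.0"},
}

def check_dependency(name, version):
    """Return (allowed, reason) before a dependency is added to the build."""
    bad_versions = KNOWN_VULNERABLE.get(name, set())
    if version in bad_versions:
        return False, f"{name} {version} has known critical vulnerabilities"
    return True, f"{name} {version} is not on the deny-list"

for name, version in [("example-xml-parser", "1.0.1"), ("example-logger", "0.9.9")]:
    allowed, reason = check_dependency(name, version)
    print("ALLOW" if allowed else "BLOCK", reason)
```

The same check can run in the IDE, in a pre-commit hook, or in the build, which is what moves the warning to "before they download it" rather than after the release.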
Measuring success
It's hard to tell how effective DevSecOps is if it isn't being measured. There are some classic methods, like bug tracking and code coverage, although Forrester's Betz said it's important to start with the target operating condition. "What is your risk tolerance? What are your metrics there? What are some of the desired metrics with things like code coverage and what level of assurance do you require in your systems?" said Betz. "Really look at your highest governance metrics that drive your security posture and how you understand your attack surface. Those drive your operational processes and procedures which will include things like
software bill of materials, supply chain, vulnerability scanning, pen testing and red team testing."
Cmd's King said time to detection and resolution of an infrastructure issue is important, as are code coverage and code hardening. Measuring customer perception is also wise because security is a trust issue. "If you keep on top of it, if you incrementally add security, you bake it in from the start and add it early in your processes, you will see dividends paid up majorly in the long term because you will not be dealing with legacy," said King.
Another metric has to do with the collaboration between DevOps and security. "A common metric is the number of bugs and if they're going down over time. This has probably done more disservice to application security programs than anything else because there's a million questions within that that don't get answered," said Lackey. "The healthiest metric you can track is how often DevOps teams proactively interact with the security team because if those teams are proactively coming to you and focusing on that metric, whenever they think of new technologies or architectures you're going to be able to reduce risk."
Teams also should be counting security incidents such as breaches and data leakages, insider and external exploits and advanced persistent threats. "[Whichever way] you're counting them you want to be baselining and continuously improving," said Betz. "I strongly believe in benchmarking against yourself." z
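Several of the measures mentioned in this section, such as time to detection and resolution and how often delivery teams proactively come to security, are straightforward to compute once the underlying events are recorded. The sketch below uses invented incident timestamps and contact counts; the field names are assumptions rather than any specific tool's schema.

```python
from datetime import datetime
from statistics import mean

# Hypothetical security findings with detection and resolution timestamps.
incidents = [
    {"detected": datetime(2020, 2, 3, 9, 0),   "resolved": datetime(2020, 2, 3, 15, 30)},
    {"detected": datetime(2020, 2, 10, 11, 0), "resolved": datetime(2020, 2, 12, 10, 0)},
]

# Hypothetical counts of interactions: times a delivery team contacted security
# first (design reviews, threat-modeling requests) vs. times security had to chase them.
proactive_contacts, reactive_contacts = 14, 6

hours_to_resolve = [(i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents]
print(f"Mean time to resolve: {mean(hours_to_resolve):.1f} hours")
print(f"Proactive share of security interactions: "
      f"{proactive_contacts / (proactive_contacts + reactive_contacts):.0%}")
```

As Betz suggests, what matters is baselining these figures against your own history and watching them improve, not hitting someone else's number.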
Cyber insurance: A crucial part of any cybersecurity strategy
BY JENNA SARGENT
The threat landscape has been expanding rapidly, and companies are under immense pressure to respond. A lot of companies are investing in trying to prevent attacks, but as evidenced by the sheer number of data breaches and cyberattacks, it's impossible for a company to predict 100% of possible attacks. That's where an emerging market comes in: cyber insurance.
According to Accenture's 2019 Cost of Cybercrime study, almost 80% of organizations introduce innovation faster than they can secure it. The study found that the number of cyberattacks increased by 11% over the last year, and by 67% over the past five years. Further, the average cost of cybercrime was $13 million in 2018.
According to Jack Kudale, CEO of cyber insurance provider Cowbell Cyber, cyber insurance deals with what happens after an attack, whereas cybersecurity tools deal with what happens before. The main functions of the cybersecurity market, he explained, are preventing and detecting cyberattacks. Cyber insurance focuses on response to and recovery from an attack.
Brian Gill, co-founder of data recovery and digital forensics company Gillware, believes there are three main reasons to use cyber insurance: maintaining the business reputation, financial protection, and crisis containment.
Cyber insurers can help a company maintain their reputation following a data breach by providing third-party experts to help mitigate damages, Gill explained. Financially, coverage under one of these policies can include reimbursement for ransomware attacks, compensation for loss of income or earnings due to a cyber breach, and even coverage of fines for compliance penalties. In addition, according to Gill, many insurers also offer extras such as third-party security experts or a press officer that can help with timely external communication to help with crisis containment.
Jim Hansen, president & COO of security company Swimlane, explained that offsetting risk using insurance coverage is both a "potent and cost-effective strategy." He believes that it has the biggest impact on smaller firms. This is because many small businesses don't have access to resources like mature IT management and dedicated security staff. "A well-structured cyber insurance policy can provide access to experts as well as the financial coverage for costs and damages," said Hansen.
He recounted the experience of a friend he advised who owns a small business. The business suspected a breach, but luckily had a cyber insurance policy in place. "Within hours of notification, he had access to incident response experts who arranged to come onsite and start a professional response process," said Hansen. "The policy also provided access to top-notch legal counsel to manage the effort and be ready for tackling breach notification. Thankfully, that service was not required as the response team did not find any evidence of a breach. The impact to the business was some lost time for the management and their IT contractor and financially a fairly small deductible. If my friend didn't have this policy in place, the consequences could have been tens of thousands in direct costs plus the significantly degrading response as he tried to find the right experts to help out. If there actually had been a breach the costs could have destroyed the business."
Kudale recommends that when seeking out a policy, companies look for insurers that are digitally savvy and will use data and analytics to tailor coverage specifically for their needs. Insurers also need to be more transparent in their approach to risk assessment. He believes that cyber insurers should make all collected risk insights available to policyholders in order to help them improve their security posture.
A knowledge gap between insurers and policyholders
One of the biggest challenges with cyber insurance is that there is often a gap in coverage due to a mismatch between insurer and policyholder. According to Kudale, many insurers don't have the technical expertise to know what needs to be covered, which can lead to companies being either over- or under-insured. Besides the lack of technical knowledge, cyber insurance also suffers from not having a "heat map" like traditional personal, auto, or home insurance. "Unlike every other form of insurable risk, a view has prevailed that cyber risk is just too complex to quantify because we don't have sufficient contextual loss event data," explained Simon Mavell, partner at Acuity Risk Management.
A new market that is expected to grow
Cyber insurance is still a relatively new idea in the world of insurance, and the first cyber liability policy was not available until 2000, Kudale explained. Prior to then, cyber coverage was often provided as an add-on to other policies, often in the form of "Errors and Omissions," he said. Kudale explained that the cyber insurance market has grown exponentially over the past decade, and he believes that growth will continue over the next decade. Allied Market Research backs up that prediction: it expects the cyber insurance market to reach $14 billion by 2022, a compound annual growth rate (CAGR) of 28% from 2016.
Standards for cyber insurance
To help alleviate the issue of a coverage gap with cyber insurance, the Object Management Group (OMG) is working to develop standards and practices for cyber insurance. OMG recently released a Cyber Insurance Request for Information (RFI) to achieve this. The RFI will help provide a roadmap to help users select the best coverage for their level of business risk, OMG explained.
"Cloud service agreements that rely on service-based credits and defer risk through indemnification clauses no longer meet customer needs. Still in its infancy, the cyber insurance market is far too diverse and difficult to navigate," said Tim Cavanaugh, co-author of the RFI and CISO at Maiden Global Servicing Co. "Both public and private sector need a better understanding of this emerging market to address their cyber risk. Just as importantly, the insurers need to better understand the value behind certifications, cybersecurity control audits and assessments to improve their actuarial and underwriting functions to address the market."
The deadline for responding to this RFI is March 9, 2020. z
Kudale believes cyber insurance will continue to grow because of something known as a "selective risk transfer strategy." This means that rather than investing in prevention tools, companies are transferring some of their risks to policies that may help cover financial losses. According to Kudale, companies in certain industries can be more selective with their risk transfer strategy than others. For example, a data breach in healthcare would be more expensive on a per lost record basis than a breach in construction.
Use insurance only as a supplement to your security strategy, not the entire strategy
Many experts, however, believe that it's important to have a cybersecurity strategy in addition to cyber insurance. Cyber insurance should only be there as a backup for worst-case scenarios; preventing issues in the first place should still be a top priority for companies.
Lev Barinsky, CEO of insurance marketplace SmartFinancial, believes companies should be investing in both technology to prevent an attack and cyber insurance. "You can't just do one or the other. Hackers today are sophisticated and will learn their way around new layers of protection. It's a bit like whack-a-mole. Minimizing losses is the best you can do, and it means covering bases on both fronts."
Chris Noles, technology advisor and president of managed IT company Beyond Computer Solutions, agrees that insurance is not a substitute for good security and maintenance. "The right strategy is to implement the right cybersecurity tools and computer hygiene and have a cybersecurity insurance policy to cover an incident in case something bad happens anyway," he said. Noles also added that some cyber insurance policies won't even cover an incident if it is discovered your company did not take the proper preventative measures.
"A good cybersecurity insurance policy should be used to help cover the cost of a catastrophe and not be used as a substitute for having a strategy to keep your systems updated and protected," said Noles. "You wouldn't leave your doors and windows unlocked just because you have a homeowners' insurance policy, would you? Treat your cybersecurity insurance policy the same way." z
Transitioning to SRE
As companies scale, implementing Site Reliability Engineering to make systems more reliable may require a cultural shift
BY JENNA SARGENT
Over the years, there have been a lot of new methodologies that aim to help an organization manage their technology more efficiently, whether that means making programmers more efficient or the operators who manage a company's technology infrastructure. DevOps, which sought to bring developers and operators together, is one such example of this, and one that has seen a strong uptake. But one of Google's internal processes has also seen a surge in popularity lately: Site Reliability Engineering (SRE).
SRE is a development practice that incorporates operations thinking. "SRE is what you get when you treat operations as if it's a software problem," Google's SRE documentation states.
"So [development and operations] are really important for ensuring the
services are up and running. But these two are entirely separate entities where they work in silos,” said Nith Mehta, VP of technical services at Catchpoint. “So obviously when you have two different teams working in silos, but for the same cause, there’s going to be a gap...And this is where the need for SRE comes in because they [can] bridge this.” Mehta explained that smaller companies can get away with these siloed environments, but as a company scales, this is where SRE comes in. “[With SRE], you’re not having a separate team, but you’re trying to marry an engineer and ops skillset,” he said. “Typically companies look for someone who has an engineering background, but also is pretty good at operating systems, and also understands networks pretty well. So that way there is less friction, more efficiency, and there is one single team that is capable of seeing through end-to-end and learning
through mistakes and improving as they progress.” Finding the right person to be an SRE can be a challenge for companies. Unlike when hiring developers or system admins, SREs need to have a wider range of skills, or at least be capable of learning them. On the development side, SREs need to be able to code, and on the operations side, they need to be familiar with networking concepts in order to handle the infrastructure aspect of applications, Mehta explained. The talent pool for SREs is much smaller than a hiring manager would expect from developers or operators. “Traditionally, organizations have looked at engineers who could code and operations folks who could run the systems, like system admins and so on,” said Mehta. “That’s a model that has worked for decades...But for SREs, you’re looking at developers who also
understand the operating systems and network and so on. So that kind of reduces the talent pool that is available for organizations to hire.” According to Mehta, this requirement slows down the hiring process. To compensate, companies will often look to build up an SRE team with a balance of skills across their SREs. “The organizations, what they do is they try to build a mix of these different skill sets, hoping that they will eventually learn from each other,” said Mehta. “And you kind of balance it out around the career path and the progression of an engineer being able to learn the Ops side of the world and vice versa. That’s a process. That’s not something that’s going to happen overnight. So this whole talent pool is the first problem to tackle.” Apart from the technical skills, there are also some soft skills that are helpful, Auth0 software engineer Damian Schenkelman explained in a talk at Datadog Dash last year. He believes those who are teachers, advocates, and problem-solvers would be a good fit for SRE. In order for the SRE organization to scale, SREs need to be able to transfer their knowledge to others. They also need to be advocates for reliability and for the SRE brand. “The SRE team brand is very important because people need to be aware of what it does and doesn’t do in order for your team SRE to be effective,” he said. Finally, SRE team members need to be good problem-solvers because these teams will get all sorts of issues thrown at them. He believes these qualities are all things that can be learned by someone who is willing. “I definitely think all of those qualities can be learned. What needs to come from the person is the willingness to learn those things. Not everyone might be interested in those skills, and that’s OK,” he said. Apart from the talent pool, building up an SRE organization requires a change in the way that a team works and collaborates. In order to be successful, teams need to adopt blameless postmortems. “And that can only be solved when management comes in and introduces a process in place, which helps reduce the blame games and fear
of blame for a problem, where companies can collaborate,” said Mehta. Mehta recommends introducing things like error budgets and performance budgets. This gives organizations room to collaborate and try things out. Mehta also explained that like any other cultural shift, it’s important that you’re ensuring that your team doesn’t fall back into old roles and habits. “How do you ensure that they’re not back to the old days of doing the job, just the operations folks or system admins, they’re lost into handling outages, incidents, day-after-day, which means they don’t really have the time to do the actual job of an SRE, which is building a system, making it more reliable, automating some of those manual efforts.” To succeed with SRE, Mehta recommended companies start off gradually. He said organizations should start off with a certain service, and start introducing SRE to that area. He said that organizations that have been successful with SRE have started with this gradual approach. Organizations also need to ensure
that they're providing their SREs with enough time to actually do SRE. "First you have to measure the amount of time that SREs spend on incidents and troubleshooting, being on call, because if they're spending most of their time on this, then they're not SREs," said Mehta. "They are SREs in title, but they're essentially doing the job of a system admin or operations."
Since implementing SRE, Auth0 has seen a number of benefits, Schenkelman explained. It has created a culture of building reliable services, instrumenting important things in production, and creating actionable alerts. The company has also noted that engineers are now more aware of the techniques that are used in building reliable systems and now consider those in their designs. Tooling and libraries for instrumentation have also improved, especially with alerting, which was one of the team's focus areas. Finally, reliability of SRE-owned services has been great, and they have also improved reliability for the whole system by contributing code across different teams, said Schenkelman. z
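Mehta's suggestion to introduce error budgets can be made concrete with a small calculation: pick a service-level objective, derive the allowed unreliability for the measurement window, and track how much of it has been spent. The SLO, window and downtime figures below are placeholders chosen for illustration.

```python
# Hypothetical SLO: 99.9% availability over a 30-day window.
slo = 0.999
window_minutes = 30 * 24 * 60                        # 43,200 minutes in the window
error_budget_minutes = window_minutes * (1 - slo)    # ~43.2 minutes of allowed downtime

# Hypothetical downtime recorded so far this window.
downtime_minutes = 18.0

remaining = error_budget_minutes - downtime_minutes
print(f"Error budget for the window: {error_budget_minutes:.1f} min")
print(f"Spent: {downtime_minutes:.1f} min, remaining: {remaining:.1f} min "
      f"({remaining / error_budget_minutes:.0%} left)")
# A common policy: when the remaining budget nears zero, the team shifts from
# feature work to reliability work until the budget recovers.
```

The budget gives the "room to collaborate and try things out" Mehta describes: as long as budget remains, teams can experiment; once it is exhausted, reliability work takes priority without anyone having to assign blame.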
Challenges of implementing SRE
The main challenge for Auth0 when implementing SRE was bringing clarity and understanding to the work that the SRE team would be doing. There was a big focus on education and explicitly stating what SRE would mean at Auth0, since SRE is such a widely-used term across the industry. They also created internal blog posts, did presentations, and held office hours to help them with this goal.
Auth0 software engineer Damian Schenkelman has learned a lot from this process, and has a lot of advice to share for those about to begin this journey:
1. "Start with the 'why,'" he said. It's important to understand what your motivations and goals are. He believes SRE should be "a means to an end, not an end itself."
2. Once the "why" is determined, do research to decide your company's "SRE flavor." According to Schenkelman, there are many different ways that companies do SRE. "Even teams at Google have different practices, and they wrote the book on SRE."
3. Communication is key. You must communicate your plan clearly to stakeholders. He explained that some stakeholders might have heard about SRE and just need clarification, while others may be new to SRE completely.
4. "Collaborate with other teams, deliver value frequently internally, and showcase it often," said Schenkelman. He explained that teams should quickly show how the new decision is paying off for the organization.
For all of these points, he believes that it can be helpful to have a sponsor for the idea high up in the organization. "They can open doors for the SRE team(s), point you to opportunities, help have tough conversations and ensure budget for SRE as a team," said Schenkelman. z
Guest View BY ERIC NAIBURG
Don't use velocity as a weapon
Eric Naiburg is vice president of marketing and operations at Scrum.org
As I travel around talking to Scrum teams, developers and pretty much anyone involved in building products, they seem to always bring up "velocity." Don't get me wrong; velocity is a good measure, but it is only ONE measure, and it is one that can be quite subjective as well.
In Scrum, for example, teams will get together to understand the work items that exist in the Product Backlog and then estimate the amount of effort that it will take to complete those items. The estimation can be done in many ways, including Story Points, T-shirt Sizing, Dot Voting and more. It doesn't matter what form you choose; what is important is that you are consistent in how your team measures.
Understanding a team's velocity can be quite a powerful tool. It will help the product owner and the overall team approximate what can be accomplished during a sprint. Sizing will help the development team break up work amongst themselves, determine if a work item is too big to accomplish and needs to be broken into smaller pieces, know how much has been accomplished, and predict what can be accomplished in the future.
While sizing and velocity can be excellent tools, they are too often weaponized. As discussed above, they are great for helping to forecast and understand; however, those measurements are not set in stone. Sizing is truly an estimate based on what we know at the time. As we start working on something, the effort may increase or decrease based on what is learned while doing.
Say we are using the Fibonacci sequence, a mathematical series in which each value is generated by adding the two previous numbers together, resulting in the following scale: 1, 2, 3, 5, 8, 13, 21…. As an individual or a team, a 5 can mean something different than it does to another individual or team. As the team members grow to know each other and how they work, they will learn from and about each other to gain a common understanding of how they each size, and become more consistent as a team in their estimating.
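Used the way Naiburg describes, as a forecasting aid rather than a target, velocity is simply an average over recent sprints applied to the remaining backlog. A small illustration follows; the story-point figures are invented, and the numbers only have meaning for the team that produced them.

```python
from math import ceil
from statistics import mean

# Hypothetical story points completed by one team in its last five sprints.
recent_velocity = [21, 18, 24, 20, 22]

# Hypothetical remaining backlog, in the same team's story points.
remaining_points = 160

avg_velocity = mean(recent_velocity)
sprints_needed = ceil(remaining_points / avg_velocity)

print(f"Average velocity: {avg_velocity:.1f} points per sprint")
print(f"Forecast: about {sprints_needed} more sprints to clear {remaining_points} points")
# Comparing these figures across teams, or turning the average into a target,
# re-creates the weaponization the column warns against.
```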
There are two common places where I often see the weaponization of velocity:
• Giving teams new velocity targets
• Comparing one team's velocity against another's
When we start holding teams or individuals accountable for a specific velocity, they are often inclined to start upping their estimations. What used to be characterized as a 2 is now estimated to be a 3, a 3 is now estimated to be a 5, and so on. They know that they are being measured on the increased velocity, so they start to "cheat the system." This isn't always done on purpose. Sometimes it is just the feeling that they are underestimating, because they are being hammered about not delivering a velocity of x and they know that they did as much as they could.
When comparing teams' velocity, you get completely away from the team and individual aspects of estimation and why they are doing it. Each team and individual is estimating as they see fit. It really doesn't matter if they estimate an item to be a 3, 5 or 8 as long as, over time, they consistently understand what that means for them in terms of forecasting and workloads. Because each team uses estimation as a way to plan, and consistency is learned over time, it is nearly impossible for the entire organization to estimate exactly the same. The estimation of a 3 to one team may honestly be a 5 to another, and that is perfectly fine, because they know what that means to them as a team.
Over time, we can see if an individual team is becoming more effective, estimate when work will be complete and tell when we are putting too much work on a team. However, using increased velocity targets or team-vs-team velocity comparisons will drive more poor behaviors than good ones. To get a true evaluation of what a team or product is delivering, move from the amount delivered to the value of what is delivered. Start measuring the impact of the product on the overall organization, users, and the ability to deliver more of what they want and need. We can easily deliver more of the wrong thing rather than focusing on delivering the highest value items to achieve true success. What will cause disruptions rather than be of value is when we start to force measurements and comparisons rather than using measurements as a way to be more predictable and deliver higher value. z
Analyst View BY MICHAEL AZOFF
The climb to quantum supremacy
The story of Moore's Law describes well how computing hardware has evolved and grown in performance over the life of modern computing, and how in the current era the pace of that law has saturated. The switch from single to multi-core CPUs has helped keep the curve from going completely flat, but what is creating a second wave of Moore's Law are hardware accelerators that work with CPUs. In the past these were the preserve of the high-performance computing community, but since artificial intelligence (AI) transferred from research into real-world applications, the need for AI hardware accelerators has led to a huge increase in compute performance. Some of these chips are more versatile than others; for instance you can implement almost any algorithm on an FPGA, but there is a huge demand for repetitive multiply and accumulate (MAC) operations in AI, especially training and inferencing deep neural networks. I expect to see more use of conversational AI in the enterprise and consumer spaces, and the rollout of 5G promises greater use of ML in cloud, IoT, edge computing, and not least in autonomous driving.
Finally, there's hardware for quantum computing. The first player with a commercial offering is D-Wave Systems, which solves a single function by quantum annealing techniques rather than running an algorithm. In the quest for a universal quantum computer the advances are steady albeit at a basic research level. Current state of the art requires designers to work with noisy qubits and use techniques like quantum error correction to support a single logical qubit with multiple physical qubits to keep the quantum states alive long enough to achieve useful computation.
A good benchmark for quantum computing is factoring large numbers. IBM and universities across the globe have been competing to factor the largest number. Shor's algorithm for factoring sparked a new wave of interest in quantum computer programming; the record using Shor's algorithm was the number 21 in 2012. The research
community subsequently switched to minimization techniques for number factoring, and the record, declared in January 2019 by a team from Shanghai University, is the number 1,005,973, factored on a D-Wave 2000Q using 89 qubits. So not quantum supremacy, but progress.
Google researchers published a paper in the October 2019 issue of Nature claiming a 53-qubit quantum computer (the Sycamore processor) achieved quantum supremacy by sampling a random distribution space in a little over 3 min 20 sec, a task they claimed would take a classical supercomputer 10,000 years. IBM, which is working on a 50+ qubit quantum computer, has countered that the problem could be performed in 2.5 days or less on a classical supercomputer; nevertheless, 3 min 20 sec is still impressive.
These advances will no doubt continue, but the achievement of a quantum computer that can run any quantum algorithm is still at least a decade away in my opinion; physicists talk of needing a stable 10k logical qubit machine to be able to declare that the quantum computer has arrived. The key players today are more concerned with achieving quantum supremacy, as there would then be a commercial opportunity, with many industries, such as pharmacology and materials science, lining up to solve problems that no classical computer could compute in reasonable time. This business opportunity will be operational before the decade is up.
The educational effort by high-tech companies to teach and simulate quantum computing is welcome. IBM offers cloud access to a 5-qubit quantum computer that can run simple quantum operations and has yielded more than 72 academic papers. AWS, Google, IBM, and Microsoft offer quantum simulators, languages and environments in which to practice simulated quantum computing skills. The next generation of quantum computer programmers are being trained now. z
Michael Azoff is a principal analyst for Ovum’s IT infrastructure solutions group.
Industry Watch BY GEORGE TILLMANN
Planning for the perfect
George Tillmann is a retired programmer, analyst, systems and programming manager.
Estimating the time and cost it takes to deliver a project is the bane of system development, and it is an old problem that doesn't seem to be getting any better. How bad? According to a 2012 McKinsey-Oxford University study of 5,400 large-scale IT projects, 66 percent were over budget, 33 percent came in late, and 17 percent delivered less functionality than they promised.
It's not for a lack of trying. Dozens of estimating approaches and tools have been developed over the years. History-based estimating approaches look into the organization's past and use the effort required for similar completed projects. Formula-based approaches require the project manager to answer a number of questions that are then entered into a mathematical model. Expert- or guru-based estimating approaches gather systems development and business experts together and, in an IT version of a séance, divine the effort required. Experimental-based approaches involve performing a small amount of actual work on the project, stopping, measuring progress, and then projecting the effort needed to complete the project.
The result: In spite of all these approaches, project estimates are still wildly inaccurate. Worse, the estimates are not uniformly incorrect, but skewed, with the number of overbudget/late projects significantly greater than the number of underbudget/early ones. Something strange is going on, but what?
Perhaps this is not an intellectual problem but an instinctual one. Maybe, someday, some evolutionary biologist will discover the underestimation gene—the DNA that causes our species to underestimate any task. Why might we have such a gene? Conceivably, back in our prehistoric past, if we had really understood how difficult some tasks were, we would never have undertaken them. Imagine if our cave-dwelling ancestor said, "I think I'll invent the wheel today," only to have his neighbor one cave over say, "Don't forget that you need to reduce the friction between the hub and the axle." It would be understandable if our discouraged ancestor put wheel-inventing aside
for another day. This useful skill of underestimating effort might have been passed down generation to generation, so that now the do-it-yourselfer is convinced he can assemble that bookcase in the directions-predicted 2 hours. Unfinished bookcases might be a testimony to our genetic past.
Our estimation-challenged brains don't see the potential problems, but only a perfect result. Planning for the perfect is the realization that when one estimates the effort of anything, in the estimator's mind is the picture of how the project will unfold if everything goes perfectly. If we are programmed to underestimate effort, then simply treating poor estimates as an educational problem will continue to prove disappointing. Simple training is no substitute for gene therapy. Breaking this deterministic hold will require a different approach.
Until gene splicing can solve this problem we need an interim solution. Our best response to the estimation conundrum is not to reject the inevitable, but to embrace it. Recognize that you are never going to be a good estimator—your genes won't allow it. But you can beat those genes at their own game. This is how.
Planning for the perfect leads to an ideal estimate. However, projects usually take longer—the actual is the idealistic estimate plus a factor X, call it the reality factor. If we know the reality factor, then the most realistic estimate is the ideal estimate plus an adjustment derived from the reality factor. If the ideal estimate is 100 person-months and the reality factor is 15 percent, then the realistic estimate is 115 person-months. The reality factor needs to be revisited EVERY time actuals become available. The actuals need to be compared with the estimate, and a revised reality factor created. A reliable reality factor should start to emerge after just a few estimates and their comparison to actuals.
Now go and build that bookcase... however long it takes. z
This article is excerpted from Tillmann's book, "Project Management Scholia: Recognizing and Avoiding Project Management's Biggest Mistakes" (Stockbridge Press, 2019).
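Tillmann's reality factor boils down to a running correction applied to each new ideal estimate. The short sketch below works through the spirit of his 100-person-month example; the project history is invented for illustration.

```python
from statistics import mean

# Hypothetical history of (ideal estimate, actual effort) in person-months.
history = [(100, 115), (40, 47), (60, 66)]

# Reality factor: the average overrun observed on completed projects.
reality_factor = mean((actual - ideal) / ideal for ideal, actual in history)

def realistic_estimate(ideal, factor):
    # Planning-for-the-perfect estimate, corrected by the observed reality factor.
    return ideal * (1 + factor)

print(f"Reality factor so far: {reality_factor:.0%}")
print(f"Ideal 100 person-months -> realistic {realistic_estimate(100, reality_factor):.0f} person-months")

# Revisit the factor every time actuals arrive, as the column recommends.
history.append((80, 96))
reality_factor = mean((actual - ideal) / ideal for ideal, actual in history)
print(f"Updated reality factor: {reality_factor:.0%}")
```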