SD Times - March 2019


MARCH 2019 • VOL. 2, ISSUE 021 • $9.95 • www.sdtimes.com



Contents

VOLUME 2, ISSUE 21 • MARCH 2019

FEATURES

For development teams, it's time to throw out the open-office plan (page 16)
Creating a culture of happiness (page 20)
Hackers are still sticking to the tried-and-true methods (page 24)

NEWS

6    News Watch
8    Cracking into SAFe... 8 steps to get started
10   Open Robotics turns its focus to ROS 2.0
12   Biotech firm turns to open source for speed
19   7 best practices when adopting DevOps

BUYERS GUIDE

Adding value to your CI/CD pipeline (page 30)

COLUMNS

36   GUEST VIEW by Mitesh Soni: Top developer skills in demand
37   ANALYST VIEW by Arnal Dayaratna: The ubiquity of developers
38   INDUSTRY WATCH by David Rubinstein: A sobering look at cloud

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


SOCIAL MEDIA AND ONLINE EDITOR Jenna Sargent jsargent@d2emerge.com
ASSOCIATE EDITOR Ian Schafer ischafer@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com


CONTRIBUTING WRITERS
Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz

CONTRIBUTING ANALYSTS
Cambashi, Enderle Group, Gartner, IDC, Ovum

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com


SALES MANAGER Jon Sawyer jsawyer@d2emerge.com

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com
LIST SERVICES Jourdan Pedone jpedone@d2emerge.com


REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC
80 Skyline Drive, Suite 303
Plainview, NY 11803
www.d2emerge.com





NEWS WATCH

Google releases serverless NoSQL document database
Google Cloud is on the path towards a serverless future with the general availability release of its serverless NoSQL document database, Cloud Firestore. The solution leverages cloud-native technologies to let users store, sync and query data for their web, mobile and IoT apps. Features include live synchronization, offline support and ACID transactions, as well as security features and integrations with Firebase and Google Cloud Platform. Firebase is the company's mobile development platform. With Firebase and Cloud Firestore, users can build apps with real-time capabilities and hands-off auto-scaling, the company explained. "Building with Cloud Firestore means your app can seamlessly transition from online to offline and back at the edge of connectivity. This helps lead to simpler code and fewer errors. You can serve rich user experiences and push data updates to more than a million concurrent clients, all without having to set up and maintain infrastructure," Google's vice president of engineering Amit Ganesh and product manager Dan McGrath wrote in a post. In addition, Firestore includes integration with Cloud Functions and Cloud Storage, and support for up to 500 collections and documents in a single transaction.
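To make that feature list concrete, here is a minimal sketch of what store, sync, query and offline persistence look like from a web client using the Firebase JavaScript SDK of that era. The project settings, collection and field names are placeholders rather than anything from Google's announcement, so treat this as an illustration, not official sample code.

```typescript
import * as firebase from 'firebase/app';
import 'firebase/firestore';

// Placeholder project settings; swap in your own Firebase config.
firebase.initializeApp({
  apiKey: 'YOUR_API_KEY',
  projectId: 'your-project-id',
});

const db = firebase.firestore();

// Offline support: queue writes and serve cached reads when connectivity drops.
db.enablePersistence().catch((err) => console.warn('Persistence unavailable:', err.code));

async function demo(): Promise<void> {
  // Store: document writes are transactional and sync once the client is back online.
  await db.collection('devices').doc('sensor-42').set({ status: 'online', updatedAt: Date.now() });

  // Sync: the listener fires for local and remote changes alike.
  db.collection('devices').onSnapshot((snapshot) => {
    snapshot.forEach((doc) => console.log(doc.id, doc.data()));
  });

  // Query: simple indexed queries over the collection.
  const online = await db.collection('devices').where('status', '==', 'online').get();
  console.log('online devices:', online.size);
}

demo();
```

The same writes and snapshot listeners behave identically whether the client is online or offline, which is the "edge of connectivity" behavior Ganesh and McGrath describe.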

SmartBear: API standardization remains a challenge
A newly released report from SmartBear has found companies are struggling with APIs because they find API standardization too challenging. The company's State of API 2019 report is meant to uncover the methodologies, practices and tools software teams are using to build and manage APIs. The report features responses from more than 3,000 API developers, architects, testers and product leaders. According to the report, API standardization is increasingly the biggest challenge teams are facing today. In 2016, the company's annual state of API report ranked standardization only third on the list of challenges organizations wanted to see solved. The need to establish an API standard is growing as companies scale out their API programs, the report stated. Furthermore, organizations adopting new software architectures, such as microservices, also accelerate that need because they may be maintaining thousands of different APIs. The report found 58 percent of respondents want to see standardization solved in the next couple of years. SmartBear believes that organizations will enforce standardization through internal style guidelines. One-third of respondents already have defined style guides, while another 32 percent plan on creating one in the upcoming years. The industries with the highest adoption of style guides were IT/services, financial, healthcare and telecommunications. Other challenges to implementing APIs include versioning, composability, security and scalability.

People on the move
Enterprise software provider Perforce is adding a version control expert to its team with the announcement of Brad Hart as CTO of its version control operations. Hart has 20 years of experience designing and implementing version control solutions in the enterprise. He is a co-founder of AccuRev, which was acquired by Micro Focus in 2013. Before AccuRev, Hart worked as a ClearCase consultant at IBM.

Angular 8.0 to feature first opt-in technical preview of Ivy
Google's Angular Team is setting a soft May 2019 release date for the generally available version 8.0 release of its web app framework. Angular 8.0 is expected to include the first opt-in technical preview of the new rendering engine, Ivy, originally announced in February last year. The opt-in preview will give developers the option to switch between the Ivy and View Engine build and rendering pipelines within projects, according to Angular developer advocate Stephen Fluin. The team will be providing additional details on exactly how to do this in upcoming beta builds. Fluin also explained that while the Ivy preview is aiming for "great backwards compatibility," in addition to faster, smaller builds and easier-to-read code, there will likely be some features that won't quite have full compatibility, including i18n, Angular Universal and the Angular language service. "This opt-in preview is focused on moving applications to the Ivy compiler and runtime instructions without requiring developers to rewrite their applications," Fluin wrote in a blog post. "There are many Ivy-specific APIs that will be added to our public API later as a part of Angular Labs and future stable releases." The full rollout of Ivy is expected in version 9 of Angular.

JS++ programming language to solve out-of-bounds errors
The web programming language JS++ is looking to tackle a common problem impacting a majority of major programming languages: out-of-bounds errors. Programming language and compiler company Onux announced the release of JS++ 0.9 this week. Out-of-bounds errors occur when the container element you are trying to access doesn't exist. "For example, if an array has only three elements, accessing the tenth element is a runtime error," Roger Poon, JS++ lead designer and co-inventor of existent types, explained in a post. "Out-of-bounds errors have plagued computer science and programming for decades. Detecting these errors at compile time has ranged from slow to impossible, depending on the language design," Poon added. In addition, these errors can result in application termination. The latest release of JS++ aims to address this with a new compiler that analyzes out-of-bounds errors at compile time. "We achieve efficient analysis using traditional nominal typing. An existent type is no different from 'bool' or 'unsigned int' in JS++ in terms of type checking performance," Poon explained. "It is true you need to know the size of the array. However, you do not need to know the size of the array at compile time. Existent types describe whether a value is within-bounds or out-of-bounds. The rest is compiler engineering."
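JS++ code itself isn't shown in the announcement, so the short TypeScript sketch below only illustrates the class of bug Poon is describing and the run-time bounds check that languages without existent types typically rely on; it is not JS++ syntax and says nothing about how the JS++ compiler implements existent types.

```typescript
const readings: number[] = [12, 7, 31]; // only three elements

// Out of bounds: index 9 does not exist. JavaScript/TypeScript silently yields
// undefined here instead of failing at compile time, so the bug surfaces later.
const tenth = readings[9];
console.log(tenth * 2); // NaN -- the error shows up far from its cause

// The conventional fix is a run-time guard, paid on every access.
function at(values: number[], index: number): number {
  if (index < 0 || index >= values.length) {
    throw new RangeError(`index ${index} is out of bounds for length ${values.length}`);
  }
  return values[index];
}

console.log(at(readings, 2)); // 31
// console.log(at(readings, 9)); // would throw RangeError at run time
```

Existent types aim to move that guard to compile time, so the invalid access is rejected before the program ever runs.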

Server Side Public License struggles to gain support
The Server Side Public License (SSPL) is not being welcomed into the open-source community with open arms as its creator, MongoDB, had hoped. But that isn't stopping MongoDB from pushing for the support it needs. MongoDB first announced the release of the new software license in October as a way to protect itself from being taken advantage of by larger companies for monetary gain. At the time, MongoDB co-founder and CTO Eliot Horowitz explained: "This should be a time of incredible opportunity for open source. The revenue generated by a service can be a great source of funding for open-source projects, far greater than what has historically been available. The reality, however, is that once an open-source project becomes interesting, it is too easy for large cloud vendors to capture most of the value while contributing little or nothing back to the community." MongoDB originally submitted the SSPL to the Open Source Initiative, but failed to get it approved. According to Mark Wheeler, a spokesman for the company, this is just part of the process. The company has been listening to feedback from the community and has submitted an amended version of the license, which is still under review by the OSI. "We firmly believe that the SSPL meets the tenets of open source, and in the era of cloud computing, there needs to be some shift," Wheeler said. In the meantime, open-source company Red Hat announced it was updating its "bad licenses" list to include SSPLv1, meaning any software included in the company's Fedora Linux distribution would not be allowed to use the license.

Swift 5 comes with exclusivity enforcement
Apple has introduced full exclusivity enforcement enabled at run-time in version 5 of its Swift programming language. The feature improves memory safety by preventing a variable from being accessed by a different name during a modification of its value, explained Andrew Trick, software engineer at Apple, in a developer blog.


The feature was previously available in debug builds in Swift 4, but applications that weren't fully tested in debug might be affected by the update, Trick said. Trick explained that in situations where a programmer's intention might be ambiguous, the compiler can't guarantee some application behavior. For example, calling a closure that reads a variable within the same scope in which that variable is being modified would cause an exclusivity violation at run-time with the new feature, Trick explained. "Compile-time (static) diagnostics catch many common exclusivity violations, but run-time (dynamic) diagnostics are also required to catch violations involving escaping closures, properties of class types, static properties, and global variables," Trick said. While the types of exclusivity violations that the compiler could catch increased over successive versions of Swift 4, Swift 4.2 made them more visible with the exclusive access warning.

Ionic focuses on web components in 4.0 release
The open-source hybrid mobile app development SDK Ionic has released version 4 of its framework with a new focus on web components. This is a substantial change to a framework that has been predominantly Angular-compatible. "At the end of 2017, we started asking ourselves if our original dream was worth revisiting. It was clear that frontend developers would never settle on any specific frontend framework or libraries, so assuming otherwise was futile. At the same time, we were frustrated that Ionic could only be used by those that embraced Angular. While we loved Angular, we hated the idea that Ionic wasn't achieving its original goal of being a toolkit for every web developer in the world," said Max Lynch, CEO of Ionic. As a result, the Ionic team is referring to the 4.0 release as "Ionic for Everyone" with its newly added support for React, Vue.js and web components. The release includes nearly 100 web components as well as custom elements and shadow DOM APIs, enabling developers to leverage the components for mobile, desktop and progressive web apps. Vue.js and React Ionic bindings are currently available as alpha versions in this release.
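As a rough illustration of what "a toolkit for every web developer" means in practice, the sketch below uses an Ionic component as a plain custom element with no framework at all. It assumes the @ionic/core bundle has already been loaded on the page (for example from a CDN), which is one documented way to register the components; check the Ionic 4 docs for the exact setup your project needs.

```typescript
// Assumes the Ionic 4 component bundle has already been loaded on the page, e.g.
//   <script src="https://unpkg.com/@ionic/core/dist/ionic.js"></script>
// which registers <ion-button>, <ion-toggle>, etc. as standard custom elements,
// each rendered inside shadow DOM. No Angular, React or Vue involved.

function addIonicButton(label: string): void {
  const button = document.createElement('ion-button');
  button.textContent = label;
  button.addEventListener('click', () => console.log(`${label} clicked`));
  document.body.appendChild(button);
}

// Wait until the element's implementation has been upgraded before relying on it.
customElements.whenDefined('ion-button').then(() => console.log('ion-button is ready'));

addIonicButton('Tap me');
```

Because the styling and markup live inside each component's shadow DOM, the same elements can be dropped into an Angular, React, Vue or plain-DOM app without interference.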


Cracking into SAFe
BY CHRISTINA CARDOZA

As the framework grows, organizations can follow these 8 steps to get started

The Scaled Agile Framework (SAFe) has been around for almost eight years now, and over time the framework has evolved and grown to reflect all the changes happening in the software development industry. Since the framework was initially released in 2011, the industry has seen more Kanban, DevOps, lean product development, value stream, technical agility and project management approaches, and the framework has evolved to incorporate those necessary techniques. "We are never really done. We always have new ideas and new explorations," said Dean Leffingwell, the creator of SAFe and chief methodologist at Scaled Agile, Inc., the provider of SAFe. "We will keep moving and evolving because the challenge keeps moving and evolving, and we learn new things every single day."

CollabNet VersionOne's 12th annual State of Agile report found that today, SAFe is the leading approach to scaling Agile. However, with all the additions made to the framework over the years, it can be intimidating or overwhelming to figure out how to approach SAFe. "SAFe is undeniably big, but the systems people are building are also really big. You have to have the right tool to address those problems," said Leffingwell.

According to Andrey Mihailenko, CEO of Agile project management tools provider Targetprocess, it is all a matter of knowing where to start. SAFe provides an implementation roadmap, a series of 12 steps that describe strategies and an ordered set of activities proven to be effective in implementing SAFe. Successfully implementing SAFe can take many months, and there is no one-size-fits-all approach, according to Mihailenko. But beyond the roadmap, Targetprocess offers eight steps to provide a fast-track, proof-of-concept approach:

1. The tipping point: Also known as the pivot point or business trigger. To get started, it is important to show why you are going on this journey, according to Mihailenko. Common tipping points include a product that is failing, or new, proactive thinking from management. "Clearly understanding, and professing to all who will listen, why you are embarking on such an important and labor-intensive transformation is imperative. Attempt to quantify the problems and business drivers so that you can create a believable problem baseline," he said.

2. Create the coalition: In addition to management, it is important to have a team that can drive the transformation vision, be change agents, and have organizational credibility. This includes hiring or training SAFe program consultants, training executives and other managers, and even creating a lean-agile center of excellence.

3. Create the guiding vision: There should also be a coherent document in place that states the intent of the SAFe implementation. This shouldn't be a list of design requirements, but rather a guide for achieving a solution to the problem. For instance, it can include value stream mapping, product envisioning and an analysis of strengths, weaknesses, opportunities and threats (SWOT).

4. Communicate and begin training leaders: While management and leaders may be on board, they have to know what their role is in this implementation. According to SAFe's Leffingwell, when his team started out with the framework, they quickly realized that training the team and excluding management doesn't work. "The key to successful implementations of SAFe is to bring management on board with education. Show them the path, show them the benefits, introduce them to their new role and explain how that role is different," he said. "It is a lot more coaching and leading than it is managing. Engage them in the journey, because when they decide to take the leash and lead, everyone will naturally follow."

5. Empower others: Once the vision is established and senior management is on board, you need to empower the people who are going to do the tactical work and deliver the end-user value you are looking for. Here, you should define the teams, train them, and maintain a flexible process.

6. Pilot launch: Starting small and having visible short-term wins can help build momentum for your SAFe implementation. Leffingwell suggested picking an area that is impactful enough to make waves throughout the organization, but not so much that you are "boiling the entire ocean." "You take those wins and you use them to spread the word and knowledge base so others can succeed with it as well," he said. "Pick things that matter, have scale, and succeed with those."



In addition, it is important not to get trapped into concrete plans, according to Targetprocess. The point of Agile is being able to change and adapt quickly.

7. Launch and execute the ART: The Agile Release Train (ART) is a team of Agile teams that plans, commits and releases value. It typically consists of 50 to 150 people in distributed locations and time zones. Things to keep in mind when creating an ART are leadership support, understanding around products, and collaboration. An ART can help form successful teams, engage in a learning experience, share knowledge and create more experienced team members.

8. Extend and expand: This final step in the journey is about taking what you've learned from small successes and employing those lessons throughout the organization. Here, a lean-Agile program office will often take charge to lead overall improvements, continue to align existing and new value streams, maintain enterprise value flow, and provide reliable Agile forecasting.

SAFe's Leffingwell added that it is always important to make sure you are doing the basic practices of SAFe if you want to continue to be successful with your implementations. He explained that most failed initiatives he sees are because users are modifying SAFe and skipping important elements like bi-weekly systems demos or team collaboration. "As in any transformation, the implementation phase never really ends," said Mihailenko. "You are constantly re-evaluating your ART configurations, the teams themselves, and deciding on how to groom or throttle your adoption or propagation of the framework process across the company."

Who should implement SAFe
The Scaled Agile Framework is often associated with large-scale transformations looking to apply Agile to hundreds or even thousands of people; however, Scaled Agile has worked to provide a number of different configurations of SAFe so it can be applied to almost any transformation. "The different configurations of SAFe allow us to tailor it to only Essential SAFe, which provides support of a program and the teams under it as an Agile Release Train. This might be perfect for a company just starting out with SAFe, or even a small to mid-sized company. Based on higher complexities or magnitudes of effort, the portfolio, large solution or even full SAFe configurations can be applied," said Mihailenko.

When trying to figure out which teams should be using SAFe, Mihailenko said it all depends on what you are building, which is why it is important to understand value streams and product portfolios. For instance, an organization building Internet products is probably going to apply SAFe to front-end and back-end development, UI/UX design, API development and middleware teams. "A critical point to consider is that no matter what you are building, the integration points and conjoining components almost always necessitate the need for a system team to oversee full system integration," said Mihailenko. "So, whether you're a publishing company developing digital media, a cable monolith creating streaming VOD products, or an advertising company creating SDKs or digital tools to fill your linear or nonlinear advertising space, going SAFe will help you manage the complexities within your product and/or service value streams."




Open Robotics turns its focus to ROS 2.0
BY CHRISTINA CARDOZA

Open Robotics, previously known as the Open Source Robotics Foundation, is pouring its development efforts into rewriting the core of the Robot Operating System (ROS) 1.0 this year. ROS has been around since 2007, and while version 1.0 is already being used in a number of different applications and solutions, the robotics industry is changing, and Open Robotics is determined to see that the technology changes with it.

Despite its name, ROS is not exactly an operating system. It is a collection of software libraries and tools used to develop robotics applications. According to Brian Gerkey, CEO of Open Robotics, when the organization first started working on ROS, many of the robotics solutions already available were in the form of traditional robot arms used in factories or in such things as floor-cleaning robots for consumers. "Since that time we've seen an explosion of products in other domains, especially mobile robots that do everything from transport goods, to provide facility security, to entertain. And of course we've seen the impossible to ignore trend of investment and advancement in autonomous vehicles," he said.

The ongoing evolution of the robotics industry, and the need for more advanced solutions, is what led Open Robotics to rethink the core system. "We made the decision to embark on ROS 2 as a rewrite of ROS 1 because of feedback that we had consistently received over the years from industries such as automotive and aerospace," said Gerkey. "While ROS 1 is invaluable in R&D and prototyping activities, it can't reasonably be taken through the QA process that is applied to products that include, for example, safety-critical systems. Based on that feedback we are designing and developing ROS 2 in such a way that it will be amenable to approval for use in such applications."

"While ROS 1 is already used in products and services that are on the market today, we expect to see even broader adoption of ROS 2 because of the design decisions and development practices we are employing, based on feedback that we've received over the years," Gerkey continued.

ROS 2 alpha releases started coming out in August of 2015. The first official code-named version, Ardent Apalone, was released in December of 2017. Since then, the organization released Bouncy Bolson in July, and the most recent release, Crystal Clemmys, was announced in December. "The ROS 2 Crystal release from December 2018 already provides a lot of what many ROS 1 users need from ROS, including the navigation stack, which is a key feature for many mobile robotics applications," said Gerkey.

In addition, work is still being done on ROS 1, with the latest ROS 1 distribution release, codenamed Melodic Morenia, released in May of last year. It featured message-passing middleware, developer tools, planning and navigation for mobility and manipulation, and integration with other open-source projects for capabilities such as perception and machine learning, according to Gerkey. "More features will be added to ROS 2 in subsequent releases, and at some point in the medium term we expect that ROS 1 vs. ROS 2 will become a choice of personal preference and/or legacy constraints," he said. The next release of ROS 2, Dashing Diademata, is scheduled for May of this year. The release is expected to include improvements to intraprocess communication behavior, memory management, performance and reliability.



Biotech firm turns to open source for speed
BY IAN C. SCHAFER

Ginkgo Bioworks is using robotics, machine learning and open source technology to push experiments down the pipeline at a rapid pace

Founded 10 years ago by a group of MIT scientists, Massachusetts-based biotech firm Ginkgo Bioworks has found great success in leveraging a number of open-source technologies to speed up and automate a wide variety of synthetic biology laboratory tasks. The organization's main focus is the genetic engineering of compound-producing bacteria for a range of industrial applications, and Ginkgo senior software engineers Dan Cahoon and Chris Mitchell spoke with SD Times about how their combined computer and life science backgrounds have given them a unique opportunity to flex their skills and utilize their specific educations outside of more traditional routes for programmers.

Cahoon, whose background is in chemical and physical biology as well as computer science, is part of the 'Decepticon' automation sprint team at Ginkgo. Presently, they're collaborating with automation company Transcriptic to begin incorporating robots into the laboratory pipeline. Cahoon works on the front and back end as well as architectural aspects of the robotics platforms. "We're working on onboarding what they call work cells, which are basically a collection of several robots that we've put together in the lab (with the big robot arms) to essentially automate all of these lab tasks that [the scientists] would normally have to do," Cahoon said. Most of these tasks involve handling fluids: transporting, mixing and centrifuging chemicals.

With their varied technology stack, relying on plenty of well-known, open-source libraries, as well as some focused directly on biotech, Mitchell says that Ginkgo has seen huge growth in throughput and speed over the years. "I think the number we're hitting is a three-times increase year-over-year in our throughput for the past five or six years, and we're continuing on at that scale, which is pretty staggering," said Mitchell, a life science PhD in addition to his developer role at Ginkgo. "I have about nine years of benchwork, which is actually doing the physical experimentation. And I see a single individual at Ginkgo can carry out pretty much the entire operations of an entire academic lab in a week. Ginkgo is sort of like a full-stack engineering operation where you write the DNA and you stitch it together and you test it, you learn from it and you do the entire thing. So Ginkgo's is very much completely vertically integrated into its space. In terms of the scaling and what our automation stack has enabled us to do is that a single individual can optimize thousands of organisms and have those organisms custom built and tested within a few weeks."

Cahoon compared the work of two or three of the aforementioned robotics platforms to around 100 human lab workers, and says their automation efforts mean that projects which utilize similar techniques and operate on a similar scale can be pushed through rapidly, providing more time for what he calls "cool offshoot projects." This includes a recent experiment which saw scientists at Ginkgo sample DNA from a flower, extinct for around 100 years, which was preserved in a museum. "We took their scent-producing genes and put them in our yeast platform and it has produced these smells from a flower that no longer grows," Cahoon said. "So you can now smell at Ginkgo these flowers that are actually extinct."

Extant sources also provide fragrance profiles from DNA. In collaboration with a flavor and fragrance company, Ginkgo used the same yeast-based platform to produce the compounds that make roses smell the way they do, in mass.

Ginkgo's process involves yeast-based platforms designed to synthesize a variety of industrial and cosmetic compounds.

Mitchell broke down what software goes where in this long chain from idea to trial to completed experiment. "Essentially, our whole infrastructure is running on Docker, so everything is containerized, largely," Mitchell said. "The orchestration of that right now is done by Rancher and so we use GitLab for spinning things up and down and handling our development and deployment lifecycle. In terms of running the work, we use a variety of back-ends for web servers, the majority being Ruby on Rails and Django. For some small microservices, we'll use Flask. There's some other miscellaneous things written in Go and Node, and that's largely just because we have some library that we wanted to use that integrates support in Node. I think GraphQL is one of the best examples of that. That ecosystem was developed in JavaScript, so it makes sense to use Node to run that instead of some other layer. For running tasks and analyzing data, we use Jupyter. For a lot of the ad hoc analysis by users, Celery runs a lot of our work. Celery uses RabbitMQ as its broker with Redis as its back-end. And Airflow is another tool that we utilize. On the machine learning side, we take advantage of TensorFlow and Keras for trying to learn from our data and make better predictions. Our front-ends are all React, with some Redux in there, usually for our state store. And Apollo for stitching together different GraphQL templates to sort of unify our data."

The most important aspect of their jobs developing in this full-stack synthetic biology operation, Mitchell said, is accessibility from varied classes of users throughout the organization. "At Ginkgo, you have these two worlds, I like to think of. One is sort of the physical sample-handling," Mitchell said. This world involves the robotics platforms that expedite the physical laboratory work such as mixing liquids and centrifuging. "There's a lot of sample-lineage tracking with that, which is essentially a giant graph of what samples, what reagents and what molecules were in that sample and now comprise a new sample; the tracking of how much of something there was, how much it took, which robot did it. That lets you get insight into things like where is my systematic variation coming into my analysis."

Mitchell says the second world involves how that data is used, queried, processed and referenced. "A lot of that is building different automated pipelines as well as enabling ad hoc pipelines for users to perform additional analyses or refine other measurements," Mitchell said. "So a lot of that is handling things like 'What is the provenance of your data?' so 'How do you make these analyses reproducible and how do you make them scalable?' 'How do you make them automated so that when somebody comes to the lab tomorrow, their answers are already sitting in front of them?' 'How do you make that data accessible to a variety of classes of users?' We have users who are designing organisms, so they're interested in biological questions. But the model at Ginkgo is that we distribute the work between different silos. We have the silo that is the people who are running the machines, and they also have access to that data, but they ask different questions like 'What is the health of my instrument?' 'Where is most of my time being spent?' 'How can I further optimize my pipeline and increase the throughput and scale of details?' So a lot of what my team does is say 'How do we expose this data to different users to make it interactive at the many levels of scale that our users encounter?' The person submitting experiments for the biological side might have 10 samples they're looking at. The person running it might have 10,000 samples."
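Ginkgo's internal schema and endpoints are not public, so the sketch below is only a hedged illustration of the Apollo-plus-GraphQL pattern Mitchell describes: a single query layer that different classes of users can hit at very different scales. The endpoint, types and field names are invented for the example.

```typescript
import ApolloClient from 'apollo-boost';
import gql from 'graphql-tag';

// Hypothetical endpoint, standing in for an internal sample-lineage service.
const client = new ApolloClient({ uri: 'https://lims.example.com/graphql' });

// Hypothetical schema: a sample, its parent samples and the instrument runs that produced it.
const SAMPLE_LINEAGE = gql`
  query SampleLineage($id: ID!) {
    sample(id: $id) {
      id
      parentSamples { id reagent volumeUsed }
      instrumentRuns { robot startedAt durationMinutes }
    }
  }
`;

// A scientist might ask for a handful of samples; an operations engineer might page
// through thousands. The same query layer serves both audiences.
client
  .query({ query: SAMPLE_LINEAGE, variables: { id: 'sample-123' } })
  .then((result) => console.log(result.data.sample))
  .catch((err) => console.error('query failed', err));
```

For the 10,000-sample case, the same query would typically be wrapped in pagination arguments rather than fetched in one request, but the client-side pattern stays the same.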


Cahoon says the next step for his 'Decepticon' team is bringing on even more robotics platforms and speeding up the existing ones, but he says the work he's already done at Ginkgo, and the organization itself, has been a perfect fit and a unique experience from both a life science and computer science perspective. "Biology has so much potential for doing things, like when we brought back the extinct flowers for example," Cahoon said. "We've done that on the platform that I've worked on. That's, I think, incredibly cool. I've also been very hands-on with the scientists, talking with them, coming up with things to really solve day-to-day issues and figure out how we can scale up the science. There are so many smart people here, so it's just constant learning. And I think that's just super special."



For development teams, it's time to throw out the open-office plan
BY JOHN LAFLEUR
John Lafleur is co-founder and COO (and a developer himself) at Anaxi.

We've all been part of this debate about whether we should have an open-plan office or not. In general, executives would advocate for the open-plan office, while the individual contributors would say, "Please, no!!!" Executives won't listen because they feel they are the only ones who can see the full picture clearly enough to make this decision...of course! And so, it seems, 70% of all offices now have an open floor plan. I was wondering if we could make a very rational analysis of this matter; then, perhaps, we can agree on what is best for the company (though you will never convince an executive what is best for your developers, unfortunately). And finally, we could discuss what could be done if you're stuck with an open office. First, here is the approach with the criteria to consider, followed by a review of the alternative workspace configurations with ratings on those criteria.

What are the criteria for the configuration of your workspace?
If there was one criterion that would speak to any executive, it should be Return On Investment (ROI), but how can we break it down to make it understandable?

• Financial criteria: The office is a cost. The more floor space you need, the more costly it is.

• Productivity criteria: What is the point of reducing the floor space if your team's productivity output plummets? Productivity is the output of the organization, but also, indirectly, of the office.

• Collaboration criteria: The best ideas and best innovation come from the collaboration of several minds. If the team creates plenty of products, but none are the right product, the ROI of the team is zero. So, collaboration is directly correlated with the quality of the output.

• Talent retention criteria: Let's be honest here. Salaries are a lot more costly than a floor plan! If the floor plan impacts talent retention, this needs to be a criterion you pay attention to.

At first glance, there seem to be four criteria in our evaluation. But from the executive's perspective, there may be a fifth important one: having the feeling of having the whole team in one place, easy to reach and easy to monitor. We might consider this criterion not valid, but I'm ready to bet this is clearly a decision factor. Just for the sake of the argument, let's consider this as a fifth criterion (specific to management): accessibility.

Before we delve more deeply into this, I'm going to throw out the financial and talent retention criteria. Here's why. On the financial side, let's assume that individual desks would take five times more space than an open-plan office for the same number of developers. For a team of 100 developers, it should cost you about $70,000 per year in rent for an open-plan office. Sure, this depends on which city, but you'll see it won't matter. The average wage for developers should be about $100,000, and that is conservative. All those salaries cost $10 million per year! So a 10 percent productivity loss stands for $1 million thrown out the window, about 15 times the office rent!

On the talent retention side, your team's perceived productivity and efficient collaboration are directly correlated with your talent retention. If your team is productive and collaborates well, talent retention will be a lesser problem, all other things being equal.

That's why productivity and collaboration should be your only criteria, whether you are an individual contributor, manager or executive. Let's analyze the different workspace configurations on those two points.

OPEN-PLAN OFFICE

• Productivity: There is a reason why developers hate open-plan offices. Yes, hate might be a good word to use; just go on Reddit to check for yourself: "The open-plan office is a terrible, horrible, no good, very bad idea." Why? Developers are more exposed to interruptions and stress. If the workspace is designed to have as much motion as possible, that won't help them focus! Any interruption can easily take more than 30 minutes from the developers' productive time. And the more interruptions, the more frustration, the less quality work, the more bugs, and it goes on. Let's assume you had two extra interruptions every day because of this. That's one hour every day: 15 percent productivity loss!

• Collaboration: There is a Harvard study on this; it actually says open-plan offices make your team less collaborative. This may come as a surprise to executives. But I assure you, not to developers! The study says that, for open-plan offices, "face-to-face time decreased by around 70 percent across the participating employees, on average, with email use increasing by between 22 percent and 50 percent (depending on the estimation method used)." And when you think about it, it's obvious. People can't cope with the noise, and therefore put headphones on.

• Accessibility: This is the best configuration for executives to feel part of the team, having them all packed in the same place. Mobility between teams is made easier too.

THE CUBICLE FARM

• Productivity: In cubicles, you have fewer distractions and unnecessary interruptions, thanks to the little extra privacy you get. Not seeing your colleague's face might just be enough for you to not stop by and crack a joke to him or her. Plus, developers feel less "watched" and can be more relaxed, allowing them to focus more. You do have the same amount of noise, though, which can be easily offset by putting headphones on.

• Collaboration: Cubicles could be arranged by teams to achieve maximum collaboration. But this might mean you would need to change cubicles if you change teams/squads, so developers might be less inclined to personalize their cubicles as they would ideally want. Even though you will need to get up to talk to your colleague across from you, you have a bit more privacy to have deeper conversations with colleagues, with less fear of disturbing their neighbors, which is a good point for collaboration.

• Accessibility: Cubicles take twice as much space as an open-plan workspace. So, executives might feel their teams are less accessible, even more so if they need several floors instead of just one because of the space cubicles take.

TEAM SEPARATION (open within team, closed to other teams)

If you have very large teams, it will just be like the open-plan office. So, in this alternative, I'm considering teams of less than 10 people.


• Productivity: The smaller the team, the better the environment for focusing. Whether you have your manager within this space can actually make a huge difference, though, as developers might feel a bit less relaxed, depending on the style of management, of course. The big advantage here is the lack of distraction from outside the team.

• Collaboration: This might be the best configuration for collaboration within teams. However, it might lower collaboration among separate teams. A way to offset this is to put teams that should collaborate together in adjacent spaces and have common corridors or meeting spaces.

• Accessibility: It is perfect for managers. They have their space with their team, not disturbed by other teams. However, executives might not feel the same. Instead of having one meta-team, they have several separate teams in possibly different places. Between three and eight people feels like a good compromise; more, and you get more distractions, for sure.

INDIVIDUAL OFFICES

• Productivity: This is the option with the least interruptions, and therefore theoretically the highest productivity. However, I would opt to have a colleague in the same room, which actually helps both developers focus. A developer can't be productive for eight hours straight, and needs some time for light conversation before getting back in the zone. That can't easily be done in individual offices; you need two people sharing a room for it.

• Collaboration: Offices make it easier to have conversations. You close the door and you can talk about what needs to be talked about, without worrying about bothering other people. That applies when discussing complex problems, but not to easy questions that will probably be discussed online. This is especially true when the person you want to talk to is not nearby, as the team will be spread out at this point.

• Accessibility: Individual offices can take five times more space than an open-plan floor with team separations. Your team of 100 developers could be spread across many floors. Accessibility is at its worst for executives, if your company has an office at all.

The main problem with cubicles is how they are perceived because of the way they used to look. But you can make them fun, as Zappos allowed.

THE REMOTE OFFICE

• Productivity: This provides the fewest interruptions and the highest productivity. However, not everybody is built to work remotely. Some teams do virtual coffee breaks for those little breaks between two tasks. Almost all communication is asynchronous, so developers can respond when they would like.

• Collaboration: More and more companies are adopting the remote office, and more and more remote collaboration best practices are emerging. Collaboration is just not the same as in a physical office, and your company needs to seriously adapt on this point. But collaboration can even be higher in remote offices, as people have less fear of participating in discussions virtually.

• Accessibility: That's the worst for executives.

So, which one is the best?
I think that, unfortunately, accessibility is the criterion most considered by executives today, even though it shouldn't be if you think about the company's best interests. But companies are not the decision-makers; their executives are. If we don't consider accessibility, it depends on whether you favor productivity over collaboration. If you favor collaboration, individual offices are typically the worst choice. If you favor productivity and output, I would put two or three developers in separate rooms so that you have focus, along with easy ways to collaborate.

My personal favorite is remote teams. But if it's a physical workspace, then my choice would be team separation, as it fosters the most collaboration and productivity within the team. If you want to make an open plan as good as it can be for developers, here is what you can do:

1. Add mobile separators between teams, or at least distance, enough to offset the motion disturbance.
2. Offer noise-cancelling headphones (even if that's not the best for collaboration, it's hard to prevent developers from using them).
3. Offer privacy screen filters so developers feel a bit of privacy, instead of concern over the constant stare of their manager on their screen.
4. Lastly, if you can afford cubicles, have the discussion with your teams. How much productivity can be gained? I feel they will be interested.

In any case, if someone tells you that "open offices often foster a symbolic sense of organizational mission, making employees feel like part of a more laid-back, innovative enterprise," you know what to tell them!



DEVOPS WATCH

7 best practices when adopting DevOps
BY JENNIFER SARGENT

The benefits of DevOps have been talked about for some time now. But a recent report has shown how organizations are reaping the benefits after implementing DevOps. According to a recent survey sponsored by Google and Harvard Business Review Analytic Services, two-thirds of the respondents who have implemented DevOps have seen benefits that impact their bottom line. Seventy percent have seen increased speed to market, 67 percent have seen improved productivity, 67 percent have seen increased customer relevance, 66 percent have seen increased innovation, and 64 percent have seen an increase in product and service quality.

The benefits are clear, but the way to go about actually implementing DevOps isn't so clear. To make the process easier, Google is sharing seven lessons it has learned and believes are essential to adopting a DevOps model.

1. Pilot a small project: Piloting a small project offers a low-stakes opportunity for mastering key DevOps capabilities. "A few small wins will provide evidence to the rest of [the] organization that DevOps works. Soon others will want to follow suit," Melody Meckfessel, VP of engineering at Google Cloud, wrote in a post.

2. Be an open-source player: Using open-source tools and engaging in the open-source community can help you stay up-to-date on best practices and solutions. It can also help decrease your organization's learning curve and speed up release cycles, Meckfessel explained.

3. Embed security in development: Taking care of security issues early on prevents them from being pushed out to production.

4. Apply DevOps best practices: Google recommends companies use Site Reliability Engineering principles to foster collaboration, reduce waste, and increase efficiency. It also recommends looking for ways to improve automation, which can enable higher productivity and free organizations up to focus on important tasks.

5. Provide immersive training: According to Google, people will only commit to change in an organization when they understand why it is happening and are given the resources to implement the new technology. Three-quarters of the top-performing DevOps teams in the report provide immersive, hands-on training.

6. Establish a no-blame culture: Running blameless meetings in an environment that is built on trust allows team members to learn from their mistakes. Presenting mistakes as opportunities enables coworkers to relate to each other and solve problems together, while also preventing the same mistake from recurring.

7. Build a culture that supports DevOps: According to Google, the rest of this list is worthless without this last point. "When people feel like they have each other's backs, they're more likely to take smart risks; more likely to create; more likely to move faster," said Meckfessel.

OverOps Reliability Dashboards deepen DevOps visibility
BY CHRISTINA CARDOZA

Software reliability platform provider OverOps has announced new Reliability Dashboards to give QA, DevOps and Site Reliability teams more insight across their pre-production and production environments. The dashboards include new machine learning-based scoring capabilities that automatically detect anomalies and prioritize them based on impact.

"Most organizations are facing two primary dilemmas in their software delivery: 'how do I know if a release is ready to move forward, and once it has, how do I know how well it's doing?' Even with common testing and monitoring tools in place, there's still a large degree of uncertainty once code is released into the wild," said Tal Weiss, CTO and co-founder at OverOps. "OverOps now arms our customers with concrete data in an easily digestible format to validate the quality of any code or infrastructure change to an environment."

According to Weiss, OverOps had previously only been able to find and fix production errors, but this new solution is meant to stop errors from happening in the first place. Other features include reliability scorecards and release certification, true root cause drill-downs, and reliability trends over time. The scorecards and certification use scores such as newly introduced errors, increasing errors and performance slowdowns so that DevOps teams can quickly go in and see what requires their immediate attention. In addition, the release includes new Jenkins integrations to provide insight into any anomalies introduced in a release, OverOps explained. True root cause drill-downs provide a dashboard for gaining deeper visibility into low-scoring deployments, apps and infrastructure tiers. They will also show corresponding anomalies, code and variable state at the moment an error happened. Lastly, reliability trends over time track and identify patterns so teams can compare releases and see how well apps and deployments do over time.


Creating a culture of happiness
BY CHRISTINA CARDOZA

Julia Lindsay was on a successful path in the investment and wholesale banking world, where, for the most part, she enjoyed her time. However, she eventually went through a rough patch where she found herself quite miserable, and that misery seeped into work. "I didn't have a sense that I was looking forward to going into work, and to be perfectly honest, I think it affected my output, which of course made me feel worse," she said. This rough patch led Lindsay to resign and leave the sector entirely. Reflecting on what made her go through that miserable phase, she eventually realized that she did not feel like she fit in with the culture of the organization. When asked if there was anything her company could have done to change her mind about leaving, she said that because she didn't have more insight into herself, she suspects no one was particularly aware of her unhappiness.

That lack of awareness of what is going on within your company and teams is worrisome, according to Lindsay, now the CEO of the iOpener Institute for People & Performance, because happy employees are more likely to be engaged, more focused on their work and more productive, resulting in greater success for the business; whereas unhappy employees are less likely to contribute, resulting in loss of productivity and time for the business. This can be especially concerning in the software development industry, where, because of the high demand for skills and limited supply, developers have more opportunities to switch jobs, companies and projects if they are not happy in their current situation. Speaking of his own experiences, Dragos Barosan, a software engineer for the software company Pegasystems, explained that "good developers are really core assets for business survival and success in this day and age." If businesses are suffering from a lack of developer or employee retention, it can result in diminished innovation and productivity, as well as add to costs associated with recruiting, advertising and training, explained Barosan.

What iOpener's Lindsay did find she enjoyed about her previous job was taking on leadership roles that focused on creating teams where people trusted one another, respected one another and had a shared vision of what they were trying to do. "I was interested in creating that type of environment. At the time, I didn't call it a happy workplace, but that was in effect what it was," she said. That interest led Lindsay to join the iOpener Institute in 2004. The iOpener Institute has dedicated its business to helping organizations create workspaces where teams thrive and flourish, because it believes happiness at work is the key to success not only for individuals, but for organizations overall. "We try to engage the whole organization, because individuals have to take some responsibility as well as the leadership of the organization," said Lindsay. The institute's Science of Happiness at Work solution was designed to foster creativity and resilience as well as increase innovation, productivity and performance. Based on the institute's findings, Lindsay explained happy employees are more likely to take fewer sick days, be more energized, stay at an organization twice as long and be twice as productive. "We define happiness at work as a mindset which enables action to maximize performance and achieve potential," she said.

Finding out what makes teams happy, and how to keep them happy Through her research at the institute, Lindsay has found that as businesses today move and change so quickly, that can sometimes leave teams feeling left in the dust. “There is a constant need for speed, making decisions quickly, getting things done quickly, and just trying to go faster and faster,” she said. This need for speed is evident in industries where teams are pressured to

deliver high-quality software to market before their competitors. But that sense of urgency can leave teams feeling burnt out, and forced to ship features before they are ready. When that happens, what businesses end up with are brittle or difficult-to-add features, according to Andy Cleff, director of product engineering and agility at the wealth management company RobustWealth. “The only way people can really successfully contribute to speed is by being happy at work,” said Lindsay. “If you are not happy at work, then you won’t be able to keep up because you won’t have the energy to do so.” Cleff clarified that healthy and resilient teams are more likely to do the right thing when it comes to building software, such as not taking any shortcuts or starting bad habits. But while the word happiness is easy to talk about, Lindsay explained businesses have to set out to understand what builds happiness at work, what detracts from it, and what kind of impact results from that happiness or unhappiness. “Technology is the easy part. Culture is the challenge,” according to Carmen DeArdo, senior strategist for Tasktop. Pegasystems’ Barosan believes “a happy developer who loves his job will go the extra mile to accomplish tasks in a timely and quality manner. I know a lot of cases of people willingly staying extra hours over schedule, without even expecting overtime compensation, just continued on page 22 >


Tools for measuring team health and happiness include:

Atlassian Team Health Monitors: Atlassian’s health checks and playbooks for various types of teams.

Comparative Agility’s Agile Assessment: Actionable insight into efforts at the team, program and organization level.

iOpener iPPQ: A questionnaire that looks into 25 specific elements to offer insight about team happiness and make recommendations for improving that happiness.

Management 3.0’s 12 steps to happiness: Twelve different areas that businesses can experiment with and monitor results.

MoodApp: A solution for gaining daily feedback on whether teams were satisfied or unsatisfied, and what the business can do differently.

Team Barometer: A survey where team members vote yellow, red or green for topics like trust, collaboration, feedback and meeting engagement.

TeamMetrics: A tool for gathering data on team morale, and scores based on that data.

TeamMood: Daily emails on how your team members feel at the end of the day so you can gauge the team’s average mood.

Businesses need to be constantly measuring the status of their teams, whether that is through surveys, health checks or one-on-one meetings. “There are always opportunities for improvement, so depending on the lens you look through you are going to find problems are either big or small. There are problems everywhere, and they are waiting to be solved,” said Andy Cleff, director of product engineering and agility at the wealth management company RobustWealth.

Pegasystems’ Barosan believes “a happy developer who loves his job will go the extra mile to accomplish tasks in a timely and quality manner. I know a lot of cases of people willingly staying extra hours over schedule, without even expecting overtime compensation, just because they were engaged and motivated by what they were doing,” he said. “This comes at the complete opposite to an unhappy, bored developer who constantly checks his watch to see how much time is left until he can leave the office.”

Happiness can occur by simply taking the time to give people feedback, showing them appreciation, offering recognition, and making sure people are clear on how what they are doing fits into the bigger picture of the business, according to Lindsay. “A lot of this boils down to things that sound really simple. However, as simple as it sounds, quite often we just don’t do it,” she said.

In addition, Barosan says factors such as money, management and benefits could also have a direct impact on a team or individual’s happiness. While verbal recognition is always a plus, monetary compensation can boost a developer’s feelings about work. “Of course there are companies who just can’t afford to pay as much money as some of the bigger corporations can. Their main option is to compensate the difference in some other ways: startups give a decent part of equity of the company as an incentive, others give a lot more free days, shorter working hours or complete flexibility to the employee in terms of his working schedule,” he wrote in a blog post about developer happiness.

Barosan does note that increasing salary can be a Band-Aid fix or an “instant gratification pill of happiness.” For instance, if a salary increase only happens once a year, happiness can wear off. Barosan recommends taking an approach where salary is increased through small increments throughout the year, if possible. If things like salary increases are not possible within your company, Barosan suggested sending employees to conferences and training sessions, as well as covering travel expenses, as a way to reward developers.

As for management, Barosan explained a team’s direct manager as well as upper management can have an impact on the level of happiness. “Managers that cannot foster that sense of loyalty towards them will have a higher turnover rate in their teams,” Barosan wrote. “No matter how hard the direct manager tries, if the people in control of the company and its finances will see the development offices as a cost center instead of a profit center, then the developers will fully feel the consequences of that attitude.”

Other ways Barosan believes developer happiness can be improved include team events that foster personal and bonding relationships, minimal bureaucracy, communication with customers, flexible schedules, good tools, transparency and clear responsibilities. “First the company has to identify the root causes of the unhappiness and then a number of decision paths will emerge on what to do next,” said Barosan. “I do not think there is a universal recipe on the best approach. It all depends on a lot of diverse factors that are not very easy to aggregate and correlate together: the profile of the employees, the industry in which the business operates, the local culture of where the office is located, the economic situation of the company and the country in which it operates, etc.”

Once an organization becomes knowledgeable on the contributing factors to team happiness, it is important to measure and constantly remeasure initiatives to improve happiness and business value over time. “There is an old adage of if you can’t measure it, you can’t manage it or improve it,” said Lindsay. “Whether you agree with that sentiment or not, I would recommend people try to measure it and measure it over time because the workplace is not static.”

Tasktop’s DeArdo explained his company looks at metrics based on value, cost and quality from a business perspective, as well as the progress and workflow from the team. According to DeArdo, this enables the business to see how team happiness impacts the business ROI. For instance, if a team is overloaded with work, that will result in negative impacts on how quickly they produce work, how much work they produce and their happiness. “If the distribution of work is only focused on features, but doesn’t take into account debt, then that tends to accumulate and become a negative factor in terms of team happiness,” said DeArdo. So DeArdo suggested also paying attention to defects, risks and debt. In addition, there need to be retrospectives on how teams think they can do better and go faster, so they feel more responsible for the business, DeArdo explained.

The iOpener’s performance-happiness model

iOpener Institute’s performance-happiness model looks at three key areas:

Trust: According to iOpener’s CEO Julia Lindsay, without trust there is no team. In order to build trust, leaders and teams need to improve their communication skills and dig deeper into issues at hand.

Recognition: It is important for employees to be recognized for the work they are doing, according to Lindsay. “Saying thank you, or well done can make a real difference. Give credit freely and acknowledge the contributions of others,” she said.

Pride: Taking pride in your work results in greater self-worth and self-esteem, Lindsay explained. “Pride is a catalyst for focusing on task, effort, and persistence. Raising the level of pride people have for the organization and their contribution to it is a win-win for everyone. Praise effort and its results — why it matters and what a positive difference it has made to the team,” she said.

In addition, the iOpener Institute focuses on 5Cs when it comes to performance and happiness at work:
1. Contribution
2. Conviction
3. Culture
4. Commitment
5. Confidence

“The key to measuring happiness is asking the right questions to identify the underlying themes which affect happiness at work, and where a team is or isn’t thriving,” said Lindsay.

The A3 approach

Tasktop’s Carmen DeArdo is an advocate for Toyota’s A3 approach to managing work and teams when trying to identify reasons teams are not happy or not improving. A3 is an approach to problem-solving and continuous improvement, named after the ISO A3-size paper usually used in the approach. The approach includes a worksheet with information such as the team members, stakeholders, departments, start date, and possible duration of the team. Then it goes through steps such as clarifying the problem, breaking down the problem, setting a target, analyzing the root cause, developing countermeasures, implementing countermeasures, monitoring results and process, and standardizing and sharing success.

According to DeArdo, if you are working with a group that is unhappy, A3 can narrow down the issue and shed light on why the team doesn’t feel like they are able to improve or are in control. In addition, with A3, teams can identify the problem, brainstorm potential causes and experiment, he explained. “There are a whole bunch of tools out there, but if you don’t get the culture right, you are never going to be as productive as you want to be,” said DeArdo.


Hackers are still sticking to the tried-and-true methods
BY JENNA SARGENT

Despite evolutions in technology, hackers are still using the same old tricks, though sometimes in a more evolved form. The hacker mentality is to want to grab the low-hanging fruit, or go after the easiest target, explained Sivan Rauscher, co-founder and CEO of SAM, a network security company.

For attackers trying to find those low-hanging fruits, the explosion of IoT devices is providing a large attack surface. “With the fact that your life becomes more and more connected and there are so many devices and so many endpoints in your home, statistically, some of the attacks will get to you,” said Rauscher. “And because those IoT devices are lacking a security layer like authentication, encryption, all of those classic, basic security layers, it’s so easy to hack them. They are the low-hanging fruit and that’s why it’s so easy to target IoT.”

In the past few years, Rauscher has seen a lot of repeating attack methods, such as phishing and ransomware. According to F5 Labs’ December 2017 report, “Lessons Learned from a Decade of Data Breaches,” the root cause of 48 percent of the data breach cases it looked at was phishing. Every year cyberattack monitoring platform Randori sees more attacks of those types because they’re easy and can be pushed out to a large number of people all at once, Rauscher said.


Credential phishing plays on our basic human nature to be helpful

Though phishing is not necessarily a new type of attack, hackers are using credential phishing more and more, and the method is growing. In Menlo Security’s report, “Understanding a Growing Threat: Credential Phishing,” it defines credential phishing as “an attempt by malicious individuals to steal user credentials and personally identifiable information (PII) by tricking users into voluntarily giving up their login information through a phony or compromised login page.”

According to Menlo Security, credential phishing attacks are often the start of a much bigger attack. “Phishing emails are simply the way a threat actor gains access to the network before stealing information, making a ransom demand or simply creating havoc,” Menlo Security wrote in the report.

These attacks succeed because they play on an organization’s weakest link: the user. “Human nature is trusting,” the report stated. “It’s curious. It’s willing to follow directions from a seemingly authoritative figure.”

“Attackers know very well how to manipulate human nature and emotions to steal or infiltrate what they want. They use email messages that induce fear, a sense of urgency, curiosity, reward and validation, an emotionally charged response by their victims or simply something that is entertaining and a distraction to convince, cajole or concern even seasoned users into opening a phishing email.”

According to the report, 12 percent of users open phishing emails and 4 percent always click on a link within a phishing email. Enterprise users tend to be a bit better at identifying phishing emails, but not by much, Menlo Security explained. According to Menlo Security, the only way to 100 percent prevent credential phishing from succeeding is by implementing web isolation, which “physically prevents users from entering their credentials into a bogus web form,” Menlo Security explained.

For example, the WannaCry ransomware attack in 2017 affected thousands of computers in a short period of time and spread incredibly fast because of specific vulnerabilities in Windows computers, anti-virus provider Symantec explained. Another example of a widespread attack that same year is the Mirai botnet, which used hundreds of thousands of IoT devices to conduct DDoS attacks that brought down major websites, Cloudflare explained.

These attacks happen so frequently because attackers know that it is easier to send something to thousands of people than to go after specific targets. “That’s how attackers think, that’s how they manipulate inside a network and infect the other devices to gain more access and gain more data,” said Rauscher. “And phishing and ransomware is a way to lure the end user to press on something and just extract data and extract your bank account, extract your social security number, and that’s how they do it.”

The bottom line is that phishing still is a very common attack method, not just for enterprise, but for end users, Rauscher explained.

Attackers can use social media to create more specialized attacks

On the other hand, many attackers are getting more and more specialized. According to Sash Sunkara, co-founder and CEO of cloud management platform provider RackWare, the emergence of social media has led to more sophisticated attacks. Hackers can look at a person’s social media and create targeted phishing emails that will look believable. They can look at your social media profiles and determine who you are connected to at work, and use that to create highly specialized attacks.

“Maybe your assistant opens something and all of a sudden the attacker has access to your network and they have access to your data,” said Sunkara, who explained that often, these phishing emails do look very real, even to smart users.

“They’re going to use methods that we were thinking were non-threatening that now are going to become threatening,” said Sunkara.




“Before, you could really tell when a fake request was coming in. But nowadays it’s so well-disguised that it’s hard to tell even for the sophisticated user. And I think that’s going to continue to escalate as far as the next year.”

Sunkara explained that at RackWare, the company sends alerts on almost a daily basis warning employees not to click on specific emails, and she estimated that they’ve seen three times as many fake emails as usual in the last few months.

The emergence of these more sophisticated attacks has led to more of a need for education within companies. First, employees need to be educated on how requests should come through and things to watch out for. They should know what the red flags are for fake emails. In addition, securing your network can ensure that if an attack does get through, your data is protected. “There has to be education, protection, and warnings on the front end, but there has to be protection on the back end in case any of these things get through and they get access to critical information,” said Sunkara. Protecting the network also ensures that an attacker who gains access to an IoT device can’t compromise the rest of your network, Rauscher explained.

DevOps created a much broader attack surface

According to Chris Wallace, security liaison engineer at telecommunications company Vonage, the emergence of DevOps has also significantly increased the available attack surface. “Hackers no longer just target the deployed software but also the tools used to automate our deployment pipeline,” Wallace said. “New attack surfaces including GitHub repositories, containers, as well as automation and orchestration tools, provide new opportunities to infiltrate a system and maintain a persistent presence while eluding detection.”

Wallace warned that a misdirected DevOps team can be vulnerable, just as an improperly configured server could be. Often, shortcuts are taken when implementing DevOps, resulting in “misconfigured environments, vulnerable servers open to the internet, a lack of appropriate separation of duties and no access control or segmentation of the network environment,” Wallace said.

What’s the deal with nation-state attacks?

According to the Menlo Security report, nation-state sponsored groups and advanced persistent threats (APTs) often use credential phishing in order to execute attacks against high-profile targets, such as political campaign websites, think tanks, political national committees, and more. An infamous example of a nation-state attack that used credential phishing is the attack against John Podesta, chairman of Hillary Clinton’s 2016 presidential campaign, by a Russian hacking group, the report stated.

According to David “Moose” Wolpoff, CTO of security company Randori, people often confuse nation-state sponsored attacks with APTs. “A lot of times people ask me about nation-state attacks, and they’re really asking about APT or advanced attacks, or they’ve got something in their brain about technical sophistication,” he said. “I think that’s maybe a little misleading, as I haven’t seen a lot of evidence of what I would consider highly sophisticated attacks in international conflicts... And I typically don’t think of spearphishing or phishing as an advanced attack. It just happens that it’s still pretty effective.”

Wolpoff explained that nation-state attacks occur all the time against a large range of targets, from individual lawmakers to nongovernmental organizations (NGOs). According to Wolpoff, companies should prepare against nation-state attacks the same way they would normal attacks. “I wouldn’t necessarily think that a company needs to do something different to prepare for being attacked by a nation as opposed to being attacked by a common hacker, but I think the question is really, what’s the level of determination that an adversary is going to bring against you and what’s the impact to you if they’re successful? And you have to pair your reasoned response based on what that looks like.”

Wolpoff believes that nation-state attacks will continue to utilize spearphishing in the years to come. He believes we will see a blending of information warfare, economic warfare, and social interaction. “The vast majority of the attacks we see — I think every year that I’ve ever been tracking — are socially connected attacks. People want to be helpful. Hackers know that people want to be helpful, even a nation-state level Russian hacker knows that people want to be helpful.”



Buyers Guide

Adding value to your CI/CD pipeline
BY CHRISTINA CARDOZA

One of the first principles of the Agile Manifesto says to satisfy customers by delivering working software frequently. The problem, however, is that it doesn’t say exactly how we can do that. “Working software is the only measure of progress, but how do you measure that? You need to integrate and test the software as often as possible and fix errors when and as soon as possible,” said William Holz, senior director analyst at the research firm Gartner.

According to Holz, in order to move faster and be successful in the areas of Agile and DevOps, you need to add Agile technical practices to your software development, such as test-driven development and refactoring. One of the best technical practices out there is to create a continuous integration and continuous delivery (CI/CD) pipeline. According to Gartner, CI and CD are the most widespread Agile practices organizations are currently using or plan to use.

Continuous integration is the “automation of the software build and validation process driven in a continuous way by running a configured sequence of operations every time a software change is checked into the source code management repository,” according to Gartner. Dan Packer, industry specialist for the software company Plutora, explained the benefits here are testing, issue resolution, phased changeover, improved team morale, increased velocity, improved quality and improved budget.

Bringing value stream management into the mix

Once you are delivering working software faster, the next thing to ask yourself is: are your processes becoming more efficient? Are you not only working faster, but delivering value? According to Aaron McCaughan, product owner at the software company Plutora, this is where value stream mapping comes in.

Gartner’s William Holz explained a value stream is a collection or series of steps that deliver customer value. For instance, if you buy something on Amazon, the order button is a value stream. You click the button and it goes through a series of steps to get your package to your door. Value stream mapping provides insights into those steps, where the value is, where the value isn’t, and detects areas that can be improved.

“With CI/CD, you might be able to react, you might be able to deliver faster, but you also have to factor in am I delivering value faster?” McCaughan said. “What am I putting into the pipeline that is satisfying my customer so that I am actually keeping everyone happy and increasing the value of the product?”

Many of the metrics teams are looking at are deployment frequencies, number of check-ins per day, build failure rates, or mean time to recovery. While those are interesting indicators of work, they don’t really measure customer value or help you understand what is flowing through the system, according to Jeffrey Keyes, director of product marketing for Plutora.

Value stream management provides a broader view of the entire delivery life cycle long before the software becomes software, McCaughan explained. It goes from ideation to production, so you can start tracking your value stream map as soon as you have identified your strategic goals for the organization and start identifying bottlenecks. “For instance, if there are three months of planning on average for each change coming through, you can start addressing that,” said McCaughan. “It’s about capturing those dependencies, wait times and overburdens.”

Specific metrics the value stream map brings to light are the average cycle time it takes to deliver a feature, process time, waste time, and actual time spent working on the feature, according to Keyes. “If you are spending half your time putting out fires, you are not adding value. You are just reacting to production defects or technical debt,” said McCaughan.

“Value stream management is the combination of Agile plus DevOps plus the measured outcomes at each phase,” Keyes added.
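To make these flow metrics concrete, here is a minimal sketch of how cycle time, wait time and deployment frequency might be computed from timestamped work items. The data, field names and date ranges are hypothetical illustrations for this article, not output from Plutora or any other tool.

```python
from datetime import datetime
from statistics import mean

# Hypothetical work items: when an idea entered the backlog, when work
# actually started, and when the change was deployed to production.
work_items = [
    {"id": "FEAT-101", "created": "2019-01-02", "started": "2019-01-20", "deployed": "2019-02-01"},
    {"id": "FEAT-102", "created": "2019-01-05", "started": "2019-01-22", "deployed": "2019-02-08"},
    {"id": "DEFECT-7", "created": "2019-02-01", "started": "2019-02-02", "deployed": "2019-02-09"},
]

def days_between(a: str, b: str) -> int:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

# Cycle time: idea to production. Wait time: how long an item sat before work began.
cycle_times = [days_between(i["created"], i["deployed"]) for i in work_items]
wait_times = [days_between(i["created"], i["started"]) for i in work_items]

# Deployment frequency: deployments per week over the observed window.
deploy_dates = sorted(datetime.fromisoformat(i["deployed"]) for i in work_items)
weeks = max((deploy_dates[-1] - deploy_dates[0]).days / 7, 1)

print(f"Average cycle time: {mean(cycle_times):.1f} days")
print(f"Average wait before work starts: {mean(wait_times):.1f} days")
print(f"Deployment frequency: {len(deploy_dates) / weeks:.1f} per week")
```

Tracking the gap between cycle time and hands-on work time is one simple way to surface the wait times and overburdens McCaughan describes.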



“With CI, the handoff from one stage to the next is fully automated up to the completion of the testing stage. This is starkly different than that of the waterfall methodology, where the handoff between stages is typically fully manual throughout the entire lifecycle and deals primarily with completed applications vs. small code segments,” Packer wrote in a blog post.

CD is an evolutionary step of CI designed to take automation further, according to Packer. Continuous delivery is the act of releasing reliable software faster through technical delivery and deployment practices like working in small batches and automating repetitive tasks. Benefits include improved velocity, phased progression, always being production ready and release control, Packer wrote.

Together, they make up the CI/CD pipeline: a continuous flow of software designed to reduce manual and error-prone work, and result in higher quality software. “This is important because the ninth principle of Agile states attention to technical excellence and design increases your agility. In Agile, rework is waste. The time you spend fixing bugs is time that can be better spent delivering new features and functionality,” said Gartner’s Holz. “You need to be able to do CI and DevOps to achieve continuous delivery. It is no longer about your software. It is about delivering the entire solution.”

According to Holz, this is how you accelerate your Agile and DevOps initiatives successfully, reduce risks and catch bugs. CI/CD enables the ability to build, package, integrate, test and release code with automation.

“Complex operations like CI/CD cannot be accomplished without significant engineering effort. DevOps recognizes the importance of joining tooling and thought throughout the entire process of development and deployment rather than isolating build and production. Rather than having distinct steps when creating a service, DevOps encourages simultaneous development and testing. CI/CD pipelines fit into this perfectly by providing a way to automate the testing portion,” explained Abhinav Asthana, CEO and co-founder of the API development solution provider Postman.
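As a rough illustration of the “configured sequence of operations” described above, the following sketch shows a CI driver that runs each stage on every change and stops at the first failure. The stage names and make targets are placeholders, not any specific vendor’s pipeline syntax.

```python
import subprocess
import sys

# Hypothetical stages run on every change pushed to the source repository.
# Each stage is a shell command; the pipeline stops at the first failure so
# errors are caught as close to the offending change as possible.
PIPELINE = [
    ("build", "make build"),
    ("unit tests", "make test"),
    ("package", "make package"),
    ("integration tests", "make integration-test"),
]

def run_pipeline() -> int:
    for stage, command in PIPELINE:
        print(f"== {stage}: {command}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"Stage '{stage}' failed; stopping the pipeline.")
            return result.returncode
    print("All stages passed; the change is ready for delivery.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

Failing fast keeps the feedback tied to the change that broke the build, which is the whole point of integrating and testing as often as possible.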

How do you add value to the CI/CD pipeline?

Jeffrey Keyes, director of product marketing for the software company Plutora: At a high level, we address three areas. The first is that we integrate with and unify the entirety of the Agile and DevOps toolchain, including CI/CD tooling. The point of unification is eliminating the inefficiencies and loss of fidelity of handoffs. We also correlate the data and artifacts into the stages of delivery and relate all of the information together. This is critical as you need to see what features are actually being delivered, how that relates to code being built, where it sits in the pipeline and the relationship of test to all of that data.

The second area is the management of key processes. We provide release orchestration enabling additional visibility and logic enhancements augmenting the CI/CD pipeline. We can orchestrate between the manual and automated tasks of any pipeline, decomposing delivery into phases and gates, ensuring governance is maintained and you have appropriate levels of quality. We have a deployment planning and orchestration capability which augments application release automation, managing the go-live activities. We also have a non-production environment management solution centralizing the requests, orchestrating the provisioning and managing the utilization of pre-production environments.

The third area is the analytics and the visualization of the value stream itself. We provide out-of-box visualizations, including a value stream map, for the flow of work along the entire process. We provide rich “what-if” scenario analysis and comparison metrics, including using teams and time as dimensions. We enable you to answer the most important question of digital transformation — are we improving?

Abhinav Asthana, CEO and co-founder of API development solution provider Postman: Postman offers a comprehensive API testing tool that makes it easy to set up automated tests. You can aggregate the tests and requests you’ve created into a single automated test sequence that you can reuse again and again.

Integration testing is hard. Running through sad paths consisting of hundreds of failing dependencies is a nearly impossible task for all but the largest organizations. From simple happy path debugging to thorough sad path deployment, Postman keeps your tests tightly coupled with your services. The tool is approachable, allowing even less technical QA team members to contribute to a testing suite. At the same time, it is robust in allowing the simulation of complex workflows and business logic.

Every test that can be run manually via the Postman GUI can be automated in Postman’s command line tool, Newman, and can be included as a build step in your pipeline. With Postman you aren’t testing just your code, but the fabric of your entire service, and those it relies on. Hundreds of organizations have built Postman collections alongside their development and employed them as integration tests. Your developers are already debugging with Postman, why not put that work to good use?
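As one example of what Asthana describes, a pipeline build step could invoke Newman against an exported collection along these lines. The collection and environment file names here are hypothetical, and this is only one of several ways to wire Newman into a CI job.

```python
import subprocess
import sys

# Hypothetical build step: run an exported Postman collection with Newman.
# "api-tests.postman_collection.json" and "staging.postman_environment.json"
# are placeholder file names; any CI server can invoke this script after the build.
result = subprocess.run(
    [
        "newman", "run", "api-tests.postman_collection.json",
        "-e", "staging.postman_environment.json",
    ]
)

# A non-zero exit code fails the build, so broken API behavior is caught
# before the change is promoted any further down the pipeline.
sys.exit(result.returncode)
```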



A guide to DevOps CI/CD tools

Atlassian: Atlassian offers cloud and on-premises versions of continuous delivery tools. Bitbucket Pipelines is a modern cloud-based continuous delivery service that automates the code from test to production. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow.

API Fortress: API Fortress is a continuous testing platform for APIs. It is the final piece to complete your continuous integration vision. One platform to test functionality, performance, and load. Save time with automated test generation, benefit from true cross-team collaboration, leverage your existing version control system, and seamlessly integrate with any CI/CD platform. Catch problems before they are pushed live — automatically.

Automic: Automic from CA Technologies, a Broadcom Company, is a leader in business automation software. Automic V12 is a unified suite of business automation products for driving agility across enterprise operations and empowering DevOps initiatives.

Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open-source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.

CloudBees: CloudBees is powering the continuous economy by building the world’s first end-to-end system for automating software delivery, the CloudBees Suite. The CloudBees Suite builds on emerging DevOps practices and continuous integration (CI) and continuous delivery (CD) automation, adding a layer of governance, visibility and insights necessary to achieve optimum efficiency and control new risks.

FEATURED PROVIDERS

Plutora: Plutora provides the most complete value stream management solution for enterprise IT, improving the speed and quality of software creation by capturing, visualizing and analyzing critical indicators of every aspect of the delivery process. Plutora orchestrates release pipelines across a diverse ecosystem of development methodologies, manages hybrid test environments, correlates data from existing toolchains, and incorporates quality metrics gathered at every step. The Plutora Platform unifies existing application delivery toolchains, ensuring assets, data and artifacts flow between systems. It ensures organizational alignment of software development with business strategy and provides visibility, analytics and a system of insights into the entire value stream, guiding continuous improvement through the measured outcomes of each effort.

Postman: As APIs are becoming increasingly more important to the development lifecycle, a proper API strategy is a crucial piece of your CI/CD pipeline. Postman provides tools that support every stage of the API lifecycle from design and testing to monitoring and debugging. Postman’s command line tool, Newman, is designed to help developers run and test Postman Collections directly from the command line and integrate Postman tests within their CI/CD build process. Developers can run Postman tests every time their build process kicks off, and integrate it with their CI service such as Jenkins, Travis CI or any other code deployment pipeline tool.

Datical: Datical brings Agile and DevOps to the database to radically improve and simplify the application release process. Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating security vulnerabilities, costly errors and downtime.

Dynatrace: Dynatrace provides software intelligence to simplify enterprise cloud complexity and accelerate digital transformation. With AI and complete automation, our all-in-one platform provides answers, not just data, about the performance of applications, the underlying infrastructure and the experience of all users.

Electric Cloud: Electric Cloud helps software-driven companies like E*TRADE, GM, Hyundai, Intel and Samsung build and release applications and devices at any speed the business demands, with the acceleration, orchestration and insight needed to continuously improve their results.

GitLab: GitLab is the only single application for the entire DevOps lifecycle, allowing Product, Development, QA, Security, and Operations teams to work concurrently on the same project. Designed to provide a seamless development process, GitLab’s built-in Continuous Integration and Continuous Deployment offerings enable developers to easily monitor the progress of tests and build pipelines, then deploy with the confidence that their code has been tested across multiple environments.

IBM: UrbanCode accelerates delivery of software change to any platform — from containers on cloud to mainframe in data center. Manage build configurations and build infrastructures at scale. Orchestrate, automate and deploy applications, middleware and database changes. Release interdependent applications with pipelines of pipelines, plan release events, orchestrate simultaneous deployments of multiple applications. Improve DevOps performance with value stream analytics.

JetBrains: TeamCity is a continuous integration and continuous delivery server that takes moments to set up, shows your build results on-the-fly, and works out of the box. It will make sure your software gets built, tested, and deployed, and you get notified about that appropriately, in any way you choose. TeamCity integrates with all major development frameworks, version control systems, issue trackers, IDEs, and cloud services.

Microsoft: Microsoft’s Azure DevOps solution is a suite of DevOps tools designed to help teams collaborate to deliver high-quality solutions faster. Azure DevOps marks an evolution in the company’s Visual Studio Team Services. VSTS users will now be upgraded to Azure DevOps. The solution features Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping.

Octopus Deploy: Octopus Deploy is an automated release management tool for modern developers and DevOps teams. Features include the ability to promote releases between environments, repeatable and reliable deployments, the ability to simplify the most complicated application deployments, an intuitive and easy-to-use dashboard, and first-class platform support.

Redgate Software: Including SQL Server databases in Continuous Integration and Continuous Delivery, and stopping them being the bottleneck in the process, is the mission at Redgate. Whether version controlling database code, including it in continuous integration, or adding it to automated deployments, the SQL Toolbelt from Redgate includes every tool necessary. Many, like SQL Source Control, SQL Compare and SQL Change Automation, integrate with and plug into the same infrastructure already used for application development.

Rogue Wave Software: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. From API management, web and mobile, embeddable analytics, static and dynamic analysis to open-source support, we have the software essentials to innovate with confidence.

Sauce Labs: Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.

Tasktop: Tasktop provides the backbone for the most impactful Agile and DevOps transformations by connecting all the best-of-breed tools used for planning, building and delivering software at scale. With its unique model-based integration, Tasktop automates the flow of information from tool to tool, removing the duplicate data entry and manual handovers that are slowing teams down. By normalizing and standardizing data as it flows, Tasktop provides a one-of-a-kind set of metrics that tells leadership exactly how well they are performing against the business objectives and where they can improve.

TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.

XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases.


Why a good API strategy matters

Application programming interfaces (APIs) are increasingly becoming more important to software development as organizations embrace connected services and microservices architectures. “Meaning that APIs are consumed by many different people and are integrated with diverse and complex services,” said Postman CEO and co-founder Abhinav Asthana.

According to Gartner’s William Holz, in order to do effective CI/CD, APIs create a separation of concerns and enable the ability to test at the unit level, feature level, integration level and performance level. “APIs give me an idea of where to look for a problem, help me solve that problem sooner, and minimize the amount of waste or rework I need to do,” he said. “I use the example that I can have 100 percent unit test coverage, but I can still break features. I can still break my application because unit tests don’t test the feature.”

However, according to Asthana, testing is difficult, and in order for an API strategy to be a powerful tool, dependencies need to be removed. “With connected services, dependencies are a huge concern. If a team updates an API, it could potentially break an API consumer’s service, but a CI/CD pipeline can solve this problem,” Asthana said. “CI/CD pipelines ensure that connected services are healthy by consistently checking for broken dependencies with full system tests at every build. This means that broken dependencies are most often caught in development rather than in production. The later a bug is caught, the more expensive and time-consuming it becomes to fix it.”

Once a CI/CD pipeline is set up, developers can run integration tests as part of their build. According to Asthana, those tests stay with the developer until they pass, which means the production environment remains safe from harm. “A good CI/CD pipeline will have reporting built in, so testers can review their automatically run test results and determine the source of not just errors in their code, but errors in interaction with dependencies,” Asthana said.

Asthana adds that teams should also find a tool or framework that can write tests and maintain a test library for the pipeline. This will ensure a good set of tests that can be used and reused. Tools should also provide the ability to test against live environments, services and data, and have a system of reporting implemented so developers and testers can get access to insights quickly, according to Asthana.

“Implementation of CI/CD greatly simplifies API management. The consistency and reliability of a CI/CD pipeline mean that developers aren’t bothered with manually managing dependencies between versions. Instead of putting out fires, developers can spend their time improving products. CI/CD pipelines catch errors early on, which saves time and money,” he said.
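To illustrate the kind of dependency check Asthana describes, here is a minimal, hypothetical integration test using Python’s requests library. The service URL and expected fields are placeholders rather than a real API, and a real suite would also exercise the sad paths.

```python
import requests

# Placeholder URL for a downstream service this application depends on.
ORDERS_API = "https://staging.example.com/api/v1/orders"

def test_orders_api_contract():
    """Fail the build if the dependency is down or its response shape changed."""
    response = requests.get(ORDERS_API, params={"limit": 1}, timeout=10)

    # The dependency must be reachable and healthy.
    assert response.status_code == 200

    # The fields this application relies on must still be present.
    payload = response.json()
    assert isinstance(payload, list)
    if payload:
        order = payload[0]
        for field in ("id", "status", "created_at"):
            assert field in order, f"dependency no longer returns '{field}'"

if __name__ == "__main__":
    test_orders_api_contract()
    print("Dependency contract check passed.")
```

Running a check like this at every build is what catches a broken dependency in development rather than in production.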


Guest View BY MITESH SONI

Top developer skills in demand

Mitesh Soni is the senior director of Innovation and Fintech Ecosystems at Finastra.

Digital disruption is building a voracious appetite for developers — and every area of home and business life is adapting to disruption. Platform-based business models, from travel and hospitality to recruitment and P2P funding, dominate the economy. Firms like Airbnb and Uber have no physical inventory of their own, but their platforms have revolutionized their industries and are effectively infinite in scope and limited only by imagination.

Of course, the key to success for all these firms is the creativity and passion of their development teams in delivering the best UX and staying ahead of the competition, as well as collaborating with the business to create more and more value by expanding products and services that can be rolled into a brand’s offer. So, with disruptors springing up across all industries, which skills are most in demand and where will the biggest opportunities be for developers who want a piece of the ongoing digital revolution?

Digital disruption is opening up new channels for developer skills across a whole gamut of disciplines.

User experience

Perhaps the most important new skill set for developers to acquire in the new era of digital disruption is user experience (UX) design. This is a combination of development and design, where incidents that occur during a user journey can change customer behavior.

Agile and Lean

Second on the list is the ability to operate in an Agile and Lean environment. Developers need to be entrepreneurial in their thinking, and therefore willing to take risks in the pursuit of success. This means working fast and being open to trying new things, but also accepting that it’s okay to fail. It includes collaborating with the business to spot additional opportunities and working out how to build a new service from another service, possibly using data analytics to identify patterns of behavior in the context of multiple data sources.

Infrastructure developers

A new breed of developer is beginning to emerge in line with huge platforms such as Amazon, WeChat and Google, whose main aim in life is to provide everything that users need so they never have to leave the platform. There’s a requirement for developers who can work on back-end infrastructure to ensure services are fully integrated and secure within short timeframes. Their skills are applied to building systems that move customers from one area to another without friction.

App builders

Environments such as iOS and Android are enabling millions of developers to create new apps in days, pulling data from multiple sources and leveraging the stability of core platforms and existing code to address niche areas of value for customers. Marketplaces including the Apple and Google Stores mean that there is a ready-made marketing and distribution channel for new apps, as well as quality control.

Blurring lines

Our increasingly entrepreneurial economy is encouraging more and more people to grow small pools of profit, operating on their own or in small teams. This may begin at university, where students can earn as they learn by building apps. There is a very low barrier to entry for those who want to get involved, as app development has become truly democratized. Even those who feel more comfortable working for a big organization can try out their ideas for new apps in their spare time. It’s not just possible to learn development skills for free online, but it’s positively encouraged in the spirit of keeping up with new technologies and their capabilities.

Digital disruption is opening up new channels for developer skills across a whole gamut of disciplines, from writing micro apps at one end of the spectrum to building elegant user interfaces, customer journeys and back-end infrastructure at the other. Organizations in the financial services sector understand that the old days of hierarchical and inflexible system development of whatever kind are over, and that they cannot afford to fall behind emerging technology trends. The role of developers in this new environment is to bring a new mindset of entrepreneurial and collaborative thinking to the fore, and to play an active part in delivering value for customers and employers alike.



Analyst View BY ARNAL DAYARATNA

The ubiquity of developers

What does it mean to be a software developer today? Is it necessary to write code to qualify as a contemporary software developer? Are practitioners of low-code and no-code development software developers? Should business stakeholders who participate in software development using platform-as-a-service development tools be considered software developers? Moreover, do IT professionals, business analysts and data analysts who develop applications count as software developers? How do we define the boundaries and definition of a contemporary software developer given the heterogeneity of contemporary modalities of software development?

The definition of a software developer provides insight into the population of labor resources that architect, implement, maintain and modify digital transformation initiatives. Technology suppliers and vendors should pay attention to the universe of software developers because developers not only build digital solutions but are also critical to their adoption and long-term success as well. Meanwhile, organizations pursuing digital transformation initiatives would do well to pay attention to the demographics of software developers to ensure recruitment and retention of developers that can realize their strategic and operational goals.

Software developers include those that develop applications by means of custom coding and scripting, in addition to low-code and no-code developers that leverage visually guided development tools. As such, data scientists who develop algorithms in R or Python are as much of a software developer as are cloud-native developers that boast fluency in container orchestration frameworks, containers and microservices. Similarly, business stakeholders that use PaaS developer tools or visually guided development platforms to create dashboards of KPIs and other operational metrics should be recognized as developers because of their ability to design, develop and iterate on digital solutions to business problems.

The larger point here is that the population of full-time professional developers is richly complemented by a universe of part-time developers that do not have developer as part of their job title, but actively participate in application development for their professional work. Examples of part-time developers include business analysts, data scientists,

risk managers, systems engineers, DevOps engineers and IT operations professionals. Over and beyond this supplementary segment of part-time developers, however, end users that transform applications represent another notable constituency of software developer because they are similarly engaged in the project of designing, building and iterating on software-based digital solutions.

All this is to say that, in an era of digital transformation, every user of technology is a developer, of sorts. Gmail users that create filters to preclude unwanted email, perform advanced searches or integrate their email with apps such as Google Drive are developers. Facebook users that customize their privacy and distribution settings are developers, as are LinkedIn users that customize the content of their profiles to optimize the positioning of their profile in searches. Similarly, users of programmable thermostats and appliances that can be controlled by mobile devices are developers in much the same vein as automobile drivers that harness the voice recognition functionality of their Bluetooth systems and voice-activated GPS systems. While IoT technologies and digital transformation have contributed to the digitization of objects, the consumerization of IT has transformed everyone into developers of varying modalities and degrees of sophistication.

What is the significance of this proposition that every contemporary user of technology is a developer? Put simply, the acceleration of digital transformation initiatives has transformed the nature of labor to the point where every end user is a software developer and correspondingly has access to a skillset that empowers them to develop digital solutions. End users of technology are intimately familiar with digitized workflows, authentication practices, the implementation of security, customized privacy settings and advanced search functionality. All this means that technology suppliers have, at their disposal, a pre-trained workforce of labor that can participate in the development of digital solutions by harnessing the skills they have acquired as a result of their daily use of technology and its associated constellation of software applications.

Arnal Dayaratna is Research Director, Software Development at IDC.

The acceleration of digital transformation initiatives has transformed the nature of labor to the point where every end user is a software developer.


Industry Watch BY DAVID RUBINSTEIN

A sobering look at cloud

David Rubinstein is editor-in-chief of SD Times.

The advantages of cloud computing have been talked about for years in the pages of this magazine. Yes, massive scaling, redundancy and data availability are benefits, but the primary driver has always been cost. Companies were told they could abandon their data centers — or significantly reduce them — and move their applications to the cloud, while cloud providers provided the infrastructure at a fraction of what running a data center costs.

Along with touting cloud benefits, vendors also used fear-mongering to move companies to the cloud, after using the same tactics to move companies to Agile, then DevOps; saying a company’s competitors will drive them out of business if they don’t complete this digital transformation, and fast. Full disclosure: SD Times likely oversold this, along with tech media as a whole, as a must-have for survival.

But now, years into cloud computing, many organizations are coming out from the ether and taking a more sober look at cloud computing. And many are glad they did not abandon their data centers, where they’re starting to bring cloud-native service platforms behind their firewall.

“Companies are re-evaluating their cloud strategies, as costs are going up and the extra complexity around cloud services leads many companies not knowing the actual cost of running in the cloud,” said Glenn Sullivan, co-founder of SnapRoute, which has created a cloud-native network operating system. “It looks like a retreat from the cloud, but it’s actually a smarter look at what we’re putting into clouds. Certain workloads have to be in certain places, and we want to bring cloud flexibility on-premises, to allow for the same kind of API-level structure you’d get in the cloud.”

There’s more to cloud than compute. There are all the other services you have to run that quickly can add to the cost. The larger your cloud footprint, and the more services you require to run and maintain your applications, the greater the costs. And they can escalate quickly.

Companies today are building their businesses to run both in the cloud and on-premises.

Jonathan Sullivan, CTO at DNS provider NS1, said, “The best thing about cloud, there are efficiencies to a point. You push a button and you get a server. You don’t have to buy a thing and ship it to a data center and send someone over there with a CD to install the operating system.”

When NS1 was starting out, Sullivan said, “All of our prototyping was done in Digital Ocean and Amazon six years ago because we just didn’t need to worry about the infrastructure, and you scale later. And after scaling, the cost economics no longer work in your favor. Amazon can only go so low with pricing before their margins disappear.”

Further, the complexities of new software architectures have opened new attack vectors for hackers, which also is affecting company decisions regarding the cloud. “People will never trust outsourcing security,” Sullivan said. “They need to have their security team. They’ve made heavy investments in appliances, and things like intrusion detection and WAF, so it makes sense for us to just give our customers the software they can run behind their existing security perimeter that they know, they trust, which they have teams managing.”

So these vendors and others are seeing companies sliding back into private cloud. Sullivan said the biggest indicator of this was Amazon’s announcement of Outposts, which are Amazon rack servers you can run locally to get the advantages of AWS in your data center. Modern private clouds — even as part of a hybrid cloud implementation — allow organizations to have their workloads span private cloud, on-premises, and public clouds, and often, multi-cloud setups.

“The previous hybrid cloud solution was more like, I’ll keep my data warehouse on-prem, because it makes sense, and I’ll do some stuff in the cloud, and now it’s becoming — with OpenStack and VMware — your company infrastructure is now as flexible and immutable as the cloud stuff. The benefits have spread in both directions,” Sullivan explained.

Companies today are building their businesses to run both in the cloud and on-premises. Banks, financial institutions and older enterprises likely will never move fully to the cloud. They’ll run those workloads in the cloud that make sense, and where they can find efficiencies, but they will retain data centers for those things that don’t make sense. There is a lot at stake regarding a cloud migration. Before racing in, organizations should evaluate what makes sense for them, and what does not.


