SD Times April 2020


APRIL 2020 • VOL. 2, ISSUE 34 • $9.95 • www.sdtimes.com




www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com
NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent jsargent@d2emerge.com, Jakub Lewkowicz jlewkowicz@d2emerge.com
ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com
CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz


CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum

ADVERTISING SALES
PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com
SALES MANAGER Jon Sawyer jsawyer@d2emerge.com

CUSTOMER SERVICE
SUBSCRIPTIONS subscriptions@d2emerge.com
ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com
LIST SERVICES Jourdan Pedone jpedone@d2emerge.com


REPRINTS reprints@d2emerge.com
ACCOUNTING accounting@d2emerge.com

PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com



Contents

VOLUME 2, ISSUE 34 • APRIL 2020

NEWS

4  News Watch
10  How businesses can adapt to support remote work for COVID-19 and beyond
19  HCL Software: Beyond Global IT Services
21  DevOps requires a human transformation

FEATURES

To build resilient systems, embrace the chaos page 6
Making open source work for you and your business page 14
iPaaS adoption growing to handle integrations in cloud architectures page 22

BUYERS GUIDE

Creating a clear testing path to DevOps takeoff page 26

Monitoring, the first of three parts:
Application Performance Monitoring: What it means in today’s complex software world page 33

COLUMNS

40  GUEST VIEW by Adam Lieberman: 8 open-source data science libraries
41  ANALYST VIEW by Arnal Dayaratna: 3 steps to becoming cloud native
42  INDUSTRY WATCH by David Rubinstein: Home is… where I always am!

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

GitHub acquires npm


GitHub has announced plans to acquire npm, the company behind the Node.js package manager for JavaScript, the npm Registry and the npm CLI. “npm is a critical part of the JavaScript world. The work of the npm team over the last 10 years, and the contributions of hundreds of thousands of open source developers and maintainers, have made npm home to over 1.3 million packages with 75 billion downloads a month. Together, they’ve helped JavaScript become the largest developer ecosystem in the world. We at GitHub are honored to be part of the next chapter of npm’s story and to help npm continue to scale to meet the needs of the fast-growing JavaScript community,” Nat Friedman, CEO of GitHub, wrote in a post. According to Friedman, once the acquisition is completed GitHub will focus on investing in the registry infrastructure and platform, improving the code experience, and engaging with the JavaScript community on the future of npm.

Atlassian brings no-code automation to Jira Cloud

Atlassian has announced no-code automation is now available natively in Jira Cloud. The capabilities come from the company’s acquisition of Automation for Jira, a no-code rules builder, last October. The new release features the ability to automate tedious and repetitive tasks; the ability to organize teams, tools and processes; the ability to work across Jira Cloud products; integration with collaboration tools; and the ability to automate processes spanning DevOps and IT operations teams. Additionally, users can drag and drop if-this-then-that rules together. Some use cases include keeping Jira up to date with rules like “if the last task for a bug fix changes to ‘done,’ close the parent issue and notify the support team,” and the ability to discover potential problems before they become an issue with rules like “if an urgent issue is raised by the CTO, send a Slack message to the support room and set an SLA.”
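The rule shape is easy to picture in code. Below is a minimal TypeScript sketch of the if-this-then-that pattern described above; the types and function names are illustrative only, not Atlassian’s actual rule schema or API.

// Illustrative only: a generic trigger/action rule in the
// if-this-then-that style described above, not Atlassian's schema.
interface Issue {
  key: string;
  type: "bug" | "task";
  status: string;
  parent?: Issue;
}

interface AutomationRule {
  trigger: (issue: Issue) => boolean;  // "if this..."
  actions: ((issue: Issue) => void)[]; // "...then that"
}

// Hypothetical rule: when the last task for a bug fix is done,
// close the parent issue and notify the support team.
const closeParentWhenDone: AutomationRule = {
  trigger: (issue) => issue.type === "task" && issue.status === "done",
  actions: [
    (issue) => { if (issue.parent) issue.parent.status = "closed"; },
    (issue) => console.log(`Notify support: ${issue.key} is done`),
  ],
};

function applyRule(rule: AutomationRule, issue: Issue): void {
  if (rule.trigger(issue)) rule.actions.forEach((act) => act(issue));
}

// e.g. applyRule(closeParentWhenDone, someTaskIssue);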

Docker refocuses on developers

Container image company Docker has announced that it is completely shifting its focus to developers. The company has revealed that it will expand Docker Desktop and Docker Hub, as well as partner with the current ecosystem of Docker tools. Updates to Docker Desktop will help accelerate onboarding new developers to team workflows, help new developers onboard to developing with containers, and provide features to help improve team collaboration and communication, Docker explained. This will include adding more features to the Docker CLI and Docker Desktop UI.


Split introduces approval flows for feature flags

In an effort to keep risky changes from making it into production, feature delivery platform provider Split Software has announced the release of Enterprise Approval Flows. This new feature is designed to help engineering teams stay compliant with policies and audits as well as identify any risks. According to the company, much like a code approval process, Enterprise Approval Flows applies checks to the feature flags that release code to end users. Key features include the ability to receive notifications on submissions, approvals, withdrawals or rejections; to review or comment on changes; to view full audit information about the submitter, approver and current status; and to view all past and pending changes.
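As a rough sketch of what such a gate looks like in practice (hypothetical shapes, not Split’s actual API), a flag change stays pending until an approver acts, and only approved changes reach production:

type ChangeStatus = "pending" | "approved" | "rejected" | "withdrawn";

interface FlagChange {
  flag: string;
  newValue: boolean;
  submitter: string;
  approver?: string;
  status: ChangeStatus;
}

// Live flag values; only approved changes are ever written here.
const liveFlags = new Map<string, boolean>();

function submitChange(flag: string, newValue: boolean, submitter: string): FlagChange {
  return { flag, newValue, submitter, status: "pending" };
}

function approveChange(change: FlagChange, approver: string): void {
  change.status = "approved";
  change.approver = approver;
  liveFlags.set(change.flag, change.newValue); // apply only after approval
}

// const change = submitChange("new-checkout", true, "dev@example.com");
// approveChange(change, "lead@example.com");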

Tasktop launches new VSM solution

Tasktop Viz is a new value stream management solution designed to provide real-time visibility into software product value streams and the obstacles that impede business value delivery. According to Tasktop CEO Mik Kersten, the main goal was to provide the Flow Framework, which he created to help IT leadership shift from a project-centric mindset to a product-oriented focus. The framework helps leadership realize more value and respond faster to market changes, and generates flow metrics for large organizations nearly instantly so they can quickly scale. The Flow Metrics include velocity, distribution, time, efficiency, and load.

Instana brings automated alerts to APM

Instana has announced a new way for DevOps and IT Ops teams to manage and execute alerts. Instana SmartAlerts is an automated IT alert management system based on environmental and situational use cases.

According to the company, performance monitoring can become confusing when trying to figure out what you want to get alerts about, how to define KPIs, whether thresholds are too high, and which best practices to follow. SmartAlerts is designed to automatically generate alert configurations with relevant KPIs and automatic threshold detection.
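Instana has not published its algorithm, but automatic threshold detection is commonly built on a statistical baseline of recent samples. Here is a toy TypeScript version of that general technique, not Instana’s implementation:

// Flag a sample that sits more than k standard deviations above the
// mean of a recent window. A generic baseline technique, simplified.
function shouldAlert(window: number[], latest: number, k = 3): boolean {
  const mean = window.reduce((sum, v) => sum + v, 0) / window.length;
  const variance =
    window.reduce((sum, v) => sum + (v - mean) ** 2, 0) / window.length;
  return latest > mean + k * Math.sqrt(variance);
}

// e.g. latency samples in ms: only a clear outlier alerts.
console.log(shouldAlert([120, 130, 125, 118, 122], 480)); // true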

TypeScript 3.8 now available

Microsoft has announced the availability of the latest version of TypeScript. TypeScript 3.8 introduces several new features, including new ECMAScript standards and new syntax for type-only imports and exports. One of the new ECMAScript features is private fields. Rules of private fields include that they start with a “#” character, every private field name is uniquely scoped, accessibility modifiers can’t be used on them, and they can’t be accessed or detected from outside of the containing class. In addition to privacy, a benefit of private fields is that they are unique, which means they can’t be overwritten in subclasses.



Other new features include export * as ns syntax, top-level await, JSDoc property modifiers, better directory watching on Linux, “fast and loose” incremental checking, and more.
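A compact sketch of the headline 3.8 features (the module paths are placeholders):

// Type-only imports are erased at compile time; "./types" and
// "./math" are placeholder module paths.
import type { Config } from "./types";
export * as math from "./math"; // new `export * as ns` syntax

let config: Config | null = null;

class Counter {
  #count = 0; // ECMAScript private field

  increment(): number {
    return ++this.#count;
  }
}

const c = new Counter();
c.increment();
// c.#count; // error: private fields can't be touched outside the class

// Top-level await works in modules when targeting ES2017+ with
// module set to "esnext" or "system".
const body = await fetch("https://example.com").then((r) => r.text());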

The first dev preview of Android 11

The Android team is revealing its plans for the next version of its operating system. The first developer preview of Android 11 is now out and features new capabilities for foldable phones, 5G, call-screening, and machine learning. Developers can download the system image for Pixel 2, 3, 3a or 4 devices.

FSF creating new site for collaboration

Members of the Free Software Foundation tech team are currently reviewing ethical web-based software that will help teams collaborate on their projects, with features like merge requests, bug tracking, and other common tools.

“Infrastructure is very important for free software, and it’s unfortunate that so much free software development currently relies on sites that don’t publish their source code, and require or encourage the use of proprietary software,” the FSF wrote in a blog post. “Our GNU ethical repository criteria aim to set a high standard for free software code hosting, and we hope to meet that with our new forge.”

Android Studio 3.6 released

The Android team also announced the latest release of its integrated development environment. According to the company, the release of Android Studio 3.6 aims to address quality, specifically in code editing and debugging use cases. The company announced a new packaging tool that aims to improve build performance, and changed the default packaging tool to zipflinger for debug builds.


Additionally, developers can import externally built APKs to debug and profile them, according to Scott Swarthout, product manager for Android. For code editing, the release features a new way to quickly design, develop and preview app layouts using XML, with a new split view in the design editors. Split view enables developers to see design and code views of their UIs simultaneously.

BMC acquires Compuware

IT company BMC has announced that it is acquiring the mainframe application development company Compuware. This acquisition will build on the success of BMC Automated Mainframe Intelligence and Compuware’s Topaz suite, ISPW technology, and classic product portfolios, BMC explained.


“Compuware is the proven and trusted partner in mainstreaming the mainframe for Agile and DevOps, and we are thrilled to now be joining forces with BMC in reinventing the future of the platform,” said Chris O’Malley, CEO of Compuware. “Both companies have been leaders in mainframe innovation over the last five years and we look forward to combining our complementary solution strengths and common passion for accelerating our customers’ successful digital transformations.”

PowerShell 7.0 generally available

Microsoft has announced the general availability of PowerShell 7.0. PowerShell is a configuration and automation tool that includes a command-line shell, an object-oriented scripting language, and a set of tools for executing scripts and managing modules. Three years ago, Microsoft released a completely reworked version of the tool as PowerShell Core 6. That update introduced cross-platform support across Windows, macOS, and Linux; SSH-based PowerShell Remoting; improved support for REST and JSON; and official Docker containers.

People on the move

• Nicole Forsgren, founder of DevOps Research and Assessment, has announced she will be joining GitHub as VP of research and strategy. At GitHub, she will work to explore open source, developer productivity and happiness for the broader industry.

• WhiteHat Security has named Chris Leffel as its new vice president of product management. Leffel will lead the company’s product management team and product strategy to provide new innovation to WhiteHat’s product portfolio and security expertise.

• Frank Roe, SmartBear’s former chief revenue officer, has been promoted to the company’s CEO. Roe succeeds Justin Teague, who has stepped down for personal reasons. Teague will continue to contribute to the company as executive chairman of the board of directors.

• Exadel has switched up its leadership, transitioning board member and strategic advisor Ilya Cantor to CEO and naming Fima Katz president. According to the company, the move reflects the increasing business momentum and continued growth it is seeing across every business segment.

• Wind River named former Microsoft executive Kevin Dallas as its new chief executive officer. Dallas succeeds Jim Douglas, who is stepping down as president and CEO. Douglas will continue to advise Wind River and TPG Capital during a transition period.

• Optimizely has announced a new CTO: Lawrence Bruhmuller, who will work to drive the company’s business momentum and lead its product and engineering teams to deliver better code, apps and experiences.


To build resilient systems, embrace the chaos

BY JENNA SARGENT

It shouldn’t be news to you to hear that software needs to be tested rigorously before being pushed to production. Over the years countless testing methodologies have popped up, each promising to be the best one. From automated testing to continuous testing to test-driven development, there is no shortage of ways to test your software. While there may be variations in these testing methods, they all still rely on some form of human intervention. Humans need to script the tests, which means they need to know what they’re testing for. This presents a challenge in complex environments when a number of factors could combine to produce an unintended result — one for which testers wouldn’t have thought to test. This is where chaos engineering comes in, explained Michael Fisher, product manager at OpsRamp. Chaos engineering allows you to test for those “unknown unknowns,” he said.

According to Shannon Weyrick, vice president of architecture at NS1, chaos engineering is “the practice of intentionally introducing failures in systems to proactively identify points of weakness.” Weyrick explained that aside from identifying weaknesses in a system, chaos engineering allows teams to predict and proactively mitigate issues before they turn into problems that could impact the business.

Matthew Fornaciari, CTO and co-founder of Gremlin, added that “traditional methods of testing are much more about testing how the underlying sections of the code functions. Chaos engineering focuses on discovering and validating how the system functions as a whole, especially under duress.”

Chaos engineering is considered to be part of the testing phase, but Hitesh Patel, senior director of product management at F5, believes that the core of chaos engineering goes back to the development phase. It is all about “designing software and systems in an environment that is mimicking what is really happening in the real world,” he said.


This means that as a developer is writing code, they’re thinking about how failures will be injected into it down the line, and as a result, they’re building more resilient systems. “Right now, chaos engineering is more about setting that expectation when you’re building the software or you’re building the system that failures are going to happen and that you need to design for resiliency and bake that in at the beginning of a product or software life cycle rather than trying to add that on later,” said Patel.

The history of chaos engineering

The software development industry tends to latch onto practices and methodologies developed and successfully used at large tech companies. This happened with SRE, which originated at Google, and it’s also the case with chaos engineering. The practice first originated at Netflix almost 10 years ago when the company built a tool called Chaos Monkey that would randomly disable production instances. “By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won’t even notice,” Netflix wrote in a blog post. Since then, Netflix has created an entire “Simian Army” of tools that it says keeps its cloud “safe, secure, and highly available.”

Best practices for chaos engineering

According to Shannon Weyrick, vice president of architecture at NS1, there are three main best practices that should be followed when using chaos engineering.

Get buy-in to the chaos mindset across the team: Purposefully injecting failures into a system will require a shift in mindset. He recommends teams investigate the practice, understand the ramifications, and introduce it in small ways for legacy projects and directly for new projects. “Ensure your team knows how to run successful experiments, and minimize the blast radius to reduce or remove potential impact to customers when failures occur,” said Weyrick.

Make the experiments real: The goal of chaos engineering is to increase reliability by exploring unpredictable failures through experiments. To get the most out of chaos engineering, teams should conduct their experiments using the most realistic data and environments possible. He also noted that it’s important to conduct experiments on the production system because it will always contain unique and hard-to-reproduce variables.

Be sure people are part of your system: It’s important to remember that infrastructure and software are not the only parts of a system. “Before conducting chaos experiments, remember that the operators who maintain the system should be considered a part of that system, and therefore be a part of the experiments,” said Weyrick.

Examples of tools in this Simian Army include Conformity Monkey, which finds and removes instances that don’t adhere to best practices; Latency Monkey, which introduces artificial delays to see how services respond to service degradation; and Chaos Gorilla, which simulates an outage of an entire AWS availability zone. “With the ever-growing Netflix Simian Army by our side, constantly testing our resilience to all sorts of failures, we feel much more confident about our ability to deal with the inevitable failures that we’ll encounter in production and to minimize or eliminate their impact to our subscribers,” Netflix said. Since then, several companies have adopted chaos engineering as part of their testing process, and it has even spawned companies like Gremlin, which provides chaos-engineering-as-a-service.
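In spirit, a Chaos Monkey-style experiment is small: pick a random target, disrupt it, and watch whether the system recovers, while capping the blast radius. The following toy TypeScript sketch is illustrative only, not Netflix’s or Gremlin’s actual tooling.

// Toy chaos experiment: terminate one random instance from a pool
// while leaving the rest of the capacity untouched. Real tools add
// scheduling, monitoring hooks, and automatic abort conditions.
interface Instance {
  id: string;
  healthy: boolean;
}

function runChaosExperiment(pool: Instance[]): void {
  const healthy = pool.filter((i) => i.healthy);
  if (healthy.length < 2) return; // blast radius: never kill the last instance
  const victim = healthy[Math.floor(Math.random() * healthy.length)];
  victim.healthy = false; // simulate terminating the instance
  console.log(`Chaos: terminated ${victim.id}; now verify recovery`);
}

runChaosExperiment([
  { id: "web-1", healthy: true },
  { id: "web-2", healthy: true },
  { id: "web-3", healthy: true },
]);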

Smaller companies can benefit

While chaos engineering originated at Netflix, a large company with a complex infrastructure and environment, Patel believes that in a lot of ways, smaller companies will find it easier to implement chaos engineering. Larger companies are going to have more complex compliance, auditing, and reporting requirements. “All of those things factor in when you’re trying to do what I would call a revolutionary change in how you operate things,” said Patel.

Overall, there is less red tape to cut through at smaller and medium-sized companies. “There’s fewer people involved and I think it’s easier for a two-person team to get into a room and say ‘right, this is the right thing for the business, this is the right thing for our customers, and we can get started faster’,” said Patel.

Weyrick doesn’t entirely agree with the idea that smaller means easier. Today, even small and medium-sized applications can be complex, increasing the surface area for those unpredictable weaknesses, he explained. He believes that microservice architectures in particular are inherently complex because they involve a number of disparate, interconnected parts and are often deployed in complex and widely distributed architectures.

Fornaciari recalled being on the availability team at Amazon in 2010 as it was doing a massive migration from a monolithic to a microservices architecture. The point of the move was to decouple systems and allow teams to own their respective functions and iterate independently, and in that sense, the migration was a success. But the migration also led the team to learn the hard way that introducing the network as a dependency between teams introduced a new class of errors. “Days quickly turned into a never-ending deluge of fire fighting, as we attempted to triage the onslaught of new issues,” said Fornaciari.



“It was then that we realized the only way we were ever going to get ahead of these novel failures was to invest heavily in proactive testing via Chaos Engineering.” Fornaciari believes that as companies start to go through what Amazon went through ten years ago, chaos engineering will be “the salve that allows those companies to get ahead of these failures, as their systems change and evolve.”

According to Weyrick, if possible, teams should try to implement chaos engineering early on in an application’s life so that they can build confidence as they scale the application. “The depth of the chaos experiments involved may start simple in smaller companies, and grow over time,” said Weyrick.

Patel also recommends starting small. He recommends starting with a non-critical application, one that isn’t going to get your company into the news or get you dragged up to your boss’ boss if things go awry. Once an application is selected, teams should apply chaos engineering to that application end to end. He emphasized that the most important part of this process early on is “building the muscle,” which he said is all about the people, not the technology.

“Technology is great, but at the end of the day, it’s people who are using these things and putting them together,” said Patel. “And what you need to do is build the muscle in the people that are doing this. Build that subject matter expertise and do that in a safe environment. Do that in a way that they can mess up a little bit. Because nothing works right the first time when you’re doing this stuff...People can build the muscle and learn how to do these things, learn the subject matter expertise, gain confidence, and then start applying that in a broader manner. And that’s where I think a tie in with leadership comes in.”

According to Patel, having support from the top of the business will be crucial in helping companies prioritize where to apply chaos engineering. “[They’re] not just giving you aircover, but also saying we’re going to apply this in a way that makes sense to our business and to our user experience and matches where we want to go from a strategic standpoint,” said Patel. “So you’re not just applying the technology in areas that no one is going to notice. You’re applying it where you can derive the biggest customer benefit.”

Fornaciari added:

Do chaos engineering on your databases too

Kendra Little, DevOps advocate at Redgate Software, brought up the point that chaos engineering is not just for software applications. It is a practice that can be applied to databases too. Little believes that the approach to testing databases with chaos engineering remains the same as the approach one would take when testing a regular software application. A big difference, however, is that people tend to be more scared of it when it’s a database instead of an application. “When we think about testing in production with databases it’s very terrifying because if something happens to your data, your whole company is at risk,” she said. But with chaos engineering, what you’re really doing is controlled testing. She explained that with this process you’re not just dropping tables or releasing things that could put your company out of business.

It’s also important to note that we’ve reached a point in database and infrastructure complexity where it’s not possible to replicate your production environment accurately, Little explained. “If we don’t have a way to learn about how to manage our databases and to learn how our code behaves in databases and production, then in many cases we’re not gonna have anywhere we can learn it. So it is, I think, just as relevant in databases.”


“As companies grow their applications and the supporting infrastructure, they’ll undoubtedly introduce more failure modes into their system. It’s unavoidable. That’s why we call chaos engineering a practice — it’s something that must continually grow and evolve with the underlying systems.”

Embracing risk

Fisher also added that organizations will need to shift their mindsets from one of “avoiding risks at all costs” to “embracing risk to generate a greater outcome to their users.” This can be a massive cultural shift, especially for those larger, more risk-averse companies, or companies who haven’t already adopted some form of DevOps.

“The team needs to evolve from the legacy belief that production is a golden environment that should be touched as little as possible and handled with kid gloves, lest outages occur,” said Weyrick. “Chaos engineering adopts a very different mindset: that in today’s world, this legacy belief actually creates fragile systems that fail at the first unexpected and unavoidable real world problem. Instead, we can build systems that consistently prove to us that they can survive unexpected problems, and rest easier in that confidence.”

The idea of purposefully trying to break things can be especially difficult for more traditional IT managers who are used to the idea of gatekeeping changes to the production environment, explained Kendra Little, DevOps advocate at Redgate Software. “Your inclination is, well we have to find a way to be able to test this before it gets to production,” she said. “So it’s kind of this reactionary viewpoint of as soon as I find something, I need to be able to write a test to be able to make sure that never happens again... I mean I used to very much have that perspective as an IT person, and then at a certain point, I and the higher ups in my organization as well began to realize, we can’t just be reactionary anymore. Failure is inevitable. Our system is complex enough and we need to be able to change it rapidly. We can’t just gate keep things out of there. We have to be able to change the system quickly. And there are just so many moving parts in the system and so many external factors that can impact us.”


How businesses can adapt to support remote work for COVID-19 and beyond

BY CHRISTINA CARDOZA

COVID-19 is not only quickly spreading across the globe, but it is infiltrating businesses, causing them to clear out their buildings and bring workers online. Unfortunately, most of these businesses are not prepared to handle remote work and the risks that come along with it.

While businesses should have disaster recovery and business continuity plans in place for these types of events, none of them could have prepared for something like a pandemic to happen, according to Stan Lowe, global CISO for Zscaler, a cloud security company. Even if they did happen to have a chapter in place for something like a pandemic, there is very little information on how to proceed. Lowe explained businesses typically plan for a physical event like a building burning down or a tornado, where maybe 30% of their workforce will be impacted. They don’t plan to have 100% of their employees working remotely for an unforeseeable future. “That is not something that a lot of them are prepared to handle from an operational perspective, business perspective and technology perspective,” he said. But now that businesses are in this situation, they have no choice but to go forward and find alternative ways to do things.

“While many organizations may have robust remote work policies, procedures, and systems in place, rarely are they designed to scale beyond a subset of the workforce,” said Josh Perkins, field CTO at AHEAD, an application and infrastructure consulting company. Some challenges a remote workforce presents include connectivity, end-user technology availability, collaboration tools, digital preparedness, business process dependencies, geographical dependencies and shifts in actual business process criticality or need, Perkins explained.

First off, Lowe said businesses have to address the fact that they are “on fire” before they can address why they caught on fire in the first place. They need to break down their business into simple sections, and figure out what are the most important services they have, what are the things that are impacted, and how they can protect and drive revenue.

“The ideal scenario from a work-from-home goal standpoint is to have employees be able to do all the functions, tasks and responsibilities at home as they would have at the office,” said Raj Sabhlok, president of IT infrastructure monitoring company ManageEngine. They need access to key technologies, business apps and telephony services.

More needed than VPN

It boils down to the basics: connectivity, collaboration, application access, and security, according to Perkins. Connectivity doesn’t necessarily mean having a virtual private network (VPN) in place; it means making sure remote workers have internet access and devices to work with systems in the first place.




Then if not all your employees have a business laptop or device to work on, you have to figure out if you are going to let employees use their own devices and what other resources or tools they need to help them work, Perkins explained. If organizations don’t have a VPN plan in place, they aren’t going to be able to come up with one now because it is a 60- to 90-day process plus deployment and additional bandwidth to set up, according to Lowe.

And even for organizations that do have VPNs, there is a security risk every time someone logs into the network, and that increases a business’ attack surface area. “You now have thousands and thousands of new endpoints that have just punched holes in your firewalls and VPN,” said Lowe. “It just takes one person to click on an email or one person to fall for a phishing scam for their identity to be compromised.” Endpoint management software enables IT teams to ensure apps and devices are configured correctly and secure.

It can effectively segregate personal information and control data leakage, according to ManageEngine’s Sabhlok. In addition, it can maintain control of devices if they are lost, or remotely lock a device and wipe it clean. “Through endpoint management software, we can really lock down what happens and what can be accessed from a device and maybe even more importantly what can be done with the data that is being accessed,” said Sabhlok. The other critical applications for any business right now are going to be email, messaging, and video conferencing.


Looking at security, working remotely

The US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) is asking organizations to adopt a heightened state of cybersecurity during this time of increased remote work. Some considerations the CISA says organizations need to be aware of include:

• The more VPNs used, the more vulnerabilities that are found and targeted
• VPNs are less likely to be kept updated with the latest security updates and patches
• As more people start working from home, malicious cyber activity such as phishing emails increases
• Organizations without multi-factor authentication in place are more susceptible to phishing attacks
• Critical business operations may suffer if there are only a limited number of VPN connections available

The CISA also provided some recommendations for organizations enforcing remote work:

• Update VPNs, network infrastructure devices and devices being used for remote work with the latest software patches and security configurations
• Make sure employees understand there will be an increase in phishing attempts
• Prepare IT security professionals to ramp up remote access cybersecurity tasks with log review, attack detection, and incident response and recovery
• Implement multi-factor authentication on all VPN connections or require the use of stronger passwords
• Test VPN limitations to prepare for mass usage
• Contact CISA with any incidents, phishing attacks, malware or cybersecurity concerns

According to ManageEngine’s Sabhlok and Zscaler’s Lowe, what companies should also be doing during this time is writing down what could have been better, lessons learned for next time, and how your business continuity plans can be improved. “It is not a matter of if, it is a matter of when. This will happen again. Use this as a learning mechanism to be able to better position yourself in the future,” said Lowe.


This is going to allow the business to be able to communicate, according to Lowe. “Creating communication paths is critical to the success of any organization operating in a distributed fashion. Organizations must provide reliable, multi-channel virtual environments for collaboration,” said Perkins.

Access control is critical

Once you figure out what data and other critical services you need to protect, you need to start having a conversation on how people can access them. According to Lowe, 100% of the business can’t log in at the same time. Businesses have to start to tier their employees based on criticality. Who are the people that are most critical to making sure these services run and support the business? Then you provide them access at different times of the day depending on their tier. This is only a temporary solution, and businesses will have to figure out a midterm solution.



“Organizations should consider solutions that consolidate a portfolio of applications and services into a portal experience for users that is tailored to their application needs and provides secure single sign-on access to those applications. Often these solutions can contain internal, partner, and SaaS-based applications,” said Perkins.

The good news, according to ManageEngine’s Sabhlok, is that a lot of businesses are already leveraging modern technologies like cloud applications. These applications are going to be more secure than what the business would be able to provide because they are accessed through a secure protocol like HTTPS. However, there are other websites business users may need to access that are not as secure. You want to be able to configure browsers and lock them down in accordance with corporate policy. This can be done with a browser security tool, Sabhlok explained.

Another important part to keep in mind is licensing, which people tend to forget. You need to make sure you have enough licensing to support remote work, according to Lowe. “Solutions may not be designed or licensed to scale in the event of a rapid workforce shift,” Perkins added.

Reassessing your risk

Having people work from home also changes your risk tolerance and your risk posture based on how many people are working remotely. You have to change your security tools and methodologies to meet that new risk paradigm and risk tolerance, according to Lowe. Identity access management tools become critical because they allow businesses to change roles and privileges based on who needs access and where they are, according to Sabhlok.

Other tools Sabhlok says are going to become necessary are remote management technology, so IT teams can access devices and help if something goes wrong, and IT ops tools to monitor the network regularly and understand how it is going to react to and handle everyone. “These days when a lot of large companies are regulated through GDPR or CCPA you need to be able to monitor when PII is moving around. Endpoint management software, browser security, and identity access management tools can tell the IT organization when someone is accessing assets that they shouldn’t be, and provide an early warning about that,” he said.





Making open source work for you and your business

BY CHRISTINA CARDOZA


Open-source software continues to win over developers and enterprises. A recent report found that 92% of applications use open-source components, and open source is the de facto standard for software development. The report, which was conducted by managed open-source company Tidelift, found open source exceeds proprietary software in technology flexibility and extensibility, developer satisfaction, total cost of ownership, development speed, quality of code, security, functionality, and performance and stability. The only area in which open source did not outperform proprietary software was reliable support and consulting services, but it was a close fight, with 36% of respondents saying open source was better in this area and 38% saying proprietary software was better.

While open source seems to be dominating the industry, Dries Buytaert, the creator of the open-source project Drupal and founder/CTO of the SaaS company Acquia, believes the only other place open source hasn’t won yet is in creating a business model. “Successful open-source businesses are extremely rare. Figuring out these business models around open source is the last hurdle that prevents open source from taking over the world. It has already won with developers, but hasn’t won as a business model yet,” he said.


“Cracking the code would be really valuable because it allows us to solve problems that exist in the world that are very hard to do now.”

Open-source business models are ways companies try to create revenue around free and open-source software. Some models include providing support and services for projects, creating advertisement partnerships, adding paid additional features, and selling cloud-based software as a service.

Open source as a business model

According to Buytaert, the reason successful open-source businesses are so rare is that as an open-source project and its adoption start to scale, it becomes harder and more complex to maintain. Some of the ways the Drupal project deals with the growth of the community and project is by assigning roles and responsibilities as well as providing contributors and maintainers with the tools necessary to complete those roles. For instance, there is a security team assigned, which is given access to security tools to perform things like audits.

Donald Fischer, co-founder and CEO of Tidelift, said there are three things companies, or even individuals, who want to be successful using and commercializing open source should pay attention to: security, licensing and maintenance.

When it comes to security, open-source users need to be able to trust and verify who the source code is coming from as well as identify any security vulnerabilities. It’s also important to understand whose job it is to find security vulnerabilities within the project and how fast the project responds to those threats.

Licensing is also a complex topic in the open-source world, Fischer explained, and it requires specialty knowledge. Some things open-source users should understand are: what license policies make the most sense for them, what licenses the company uses and can use, and whether those licenses are compatible.

Lastly, maintenance and quality have become a big issue. In the old days, software came from vendors like Microsoft and Oracle who ensured certain standards and support. In today’s modern era where businesses and individuals are utilizing open source, not all projects have maintenance or support in place, Fischer explained. This is troubling because those looking to utilize open source want to make sure projects have longevity. It is important to look at how the software keeps working and evolves, as well as visibility into the actively maintained versions. Project owners should communicate and provide advanced notice when things retire so open-source users are not left stranded.



“The ability of businesses to move faster is dictated by their ability to maintain, comply and secure their systems,” said Kevin Wang, founder and CEO of FOSSA, an enterprise open-source management solution provider. “Just understanding what third-party software they depend on, and how they can strategically use that to improve their business is crucial. In order to be really fantastic modern software companies, you have to be really good at using open source.”

However, VM Brasseur, director of open-source strategy at networking and cybersecurity company Juniper Networks, warns against viewing open source as a business model.

According to Brasseur, open source is just one of many tools that help execute business models. She worries thinking about open source as a business model will make people think that the fundamental definition of open source needs to be changed in order to add a revenue stream.

Open-source software in the cloud era

Another concern open-source businesses have is how to transform business strategies as technology evolves. Towards the end of 2018, a new license sparked major controversy among the open-source community.


The Commons Clause was drafted to put “conditions” or “limitations” on open-source software. The controversy was that this was not an open-source license, and went against the definition of open source by adding restrictions to open-source software. “Through the past decade of open-source history, there has been this huge stigma generated around any attempt to license software in a not purely open source way. The purpose of the Commons Clause in the beginning was basically to give a super lightweight alternative,” said FOSSA’s Wang.



“It was this thing in between proprietary code and open-source code.”

At the time, Juniper’s Brasseur stated: “By restricting people from making money from a project where it is applied, the Commons Clause directly violates Item 6 in the Open Source Definition. As the Open Source Definition is no longer applicable to those projects, they—quite literally by definition—are no longer open source. At best they can be called ‘source available.’”

However, the Commons Clause was created to address a larger problem in the community, which was to close the “cloud loophole” where cloud providers were taking advantage of open-source projects without giving back to the community or giving credit to the project. “What has happened in the last five years is the rise of cloud computing, and in particular cloud computing providers who have made very successful businesses of taking successful open-source projects someone else invested most of the research and development in, and then harvested most of the revenue via cloud offerings,” said Ajay Kulkarni, CEO of Timescale, a time-series data company. “Essentially, what we realized was in order to be a successful open-source business in the cloud era, we had to think about things a little differently.”

Kulkarni went on to explain that the cloud era has changed the way software is consumed, and cloud providers like Amazon are now able to download open-source software and run it for users in the cloud at a price.

Following all the controversy surrounding the Commons Clause, many other open-source projects started to change their licenses. Timescale developed the Timescale License, which aims to prevent cloud and SaaS providers from hosting a database-as-a-service version of TimescaleDB, and OEMs who don’t provide value on top of the database. According to Timescale, a majority of its open-source software is still available under the Apache 2 license. “We did not make this decision lightly, and we kind of did it because we felt like we needed to, not because we wanted to. What we saw was the software world was moving faster than the licenses could keep up,” Kulkarni said. “We believe this decision really allowed us to build towards a self-sustaining open-source business where we can control our own destiny and keep reinvesting in the product.”

Database company MongoDB created the Server Side Public License (SSPL), and actually tried to go through the Open Source Initiative (OSI) to get it approved as an OSI-approved license. After realizing that the license was not going to get the broad support it needed to be approved, MongoDB withdrew the SSPL from the OSI-approval process, but continues to use it. “While it’s not OSI approved, MongoDB users are free to review our code, modify our code, distribute our code or redistribute modifications to our code in compliance with the license,” said Eliot Horowitz, co-founder of MongoDB.

Cockroach Labs, makers of CockroachDB, took a different approach, and adopted the Business Source License (BSL). With the BSL, source code is freely available and on the path to become open source at a certain point in time. “We think of it as patent protection. You can decide what protections you want and for how long, and what happens is when that term is up, what was formerly licensed as BSL becomes Apache in our case,” said Spencer Kimball, co-founder and CEO of Cockroach Labs. “The exclusion is you can’t run Cockroach as an external database as a service. You can think of that fundamentally as an anti-AWS provision.”

Other companies in this wave of license changes included Redis, which was one of the first to adopt the Commons Clause and also introduced the Redis Source Available License for Redis Modules; MariaDB, who also adopted BSL; and Confluent, who announced the Confluent Community License. Heather Meeker, the lawyer who drafted the Commons Clause, also created the Polyform Project, which aims to draft and make freely available plain-language source-code licenses with limited rights. “Until now, there has been no standardization of this kind of source-code license, even though it has become increasingly common. This has resulted in confusing and overlapping licenses, which need to be analyzed one at a time. Lack of standardization has used up the time and resources of many in the software industry, as well as their lawyers. The objective of the PolyForm Project is standardization and reduction of costs for developers and users,” the project’s website states.

Cockroach Labs’ Kimball hopes that one day a better way will be provided to protect open-source projects. “We are not lawyers. We didn’t want to create a license. This is not our business model, and it was a huge time suck and had a huge expense, but it is also not our job to try to convince the open source entities to make this huge shift.”

Drupal’s Buytaert suggests experimenting with licenses and creating new licenses that can help support the creation, growth and sustainability of new projects.
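The BSL works by filling in a few parameters on a shared license template, which is what makes the “becomes Apache when the term is up” behavior possible. An illustrative, non-verbatim example of those fields (the licensor and product names here are hypothetical):

Licensor:             Example Corp (hypothetical)
Licensed Work:        ExampleDB
Additional Use Grant: Any use, except offering the Licensed Work as a
                      commercial database-as-a-service
Change Date:          Three years after each version's release
Change License:       Apache License, Version 2.0

On the Change Date, the restrictions fall away and the Change License takes over for that version of the code.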



The top open source licenses

Open-source security and license compliance management platform provider WhiteSource has released a complete guide for understanding and learning about open source licenses. According to the guide, open-source licenses can be categorized under copyleft or permissive. Under a copyleft license, users who use a component of the open-source software must make their code available to others. Under a permissive license, the open-source software can be free to use, modify or redistribute, but it also permits proprietary derivative works.

The top open-source licenses, according to WhiteSource, are:
1. MIT
2. Apache 2.0
3. GPLv3
4. GPLv2
5. BSD 3
6. LGPLv2.1
7. BSD 2
8. Microsoft Public
9. Eclipse 1.0
10. BSD

In addition, the guide reveals permissive open-source licenses are on the rise. “This can be explained by the continuous rise in open-source usage. Open source has become mainstream, and the open source community is embraced and supported by the commercial software community,” the guide states. “With companies like Microsoft and Google standing behind some major open-source projects, the ‘Us’ vs. ‘Them’ mentality that ruled in the early days of open source is long gone. In the interest of this widespread cooperation, and encouraging open source usage, permissive licenses are winning.”

“The copyleft movement carried the interests of Open Source well, but pressure has grown recently due to it being either too restrictive or not restrictive enough in the eyes of creators. For those whose main motivation is seeing widespread use, permissive licenses work best even if that allows the possibility of being modified for use in closed source. Meanwhile for those who have an ideological motivation such as preventing the use of their code in weapons, copyleft is not restrictive enough because it forbids that type of discrimination,” said Rhys Arkins, director of product management at open source security and license company WhiteSource. “Finally, you see creators of open source who want to make their software free except for usually a very narrow concept of direct commercial competition - this is again something not supported by traditional licenses. The latter two use cases chip away at the dominance of not just copyleft but also permissive licenses too.”

Licenses should encourage sharing, but discourage unfair competition, he explained. “A lot of the open-source licenses we use today are 20 years old, and I think it is a little naive to think something that worked 20 years ago is still perfect today,” said Buytaert. “New licenses are worth exploring. It can be game-changing and provide a breakthrough for how we think of sustaining open source.”

Tidelift’s Fischer has other thoughts. “We think the bigger opportunity is around creating some net new value around the open-source software without putting additional restrictions around the use of open-source software or creating a license and debating about the open-source definition. All these things fly over the heads of most organizations trying to use this stuff. Let’s go over this opportunity to create new value that didn’t exist in the world before,” he said. “A great example of that is if there is some open-source code that folks are using but there hasn’t been commercial support or maintenance available for it, let’s start making that available for that software and if it is valuable, organizations will come and pay for it.”


Open source sustainability

According to Juniper’s Brasseur, there are much bigger problems in the open-source world that we should be worried about. While it is great that businesses are taking an interest in open source, a more important issue is being able to sustain open source for years to come. Brasseur explained many companies and individuals will often just donate money to a project in order to help it grow because “throwing money at the problem is easy for people to do,” she explained. “We are conditioned to equate money with stability.” The problem with this is that no one follows up on those donations or sees how they were used. Project owners also don’t understand what to do with the money. While Brasseur does understand it is important to do things like pay and support maintainers, sustainability needs to be more than just money. “If your maintainer or core contributors ran away to join the circus, how many of those would it take to put your project in a bad position?” she said.

“That is something we need to be focusing on for sustainability a lot more than we need to be focusing on just getting money into the hands of contributors.”

She suggested taking a look at the book “Our Common Future,” also known as the Brundtland Report, which examines corporate sustainability planning, and how the corporate world grows an economy while growing and sustaining the environment. What the book does is define what sustainability is, which can be applied to the open-source world. According to the book, sustainability is “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” The book also identifies key areas for successful sustainability and how each one is an interlocking crisis that needs to be addressed simultaneously.

According to Brasseur, the benefits of having a corporate sustainability plan include a more reliable supply chain, collaboration between groups internally and externally, improved communication, increased innovation, and improved employee retention and recruiting.



“Free and open-source software needs to follow the open-source way, and build on the contribution from those who came before us,” she said in a keynote at PyCon Australia last year. For open source, she explained, the three elements of sustainability planning are: contributing back, human environmental diversity, and community safety.

Contributing back refers to giving back to the open-source project or community, whether that be in the form of time, talent or treasure. “Since the very beginning of free and open-source software we have had people and organizations who use free and open source but don’t contribute back. We call them ‘free riders,’” she said in her keynote. “We use it in a very negative way and we dismiss them. They are no good. These organizations may not understand that what they are doing... is degrading the longevity and success of the free and open source software that they rely on.”

It is important to note that contributions don’t have to come in the form of code. Some ways to contribute time are by doing things like volunteering at events or helping to organize events; through talent by doing things like security audits, redesigns, or improving accessibility; or through treasure by donating money. In addition, there are many different roles that go into a project, such as documentarians, designers, security, infrastructure, testing and marketing, but too often open-source guides are focused just on programmers. “It is fine as a developer to scratch your own itch and release it, but if you want your software to be usable and adopted, you need to bring people with other expertise,” she said.

“Events around open-source projects, documentation, marketing and legal advice, these are all things that go into making a project successful,” added Drupal’s Buytaert. “Having a lens that is more than developer-centric is really important.”

is really important.” Human and environmental diversity involves getting more and varied people involved in the community. This will help provide more resources, innovation and stability, because the more people involved, the less you have to worry about the bus or circus factor. In addition, diversity does not only mean gender, but can be geographic and language diversity. Allowing people from different parts of the world who speak different languages can open up the door to millions of new contributors in those areas, Brasseur explained. And then, open-source communities can cultivate that diverse contributor base by making sure they feel safe to contribute. “As an open-source participant, you have the power. You are in a position to witness unprofessional and unwelcoming behavior and take action,” said Brasseur. For individuals, a way to ensure

community safety is to restrict any contributions to projects that don’t have a code of conduct. Project owners and maintainers should make necessary steps to enforce the code of conduct. According to Tidelift’s Fischer, there has also been a rise of ethical licenses to promote community safety. For instance, the Hippocratic License 2.0 was just released, which follows the Hippocratic Oath in medicine, which implies first do no harm; however, Fischer notes it may be hard to get these licenses noticed by contributors. “People are trying to have their work used in a context that they endorse from a moral or ethical standpoint, but it is really complicated to figure out how to achieve that without having

unintended second-order consequences,” he said. “We are trying to figure out what those unintended consequences are, and how it works in practice. It is still a work in progress.” Drupal’s Buytaert believes projects need not be afraid to innovate. For instance, when the project started 19 years ago, technologies like mobile and social media didn’t exist. Projects have to be able to ride different innovation waves to stay relevant. The Drupal project tracks all code and non-code contributions to the project and gives contributors credits or points. Those points are then stacked up and ranked so others can see who participates the most. “The Drupal website gets about 2 million unique visitors a month, which is a crazy number for an open-source community website. Not only do you get leads from potential customers, but also it speaks to the expertise of the organization or individuals.” There is also an ongoing trend where companies are acquiring open-source projects instead of starting them. According to Rhys Arkins, director of product management at open source security and license company WhiteSource, this can be a positive trend that promotes more open-source projects in the future. “The best open source is usually that which first comes out of an internal or personal need first, so if starting open-source projects was too intimidating or mostly for large companies alone, we'd miss out on a lot of innovation compared to one where small projects can flourish,” he said. To successfully flourish under corporate stewardship, Arkins recommended company interest aligns with community goals and directs. “If there is any direct conflict between the company's intended business model (e.g. limiting features in the open source and selling advanced features commercially licensed) then it's unlikely to end well. If on the other hand, even long-term open-source use of the project is still of benefit to the company, then it reduces the chances of conflict and increases the likelihood of a win-win situation,” he added. z



INDUSTRY SPOTLIGHT

HCL Software: Beyond Global IT Services
New software division moves firm into new markets

There has been a lot of curiosity among our subscribers about HCL Software, ever since the news of HCL’s acquisition of several product portfolios from IBM closed on July 1, 2019. We recently had the opportunity to sit down with Darren Oberst, the head of HCL Software, and get answers to some of the questions that we have been hearing.

Darren Oberst, head of HCL Software

As a nearly $10B global IT services powerhouse, why has HCL now moved into the software industry?

Over HCL’s more than 40-year history, we have expanded our offerings and re-invented our capabilities on several occasions. As an example, 15 years ago, we first launched our global remote infrastructure monitoring offerings, and began to compete against the global IT service giants. Many people thought that we were crazy — but we built a strong playbook, differentiated value proposition, found creative ways to innovate in terms of technology and process, and built a multi-billion-dollar business from scratch that is today a global leader. As we look at the software landscape, we see a similar opportunity. There are very few examples of services/consulting companies successfully moving into software.

Why should customers believe that they can bet on you as a core strategic software provider?

First and foremost, our mantra is customer success. One of HCL’s core values is “relationship beyond the contract.” As a services company, every day, we have to face our customer, create value, and help our customer to work through problems. We understand that the pivotal moment in our relationship with a customer is not when they sign the contract and buy the software, but when they realize successful deployments — and continue to see value through meaningful high-impact product enhancements, expanded use cases, and new projects.

Second, we are approaching the software market with humility, and a mindset that we need to earn the trust of our customers and the larger market. We have greatly expanded our engineering and support teams for all of our products: AppScan, BigFix, Commerce, Connections, Digital Experience, Notes Domino and Unica, to accelerate velocity. We’ve challenged our product managers to bring high-impact “wow” features into every release, oftentimes in close consultation with leading customers and partners. We ask our customers to give us the opportunity to prove out that value every quarter.

Third, this will not happen overnight. We are in this for the long term. Most of our largest customers and partners are relationships that we measure in decades — and we take pride in the trust and transparency of those relationships. We aspire to be one of the best software companies in the world — driven by the quality and innovation of our products, realized in the value we deliver to our customers, and measured by leadership and growth across our key market segments.

We have heard some concerns from partners that HCL services may look to compete with them. How do you see partners fitting into your strategy?

In acquiring these products from IBM, we have been fortunate to also gain an extraordinary partner ecosystem. Our intention is to give those partners new and more offerings so that they can continue to grow their practices around these products.

What segments are key to your growth going forward?

We are a multi-product line business, focused on building market-leading capabilities in four major solution categories — Client Experience, Digital Solutions, DevSecOps, and Automation/Security.

How would you describe your technology and innovation strategy?

Our technical strategy can be summarized in three main points — Cloud Native, API First, and “trust and security in everything we do.” Cloud Native and API First are the foundation of our architectural modernization strategy. The principle of “trust and security in everything we do” is a wider mandate — and we see these issues as fundamental to the way that we build, test, and deliver our products, to the way that we establish trust with customers and handle their data, and to the features, capabilities and certifications that we apply in our product roadmaps.

Content provided by SD Times and HCL.





DEVOPS WATCH

DevOps requires a human transformation
Report says it’s critical for companies to upskill workers for digital world
BY CHRISTINA CARDOZA

While most of the industry is undergoing a digital transformation, the CEO of the DevOps Institute, Jayne Groll, stresses the need for a human transformation. According to Groll, DevOps initiatives are focusing too much energy on technology and not enough on skills.

The DevOps Institute released the Upskilling 2020: Enterprise DevOps Skills Report to find the most in-demand skills needed for DevOps. The data was based on more than 1,200 respondents.

“Human transformation is the single most critical success factor to enable DevOps practices and patterns for enterprise IT organizations,” said Groll. “Traditional upskilling and talent development approaches won’t be enough for enterprises to remain competitive, because the increasing demand for IT professionals with core human skills is escalating to a point that business leaders have not yet seen in their lifetime. We must update our humans through new skill sets as often, and with the same focus, as our technology.”

According to the report, more than 50% of respondents are having trouble on their DevOps transformation journeys, and 58% cited finding skilled DevOps individuals as a challenge. Another 48% find it difficult to retain skilled DevOps professionals.

“The DevOps human and the associated skills play a huge role in enabling an organization and its culture towards agile innovation, cross-functional collaboration and risk-taking to support digital operating models such as DevOps,” the report stated. “The fight for talent is not new, as hiring managers are nervous about a talent gap in their teams relative to human, functional, technical and process skills and knowledge. Individuals in current positions are eager to update their skills. New job entrants need to know how to compete with skills and talents for today’s and future opportunities.”

The institute found that the top skills necessary to create a “DevOps human” are process skills and knowledge, automation skills, and human skills.

In addition, the DevOps Institute found that not enough business leaders are focused on upskilling talent. More than 38% of respondents’ organizations don’t have an upskilling program, 21% are working toward one, and 7% don’t even know if one is available to them. Thirty-one percent found their company is already implementing a formal upskilling program.

As part of the report, the DevOps Institute is also introducing the “e-shaped” human of DevOps. Last year, the 2019 skills report focused on “t-shaped” humans, specialists who have disciplinary depth in one area, such as the cloud, but have the ability to reach out to other disciplines. “T-shaped individuals supplement their depth of specific knowledge (the deep stem of the T) with a wide range of general knowledge (the general top of the T). The need for T-shaped talent is being driven by the increasing requirement for speed, agility and quality software from the business,” the 2019 report stated.

This year’s report highlighted the need to evolve “t-shaped” humans into “e-shaped” humans, defined by four Es: experience, expertise, exploration and execution. Additionally, there are horizontal and vertical skills an “e-shaped” DevOps human must possess. The horizontal skills include automation, functional, knowledge and technical skills, while the vertical skill set includes flow, understanding of practices such as Scrum and value stream mapping, and human skills like collaboration and interpersonal skills.

“The time is now to upskill your DevOps teams and individuals; however, this must be done across more than technical and functional skills,” said Eveline Oehrlich, research director at the institute. “We already saw a significant demand for a variety of human must-have skills in our 2019 research, and this year we saw a tremendous increase across all human skills, e.g. collaboration, interpersonal skills, empathy and creativity, to name a few. The most important, though, is the increase in value placed on the human skills, which comes from the management and business leaders in our survey. Our research shows that a mindset shift is happening with the transition from ‘soft’ skills to ‘human’ skills, but more importantly, today’s leaders must change their mindset to recognize the value human skills will bring to a team and organization.”


iPaaS adoption growing to handle integrations in cloud architectures

BY JAKUB LEWKOWICZ

Integration used to be a lengthy, complicated process — a process that simply would not keep up with companies that are within some stage of their digital transformation. This demand for speed and interconnectivity prompted the growth of full life cycle iPaaS solutions — otherwise known as integration platform as a service — that provide full life-cycle support for the trove of applications that enterprises are trying to connect. iPaaS is a suite of cloud services that enables the development, execution, and governance of integration flows connecting any combination of on-premises and cloud-based processes, services, applications, and data, within an individual organization or across multiple organizations.

“Before Salesforce, organizations did localized integration across applications. And so as organizations started adopting cloud applications, SaaS apps, they had that same need to connect, and they were looking for integration-related technology that could handle that interconnectivity across cloud,” said Maureen Fleming, program vice president for IDC’s Business Process Management and Research Area. “From there, iPaaS vendors started expanding as customers had new needs.”

Data and processes were spread across multiple applications, obstructing a full view of the business side of things. Previously, companies had used middleware to link software within their enterprises, but the move to cloud has created a new issue. “While these same organizations have spent the last 15 years integrating their enterprise applications to break down silos of information, they are now seeing a renewed problem of ‘cloud silos’ and facing the dark side of SaaS integration,” MuleSoft wrote on its website. “With little to no barrier of entry in adopting SaaS, companies are deploying numerous SaaS applications without IT involvement, resulting in hundreds of applications and services in the ecosystem, all siloed off and unable to communicate seamlessly with one another.”

Now, iPaaS solutions offer fast integration times, a subscription payment model, and multi-tenancy. iPaaS vendors also take care of deployment, management, troubleshooting, and maintenance of the platform.

iPaaS adoption is accelerating due to the increased traction of cloud computing applications and the extensive need for efficient processes for developing and managing enterprise applications, according to MarketWatch. According to Market Research Future analysis, the global iPaaS market is estimated to generate revenue of approximately $2 billion by 2023, expanding at a compound annual growth rate of 22% between 2017 and 2023. Also, the expansion of IoT devices and AI is pushing a greater need for data governance, management, and connectivity, making iPaaS companies crucial for the next stage of innovation.

“So looking forward, I think integration is a part of the architecture of modern systems. You can’t really have a good transformation strategy or a good major adoption of a cloud without having to consider integration as a core factor of success,” Fleming said.

What iPaaS solutions provide

The capabilities of iPaaS solutions include syncing customer records between two different cloud-based applications, sending orders from cloud-based software to an on-site application, or extracting sales data from an application onto an on-site data warehouse. Many iPaaS solutions have grown to include API management, API gateway software, electronic data interchange, and extract, transform, load (ETL) operations, and they are constantly looking to add different tiers or layers of capability. iPaaS solutions are constantly expanding their feature sets to meet growing consumer demand, and many offer end-to-end support and capabilities for integrating and creating cloud-native APIs.
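To make the first of those capabilities concrete, the sketch below shows what a record-sync flow does under the hood. It is a hand-rolled illustration, not any vendor's product; the endpoints and field names are hypothetical. An iPaaS essentially lets teams assemble this extract-transform-load pattern visually instead of coding it by hand:

```typescript
// Minimal record-sync flow: pull recently changed contacts from a
// source CRM API, normalize field names, and upsert them into a
// target system. Endpoints and field names are hypothetical.

interface SourceContact { Id: string; Email: string; LastModifiedDate: string; }
interface TargetContact { externalId: string; email: string; }

const SOURCE = "https://source.example.com/api/contacts";
const TARGET = "https://target.example.com/api/contacts/upsert";

async function syncContacts(since: string): Promise<number> {
  // 1. Extract: fetch only records changed since the last sync run.
  const res = await fetch(`${SOURCE}?modifiedSince=${encodeURIComponent(since)}`);
  if (!res.ok) throw new Error(`source fetch failed: ${res.status}`);
  const changed: SourceContact[] = await res.json();

  // 2. Transform: map source field names onto the target's schema.
  const payload: TargetContact[] = changed.map((c) => ({
    externalId: c.Id,
    email: c.Email.toLowerCase(),
  }));

  // 3. Load: upsert in one batch; the target keys on externalId.
  const put = await fetch(TARGET, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!put.ok) throw new Error(`target upsert failed: ${put.status}`);
  return payload.length;
}

// Example: sync everything changed since the start of the month.
// syncContacts("2020-04-01T00:00:00Z").then((n) => console.log(`synced ${n} records`));
```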

“It’s an application development platform. I think we look at integration as just another type of application that one needs to build, but with certain special characteristics,” said Uri Sarid, CTO at MuleSoft. “You should think about integration as not just this mysterious thing that only integration developers really should do, but rather as another capability in the toolset of developers to develop a kind of application that’s different than the code applications they’ve done in the past.”

MuleSoft’s Anypoint Platform was created to help users build robust application networks by turning their applications into a set of composable capabilities that users then compose and generate business value out of. From there, the pace of deriving business value keeps accelerating, according to Sarid. He added that speed is the primary thing businesses are looking for, followed by the need to reduce costs, which goes hand-in-hand with agility. “The role of IT will shift from being one that has to implement everything, and therefore has to have a budget for it, to one that enables the rest of the business to innovate on its own. I think it’s that partnership between IT and the rest of the business that leads to lower costs and more rapid speeds,” Sarid said.

To see if an iPaaS solution will offer the best value, businesses need to see if the connectivity they are looking for requires new logic. If that’s the case, then there need to be developers writing new things. More often than not, though, the logic already exists somewhere and businesses need to attach to those systems, orchestrate, do some transformation and then connect it. This approach has users building a small number of composite applications that connect to other systems, and in the course of doing that they end up exposing and leaving behind some APIs, Sarid explained.

“Now you have exposed capabilities and data that you can use later. That means that the next project will go even faster than that and so on. That’s how in the end, businesses say ‘I’m getting great speed this year, but the following year, I’m getting even better speed,’ and that motivates them to invest more,” Sarid said. “Obviously that means more and more capabilities get unlocked and we find ourselves not just with faster-moving enterprises but with much healthier ones.”

Jitterbit is an iPaaS that offers the ability to integrate many applications, with pre-built templates and workflows to automate business processes. This low-code approach improves the two core tenets of an iPaaS platform: speed and simplicity of integration. It also broadens the scope of professionals who can get involved in integrations, since it no longer requires heavy developer involvement.

“User experience is of foremost importance. You just drag and drop your connectivities, so you know your source, you know your target, and then you can establish authentication to these systems,” said Shekar Harihan, vice president of marketing at Jitterbit. “What we are seeing now is, OK, we have the API integration, business integration, data integration, so what people are asking for now is to help synchronize these processes.”

When setting up the platform, users have the choice to do application integration and data integration by the API, or if they’re jumpstarting their application, they can use all of the prebuilt templates to help speed up the time to deploy, with everything seamlessly tied together. With the templates you can literally set up within days, according to Harihan.
The solution can be used by companies regardless of their size. Enterprise companies primarily have very complex use cases with hundreds of integration projects, while smaller companies might have just dozens of projects. “Creating APIs, that’s the new currency today of data; it’s all measured in APIs,” Jitterbit’s Harihan said.

Meanwhile, Oracle — another low-code competitor — has an iPaaS solution called Oracle Integration Cloud (OIC). “One of the top things customers are asking for is more out-of-the-box solutions. This is exciting because Oracle is one of the few companies that has an infrastructure business, but it also has the broadest and deepest of application portfolios,” said Suhas Uliyar, the VP of product management of digital assistants and integration. “In the past, businesses that wanted to provide integrations for digital experiences such as mobile, or chatbots, or blockchain, or IoT, feared that integration would take six months or more.”

Oracle’s cloud provides a bridge where users can connect an existing SOA to an iPaaS, which is the OIC, in a way that they can discover and reuse their artifacts of the Oracle SOA Suite from OIC and vice-versa. “Let’s say your center of gravity is on-prem and you’re only doing design on-premise, and let’s say you want to connect to a CRM system in the cloud, whether it’s Salesforce or Oracle CRM. Basically, you can use the iPaaS to become your cloud integration and you can discover all of your artifacts for the cloud via SOA and vice-versa,” Uliyar said.

SAP is another large iPaaS solution, offering semantic integration for a business-centric model, integration packs that contain over 1,400 integrations, and the ability to work with different clouds. The platform itself is completely managed, so users don’t have to install anything. “Think of us as the Netflix of integrations,” said Harsh Jegadeesan, vice president of product management of the Hybrid Integration Platform. “It’s become very strategic because it allows you to connect to multiple applications.”

When it comes to choosing a vendor, different iPaaS solutions work best with particular use cases, so it’s best to see what value the company wants to derive from it first, according to Gartner. In a report titled “Critical Capabilities for Enterprise Integration Platform as a Service,” Gartner found that the offerings available in the market exhibit significantly different degrees of strength across an increasing range of functionality — from protocol connectivity, communication and construction of integration workflows, to new expectations for policy enforcement and community management.

“This market is booming and you can imagine by seeing the number of startup companies coming around,” Jitterbit’s Harihan said. “There is a need to have integration as part of everybody’s stack. When you have applications you have to have integration.” According to Harihan, iPaaS is among the top fastest-growing tech markets, with adoption happening across businesses of all sizes and all industries.

“I think as we start to see that you’re building applications in an iPaaS, and another company is building their applications in an iPaaS, and both of them likely connect to a lot of common systems, what we’re starting to see is the application networks that they’re building in individual companies are becoming really a global application network,” MuleSoft’s Sarid said. “There’s a tremendous amount of power in regarding it as a single global application network, much like we regard the entire internet instead of different networks. The patterns that an airline uses to connect to a car rental company, those should be relatively universal, so if we can get lots of airlines and lots of car companies to interact in these standard ways, we’ll see that that whole industry gets going a lot faster, and that’s actually good for all of them because consumer expectations are rising for the industry.”

He added that iPaaS is currently in the mass-adoption stage in terms of demand, but not yet at the stage in which everyone knows this is the right way to build things.

“In the last four or five years, customers are somewhere in between their on-prem deployment of applications and cloud deployments, so we see a lot of hybrid use cases where customers have ERP on premise but have their CRM in the cloud. There are certain homegrown applications on prem, but some of the new cloud-native applications are running on multiple clouds, so what we’ve seen is an incredible increase in the complexity of architectures,” Oracle’s Uliyar said. “The iPaaS market really came into play to enable these hybrid integrations and another major move to simplicity.”

AI features are a growing demand in iPaaS

Vendors that offer integration are more frequently using AI to improve the usability of the studio environment of integration, and to improve the overall ease of use and manageability of integration. IDC’s Fleming predicted that the trend is going to continue.

“Some organizations are looking at AI as something that you call as a service, and other people are looking at machine learning as something you use to make a prediction. And in many cases, you’re making a prediction in real time about something,” Fleming said. “And the more it’s that latter use case, the more it’s being embedded into integration-related assets.”

For example, Jitterbit’s platform uses AI to offer real-time language translation, speech recognition and product upsell recommendations to make better decisions. Oracle’s Uliyar said that machine learning can also offer self-healing capabilities that provide suggestions on how to automatically fix a wrongly built integration. Other use cases include integration insight to see how well integrations are working, as well as where the bottlenecks are.

When it comes to machine learning in MuleSoft’s platform, the technology is used to figure out how to map one type of data to the other and help the developer by suggesting the mappings. “But the question is, what is it that you train those machines on, and I think that gets to the heart of what we’re really seeing a lot of demand for, and that is the declarative nature of how you compose applications and APIs and so on,” MuleSoft’s Sarid said. “It started from APIs and the importance of API specs, making it much, much easier for developers to connect to systems, but it’s also easier for machine learning to look at how people are connecting to things, and starting to build patterns on top of it.”

SAP’s Jegadeesan even said that the next frontier for iPaaS is in applying AI and ML. SAP’s iPaaS contains an integration advisor to predict automatic integrations.


Application Performance Monitoring: What it means in today’s complex software world

Monitoring: the first of three parts

BY DAVID RUBINSTEIN

Software continues to grow as the driver of today’s global economy, and how a company’s applications perform is critical to retaining customer loyalty and business. People now demand instant gratification and will not tolerate latency — not even a little bit.

As a result, application performance monitoring is perhaps more important than ever to companies looking to remain competitive in this digital economy. But today’s APM doesn’t look much like the APM of a decade ago. Performance monitoring then was more about the application itself, and very specific to the data tied to that application. Back then, applications ran in on-premises data centers and were written as monoliths, largely in Java, tied to a single database. With that simple n-tier architecture, organizations were able to easily collect all the data they needed, which was then displayed in network operations centers to systems administrators. The hard work came from command-line launching of monitoring tools — requiring systems administration experts — sifting through log files to see what was real and what was a false alarm, and from reaching the right people to remediate the problem.

In today’s world, doing APM efficiently is a much greater challenge. Applications are cobbled together, not written as monoliths. Some of those components might be running on-premises, while others are likely to be cloud services, written as microservices and running in containers. Data is coming from the application, from containers, Kubernetes, service meshes, mobile and edge devices, APIs and more. The complexities of modern software architectures broaden the definition of what it means to do performance monitoring.

“APM solutions have adapted and adjusted greatly over the last 10 years. You wouldn’t recognize them at all from what they were when this market was first defined,” said Charley Rich, a research director at Gartner and lead author of the APM Magic Quadrant, as well as the lead author on Gartner’s AIOps market guide.


So, although APM is a mature practice, organizations are having to look beyond the application — to multiple clouds and data sources, to the network, to the IT infrastructure — to get the big picture of what’s going on with their applications. And we’re hearing talk of automation, machine learning and being proactive about problem remediation, rather than being reactive.

“APM, a few years ago, started expanding broadly both downstream and upstream to incorporate infrastructure monitoring into the products,” Rich said. “Many times, there’s a problem on a server, or a VM, or a container, and that’s the root cause of the problem. If you don’t have that infrastructure data, you can only infer.”

Rekha Singhal, the software-computing systems research area head at Tata Consultancy Services, sees two major monitoring challenges presented by modern software architectures. First, she said, is multi-layered distributed deployment using Big Data technologies, such as Kafka, Hadoop and HDFS. The second is that modern software, also called Software 2.0, is a mix of traditional task-driven programs and data-driven machine learning models.

“The distributed deployment brings additional performance monitoring challenges due to cascaded failures, staggered processes and global clock synchronization for correlating events across the cluster,” she explained. “Further, a Software 2.0 architecture may need a tightly integrated pipeline from development to production to ensure good accuracy for data-driven models. Performance definitions for Software 2.0 architectures are extended to both system performance and model performance.”

Moreover, she added, modern applications are largely deployed on heterogeneous architectures, including CPU, GPU, FPGA and ASICs. “We still do not have mechanisms to monitor performance of these hardware accelerators and the applications executing on them,” she noted.

Gartner’s 3 requirements for APM

APM, as Gartner defines it in its Magic Quadrant criteria, is based on three broad sets of capabilities, and in order to be considered an APM vendor by Gartner, you have to have all three. Charley Rich, Gartner research director and lead author of its APM Magic Quadrant, explained:

The first is digital experience monitoring (DXM). That, Rich said, is “the ability to do real user monitoring, injecting JavaScript in a browser, and synthetic transactions — the recording of those playbacks from different geographical points of presence.” This is critical for the last mile of a transaction and allows you to isolate and use analytics to figure out what’s normal and what is not, and understand the impact of latency. But, he cautioned, you can’t get to the root cause of issues with DXM alone, because it’s just the last mile. Digital experience monitoring as defined by Gartner is to capture the UX latency errors — the spinner or hourglass you see on a mobile app, where it’s just waiting and nothing happens — and find out why. Rich said this is done through real user monitoring — for web apps, that means injecting JavaScript into the browser to break down the load times of everything on your page as well as background calls. It also requires the ability to capture screenshots automatically, and to capture entire user sessions. This, he said, “can get a movie of your interactions, so when they’re doing problem resolution, not only do they have the log data, actual data from what you said when a ticket was opened, and other performance metrics, but they can see what you saw, and play it back in slow motion, which often provides clues you don’t know.”

The second component of a Gartner-defined APM solution is application discovery, diagnostics and tracing. This is the technology to deploy agents out to the different applications, VMs, containers, and the like. With this, Rich said, you can “discover all the applications, profile all their usage, all of their connections, and then stitch that together to what we learn from digital experience to represent the end-to-end transaction, with all of the points of latency and bottlenecks and errors, so we understand the entire thing from the web browser all the way through application servers, middleware and databases.”

The final component is analytics. Using AI, machine-learning analytics applied to application performance monitoring solutions can do event correlation, reduce false alarms, do anomaly detection to find outliers, and then do root cause analysis driven by algorithms and graph analysis.

— David Rubinstein
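As a rough illustration of the browser-side instrumentation Rich describes, the sketch below uses the standard Navigation Timing and PerformanceObserver browser APIs to collect load-time breakdowns and long main-thread tasks. The /rum/collect endpoint is hypothetical, and commercial agents capture far more (sessions, screenshots, errors):

```typescript
// Minimal browser RUM sketch: measure page-load phases with the
// Navigation Timing API, watch long tasks, and beacon the data
// to a (hypothetical) collector endpoint.

function collectNavigationTiming(): Record<string, number> {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return {};
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    tcp: nav.connectEnd - nav.connectStart,
    ttfb: nav.responseStart - nav.requestStart,
    domComplete: nav.domComplete,
    loadEvent: nav.loadEventEnd,
  };
}

// Report long main-thread tasks (over 50 ms) as they happen.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    navigator.sendBeacon("/rum/collect", JSON.stringify({
      type: "longtask",
      duration: task.duration,
      start: task.startTime,
    }));
  }
}).observe({ entryTypes: ["longtask"] });

// Send the load-time breakdown once the page has fully loaded.
window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd is populated.
  setTimeout(() => {
    navigator.sendBeacon("/rum/collect", JSON.stringify({
      type: "navigation",
      metrics: collectNavigationTiming(),
    }));
  }, 0);
});
```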

The new culture of APM

Despite these mechanisms for total monitoring not being available, companies today need to compete to be more responsive to customer needs. And to do so, they have to be proactive. “We’re moving from a culture of responding ‘our hair’s on fire’ to being proactive,” said Joe Butson, co-founder of consulting company Big Deal Digital. “We have a lot more data … and we have to get that information into some sort of a visualization tool. And we have to prioritize what we’re watching. What this has done is change the culture of the people looking at this information, trying to monitor, and trying to move from a reactive to a proactive mode.”

In earlier days of APM, when things in an application slowed or broke, people would get paged.



Butson said, “It’s fine if it happens from 9 to 5, you have lots of people in the office. But then some poor person’s got the pager that night, and that just didn’t work,” because of what it meant for the MTTR — mean time to recovery. Depending upon when the event occurred, it took a long time to recover. In a very digitized world, if you’re down, it makes it into the press, so you have a lot of risk from an organizational perspective, and there’s reputation risk.

High-performing companies are looking at data and anticipating what could happen. And that’s a really big change, Butson said. “Organizations that do this well are winning in the marketplace.”
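What “anticipating what could happen” often means in practice is flagging telemetry outliers before users complain. The toy detector below, a simple rolling z-score rather than any vendor's algorithm, illustrates the idea on a latency stream:

```typescript
// Toy anomaly detector: flag latency samples that deviate more than
// `threshold` standard deviations from a rolling window's mean.

class RollingAnomalyDetector {
  private window: number[] = [];

  constructor(private size = 100, private threshold = 3) {}

  // Returns true if the sample is an outlier vs. recent history.
  observe(latencyMs: number): boolean {
    const n = this.window.length;
    let anomalous = false;
    if (n >= 10) { // need some history before judging
      const mean = this.window.reduce((a, b) => a + b, 0) / n;
      const variance = this.window.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
      const std = Math.sqrt(variance) || 1; // avoid divide-by-zero
      anomalous = Math.abs(latencyMs - mean) / std > this.threshold;
    }
    this.window.push(latencyMs);
    if (this.window.length > this.size) this.window.shift();
    return anomalous;
  }
}

// Usage: page the on-call only when outliers appear.
const detector = new RollingAnomalyDetector(200, 3);
for (const sample of [120, 131, 118, 125, 129, 122, 127, 119, 124, 126, 580]) {
  if (detector.observe(sample)) console.log(`anomaly: ${sample} ms`);
}
```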

Whose job is it, anyway?

With all of this data being generated and collected, more people in more parts of the enterprise need access to this information. “I think the big thing is, 10-15 years ago, there were a lot of app support teams doing monitoring, I&O teams, who were very relegated to this task,” said Stephen Elliot, program vice president for I&O at research firm IDC. “You know, ‘identify the problem, go solve it.’ Then the war rooms were created. Now, with agile and DevOps, we have [site reliability engineers], we have DevOps engineers; there is a broader set of people that might own the responsibility, or have to be part of the broader process discussion.”

And that’s a cultural change. “In the NOCs, we would have had operations engineers and sys admins looking at things,” Butson said. “We’re moving across the silos and have the development people and their managers looking at refined views, because they can’t consume it all.” It’s up to each segment of the organization looking at data to prioritize what they’re looking at.

“The dev world comes at it a little differently than the operations people,” Butson continued. “Operations people are looking for stability. The development people really care about speed. And now that you’re bringing security people into it, they look at their own things in their own way. When you’re talking about operations and engineering and the business people getting together, that’s not a natural thing, but it’s far better to have the end-to-end shared vision than to have silos. You want to have a shared understanding. You want people working together in a cross-functional way.”

Enterprises are thinking through the question of who owns responsibility for the performance and availability of a service. According to IDC’s Elliot, there is a modern approach to performance and availability. He said at modern companies, the thinking is, “‘We’ve got a DevOps team, and when they write the service, they own the service; they have full end-to-end responsibilities, including security, performance and availability.’ That’s a modern, advanced way to think.”

In the vast majority of companies, ownership of performance and availability lies with particular groups having different responsibilities. This can be based on the enterprise’s organizational structure, and the skills and maturity level of each team. For instance, an infrastructure and operations group might own performance tuning. Elliot said, “We’ve talked to clients who have a cloud COE that actually has responsibility for that particular cloud. While they may be using utilities from a cloud provider, like AWS CloudWatch or CloudTrail, they also have the idea that they have to not only trust their data but then they have to validate it. They might have an additional observability tool to help validate the performance they’re expecting from that public cloud provider.”

In those modern organizations, site reliability engineers (SREs) often have that responsibility. But again, Elliot stressed skill sets. “When we talk to customers about an SRE, it’s really dependent on, where did these folks come from?” he said. “Were they reallocated internally? Are they a combination of skills from ops and dev and business?


Typically, these folks reside more along the lines of IT operations teams, and generally they have operating history with performance management, change management, and monitoring. They also start thinking: Are these the right tasks for these folks to own? Do they have the skills to execute it properly?”

Organizations also have to balance that out with the notion of applying development practices to traditional I&O principles, and bringing a software engineering mindset to systems admin disciplines. And, according to Elliot, “It’s a hard transition.”

Compound all that with the growing complexity of applications running in the cloud as containerized microservices, managed by Kubernetes using, say, an Istio service mesh in a multicloud environment. TCS’ Singhal explained that containers are not permanent, and microservices deployments have shorter execution times. Therefore, any instrumentation in these types of deployments could affect the guarantee of application performance, she said. As for functions as a service, which are stateless, application states need to be maintained explicitly for performance analysis, she continued.

It is these changes in software architectures and infrastructure that are forcing organizations to rethink how they approach performance monitoring, from a culture standpoint and from a tooling standpoint. APM vendors are adding capabilities to do infrastructure monitoring, which encompasses server monitoring, some amount of log file analysis, and some amount of network performance monitoring, Gartner’s Rich said. Others are adding or have added capabilities to map out business processes and relate the milestones in a business process to what the APM solution is monitoring.

“All the data’s there,” Rich said. “It’s in the payloads, it’s accessible through APIs.” He said this ability to visualize data can show you, for instance, why Boston users are abandoning their carts at a rate 20% greater than users in New York over the last three days, and come up with something in the application that explains it.

PART II: APM vs. AIOps vs. Observability: What’s the difference?



Creating a clear testing path to DevOps takeoff

BY CHRISTINA CARDOZA

DevOps has transformed the way businesses think and software development teams work, but the power of DevOps is still limited. According to Shamim Ahmed, CTO for DevOps solutions at the global technology company Broadcom, testing still stands in the way of achieving true DevOps and continuous delivery. Testing is a time-consuming process that requires many moving parts to happen in the right way, he explained.

Testing in general is also more complex, according to Maya Ber Lerner, CTO of Quali, a cloud automation and digital transformation company. “Testing is not as lightweight as development. You need to have your test automation in place, you need to have your applications in place, third-party components in place, and you need to have the right infrastructure and data set in place. Each one of those things can easily fail a test,” she said.

In addition, there are a number of different questions and scenarios you have to ask yourself when it comes to testing, said Matt Davis, managing director for QA Systems, a software quality company.

“Testing, of course, is at the heart of something that you develop. You test it to determine whether it’s going to be released, but that testing can be on the basis of: What should I test? Should I test everything? Should I be looking at impact analysis and change-based testing? Should I be looking at auto test case generation?” he said. And these are just a small sample of the questions and thinking required for testing.

Luckily, there are some ways teams can start to break down the barriers of testing in DevOps:

Don’t treat testing like a phase in the life cycle. Despite efforts to test early and test often, Broadcom’s Ahmed said testing is still looked at as a particular phase in the life cycle, “when in fact it should be continuous and embedded throughout the entire life cycle,” he said. To do this, testing needs to shift left and shift right. “The more testing, especially around automation, that you can build into the application from day one is key,” said Dan McFall, CEO of Mobile Labs, an enterprise mobile app testing company. McFall explained that techniques like test-driven development or behavior-driven development help developers become more involved in testing, really taking the time to look at the features and solutions, how those are going to be tested, and how those tests are going to validate business requirements. Shifting right enables developers to work better with operations to understand what is going on in production and take advantage of that to improve tests and test conditions, according to Ahmed.

Acquire the right skills. “As we shift left and shift right with continuous testing, we need to start to bring in additional skills to the traditional QA testing mindset,” said Ahmed. One trend Ahmed is seeing is the introduction of software development engineers in test (SDETs) — individuals who participate in development and also know testing techniques like white-box testing. “These guys are able to participate with developers, for example, in code reviews, understanding what is going to be built, and participating in the technical debt of the code,” he said.

Make testing automated and continuous. “DevOps as a process doesn’t work unless it’s automated and continuous,” said QA Systems’ Davis.


He explained that in order to shift from doing nightly builds to continuously integrating testing throughout the entire life cycle, things need to be 100% automated. “You can’t do things continuously if you have to manually intervene,” he noted. “You need to be able to trigger things and react to activities or react to results and outcomes all the way along your pipeline.”

One of the biggest door openers for automation is the connection of a pipeline, according to Davis. “There is a major advantage to being able to set up a series of quality gates and triggers, and automating different types of activities throughout the testing and development life cycle,” he said. “The more that can be automated in these pipeline stages via declarative pipeline scripts, the better it can be.”

Test environments should also be automated, according to Quali’s Ber Lerner. “If you have a testing organization that’s trying to write tests real fast, and they are automated tests, but it takes two weeks or even three days to set up a test environment, are you really Agile?” she asked. Additionally, test environments should be set up in a way that separates environment issues from test issues, so there is no confusion.

Broadcom’s Ahmed explained another way to speed up testing is through model-based testing, a technique where tests are automatically generated from models or a description of the system’s behavior. According to Ahmed, model-based testing helps auto-generate test assets on the fly to improve test productivity and make sure all the assets are available as soon as the requirements are ready.

Impact analysis can also be used to speed up testing. Impact analysis, or change-based testing, helps run only the tests that were impacted by code changes, limiting the number of tests that are actually run, according to Davis.
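As a simplified sketch of change-based testing (illustrative only; real tools derive the dependency map from build metadata or code coverage), selecting impacted tests can be as simple as intersecting changed files with each test's known dependencies:

```typescript
// Change-based test selection sketch: given which source files changed,
// run only the tests whose dependencies include a changed file.

// In a real tool this map comes from build metadata or coverage data;
// here it is hard-coded for illustration.
const testDependencies: Record<string, string[]> = {
  "cart.test.ts": ["cart.ts", "pricing.ts"],
  "login.test.ts": ["auth.ts", "session.ts"],
  "search.test.ts": ["search.ts", "index.ts"],
};

function selectImpactedTests(changedFiles: string[]): string[] {
  const changed = new Set(changedFiles);
  return Object.entries(testDependencies)
    .filter(([, deps]) => deps.some((d) => changed.has(d)))
    .map(([test]) => test);
}

// A commit touching only pricing logic triggers just the cart tests:
console.log(selectImpactedTests(["pricing.ts"])); // ["cart.test.ts"]
```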

“Quality counts for the bottom line. The way you make that bottom line more efficient is through automation and integration,” said Davis. “If software doesn’t work as well as it should, or the quality level is just not there, companies are going to lose ground to their competitors.”

Leverage manual testing. Despite the increased need to automate things, there is still a need for manual testing in DevOps. Automated test assets free up testers to do more value-added pieces of testing, such as testing from a real-user customer experience perspective or doing exploratory testing on a new piece of functionality, according to Broadcom’s Ahmed. He explained that not everything can be automated, and testing things that have a human element is hard to automate because they can be very subjective. Testers need to be able to go manually into features to actually evaluate the quality.

“You want to free up testers to go find your edge cases and try to break them,” said Mobile Labs’ McFall. “There are a lot of contextual things automated testing won’t cover.” For example, he explained, from a user experience perspective, if something like a drop-down menu or search field doesn’t work the way it should, test automation won’t be able to catch that. It will only tell you it is there; it can’t tell you if it isn’t user friendly. McFall believes it is always good to have a person available to validate test cases before they are automated, just to make sure they are actually worthy of automation.

Automate feedback. According to QA Systems’ Davis, there needs to be a way to share information, results and test analysis in a timely manner. Even if testing is being done manually, that doesn’t mean the feedback loop can’t be automated. Mobile Labs’ McFall explained that as you manually test things like user experience and capture interactions, you can send your findings back to a tool in an automated fashion, such as a real user monitoring system or customer experience system.

“I am running through a manual test case, but what is automated is the interaction of the manual verification with something like my ticketing system, so that I know in Jira it is done and passed, and it is automatically sending back logs and other types of information. That to me is how you can have manual testing in a DevOps environment. The feedback loop itself is automated,” said McFall.
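A minimal sketch of such an automated feedback loop appears below; the tracker endpoint and payload shape are hypothetical rather than any specific ticketing product's API:

```typescript
// Sketch of an automated feedback loop for manual testing: the tester
// records a verdict, and the result plus logs are pushed to a tracker
// automatically. Endpoint and payload shape are hypothetical.

interface ManualTestResult {
  testCaseId: string;
  verdict: "passed" | "failed";
  notes: string;
  logs: string[];
}

async function reportResult(result: ManualTestResult): Promise<void> {
  const res = await fetch("https://tracker.example.com/api/results", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ...result,
      reportedAt: new Date().toISOString(),
    }),
  });
  if (!res.ok) throw new Error(`failed to report ${result.testCaseId}: ${res.status}`);
}

// A tester verifies a drop-down manually, then one call closes the loop:
reportResult({
  testCaseId: "UX-142",
  verdict: "failed",
  notes: "Search field renders but suggestions are unusable on mobile",
  logs: ["device=iPhone 11", "build=2020.4.1"],
}).catch(console.error);
```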

Don’t treat all applications the same. For instance, mobile devices and applications are different from browsers and the web architecture. Mobile devices are very fragile, according to McFall, and there are many challenges around them. You need to understand the programmatic pieces around mobile apps as they relate to their infrastructure and environment. “People expect the mobile web to be similar to a desktop web architecture. Things can look similar, but under the hood they are very different. It is hard to automate that if you are trying to use common frameworks,” said McFall.

Enable self-service environments. According to Quali’s Ber Lerner, another main barrier to DevOps is infrastructure provisioning and application provisioning. “Making sure that everyone gets access to the infrastructure and the applications they need, whether it is people trying to run tests or systems that are trying to do automated tasks, becomes a big bottleneck,” she explained. To overcome this, Ber Lerner explained, teams need access to cloud-agnostic environments, which they can get through self-service portals, APIs or different plugins. “It gives people self-service access to their test DevOps environments while it is still possible for IT Ops to govern the way that it is done. It makes it possible for people to be fast but still be in control.”

“At the end of the day, it is about breaking down the barriers you have within the four walls of your organization and making sure people have access to the environments they need, systems they need, and tools they need to be successful,” said Mobile Labs’ McFall. “Let’s make sure we focus on testing the right things, and the more important things.”



How do you help test in DevOps?

Shamim Ahmed, CTO for DevOps solutions at Broadcom, a global technology company:
The promise of DevOps is that we could deliver more, faster, with no sacrifice in quality. In reality, we see some common blocks to DevOps success. At Broadcom, we address those challenges: we help eliminate the testing bottleneck and bring teams together in a single platform that lets everyone work the way they want to work. Agile teams want to work in their IDEs and command lines. They want to use open source, and they want tools that are seamlessly embedded into the CI/CD pipeline. Traditional testers want to use a UI, and features like scriptless testing. Broadcom makes this simple with the BlazeMeter Continuous Testing Platform, a single application that delivers all the functionality you need to make continuous testing a reality.

The BlazeMeter Continuous Testing Platform is designed for every team across the SDLC. It can be used “as code” in the IDE or with the easy UI. All teams can share assets and align around common metrics and AI-driven insights. AI is also used to optimize test cycles, predict defects and highlight areas for continuous improvement. Most organizations know that DevOps success depends on the ability to shift left and right, and deliver new capabilities with volume and velocity. BlazeMeter really helps them do that — all the way from aligning the business and dev around model-based requirements to using data from production to drive continuous improvement. And best of all, we make it easy. It’s literally click to start, and there’s a free version so you can get started today.

Dan McFall, CEO of Mobile Labs, an enterprise mobile app testing company:
For Mobile Labs, we really tackle the problem of mobile devices as enterprise infrastructure. What that means is answering the questions of: Where are my devices? Who has them? What state are they in? What is on them? What application versions are loaded? What can they see? All of the things you need to basically have mobile devices be available in the development and test environment. We solve that problem, and then make them essentially act just like virtual machines. You can call them via API layers. You can build a seamless, headless process around our infrastructure component into your DevOps process. You can have a broad and deep testing space that gives you the confidence that you have covered your bases.

We are also looking into more scripting as well, such as low-code or no-code scripting environments, and more behavioral-driven environments. We are seeing that a lot of people are resource-challenged, and don’t have folks who can write mobile automation. We are going to make it easier for people to do mobile automation from a scripting perspective this year. Those are the areas where we are continuing to help, which is just the right people with the right skills with access to the right environments at the right time. That is going to be a really key aspect to having a successful DevOps strategy.

Matt Davis, managing director for QA Systems, a software quality company:
QA Systems helps DevOps engineers overcome the challenges of test automation and tool integration by focusing on repeatable steps and command-line interfaces. Not everything in testing can be automated. However, by removing tedious manual steps from the process, we help engineers focus on building the right tests and solving problems. Automating checks on software quality metrics, architectural relationships, hierarchy and dependencies in your code ensures that you don’t deviate from your intended design or your code become less maintainable as it evolves. By combining automatic test case generation, integrated code coverage, a change-based test build system, plugging testing gaps automatically and linking your tests directly to your requirements, engineers can now access unprecedented test capabilities. Code-level analysis and testing should be at the heart of DevOps, where developers can use them efficiently every time code is checked in. QA Systems has found that fully automating these capabilities on the basis of open standards and integrated solutions significantly enhances the functionality of the verification CI/CD pipeline.

Maya Ber Lerner, CTO of Quali, a cloud automation and digital transformation company:
Test automation is great, but it only solves one part of the DevOps testing problem. To ensure the quality of your application, your developers and testers need instant access to dynamic, production-like environments throughout the value stream to develop applications and run automated tests effectively. However, time-consuming, error-prone manual processes for setting up and tearing down these environments create a huge bottleneck — leading to multiple teams struggling to share static environments, or skirting around IT Ops and implementing shadow-IT practices, which can greatly drive up costs and bypass security best practices.

Environment-as-a-Service solutions, like Quali’s CloudShell Colony, make it possible for developers and testers to gain immediate access to dynamic, production-like environments on demand with one click, or automatically by connecting your CI/CD tools to accelerate the value stream. We even have a customer that set up a Slack bot to provision environment requests. With CloudShell Colony, you can bridge the gap between Dev, Sec, and IT Ops, leveraging the speed of self-service, automated set-up and tear-down of dynamic environments across the value stream, coupled with policy-based configurations ensuring security, compliance, infrastructure utilization, and cost control, all from one tool.


A guide to DevOps testing tools

FEATURED PROVIDERS

• Broadcom: The BlazeMeter Continuous Testing Platform is a complete solution for shift-left continuous testing. The platform includes UI functional testing, user experience testing, API testing and monitoring, performance testing, and virtual services. All capabilities are deeply integrated in an intuitive workflow designed for agile teams and provide robust support for popular open source tools. Delivered as SaaS with support for multiple clouds or private cloud, it is a powerful tool for delivering innovation with quality and speed.

• Mobile Labs: The company’s patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges that arise during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server with custom tools provides “instant on” Appium test automation. GigaFox enables scheduling, collaboration, user management, security, mobile DevOps, and continuous automated testing for mobility teams spread across the globe, and can connect cloud devices to an industry-leading number of third-party tools such as Xcode, Android Studio, and many commercial test automation tools.

• QA Systems: Cantata from QA Systems is a certified, standards-compliant automated unit and integration testing tool for embedded C/C++ code. Highly automated test case generation, code coverage, static metrics and requirements tracing are supplemented by architectural analysis and test status management with Test Architect and Team Reporting add-ons. Cantata is integrated with an extensive set of development toolchains, from cross-compilers and debuggers to ALM and continuous integration tools.

• Quali: Quali’s CloudShell Colony helps organizations streamline effective application testing by providing development and testing teams with self-service access to automated test environments while delivering security, governance, and cost control. By removing error-prone manual inefficiencies and conflict-ridden static test environments, it creates a solid foundation for Continuous Testing and DevOps. Founded in 2007, Quali helps businesses accelerate innovation, improve quality, and control costs with on-demand access to automated application and infrastructure environment provisioning across any cloud.

• BMC AMI DevOps for Db2 accelerates the delivery of new and updated applications to the market. It comes with out-of-the-box integration with Jenkins, an application development orchestration tool.

• Cobalt.io is modernizing penetration testing by building hacker-like testing into development cycles. Pentests are performed by a global team of vetted, highly skilled professionals with deep domain expertise.

• Eggplant enables companies to view their technology through the eyes of their users. The continuous, intelligent approach tests the end-to-end customer experience and investigates every possible user journey, providing unparalleled test coverage essential to DevOps success.

• GitLab helps delivery teams fully embrace continuous integration to automate building, packaging, and testing their code. GitLab’s industry-leading CI capabilities enable automated testing, Static Application Security Testing, Dynamic Application Security Testing, and code quality analysis to provide fast feedback to developers and testers.

• HCL: AppScan is an automated application security testing and management tool. The company recently released version 10 of the solution, which focuses on securing DevOps. New features include interactive application security testing capabilities, out-of-the-box integrations with DevOps toolchains, and a new plugin to help developers identify vulnerabilities in their dev environments.

• IBM: Continuous Testing provides an end-to-end picture of how products react to new code. It does this early in the development lifecycle, which gives product teams confidence to push incremental code changes more frequently.

• Micro Focus: Minimize risk and maximize user satisfaction by testing early, often, and at scale with Micro Focus’ industry-leading, integrated portfolio for continuous and comprehensive testing of web, mobile, and enterprise applications.

• Progress: Telerik Test Studio enables QA and SDET professionals to create functional, performance and load tests that work immediately. Patent-pending multisense discovery eliminates broken tests and technical debt that plague other testing solutions.

• QASymphony’s qTest is a Test Case Management solution that integrates with popular development tools. QASymphony offers qTest eXplorer for teams doing exploratory testing.

• Sauce Labs: With more than 3 billion tests run and counting, the Sauce Labs Continuous Testing Cloud is the only continuous testing platform that delivers a 360-degree view of your customers’ application experience.

• ShiftLeft Inspect is a next-generation static code analysis solution, purpose-built to insert security into developer workflows without slowing them down.

• SmartBear: At SmartBear, we focus on your one priority that never changes: quality. We know delivering quality software over and over is complicated. So our tools are built to streamline your process while seamlessly working with all the tools you use — and will use.

• Testlio: With robust client services, a global network of validated testers, and a comprehensive software platform, we provide a suite of flexible, scalable, and on-demand testing solutions. z
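Several of the products above, GigaFox among them, expose hosted devices through a standard Appium server, so a script can drive a remote device the same way it would a local one. Here is a minimal sketch using the Appium Python client; the server URL, device name, and app path are placeholders to be replaced with whatever your provider issues.

# A minimal sketch of running a test against a remotely hosted device via
# Appium. The endpoint, device name, and app path below are placeholders;
# substitute the values your device cloud actually provides.
from appium import webdriver  # pip install Appium-Python-Client

caps = {
    "platformName": "Android",
    "deviceName": "Pixel 3",               # a device in the hosted pool
    "app": "/path/to/app-under-test.apk",  # or an app already on the device
    "automationName": "UiAutomator2",
}

driver = webdriver.Remote("http://devices.example.com:4723/wd/hub",
                          desired_capabilities=caps)
try:
    # Drive the app exactly as a local Appium session would.
    driver.find_element_by_accessibility_id("login").click()
finally:
    driver.quit()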



Guest View BY ADAM LIEBERMAN

8 open-source data science libraries
Adam Lieberman is the data scientist lead and computer engineer at Finastra.

As organizations wake up to the multitude of ways advanced technologies can augment their businesses, developers with relevant skills are becoming ever more valuable. Data is the key to a whole kingdom of opportunity, and when combined with AI and machine learning tools, the bounds of this kingdom are practically limitless.

Even for those without the necessary skills to code from scratch — to create algorithms, searching and sorting methods, data manipulation and preprocessing methods, to name a few — there is a thriving open-source community that allows developers access to ready-made tools that perform these tasks. And for those who do possess the technical skills to code these methods, it simply doesn’t make sense to reinvent the wheel each time.

More than a decade ago, the software development community realized that recoding popular and/or useful methods over and over was not an efficient use of time, and developed libraries that their peers could use to call methods that have been circulated time after time. These libraries were not developed by companies paying employees, but rather by individual contributors from all over the world working on library development for the greater good of the data science and software development community.

Companies like Google and Amazon are also heavily involved in the open-source community — more than that, they were largely responsible for its inception. They were among the first firms to realize that intellectual property is far less useful today than data and collaboration, and by open sourcing their tools and technologies, they enabled developers to build upon and augment them, thus kickstarting the open-source community on which many of us now rely.

Thanks to this community, any firm wishing to take advantage of AI and machine learning tools can do so, so long as the right use case has been identified. Here’s a list of my top eight popular Python data science libraries:

1. NumPy – Allows a user to process large multidimensional arrays and matrices, with hundreds of methods to perform mathematical operations over these data structures efficiently. NumPy has had over 641 individual contributors with 17,911 code commits and 136 releases.

2. Pandas – A library built around a data structure called the Pandas DataFrame, similar to an Excel spreadsheet. It is built on top of NumPy and allows users to easily manipulate data, filter it, group it, and combine it. Pandas has had over 1,165 contributors with 17,144 code commits and 93 releases.

3. Matplotlib – Allows developers to visualize data in diagrams, plots, and graphs. It allows for a wide variety of chart types, from scatter plots to non-Cartesian coordinate graphs. There have been over 724 individual contributors and 25,747 code commits on just over 70 releases.

4. Seaborn – A high-level API based on Matplotlib. It is a popular alternative for its nice color schemes and chart styles, built by roughly 100 developers.

5. scikit-learn – The go-to library for machine learning algorithms on tasks such as classification, regression, clustering, dimensionality reduction, and anomaly detection. Over 1,000 individual contributors have made 22,743 commits on 86 releases of scikit-learn.

6. TensorFlow – A very popular deep learning and machine learning framework started by Google Brain and taken over by the open source community. It allows developers to work with neural networks to solve a wide variety of tasks. Over 1,500 individuals have contributed more than 30,000 commits to build out TensorFlow.

7. Keras – A high-level API built on top of TensorFlow for working with neural networks in an easier manner. Almost 700 individuals have contributed over 4,500 commits to bring this library to the developer community.

8. NLTK – A natural language toolkit developed by almost 250 individuals on 13,000 commits. It is a platform for the field of natural language processing, where we can process and analyze textual data and build models to understand and gain predictions from this data.
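To show how a few of these libraries compose, here is a minimal, self-contained sketch; the dataset is synthetic and the column names are invented purely for illustration.

# A minimal sketch tying four of the eight together: NumPy for arrays,
# Pandas for tabular manipulation, scikit-learn for a model, and
# Matplotlib for plotting. The data is synthetic, for illustration only.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# NumPy: generate a synthetic dataset (y is roughly 3x plus noise).
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + rng.normal(0, 2.0, 200)

# Pandas: wrap the arrays in a DataFrame and filter rows.
df = pd.DataFrame({"x": x, "y": y})
df = df[df["x"] > 1.0]

# scikit-learn: fit a linear regression to recover the slope.
model = LinearRegression().fit(df[["x"]].to_numpy(), df["y"].to_numpy())
print(f"estimated slope: {model.coef_[0]:.2f}")  # should land near 3

# Matplotlib: plot the points and the fitted line, then save the figure.
xs = np.linspace(1, 10, 50).reshape(-1, 1)
plt.scatter(df["x"], df["y"], s=8)
plt.plot(xs, model.predict(xs), color="red")
plt.savefig("fit.png")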

The individuals contributing to these libraries are constantly updating, adding, removing, and refining their work for the community and keeping the libraries up to date. It’s in all our interests to continue building and maintaining these libraries that we use and love. z



Analyst View BY ARNAL DAYARATNA

3 steps to becoming cloud native
Dr. Arnal Dayaratna is Research Director of Software Development at IDC.

What is a cloud-native enterprise and how does an enterprise achieve that designation? A cloud-native enterprise is one that specializes in cloud-native development, or development that is optimized for distributed infrastructures. Cloud-native development is optimized for distributed infrastructures because of its ability to bring the automation of the cloud directly to the application stack in the form of automated scalability, elasticity and high availability.

By automating the operational management of application infrastructures, cloud-native development enables enhanced development velocity and agility in ways that empower enterprises to produce, disseminate and consume software and application-related services on an unprecedented scale. The automation specific to cloud-native development is important because it enables the development and maintenance of ecosystems of digitized objects such as connected homes, appliances, automobiles, laptops, mobile devices and wearables.

Technology suppliers that are seeking to gain market share in the rapidly emerging landscape of digitized ecosystems would do well to embed cloud-native development practices in their development methodologies by taking the following three steps: (1) embracing platform as a service; (2) cultivating developer familiarity with cloud-native technologies; and (3) creating a developer-centric culture in which everyone is a developer.

Platform as a service is key

Platform as a service is a key component of an enterprise’s transition to cloud-native development because it provides developers with self-service access to developer tools, as well as the ability to provision infrastructure. This ability to self-serve accelerates development cadences and empowers developers to work independently of a centralized IT authority. Developers enjoy increased agility in ways that foster enhanced responsiveness and participation in collaborative decisions.

Another key step for enterprises in their transition to cloud native involves cultivating developer familiarity with cloud-native technologies such as microservices, containers, container orchestration frameworks and processes such as DevOps. The universe of cloud-native technologies also includes functions as a service, APIs, serverless technologies, service mesh and a multitude of others. That said, cultivating developer familiarity with microservices and containers marks a significant step in an enterprise’s journey to becoming cloud native, one that is likely to initiate familiarity with adjacent technologies.
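As a small, concrete taste of what those orchestration frameworks automate, here is a minimal sketch using the Kubernetes Python client to declare the kind of automated scalability described above. It assumes a reachable cluster, valid kubeconfig credentials, and an existing Deployment named web in the default namespace; none of these specifics comes from the column itself.

# A minimal sketch of declaring automated scalability with the Kubernetes
# Python client. Assumes a Deployment named "web" already exists in the
# "default" namespace and that kubeconfig credentials are available.
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Ask Kubernetes to keep average CPU near 70% by adding or removing replicas.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)

Once created, the scaling decisions run inside the cluster itself, which is exactly the sense in which cloud automation is brought "directly to the application stack."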


A developer-centric culture

To become truly cloud native, enterprises need to create a developer-centric culture in which everyone is a developer. This means that professional resources such as business analysts, project managers, HR, business partners, market intelligence analysts and data scientists all participate in application development in one form or another, whether it be through the development of net-new applications by using low-code or no-code development tools, or otherwise through configuring dashboards and widgets in pre-existing application templates.

This democratization of development is a key component of an enterprise’s path toward cloud-native development, because it increases the digital literacy of business resources that collaborate with IT resources. The increased digital literacy of business stakeholders enables them to more richly inform application developers about the requirements for the digitization of business operations. In addition, this empowers business professionals to contribute to application development and subsequently augment and extend the digitization efforts that are led by professional application developers.

The key takeaway here is that the transition of an enterprise to cloud native transcends the acquisition of developer familiarity with cloud-native technologies. The transition requires the confluence of platform adoption, proficiency with cloud-native technologies and the democratization of development to professional resources who do not have the job title of a developer. This confluence paves the way for high-velocity, hyperscale development that empowers enterprises to create and maintain ecosystems of digitized objects that serve the intensified needs of the digital economy for increased digitization. z


Industry Watch BY DAVID RUBINSTEIN

Home is… where I always am!
David Rubinstein is editor-in-chief of SD Times.

The novel coronavirus pandemic has forced many of us to change our routines. Perhaps the biggest of these changes is the fact that now, many of us are working from home for the first time. In the software industry, remote work — or telework, as it’s sometimes called — is fairly common. But in magazine publishing, where editors and art directors work closely together to create a lively issue, being separated can create problems. Among those are the use of digital collaboration tools, access to the publishing software, bandwidth, and more.

For me, personally, the biggest issue was actually just being home. I don’t have a home office set up, as I’ve never needed one, so finding space to work that does not preclude the rest of the family from doing what they do each day was the first challenge. My wife was at first only lukewarm to the idea of me working from home, because, well, I’d just be around all day, being all needy and stuff. And it only got colder from there, as no matter where I was in the house, I apparently was in the way of her completing some task. By day’s end, she was on HER computer, researching other options.

“Why can’t you rent one of those daily workspaces they have in office buildings?” she asked.

“Because our state is basically locked down?” I say. “We’re not supposed to go anywhere.”

“Not SUPPOSED to go is not an order NOT to go,” she said. She’s got some lawyer in her.

I’ve started a journal of my experience, which I’ll share with you now.


Day 1 of “working from home” due to COVID-19. I have a dentist appointment at 9:30 AM, so I wake up around 7:30, answer emails until 7:33, and do the Sunday New York Times crossword until 9:10. In between, I hear from my art director that she can’t get onto Slack. I throw on clothes (Oh no, was the computer camera ON??) — no shower, no shave — and go.

I return home about 10:30. Time to check in with the team and see what’s happening. I get the art director back onto Slack, then make myself breakfast. Now it’s 11:30. I start editing a couple of stories and send them along for page layout. I put in a solid two hours of work, then at 1:30, crawl out of the boiler room/new workspace to go upstairs to make coffee. My daughter Hallie is awake (did I mention she’s home from college because it was shut down due to COVID-19?). I ask her if she’s upset that her Delaware Blue Hens’ basketball season ended the way it did, and am quickly reminded that she has no interest in that. It did, though, lead into a half-hour of listening to college fight songs (once a marching band guy…).

Side note: My alma mater, the University of Maryland, has TWO! Technically, one’s a fight song, presumably played DURING games, and the other is a ‘Victory’ song, presumably to be played AFTER victories. Yet at games, the Mighty Sound of Maryland marching band plays ‘Maryland Victory’ WAY more times than the fight song, despite rarely winning. Makes no sense. But in this coronavirus world, little makes sense.

Back to work for another hour. Now it’s lunchtime. I go back upstairs to have a sandwich, and Hallie — clearly bored on her revised spring break at home — talks Carrie and me into watching an episode of “Schitt’s Creek” — a VERY funny Netflix show. One episode turns into five, throughout which I keep exclaiming, “I can’t... I’m WORKING!”

After lunch, I’m back at it, transcribing recorded interviews into text. Play. Pause. “What did he say?” Rewind. Play. Pause. In two hours, I’ve transcribed 11 minutes of recording. I’d rather be back at the dentist! Six minutes to go until the end of the recording, I drop everything to start writing this.

Five o’clock. End of Day 1. They’d better find a cure, and fast! The other options are me gaining what my daughter Lindsey called “the COVID 15,” which is like the “Freshman 15” of weight gain, but worse, because it’s associated with coronavirus. At least we don’t have peanut butter-stuffed pretzel nuggets... YET! And, of course, the final option ... me being the victim of a bludgeoning death for having committed the crime of... always being home! z

