SD Times October 2021



OCTOBER 2021 • VOL. 2, ISSUE 52 • $9.95 • www.sdtimes.com


EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein (drubinstein@d2emerge.com)
NEWS EDITOR: Jenna Sargent (jsargent@d2emerge.com)
MULTIMEDIA EDITOR: Jakub Lewkowicz (jlewkowicz@d2emerge.com)
SOCIAL MEDIA AND ONLINE EDITOR: Katie Dee (kdee@d2emerge.com)
ART DIRECTOR: Mara Leonardi (mleonardi@d2emerge.com)
CONTRIBUTING WRITERS: Jacqueline Emigh, Elliot Luber, Caryn Eve Murray, George Tillmann
CONTRIBUTING ANALYSTS: Enderle Group, Gartner, IDC, Intellyx

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi (mleonardi@d2emerge.com)
LIST SERVICES: Jessica Carroll (jcarroll@d2emerge.com)
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351 (dlyman@d2emerge.com)
SALES MANAGER: Jon Sawyer, 603-547-7695 (jsawyer@d2emerge.com)

D2 EMERGE LLC (www.d2emerge.com)
PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

Advertisement: dtSearch®. Instantly Search Terabytes. dtSearch's document filters support popular file types, emails with multilevel attachments, a wide variety of databases, and web data. Over 25 search options, including efficient multithreaded search, easy multicolor hit highlighting, and forensics options like credit card search. Developers: SDKs for Windows, Linux, and macOS; cross-platform APIs for C++, Java, and .NET 5 / .NET Core; FAQs on faceted search, granular data classification, Azure, AWS, and more. Visit dtSearch.com for hundreds of reviews and case studies, and fully functional enterprise and developer evaluations. The Smart Choice for Text Retrieval® since 1991. dtSearch.com, 1-800-IT-FINDS.



Contents

VOLUME 2, ISSUE 52 • OCTOBER 2021

FEATURES
Developers are gaining more tools for the edge as it becomes more mainstream (page 6)
Companies to Watch in 2022 (page 12)
Reclaim the lost art of critique (page 16)

NEWS
News Watch (page 4)
Most dev teams aren't CI/CD experts (page 15)
CircleCI webhooks enable dev teams to streamline workflows (page 15)

BUYERS GUIDE
API management is a data integration problem (page 18)

COLUMNS
GUEST VIEW by Danny Allan: Change your DevOps expectations (page 24)
GUEST VIEW by Dan Pupius: Creating healthy hybrid teams (page 25)
INDUSTRY WATCH by David Rubinstein: The password is ... passwordless (page 26)

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 2 Roberts Lane, Newburyport, MA 01950. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2021 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 2 Roberts Lane, Newburyport, MA 01950. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



NEWS WATCH

Postman updates its API platform

The new and improved features include deeper integration with version control systems; an all-new private API network, which provides a central directory of all internal APIs in an organization; and simplified API documentation and onboarding. The new version of the platform also includes a new enterprise governance feature: team members with the Community Manager role can now view all public collection links created by all team members in one place, with the ability to see who created which link and to remove any links to collections that are not meant for public viewing. Developers can now bring together key components of API definition, including source code management, CI/CD, API gateways, and APM, to help govern the entire API landscape, according to Postman.

Microsoft allows alternatives to passwords

Microsoft announced that users of Outlook, OneDrive, Family Safety, and more can now opt out of using passwords and choose alternative authentication methods, predicting that "the future is passwordless." This comes after the company announced that passwordless sign-in was generally available for commercial users, bringing the feature to enterprise organizations around the world. Some of the alternative authentication methods that Microsoft now offers include the Microsoft Authenticator app, Windows Hello, a security key, or a verification code sent to your phone or email. Microsoft software users can now visit account.microsoft.com, sign in, and choose Advanced Security Options. Under "Additional Security," you'll see "Passwordless Account." Select "Turn on."

Java 17 released with updates to LTS schedule

The latest release of Java is now available. Java 17 is a long-term support (LTS) release, the previous one being Java 11. According to Oracle, over 70 JDK Enhancement Proposals (JEPs) have been added to the language since Java 11.

With this LTS release, Oracle is also working to enhance support for customers. It worked with the developer community to improve LTS scheduling to give companies more flexibility on when to migrate to a new LTS version. The next LTS release will be Java 21 in September 2023, which would change the LTS release cadence from three years to two. To make Java easier to access, Oracle has also made changes to the Java license. Java 17 and subsequent Java releases will be provided under a free-to-use license until a year after the next LTS release. The company will continue to provide OpenJDK releases under the GPL as well.

Another main focus of this release is accelerating Java adoption in cloud settings. Recently, the company introduced Java Management Service, an Oracle Cloud Infrastructure (OCI) service for managing Java runtimes and applications. According to the company, it provides visibility over Java deployments, highlights unplanned Java applications, and checks that the latest security patches have been applied. Along with updates to Java Management Service, Java 17 itself delivers new language enhancements, library updates, support for Apple M1 silicon, and the removal and deprecation of legacy features. Other enhancements in Java 17 include a macOS/AArch64 port, a new macOS rendering pipeline, sealed classes, and more.


Broken Access Control tops OWASP 2021 list

Broken Access Control has dethroned Injection as the top vulnerability in the OWASP Top 10 2021 list; it previously held fifth place. The 34 Common Weakness Enumerations (CWEs) mapped to Broken Access Control had more occurrences in applications than any other category, according to the OWASP Top 10 2021. Cryptographic Failures (previously known as Sensitive Data Exposure) moved up from third to second place. The renewed focus here is on failures related to cryptography, which often lead to sensitive data exposure or system compromise. Injection slid down to third, with Cross-site Scripting now qualifying as part of this category. New categories of vulnerabilities this year include Insecure Design, Software and Data Integrity Failures, and Server-Side Request Forgery.

CodeSignal announces new IDE for dev hiring

CodeSignal, a technical recruiting company, announced new advanced hiring assessment capabilities. The release features a new IDE designed to test candidates' technical skills with real-world assessments. With the new IDE, candidates will have the opportunity to interact with code, files, a terminal, and a preview of their application. This allows them to experience the hiring process much like they would experience the actual job, providing a more familiar work environment and a similar experience to coding on local machines. According to CodeSignal, this better allows candidates to showcase their full skill set in a work-like environment, making the hiring process more efficient for both the applicant and the employer.

CodeSignal's new IDE is also completely customizable, allowing employers and recruiters to set their own threshold for qualifications while creating unique assessments with comprehensive testing options for each open position. These upgrades were made possible by a $50 million Series C funding round led by Index Ventures. This influx of funding brings CodeSignal's total funding to $87.5 million.

New release of Tableau improves data prep

Tableau 2021.3 includes better ways to prepare and manage data, the ability to explore data through a Tableau Server or Tableau Online site before sharing it with others, and new custom sample workbooks. Improvements to Tableau Prep include linked tasks, which allow users to automate multiple flow jobs and ensure they happen in the right order, and the ability to generate missing rows based on dates, date times, or integers to fill in gaps in data. Tableau Catalog updates include data quality warnings in subscription emails and the ability to see inherited descriptions within web authoring flows. Tableau 2021.3 also introduces Personal Space, which allows users to stage content before sharing it with others.

Governance and security updates in Tableau 2021.3 include centralized row-level security and a new content type called virtual connections that allows users to create tables through a governed database connection, embed service account credentials, and extract data from data tables to reuse within Tableau Server and Tableau Online. Another new addition is an improved integration with Slack that enables Tableau notifications directly through Slack. Users can be notified through Slack when a specific data threshold is triggered.

Elastic updates Stack, Cloud

Elastic has announced new capabilities and updates to the Elastic Stack and Elastic Cloud. The upgrades focus on simplifying data management and onboarding, as well as enabling users to achieve faster data insights. Among the featured upgrades is native Google Cloud data source integration with Google Cloud Dataflow. This provides users with faster data ingestion into Elastic Cloud as well as a simplified data architecture. The integration allows users to easily and securely ingest Pub/Sub, BigQuery, and Cloud Storage data into their Elastic Cloud deployments. In addition, updates to Elasticsearch and Kibana include enhancements to runtime fields, which give users a new way to explore their data with the flexibility of schema-on-read alongside schema-on-write.

TypeScript 4.4 brings control flow analysis

Control flow analysis is now available for aliased conditions and discriminants; it checks whether a type guard has been used before a particular piece of code. Another new feature is index signatures for symbol and template string patterns. Index signatures are used to describe objects whose properties must use a certain type, but until now they could only be used on string and number keys.

Also, in TypeScript 4.4, the "unknown" type can be the default for catch variables. According to Microsoft, in JavaScript any type of value can be thrown and then caught in a catch clause, so in the past TypeScript typed catch clause variables as "any"; once it added the "unknown" type, it realized that was a better choice than "any" for catch clauses. This release introduces a new flag called --useUnknownInCatchVariables that changes the default type from "any" to "unknown."

TypeScript 4.4 also added support for static blocks in classes, an upcoming ECMAScript feature. Static blocks can be used to write a sequence of statements with their own scope that are able to access private fields within a containing class. This allows developers to write more complex initialization code, with the ability to write statements and full access to a class's internals, without having to worry about leakage of variables. The --help option has also been updated in this release, with changes to descriptions of compiler options and updated colors and other visual separation.
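These features are easier to see in code. The following is a minimal sketch (illustrative only, not taken from the release announcement) showing aliased-condition narrowing, an "unknown" catch variable, and a static block:

```typescript
// Minimal sketches of the TypeScript 4.4 features described above.

// 1. Control flow analysis of aliased conditions: the type guard is saved
//    in a constant, and the narrowing still applies where it is used.
function describe(value: string | number): string {
  const isString = typeof value === "string"; // aliased condition
  if (isString) {
    return value.toUpperCase(); // value narrowed to string
  }
  return value.toFixed(2); // value narrowed to number
}

// 2. With --useUnknownInCatchVariables (or --strict), catch variables
//    default to `unknown` instead of `any`, forcing an explicit check.
try {
  JSON.parse("not valid json");
} catch (err) { // err: unknown
  if (err instanceof Error) {
    console.error(err.message);
  }
}

// 3. Static blocks run once when the class is initialized and have
//    their own scope for multi-statement setup logic.
class Registry {
  static defaults: Map<string, string>;
  static {
    const m = new Map<string, string>(); // local to this block
    m.set("env", "production");
    Registry.defaults = m;
  }
}
```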

People on the move

• Kyndryl, the managed infrastructure services business spun off from IBM, has appointed Harsh Chugh as its new chief operating officer. Chugh comes with over 20 years of experience in engineering, management consulting, finance, and operations. Previously he was the chief financial officer of PlanSource, where he led several modernization efforts.

• Kit Colbert is being promoted to chief technology officer at VMware. He joined the company back in 2003 and led the creation of vMotion and Storage vMotion in VMware vSphere. At VMware he has held roles including Cloud CTO, general manager of VMware's Cloud-Native Apps business, CTO of VMware's end-user computing business, and lead architect of the VMware vRealize Operations Suite.

• CloudBees has announced that Dinesh Keswani will be taking on the role of the company's chief technology officer. Previously, he worked at HSBC as CTO, and before that held roles as vice president of engineering and information systems at GoDaddy and director of eCommerce, SaaS, and API platforms at Intuit.


Developers are gaining more tools for the edge as it becomes more mainstream

BY JAKUB LEWKOWICZ

The edge is growing, and cloud providers know it. That's why they're creating more tools to help with embedded programming. According to IDC's research, edge computing is growing, with 73% of companies in 2021 saying that edge computing is a strategic initiative for them and that they are already making investments to adopt it. Last year especially saw a lot of that growth, according to Dave McCarthy, research vice president of Cloud and Edge Infrastructure Services at IDC.

Major cloud providers have already realized the potential of the technology and are adding edge capabilities to their toolkits, which is changing the way developers can build for it. "AWS was trying to ignore what was happening in the on-premises and edge world, thinking that everything would go to the cloud," McCarthy said. "So they finally kind of realized that in some cases, cloud technologies, the cloud mindset, I think works in a lot of different places, but the location of where those resources are has to change."

For example, in December 2020, AWS came out with AWS Wavelength, a service that enables users to deliver ultra-low latency applications for 5G devices. In a way, AWS is embedding some of its cloud platform inside telco networks such as Verizon's, McCarthy explained. Also last year, AWS rewrote Greengrass, an open-source edge runtime, to be friendlier to cloud-native environments. Meanwhile, Microsoft is doing the same with its own IoT platform.

"This distribution of infrastructure is becoming more and more relevant. And the good news for developers is it gives them so much more flexibility than they had in the past; flexibility about saying, I don't have to compromise anymore because my cloud native kind of development strategy is limited to certain deployment locations. I can go all-in on cloud native, but now I have that freedom to deploy anywhere," McCarthy said.

Development for these types of devices has also significantly changed since its early stages. At first, the world of embedded systems was one in which intelligent devices gathered information about the world. Then AI was introduced, and all of the data that was acquired began being processed in the cloud. Now, the world of edge computing is about moving real-time analysis to the edge. "Where edge computing came in was to marry the two worlds of IoT and AI, or just this intelligent system concept in general, but to do it completely autonomously in these locations," McCarthy said. "Not only were you collecting that data, but you had the ability to understand it and take action, all within that sort of edge location. That opened the door to so many more things."

In the early days of the embedded software world, everything seemed very unique, which required specialized frameworks and a firm understanding of how to develop for embedded operating systems. That has now changed with the adoption of standardized development platforms, according to McCarthy.

Support for edge deployments

A lot more support for deployments at the edge can now be seen in cloud-native and container-based applications. "The fact that the industry, in general, has started to align around Kubernetes as being the main orchestration platform for being able to do this just means that now it's easier for developers to think about building applications using that microservices mindset; they're putting that code in containers with the ability to place those out at the edge," McCarthy said. "Before, if you were an embedded developer, you had to have this specialized skill set. Now, this is becoming more available to a wider set of developers that maybe didn't have that background."



Some of the more traditional enterprise environments, like VMware or Red Hat, have also been looking at how to extend their platforms to the edge. Their strategy, however, has been to take their existing products and figure out how to make them more edge-friendly. In many cases, that means supporting smaller configurations and handling situations where the edge environment might be disconnected. This is different from the approach of a company like SUSE, which has a strategy of creating edge-specific offerings, according to McCarthy. SUSE's Enterprise Linux, for example, has a micro version specifically designed for the edge.

"These are two different ways of tackling the same problem," McCarthy said. "Either way, I think they're both trying to attack this from that perspective of let's create standardization with familiar tools so that developers don't have to relearn how to do things. In some respects, what you're doing is abstracting some of the complexity of what might be at the edge, but give them that flexibility of deployment."

This standardization has proven essential because the further you move toward the edge, the greater the diversity in hardware types. Depending on the type of sensors being dealt with, there can be issues with communication protocols and data formats. This happens especially in vertical industries such as manufacturing that already have legacy technology that needs to be brought into this new world, McCarthy said. However, this level of uniqueness is becoming rarer, with less on the unique side and more being standardized.

Development requirements differ

Developing for the edge is different than for other form factors because edge devices have a longer lifespan than the equipment found in a data center, something that has always been true in the embedded world. Developers now have to think about the longer lifespan of both the hardware and the software that sits on top of it. At the same time, the fast pace of today's development world has driven demand to deliver new features and functionality faster, even for these devices, according to McCarthy.

That's why the edge space has seen the prevalence of device management capabilities offered by cloud providers that give enterprises information about whether they can turn off a device, update its firmware, or change configurations. In addition to elucidating the life cycle, device management also helps with security, because it offers guidance on what data to pull back to a centralized location versus what can potentially be left out on the edge. "This is so you can get a little bit more of that agility that you've seen in the cloud, and try to bring it to the edge," McCarthy said. "It will never be the same, but it's getting closer."

Decentralization a challenge

Developing for the edge still faces challenges due to its decentralized nature, which requires more monitoring and control than a traditional centralized computing model would need, according to Mrudul Shah, the CTO of Technostacks, a mobile app development company in the United States and India.

Companies are seeing benefits in moving to the edge

Infinity Dish
Infinity Dish, which offers satellite television packages, has adopted edge computing in the wake of the transition to the remote workplace. "We've found that edge computing offers comparable results to the cloud-based solutions we were using previously, but with some added benefits," said Laura Fuentes, operator of Infinity Dish. "In general, we've seen improved response times and latency during data processing." Further, by processing data on a local device, Fuentes added, the company doesn't need to worry nearly as much about data leaks and breaches as it did using cloud solutions. Lastly, the transmission costs were substantially less than they would be otherwise. However, Fuentes noted that there were some challenges with the adoption of edge. "On the flip side, we have noticed some geographic discrepancies when attempting to process data. Additionally, we had to put down a lot of capital to get our edge systems up and running, a challenge not all businesses will have the means to solve," Fuentes said.

Memento Memorabilia
Kane Swerner, the CEO and co-founder of Memento Memorabilia, said that as her company began implementing edge throughout the organization, hurdles and opportunities began to emerge. Memento Memorabilia offers private signing sessions to guarantee authentic memorabilia from musicians, celebrities, actors, and athletes to fans. "We can simply target desired areas by collaborating with local edge data centers without engaging in costly infrastructure development," Swerner said. "To top it all off, edge computing enables industrial and enterprise-level companies to optimize operating efficiency, improve performance and safety, automate all core business operations, and guarantee availability most of the time." However, she said that one significant worry regarding IoT edge computing devices is that they might be exploited as an entry point for hackers. Malware or other breaches can infiltrate the whole network via a single weak spot.



Connectivity issues can cause major setbacks to operations, and often the data that is processed at the edge is not discarded, which causes unnecessary data stuffing, Shah added.

The demand for application use cases in these different edge environments is extending the need for developers to consider the requirements of that environment for that particular vertical industry, according to Michele Pelino, a principal analyst at Forrester. Also, the industry has had a lot of device fragmentation, so there is going to be a wide range of vendors saying they can help with one's edge requirements. "You need to be sure you know what your requirements are first, so that you can really have an apples-to-apples conversation, because each of those vendor categories is going to come from their own areas of expertise to say, 'of course, we can answer your question,' but they may not be what you need," Pelino said.

Currently, for most enterprise use cases for edge computing, commodity hardware and software will suffice. When sampling rates are measured in milliseconds or slower, the norms are low-power CPUs, consumer-grade memory and storage, and familiar operating systems like Linux and Windows, according to Brian Gilmore, the director of IoT Product Management at InfluxData, maker of an open-source time series database. The analytics here are applied to data and events measured in human time, not scientific time, and vendors building for the enterprise edge are likely able to adapt applications and architectures built for desktops and servers to this new form factor.

"Any developer building for the edge needs to evaluate which of these edge models to support in their applications. This is especially important when it comes to time series data, analytics, and machine learning," Gilmore said. "Edge autonomy, informed by centralized (currently in the cloud) evaluation and coordination, and right-place, right-time task execution in the edge, cloud, or somewhere in between, is a challenge that we, as developers of data analytics infrastructure and applications, take head on."

No two edge deployments the same

An edge architecture deployment calls for comprehensive monitoring, critical planning, and strategy, as no two edge deployments are the same. Because it is next to impossible to get IT staff to a physical edge site, deployments should be designed for remote configuration and provide resilience, fault tolerance, and self-healing capabilities, Technostacks' Shah explained.

In general, a lot of the requirements that developers need to account for will depend on the environment the edge use case is being developed for, according to Forrester's Pelino. "It's not that everybody is going in one specific direction when it comes to this. So you sort of have to think about the individual enterprise requirements for these edge use cases and applications with their developer approach, and sort of what makes sense," Pelino said.

To get started with their edge strategy, organizations first need to make sure they have their foundation in place, usually starting with their infrastructure, IDC's McCarthy explained. "So it means making sure that you have the ability to place applications where you need, so that you have the management and control planes to address the hardware, the data, and the applications," McCarthy explained. Companies also need to design that framework for future expansion as the technology becomes even more prevalent.

"Start with the use cases that you need to address for analytics, for insight for different kinds of applications, where those environments need to be connected and enabled, and then say, OK, these are the types of edge requirements I have in my organization," Forrester's Pelino said. "Then you can speak to your vendor ecosystem about: do I have the right security, analytics, and developer capabilities in-house, or do I need some additional help?"


When adopted correctly, edge environments can provide many benefits. Low latency is one of the key benefits of computing at the edge, along with the ability to do AI and ML analytics in locations where that might not have been possible before, which can save costs by not sending everything to the cloud.

At the edge, data collection speeds can approach near-continuous analog-to-digital signal conversion outputs of millions of values per second, and maintaining that precision is key to many advanced use cases in signal processing and anomaly detection. In theory, this requires specific hardware and software considerations: FPGA, ASIC, DSP, and other custom processors; highly accurate internal clocks; hyper-fast memory; real-time operating systems; and low-level programming that eliminates internal latency, InfluxData's Gilmore explained.

Despite popular opinion, the edge is beneficial for security

Security has come up as a key challenge for edge adoption because there are more connected assets that contain data, and there is also an added physical component for those devices to get hacked. But the edge can also improve security. "You see people are concerned about the fact that you're increasing the attack surface, and there's all of this chance for somebody to insert malware into the device. And unfortunately, we've seen examples of this in the news where devices have been compromised. But, there's another side of that story," IDC's McCarthy said. "If you look at people who are concerned about data sovereignty, like having more control about where data lives and limiting the movement of data, there is another storyline here about the fact that edge actually helps security."

Security comes into play at many different levels of the edge environment. It is necessary at the point of connecting the device to the network, at the data insight and analytics piece in terms of ensuring who gets access to it, and for the security of the device itself, Forrester's Pelino explained.



4 critical markers for success at the edge

A recent report by Wind River, a company that provides software for intelligent connected systems, found that there are four critical markers for successful intelligent systems: true compute on the edge, a common workflow platform, AI/ML capabilities, and ecosystems of real-time applications. The report, "13 Characteristics of an Intelligent Systems Future," surveyed technology executives across various mission-critical industries and revealed the 13 requirements of the intelligent systems world for which industry leaders must prepare. The research found that 80% of these technology leaders desire intelligent systems success in the next five years.

True compute at the edge, by far the largest of the characteristics in the survey at 25.5% of the total share, is the ability of devices to fully function in near-latency-free mode on the farthest edge of the cloud, for example on a 5G network, in an autonomous vehicle, or in a highly remote sensor in a factory system.

The report stated that by 2030, $7 trillion of the U.S. economy will be driven by the machine economy, in which systems and business models increasingly engage in unlocking the power of data and new technology platforms. Intelligent systems are helping to drive the machine economy and more fully realize IoT, according to the report. Sixty-two percent of technology leaders are putting in place strategies to move to an intelligent systems future, and 16% are already committed, investing, and performing strongly.

It is estimated that this 16% could realize at least four times higher ROI than their peers who are equally committed but not organized for success in the same way. The report also found that the two main challenges to adopting an intelligent systems infrastructure are a lack of skills in the field and security concerns. "So when we did the simulation work with about 500 executives, and said, look, here are the characteristics, play with them, we got 4,000-plus simulations. Things like a common workflow platform, having an ecosystem for applications that matter, were really important parts of trying to break that lack of skill or lack of human resource in this journey," said Michael Gale, chief marketing officer at Wind River.

For some industries, the move to edge is essential for digital transformation, Gale added. "Digital transformation was an easy construct in finance, retail services business. It's really difficult to understand in industrial because you don't really have to have a lot of humans to be part of it. It's a machine-based environment," Gale said. "I think it's a realization the intelligent systems model is the transformation moment for the industrial sector. If you're going to have a full lifecycle intelligent systems business, you're going to be a leader. If you're still trying to do old things and wrap them with intelligent systems, you're not going to succeed; you have to undergo this full transformational workflow."


Also, these devices are now operating in global ecosystems, so organizations need to determine whether they match the regulatory requirements of the area. Security capabilities to address many of these concerns are now coming from the different cloud providers, and chipset manufacturers also offer different levels of security in their components. In edge computing, any data traversing the network back to the cloud or data center can also be secured through encryption against malicious attacks, Technostacks' Shah added.

What constitutes edge is now expanding

The edge computing field is now expanding into areas such as autonomous driving, real-time insight into what's going on in a plant or manufacturing environment, and even what's happening with particular critical systems in buildings or in spaces such as transportation and logistics, according to Pelino.


It is growing in any business that has a real-time need or distributed operations. "When it comes to the more distributed operations, you see a lot happening in retail. If you think about typical physical retailers that are trying to close that gap between the commerce world, they have so much technology now being inserted into those environments, whether it's just the point-of-sale system, and digital signage, and inventory tracking," IDC's McCarthy said.

The edge is being applied to new use cases as well. For example, Auterion builds drones that can be given to fire services. Whenever there's a fire, a drone immediately shoots footage of the area and sends it back before the fire department gets there, showing what kind of fire to prepare for and scanning for any people inside. Another new edge use case is the unmanned Boeing MQ-25 aircraft, which can autonomously connect with a fighter jet flying at over 500 miles per hour.

"While edge is getting a lot of attention, it is still not a replacement for cloud or other computing models; it's really a complement," McCarthy said. "The more that you can distribute some of these applications and the infrastructure underneath, it just enables you to do things that maybe you were constrained on before."

Also, with remote work on the rise and the aggressive acceleration of businesses leveraging digital services, edge computing is imperative for a cheaper and more reliable data processing architecture, according to Technostacks' Shah.





With technology's ongoing expansion into the cloud and the edge, even as applications themselves grow in complexity, the needs of organizations that rely on software to power their businesses evolve and grow as well. As we've been reporting in SD Times all year, security and governance are two areas in which a lot of time and money are being invested, and developers are increasingly asked to take on a larger role in the development life cycle. This year's list of companies to watch reflects those changes in the industry, as startups find gaps to fill and established companies pivot to areas of greater need. Here's the list of companies to keep an eye on in 2022.

APIsec
WHAT THEY DO: API security
WHY WE'RE WATCHING: APIsec provides a fully automated API security testing platform, giving DevOps and security teams continuous visibility and complete coverage for APIs. APIsec automates API testing, provides complete coverage of every endpoint and attack vector, and enables continuous visibility.

Cribl
WHAT THEY DO: Observability data collection and routing
WHY WE'RE WATCHING: Cribl's LogStream delivers a flexible solution that enables customers to choose what data they want to keep, in what format, and in which data store, with the assurance that they can also delay any or all of those decisions by keeping a complete copy in very low-cost storage.

Curiosity Software
WHAT THEY DO: Testing
WHY WE'RE WATCHING: With its mantra of "Don't trap business logic in a testing tool," Curiosity offers an open testing platform and is creating a "traceability lab" that links technologies across the whole SDLC. If something changes in one place, the impact of this change should be identified across requirements, tests, data, and beyond.

Komodor
WHAT THEY DO: Kubernetes troubleshooting
WHY WE'RE WATCHING: After raising $25 million, the company is positioning its platform as the single source of truth for understanding Kubernetes applications, whereas extant observability solutions tend to take an ops-centric view of things.

Lightstep
WHAT THEY DO: DevOps observability
WHY WE'RE WATCHING: With a new beginning under the ServiceNow umbrella (ServiceNow acquired Lightstep earlier this year), the company's ex-Googlers built Change Intelligence software to enable any developer, operator, or SRE to understand changes in their services' health and what caused those changes. This, the company says, will deliver on the promise of AIOps: to automate the process of investigating changes within complex systems.

Mabl
WHAT THEY DO: Automated end-to-end testing
WHY WE'RE WATCHING: Mabl is a low-code, intelligent test automation platform. Agile teams use mabl's SaaS platform for automated end-to-end testing that integrates directly into the entire development life cycle. Its low-code UI makes it easy to create, execute, and maintain software tests. The company's native auto-heal capability evolves tests with your changing UI, and comprehensive test results help users quickly resolve bugs before they reach production.

Push Technology
WHAT THEY DO: Intelligent event data platform
WHY WE'RE WATCHING: Winner of 12 industry awards in 12 months, the company's 6.7 release of its Diffusion platform raises the bar for messaging and event brokers.



Rezilion
WHAT THEY DO: Autonomous DevSecOps
WHY WE'RE WATCHING: With $30 million in September Series A funding in its coffers, Rezilion will build out its Validate vulnerability platform based on the company's Trust in Motion philosophy, and the company expects to add new solutions that help autonomously mitigate risk, patch detected vulnerabilities, and dynamically manage attack surfaces.

Rookout
WHAT THEY DO: Live debugging
WHY WE'RE WATCHING: The company this year launched its X-Ray Vision feature for debugging third-party code and its Agile Flame Graphs for profiling distributed applications in production, along with its integration with OpenTracing and its introduction of Live Logger. And CTO Liran Haimovitch's podcast, "The Production-First Mindset," is wildly popular.

Spectral
WHAT THEY DO: Code security
WHY WE'RE WATCHING: Spectral's platform helps developers ensure their code is secure by integrating with CI tools, by enabling their pre-commit tool to automate early issue detection, and by scanning during static builds with plugins for JAMStack, Webpack, Gatsby, Netlify, and more.

Spin Technology
WHAT THEY DO: Application security and ransomware protection
WHY WE'RE WATCHING: Spin Technology was highlighted as a top 5 online SaaS backup solution for the Microsoft Office 365 ecosystem by the Data Center Infrastructure Group. Spin uses artificial intelligence to improve threat intelligence, prevention, prediction, and protection. It can also enable faster ransomware attack detection and response, as well as automate backup and recovery, reducing the need for human cybersecurity experts and saving time and effort for enterprise organizations.

Swimm
WHAT THEY DO: Code documentation
WHY WE'RE WATCHING: Onboarding, outdated documentation, and project switching all slow developers down. By syncing documentation with code, Swimm enables developers to get up to speed more quickly on the projects they're assigned to.

Unqork
WHAT THEY DO: No-code platform
WHY WE'RE WATCHING: Enterprise-grade no-code application platforms such as Unqork have radically expanded the scope and capabilities of no-code. These platforms empower large organizations to rapidly develop and effectively manage sophisticated, scalable solutions without writing a single line of code. Unqork late last year raised $207 million in funding, bringing the company's valuation to $2 billion.

Advertisement: Sparx Systems. Collaborative Modeling: Keeping People Connected. Integrations include Application Lifecycle Management, Jazz, Jira, Confluence, Team Foundation Server, Wrike, ServiceNow, Autodesk, Bugzilla, Salesforce, SharePoint, Polarion, Dropbox, and other Enterprise Architect models. Modeling and Design Tools for Changing Worlds. sparxsystems.com



DEVOPS WATCH

Most dev teams aren't CI/CD experts

Organizations still early on DevOps adoption, research finds

BY JENNA SARGENT

Current DevOps tools and processes aren't cutting it for many organizations. Despite the industry having now supposedly largely moved to a continuous integration and continuous delivery (CI/CD) approach, it appears that the majority of development teams aren't actually practicing true CI/CD at an expert level. According to CloudBolt's latest report, "The Truth About DevOps in the Hybrid Cloud Journey," only four percent of respondents consider themselves to be experts in CI/CD. The majority of respondents (76%) consider their CI/CD maturity level "intermediate."

"CI/CD has evolved to the point where it is now widely accepted as the optimal approach for modern application development and deployment. Even though the concepts behind CI/CD have been around for decades, most organizations are still early on their journey," CloudBolt wrote in the report.

Further, one promise of CI/CD is that it enables applications to be deployed multiple times per day, but only five percent of respondents actually achieve that. For 69% of respondents, the time it takes to deploy a single CI/CD pipeline is days or weeks.

In addition, although 97% of respondents agreed that being able to test CI/CD infrastructure is crucial, and 85% do perform that testing, only 11% of respondents consider their CI/CD infrastructure to be reliable. The three factors respondents cited as obstacles to reliability were lack of automation, lack of consistency, and lack of proactivity. Respondents believe reliability can be improved by making provisioning faster through automation, continuously detecting infrastructure issues to reduce testing challenges, and proactively simplifying remediation of infrastructure issues.

"It is only through creating better speed, awareness, and repairs that CI/CD can finally live up to its promise," according to the report.

CircleCI webhooks enable dev teams to streamline workflows

BY JENNA SARGENT

CI/CD provider CircleCI has announced a new feature called CircleCI webhooks that allows customers to build integrations that work with job and workflow status notifications.

"As teams continue to increase the release frequency of complex apps and services, observable CI/CD pipelines are more critical than ever. With CircleCI webhooks, developers can build high-quality, customizable integrations across their CI/CD, analytics, monitoring, incident management, and other applications to enable more informed software decisions," said Apurva Joshi, chief product officer at CircleCI.

CircleCI believes this new offering will enable development teams to streamline their workflows and increase engineering velocity. For example, its integration with Sumo Logic utilizes webhooks to collect event information from CircleCI, which enables teams to better track the performance and health of their CI/CD pipelines.

"Collecting, enriching, and correlating data from across disparate sources in the modern DevOps toolchain is one of the biggest challenges of today's engineering teams," said Drew Horn, director of business development at Sumo Logic. "With CircleCI webhooks, developers can now, in just a few clicks, push detailed, automatically instrumented pipeline data to Sumo Logic's Continuous Intelligence Platform to benchmark and optimize their software delivery performance with deep insights and real-time analysis of the SDLC from end to end."

Users will be able to leverage webhooks to build automation systems using real-time notifications, visualize and analyze job and workflow events, and receive internal notifications when jobs are completed. As part of this launch, the company also announced an integration with Datadog's new CI Visibility tool, and will be one of the first CI/CD platforms to do so. CI Visibility can be used to visualize key metrics like the number of failed builds or average build time.
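A webhook integration of the kind described here is ultimately just an HTTP endpoint that receives event payloads. The sketch below (TypeScript on Node.js) shows the general shape; the payload field names used ("type", "workflow") are assumptions for illustration, so consult CircleCI's webhook documentation for the exact event schema and for verifying the signature header before trusting a request:

```typescript
// A minimal, hedged sketch of a CircleCI webhook consumer.
import { createServer, IncomingMessage, ServerResponse } from "http";

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (req.method !== "POST" || req.url !== "/webhooks/circleci") {
    res.statusCode = 404;
    res.end();
    return;
  }

  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    try {
      const event = JSON.parse(body);
      // Fan the event out to monitoring, analytics, or chat tools here.
      // Field names below are illustrative assumptions, not a confirmed schema.
      if (event.type === "workflow-completed") {
        console.log(`Workflow finished with status: ${event.workflow?.status}`);
      }
      res.statusCode = 204; // acknowledge receipt quickly
    } catch {
      res.statusCode = 400; // malformed payload
    }
    res.end();
  });
});

server.listen(8080, () => console.log("Listening for webhooks on :8080"));
```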




Reclaim the lost art of critique

Lost in the need to empathize with users of software is the need to do the same with colleagues and stakeholders who can offer criticism in a constructive way

BY SHELLEY ARMSTRONG

Shelley Armstrong is VP of User Experience and Design at Finastra.

When it comes to design, UX professionals are acutely aware of the importance of empathy. Understanding the pain points of the end user is key to creating products that serve their needs. Increased empathy ensures that a true view of those pain points is attained, and there are many tools and best practices that enable this. But a blind spot has grown out of all this goodness: forgetting to apply that very same principle to our colleagues, stakeholders, and peers.

Creative tension between cross-functional teams can be a positive thing, but only if we understand one another's goals and know how to critique work and receive criticism in a constructive way. In short, we can be tough on ideas while being kind to our colleagues. Retaining the ability to assess and reassess our work as designers, for the good of our craft and our customers, while enhancing trust and respect within our teams, is good for business. To help you get the best out of critique workshops, here are my top tips for creating an effective and robust review framework.

1. Define a framework for feedback sessions

In my 20 years of experience, I've learned that setting the scene for feedback sessions is essential for getting the most out of all involved. Not only does it provide clear boundaries for criticism, it also ensures a natural progression that aligns with a product roadmap, resulting in a deployable solution when the sessions are done. My tried and trusted trio of feedback stages is as follows:

Session one: Where are we directionally? The first session or design sprint should focus on whether the initial brief has been captured. It's not about what buttons should do or what icons should look like. It's a simple assessment from key stakeholders that should determine whether we have begun designing against the core business objectives for the solution. The result of this session should be an emphatic green light that allows us to begin the journey toward a minimum viable product.

Session two: Is everything functionally there? This session has to focus on the key requirements of the solution. We're still not thinking about the aesthetic minutiae, but rather ensuring that the functional components of the solution, i.e. core features, are in place. We have to ensure nothing has been neglected at this stage.

Session three: Let's get detailed. Now we can begin to drill down into text changes, icons, buttons, and menus. This is the stage to painstakingly pore over details and dig deep into why we feel the way we do about the smallest of design choices, then fine-tune some more.

The point of the three stages is to avoid an infinite review loop. Once a session is done and a direction agreed upon, we move on. If feedback that fits the context of session one is given in session three, we are simply too far along our roadmap to go back to the drawing board. This is why it is essential that all stakeholders are represented at each stage.

2. Avoid feedback pitfalls

There are three main errors we can make when it comes to feedback. Firstly, we can seek to progress our design without asking for feedback at all. This is undoubtedly the most damaging approach: if we don't take the initiative to ask for feedback, we miss a huge opportunity. We can't assume that we've designed the perfect solution, so we must take advantage of colleagues' experience and expertise to inform our decisions. Remember that all of us are better than any one of us.

Secondly, we can ask for feedback without listening. Sometimes review sessions can be looked on as a box-ticking exercise, especially when we feel that those giving the feedback are not close enough to our work to understand the desired outcomes or the reasoning behind our design choices. Going into feedback sessions with this mindset is a fatal mistake. By not listening to colleagues, we miss valuable insights that could improve our designs. Those invited to critique our work will also likely pick up on our disinterest and will not feel comfortable sharing their thoughts.

Lastly, we can ask for feedback in order to receive praise or validation when we feel we have done a good job. No matter how valid the points raised, this approach ensures that we are not in the proper mindset to hear them, which can lead to key flaws being missed and, crucially, failure to fulfill customer requirements.

3. Embrace criticism and increase your value

We need to remember that there is an essential business case for identifying flaws at the earliest possible opportunity. The earlier we discover them, the easier and cheaper they are to fix. Discovering an error at the design stage is far less costly than when a solution is in development, where remediation is around 15 times more expensive. The cost also scales with each progression along the product roadmap. For example, a flaw that is only realized when a solution reaches production can be 100 times more costly to fix than one surfaced at the design stage, as it will require significant re-work and integration by engineers.

One of the best tools in our toolbox for catching design errors is design peer reviews at critical points in the product development lifecycle. Putting time into making these as effective as possible is not only important for efficiency gains; they are also an essential component in raising an organization's bottom line.

4. Use data to back up design decisions

Before beginning our design work, we will have workshopped ideas, undertaken user testing, and followed UX best practice in gathering feedback. At this stage, it is essential that we accurately record feedback data, as this will be our ultimate ally when we are challenged on our design decisions. Data allows us to address concerns with empathy, rather than taking them personally, and gives us the confidence to show why the decisions we made were right, or at least taken in accordance with customer requirements.

Each design of a feature does not need to be backed up by data when there are well-understood design paradigms that inform it. For example, an application with multiple menus does not require us to justify the design of each menu when we know that a certain style meets functional requirements for 99% of users. If a challenge surfaces an unprecedented issue, then we have to be open to the possibility that an oversight has occurred. In this instance, we need to spin up a basic prototype and present it to users along with the original design for some A/B testing to gather fast feedback. This is where an agile approach is essential.

Adopting this approach means that we are not scared of failure, which is essential for innovation. We need to try things out when there is a case for doing so, but fail early and fail fast when we do, which is why effective feedback is so essential for businesses.




API management is a data integration problem

BY JENNA SARGENT

With data increasingly stuck behind different services, API management is becoming more and more of a data integration challenge. Currently most companies view API management as an access problem, but Avadhoot Kulkarni, product manager at Progress, recommends they shift their mindset and view it as a data problem instead.

According to Kulkarni, APIs are nothing but "just ways to expose your data in user consumable ways." As such, managing APIs leads to a number of data management challenges, including how to maintain data quality, data profiling, and data ownership. API developers and maintainers are concerned about data integrity and data consistency across their APIs, but the emergence of microservices architectures has helped break down monolith applications into smaller services, which creates data silos. "Information, which is critical for organizations for their decision making, is locked behind different services. And it's not easily accessible to the tooling that helps them integrate that data and get a business decision out of those," said Kulkarni.

One way to address that challenge is to give direct access to back-end data, but that comes with its own set of new challenges, according to Kulkarni. It can create issues with data ownership, as the user-role access constraints built into the API logic as a security measure might not carry over. This can be worked around for a small number of APIs by implementing custom integrations, but as the number of connections grows, it becomes less manageable.



In addition, being able to write those custom integrations for data warehouses, data lakes, or business intelligence tools requires deep knowledge of the API itself. This is another reason this solution isn't scalable, according to Kulkarni. "You sort of waste your engineering capacity on that instead of putting it on your business. You start spending on this side project, which is most likely not the best avenue for spending your resources," said Kulkarni.

Progress' Kulkarni predicts that more of the industry will soon accept this idea of API management being a data management concern. AI and machine learning have permeated so much of what is done in the tech space, and data-driven or data-aware decision making is becoming the norm. "API management will be treated more like a data management problem in the near future. So the questions about data quality, data profiling, how data gets moved between the different components, who has access to this data, as well as what privileges that particular person has on that data, like who can modify versus who can only read, how that data integrates with different solutions, that would be not only considered, but it will also be baked into the API architecture going forward," said Kulkarni.

Data mesh emerges According to Eric Madariaga, chief marketing officer at CData, data mesh is a technology that is emerging to help

companies with this challenge. A data mesh helps to decouple data entry points. Data mesh was included in ThoughtWorks’ Technology Radar, first in November 2019 in the “Assess” category, and then moving into the “Trial” category in the October 2020 Radar. ThoughtWorks defines data mesh as “an architectural and organizational paradigm that challenges the age-old assumption that we must centralize big analytical data to use it, have data all in one place or be managed by a centralized data team to deliver value.” According to ThoughtWorks, the concept is built on four principles: 1. Decentralized data ownership and architecture 2. Domain-oriented data served as a product 3. Self-serve data infrastructure 4. Federated governance, enabling interoperability between systems “Different data assets within an organization become surfaced through a mesh-like architecture, so that they can be consumed and integrated from a variety of different resources,” said Madariaga. The concept isn’t that far off from the original concept of APIs, Madariaga explained. The data mesh provides a common interface for communication between different data resources, much like how API infrastructures help applications communicate with each other. “It becomes kind of an entire architectural paradigm,” said Madariaga. “It’s something that large organiza-

Best practices for API creation According to Forrester’s Mooter, it’s best to develop APIs by looking “outside in” rather than “inside out.” What this means is that rather than looking inwards at how the IT systems are already implemented, API developers should look outwards towards who will actually be using the API and what their needs are. He explained that further down this planning process it might be necessary to start considering your internal IT constraints due to factors like cost, but the process “should always begin and largely be driven by end user need, not what's already in your IT system.” Another important consideration for API management is governance. Mooter explained that sometimes companies tend to either under-govern or over-govern, neither of which are ideal. Over-governing could result in things getting slowed down too much, while not having enough governance can result in targets not being met. “Finding that sweet spot is rather challenging for organizations,” said Mooter. z


Event streams also gaining popularity

According to David Mooter, senior analyst at research firm Forrester, event-driven architecture is another technology coming into play in the API management equation, specifically event streams. Mooter described a number of vendors already playing in this space of applying event streams to API management, such as IBM and Solace, and there is demand from clients. REST APIs have opened the doors for a lot of business innovation, but they do have their limitations, and event streams are helping to fill in some of those gaps. “It’s growing in popularity, but I’ve seen a lot more growth in demand for event streams not as an alternative to REST, but as an additional tool set that complements REST,” said Mooter. (A minimal consumer sketch follows this section.)

According to CData’s Madariaga, standardization of APIs is important, yet there are many different API frameworks in use today, such as REST and SOAP. “So there’s this huge landscape of how applications are talking to one another, and all kinds of different API interface standards,” said Madariaga. Madariaga believes it’s important to have a common language for these APIs to communicate through.
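Here is a minimal sketch of the event-stream side of that pairing, assuming a Kafka-compatible broker and the confluent-kafka Python client; the broker address, group ID, and topic name are illustrative.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # illustrative broker
    "group.id": "order-dashboard",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["order-events"])  # illustrative topic

try:
    while True:
        msg = consumer.poll(1.0)  # events arrive as they are produced
        if msg is None or msg.error():
            continue
        print(msg.value())  # react to the event instead of polling REST
finally:
    consumer.close()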

Democratizing data management

“It enables citizen integration and citizen developers and citizen integrators to use their tooling to work with APIs and data … If you want to increase adoption of your APIs, which you as a developer worked hard to build, providing tooling that gets all the way down to the end user is a very popular way, it’s a very important way to enable the broadest usage of your APIs,” said Madariaga.

The beauty of low-code is that it allows non-developers to build applications through a drag-and-drop UI, but according to Forrester’s Mooter, those UI portals aren’t very useful unless they’re able to talk to IT systems. Therefore, it’s important that citizen developers are able to connect via a robust suite of APIs.

According to Madariaga, there can be a lot of complexity in the way citizen developers connect to APIs. If they want to integrate with an API, they must first define inputs and outputs, and may also have to configure authentication settings. This can be a barrier to entry for those without the needed technical knowledge. “By abstracting that into, say, a common database-standard interface, you literally just drop in a driver and start working with back-end APIs like you would a standard traditional database, and every low-code and no-code application knows how to work with a traditional RDBMS database,” said Madariaga.

This abstraction not only benefits citizen developers, but saves traditional developers time as well. “Because really, ultimately, what happens is you’re submitting queries and getting back tables of data, and those tables are self-describing,” said Madariaga. “So they come back, and they provide the columns of data that are there, exposing the underlying APIs. You can do things like joins and aggregates, and you can do all that in way less code than it would take to go connect to an API itself, get data, do the transformations, do the integration, or anything else on the back end. It is a lot more complex when you are not using one of these API standards.” z
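As a rough sketch of that database metaphor, the snippet below queries an API-backed “table” through a standards-based ODBC driver from Python; the DSN, table, and column names are hypothetical.

import pyodbc

# One SQL statement replaces paging through the raw API, parsing
# JSON, and joining the results by hand. "CRM" is a made-up DSN.
conn = pyodbc.connect("DSN=CRM")
cursor = conn.cursor()
cursor.execute("""
    SELECT account, SUM(amount) AS total
    FROM Opportunities
    GROUP BY account
    ORDER BY total DESC
""")
for row in cursor.fetchall():
    print(row.account, row.total)  # self-describing columns
conn.close()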

How does your company help companies manage their APIs?

Eric Madariaga, chief marketing officer at CData
APIs are mere table stakes in the world of integration. They offer the promise of connectivity, but they require IT and developer resources to integrate, discouraging widespread adoption. At CData Software (www.cdata.com) we simplify API integration through Data APIs. Our universal data connectivity solutions deliver powerful database abstractions on APIs, connecting business users, analysts, and integrators with API data without code.

Leveraging a traditional database metaphor, Data APIs provide tables, views, and stored procedures that map to resources and operations exposed by each data source. With standard database interfaces like ODBC, JDBC, and ADO.NET, Data APIs act as a real-time universal translator between data consumers (Data Governance, Data Prep, Data Integration, Master Data Management, AI & ML, Data Catalog, Data Warehousing, Data Science, BI, Analytics, and even developer technologies like low-code integration) and all the applications and data sources used across an organization.

For DevOps and DataOps teams, standardization on Data APIs establishes a common semantic layer that simplifies the ingestion, curation, and orchestration of siloed data and helps support data democratization initiatives. Instead of having to code integrations against one-off APIs and continuously stay up to date on every API change, Data APIs provide a layer of abstraction that protects consumers from the constantly shifting elements.

At CData, our Data API-based solutions offer a tactical approach to connectivity, augmenting existing integration, processing, and analytics tooling to support broad data access capabilities. To learn more about Data APIs, standards-based drivers, and their impact on data integration, visit us online at www.cdata.com.

Avadhoot Kulkarni, product manager at Progress
Organizations, like never before, are embracing business intelligence and analytics solutions to drive decisions within their business. What organizations are learning, however, is that these solutions can only reach their full potential when populated with all relevant data sources. Empowering data access for your applications is what we do at Progress DataDirect. Our products offer secure data connectivity solutions for enterprises needing to integrate data across relational, big data, non-structured and cloud databases to make better and more informed decisions. And with unrivaled speed, Progress DataDirect ensures the data is always relevant and timely.

With APIs quickly becoming the standard across organizations for sharing data internally and externally, the Progress DataDirect Autonomous REST Connector delivers seamless, real-time connectivity between REST data and your ODBC/JDBC tools and applications. By opening your application’s data through API standards like REST, Progress DataDirect improves accessibility from widely used BI, analytics and development tools, as well as reducing the rework of established analytical and reporting tasks/jobs.

With Autonomous REST Connector, organizations can expect:
l A built-in user interface where organizations can quickly create the connectors that they and their end users need
l Out-of-the-box recipes offering connectivity for business-critical systems, ready to use as-is or easily customizable
l Reduced time/effort to adopt APIs and services across your organization
l Continued value from existing analytic and reporting tools when moving to APIs and services
l Reduced risk of vendor lock-in and poor data quality
l The ability to simplify and accelerate the adoption of your own APIs z





A guide to API management tools

FEATURED PROVIDERS

n CData: Connect, Integrate, and Automate your enterprise data. At CData (www.cdata.com), we simplify connectivity between all of the applications and data sources that power business operations, making it easier to unlock the strategic value of your data. By focusing on established standards for data access, our solutions plug into all of the business applications that you use today (like BI, Reporting, ETL, & Integration) and connect them with live data from just about anywhere.

n Progress: The Progress DataDirect Autonomous REST Connector offers intelligent data connectivity to API-sourced data from SQL-based applications such as BI, Analytics, and ETL tools. With Autonomous REST Connector, organizations can expect:
l Reduced time/effort to adopt APIs and services
l Continued value from existing analytic and reporting tools when moving to APIs and services
l Reduced risk of vendor lock-in and poor data quality
l The ability to simplify and accelerate the adoption of your own APIs.

n Apigee is an API management platform for modernizing IT infrastructure, building microservices and managing applications. The platform was acquired by Google in 2016 and added to the Google Cloud. It includes gateway, security, analytics, developer portal, and operations capabilities.

n Akana by Perforce provides an end-to-end API management solution for designing, implementing, securing, managing, monitoring, and publishing APIs. The Akana API Platform helps you create and publish secure, reliable APIs that are elegant, easy to consume, built the right way, and running as they should.

n Boomi’s API management solution provides a unified and scalable, cloud-based platform to centrally manage and enrich API interactions through their entire life cycle. With Boomi, users can rapidly configure any endpoint as an API, publish APIs on-premises or in the cloud, and manage APIs with traffic control and usage dashboards.

n CA Technologies, a Broadcom company, helps customers create an agile business by modernizing application architectures with APIs and microservices. Layer7 API Management provides the most trusted and complete capabilities across the API life cycle for development, orchestration, security, management, monitoring, deployment, discovery and consumption.

n Cloud Elements delivers an API integration platform on three pillars: “Elements” unify APIs with enhanced capabilities for authentication, discovery, search, error handling and API maintenance. “Formulas” combine those Elements to automate business processes across applications. “Virtual Data Hubs” provide a normalized view of data objects. n IBM API Connect on IBM Cloud is an API life cycle management offering that allows any organization to secure, manage and share APIs across cloud environments — including multi-cloud and hybrid environments. n Kong delivers a next-generation API and service life cycle management platform designed for modern architectures, including microservices, containers, cloud and serverless. Kong is building the future of service control platforms to intelligently broker information across services.

n Microsoft’s Azure API Management solution enables users to publish, manage, secure and analyze APIs in minutes. It features the ability to quickly create an API gateway and developer portal, the ability to manage all APIs in one place, insights into APIs, and connections to back-end services.

n MuleSoft’s Anypoint API Manager is designed to help users manage, monitor, analyze and secure APIs in a few simple steps. The manager enables users to proxy existing services or secure APIs with an API management gateway; add or remove pre-built or custom policies; deliver access management; provision access; and set alerts so users can respond proactively.

n Nevatech Sentinet is an enterprise-class API management platform written in .NET that is available for on-premises, cloud and hybrid environments. Sentinet supports industry SOAP and REST standards as well as Microsoft-specific technologies and includes an API Repository for API governance, API versioning, auto-discovery, description, publishing and life cycle management.

n Postman is the leading collaboration platform for API development, used by more than 7 million developers and 300,000+ companies worldwide. Postman allows users to design, mock, debug, test, document, monitor, and publish APIs — all from one place.

n Red Hat 3scale API Management gives control, visibility and flexibility to organizations seeking to create and deploy an API program. It features the comprehensive security, monetization, rate limiting, and community features that businesses seek, backed by Red Hat’s solid scalability and performance.

n SmartBear Software: With Swagger’s easy-to-use API development tools, SoapUI’s automated testing proficiency, AlertSite’s API monitoring and ServiceV’s mocking and virtualization capabilities, users can build, test, share and manage the best-performing APIs.

n SnapLogic Lifecycle API Management is an end-to-end solution designed for managing, scaling and controlling API consumption quickly, seamlessly and securely. Features include request/response transformations, API traffic control and productization, OAuth2 authentication support, advanced API analytics, threat detection, and the developer portal.

n TIBCO Cloud Mashery’s capabilities include API creation, productization, security, and analytics of an API program and community of developers. z



Time to go with the flow! Organizations today are turning to value streams to gauge the effectiveness of their work, reduce wait times and eliminate bottlenecks in their processes. Most importantly, they want to know: Is our software delivering value to our customers, and to us? VSM Times is a new community resource portal from the editors of SD Times, providing guides, tutorials and more on the subject of Value Stream Management.

Sign up today to stay ahead of the VSM curve! www.vsmtimes.com



Guest View BY DANNY ALLAN

Change your DevOps expectations

Danny Allan is Chief Technology Officer at Veeam.

For decades, the development and operations teams within companies were siloed. Developers created the software; operations tested and deployed it. But in 2009, IT consultant Patrick Debois coined the term “DevOps,” a merging of development and operations to improve communications, establish best practices and create feedback loops so organizations keep improving the overall process.

Up to three-quarters of all organizations use a DevOps blueprint today. However, only 11% of all respondents to a recent survey by Garden are completely happy with their development setups and workflows and think they’re operating as well as they could be. And in a DevOps Institute report, more than 50% described their DevOps transformation journeys as “very difficult.”

What is driving this disillusionment? Is DevOps a good idea on paper, but ineffective in practice? Is the process of automating tasks costing teams the time they could be using for creative innovation?

One reason for DevOps growing pains is our tendency to overestimate its role. And it makes sense that we overestimate DevOps — it’s loosely defined. There are thousands of organizations implementing DevOps, but no two define its role in exactly the same way. Before writing DevOps off, teams should clearly define what it means for their organization, reset their expectations, and develop a realistic game plan. There are three key themes to keep in mind as you define what DevOps can do for you.


Sustained culture shift

Humans have a natural resistance to change. One of the biggest roadblocks to implementing effective DevOps is the cultural shift, including skills shortages and limited-to-no automation. While leaders may express excitement at the launch of a DevOps initiative, it’s the sustained engagement that will drive long-term effectiveness.

For the Puppet 2021 State of DevOps Report, organizations self-reported where they are on their DevOps journey — from low- or mid-evolution to highly evolved. “Challenges related to culture are most acute among low-evolution organizations, but present persistent blockers among mid-evolution firms. Eighteen percent of high-evolution respondents report they have no cultural blockers,” according to the report.

If managers aren’t committed to a DevOps initiative, team members can lose focus, leading to diminished effectiveness for the project overall. And if that doesn’t work, sometimes getting beaten to market by a competitor is an effective way to make us realize that we must evolve to remain competitive.

Ongoing, not an end state

Departments are often measured by their progress against annual goals. For some, implementing a DevOps model might be one of those. However, DevOps is not an end state, nor does implementing DevOps mean 100% adoption of integrated and automated strategies. For example, you wouldn’t want someone to do open-heart surgery with software that has only been through a two-week cycle of development and testing. Some projects are better suited to a more traditional model. Maintaining both modalities — DevOps and traditional development — enables teams to benefit from both.

Creating over compliance

According to Garden research, U.S. companies are spending an estimated $61 billion a year on tasks many developers consider frustrating — like waiting for pipelines to run, waiting for builds and tests, and setting up, maintaining and debugging pipelines and automation — instead of innovation. Another common frustration is the task of achieving and maintaining detailed compliance requirements. While there is a shared responsibility between the DevOps team (which executes backups) and the Platform Ops team (which enables the backup to take place), Platform Ops teams are ultimately the ones responsible for compliance.

Organizations can reach the productivity DevOps promises by clearly defining realistic expectations. While this has been challenging in a remote work environment, I’m hopeful that as we prepare to return to the office in some capacity, there will be more opportunities for ad hoc collaboration that will catalyze excitement about DevOps — both among developers and organizational leadership. z



Guest View BY DAN PUPIUS

Creating healthy hybrid teams

In the post-COVID-19 era, the hybrid workplace — one in which some employees work from an office while others remain remote — presents a number of challenges for software development teams. Many people have relocated, some continue to have limited or no childcare, some have healthcare challenges that prevent them from returning to in-person work, while others are excited about returning to the office and getting back to “normal.”

Balancing the needs and desires of your developers while still ensuring the work gets done well is no easy task, but the challenge of creating a healthy hybrid team is well worth it. As a leader, how do you manage these varied circumstances while ensuring everyone is treated fairly? What can you do to ensure that employees in the office remain in sync with their remote colleagues?

Identify the challenges

Managing remote workers and in-person workers requires different approaches and communication methods, and managers must be aware of the ways in which proximity bias can impact the way their hybrid teams work — and what they can do to keep the bias at bay.

Proximity bias is one of the biggest challenges any hybrid team leader should be aware of. People who are in the office may be perceived as more productive because they are more visible, while remote workers who do amazing work are left to languish in the background. Those working in the office may get better projects because they’re top of mind for managers and team leaders, and junior team members working in person may receive more hands-on support without even asking for it.

In offices, developers often rely on overhearing conversations or swinging by each other’s desks to chat about a project. While information conveyed in this ambient manner is important, it is crucial to adopt a hybrid-friendly approach for those in the office as well as at home. Bringing some playfulness into an otherwise monotonous workday can have very positive effects for hybrid teams. In-office employees can socialize more easily during breaks or around the watercooler, while remote employees are left out of those conversations. Leaders will need to create, foster and nurture culture with their teams working in multiple places.

Beating back the bias

So we know that proximity bias is one of the worst parts of hybrid work, but how do you combat it within your own software development team?

First, intentionally and consistently check in with every team member. Get a daily pulse on their work, and how they are doing, through asynchronous mechanisms — whether that’s a daily email, a team virtual standup on Slack, or dedicated asynchronous check-in tools — and schedule recurring, real-time touchpoints through team meetings and face-to-face individual calls (even if it’s via Zoom).

Second, establish the precedent that everything must be written down. Even an FYI about an in-person conversation can go a long way toward ensuring that your remote team members are kept in the loop and don’t feel isolated. Written records of all employees’ work, whether it’s done remotely or in the office, also help diffuse unconscious bias during performance reviews.

Finally, be intentional about the culture you’re building. Office camaraderie and culture aren’t the same things, and if you rely on the former, your remote employees will feel left out and marginalized because they’re not in the “in crowd.” Use the “one remote, all remote” policy for meetings, even if several attendees are in the same room, by having everyone attend via their own video chat. It’s imperative that you make a conscious and consistent effort to bring remote employees into the fold and to level the playing field.

Dan Pupius is founder and CEO of Range, a workplace collaboration tool for hybrid and remote teams.


Is your hybrid team healthy?

If you observe all members of your team feeling connected and comfortable with one another, and if you know that everyone feels like they have the information they need to do their jobs well — and the work gets done on time with no delays — there’s a good chance your team is in great shape. And when in doubt? Ask. If there’s an issue, address it, and move forward knowing that you’ve done your part to make your hybrid team as healthy as can be. z


Industry Watch BY DAVID RUBINSTEIN

The password is … passwordless

David Rubinstein is editor-in-chief of SD Times.

Microsoft’s announcement last month that users of Outlook and other company software can now create passwordless login scenarios was welcome news. I think I speak for the entire computer-using world when I say this is just great.

Passwords are the bane of our existence. They really give the worst user experience of all. I’ve worked with systems that will prompt you that it’s time to change your password, which means I have to find the paper or computer file that has all my passwords and change it on that list. Then, of course, I have to remember that I changed the password. (I’m reminded when I log in with what I thought were the actual credentials but get the message back that says, “Your user name or password doesn’t match the information we have on file.”) Some people use password managers in the cloud to save their credentials, but as we know, those managers can be hacked as well.

Meanwhile, a May report by SecureAuth found that 53% of people use the same password for multiple accounts, making successful breaches even more dangerous. And of those, the most used passwords remain “123456” and “password.” Next in popularity are “12345678” and “qwerty.” Could we make it any easier for ne’er-do-wells to gain access to our companies’ data?

In a recent article, Aviad Mizrachi, co-founder and CTO of Frontegg, makers of an admin portal for SaaS applications, noted that the more you ratchet up security in your applications, the worse the user experience gets. “This means that we probably want to enforce some password complexity rules for our customers to enhance security levels. Needless to say, this adds more friction into the signup and login processes, while reducing customer satisfaction,” Mizrachi noted. In short, passwords are both poor for users and great for hackers.

In fact, more than half of companies polled said they have implemented alternatives to passwords, according to a recent report, “2021 The State of Password Security,” by Cybersecurity Insiders and HYPR. The report found that 64% cite user experience as a top reason for going passwordless, with 73% of respondents stating that a mobile-first passwordless multi-factor authentication (MFA) solution is preferred over traditional factors, such as passwords, push-based MFA, or hardware tokens.

On the security side, stopping credential-based attacks is the number one reason people say passwordless MFA is important, with 91% of respondents saying it is the primary reason. Yet, in a related finding, organizations using passwordless MFA can still require an underlying password, such as a code sent to a mobile device that must be input into the computer to gain access. Of respondents to the Cybersecurity Insiders survey, 61% said their ‘passwordless’ MFA solution requires either a shared secret, a one-time password or an SMS code, even as 96% of respondents consider eliminating shared secrets for authentication “essential” (44%) or “somewhat important” (52%).

And we haven’t yet touched on the amount of time service desk personnel spend on password issues. According to another recent report, the estimated cost in lost productivity averages $5.2 million annually per enterprise.

According to Mizrachi, “It’s pretty clear that the future belongs to passwordless. With more and more services and platforms becoming digitalized, the password authentication model is simply not practical anymore. Embracing the passwordless trend and implementing it as a default option in self-served and multi-tenant offerings (think user management) is no longer an option. The future belongs to passwordless.”

There are numerous passwordless solutions coming to market, including facial recognition, voice, fingerprint and security keys, according to the FIDO Alliance, which creates free and open standards for authentication. In fact, 36% of respondents to the Cybersecurity Insiders study said they are using their smartphones as a FIDO token for passwordless authentication.

For me, the best solution I’ve experienced is the fingerprint. I access my MacBook Pro using Touch ID fingerprint scans, and I can do just about any bank transaction I want on my cellphone by accessing my account with just my fingerprint. It’s quick, and never fails. All I have to do is remember which finger I used. z
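That shared-secret caveat is easy to see in code. Below is a minimal sketch using the third-party pyotp library: both the server and the user’s device must hold the same secret, which is exactly what FIDO-style public-key authentication eliminates.

import pyotp

# Minimal sketch of an OTP scheme; note the shared secret that both
# sides must store (and that an attacker could therefore steal).
secret = pyotp.random_base32()  # provisioned to BOTH server and phone
totp = pyotp.TOTP(secret)

code = totp.now()         # what the user's authenticator would display
print(totp.verify(code))  # True: the server checks with the same secret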




SD Times News on Monday: The latest news, news analysis and commentary delivered to your inbox!

• Reports on the newest technologies affecting enterprise developers
• Insights into the practices and innovations reshaping software development
• News from software providers, industry consortia, open source projects and more

Read SD Times News On Monday to keep up with everything happening in the software development industry. SUBSCRIBE TODAY!



Testing Showcase 2021

Sponsored by


Continuous testing isn’t optional anymore

BY LISA MORGAN

DevOps and CI/CD practices are maturing as organizations continue to shrink application delivery cycles. A common obstacle to meeting time-to-market goals is testing, either because it has not yet been integrated throughout the SDLC or because certain types of testing, such as performance testing and security testing, are still being done late in the SDLC.

Forrester Research VP and principal analyst Diego Lo Giudice estimates that only 20% to 25% of organizations are doing continuous testing (CT) at this time, and even their teams may not have attained the level of automation they want. “I have very large U.S. organizations saying, ‘We’re doing continuous delivery, we’ve automated unit testing, we’ve automated functional testing, we shifted those parts of the testing to the left, but we can’t leave performance testing to the end because it breaks the cycle,’” said Lo Giudice.

The entire point of shifting left is to minimize the number of bugs that flow through to QA and production. However, achieving that is not just a matter of developers doing more types of tests. It’s also about benefiting from testers’ expertise throughout the life cycle.

“The old way of doing QA is broken and ineffective. They simply focus on quality control, which is just detecting bugs after they’ve already been written. That’s not good enough and it’s too late. You must focus on preventing defects,” said Tim Harrison, VP of QA Services at software quality assurance consultancy SQA². “QA 2.0 extends beyond quality control and into seven other areas: requirements quality, design quality, code quality, process quality, infrastructure quality, domain knowledge and resource management.”

What’s holding companies back

Achieving CT is a matter of people, processes and technology. While some teams developing new applications have the benefit of baking CT in from the beginning, teams in a state of transition may struggle with change management issues.

“Unfortunately, a lot of organizations that hire their QA directly don’t invest in them. Whatever experience and skills they’re gaining is whatever they happen to come across in the regular course of business,” said SQA²’s Harrison. Companies tend to invest more heavily in development talent and training than in testing. Yet application quality is also a competitive issue.

“Testing has to become more of the stewardship that involves broader accountability and broader responsibility, so it’s not just the testers or the quality center, or the test center, but also a goal in the teams,” said Forrester’s Lo Giudice.

Also holding companies back are legacy systems and their associated technical debt.


“If you’ve got a legacy application and let’s say there are 100 or more test cases that you run on that application, just in terms of doing regression testing, you’ve got to take all those test cases, automate them, and then as you do future releases, you need to build the test cases for the new functionality or enhancements,” said Alan Zucker, founding principal of project management consultancy Project Management Essentials. “If the test cases that you wrote for the prior version of the application have now changed because we’ve modified something, you need to keep that stuff current.”

Perhaps the biggest obstacle to achieving CT is the unwillingness of some team members to adapt to change because they’re comfortable with the status quo. However, as Forrester’s Lo Giudice and some of his colleagues warn in a recent report, “Traditional software testing has no place in modern app delivery.”
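As a hedged illustration of automating one such regression case, here is a small pytest-style check using Selenium; the URL and element IDs are placeholders, not drawn from any team quoted here.

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_page_renders():
    # One legacy regression case captured as code so it can run on
    # every release instead of being re-executed by hand.
    driver = webdriver.Chrome()  # assumes a local Chrome setup
    try:
        driver.get("https://example.com/login")  # placeholder URL
        assert driver.find_element(By.ID, "username").is_displayed()
        assert driver.find_element(By.ID, "password").is_displayed()
    finally:
        driver.quit()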

Deliver value faster to customers

CT accelerates software delivery because code is no longer bouncing back and forth between developers and testers. Instead, team members work together to speed up processes by eliminating traditional cross-functional friction and automating more of the pipeline.

Manish Mathuria, founder and COO of digital engineering services company Infostretch, said that engineering teams benefit from instant feedback on code and functional quality, greater productivity and higher velocity, metrics that measure team and deployment effectiveness, and increased confidence about application quality at any point in time. The faster internal cycles, coupled with a relentless focus on software quality, translate to faster and greater value delivery to customers.

“We think QA should be embedded with a team, being part of the ceremony for Agile and Scrum, being part of planning, asking questions and getting clarification,” said SQA²’s Harrison. “It’s critical for QA to be involved from the beginning and providing that valuable feedback because it prevents bugs down the line.”
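Much of that pipeline automation reduces to running the test suite on every commit. A minimal sketch, assuming GitHub Actions as the CI system (one common choice among many; the workflow below is illustrative, not prescriptive):

name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest -q   # fast feedback on every commit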

Automation plays a bigger role

Testing teams have been automating tests for decades, but the digital era requires even more automation to ensure faster release cycles without sacrificing application quality.

“It takes time to invest in it, but [automation] reduces costs because as you go through the various cycles, being promoted from dev to QA to staging to prod, rather than having to run those regression cycles manually, which can be very expensive, you can invest some man-hours in automation and then just run the automation scripts,” said SQA²’s Harrison. “It’s definitely super valuable, not just for the immediate cycle but for down the road. You have to know that a feature doesn’t just work well now but also in the future as you change other areas of functionality.”

However, one cannot just “set and forget” test automation, especially given the dynamic nature of modern applications. Quite often, organizations find that pass rates degrade over time, and if corrective action isn’t taken, the pass rate eventually becomes unacceptable. To avoid that, SQA² has a process it calls “behavior-based testing,” or BBT, which is similar to behavior-driven development (BDD) but focused on quality assurance. It’s a way of developing test cases that ensures comprehensive quantitative coverage of requirements. If a requirement is included in a Gherkin-type test base, the different permutations of test cases can be extrapolated. For example, to test a log-in form, one must test for combinations of valid and invalid username, valid and invalid password, and user submissions of valid and/or invalid data.

“Once you have this set up, you’re able to have a living document of test cases, and this enables you to be very quick and Agile as things change in the application,” said SQA²’s Harrison. “This also then leads to automation because you can draw up automation directly from these contexts, events, and outcomes.”

If something needed to be added to the fictional log-in form mentioned above, one could simply add another context within the given statement and then write a small code snippet that automates that portion. All the test cases in automation get updated with the new addition, which simplifies automation maintenance. “QA is not falling behind because they’re actually able to keep up with the pace of development and provide that automation on a continuous basis while keeping the pass rates high,” said Harrison.
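Here is a hedged sketch of what such a Gherkin-type test base might look like for the log-in form example; the feature text is illustrative, not SQA²’s actual artifact.

Feature: Log-in form

  Scenario Outline: Submitting credentials
    Given a registered user exists
    When I submit username "<username>" and password "<password>"
    Then I should see <outcome>

    Examples:
      | username | password | outcome                |
      | valid    | valid    | the account dashboard  |
      | valid    | invalid  | an invalid-login error |
      | invalid  | valid    | an invalid-login error |
      | invalid  | invalid  | an invalid-login error |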

Service virtualization saves time

Service virtualization is another speed enhancer because one no longer waits for resources to be provisioned or competes with other teams for access to resources. One can simply mock up what’s needed in a service virtualization tool.

“I remember working on a critical application one time where everything had gone great in test, and then when we moved the application changes to prod, things ground to a halt because the configurations in the upper and lower environments differed,” said Project Management Essentials’ Zucker. “With service virtualization, that goes away.”

Within the context of CT, service virtualization can kick off automatically, triggered by a developer pushing a feature out to a branch. “If you’re doing some integration testing on a feature and you change something in the API, you’re able to know that a new bug is affected by the feature change that was submitted. It makes testing both faster and more reliable,” said SQA²’s Harrison. “You’re able to pinpoint where the problems are, understand they are affected by the new feature, and be able to give that feedback to developers much quicker.”

Infostretch’s Mathuria considers service virtualization a “key requirement.” “Service virtualization plays a key role in eliminating the direct dependency and helps the team members move forward with their tasks,” said Mathuria. “Software automation engineers start the process of automation of the application by mocking the back-end systems, whether UI, API, end points or database interaction. Service virtualization also automates some of the edge scenarios.”
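As a minimal sketch of the underlying idea (real service virtualization tools do far more), a team might stand up a mocked back end like this; Flask and the endpoint shown are illustrative assumptions, not a tool named in this article.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/orders/<order_id>")
def mock_order(order_id):
    # Return a canned response shaped like the real service's contract,
    # so dependent tests can run before that service is provisioned.
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run(port=8080)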

AI and machine learning are the future

Vendors have already started embedding AI and machine learning into their products to facilitate more effective continuous testing and to speed application delivery cycles even further. The greatest value comes from pattern recognition pinpointing problem areas and providing recommendations for improving testing effectiveness and efficiency.

For example, Infostretch’s Mathuria has observed that AI and machine learning help with test optimization, recommendations on reusability of the code base, and test execution analysis. “As the test suites are increasing day by day, it is important to achieve the right level of coverage with a minimum regression suite, so it’s very critical to ensure that there are no redundant test scenarios,” said Mathuria of test optimization.

Since test execution produces a large set of log files, AI and machine learning can be used to analyze them and make sense of the different logs. Mathuria said this helps with error categorization, setup and configuration issues, recommendations, and deducing specific patterns.

SQA²’s Harrison has been impressed with webpage structure analysis capabilities that learn a website and can detect a breaking change versus an intended change. However, he warned that if XPaths have been used, such as to refer to a button that has just moved, the tool may automatically update the automation based on the change, creating more brittle XPaths than were intended.

The use cases for AI and machine learning are virtually limitless, but they are not a wholesale replacement for quality control personnel. They’re “assistive” capabilities that help minimize speed-quality tradeoffs. z



Parasoft moves into the realm of business performance

BY ELLIOT LUBER

During the COVID-19 crisis, people got a much better sense of the challenges facing enterprise computing when workers — used to being within the company’s firewall — were suddenly telecommuting en masse, working on remote teams and learning new applications. Test and development teams have been painfully aware of this kind of transformation for decades and were, for the most part, ready to respond. They may remember that there was a time when using enterprise apps meant one person at a time logging into a dumb terminal linked to a mainframe computer.

Developers once wrote static code for static systems, where the system itself was a protective shell around the data. But today there are few boundaries between the enterprise app based in someone’s cloud and a sea of other components generated by this and other apps in other clouds, connected through users on phones and browsers across the supply and demand chain. So the old question pops up: How does one do proper testing and compliance when someone cuts a billion-dollar deal via Facebook chat?

Companies are transitioning to reusable code: component strings that can be triggered automatically across applications or the IoT to simplify and secure transactions for end users, or other machine-triggered events, retaining data for compliance and mining for such things as security and process improvement. This automation, of course, does not simplify transactions, only the use of software tools that hide complexities. Additional processes are necessary, just less-human processes.

As a result, we often don’t know exactly what is triggering what inside modern networks, or how this is actually impacting the business, until errors show up or tests flag an issue. We may discover unwanted users designing ingenious schemes to illegally seize control of assets they will hold for ransom, hopefully before they fully engage. All this puts a high importance on continuous testing scenarios that apply artificial intelligence and machine learning to try to determine whether alerts are the result of coding errors, business anomalies, security breaches or processing errors, and then learn to spot similar events.

Thus, the red flag is just the beginning of detective work to route the issue toward the right skills for a swift resolution, not necessarily the “aha” moment itself. Some offer AI and ML as the solution, putting artificial detectives to work on the most difficult system triggers. Parasoft sees this differently. We want our clients to put their best minds on the toughest problems. This is where you need creative problem solving, and this is where highly skilled individuals with solid domain expertise shine most brightly, where they are challenged and motivated. This is why you develop talent.

If we can save clients time and effort, it should be on the more mundane issues typically handled by your lesser sleuths, the ones of more marginal performance who are going to be demotivated by the routine drudgery of chasing “the usual suspects” — the roughly 80% of test failures that can easily be attributed to a coding or process-type error. Keeping the test bed clean will keep test maintenance costs down to earth and reduce job creep, while growing the system’s ability to learn and evolve. This is where AI and ML play best, showing solid return. It’s not THE solution, but it gives you a handle by focusing your people on the deeper business questions that arise, where they can make a real difference. Plus, this makes the human hours far more productive, leaving your best talent to innovate — not just test code, but business processes, with an eye toward ongoing performance.

Testing is not a coding spell check; it’s about seeing the multiple big pictures of development, security, process and performance — the deeper impacts of issues that are harder to define. As with supply chain issues, one lowers the river’s flow to focus and take a closer look at the underlying rocks that impede flow. Then you put your best people to work removing them. The end result is higher performance and greater efficiency. It’s much the same with continuous testing. It all comes back to performance, where the rubber of process meets the road of markets, determining profitability. At the end of the day, it’s about your business far more than your technology. i




The future of testing is DevOps speed with managed risk

BY SD TIMES AND SAUCE LABS

Just as big data transformed the way organizations approach intelligence and cloud transformed the way they think about infrastructure, DevOps is fundamentally altering the way organizations think about software development. In a DevOps world, software development is no longer a balancing act between speed and quality but a quest for both, as forward-thinking development teams aim to increase both release frequency and release velocity while ensuring they have the utmost confidence in production.

The driving force, as always, is the customer. Users expect applications to have the latest and best features and functionality at all times. Not tomorrow. Not after the next planned software release. Now. And always. Just don’t even think about impacting application performance or usability to deliver those updates, or those customers you’re catering to won’t be customers for long.

These demanding customer expectations, combined with technological advancements in software development made possible by DevOps and CI/CD, have developers focused on pushing smaller and smaller increments of code into production faster and faster, all while product and QA teams grow increasingly focused on ensuring that the user experience remains as close to flawless as possible at all times.

Against this backdrop, progressive development teams are benefiting from an emerging new approach to testing, one that augments traditional front-end functional testing with error monitoring in production. The combination of these two test methodologies into a single comprehensive approach enables developers to benefit from deep automation of application intent prior to production while also layering in multiple production safety nets in the form of error reporting, rollbacks, and user analytics. “In the modern era of DevOps-driven development, a testing strategy that does not extend into production is simply not complete,” said John Kelly, CTO of Sauce Labs.

The ability to pair front-end functional testing in dev and test environments with error monitoring in production was the driving force behind Sauce Labs’ recent acquisition of Backtrace, a provider of best-in-class error monitoring solutions for software teams. Sauce Labs is already well-known for delivering one of the industry’s leading test automation platforms. Now, with the addition of Backtrace, the company enables developers of web, mobile, and gaming applications to quickly observe and remediate errors in production as well, often before they’re even discovered by end users.

For development teams looking to keep up with the pressure to accelerate the release of products into highly competitive and demanding markets, confidence is everything, according to Kelly. “As a developer, knowing that you can quickly discover and fix any bugs that make it to production, and often before production, is a tremendous source of empowerment,” said Kelly. “Having the safety net of error monitoring gives you a level of confidence that you just don’t have otherwise, and that in turn enables you to move with greater pace and deliver releases with greater frequency and velocity.”

None of which is to say that the core components of front-end test automation are any less important to a comprehensive testing strategy, Kelly said. “It’s and, not or,” he said. “The development teams we speak to every day are still heavily focused on automating application intent in dev and test environments. But they’re also realizing that there’s no substitute for understanding how the application functions and performs in the production environment, and so they’re taking all the investments they’ve made in cross-browser testing, in mobile app testing, in API testing, and in UI and visual testing and they’re now augmenting them with error monitoring in production.”

In fact, Kelly says that error monitoring itself can be leveraged directly in test and dev environments to create additional value for developers. “When you deploy it directly in dev and test environments, error monitoring really complements Selenium, Appium, and other scripted front-end test frameworks by providing an additional layer of depth and visibility into the root cause of an application failure,” said Kelly.

Importantly, according to Kelly, developers can also leverage the insights gleaned from error monitoring in production to expand and improve future test coverage during the development and test integration phases of CI/CD. “It’s about enabling developers to shift both left and right and create the kind of continuous feedback loop that’s necessary to mitigate risk and drive quality at speed,” he said.

Ultimately, according to Kelly, that ability to combine test signals, understand customer experience insights, and create continuous improvement loops represents the future of testing in the DevOps era. To learn more about how Sauce Labs is helping organizations usher in a new era of testing, visit saucelabs.com. i




Software Testing Showcase Featured Companies

Parasoft: helps organizations continuously deliver quality software with its market-proven, integrated, automated software testing solutions. Supporting embedded, enterprise, and IoT markets, Parasoft’s technologies reduce time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and complete code coverage, into the CI/CD pipeline. Bringing all this together, Parasoft’s award-winning reporting and analytics dashboard delivers a centralized view of quality, enabling organizations to deliver confidently and succeed in today’s most strategic ecosystems.

n A1qa: is a pure-play QA and software testing company. Since 2003, we have been helping global customers, both Fortune 500 enterprises and mid-size organizations, deliver top-rate software products and create exceptional end-user experience. n Applause: gives you the speed and flexibility to scale testing and expand coverage on demand. That’s why the company is a testing best practice for digital innovators across the globe, and an integral part of modern SDLCs in every industry. Applause delivers a harmonized approach to digital quality through our Product Excellence Platform to help organizations see immediate benefits.

n Applitools: is on a mission to help test automation, DevOps, and software engineering teams release mobile and web apps that are visually perfect. We provide the only commercial-grade, visual AI-based test cloud that instantly validates any application’s user interface in a fully automated manner, across all customer engagement points and digital platforms — using our groundbreaking image-processing stack, developed from scratch in-house.

n AutonomIQ: can discover, ingest, and transform English language artifacts into immediately executable, sharable and manageable Test Scripts. Using deep-learning and AI algorithms, AutonomIQ detects natural language documents and changes, automates and enables self-healing, and provides advanced diagnostics. In real world situations, AutonomIQ has been shown to provide ~90% improvement in speed and quality compared to existing tools and techniques. n Broadcom: offers next-generation, integrated continuous testing solutions that automate the most difficult testing activities — from requirements engineering through test design automation, service virtualization and intelligent orchestration. Broadcom’s comprehensive solutions help organizations eliminate testing bottlenecks impacting their DevOps and continuous delivery practices to test at the speed of agile, and build better apps, faster.


and development initiatives—security, safety-critical, Agile, DevOps, and continuous testing.

n Sauce Labs: Sauce Labs is the leading provider of continuous testing solutions that help developers build products that work exactly as intended on every browser, OS, and device, every single time. With solutions spanning live and automated testing, mobile app and mobile beta testing, UI/visual testing and API testing, low-code testing, and error monitoring and reporting in production, Sauce Labs gives organizations the test coverage they need from development all the way to production.

n Eggplant: helps organizations put users at the center of software testing to create amazing digital experiences that drive user adoption, conversion, and retention. Our Digital Automation Intelligence Suite interacts with software exactly like a real user to test the true user experience, and auto-generates tests at the UI and API level for greater productivity. Eggplant solutions enable customers to test the full user experience, including performance and usability.

n Froglogic: is well-known for its automated testing suite Squish with its flagship product Squish GUI Tester, the market-leading automated testing tool for GUI applications based on a wide variety of languages, operating systems and web browsers. In addition, froglogic offers the professional, cross-platform C, C++, C# and Tcl code analysis tool Coco Code Coverage.

n Functionize: is a cloud-based autonomous testing solution that uses AI and ML to provide intelligent test automation. Our Adaptive Language Processing (ALP) converts test plans written in plain English into fully functional test scripts. It can even use the output of your test management system. With autonomous testing, you now have an intelligent test agent (ITA), which is the perfect regression tester — focused, tireless, and driven, but still intelligent.

n IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery lifecycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more.

n Mabl: enables continuous testing with an auto-healing automation framework and maintenance-free test infrastructure. mabl advances traditional UI testing using proprietary machine learning models to automatically identify application issues, including javascript errors, visual regressions, broken links, increased latency, and more.


n Micro Focus: is a leading global enterprise software company with a world-class testing portfolio that helps customers accelerate their application delivery and ensure quality and security at every stage of the application lifecycle — from the first backlog item to the user experience in production.

n Kobiton: solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

n NowSecure: delivers fully automated mobile app security and privacy testing with the speed, accuracy, and efficiency necessary for Agile and DevSecOps environments. NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps.

n Orasi: is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology.

n Perfecto: offers a cloud-based continuous testing platform that takes mobile and web testing to the next level. It features a continuous quality lab with smart self-healing capabilities; test authoring, management, validation and debugging of even advanced and hard-to-test business scenarios; test execution simulations; and smart analysis. For mobile testing, users can test against more than 3,000 real devices, and web developers can boost their test portfolio with cross-browser testing in the cloud.

n ProdPerfect: fully automates the development and maintenance of browser-level testing using live user data. ProdPerfect analyzes your web traffic to create aggregated flows of common user behavior, which we build into an end-to-end testing suite that we maintain and expand over time, which kicks off automatically from CI.

n Progress: Telerik Test Studio is a test-automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. n QASymphony: The company’s qTest is a test case management solution that integrates with popular development tools. QASymphony offers qTest eXplorer for teams doing exploratory testing.

n QMetry: Its Intelligent Digital Quality Platform is designed for Agile & DevOps teams to build, manage & deploy quality software faster & better. QMetry has the complete agile testing solution with test management, automation, and powerful quality analytics for digital enterprises.

n Sauce Labs: provides the world’s largest cloud-based platform for the continuous testing of web and mobile applications. Founded

by the original creator of Selenium, Sauce Labs helps companies accelerate software development cycles, improve application quality, and deploy with confidence across hundreds of browser / OS platforms, including Windows, Linux, iOS, Android & Mac OS X. Optimized for Continuous integration (CI), Continuous delivery (CD), and DevOps, the Sauce Labs platform is built to handle the most secure data from its customers.

n SmartBear: provides a range of frictionless tools to help testers and developers deliver robust test automation strategies. With powerful test planning, test creation, test data management, test execution, and test environment solutions, SmartBear is paving the way for teams to deliver automated quality at both the UI and API layer. SmartBear automation tools ensure functional, performance, and security correctness within your deployment process, integrating with tools like Jenkins, TeamCity, and more.

n Sofy: is built from the ground up to be a no-code test automation platform that uses AI-powered testing to enable “create once and run anywhere” tests without writing a single line of code. Using our library of real devices, you can run manual, automated UI testing and exploratory tests, and ensure fidelity between your test and production environments.

n Synopsys: Through its Software Integrity platform, Synopsys provides a comprehensive suite of testing solutions for rapidly finding and fixing critical security vulnerabilities, quality defects, and compliance issues throughout the SDLC. n TechExcel: DevTest is a sophisticated quality-management solution used by development and QA teams of all sizes to manage every aspect of their testing processes. n Test.ai: offers AI-first powered test automation tools to help QA testers, developers, and other teams meet their goals to release apps faster and with higher quality. Quality assurance can now run at DevOps speed. Scale to testing and supporting thousands of apps continuously across dozens of platforms. n TestRigor: is an automated regression testing tool that allows VPs of Engineering and Directors of QA to improve test coverage to 100%, speed up testing schedules by at least four weeks, and increase team productivity by up to 210% — all for less than their entire outsourced QA department.

n Tricentis: is recognized by both Forrester and Gartner as a leader in software test automation, functional testing, and continuous testing. Our integrated software testing solution, Tricentis Tosca, provides a unique Model-based Test Automation and Test Case Design approach to functional test automation — encompassing risk-based testing, test data management and provisioning, service virtualization, API testing and more. i


