SD Times - December 2017


DECEMBER 2017 • VOL. 2, ISSUE 6 • $9.95 • www.sdtimes.com



Contents

VOLUME 2, ISSUE 6 • DECEMBER 2017

FEATURES
8   3 areas where traditional APMs leave developers exposed
30  DevSecOps: Baking security into development
39  PDF 2.0 offers many improvements to the PDF specification

NEWS
6   News Watch
13  Cloud-native apps need microservices
15  Customer experience v. user experience
16  The importance of OAuth 2.0
18  IoT is dead – long live IoT!
20  8 emerging technologies... and the threats they may pose
23  OutSystems brings DevOps to low-code development
24  Enforcing enterprise security policies on Hybrid Cloud
26  New: Azure Databricks, Visual Studio App Center, VS Live Share, and more
29  Opening up modeling to the entire enterprise
36  DevOps improves application security

COLUMNS
44  ANALYST VIEW by Arnal Dayaratna: Rethinking digitized preparedness
45  GUEST VIEW by Scott Shipp: Software reflects teams that build it
46  INDUSTRY WATCH by David Rubinstein: Of Serverless, Backendless and Codeless

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2017 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.


www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS: Christina Cardoza, ccardoza@d2emerge.com; Jenna Sargent, jsargent@d2emerge.com
INTERN: Ian Schafer, ischafer@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst
CONTRIBUTING ANALYSTS: Rob Enderle, Michael Facemire, Mike Gualtieri, Peter Thorne

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, adtraffic@d2emerge.com
LIST SERVICES: Shauna Koehler, skoehler@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com
WESTERN U.S., WESTERN CANADA, EASTERN ASIA, AUSTRALIA, INDIA: Paula F. Miller, 925-831-3803, pmiller@d2emerge.com

PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com



NEWS WATCH

Apache Kafka reaches milestone with version 1.0.0
The Apache Software Foundation released version 1.0.0 of its Kafka distributed data streaming platform last month, with the first full version number indicating the foundation’s confidence that Kafka is ready for major professional use. “Apache Kafka is playing a bigger role as companies are moving to real-time streaming and embracing stream processing,” Jun Rao, vice president of the Apache Kafka team, said in the announcement. “The 1.0.0 release is an important milestone for the Apache Kafka community as we’re committed to making it ready for enterprise adoption.” The Apache Foundation highlighted features of Kafka 1.0.0 aimed at enterprises, like the ability to publish and subscribe to streams of data at a massive scale; real-time stream processing with exactly-once semantics, which avoids sending the same messages multiple times in the case of a connection error; and long-term storage of data streams. Alongside various bug fixes and general improvements, the update brings performance improvements to the implementation of TLS and CRC32C, Java 9 support, faster controlled shutdown, better JBOD support and exactly-once semantics.
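Exactly-once delivery is something applications opt into on the producer side. The following sketch is illustrative only: it uses the third-party kafkajs Node.js client rather than Kafka’s own Java client, and the broker address, topic and transactional ID are invented for the example.

import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'news-watch-demo', brokers: ['localhost:9092'] });

// Idempotence plus a transactional ID is what enables exactly-once delivery.
const producer = kafka.producer({
  idempotent: true,
  maxInFlightRequests: 1,
  transactionalId: 'orders-tx',
});

async function publishOrder() {
  await producer.connect();
  const txn = await producer.transaction();
  try {
    await txn.send({ topic: 'orders', messages: [{ key: 'order-1', value: 'created' }] });
    await txn.commit(); // either every message in the transaction becomes visible, or none does
  } catch (err) {
    await txn.abort();
    throw err;
  } finally {
    await producer.disconnect();
  }
}

publishOrder().catch(console.error);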

W3C: WebRTC 1.0 is now feature complete
The World Wide Web Consortium (W3C) has announced

Web Real-Time Communications (WebRTC) version 1.0 is now feature complete. The specification will now move into a Candidate Recommendation period to address any feedback from the community before moving to Proposed Recommendation status. WebRTC is a set of protocols and APIs for enabling real-time communications such as live video chat between browsers and mobile applications. “The WebRTC framework provides the building blocks from which app developers can seamlessly add video chat in gaming, entertainment, and enterprise applications,” the W3C writes in a statement. The W3C has been working on WebRTC along with the Internet Engineering Task Force (IETF) since 2011. The standard will enable web browsers to access cameras and microphones, and set up audio and video calls. In addition to its real-time

audio and video capabilities, the organization says WebRTC also brings peer-to-peer data exchange to the Web. WebRTC 1.0 is expected to move to Proposed Recommendation status by April 15, 2018. Going forward, the WebRTC working group will turn its focus to interoperability. Other updates will include improving the main WebRTC 1.0 API; finalizing designs of other associated specifications, such as managing media streams; new designs and features for the next WebRTC version; and adding new functionalities.
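The browser APIs behind those capabilities are getUserMedia and RTCPeerConnection. The sketch below is a simplified TypeScript illustration; the STUN server URL is a placeholder, and the signal callback stands in for whatever channel an app uses to exchange session descriptions, since WebRTC deliberately leaves signaling to the application.

// Capture camera and microphone, then create an SDP offer for a remote peer.
async function startCall(signal: (msg: object) => void): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });

  // Attach local tracks so the remote peer receives our audio and video.
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Forward ICE candidates over the app's own signaling channel.
  pc.onicecandidate = event => {
    if (event.candidate) signal({ candidate: event.candidate });
  };

  // Create and send the session description (the SDP offer).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal({ sdp: pc.localDescription });

  return pc;
}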

Google announces Gmail add-ons for developers
Google wants Gmail to be more than just a place where users receive and send emails. The company is announcing

new tools that will allow users to do more from their inbox. The Google G Suite team is releasing Gmail add-ons, a new extensibility framework for developers, and ten ready-to-use enterprise integrations. Earlier this year, Google announced Gmail add-ons as part of a limited developer preview. Add-ons enable developers to add app functionality directly into Gmail and have it run natively on the web and in Android, with iOS support coming soon. The team announced it is expanding the preview to include all developers. “Gmail Add-ons let you integrate your app into Gmail and extend Gmail to handle quick actions. They are built using native UI context cards that can include simple text dialogs, images, links, buttons and forms. The add-on appears when relevant, and the user is just a click away from your app’s rich and integrated functionality,” Wesley Chun, developer advocate at G Suite, wrote in a post.
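For developers, a contextual card is a small server-side function written in Apps Script. The fragment below is a minimal, hypothetical sketch: the function name is whatever the add-on’s manifest registers as its contextual trigger, the card text is invented, and CardService is declared only so the snippet stands alone as TypeScript.

declare const CardService: any; // Apps Script global, declared here so the sketch compiles standalone

// Runs when the user opens a message; the returned card is rendered in Gmail's add-on panel.
function onGmailMessageOpen(event: object) {
  const card = CardService.newCardBuilder()
    .setHeader(CardService.newCardHeader().setTitle('Example add-on'))
    .addSection(
      CardService.newCardSection().addWidget(
        CardService.newTextParagraph().setText('Contextual content shown next to the open message.')
      )
    )
    .build();
  return [card];
}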

Marko moves to the JS Foundation
The JS Foundation has announced that it is bringing on eBay’s Marko library. Marko is a library designed to streamline web development by creating UI components. eBay developed the library in 2012 to meet its need to support UI components and asynchronous rendering for Node.js applications. “eBay has been a longtime core contributor to open source technology. And, we firmly believe that we should use technology to empower and globally connect people,” said Patrick Steele-Idem, principal engineer at eBay. “By housing Marko under the JS Foundation, we hope more developers will be able to collaborate and contribute to the long-term goals of the project.” The JS Foundation will be providing support and governance for ongoing projects. It will also be providing promotional support in an attempt to grow Marko’s community. “By moving Marko to the JS Foundation, we feel that we will be able to more closely align with other projects in the JavaScript ecosystem,” wrote Steele-Idem. “In addition, we want to make it clear that Marko has and always will be open to outside contributions and outside maintainers. While we have seen great growth in the Marko community, we believe there is still a lot of potential yet to be unlocked. Through neutral governance and close ties with other prominent projects, we believe the JS Foundation will allow the Marko community to grow and flourish.”




GitHub launches Community Forum, Marketplace trials
GitHub is officially launching the GitHub Community Forum as well as free trials in the GitHub Marketplace. The company first announced the features at its GitHub Universe event last month. The forums add a new social aspect to the version control and web hosting platform, which GitHub says will be valuable for developers hoping to “tap into the collective knowledge of the world’s largest developer community — and get help from GitHub staff, too.” The forums will also play host to how-tos, tips and tricks, and users will be ranked based on their level of contribution to the community and expertise. In addition, a free trial of GitHub Marketplace is now available, giving users 14 days to try out a selection of six featured apps and familiarize themselves with the marketplace and integrating the apps into their workflow. The current featured selections are Travis CI, Waffle, Dependabot, ZenHub, Codecov and Better Code Hub.

Linux Foundation introducing new AI project, Acumos
The Linux Foundation introduced the Acumos Project as part of its effort to help democratize the building, sharing and deploying of AI apps as the technology advances. The new project will work to provide a common framework and platform as well as a marketplace for businesses. “An open and connected AI platform will promote collaboration as developers and companies look to define the future of AI,” said Jim Zemlin, executive director at The Linux Foundation. “Because the platform is open source, it will be accessible to anyone with an interest in AI and machine learning, and customizable to meet specific needs. We expect interest from organizations doing work with autonomous vehicles, drones, content curation and analytics, and much more.” The foundation is still working on the organization and governance model for Acumos, which is expected to launch early next year with an initial focus on application development and microservices. The platform will enable developers to edit, integrate, compose, package, train and deploy AI and machine learning applications. The marketplace will enable businesses to access, use and enhance applications. The Acumos team will also work to create an industry standard for AI apps and reusable models.

Mozilla teams to consolidate browser documentation
Mozilla is teaming up with Microsoft, Google, W3C, Samsung and other industry leaders as part of a joint effort to “make web development a little easier” by bringing documentation for multiple browsers to their MDN Web Docs educational platform. The project will be led by a newly formed Product Advisory Board for MDN that will be in charge of handling relations between the companies involved, keeping documentation up-to-date, keeping MDN browser-agnostic, and keeping developers aware of updates to the platform and documents. In addition to representatives from the participating corporations, the group has put out a call for active community members who would like to serve on the board. As part of their support for the project, Microsoft has already redirected over 7,700 pages of their MSDN documentation library to corresponding pages on Mozilla’s MDN Web Docs. Developers at Microsoft had taken preliminary steps earlier this year by providing over 5,000 community edits to MDN Web Docs’ information about their Edge browser.

“Just like with end users, we think it’s well overdue for developers to have a simpler view of web standards documentation,” Erika Doyle Navara, dev writer with the Microsoft Edge Team, wrote in a post on the Microsoft development blog. “Developers shouldn’t have to chase down API documentation across standards bodies, browser vendors, and third parties — there should be a single, canonical source which is community-maintained and supported by all major vendors.”

Report: Open source leaves Java apps vulnerable to attacks
Java developers should be more aware of the open source software components they put in their applications if they want to avoid a security breach. A new report released by Veracode, a CA Technologies company, revealed 88% of Java apps include at least one vulnerable component,

and about 53.3% of Java apps rely on a vulnerable version of the Commons Collections components. “The universal use of components in application development means that when a single vulnerability in a single component is disclosed, that vulnerability now has the potential to impact thousands of applications — making many of them breachable with a single exploit,” said Chris Wysopal, CTO of CA Veracode. According to the company, the main reason applications become vulnerable is because developers don’t often patch their applications when new vulnerabilities are found or new versions of their components are released. The report, the 2017 State of Software Security Report, found only 28% of organizations conduct composition analysis to track and monitor their application’s components. This becomes a problem when about 75% of application code is made up of open source components. ❚


Traditional APMs do not provide developers with the information they need to fix incidents early in the life cycle
BY SIMON MAPLE

If you’re responsible for creating or managing a customer-facing application for your organization, you have a long list of things to worry about. A scenario like this may actually be at the top of the list: you’ve recently launched a new version of your application to the world, and customers start finding serious issues in production. Excessive latency in the application is destroying its UX. While the APM you’re using is picking up on some of these issues, it is catching them too late. Your customers are already complaining directly to the company, and voicing their displeasure on social media, and your management team is asking, “How did this happen?” This nightmare scenario is the kind of thing that even the best companies in the world can experience. Google, for example, found that traffic dropped by 20 percent with just an extra half-second in search page generation time. Amazon discovered that each additional 100ms of latency resulted in 1 percent fewer sales. If even these giants can fall victim to application issues in production, it can happen to anyone. Relying solely on traditional APMs may be leaving you open to risk in three key areas: • Finding performance issues early

• Diagnosing the root cause of performance issues
• Fixing performance issues

Finding performance issues
One of the biggest questions for those managing application performance is whether they are finding issues as early as possible. The answer for most organizations is no. In fact, 75 percent of developers report that their performance issues affect their end users in production. APM solutions are traditionally designed to work in production only.

Traditional APMs aren’t built for the testing phase. While traditional APMs are generally built to focus on production environments, some organizations try to use them in the earlier stages of development and test. What they often find is that the metrics and reporting aren’t effective for these stages. A production-focused APM will provide a statistical analysis of your application performance that is essentially an aggregated result of thousands of transactions. This can help point to major issues that may be affecting performance, but




because there isn’t any transaction detail, it can be a very vague indicator of the problem. Bottom line: traditional APMs are indicators of trends, but those trends aren’t always real problems. Developers are disconnected from how their code changes affect overall performance. In many companies, we still have a situation where developers aren’t tied directly to the performance of the applications they build. They build their applications and throw them over the wall to an operations team in production, and when that team finds issues, they are thrown back to the development team to fix. The DevOps movement has urged companies to try to get away from this by creating one big virtual team and to “shift left” some of the functions and responsibilities from operations to development. But even in DevOps environments, we still see much of the testing happening in production, and the majority of APM tools geared to operations or performance experts. Because of this, developers don’t always feel they are ultimately responsible for delivering performant code, as long as they are meeting functional requirements. This has created a bit of a divide between development and operations teams that still makes it difficult to find issues. In order to bridge these two teams, developers should have more of an ability to gain insight and influence the performance of the applications they’re building.


Today’s production-focused APMs don’t give them the ability to do that.
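What that kind of developer-facing feedback can look like does not have to be elaborate. The sketch below is a generic illustration, not any particular APM product: a few lines of Express middleware that flag slow endpoints while the code is still on a developer’s machine, with the framework and the 200 ms threshold chosen purely for the example.

import express from 'express';

const app = express();

// Time every request and warn on anything slower than 200 ms, so a developer
// sees the regression immediately instead of hearing about it from production.
app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    if (elapsedMs > 200) {
      console.warn(`[slow] ${req.method} ${req.originalUrl} took ${elapsedMs.toFixed(1)} ms`);
    }
  });
  next();
});

app.get('/orders', (_req, res) => {
  res.json({ orders: [] }); // stand-in handler for the example
});

app.listen(3000);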

Diagnosing performance issues
Once you’ve found an application issue, you have the difficult task of diagnosing the source of the issue. This is a task that becomes more and more difficult as you move away from the development process into production. Teams that test too late are forced to diagnose performance issues that are happening in complex infrastructures and scenarios. In reality, 86 percent of root causes are application-level issues that will manifest in development environments, and scale with the environment. It makes sense therefore to try to catch these application-level issues early

when it’s easier to find the root cause. Overly complex scenarios: Once an application makes it to production, it is a small part of a large, often complex system. It is no longer just about whether the application works, but about all of the technologies that surround the app, from the network infrastructure to distributed systems. A Dynatrace study found that on average, a single transaction uses 82 different types of technology. This makes trying to diagnose the source of a performance issue in production like finding a needle in a haystack. Because this complexity makes it difficult to accurately diagnose the source of the issue, most problems aren’t actually solved, they’re simply patched.


Worse yet, hastily delivered fixes often break something else, and with every day that passes, the problem gets worse and more convoluted.

No Root-Cause Analysis: As we already covered, traditional APMs are high-level enough to tell you that a problem exists and point to the general area that is affected. They’re built to monitor incredibly complex infrastructures, so a general health report is immensely useful in production scenarios for operations teams. Traditional APMs are not, however, as valuable for development teams looking to diagnose the source of the issue because they don’t offer a detailed root-cause analysis. When an issue is detected and a ticket created and passed on to a development team, actionable data still needs to be mined by performance experts using other toolsets, likely in a staged environment. The issue may be conditional and hard to reproduce, delaying the diagnosis even further, especially if you don’t have any affected customers volunteering to be guinea pigs. All of this again leads to situations where an issue may be patched versus fixed.

A research paper shows that time spent fixing bugs grows exponentially the closer you get to production.

Fixing performance issues
This is the area left most exposed by traditional APMs, as issues are ultimately fixed by developers. Production-focused APMs don’t line up with the workflow of a developer’s day-to-day, so adoption and usage among development teams is a challenge. Developers are already dealing with tight deadlines and product pressures, so the complexity of traditional APMs simply does not make it worth their time to figure out how to get actionable data. On top of that, traditional APMs are seen as absolute overkill in a development environment. After all, they’re built for operations, not development, and have many features that developers don’t need. They alert you to an issue and point you in a general direction, but they don’t provide low-level data presentations that cater to the needs of developers fixing the issues. Because of that, companies run into the following problems when trying to fix issues with traditional APMs.

No Fix Validation Available. Setting up and configuring a traditional APM on a development machine is a large task for potentially little return, as they don’t provide features that aid in isolating, fixing and testing an issue in a development environment. Traditional APMs are unable to provide developers with immediate feedback so they can see how code changes are impacting the performance of the application they’re working on. In order to verify a bug fix, development teams have to wait until it’s been deployed to production. The fix-test cycle is incredibly costly in time and business impact if the bug is live. Long feedback loops between the owner of the code and the manifestation of issues in production complicate a fix.

The process for fixing problematic code often involves going to the author of the code with the assumption that he or she can easily pick up where they left off. However, because it can often take months for code to be released into production from when it’s developed, the developers aren’t seeing this problematic code until long after it has been written. At this point, the code may be unfamiliar, even to the developer who wrote it, and others may have built on top of the problematic code, making it part of a big spaghetti codebase. In the time it takes to research, replicate and develop a fix for an issue, hundreds and thousands of customers can be affected.

Takeaways
The way that most companies currently handle performance management is broken. When you wait until production to catch issues with your application, your customers will find them before you do. And when you take issues that are found in production and send them back to development teams to fix, it will take longer and cost more than if you had fixed them in the development or test phases to begin with. Every team, particularly DevOps-focused teams, should take a close look at how they can improve the speed with which they find, diagnose and fix performance issues.

If you’re not testing early, your customers are your testers. If you’re subjecting real users to production code that hasn’t been thoroughly performance tested, this is a great recipe for losing your customers. If you’re testing early with production APMs, you’re not using the right tools. Traditional APMs are built for operations, and are essential to production, but are not built for developers in testing and development. Instead, look for APM tools built specifically for development and test. Organizations that want to shift left to catch performance issues earlier need to also shift their toolset towards development-focused solutions. ❚

Simon Maple is director of Developer Relations at ZeroTurnaround.


INDUSTRY SPOTLIGHT

Cloud-native apps need microservices
BY LISA MORGAN

Many of today’s software teams build cloud-native apps so they can deliver software faster. However, there’s a general belief that cloud-native apps must be built with microservices when in some cases it may be hard to justify the additional complexity. “DevOps and cloud-native principles are a pre-requisite for building microservices but microservices is not a pre-requisite for DevOps and building cloud-native apps,” said Siamak Sadeghianfar, OpenShift technical marketing manager at Red Hat. “Depending on the use case, it may be smarter for you to build cloud-native applications without microservices.”

Why you might not need microservices
Many organizations still build or maintain monolithic applications; however, competitive pressures are necessitating faster software delivery. “Microservices are not a prerequisite for building cloud-native apps,” said Sadeghianfar. “Some of our customers are using an approach we call ‘Fast Monolith’ which applies cloud-native practices to monolithic applications.” For example, KeyBank in North America transformed a Java application into a cloud-native application and deployed it using containers. Rather than breaking the application into 40 pieces, which it would have done had the company used microservices, KeyBank broke its application into two pieces and deployed them on the Red Hat OpenShift container application platform. The “Fast Monolith” approach enabled the company to shrink delivery cycles from three months to one week. For years, Agile software teams have successfully reduced software delivery cycles and improved product quality. However, DevOps is necessary to enable continuous delivery all the way to production.

“In Waterfall teams, people aren’t collaborating effectively and they’re still delivering software in a linear fashion even though they’re applying Agile processes,” said Sadeghianfar. “The lack of proper testing and automation make delivering software into production a very painful experience.” By comparison, DevOps requires a lot of automation and cross-functional collaboration to accelerate release cycles beyond Agile. “The cloud simplifies automation and DevOps practices. The use case


determines whether the use of microservices is necessary or not,” said Sadeghianfar. “You can build structured monoliths, employ DevOps practices, do continuous delivery and automate every step involved in delivery process without inheriting the complexity of microservices.”

How to do Fast Monolith right
DevOps teams that want to speed software delivery without moving to a microservices architecture are wise to try the Fast Monolith approach on an app that has a complex Waterfall architecture and an appropriate use case. “Cloud-native applications don’t begin or end with microservices,” said Sadeghianfar. “You have to start with DevOps, and to do DevOps right, you have to restructure for DevOps. If you

don’t do that, you’re setting yourself up for failure.” A common mistake is to try to do DevOps using traditional Waterfall team structures. A better approach is to adopt DevOps team structures and practices because continuous cross-functional collaboration is necessary throughout the application lifecycle. “Delivering software faster isn’t just about the architecture of your application. The first thing is you have to get out of the way of developers,” said Sadeghianfar. “In today’s market, developers are very expensive resources so you can’t have them waiting several weeks or months for VMs, databases or other resources.” To move at the desired speed, developers need the ability to provision the resources they need on demand. In addition, every step of the software delivery process has to be automated end-to-end so DevOps teams can build pipelines and achieve continuous delivery. “You need to get to a point where you can release software into production without disruption,” said Sadeghianfar. “The biggest obstacle is getting rid of the manual checks and balances we’ve had in place just because we’re emotionally attached to them. What you want to get to are things like rolling deployments and canary releases that allow you to deploy directly into production with very little risk and zero downtime.” If teams can do all of that and achieve delivery schedules that meet customer requirements, then Fast Monolith may be the most effective way to build applications. However, if daily or intra-day releases are necessary, it’s time to consider microservices for that specific team or application. ❚

Content provided by SD Times and Red Hat.


Customer experience v. user experience

UX defines how people interact with traditional software; customer experience is how that interaction makes them feel
BY JENNA SARGENT

In recent years, developing a great user experience has become critical for success in software development. With so many different options for products, users have the power and freedom to choose the companies with which they have the best experience. “UX is an established discipline,” said Jason Moccia, CEO of OneSpring. “It has been around for many years in the software development space. I think what’s happening now, and will happen over the next year or two, is it is becoming a more important component of the software development life cycle.” Companies are beginning to understand that if customers have a bad experience interacting with their product, they may not return. Having a bad user interface can be just as bad for business as having a bad product. More recently, customer experience, or CX, has emerged as a new trend. “When I think of the definition between both of them, I think of CX as the kind of emotional side of how customers interact with a company, whereas UX is all about interaction,” said Moccia. The UX is what drives a customer or user to use a product, but the CX aspect is how that interaction makes them feel. According to Moccia, a key part of CX is the journey map, which follows the

journey of a customer as he or she interacts with a company’s product. In UX, they look at ‘personas,’ which are essentially representations of users. Both are very important to look at and take into consideration when developing software. UX/CX are increasingly important due to the impact they can have on a product’s success. “What you are trying to do is impact the bottom line,” Moccia said. “You are trying to increase the emotional positivity somebody will have while interacting with your company.” Now that UX/CX is so crucial to building software that users will love, how can companies fit it into their existing development cycle without having to reinvent the wheel? Moccia said that companies are still trying to figure out where CX fits in their organization. Since it is a relatively new concept, working it into organizations can be tricky. “I think over the next year there is going to be more of a definition within organizations on what CX is, what it looks like for a company, and who oversees CX within that company,” said Moccia. According to Moccia, understanding the roles and responsibilities in regards to UX/CX is a challenge for many. Depending on what development life cycle discipline you follow, UX/CX will be addressed at different points. He gave the example of Agile development,

where UX and usability testing would typically come after development. He said they are seeing more companies bringing UX/CX to the front and building a prototype to show to product users and then developing on that prototype. “You have to adapt to an organization and what they are trying to achieve on the customer experience side and make sure you get that right before you start building,” said Moccia. Moccia says that development teams need to be open-minded to different disciplines in order to be successful in implementing good UX/CX. “There is a lot more that goes into it up front so when we talk to developers about it there is a general resistance because it alters the premise of Scrum in their mind,” he told SD Times. “In software there is somewhat of a resistance because they look at it as a waterfall and what I always tell people is that it is not waterfall, you can still break apart user experience into iterative, bite-sized portions,” says Moccia. “We’re going to focus on just one portion of an application and really get that right and then give it to the development team to build and then we will focus on another one. So there are ways to slice this and I would say the challenge for developers, just being open to that and working really with an end in mind.” ❚



The importance of OAuth 2.0
BY CHRISTINA CARDOZA

“There’s an app for that,” but not all apps are created equal. Users expect an endless number of applications to make their lives easier, but they forget to take into account what kind of security measures those applications provide. In addition, these applications and services often connect to other applications and services, meaning if a user gives one application access to their credentials, they are also giving the other connected services access. While developers are all too familiar with the implications a data breach will have on their application’s reputation, few of them are taking advantage of the tools designed to prevent one. OAuth is an open, secure data-sharing standard designed to protect user data by providing access to that data while keeping a user’s identity private. The standard was created in 2006, and updated to version 2.0 in 2012. Top technology companies such as Google, Yahoo, and Amazon have moved towards OAuth 2.0 for authentication and authorization purposes. But despite its benefits, not enough developers are moving toward the standard. SD Times spoke with Jim King, chief security officer for the financial data company Finicity, to talk about why developers should care about the OAuth 2.0 standard. What is the importance of OAuth 2.0 in the app industry?

King: OAuth 2.0 is the most secure data

sharing standard on the market. The two-factor nature and use of tokenization prevents the single factor disclosure of accounts — a less secure method that was used in the past with 1.0. The single-factor authentication method, which backed up a single credential on sites like Google Drive or file servers, was easy to compromise since hackers only needed to obtain the one piece of information to gain access. OAuth 2.0 requires more levels of authentication to give access to a user. How long has this version of OAuth been around and what type of improvements have been made to 2.0?

OAuth 2.0 has been around since 2012 and was created just two years after OAuth 1.0. The second version has quite a few differences, such as increased OAuth flows and short-lived tokens, and it is not backwards compatible with 1.0. Instead, OAuth 2.0 is a new-andimproved 1.0. Its biggest benefits are that it is more streamlined, less complicated and easier to build into an app. If OAuth 2.0 was developed in 2012, why is this something we are still talking about or something that developers still need to be aware of?

We’re exchanging, using and storing more data than ever before. It’s not even been six years, but the difference between the world of data then and now is night and day. OAuth 2.0 is relevant now more than ever because it’s still the most secure option on the market, and there is still room for adoption. The fact that it’s been around five years, and that no better alternative has replaced it, is remarkable in the tech space and shows just how useful and well-made it is.

How does OAuth 2.0 exactly help organizations secure data?

Tokens associated with access — as with other data — can be given or stolen if not secured properly. While OAuth makes authentication relatively secure, it’s only as strong as the refresh interval or the method in which it is secured. Otherwise bad actors can “borrow” or use the token before a refresh cycle to impersonate the intended user. The beauty of OAuth lies in the fact that tokens can easily and quickly be revoked by the server side as needed due to account suspension or abuse of the service. Why haven’t all apps transitioned to OAuth 2.0?
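Concretely, the token exchange King describes is a pair of HTTPS POSTs. The sketch below is a generic, hypothetical illustration of the OAuth 2.0 authorization-code grant and a token refresh, not any specific provider’s API: the token endpoint, client credentials and redirect URI are placeholders.

const TOKEN_URL = 'https://auth.example.com/oauth2/token'; // placeholder endpoint

// Exchange the one-time authorization code for access and refresh tokens.
async function exchangeCode(code: string): Promise<any> {
  const res = await fetch(TOKEN_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      redirect_uri: 'https://app.example.com/callback',
      client_id: 'example-client-id',
      client_secret: 'example-client-secret',
    }),
  });
  return res.json(); // { access_token, refresh_token, expires_in, ... }
}

// Short-lived access tokens are renewed with the refresh token; the server
// can revoke that refresh token at any time to cut off a misbehaving client.
async function refreshAccessToken(refreshToken: string): Promise<any> {
  const res = await fetch(TOKEN_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: 'example-client-id',
      client_secret: 'example-client-secret',
    }),
  });
  return res.json();
}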

Many companies and developers don’t embrace the security OAuth 2.0 provides until after they need it — for example, after a data breach. Part of the reason adoption has been slow is that some companies think OAuth 2.0 is vendor-centric and there is a large cost associated with it. However, companies can do it themselves and develop in-house. For example, here at Finicity we are using our own OAuth server to simplify our processes and reduce spend. Additionally, we are leveraging OAuth as a secure means of accessing financial data from some of the largest institutions in North America.

SAML and OpenID are well-known alternatives to OAuth 2.0, but they are primarily used for enterprise applications. Comparatively, OAuth 2.0 is a full authentication framework and is leveraged primarily for API or third-party based solutions. Therefore, the best choice of solution usually depends on how it will be used. What else should software developers be aware of when it comes to OAuth 2.0?

In light of recent breaches like Yahoo and Equifax, security is more important than ever. Developers need to be proactive in their approach and focus on security as an inherent part of their job description. OAuth 2.0 is a great step in the right direction, and developers would be wise to leverage OAuth 2.0 in every app they can bolt it on to. ❚



IoT is dead – long live IoT!
BY ALAN GRIFFITHS

Many industrial applications have been developed to utilize IoT devices and the data they produce. They generally use cloud hosting, analytics and edge computing technology, often provided and connected via an IoT Platform — a set of tools and run-time systems hosted on the cloud that enable the development and deployment of a ‘complete IoT solution.’ And many more standalone IoT applications will be developed in the next few years. Most IoT implementations to date are special projects rather than standard solutions, so a system integrator or service provider is required to scope, design

and manage the implementation. But as IoT becomes pervasive, people in industry will expect their business systems to ‘just work.’ That is, their existing enterprise systems should be IoT-enabled just as they now expect them to be Internet-enabled and data-enabled. People will expect to access the IoT from within their existing systems, or IoT technology will be used to create enterprise systems that deliver business value. There will still be a market for IoT platforms and components but most users will consume them in a packaged form. A large number of providers — over 350 by our count — have developed

The six layers of the Industrial Internet of Things
1. The ‘thing’ or mechanical part — a motor, excavator or part of a building.
2. Sensors and actuators with embedded software — this makes the thing into a ‘smart connected product’ — sometimes called an ‘IoT device.’
3. Connectivity — enables ‘products’ to communicate with back-end systems. In large, complex systems this often includes ‘edge computers’ that act as collection points for the data and provide pre-processing before data is sent to the cloud.
4. Product access and data routing — systems that control and manage who has access to what.
5. Product-specific software applications — this layer makes appropriate connections and integration with other enterprise applications.
6. Enterprise applications — for example, ERP, PLM and MRO (maintenance, repair and operation) systems. This includes analytics, often provided through cloud computing.

IoT platforms, or components from which applications can be built, often relying on an ecosystem of partners to deliver the complete solution. In most cases, partners provide the IoT devices, cloud storage and computing, edge computing, enterprise applications and the overall project management or systems integration. The main IIoT players come from many directions including industrial automation, cloud computing, PLM/ CADCAM, and specialist IoT providers that have developed their own platforms. Cloud computing, with its global reach, simple subscription pricing and extensive capability, is one of the main reasons that IoT is now becoming so popular. Microprocessor companies like ARM support IoT from the ground up using embedded software and sensors. In fact, the availability of embedded software and sensors is another reason why Industrial IoT is growing so quickly. This is summarized by Rhonda Dirvin, Director IoT Vertical Markets for ARM: ‘The first driver for the spread of IIoT was the proliferation of mobile phone, which drove down the cost of sensors — cameras, GPS, accelerometers, etc. This drove down the cost of acquiring data. At the same time, Cloud computing emerged, which provided a platform where this data could be stored and analyzed relatively cheaply. Altogether this provides the basic framework for IoT. Other technologies such as Big Data, AI and Machine Learning are now coming into play to help make sense of this data, taking it to a whole new level.’ Communications companies provide connectivity products from global cellular through satellite to low-power wide area and short-range communications. And because of the huge amount of data collected by billions of sensors, edge or fog computing providers are springing up to pre-process the data close to its source (i.e. on the edge) so that less is transmitted up to the cloud.
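The pre-processing itself can be as simple as turning a minute’s worth of raw readings into one summary record per sensor before anything leaves the gateway. The sketch below is a plain TypeScript illustration of that idea, with the data shapes invented for the example; a real gateway would then forward the summaries over whatever protocol its IoT platform expects.

interface Reading { sensorId: string; value: number; timestamp: number; }
interface Summary { sensorId: string; count: number; min: number; max: number; mean: number; windowEnd: number; }

// Collapse a window of raw readings into one summary per sensor,
// so the gateway uploads a handful of records instead of thousands.
function summarize(readings: Reading[]): Summary[] {
  const bySensor = new Map<string, Reading[]>();
  for (const r of readings) {
    const group = bySensor.get(r.sensorId) ?? [];
    group.push(r);
    bySensor.set(r.sensorId, group);
  }

  const summaries: Summary[] = [];
  for (const [sensorId, group] of bySensor) {
    const values = group.map(r => r.value);
    summaries.push({
      sensorId,
      count: values.length,
      min: Math.min(...values),
      max: Math.max(...values),
      mean: values.reduce((a, b) => a + b, 0) / values.length,
      windowEnd: Math.max(...group.map(r => r.timestamp)),
    });
  }
  return summaries;
}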


018,19_SDT06.qxp_Layout 1 11/17/17 9:52 AM Page 19

www.sdtimes.com

December 2017

SD Times

The main players in Industrial IoT ■ Industrial automation companies such as GE, Rockwell Automation and Siemens have known for many years the value of control systems and data acquisition, and it even has a name — ‘SCADA’ (Supervisory Control and Data Acquisition). They now offer IoT capability, often integrated with their SCADA systems. ■ The major ‘cloud computing’ providers; Amazon, Google, IBM, Microsoft and Oracle, also offer comprehensive IoT platforms. ■ The big four PLM companies — Autodesk, Dassault Systèmes, PTC and Siemens — provide IoT platforms. ■ Specialist IoT providers such as Aeris, Arkessa, Electric Imp, Exosite and RTI have developed their own IoT products. ■ At the ‘microprocessor end’ of the IoT stack, companies like ARM, Intel and Texas Instruments are expanding their solutions to support IoT ‘from the ground up’ using embedded software and sensors. ■ Communications companies like AT&T, BT and NTT provide connectivity products from global cellular through satellite to low power wide area and short-range communications. ■ The routers that collect data from devices and route it to the Internet (or wherever it needs to go) from Aruba, Cisco, Dell, Ericsson, Juniper and others, are increasing their capability and flexibility with SDN (Software Defined Networking). Electric Imp offers products and services to add connectivity to products and create software for the product and its back-end systems. Cisco recently announced ACI (Application Centric Infrastructure) which will eliminate the need to upgrade physical hardware.

Many enterprise providers are already adding IoT capability to their systems as a component to improve engineering, production, and product capabilities. Examples are SAP Leonardo, Salesforce IoT Cloud, IFS with its IoT Business Connector and GE’s Brilliant Factory initiative. Autodesk, Dassault and PTC also have strong IoT offerings that complement their PLM/CADCAM systems. Even pure IoT companies are finding most success when they provide a business benefit; for example, Exosite’s IoT

■ ‘Edge’ or ‘fog’ computing is becoming critical to industrial IoT solutions because of the huge amounts of data being collected by billions of sensors. Edge computers pre-process this data before sending it to the cloud. This reduces data traffic, increases resilience and improves security by ‘ring-fencing’ groups of devices. Edge computing is a major thrust for Cisco, Foghorn and HPE who all have products that help consolidate and analyse data before it is sent up to the IoT platform in the cloud. The cloud computing providers also provide edge compute capability by different means, for example Amazon has AWS Greengrass and Microsoft has Azure IoT Edge. ■ Enterprise software — providers of critical enterprise systems such as SAP and IFS (ERP systems) and Salesforce (CRM) are providing IoT capability. ■ BIM/AEC providers such as Bentley, Intergraph (Hexagon) and Trimble are using IoT in their solutions in several ways including logistics, security, wearables and drones. ■ System Integrators / Management Consultants — most major consulting / systems integration companies include IIoT as an important part of ‘digital transformation’ — leveraging digital technology such as IoT to radically change the way a company works and does business. This moves them towards ‘Industry 4.0’ and is an essential part of the Fourth Industrial Revolution (4IR). For example, Accenture, Capgemini, Deloitte, EY, KPMG, McKinsey and Wipro all provide IoT services/solutions. ❚

platform is a key component of the Voice of the Machine IoT platform from Parker Hannifin Corp., which helps its customers support proactive maintenance, reduce unplanned downtime and optimize performance. The canny enterprise providers are using the sizzle of IoT to sell their existing products, while building solid, outof-the-box IoT solutions that their customers can easily deploy via their existing enterprise systems. This helps overcome two of the main inhibitors to industrial IoT success reported in recent


surveys; lack of expertise and complexity. As well as requiring less know-how, these out-of-the-box solutions also incorporate security throughout, thus countering the other main objection: concerns about security. So look forward to the death of IoT as a standalone component and long live IoT technology in business and enterprise systems. ❚

Alan Griffiths is principal consultant at research firm Cambashi (www.cambashi.com).


8 emerging technologies... and the threats they may pose
BY CHRISTINA CARDOZA

With great technology comes great risk. As new technology continues to emerge in this digital day and age, Carnegie Mellon University’s Software Engineering Institute (SEI) is taking a deeper look at the impact it will have. The institute has released its 2017 Emerging Technology Domains Risk report detailing future threats and vulnerabilities. According to the report, the top three domains that are the highest priority for outreach and analysis in 2017 are: intelligent transportation systems, machine learning and smart robots. The top technologies that pose a risk are:

Intelligent transportation systems
It seems every day a new company is joining the autonomous vehicle race. The benefits of autonomous vehicles include safer roads and less traffic, but the report states that one malfunction could have unintended consequences such as traffic accidents, property damage, injury and even death.

Machine learning
Machine learning provides the ability to add automation to big data and derive business insights faster; however, the SEI worries about the security impact of vulnerabilities when sensitive information is involved. In addition, just as it is easy to train machine learning algorithms on a body of data, it can be just as easy to trick the algorithm. “The ability of an adversary to introduce malicious or specially crafted data for use by a machine learning algorithm may lead to inaccurate conclusions or incorrect behavior,” according to the report.

Smart robots
Smart robots are being used alongside or in place of human workers. With machine learning and artificial intelligence capabilities, these robots can learn, adapt and make decisions based on their environments. Their risks include, but are not limited to, hardware, operating system, software and interconnectivity. “It is not difficult to imagine the financial, operational, and safety impact of shutting down or modifying the behavior of manufacturing robots, delivery drones, service-oriented or military humanoid robots, industrial controllers, or, as previously discussed, robotic surgeons,” according to the researchers.

Blockchain
Blockchain technology has become more popular over the past couple of years as companies work to take the technology out of cryptocurrency and transform it into a business model. Gartner recently named blockchain as one of the top 10 technology trends for 2018. However, the report notes the technology comes with unique security challenges. “Since it is a tool for securing data, any programming bugs or security vulnerabilities in the blockchain technology itself would undermine its usability,” according to the report.

Internet of Things mesh networks
With the emergence of the IoT, mesh networks have been established as a way for “things” to connect and pass data. The report notes that mesh networks carry the same risks as traditional wireless networking devices and access points, such as spoofing, man-in-the-middle attacks and reconnaissance. In addition, mesh networks pose more risks due to device designs and implementations.

Robotic surgery
Robot-assisted surgery involves a surgeon, computer console and a robotic arm that typically performs autonomous procedures. While the technique has been well established, and the impact of security vulnerabilities has been low, the SEI still has its concerns. “Where surgical robots are networked, attacks—even inadvertent ones—on these machines may lead to unavailability, which can have downstream effects on patient scheduling and the availability of hospital staff,” according to the report.

Smart buildings
Smart buildings fall under the realm of the Internet of Things, using sensors and data analytics to make buildings “efficient, comfortable, and safe.” Some examples of smart building capabilities include real-time adjustments to lighting, HVAC, and maintenance parameters. According to the SEI, the risks vary with the type of action. “The highest risks will involve safety- and security-related technologies, such as fire suppression, alarms, cameras, and access control. Security compromises in other systems may lead to business disruption or nothing more than mild discomfort. There are privacy implications both for businesses and individuals,” the researchers wrote.

Virtual personal assistants
Almost everyone has access to a virtual personal assistant, either on their PC or mobile device. These virtual personal assistants use artificial intelligence and machine learning to understand a user and mimic the skills of a human assistant. Since these assistants are highly reliant on data, the report states there is a privacy concern when it comes to security. “VPAs will potentially access users’ social network accounts, messaging and phone apps, bank accounts, and even homes. In business settings, they may have access to knowledge bases and a great deal of corporate data,” the researchers wrote. ❚



DEVOPS WATCH

OutSystems‘ new test framework is designed for unit test automation and orchestration.

OutSystems brings DevOps to low-code development
BY CHRISTINA CARDOZA

OutSystems wants to make it easier for enterprise IT shops to adopt low-code development into their DevOps toolchains. The company announced new DevOps capabilities as part of a new release. The company announced an enhanced LifeTime deployment API; Microsoft Visual Studio Team Service Integration; Jenkins CI/CD Server Integration; a new test framework; new automated visual text merge capability, and LifeTime DevOps advanced deployment options. The updated deployment API will enable DevOps teams to manage apps, modules, environments and deployments within OutSystems. In addition, teams can leverage the API to manage builds and releases through Visual Studio Team Services. “As organizations seek to drive digital transformation, they may adopt lowcode platforms that accelerate delivery of enterprise applications,” said Paula Panarra, general manager at Microsoft Portugal at Microsoft Corp. “Collaborating with OutSystems is a win-win for organizations embracing DevOps and

balancing the need to innovate with the need to maintain legacy applications.” The new test framework will enable automation and orchestration of uniting testing using BDD framework tests, UI tests, API testing and Mobile testing, and enhancing collaboration between development, QA, and operations departments. The automated visual text merge capability allows developers to compare JavaScript and CSS scripts during conflict resolution, merge the scripts and resolve conflicts in order for DevOps teams to find and fix issues. Lastly, the company’s new deployment options feature new deployment flexibility and an improved user interface for searching and selecting applications to be deployed. “OutSystems wants all organizations to take advantage of the development speed of low-code — this is fundamental to them achieving their digital transformation goals,” said Paulo Rosado, CEO of OutSystems. “Where organizations have an existing investment in DevOps tooling, we want to break any barriers to entry and make it easy to add our platform into their toolchain.” ❚

In other DevOps news…

■ Atlassian recently updated Bitbucket Server and Bamboo with new DevOps workflow features. Bitbucket is the company’s Git code management solution, while Bamboo is for integration and release management. New features include a configuration-as-code feature called Bamboo Specs, Bamboo Smart Mirroring support for distributed teams, new webhooks support for third-party DevOps tools, Bamboo project-level permissions, and GVFS support.

■ CA Technologies wants to transform businesses into modern software factories with the release of the CA Automic One Automation platform version 12.1. The 12.1 release is designed with three major themes in mind: intelligent automation, modern software factories, and agility for Ops. It features environment blueprinting, provisioning capabilities, enhanced code-level access, a chatbot interface, new technical enhancements around scale, zero-touch self-service capabilities and intelligent automation capabilities to automate within the DevOps toolchain.

■ Micro Focus recently released SMAX, the 10th version of its Service Management Automation platform, with enhanced automation, analytics and collaboration. According to the company, the platform aims to solve the hybrid IT problem on-premises and in the cloud, as well as bring DevOps teams closer together. Other key enhancements in SMAX include codeless and version-less configuration, which speeds onboarding and reduces costs; multi-tenant Managed Service Provider support; and analytics for change management and service agents, as well as for “smart tickets,” featuring OCR and machine learning to accept or reject requests.


Server provisioning has historically played a key role in IT’s control over enterprise systems: IT approved and provisioned developers’ requests for compute resources, enforcing security controls and policies along the way. Cloud technologies are being adopted widely, creating hybrid environments. Cloud undermines the traditional provisioning model; developers can now allocate resources themselves with the swipe of a credit card. The cloud’s hyper-scale and automation provide flexibility and virtually infinite scale, which enterprise IT must harness while protecting corporate IP and data. Forward-thinking IT organizations from the world’s biggest banks, media, and retail companies are moving quickly to seek the cloud’s advantages while maintaining control over enterprise assets. This article explores the challenges enterprises face in the hybrid world, as well as the approaches emerging to solve them.

Enforcing enterprise security policies on Hybrid Cloud
BY VINAY WAGH

The Hybrid Cloud is Complex
The diverse environments of hybrid cloud create massive complexity. Agents and virtual appliances are unwieldy and difficult to manage. Perimeter defenses like firewalls are no longer sufficient and networks need to be protected internally, but segmentation can create traffic jams or go the other way, allowing too many actors in. Data and workload portability compound these risks. Meanwhile, separation of duties vanishes as developers deploy resources in multiple environments without IT’s knowledge. IT needs to replace traditional enforcement methods so that cloud resources are properly accessed, provisioned, secured, operated, and monitored.

Existing solutions are inadequate
The cloud’s complex, varied parameters render conventional security enforcement mechanisms inadequate.

(Vinay Wagh is senior product manager at Bracket Computing.)

The three methods by which security controls and policies are enforced on hybrid clouds come with issues:
Provider-Based Security. Enterprise IT sometimes employs a cloud provider’s security controls while maintaining their existing set-up on private clouds. For small companies that use a single public cloud with few regulatory restraints, this can be a viable option. However, adding environments increases complexity, as IT must manage various security postures. Further, for firms subject to regulatory security concerns (e.g. HIPAA), provider-offered encryption often raises objections. Finally, for provider-based security to work, developers shoulder some of the burden of implementation.
Agents. Agent-based solutions present another option. Unfortunately, IT runs on a simple truth: “If it’s slow, they’ll turn it off.” If agents incur a significant performance or operational penalty — remember encrypted email?

— users will likely deactivate them or find workarounds. Additionally, though agents can provide insight into activity on workloads compromised during an attack, malware can disable them upon installation, undermining that advantage.
Virtual Appliances. This third set of solutions enforces security using virtual appliances, which are unsuited to the highly virtual hybrid cloud. They are unscalable, as a virtual appliance must be placed every few instances. Additionally, virtual appliances degrade performance by creating chokepoints that can be mitigated only with control over hardware appliances.

Workload protection provides the solution
Due to these challenges, hybrid cloud environments must be secured differently than independent public or private clouds.



“Workload protection platform” is a catchall term, gaining momentum in the industry, for hybrid cloud security architectures. Enforcing policies using workload protection can enable a single policy framework across environments, but it must meet four core requirements:
1) Remains Consistent Across Hybrid Clouds. Consistency is the defining design principle behind hybrid cloud security solutions. In on-premises environments, no one would use Cisco and Juniper in one data center, and another set of providers elsewhere. Yet firms manage multiple sets of controls in their hybrid environments. Security policies must be enforced consistently everywhere developers work, minimizing IT’s operational overhead.
2) Enables Separation of Duties. Separation of duties is critical to the cloud, yet under-discussed. IT needs the ability to enforce security controls and policies without disrupting the end user experience that cloud offers developers.

Ensuring separation of duties requires that IT security enforcement be transparent; like SSL in browsers, developers shouldn’t notice it’s there. Using virtualization to deliver security offers this benefit. When inserted above a cloud provider’s hypervisor, a virtualization layer provides an IT enforcement point with all the benefits of cloud provider-based security, but without the compromises of multi-tenancy and single platform limitations. Solution providers should insist that this virtualization layer be lightweight — for example, nested virtualization is a virtualization-based approach but incurs significant performance penalties.
3) Provides Operational Simplicity. Three constructs deliver operationally simple workload protection:
First, policy deployment must operate in concert with existing cloud workflows. Deploying tags on resources, and writing policies on those tags, is one way to achieve this. Already common to the developer workflow, tags define deployments on AWS, GCP, and others. These tags remain with assets if they are copied or moved. An example of a policy written on tags is: “environments tagged ‘dev’ can only communicate with other environments tagged ‘dev’.” Written like this, policies can be general like the above, or extremely granular, written to control specific ports, databases, or volumes. (A minimal sketch of the idea follows these requirements.)
Second, policy enforcement should be decoupled from network constructs. Implementations like VLANs and subnets become incredibly complex to manage when spread across heterogeneous environments. Tagging allows policies to be written on workloads, applications, and data instead of conventional constructs like IP addresses.
Third, policies should be cryptographically enforced. In any environment, but particularly across hybrid cloud, IT must deal with the risks of malware, malicious insiders, and mistakes. Encryption of data at rest and/or in motion protects enterprises from these threats, satisfying regulatory requirements for financial services, healthcare, and other large enterprises.


Tags allow policies to be enforced cryptographically, with the solution checking decryption requests against policies in a centralized control plane before actually decrypting any resource. This yields automated, error-free policy enforcement, with the added benefit of always-on encryption that doesn’t impede developers or alter their workflow.
4) Protects the Full Workload. Finally, a weakness shared by existing third-party enforcement mechanisms is that they deal primarily in network constructs — through IP tables, VLANs, and others. While this is an effective method of protecting the perimeter and creating segmentation, it fails to ensure storage or compute security. Even with network-based protections, data can be moved or copied, and instances can be booted. Without protecting the full workload — network, storage, and compute — existing solutions cannot fully meet the unique security needs introduced by hybrid cloud. Tagging resources, be they data, network links, or instances, and then crypto-enforcing policies on them allows security platforms both to simplify cloud operations and to provide a measure of IT control over the full workload.
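The tag-and-policy idea can be sketched in a few lines. The policy shape, tag names and deny-by-default rule below are invented for illustration; a real workload protection platform would evaluate rules like these in its control plane before allowing traffic or decrypting a resource.

# Minimal sketch: write policies on tags, not IP addresses, and check a request.
from typing import Dict

# Traffic is allowed only when an explicit rule permits the pair of workloads.
POLICIES = [
    {"description": "dev talks only to dev",
     "allow": lambda src, dst: src.get("env") == "dev" and dst.get("env") == "dev"},
    {"description": "prod talks only to prod",
     "allow": lambda src, dst: src.get("env") == "prod" and dst.get("env") == "prod"},
]

def connection_allowed(src_tags: Dict[str, str], dst_tags: Dict[str, str]) -> bool:
    # Deny by default; allow if any policy explicitly permits the pair.
    return any(p["allow"](src_tags, dst_tags) for p in POLICIES)

if __name__ == "__main__":
    web = {"env": "dev", "app": "billing"}
    db = {"env": "prod", "app": "billing"}
    print(connection_allowed(web, web))  # True  — dev to dev
    print(connection_allowed(web, db))   # False — dev to prod is blocked

Because the tags travel with the assets when they are copied or moved, the same rules apply unchanged in every environment.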

Conclusion
The operational complexity and risk introduced by hybrid environments are significant, but given the advantages of the hybrid cloud, adoption will continue to increase. Enterprises require a single policy framework across environments — and soon. Solution providers must make consistency, separation of duties, operational simplicity, and full workload protection the core of security solutions in the cloud. When enterprises demand it, workload protection platforms will offer the scalability of cloud-based solutions, the host-based context of agent-based solutions, and the flat network appeal of virtual appliances — all in one solution. This powerful architecture allows enterprises to leverage the hybrid cloud with IT control over security, without disrupting developer workflows. ❚


Subscribe to SD Times News on Monday to get the latest news, news analysis and commentary delivered to your inbox.

• Reports on the newest technologies affecting enterprise developers — IoT, Artificial Intelligence, Machine Learning and Big Data
• Insights into the practices and innovations reshaping software development such as containers, microservices, DevOps and more
• The latest news from the software providers, industry consortia, open source projects and research institutions

Subscribe today to keep up with everything happening in the software development industry.

CLICK HERE TO SUBSCRIBE



New: Azure Databricks, Visual Studio App Center, VS Live Share, and more
BY CHRISTINA CARDOZA

Microsoft is unleashing new tools to help developers increase productivity and simplify app development. The company announced Azure Databricks, the Visual Studio App Center, Visual Studio Live Share, Azure DevOps projects, Azure IoT Edge and Visual Studio Tools for AI at its Microsoft Connect(): 2017 conference.

Microsoft’s Scott Guthrie announced new products at Connect(): 2017.

“With today’s intelligent cloud, emerging technologies like AI have the potential to change every facet of how we interact with the world,” Scott Guthrie, executive vice president of Microsoft’s cloud and enterprise group, said in an announcement. “Developers are in the forefront of shaping that potential.”
The company announced the preview of Azure Databricks, a new solution that brings Databricks and Azure together to provide data science and data engineering teams with a collaborative workspace, a unified engine for all types of analytics and a serverless cloud infrastructure. Azure Databricks is built off of Databricks’ Unified Analytics Platform, and also provides full integration with the Azure cloud platform. “There’s a large base of Microsoft Azure customers looking for a high-performance analytics platform based on Spark — and Databricks is already the leading Cloud platform for Spark,” said Ali Ghodsi, cofounder and CEO at Databricks. “These organizations will be able to simplify Big Data and AI with Azure Databricks.”
To increase developer productivity, the company also announced Visual Studio App Center, Visual Studio Live Share, and a preview of Azure DevOps projects. The Visual Studio App Center is designed to automate and manage the lifecycle of iOS, Android, Windows and macOS apps. “Developers can connect their repos and within minutes automate their builds, test on real devices in the cloud, distribute apps to beta testers and monitor real-world usage with crash and analytics data, all in one place,” Guthrie announced.
Visual Studio Live Share gives developers the ability to share projects with their development teams or other teams, collaborate in real time, and edit and debug the code in their personalized editor or IDE. “Rather than just screen sharing, Visual Studio Live Share lets developers share their full project context with a bi-directional, instant and familiar way to jump into opportunistic, collaborative programming,” Guthrie wrote.
Azure DevOps projects will enable developers to configure a DevOps pipeline and connect to Azure Services, according to Guthrie. “In less than five minutes, this feature will ensure that DevOps is not an afterthought, but instead the foundation for new projects and one that works with many application frameworks, languages and Azure hosted deployment endpoints,” he wrote.
In addition, the company announced the upcoming preview of Visual Studio Connected Environment for Azure Container Service. This solution will enable developers to edit and debug cloud-native apps running on Kubernetes in the cloud.
To make artificial intelligence accessible to every developer, Microsoft also announced preview versions of Visual Studio Tools for AI and Azure IoT Edge. Visual Studio Tools for AI is an extension of the VS IDE. The solution will provide debugging and editing capabilities as well as support popular deep learning frameworks. Azure IoT Edge is designed to deploy cloud intelligence to IoT devices and provide advanced AI analytics and machine learning at the IoT edge. “Azure IoT Edge enables developers to build and test container-based workloads using C, Java, .NET, Node.js and Python, and simplifies the deployment and management of workloads at the edge,” Guthrie wrote. The company also announced Azure machine learning updates to enable AI models to be deployed and run on edge devices.
Other announcements at the conference included: Microsoft joined the MariaDB Foundation as a platinum member; a preview of Azure Cosmos DB with Apache Cassandra API; and a new GitHub partnership on GVFS.
“It’s never been a better time to be a developer, as developers are at the forefront of building the apps driving monumental change across organizations and entire industries. At Microsoft, we’re laser-focused on delivering tools and services that make developers more productive, helping developers create in the open, and putting AI into the hands of every developer so they unleash the power of data and reimagine possibilities that will improve our world,” Guthrie said. ❚



Pro Cloud Server
Collaborate, Create, Integrate with the OSLC RESTful API
NEW: Pro Cloud Server Express
A free version of the Pro Cloud Server + WebEA for up to 25 users (5-25 user version), for those with five or more current or renewed licenses of the Enterprise Architect Corporate Edition (or above). Conditions apply, see web site for details.
Visit: sparxsystems.com/pcs-express
Online Demo: North America: spxcld.us | Europe: spxcld.eu | Australia: spxcld.com.au
Visit sparxsystems.com/procloud for a trial and purchasing options



INDUSTRY SPOTLIGHT

Opening up modeling to the entire enterprise
BY ALYSON BEHR

Ever since Sparx Systems released its flagship product Enterprise Architect (EA) almost 20 years ago, the company philosophy has been to open up access and distribution of information about the enterprise and democratize knowledge for the stakeholders. The company specializes in highperformance and scalable visual modeling tools based on the Unified Modeling Language (UML) and its related specifications for the planning, design and construction of software-intensive systems. Sparx Systems is a contributing member of the Object Management Group (OMG) and is focused on realizing the potential of model-driven development, based on open standards. According to Tom O’Reilly, COO of Sparx Systems, the rationale of using visual modeling to allow access to data to every stakeholder, regardless of their respective roles within the organization,

Integration Benefits
• SIMPLICITY: Provides a transparent and simple approach to view, manage and curate corporate knowledge and get it into the hands of the people who need it most. Integration ensures information inside silos is viewable and helps facilitate both access and usability across different platforms. It is simpler to model corporate information when it is readily available via a cloud platform.
• DEMOCRATIZATION: Integration allows decisions to be made quickly using real-time insights from quality data. More eyes, feedback and stakeholder involvement ultimately lead to better business outcomes and help reduce internal barriers between business and IT, management and operations.
• SUPPORTS ENTERPRISE ADAPTABILITY: EA provides an adaptable platform that can facilitate the successful completion of projects in terms of scope, time and budget. Implementing additional functionality, like Time Aware Modeling, allows model changes to be reviewed over time, providing valuable insights for project managers.
• SIMPLIFIES DEPLOYMENT: Server-based deployment is efficient because no apps need to be installed, database connections don’t require configuration, and no specialized secure tunneling or VPN infrastructure gets in the way of being productive.
• COST REDUCTION: Not all stakeholders require a license to collaborate. Stakeholders can search the model, create Watchlists, contribute to discussions and review items.


just makes sense. “Even the most complex of systems can be described simply with a visual model,” O’Reilly says. “Describing a system or process with text can be long, verbose, and hard to follow. Using a visual model, anyone, regardless of their technical background, can look at a model and see what parts of the system exist, how they relate to the other parts of the system and how information flows within that system.” The company backs up its vision and rationale by allowing those models to be available to anyone, on any device with a web browser via its Pro Cloud Server and WebEA solutions.

Living in the Cloud
In today’s development environment, having a clear-cut cloud strategy is critical. Sparx Systems has always encouraged integration of EA with other toolsets, whether they be to customize models for a specific vertical industry or

proprietary standards, or to import data from legacy sources or platforms. In addition, exchanging information helps to prevent vendor lock-in, improves asset and software utilization and provides access to information throughout the entire project life cycle. Because it’s in the cloud, all stakeholders can access information when and where they need it. They can contribute to the modeling process and update information on the fly, so it’s more accurate and timely. Sparx Systems’ OSLC RESTful API is a key component of its cloud strategy. It works by providing the gateway for the exchange of information between model repositories stored in EA and other systems, using the process of linking data through HTTP standard methods for creating and managing lifecycle artifacts. O’Reilly explains, “OSLC has been developed as a mechanism to allow different web-based software applications to share their information, while keeping them stored in the customer’s tool of choice. This allows EA to integrate with other providers in a complementary way, putting the power of choice back into the customer’s hands.” For instance, if a developer wants to use Jira for requirements, do their application portfolio management using ServiceNow, and enterprise modeling in EA, they can. The Pro Cloud Server API allows users to keep their information in their relevant systems, but then use EA as the hub to describe how those other systems fit into the enterprise architecture of the organization. “Using OSLC the customer still has the advantage of full traceability from the business decision being made, through to the future state architecture, right down to the application and development levels, regardless of whether they are captured and stored inside EA or not,” says Geoffrey Sparks, founder and CEO of Sparx Systems. ❚
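Because OSLC rides on ordinary HTTP, a consumer of linked lifecycle data looks much like any other web client. The sketch below shows the general shape of such a request; the server URL and credentials are placeholders, not Sparx Systems endpoints, and the exact resources exposed depend on the provider.

# Rough sketch of an OSLC-style fetch of linked lifecycle data over HTTP.
import requests

CATALOG_URL = "https://models.example.com/oslc/catalog"   # hypothetical service URL

def fetch_resource(url: str) -> str:
    resp = requests.get(
        url,
        headers={"Accept": "application/rdf+xml",      # OSLC resources are typically RDF
                 "OSLC-Core-Version": "2.0"},
        auth=("reader", "secret"),                      # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text   # RDF describing the resource and the artifacts it links to

if __name__ == "__main__":
    print(fetch_resource(CATALOG_URL)[:500])

The point of the protocol is that the requirements, tickets and models stay in their own tools; the links returned here are what let a hub stitch them into one traceable picture.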



Security can no longer be an afterthought in a DevOps world. Businesses need to evolve and transform their strategies into a DevSecOps way of thinking.
BY CHRISTINA CARDOZA

Software is the lifeblood of most businesses today. So, what happens if that software is unreliable or insecure? It seems like a no-brainer that the software being pushed out should be protected. But, as software is being developed and deployed at a rapid pace, an important aspect of the life cycle gets lost in the race: Security. Security is not only important because it can help prevent hack attacks and data breaches, it is important because one mistake could have a significant impact on a business’ reputation and sales, according to Walter Capitani, product manager at Rogue Wave Software, a software development company. “Customers want to use new features and upgrades more quickly, and aren’t willing to wait on development,” he

said. “The unpredictable nature of fixing security issues means that release dates keep changing, which frustrates customers and delays new customers from purchasing the product.” The problem is that organizations are so hyperfocused at releasing software faster to stay ahead of the competition that they adopt DevOps approaches to modernize their strategies, but still practice legacy security approaches, according to Peter Chestna, director for developer engagement for Veracode, an application security company. “The biggest problem today with application security is that the development organization is not goaled to secure their software. They are goaled to release software quickly. Without a mandate and shared accountability

between security and development that is measured and reported at every level of the organization, security will continue to be hard,” he said.

The number one best practice for implementing DevSecOps is to simply detect vulnerabilities as early as possible in the software development process.
—Walter Capitani, Rogue Wave Software

Legacy approaches are no longer an acceptable method of delivering secure code today, Capitani explained. DevOps has proven it can help speed up the development process, but in order for businesses to really take their development processes to the next level and keep up with the next generation of innovation, DevOps teams need to figure out how to take security information or security intelligence, relate it to code, and deliver that to teams as early and as often as possible so they can make the right decisions, according to Derek Weeks, vice president and DevOps advocate for Sonatype, a DevOps automation provider. DevOps, at a high level, is about making the development process easier through automation and eliminating risks. If DevOps is enhanced with tooling and strategies that make it a lot more proactive in finding vulnerabilities, identifying bugs and fixing them right away



in real time, then that is going to bring your security risk significantly down, according to Srini Vemula, technical program manager for SenecaGlobal, a global technology consulting firm. This enhanced DevOps approach is being called DevSecOps. “DevSecOps seeks to bring security to the table to be involved and integrated into the DevOps team and their responsibilities. By shifting security left, the team de-risks their software by finding and fixing security vulnerabilities early in the SDLC, usually before check-in,” said Chestna. A DevSecOps approach still provides the benefits of DevOps — developing, deploying and delivering new features to customers fast — while removing an antiquated security process, explained Chris McFadden, vice president of engineering and operations at SparkPost, an email delivery service for developers and enterprises. “We can’t afford to be slow, we have to be fast,” he said. “You can no longer treat security as some other team out there that is a nuisance. You have to really build that into your software development process.”

Culture remains a problem
When DevOps first emerged, one of the biggest problems organizations faced when transitioning was trying to break down silos. Historically, the development and operation teams did not work well together because their goals were not aligned. Developers had the mindset to bring change, and the operators or the IT team feared what that change would do to the reliability of the solution. But together, they became this DevOps powerhouse that released software faster, with better quality. Today, the same siloed problem is happening when you try to transition to a DevSecOps approach. Getting teams to be on the same page and collaborate is a tough process no matter what departments you are trying to bring together, SparkPost’s McFadden explained. What it comes down to is getting people and leadership oriented towards the same goals and objectives. “You can have the greatest plan and strategy in the world, but if you have a culture that isn’t dedicated to securing the development process around

DevOps, you are going to run into a problem,” said Flint Brenton, CEO and president at CollabNet, a DevOps and agile solutions company. In a traditional operations world, the security team sat the furthest away from the developers, while the testers and operators sat the closest. According to Sonatype’s Weeks, this type of structuring is one of the main reasons security has become an afterthought for DevOps teams. “Developers weren’t necessarily trained on security. It wasn’t a natural behavior for them or a part of their training. Just because security sat further away, it makes it that much harder to bring it in from a culture of organization perspective,” he said. The security team has to be brought into the DevOps teams so they can understand the code and work directly with the developer sooner so they know what the application is supposed to do and not supposed to do. “The strategy starts with people. Security and development must build a relationship with one another. There should be discus-
continued on page 33 >



Gartner’s guide to successful DevSecOps
In a recent survey conducted by Gartner, the firm found that the highest-ranked strategy for a successful DevOps approach was collaboration with information security. “In the past 12 months at Gartner, how to securely integrate security into DevOps — delivering DevSecOps — has been one of the fastest-growing areas of interest of clients, with more than 600 inquiries across multiple Gartner analysts in that time frame,” Gartner research director Ian Head and distinguished analyst Neil MacDonald wrote in a report. The analysts have taken lessons learned from the organization and its clients, and released 10 steps they believe will set businesses on a successful DevSecOps path.
1. Adapt your security testing tools and processes to the developers, not the other way around. According to the analysts, the “Sec” in DevSecOps should be silent. That means the security team needs to change its processes and tools to be integrated into DevOps, instead of trying to enforce its old processes.
2. Quit trying to eliminate all vulnerabilities during development. “Perfect security is impossible. Zero risk is impossible. We must bring continuous risk- and trust-based assessment and prioritization of application vulnerabilities to DevSecOps,” Head and MacDonald wrote in their report. DevSecOps should be thought of as a continuous improvement process, meaning security can go beyond development and can be searching for and protecting against vulnerabilities even after services are deployed into production.
3. Focus first on identifying and removing the known critical vulnerabilities. Instead of wasting time trying to break a system, focus on known security issues in pre-built components, libraries, containers and frameworks, and protect against those before they are put into production.
4. Don’t expect to use traditional DAST/SAST without changes. Scan custom code for unknown vulnerabilities by integrating testing into the IDE, providing autonomous scans that don’t require a security expert, reducing false positives, and delivering results into a bug tracking system or development dashboard.
5. Train all developers on the basics of secure coding, but don’t expect them to become security experts. Training all developers on the basics of security issues will help prevent them from creating harmful scenarios. Developers should be expected to know simple threat modeling scenarios, how to think like a hacker, and know not to put secrets like cryptographic keys and passwords into the code, according to Head.
6. Adopt a security champion model and implement a simple security requirements gathering tool. A security champion is someone who can effectively lead the security community of practice, stay up to date with maturity issues, and evangelize, communicate and market what to do with security and how to adapt.
7. Eliminate the use of known vulnerable components at the source. “As previously stated, most risk in modern application assembly comes from the use of known vulnerable components, libraries and frameworks. Rather than wait until an application is assembled to scan and identify these known vulnerabilities, why not address this issue at its source by warning developers not to download and use these known vulnerable components,” Head and MacDonald wrote. (A toy illustration of this check follows the sidebar.)
8. Secure and apply operational discipline to automation scripts. “Treat automation code, scripts, recipes, formation scripts and other such infrastructure and platform artifacts as valuable source code with specific additional risk. Therefore, use source-code-type controls including audit, protection, digital signatures, change control and version control to protect all such infrastructure and platform artifacts,” according to the report.
9. Implement strong version control on all code and components. Be able to capture every change: what was changed, when the change happened and who made the change.
10. Adopt an immutable infrastructure mindset. Teams should work toward a place where all the infrastructure is only updated by the tools. This is a sign that the team is maturing, and it provides a more secure way to maintain applications, according to Head. ❚

—Christina Cardoza
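Step 7 — stopping known-vulnerable components before they are downloaded and used — can be illustrated with a toy audit. The advisory data and package names below are made up; a real pipeline would query a vulnerability database or a software composition analysis tool instead, but the build-breaking logic is the same.

# Toy check of declared dependencies against a local advisory list.
import sys

ADVISORIES = {                 # component -> versions with known issues (invented data)
    "example-lib": {"1.2.0", "1.2.1"},
    "old-parser": {"0.9.4"},
}

def audit(manifest: dict) -> list:
    """Return 'name==version' strings that match a known advisory."""
    return [f"{name}=={ver}" for name, ver in manifest.items()
            if ver in ADVISORIES.get(name, set())]

if __name__ == "__main__":
    declared = {"example-lib": "1.2.1", "old-parser": "1.0.0", "requests": "2.18.4"}
    bad = audit(declared)
    if bad:
        print("known vulnerable components:", ", ".join(bad))
        sys.exit(1)            # non-zero exit fails the build before the component ships
    print("no known vulnerable components")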


< continued from page 31

sions about struggles and goals to build mutual empathy and understanding. At the highest level of the organization, security and development leaders should agree on mutual goals for the security of the software that is built,” said Veracode’s Chestna. The company also needs to train its development teams in security. According to Ian Head, research director at Gartner, a research and advisory firm, developers don’t have to become security consultants, but they should know basic security skills so that they can be trusted to follow good security practices. In addition, the security team needs to adopt new tools that promote collaboration and provide security analytics. They can no longer afford to work in their old, siloed, and security-specific tools, according to George Gerchow, vice president of security and compliance for Sumo Logic, a cloud-based log management and analytics service provider. “Security only seems hard because we are untrained. With the proper training and tooling, developers can learn to write software securely the first time. That will prevent unplanned work farther downstream and avoid the costs of finding, tracking, fixing, testing and verifying the changes necessary to pass policy. In DevOps, it’s all about failing fast and verifying quality (including security) at the earliest possible time. A secure mindset can be learned and secure coding can become second nature,” said Chestna.

Implementing a DevSecOps strategy
Once the culture is established among the DevSecOps teams, there are additional methods and tactics that can add to the success of DevSecOps. The number one best practice for implementing DevSecOps is to simply detect vulnerabilities as early as possible in the software development process, Rogue Wave’s Capitani said. Teams can do this by applying techniques such as static code analysis, “which can discover vulnerabilities even before code is compiled for the first time, and reduce the time associated with finding and fixing security vulnerabilities,” he said.
continued on page 34 >
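To make the shift-left idea concrete, here is a deliberately tiny static check written with Python’s standard ast module. It only flags eval and exec calls; commercial static analysis tools cover far more, but the pipeline wiring is similar — scan source before it is built, report file and line, and fail the step when findings appear.

# Toy static analysis step: parse source files and flag risky call sites.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}

def scan(path: str) -> list:
    source = open(path, encoding="utf-8").read()
    findings = []
    for node in ast.walk(ast.parse(source, filename=path)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    issues = [msg for f in sys.argv[1:] for msg in scan(f)]
    print("\n".join(issues) or "no findings")
    sys.exit(1 if issues else 0)   # a failing exit code stops the pipeline stage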


< continued from page 33

At Sumo Logic, the code analysis is one of six pillars it enforces in a DevSecOps strategy. The other five are: change management, compliance monitoring, threat investigation, vulnerability checks, and security training for developers. Code analysis rides along with building code in small chunks, and doing multiple releases a day. “This allows us to start looking into our continuous development chain very rapidly to make sure we are not releasing things that have vulnerabilities in them,” said

Sumo Logic’s Gerchow. Change management at Sumo Logic empowers all of the development teams to implement changes, and have them implemented within 24 hours. The teams work through tools such as Slack and JIRA to make changes quickly, and determine if a change is going to be good or bad. Compliance monitoring helps gather evidence for compliance while the code is being developed in order to shorten the auditing process. Threat investigation is where the

DevOps and the database
Security isn’t the only aspect overlooked in a DevOps approach. According to Robert Reeves, co-founder and CTO of Datical, a database automation company, database deployments are often forgotten about. “Pushing out the application is the easy part of DevOps,” he said. “It is managing and automating database changes that is the real challenge.” According to Reeves, the database deployment process is often slow, error-prone, and resource intensive because a lot of companies are still doing it manually; but that causes it to get in the way of development teams and operations working together. “We call it the velocity gap,” Reeves said. “We are getting faster and better at application development, but none of that is available to the database team. Companies are finding their drive to adopt DevOps is being blocked because they can only move as fast as their slowest member; and right now it is the database.” Similarly to security, the reason why the database is causing such a roadblock is because it is typically the last team to be brought into the life cycle. A database cannot be reverted or replaced like application features. It is designed to preserve and protect data, and for that reason it must itself be preserved. “Another reason why the database has been late in the DevOps game is because solving this part of the application delivery lifecycle is so complex,” said Ben Geller, vice president of marketing for Datical. “On the application side, if the developer makes a mistake and breaks the application, they can blow away that code and start over. You can’t do that when you are updating the database. The database is always in constant motion, so you have to have a purposeful way with respect to how you get those changes made and go fast.” To solve this, the database team needs to be engaged sooner, and the process needs to be automated, Reeves explained. According to the company, to successfully bring the database into the DevOps fold, database administrators should be integrated into the team, learn about development, and trust the development process. DevOps means having cross-functional teams, so the database administrators should be a part of the team and able to weigh in on the architecture, according to Reeves. In the traditional way of doing things, when a change happens the database admin typically doesn’t know why the change is happening or how it will impact the overall product. Bringing them to the team will help them understand not only the function of the product, but will also enable them to weigh in on the architecture. The database team doesn’t need to become full-fledged developers, but they should learn a little bit of coding to be able to support developers and operations, and understand where the team is coming from when making important changes. In addition, bringing the database team into the DevOps team will help create a culture of trust where all parties understand the implications of a database change, and are able to do it correctly and successfully. ❚
—Christina Cardoza

team discovers and remediates any threats across any of their services. Vulnerability checks ensure the team is constantly scanning all of its code and environments to look for security threats and remediate. Training developers goes beyond teaching them OWASP or on-site solutions. Developers need to go to hacking events and see first-hand what real-life hacking looks like. This reminds them why they need to do all of the above checks. SenecaGlobal’s Vemula added that a mature organization will define its security architecture through CIA, or confidentiality, integrity and availability. It is about getting these three tenets of security implemented across the entire organization without becoming an impediment in terms of agility. Vemula explained security is often short-changed because developers don’t want to have to regularly change passwords, or apply automatic processes to apply patches. Having a well-defined security architecture and principles, and being able to apply them to DevOps, will help teams be in control of what is going on in the development life cycle. In order to know whether an organization is on the right track, it needs to ask key questions like: Do we know when we are compromised? What is our ability to respond to the compromise? How confident can we say that all the software and third-party services used today in production are well patched and running the latest versions as possible? “This type of questioning will enable teams to identify a lot of weak areas,” Vemula said. Another sought-after, but hard to find, strategy or role on a DevSecOps team is the role of a DevSecOps specialist, according to CollabNet’s Brenton. A DevSecOps specialist is constantly examining every step in the development process, and making sure the team is focused on securing code. “Having a DevSecOps security specialist who is constantly analyzing what you are doing, comparing it to market trends, comparing it to best in class, and then coming back and telling you not only what you can do better, but how you can do it, is an enormously valuable position to have



DevSecOps guides and resources
Introducing security champions to the DevSecOps life cycle
One of Gartner’s top 10 steps to a successful DevSecOps approach is: “Adopt a security champion model and implement a simple security requirements gathering tool.” According to Synopsys, developers make great security champions because they are familiar with the organization’s software and development groups, and have a deep understanding of technical issues and challenges their organization faces. This Synopsys white paper walks organizations through recruiting software developers as security champions, and injecting security all throughout the life cycle.
Necessity is the mother of the ‘Rugged DevOps’ movement
Tools are an important part of bringing development, operations and security under one umbrella. This DevOps Buyers Guide provides a guide to DevOps offerings, and how tools can help teams consider security as a first-class citizen in DevOps. “Security is still one of the last places where that archaic approach of development handing off the software to a different team and walking away still reigns,” Tim Buntel, vice president of products for XebiaLabs, said in the guide. “Secure software is just good software, and good software is secure software. Everything that we’re doing in DevOps is allowing us to build better software at scale and release it faster.”
Automating Security in DevOps — Security in the Pipeline
In this video, DJ Schleen, information security advisor at Aetna, talks about how organizations and DevOps teams can implement security controls into continuous delivery pipelines. According to Schleen, security controls can help

in the enterprise, and folks that have this degree of speciality are extremely valuable,” said Brenton. However, today it is a bit unrealistic to have one person responsible for all of this. Instead, it should be a team that has experience in security and DevOps, and that can provide a point of view from what works in other companies as well as what worked in their own experience. A way to add this type of specialist or team into DevSecOps is by growing them internally within the company, or bringing on a consulting firm that specializes in this area and can constantly educate you, Brenton explained. In addition, Sonatype’s Weeks said DevSecOps is as much of a tooling problem as it is a culture problem because siloed teams normally work in their own

promote secure code to production with minimal impact. Schleen also walks through securing applications throughout their life cycle, from conception to deployment, as well as how Aetna integrated security into its life cycle. Security for DevOps and DevOps for Security is an “unprecedented opportunity. It allows us to disrupt traditional mindsets of security,” Schleen said. “It gives us an opportunity to collaborate with these folks and really improve the culture of the organizations that we work in. Security is just a part of everything we do on a day-to-day basis.”
Governance and Transparency in GovSec DevOps
Leonel Garciga, CTO of the Joint Improvised Threat Defeat Organization, recently gave a keynote at the All Day DevOps conference in November to talk about the government, DevOps and how CISOs and CIOs can approach this new strategy. According to Garciga, having a DevOps pipeline is imperative for CIOs and CISOs because it provides a single pane of glass for delivery and risk assessment, bakes in regulatory and auditing functions, provides real-time visibility into assessing risk and continuous monitoring, and provides a repeatable process with metrics.
Hard-Won Lessons of a DevOps Addict
“We have to be 100% right. Attackers only have to be right once. And that means the odds are stacked against us,” Shannon Lietz, the director of DevSecOps at Intuit, said at the Lonestar Application Security Conference at the beginning of the year. She talks about the rise of DevOps and how security needs to evolve. “We could change the world if we got security to be everybody’s responsibility, and it is our job to make that happen,” she said. “What you will get out of DevSecOps is safer software sooner.”
—Christina Cardoza

specific set of tools. When DevSecOps is shifting security left in the development cycle, it introduces a need to have more decision making, more evaluation-type criteria, and more quality or performance evaluations as early as possible in the development life cycle. Tools should include automated tests run as early as possible, intelligence to analyze the code for known vulnerabilities, and the ability to monitor code in production for any red flags. Vemula adds that there is an umbrella of tools available on the market that enable continuous vulnerability assistance while integrating well within a DevOps initiative; it is just about finding the ones that will enable teams to apply application security best practices and check for vulnerabilities


before they are put into deployment. “The more precise a security tool is in the DevSecOps environment, the more efficient it is going to be at doing tasks. If you are getting good and bad results that aren’t accurate, you actually are not that much faster. You are getting the answer back faster, but if the answer is wrong you are actually inefficient,” Weeks said. In addition, the tools need to be highly scalable so that features such as vulnerability testing can be extended as the team grows, as well as be easily integrated into CI systems, Capitani explained. At the end of the day, Gartner’s Head says DevSecOps is more about continuous security or continuous assurance. “We are much more focused on continual learning and continual improvement rather than the perfect projects and delivery,” he said. ❚


INDUSTRY SPOTLIGHT

DevOps improves application security
BY LISA MORGAN

More companies are using DevOps and continuous delivery to accelerate software releases. While the added speed and agility help businesses keep pace with changing customer demands, some wonder whether DevOps and Continuous Delivery actually make software less secure than it might be otherwise. With Micro Focus, organizations can simultaneously improve speed and security as well as governance and compliance. “Software is moving from conception to production in a matter of minutes, hours or days instead of months,” said Ashish Kuthiala, senior director at Micro Focus. “Nobody wants to release software that has security flaws because

it can cost millions or billions of dollars in lost revenue, litigation and brand reputation damage.”
Companies that want to manage security risks more effectively can no longer do application security testing somewhere between testing and production. Instead, security practices need to shift left, so developers can minimize the number of vulnerabilities that seep into production. In addition, DevOps security practices enable security personnel to focus on the security of each iteration rather than daunting amounts of application code once every few months.
“For every 100 developers, organizations usually have 10 testers and one security expert,” said Kuthiala. “If you’re building security into your code, everyone is thinking about security at all times.” As recent security gaffes indicate, security has become everyone’s responsibility.

Improve security while coding
There are a number of tools, solutions, frameworks and processes that can be incorporated throughout the DevOps lifecycle to help secure code. For example, Security Assistant, a new feature in Micro Focus Fortify Static Code Analyzer (SCA), is an effective first line of defense because, like a spellchecker, it checks code against known vulnerabilities in the frameworks as the code is being written. If a vulnerability is detected, Security Assistant automatically prevents the developer from committing the code, so the issue can be resolved swiftly while it’s relatively easy and cheap to fix.

‘You can’t just have code moving through the pipeline without the appropriate controls in place.’
—Ashish Kuthiala

“Security Assistant provides valuable feedback in context. Even if you fed the code into the pipeline, you’d be able to use Fortify for static code analysis or Fortify WebInspect for dynamic code analysis,” said Kuthiala. “If at any point a security vulnerability is detected while the code is moving through the continuous delivery pipeline, the code will be kicked back to its origin so the issue can be resolved.”
Micro Focus ALM Octane provides another layer of protection. It provides full visibility into the status of code, including security vulnerabilities, who injected them, when, when the problem was fixed and how it was fixed. Such visibility is necessary for compliance;

however, even companies in unregulated industries have mandates to improve application security, visibility and traceability throughout the application lifecycle. ALM Octane tracks the entire pipeline so the status of code and security defects are always known. Many leading companies across industries use ALM Octane for the quality and test management of complex application portfolios in hybrid application development environments. It provides a single source of truth for enterprise governance and compliance regardless of the environmental complexity.

Improve traceability
Businesses with DevOps and Continuous Delivery practices can’t get mired in version control details. And yet, version control is always necessary. “You can’t just have code moving through the pipeline without the appropriate controls in place. Every change that goes through the pipeline should be codified and version controlled,” said Kuthiala. Organizations also need to think about security threats in broader terms. Quite often, code vulnerabilities are considered synonymous with hackers when insider threats can be even more dangerous. In addition to malicious internal actors, businesses face inadvertent threats from permissions settings that weren’t configured properly or have not been updated in a timely fashion. ALM Octane tracks those details so organizations ensure the right people have access to code. “If you have the right tools in place and you’re integrating security throughout the application lifecycle, you have an opportunity to make your software more secure than it was before,” said Kuthiala. “DevOps doesn’t threaten application security, it fortifies it.” ❚
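The commit-blocking behavior described above — stopping a change before it ever enters the pipeline — can be illustrated with a generic Git pre-commit hook. This sketch is not how Fortify Security Assistant is implemented; it is a minimal stand-in that scans staged Python files for an obviously risky pattern and rejects the commit when it finds one.

# Generic pre-commit hook sketch: block commits containing likely hard-coded secrets.
# Save as .git/hooks/pre-commit and make it executable.
import re
import subprocess
import sys

PATTERN = re.compile(r"(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I)

def staged_files() -> list:
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for i, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                findings.append(f"{path}:{i}: possible hard-coded secret")
    if findings:
        print("\n".join(findings))
        return 1               # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())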



Release quality applications securely.

Micro Focus Secure DevOps
Measure and control your security posture across teams and applications in your Agile/DevOps practice. As you release quality applications across the CI/CD pipeline, you can continuously examine the state of your application security as it is being developed — with certainty. Let Micro Focus help you proactively prioritize and remove application vulnerabilities as they happen.

Learn more at www.microfocus.com/devops



Buyers Guide

PDF 2.0 offers many improvements to the PDF specification

BY JENNA SARGENT

As we continue to innovate in nearly every different aspect of technology, there is one thing that has remained relatively consistent: PDF. That is not to say it hasn’t grown, just that it has been a format that companies have been able to rely on for the past 24 years. “Anytime there’s a need to deliver a document, or make it deliverable, or produce a version of it that is as it existed at a particular time and date, PDF is just the inevitable file format,” said Duff Johnson, executive director of the PDF Association, an organization that promotes the adoption and use of the International Standard for PDFs. “It has no competition in its niche and its niche is fairly significant.” One of the challenges developers have faced when developing for PDF is that the specification gets very complicated. According to Johnson, the PDF specification is about a thousand pages long and is not similar to a lot of other specifications, especially to those coming from the world of HTML.
“PDF has a core syntax that’s very well understood and that has a wide variety of very good implementations,” said Matt Kuznicki, CTO of Datalogics and chairman of the PDF Association. “But there’s a lot to PDF that a lot of people out there don’t know a lot about.” Even though it is fairly easy to get the basics of PDF down, when companies start to want to do more and utilize the full capabilities of PDF, it becomes harder for them. “Not all implementations are created equally so what we find is that the breadth, and the depth, and the quality of support for PDF features in different toolkits and different SDKs can vary quite a bit.”
“A qualified PDF developer must be skilled in a broad spectrum of technologies, including cryptography, computer graphics and compiler construction, and must familiarize themselves with the range of application areas in which the format is used,” said Hans Bärfuss, CEO of PDF Tools. PDF Tools suggests developers focus on the application-oriented functions of

PDF and leave the more basic functions to be handled by a software library. “The easier such a library is to use, the quicker the developer can implement the desired requirements, and effective support from the manufacturer plays an important role in this,” Bärfuss said. Earlier this year, the PDF 2.0 standard was released, featuring many improvements to the format. The new standard was created with the goal of refining the language of the PDF specification and smoothing over areas that were confusing for those who implement PDF, according to Kuznicki. According to Johnson, PDF 2.0 dramatically improves interoperability for PDF. “Developers can now read the specification, choose to support this or that PDF feature and implement it with very high confidence that other developers also supporting that feature will be reading off the same page, and thus improving interoperability,” he said. “It’s a far better specification, more complete, more detailed, and fully documented.” “PDF is part of a universe of docu-
continued on page 43 >
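Leaving the low-level syntax to a library, as Bärfuss suggests, can be illustrated with the open-source ReportLab package (pip install reportlab). ReportLab is not one of the vendors covered in this guide; it is used here only to show how little PDF internals a developer has to touch when a library handles page construction.

# Sketch: produce a one-page PDF without writing any PDF syntax by hand.
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def make_invoice(path: str) -> None:
    c = canvas.Canvas(path, pagesize=letter)   # one US-letter page (612 x 792 points)
    c.setFont("Helvetica-Bold", 16)
    c.drawString(72, 720, "Invoice #1001")
    c.setFont("Helvetica", 11)
    c.drawString(72, 700, "Generated without touching the PDF syntax directly.")
    c.showPage()
    c.save()

if __name__ == "__main__":
    make_invoice("invoice.pdf")

The commercial SDKs in the guide that follows expose their own, typically richer, APIs for forms, signatures, conversion and archiving on top of the same principle.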


A guide to PDF management tools

■ Accusoft: PDFXpress is a full-featured PDF SDK that makes it fast and easy to enhance your application with a broad range of PDF features, including file creation, editing, text and image extraction, and standard PDF security, using easy-to-implement, concise code. Users can rapidly render large PDF images and files, apply customizable compression settings, and perform lossless compression to reduce file size without sacrificing render quality.

■ ActivePDF: Over 14 years, ActivePDF has developed and refined a comprehensive collection of PDF automation tools that make development easy. ActivePDF helps avoid delays, downtime and headaches. More than 23,000 satisfied customers have chosen ActivePDF, from startups to Fortune 100 companies.

■ Adobe: A company defined by its market-leading PDF technology, Adobe offers Adobe Document Cloud for document management across mobile devices and PCs. The Document Cloud features the Adobe Acrobat DC PDF solution, which provides a touch interface for document management through native mobile apps.

■ Amyuni: Amyuni provides developers and system administrators with high-performance PDF conversion and processing tools. Certified for Windows desktops and servers, Amyuni PDF Converter enables developers to easily integrate powerful PDF and PDF/A functionality into their applications with just a few lines of code. Amyuni PDF Creator produces optimized PDF documents and is available for .NET, WinRT and ActiveX.

■ Aspose: Aspose creates file format APIs that help .NET and Java developers work with documents. Aspose.Pdf for .NET and Aspose.Pdf for Java are APIs for creating, editing and converting PDF files. They support a wide range of features, from simple PDF file creation, through layout and formatting changes, to more complex operations like managing PDF forms, security and signatures. The company also provides PDF solutions for Cloud, Android, SharePoint, Reporting Services and JasperReports.

FEATURED PROVIDERS

■ Datalogics: Datalogics provides best-of-breed PDF technologies for developers. The Adobe PDF Library is a multi-platform API offering a wide range of PDF manipulation and printing capabilities, with Adobe’s staple color and font accuracy. PDF Java Toolkit is a pure Java API with robust support for PDF forms and digital signatures.

■ PDF Tools: PDF Tools provides PDF solutions supporting the entire PDF and PDF/A process, including conversion, validation, rendering, manipulation, optimization, security and signatures. The 3-Heights components and solutions are designed to handle large volumes quickly and reliably, providing high-quality PDF and PDF/A-compliant documents for further processing or digital long-term archiving.

■ CeTe: CeTe Software’s DynamicPDF product line, including Merger, Generator, Viewer, Rasterizer, PrintManager and Converter, provides developers access to a complete integrated PDF solution. Functionality includes PDF creation and manipulation, PDF conversion (to and from PDF), PDF printing, as well as an embeddable PDF viewer. The DynamicPDF libraries and components have functionality for .NET (C# and VB.NET), Java and COM/ActiveX.

■ ComponentPro: Ultimate PDF for .NET is a 100%-managed PDF document component that helps you add PDF capabilities to .NET applications. With a few lines of code, developers can create a complex PDF document from scratch or load an existing PDF file without using any third-party libraries or ActiveX controls. The Ultimate PDF component also offers many features, including drawing text, images, tables and other shapes; compression; hyperlinks; security; and custom fonts. PDF files created with the Ultimate PDF component are compatible with all versions of Adobe Acrobat, as well as with the free Acrobat viewer from Adobe.

■ GrapeCity: Within the ComponentOne Studio product, GrapeCity provides UI controls for application development. Its offering includes PDF controls for creating and viewing PDF documents in Windows, web, and Windows Store apps without requiring users to install Adobe Acrobat. With the ComponentOne Studio PDF control for WinForms, WPF, UWP, MVC, ASP.NET, and Silverlight, users may generate and view full-featured reports with encryption, compression, outlining, hyperlinking, attachments, and everything else PDF users need. The new FlexReport reporting engine exports to PDF, includes FlexViewer for Windows and web apps, and supports PDF viewing with full navigation features.

■ LEADTOOLS: LEADTOOLS’ Document Imaging toolkits include a full suite of PDF SDK technology for viewing, editing, creating and converting PDF and Office formats. The Document Viewer framework includes an advanced set of tools such as text searching, annotations, memory-efficient paging, inertial scrolling, and vector display. Developers can implement comprehensive PDF reading, writing and editing with support for the extraction of text, hyperlinks, bookmarks, digital signatures, PDF forms and metadata, as well as updating, splitting and merging pages from existing PDF documents.

■ Persits Software: Persits Software’s AspPDF and AspPDF.NET are feature-packed server components for managing Adobe PDF documents in ASP and .NET environments, respectively. Their simple and intuitive programming interface enables a web application to perform many useful PDF-related functions, such as form fill-in, HTML-to-PDF and PDF-to-image conversion, text extraction, stamping, digital signing, automatic printing, and barcode generation, in just a few lines of script. Free, fully functional 30-day evaluation versions are available.







■ ORPALIS: GdPicture.NET offers extended support of the PDF format for .NET (C# and VB.NET) and non-managed applications written in VB6, Delphi, Microsoft Access and more. Its numerous features include full Unicode support, PDF/A generation, digital signature support, PDF merging and splitting, PDF modification, PDF rasterization, and PDF creation with interactive form fields. With GdPicture.NET, you can also repair corrupted PDFs, add or extract fonts, and draw barcodes and annotations on documents.

■ Qoppa: Qoppa Software offers an extensive suite of PDF libraries and visual components that cover all PDF processing needs. PDF functions include creation and modification, assembly, conversion to images and HTML, automated printing, encryption and digital signatures, form fields, viewing and markup, optimization, and a lot more. Qoppa products provide the highest level of performance and reliability and are 100% Java, so they run on all server and desktop operating systems.

■ TallComponents: TallComponents offers reliable and proven .NET class libraries for desktop, server, mobile and cloud to create, modify, convert, read, print and render PDF documents. The libraries are written entirely in C#, have no external dependencies such as Adobe Reader, and are characterized by an intuitive API combined with knowledgeable and fast support.

■ PDFTron: PDFTron provides powerful cross-platform PDF APIs enabling app development for desktop/server, mobile and web apps, with consistent, high-quality output as well as top-notch performance on even the most complex files. PDFNet SDK APIs can be accessed from any language/platform (Xamarin/C#, JavaScript, C++, Java, Objective-C, etc.), providing support for annotation, collaboration, forms, digital signing, editing, printing, file conversion, redaction, and more. PDFTron’s WebViewer technology enables viewing and embedding PDF, Office and other formats in any HTML5 app on any device. PDFNetJS is the latest addition to PDFTron’s web-based technologies, enabling users to view, annotate and edit PDFs directly in any modern desktop browser.

■ Glyph & Cog: Glyph & Cog offers a full line of software components designed to help developers add PDF capabilities to their applications. Functionality includes PDF viewing (Qt and ActiveX), printing, text extraction, and more, with cross-platform support for Windows, Mac and Linux. Glyph & Cog’s newest product is PDFdeconstruct, a tool that decomposes PDF content into an XML file. ❚
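For a sense of the kind of task these SDKs automate, here is a minimal sketch of programmatic PDF creation. It uses the open-source pdf-lib package for Node.js rather than any of the toolkits listed above, since each vendor exposes its own API; the file name and document text are invented for the example.

```typescript
// Minimal programmatic PDF creation using the open-source pdf-lib package
// (npm install pdf-lib). Illustrative only; the commercial SDKs above have their own APIs.
import { writeFileSync } from "fs";
import { PDFDocument, StandardFonts, rgb } from "pdf-lib";

async function createInvoiceStub(path: string): Promise<void> {
  const doc = await PDFDocument.create();
  const page = doc.addPage([612, 792]); // US Letter, in points
  const font = await doc.embedFont(StandardFonts.Helvetica);

  // Draw a single line of text near the top of the page.
  page.drawText("Invoice #0001", {
    x: 72,
    y: 720,
    size: 18,
    font,
    color: rgb(0, 0, 0),
  });

  const bytes = await doc.save(); // returns a Uint8Array
  writeFileSync(path, bytes);
}

createInvoiceStub("invoice.pdf").catch(console.error);
```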

< continued from page 39

ment formats such as HTML5, SVG, etc., and rather than standing alone, is part of broader multichannel communications,” said Bärfuss. “PDF should therefore be increasingly focused on interoperability with these formats, and include characteristics such as responsiveness. However, it should also not lose its role as an ‘electronic paper’ format.”

Even though PDF 2.0 has been released, work continues to improve it. According to Johnson, the ISO committee that owns PDF met in San Jose at the end of October to discuss its plans for the future, including whether to release PDF 2.1 soon or wait until there is a substantial update. He expects that the 2.0 standard will be the version of PDF that developers continue to implement for the next five to 10 years, at least.

“I expect work that happens with PDF past 2.0 will really build in an evolutionary manner on PDF’s capabilities while continuing to keep as foundational the notion of being a reliable and a portable document format and a means for people to convey information to others,” said Kuznicki.

“The future for PDF looks very bright; there’s really nothing else on the horizon that can do what PDF does, and it’s very clear that PDF meets critical business needs in a vast variety of workflows,” said Johnson. ❚
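One small, practical consequence for implementers: a file’s claimed version is visible in its header, so build or ingest pipelines can at least flag when PDF 2.0 documents begin to appear. The sketch below uses plain Node.js file I/O, no PDF library, and the file name is invented; note that a catalog /Version entry can legitimately supersede the header value, which this quick check ignores.

```typescript
// Read the PDF header (e.g. "%PDF-1.7" or "%PDF-2.0") from the first bytes of a file.
// A catalog /Version entry may override the header; this quick check ignores that case.
import { openSync, readSync, closeSync } from "fs";

function pdfHeaderVersion(path: string): string | null {
  const fd = openSync(path, "r");
  try {
    const buf = Buffer.alloc(16);
    readSync(fd, buf, 0, buf.length, 0); // read the first 16 bytes
    const match = buf.toString("latin1").match(/%PDF-(\d\.\d)/);
    return match ? match[1] : null;
  } finally {
    closeSync(fd);
  }
}

console.log(pdfHeaderVersion("example.pdf")); // e.g. "2.0"
```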

Utilizing the new features of PDF 2.0

SD Times talked to companies about how they are utilizing the new features of PDF 2.0 to make their products better.

Matt Kuznicki, CTO, Datalogics
As a provider of PDF developer and SDK tools, our products enable our customers to take advantage of the expanded functionality of PDF 2.0. We, too, are keen to see the continued progress of PDF 2.0 and how our customers will develop new and innovative workflows, tools, and solutions for their end users. We have supported, and will continue to support, the PDF 2.0 standard moving forward. In fact, Datalogics donated an initial set of example PDF 2.0 files to the PDF Association to help in its mission of providing technical resources and education to PDF practitioners. Datalogics will maintain active participation in defining the 2.0 standard and supporting its future.

Hans Bärfuss, CEO, PDF Tools
PDF 2.0 offers a wide variety of new features for various application areas. Some of them have already been implemented in our products and others will be implemented in the near future. Here are some examples:

Digital signatures are an important means to guarantee the integrity and authenticity of transaction documents. PDF 2.0 has been extended to support ECC-based certificates, CAdES signatures, long-term validation, the document security store and timestamp dictionaries. These features help to realize separate signature creation and verification processes over a long time period.

Prepress documents can now profit from new features such as the use of black point compensation, page-level output intents, external output intents, halftone origins, and extensions to output intents such as mixing hints and spectral data. These features make it easier to process hybrid documents from various sources and for various output devices.

Documents for mass printing can now be structured using document parts. This feature helps to implement document-driven print and packaging processes.

Documents for interactive use may now contain more features such as rich media annotations, measurement and point data, 3D measurements, various go-to actions and many more.

Many features have not been mentioned here. Nevertheless, they will be added to our products if there’s demand for them in the years to come.
—Jenna Sargent


Analyst View BY ARNAL DAYARATNA

Rethinking digitized preparedness
Dr. Arnal Dayaratna is Research Director, Software Development at IDC.

One of the questions raised by the August and September hurricanes that wreaked massive destruction on Texas and Puerto Rico concerns the role of software and digital transformation in accelerating rescue and recovery efforts. The question takes on amplified relevance because of the dramatic acceleration of digital transformation initiatives in almost every industry vertical in the U.S. over the last decade. Has the industry-wide acceleration of digital transformation initiatives translated into a corresponding enrichment of applications that facilitate rescue and recovery efforts specific to major hurricanes such as Harvey, Irma and Maria? Questions about the current state of software as they relate to disaster management go right to the heart of software’s ability to enrich, enhance and preserve human lives.

FEMA, for example, leverages a constellation of software applications to model flooding, rainfall and the effects of weather, more generally, across different geographies. In addition to using discrete applications, FEMA maintains disaster-related databases used to disseminate grants and disaster-related information to states and other agencies that are eligible to receive funding.

With respect to consumer software, the proliferation of mobile apps includes the FEMA app and the Red Cross app, both of which provide information about weather alerts, the locations of emergency shelters, tips on how to prepare for a hurricane and advice regarding what to do after a hurricane has passed. Zello, a walkie-talkie-type app, allows users to talk via a WiFi or cellular connection, whereas Waze provides information about real-time traffic conditions based on data collected through a social network of drivers. Meanwhile, GasBuddy provides insight about the cheapest nearby gas stations as well as data about which gas stations have gas and power. Facebook’s “Mark Safe” feature and apps such as Snap Map allow users to communicate their status to loved ones. Like Facebook, Snap Map allows users to photographically document stories of their experience, while Twitter famously enables users to provide status updates regarding a natural disaster.

Although the software world has witnessed a veritable proliferation of web and mobile apps focused on emergency management, disaster readiness and emergency preparedness, their functionality remains either highly centralized, insofar as they are administered by discrete agencies such as FEMA or the Red Cross, or specialized, as measured by their focus on specific use cases such as traffic and gas. Centralized apps inherently circumscribe the domain of data purveyed to consumers. Meanwhile, specialized apps perform a different kind of circumscription by focusing on one use case, even though their associated data may well be crowdsourced from users.

The centralized and specialized qualities of consumer-focused apps in the emergency preparedness space mean that opportunities abound for app developers to aggregate discrete apps into a Platform-as-a-Service (PaaS)-based digital emergency preparedness kit that enhances the variety, availability and integrative capabilities of emergency-related applications. For example, an integrated digital disaster emergency preparedness platform empowers users to tap into a library of emergency-related applications that differentially address additional important use cases such as medical services, pharmacy and medication management, food, drinking water, public health, looting and crime. Moreover, a digital disaster emergency preparedness kit provides opportunities to integrate data from discrete apps and deliver integrated, holistic solutions to parties interested in emergency preparedness and emergency management. An integrated platform that houses emergency preparedness apps allows users to obtain preconfigured “kits” of applications while concurrently personalizing their digital emergency response kit as deemed appropriate.

The space of enterprise-focused software solutions for emergency management has matured considerably since 9/11, particularly given the increasing sophistication of risk management software frameworks in the private sector that facilitate compliance with regulatory protocols such as ORSA (Own Risk and Solvency Assessment). The landscape of consumer-based apps for emergency management has experienced a corresponding maturation, but apps for emergency management are often siloed and stand to benefit from inclusion within an integrated digital disaster emergency preparedness kit. ❚
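As a rough illustration of the “kit” idea, the sketch below models a catalog of emergency apps tagged by use case and a helper that assembles a personalized kit from a preconfigured base. Every name in it (the use cases, the catalog entries, the hurricaneKit selection, the assembleKit function) is hypothetical; no existing platform is being described.

```typescript
// Hypothetical model of a "digital emergency preparedness kit".
// All app names, categories and helpers below are invented for illustration.
type UseCase = "alerts" | "shelter" | "traffic" | "fuel" | "medical";

interface EmergencyApp {
  name: string;
  useCases: UseCase[];
}

// A small, made-up catalog of apps tagged by the use cases they cover.
const catalog: EmergencyApp[] = [
  { name: "AgencyAlerts", useCases: ["alerts", "shelter"] },
  { name: "RoadStatus", useCases: ["traffic"] },
  { name: "FuelFinder", useCases: ["fuel"] },
  { name: "MedLocator", useCases: ["medical"] },
];

// A preconfigured kit is just a named selection of use cases...
const hurricaneKit: UseCase[] = ["alerts", "shelter", "traffic", "fuel"];

// ...and a personalized kit adds the user's own concerns on top of it.
function assembleKit(base: UseCase[], personal: UseCase[] = []): EmergencyApp[] {
  const wanted = new Set<UseCase>([...base, ...personal]);
  return catalog.filter((app) => app.useCases.some((u) => wanted.has(u)));
}

console.log(assembleKit(hurricaneKit, ["medical"]).map((a) => a.name));
// -> [ 'AgencyAlerts', 'RoadStatus', 'FuelFinder', 'MedLocator' ]
```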



Guest View BY SCOTT SHIPP

Software reflects teams that build it

Stop me if you’ve heard this one. To deliver more customer value, a software team decides to upgrade the database. They talk to their operations people, who say, “Sorry, we can’t upgrade to the latest version because another team shares the database, and their application won’t support it.” Resigned to their fate, they put the card in the backlog for a faraway day, and shelve any planned features that relied on it.

Not funny? I agree. The sad part is the word “database” in the story could variably be replaced with operating system, application server, VM platform, etc., and the story would ring true with just as many people. If you’ve worked in the trenches of software half as many years as I have, you know that things like this are the hidden friction in software. Often, the teams outside of an application are coupled together by shared dependencies as much as the code inside the application.

Lister and DeMarco famously observed that “the major problems of our work are not so much technological as sociological in nature.” But I wonder if anyone has considered what technology (especially software) and people might share in common? Consider Conway’s Law, which makes such a connection: “organizations which design systems… are constrained to produce designs which are copies of the communication structures of these organizations…” Conway’s insight is that software built by teams will be comprised of pieces roughly corresponding to those teams. But I think we can extrapolate further.

We all agree that software with certain properties is easier to change. Its components should have low coupling, high cohesion, well-defined interfaces, clear contracts, and so on. Extrapolating from Conway’s Law, we get a new observation: if we want to produce software with these properties, our teams must also have them. Teams should have strong separation of concerns, few dependencies, clearly defined boundaries, etc. Teams like that are able to evolve their piece of the system independently, making the entire organization more agile.

For one example, take the rising popularity of the “platform” team. Ironically, organizing teams within a software company around an internal platform produces a sociological problem that mimics a classic software design paradox almost perfectly. It encourages reuse, which is good, but it also enmeshes the platform team in far too many decisions. They get coupled to every team, because they’re like a class imported into every other class in an application. Whenever it changes, every dependent class must change as well.

Another example can be seen in “vertically siloed” companies, where teams are organized by role: finance, product, engineering, support, operations, etc. In this scheme, how many different teams need to be involved to deliver a single customer feature? What does attendance at a project kickoff meeting look like? It reminds me of having a class with ten parameters in its constructor.

I think it’s worth asking how many dependencies your team has, even if it is cross-functional. You may be surprised how many there are. Try making a “Conway system diagram” of your company: take a normal system diagram and mark each component with the team or teams that have responsibility for it, then connect all the teams required to enhance, support, or maintain a single customer scenario. (A toy version of this exercise appears in the sketch after this column.)

Minimizing team dependencies isn’t the only application, though. An analog to the Single Responsibility Principle might be that teams should have only one reason to change focus. If a team has too many customer concerns, maybe their attention is fractured and the resultant context-switching is killing them.

A final example might be seen in how adding a security review phase to a project always results in inadequate security in the product. The “bolt on” nature of such an approach violates the software design principle of “secure by design.” Teams that care about building secure software should place that knowledge in the team itself, and it should be present throughout the project lifecycle.

By thinking of team organization with a software lens, you may find key insights leading to better ways to organize. Conway’s Law is practical, not merely theoretical. Independent and de-coupled teams, like independent code modules, will be better positioned to produce software that can evolve, and software with evolvability allows the organization to deliver customer value more quickly and effectively. ❚

Scott Shipp is a software engineer with a Master of Software Engineering degree from Seattle University.
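A toy version of the “Conway system diagram” exercise, with entirely invented component and team names: map each component to an owning team and to its shared dependencies, then count how many teams must coordinate to change it.

```typescript
// Toy "Conway system diagram": map components to owning teams and shared dependencies,
// then list how many teams must coordinate to change each component.
// All names here are hypothetical.

const owners: Record<string, string> = {
  webApp: "Product Eng",
  ordersService: "Orders Team",
  sharedDatabase: "Ops",
  platformLib: "Platform Team",
};

const dependencies: Record<string, string[]> = {
  webApp: ["ordersService", "platformLib"],
  ordersService: ["sharedDatabase", "platformLib"],
  sharedDatabase: [],
  platformLib: [],
};

// Teams involved in changing a component: its owner plus the owners of everything it depends on.
function teamsInvolved(component: string): Set<string> {
  const teams = new Set<string>([owners[component]]);
  for (const dep of dependencies[component] ?? []) {
    for (const t of teamsInvolved(dep)) teams.add(t);
  }
  return teams;
}

for (const component of Object.keys(owners)) {
  console.log(component, "->", [...teamsInvolved(component)].join(", "));
}
// webApp, for instance, requires four teams to coordinate in this toy example.
```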



Industry Watch BY DAVID RUBINSTEIN

Of Serverless, Backendless and Codeless
David Rubinstein is editor-in-chief of SD Times.

Serverless technology is being called the next generation of the cloud. The first layer abstracted organizations from their physical servers. (Serverless, like cloud, of course, doesn’t literally mean ‘no server;’ it simply means not YOUR servers!) In its simplest terms, serverless is about developers writing code as a function, which the cloud provider then hosts and runs on demand. The benefits are both economic (the organization behind the application only pays for its usage, not idle time) and agility.

Nate Taggart, CEO of a startup called Stackery, explained that serverless aligns with DevOps in that it’s about shipping software quickly. But it does introduce challenges, as developers become responsible for not just iterating the code, but now also the underlying infrastructure. Like it or not, Taggart said, “developers are now part of provisioning and the monitoring cycle” of applications. There is a blurring of responsibilities between developers and operations teams, but organizations should be focused not on deploying software so much as maintaining the software’s health over time.

Serverless helps organizations drive the most value for customers while running on the least amount and lowest cost of infrastructure, he said. “Servers are traditionally overprovisioned to have availability and maximize customer experience, and companies are way overpaying for servers they aren’t running at 100 percent.”

In a serverless world, he explained, an application composed of dozens or hundreds of functions sits in a cloud, and when those functions are idle, there is no cost. This architecture is function-as-a-service, where the code is a function that is triggered by events. Cloud providers might put these functions on several servers to balance them out, by understanding which functions get triggered frequently and which are seldom activated, a practice known as ‘traffic shaping,’ Taggart said.

Stackery grew out of the application performance space, and has been in business a little over a year now, he said. Serverless has gone from being seen as development technology to being seen as operations technology with a broader business impact, he said. But, he noted, it is to the industry’s discredit “if we speak of serverless as a replacement for IT. Serverless is the compute layer, but we still have the database layer, and we still have the network. AWS Lambda (for example) doesn’t do those pieces. IT still needs to maintain data fidelity, backups, network security…”

Taggart said he sees serverless as the next generation of cloud. The first generation abstracted the physical server, and “now, we’re abstracting the virtual idea of the server.”

Mark Pillar is the founder of Backendless Corp.; the term backend-less predates serverless, he said. The company has built an abstraction layer that provides developers with a front-end console to see data and files or manage users, and lets developers use API calls to save something in the database, or validate a user with all the business processes that follow: all functions traditionally handled on the server side. “As a result, developer productivity skyrockets, because they’re freed of all the tasks that normally they would have to allocate all their time for.”

But the biggest benefit, he said, is that by launching the app, it’s automatically ready to scale to millions of users on Day 1, because the back end is completely, automatically scalable. “If an app becomes extremely popular, and tens of thousands of users download the app, the servers continue chugging along and handling those transactions without any slowdown. As the user base grows, and there are more devices that have the application installed, all of the requests are being sent to our servers where we scale out the back end automatically.”

Pillar explained that the backend-less platform has three tiers on the back end. “There are virtual instances of the web tier, an app server, and the database. Depending upon where we see bottlenecks form, we do balancing and re-routing to handle every request as quickly as possible.”

Add to all this Codeless, which Pillar said was released in August, and developers can create custom business logic without writing a single line of code: his company’s entry into the low-code/no-code category.

So, we’re entering into a world where developers can create business applications without writing code, running with back ends on the cloud. It’s certainly an abstract view of the world. ❚
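To make the function-as-a-service model described above concrete, here is a minimal event-triggered handler written in the style of AWS Lambda’s Node.js runtime. The event shape and the notifyUser helper are invented for the example; a real function would also need to be packaged and configured with the cloud provider, and would typically be wired to a specific event source such as an object-storage upload or an HTTP request.

```typescript
// Minimal function-as-a-service sketch in the style of an AWS Lambda Node.js handler.
// The event fields and the notifyUser helper below are hypothetical examples.

interface UploadEvent {
  userId: string;
  fileName: string;
}

// Hypothetical side effect; in practice this might call a queue, email or push service.
async function notifyUser(userId: string, message: string): Promise<void> {
  console.log(`notify ${userId}: ${message}`);
}

// The cloud provider invokes this export whenever the triggering event fires
// (for example, a file landing in object storage). No server is provisioned
// or paid for while the function is idle.
export const handler = async (event: UploadEvent) => {
  await notifyUser(event.userId, `Received ${event.fileName}`);
  return { statusCode: 200, body: "processed" };
};
```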






