SD Times - July 2017


JULY 2017 • VOL. 2, ISSUE 1 • $9.95 • www.sdtimes.com



Contents

VOLUME 2, ISSUE 1 • JULY 2017

NEWS
News Watch
C is for cognitive
WWDC: Apple App Store redesigned for the first time
IBM’s journey: Building blockchain in the enterprise
EA in the cloud drives digital shift
Updating the book on AI and games
Citizen developers are a necessity
State of DevOps report released
How data science improves ALM

FEATURES
Climbing the IoT data mountain
Top 10 considerations when planning Docker-based microservices
DevOps: Continuous integration and delivery: Getting an end-to-end perspective

COLUMNS
INDUSTRY WATCH by David Rubinstein: The Times, it is a-changin’
GUEST VIEW by Ciaran Dynes: Subscription models fuel innovation
ANALYST VIEW by Peter Thorne: Quantifying software quantities

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 225 Broadhollow Road, Suite 211, Melville, NY 11747. Periodicals postage paid at Huntington Station, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2017 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 225 Broadhollow Road, Suite 211, Melville, NY 11747. SD Times subscriber services may be reached at subscriptions@d2emerge.com.



EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, 631-421-4154, drubinstein@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITORS: Christina Cardoza, ccardoza@d2emerge.com; Madison Moore, mmoore@d2emerge.com
SENIOR ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com
CONTRIBUTING WRITERS: Lisa Morgan, Alexandra Weber Morales, Frank J. Ohlhorst
CONTRIBUTING ANALYSTS: Rob Enderle, Michael Facemire, Mike Gualtieri, Peter Thorne

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, adtraffic@d2emerge.com
LIST SERVICES: Shauna Koehler, skoehler@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com
WESTERN U.S., WESTERN CANADA, EASTERN ASIA, AUSTRALIA, INDIA: Paula F. Miller, 925-831-3803, pmiller@d2emerge.com

PRESIDENT & CEO: David Lyman
CHIEF OPERATING OFFICER: David Rubinstein

D2 EMERGE LLC, 225 Broadhollow Road, Suite 211, Melville, NY 11747, www.d2emerge.com


Industry Watch
BY DAVID RUBINSTEIN

The Times, it is a-changin’

There’s a saying that goes ‘when one chapter closes, another one begins.’ This issue of SD Times marks the close of the BZ Media chapter of this publication’s history and opens the chapter on D2 Emerge LLC, a new-age publishing and marketing company founded by two long-time members of the SD Times team: the publisher, David Lyman, and the editor-in-chief … me!

We will work hard to maintain the quality of SD Times and build on the solid foundation that has been built over the past 17 years. Wherever we go, we hear from readers who tell us they look forward to each issue, and they say they’re learning about things they didn’t know they needed to know. And we’re proud of that.

The accolades are certainly nice — and always welcome. Yet there is nothing more important to us than the stories we tell. Whether putting a spotlight on new trends in the industry and analyzing what they mean, profiling the amazing, brilliant people behind the innovation in our industry, or helping software providers tell their unique stories to the industry, our mission is to inform, enlighten and even entertain.

But as much as things will stay the same, there will be some changes. We will look to introduce you to different voices and perspectives from the industry, inviting subject matter experts to share their knowledge and vision of changes in our industry. The exchange of ideas and free flow of information are the bedrock of our publishing philosophy.

We will somewhat broaden the scope of our coverage to include topics that might once have been thought of as ancillary to software development but are now important areas for you to follow as silos explode and walls come tumbling down in IT shops around the world.

We will work to improve our already excellent digital offerings by bettering the user experience and the way in which we deliver content to you. So, whether you’re reading SD Times on a desktop at work, on a tablet at a coffee shop, or even on your cellphone at the beach, we want you to have the same wonderful experience.

For our advertisers, we will help guide you toward the best way to reach our readers, whether through whitepapers, webinars, or strategic ad placement across our platforms. And we will look to add to an already robust list of services we can provide to help you tailor your messages in a way that best suits our readers.

BZ Media was a traditional publishing company, with a print-first attitude (only because there weren’t any viable digital platforms back in 2000). D2 Emerge offers an opportunity to strike the right balance between a digital-first posture and all that is good about print publishing.

I would be remiss if I didn’t acknowledge BZ Media founders Ted Bahr and Alan Zeichick, who took a cynical, grizzled daily newspaperman and turned him into a cynical, grizzled technology editor. But as I often say, covering this space is never dull. Years ago, I covered sports for a few newspapers, and after a while, I saw that I had basically seen every outcome there was: a walk-off home run, a last-second touchdown, a five-goal hockey game. The only thing that seemed to change was the players. Sure, once in a while a once-in-a-lifetime player comes along, and we all enjoy his feats. But mostly, sports do not change.

Technology, on the other hand, changes at breakneck speed. As we worked to acquire SD Times, I had a chance to look back at the first issues we published, and realized just how far we’ve come. Who could have known in 2000, when we were writing about messaging middleware and Enterprise JavaBeans, that one day we’d be writing about microservices architectures and augmented reality? Back then, we covered companies such as Sun Microsystems, Metrowerks, IONA, Rational Software, BEA Systems, Allaire Corp., Bluestone Software and many more that were either acquired or couldn’t keep up with changes in the industry. The big news at the JavaOne conference in 2000 was extreme clustering of multiple JVMs on a single server, while elsewhere, the creation of an XML Signature specification looked to unify authentication, and Corel Corp. was looking for cash to stay alive after a proposed merger with Borland Corp. (then Inprise) fell apart.

So now, we’re excited to begin the next chapter in the storied (pardon the pun) history of SD Times, and we’re glad you’re coming along with us as OUR story unfolds.

David Rubinstein is editor-in-chief of SD Times.




NEWS WATCH

GitHub releases 2017 Open Source Survey
As open-source software continues to become a critical part of the software industry, GitHub wants to ensure the community understands this pervasive landscape. The organization recently released an open set of data designed to help researchers, data enthusiasts and open-source members comprehend the overall needs of the community. Some of the major findings highlight how valued documentation is to developers, even though it is often overlooked. The open-source data also reveals the impact of negative interactions, how open source is used around the world, and who makes up the open-source community.

Microsoft introduces Draft for Kubernetes app development
Microsoft announced a new open-source development tool for cloud-native applications that run on Kubernetes at CoreOS Fest in May. This is the first Deis announcement since Microsoft acquired the Kubernetes company back in April. Draft was developed to address the complexity and constraints the development community was facing when it came to working with Kubernetes.
“Application containers have skyrocketed in popularity over the last few years. In recent months, Kubernetes has emerged as a popular solution for orchestrating these containers. While many turn to Kubernetes for its extensible architecture and vibrant open-source community, some still view Kubernetes as too difficult to use,” Gabe Monroy, lead PM for containers on Microsoft Azure, wrote in a blog post.

FORMAC developer Jean Sammet passes away
Jean E. Sammet, a computer scientist widely known for developing the programming language FORMAC (Formula Manipulation Compiler), passed away late last month. Sammet was 89 years old.
Throughout her life, Sammet developed FORMAC, served as the first female president of the Association for Computing Machinery (ACM), helped design the COBOL programming language, and received a number of awards in the field, like the Ada Lovelace Award and the Computer Pioneer Award.
Sammet’s career started in mathematics, and she turned to programming in 1955. After she received the 2009 IEEE Computer Society Pioneer Award, she was asked how she got involved in the computer field. She said: “In 1955, I was working at Sperry Gyroscope company on Long Island, and I was doing mathematical work involving submarines and torpedoes, and my boss came over to me one day and said, ‘Do you know that we have a couple of engineers who are building digital computers?’ My answer was yes, I didn’t quite know what it meant, but yes, and he said, ‘Would you like to be our programmer?’ and I said, ‘What is a programmer?’ and his answer, and I kid you not, his answer was, ‘I don’t know, but I know we need one.’”
From there, she went on to teach computer programming classes at Adelphi, oversaw a team of developers for the U.S. Army’s Mobile Digital Computer for Sylvania, worked at IBM, organized the first Symposium on Symbolic and Algebraic Manipulation, became a member of the ACM Council, and became a fellow of the Computer History Museum (CHM).
“Jean Sammet was a leading figure in the study of computer programming languages. Her work has been widely recognized as an invaluable record of the origin and development of computer languages used since the start of the computing era,” CHM wrote.

Red Hat to acquire Codenvy
Red Hat is adding Codenvy, the developer tools and containerized workspaces provider, to its portfolio. The company has signed a definitive agreement to acquire Codenvy.
“Thanks to the increasing push towards digital transformation and the use of technology platforms, including apps, as a strategic business advantage, the role of the developer has never been more important. But accelerated innovation through agile development requires new approaches and tools,” said Craig Muzilla, senior vice president of the application platforms business for Red Hat.
Red Hat plans to make Codenvy an essential part of OpenShift.io, its recently announced hosted development environment for building hybrid cloud services. “When the transaction closes, Codenvy and Red Hat will combine resources to create an agile development platform for OpenShift-powered applications,” wrote Codenvy CEO Tyler Jewel in a blog post.

Node.js 8 is now live
Node.js 8 is now available with a big emphasis on debugging and developer workflow. The release had previously been delayed because the Node.js team wanted to give themselves the option to ship the 8.x release line with the TurboFan and Ignition compiler pipeline, which would become the default in V8 5.9. According to Node.js developer Myles Borins, “this would allow our next LTS release line to run on a more modern compiler + jit pipeline, making backporting easier and giving us a longer support contract from the V8 team.”
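The release’s developer-workflow focus is easiest to see in code. Below is a minimal sketch of one representative addition, util.promisify, which landed in Node 8 core and adapts callback-style APIs for use with async/await; the file-reading example itself is ours, not from the release notes.

```typescript
// util.promisify shipped in Node.js 8 core: it wraps any callback-style
// API so it can be used with promises and async/await.
import { promisify } from "util";
import { readFile } from "fs";

const readFileAsync = promisify(readFile);

async function main(): Promise<void> {
  // Read this source file and report its size.
  const contents = await readFileAsync(__filename, "utf8");
  console.log(`Read ${contents.length} characters`);
}

main().catch(err => console.error(err));
```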

Red Hat announces Red Hat Enterprise Linux 7.4 beta
Red Hat is updating its enterprise Linux platform with new security and compliance features, automation, and an improved admin experience. The company announced the beta release of Red Hat Enterprise Linux 7.4, designed to give enterprises a foundation to roll out new apps, virtualize environments and create secure hybrid clouds.
The latest beta release focuses on mission-critical deployments and defending against the latest threats, with support for network-bound disk encryption, enhancements to OpenSSL and HTTP/2.0, and updated audit capabilities. In addition, the release targets management and automation with the inclusion of Red Hat Enterprise System Roles, which simplifies the management and maintenance of Red Hat Enterprise Linux 6- and 7-based deployments. Other features include improvements to RAID Takeover, an update to NetworkManager, and support for new Performance Co-Pilot client tools.

Google releases open-source platform Spinnaker 1.0
Google is giving the open-source community another tool for continuous delivery and cloud deployments. Google has released Spinnaker 1.0, an open-source, multi-cloud continuous delivery platform that companies can use for fast, safe and repeatable deployments in production.
Back in November 2015, Netflix and Google collaborated to bring Spinnaker, a release management platform, to the open-source community. Since that initial release, Spinnaker has been used by several organizations, including Netflix, Waze, Microsoft, Oracle and Target.
Spinnaker 1.0 is open-source, comes with a rich UI dashboard, can be installed locally, on-premises or in the cloud, and runs either on a virtual machine or on Kubernetes.

N|Solid v2.2 adds Node data integration
To streamline the integration of Node.js application data into team workflows, NodeSource released N|Solid v2.2, which allows users to send Node application data directly to any statsd-compliant system. N|Solid’s streamlined integration with statsd-compliant systems makes it easy for teams to send Node application data to existing monitoring or reporting systems, and to integrate this data with existing metrics infrastructure.
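For readers unfamiliar with the statsd side of this, the protocol is just plain text over UDP, which is why so many systems can be “statsd-compliant.” The sketch below is a generic illustration of the wire format; the metric names, host and port are assumptions for the example, not N|Solid’s actual output.

```typescript
// Minimal statsd emitter: one metric per UDP datagram, formatted as
// <bucket>:<value>|<type> ("c" = counter, "g" = gauge, "ms" = timing).
import * as dgram from "dgram";

const socket = dgram.createSocket("udp4");
const metrics = [
  "myapp.requests:1|c",         // count one request
  "myapp.eventloop.lag:7|ms",   // report event-loop lag in milliseconds
  "myapp.heap.used:52428800|g", // gauge of current heap usage in bytes
];

for (const metric of metrics) {
  const buf = Buffer.from(metric);
  // 8125 is the conventional statsd port; localhost stands in for a real collector.
  socket.send(buf, 0, buf.length, 8125, "127.0.0.1");
}
socket.unref(); // allow the process to exit once the sends are flushed
```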

Mozilla: Political parties agree on supporting net neutrality
Despite their varying stances on today’s issues, there is one thing that Democrats, Republicans and Independents seem to agree on: protecting net neutrality. A recent poll from Mozilla and research firm Ipsos shows support across party lines for net neutrality, and it reveals that a majority of Americans do not trust the government to protect Internet access.
Mozilla found that 76% of Americans support net neutrality, with 81% of Democrats and 73% of Republicans in favor of it. Along with these findings, Mozilla found most Americans do not trust the Trump administration or Congress to protect the Internet (78% place no or little trust, according to the poll).
Other findings across the political spectrum include 78% believing that equal access to the Internet is a right, with 88% of Democrats, 71% of Independents, and 67% of Republicans in agreement. When it comes to corporations protecting access to the Internet, 54% of respondents distrust ISPs.
While the public waits for the FCC’s decision on net neutrality, Mozilla said it will continue to work with Americans to endorse net neutrality. According to the organization, it has garnered more than 100,000 signatures and over 50 hours of voicemail messages for the FCC. Mozilla is also joining Fight for the Future, Free Press, Demand Progress, and others in a call to all Internet users to defend net neutrality.

MapR unveils cloud-scale data fabric MapR-XD
As part of its rebranding effort to become a platform company, MapR is extending its Converged Data Platform to create a cloud-scale data store to manage files, objects and containers. MapR-XD supports any data type from the edge to the data center and multiple cloud environments.
“What we are hearing from our customers is that they are looking for a scalable storage platform, or data management platform, that does a lot of things including support for multisite, data centers, and cloud at the edge in one global namespace,” said Bill Peterson, senior director of industry solutions for MapR.

Google, IBM and Lyft announce Istio for microservices
Google, IBM and Lyft are merging some of the best practices they have learned around microservices to create a new open-source project called Istio. Istio was developed to connect, manage and secure microservices. The goal of the project is to tackle challenges around resilience, visibility and security.
Istio is a Layer 7 traffic monitoring and control network designed to work with Kubernetes everywhere, on-premises or in the cloud. Today, developers can manually enable the alpha release of Istio on Google Container Engine. The service mesh lets developers delegate many problems around visibility and security. It also gives developers and teams traffic encryption and automatic load balancing for HTTP, gRPC and TCP traffic.


BY MADISON MOORE

Companies today are expanding their collaboration efforts beyond workplace chat groups and creating software delivery teams through activities like pair programming or group hackathons. One longtime software engineer and scrum master thinks that companies can go a step further, creating collaborative groups through improv.

Wayde Stallmann, who currently works as an agile coach for Asynchrony Labs, has convinced companies that the lessons learned from improv activities can go on to create great software delivery teams, and more productive and collaborative meetings.

Most people are familiar with improv troupes, or groups of actors that get together in front of a live audience. Sometimes these groups are complete strangers, and sometimes, like in the film “Don’t Think Twice,” recently added to Netflix, they are a band of comedic friends who let the audience set the stage for them, working together to craft a hilarious story.

A look at some opening acts
Start daily stand-up meetings with a 3-minute improv warm-up. These warm-ups practice the art of collaboration, creativity, communication and trust.

BIG OL’ SENTENCE: The first person comes up with a simple, short sentence. Each successive person repeats the sentence, adding another detail. The progression could go something like this: “There’s a car.” “There’s a red car.” “There’s a red sports car.” “There’s a red sports car on fire.” The game goes until the sentence gets too long for people to repeat, at which time a new sentence must be started.

FREE ASSOCIATION: The first person says a random word. Each subsequent person tries to say a word that has some connection to the previous word. Everyone must listen for when the topic changes. If someone is stuck, they should toss out any random word.

MINISTER’S CAT: Each person uses the next letter of the alphabet and fills in the phrase “The Minister’s cat is a ____ cat,” where the blank is replaced with an adjective.

QUESTION: This could be played every day forever. The leader asks a question and everyone answers in turn. This could be:
• What is something in your car that is interesting or unusual?
• What’s the oldest thing in your refrigerator?
• How would you blow $200? Or maybe $200,000?
• Describe your perfect week-long vacation.

Source: Team First Development

When you bring the world of improv into a meeting, it promotes the same team-player qualities as a troupe performance, according to Stallmann. He identifies these qualities as collaboration, creativity, communication, and trust.

Every improv session or performance starts with three- or four-minute activities. He encourages companies to use these games during their daily standups or agile retrospectives. In fact, one of the great books on retrospectives, “Agile Retrospectives: Making Good Teams Great,” by Esther Derby and Diana Larsen, says that the secret to retrospectives is to get everyone talking in the first five minutes.

“Well, that’s another benefit of these games,” said Stallmann. “In the first few minutes, everyone talks and has equal voice. No one dominates the game, which sets the tone that no one dominates the meeting.”

Unlike other improv games, Stallmann’s strategies are not scenario-based; instead, the activities are more like warmup games with lessons to be learned. One of the activities Stallmann starts with is a game called Alphabet Conversation. The way it works is one person starts off with a statement or sentence beginning with the letter “A.” Then, the next person starts a sentence with the letter “B,” keeping the conversation going all the way to the letter “Z.”



Of course, to really show readers what the game is like, Stallmann tested it out with SD Times. Here’s a sample of the activity:


SD Times: After this call, I have to transcribe this interview.
Stallmann: But I want to make sure that I have time for a good lunch.
SD Times: Can’t you go to lunch around 12:45 pm?
Stallmann: I didn’t really want to do that, because I have a masseuse coming in at 12:30 pm.
SD Times: Everything will work out just fine... as soon as I transcribe this interview…
Stallmann: Fine? You call that fine?! Oh yeah, that is fine, let’s get to work on that interview right now!

“You and I, as a team, created a story from nothing, and neither of us could have done it on our own,” said Stallmann. “We get into the aspect of how a team can solve a problem that no one individually can solve.”

Not everyone approaches these “games” with open minds and arms, according to Stallmann. He has come into trainings and had teams that simply say, “We don’t play games.” When greeted like this, he encourages them to try the activities, and by the end of it, they are laughing, they have let their guard down, and they actually learn a skill from it, he said.

“When I was at AT&T as a Scrum Master, I had a team where we did a three-minute warmup game to start each standup, and we did this for two years straight,” said Stallmann. “They did it for a year after I left the team, which I think is testament to the fact that it wasn’t because I was asking them to do it.”

Stallmann does emphasize one thing: he doesn’t use these activities to promote faster software delivery through improv. That’s not the connection he is making, nor is he saying that doing these improv activities will ultimately lead to an “a-ha” moment from software delivery teams. He promotes the use of improv activities to create team players, which is exactly what today’s software delivery teams need.

And since creativity is subjective, it’s hard to put a metric around these games and say that an improv game led to a breakthrough in software design, said Stallmann. These games are more about opening up in the creative sense, to help get the team to more creative solutions and innovations.

One thing is for certain, he said: improv activities do get teams into the collaborative spirit, they get them focused, and they bring teams back “into the moment to be present.” Oftentimes in meetings, people’s minds wander, but when these games are played, there isn’t a moment that they aren’t thinking about the game, Stallmann said.

“You are focused and in the moment, and that being in the moment is where we want our software delivery teams to be,” said Stallmann. “The more times we practice, the better we get at it.”


Agile coach Wayde Stallmann, left, leads an improv session of warmup activities during a recent Scrum standup meeting. (Photo by Jason Tice)



C IS FOR COGNITIVE*
IBM, Sesame Workshop enhance childhood learning with new vocabulary app
BY MADISON MOORE

Ask Cookie Monster what is the “Letter of the Day,” and he just might give you three: IBM.

That’s because IBM and Sesame Workshop, the nonprofit organization behind the educational program Sesame Street, are working together to prove that it’s possible to enhance early childhood education experiences with a new cognitive vocabulary learning app.

Built on IBM and Sesame’s intelligent play and learning platform and powered by IBM Cloud, this ecosystem taps into IBM Watson’s cognitive capabilities and content from Sesame Workshop’s early childhood research, and is the first of many future cognitive apps and toys that will be built on the new platform.

The Vocabulary Learning App is an intelligent tutoring platform for early childhood education. It uses Watson’s natural language processing, pattern recognition, and other cognitive computing technologies to refine content and create personalized experiences for each child.

“This lends itself to a true transformation in early childhood education — enabling deep levels of personalized and adaptive learning globally, through multiple experiences in both digital and physical worlds,” said Chalapathy Neti, vice president of development and offering management at IBM Watson Education.

With its cognitive power, the Vocabulary Learning App continuously learns with a child as the child engages it, Neti explained. Instead of bombarding the child with words he or she may not understand, the app identifies each individual student’s ability level. It can identify words or areas that might need additional focus, “refining the experience to deliver content that engages and inspires a child — this ultimately helps advance the child’s vocabulary based on his/her acumen,” said Neti.

IBM and Sesame completed an initial pilot of the Vocabulary Learning App at one of the top urban school districts, Georgia’s Gwinnett County public schools. The Gwinnett pilot program is the first time that Sesame Workshop content and Watson technology have been tested by both students and educators.

According to Neti, IBM collected 18,000 feedback points from 120 students at Gwinnett. From these data points, IBM found that the app helped many students acquire new vocabulary words, like “camouflage” and “arachnid.”

“Not only did they learn the meaning of these new words, they began to naturally incorporate the words in their conversations throughout the classroom,” said Neti. “Furthermore, the pilot showed that the students really enjoyed learning through the videos and with the Sesame characters; this engagement led them to listen more closely and ask more questions.”

The teachers involved in the pilot noted that the app was a beneficial addition to their classroom, and during the pilot, several teachers found that kindergarteners were able to use challenging words (like arachnid) based on the progression of words they were exposed to in a two-week period.

Before rolling out the app to students and educators around the world, IBM will first expand the pilot program this fall; eventually, the company plans to release similar cognitive learning tools, like games and educational toys, said Neti.

*and cookie!

Students at Georgia’s Gwinnett County public schools play with a new cognitive vocabulary learning app from IBM and Sesame Workshop. (Photo: John O'Boyle/Feature Photo Service for IBM)





WWDC: Apple App Store redesigned for the first time

BY CHRISTINA CARDOZA

What would happen if suddenly all our applications disappeared and developers stopped developing them? Total chaos, according to Apple’s Worldwide Developers Conference held last month. The conference’s message was clear: The world is depending on application developers. To help developers keep the app economy alive, the company announced new software updates and a redesign of its App Store.

The App Store turns 9 this year, and for the first time ever, Apple is changing up the look and feel of the store. In the last nine years, the App Store has seen 500 million visitors weekly, has had more than 180 billion apps downloaded, and has paid out more than $70 billion to developers, with 30% of that coming in the last year alone.

“People everywhere love apps and our customers are downloading them in record numbers,” Philip Schiller, Apple’s senior vice president of worldwide marketing, said in a statement. “Seventy billion dollars earned by developers is simply mind-blowing. We are amazed at all of the great new apps our developers create.”

The new App Store will be designed to give users new ways to discover apps and learn about developers. The store will feature a new home tab called Today, and it will feature new dedicated spaces for games and apps. Other features coming soon include the ability to phase releases and to showcase in-app purchases within the App Store. This new redesign will be a part of iOS 11.

“Today, we are going to take the world’s best and most advanced mobile operating system and turn it up to 11,” Tim Cook, CEO of Apple, said at the event last month.

iOS 11 will feature a redesigned app drawer that makes message apps and stickers more accessible, updates to Apple Pay, new Siri capabilities, camera and photo enhancements, and a new Do Not Disturb While Driving mode. Siri will feature new machine learning and artificial intelligence to better understand and help users. The upcoming operating system will also include an Offload Unused Apps capability that automatically removes apps that are not in use, a new Control Center, and multitasking capabilities.

The company is also giving developers a new set of machine learning APIs they will be able to leverage in their own solutions, and plans to roll out a vision API and a natural language API. In addition, Apple announced a new augmented reality framework, ARKit, that developers can use to bring computer vision to their solutions and to implement and interact with objects. Developers can start accessing iOS 11 today, a public beta will be available later this month, and the release is slated for the fall.

The company’s other operating system is also getting a number of updates with the release of macOS High Sierra. The new macOS will feature better performance, autoplay blocking, tracking prevention, full-screen split view in Mail, H.265 for high-efficiency video coding, Metal 2, and Metal for VR. In addition, the operating system will include Apple File System (APFS) to provide a more modern file system experience, better performance, and enhanced security. Developers can access the latest release now; a public version is expected to roll out in a couple of weeks.

Other announcements included a new iPad Pro, Apple Pencil updates, new drag-and-drop capabilities, and the HomePod. The HomePod is a smart speaker, similar to Amazon Echo and Google Home, that provides Apple Music, natural voice interaction, and artificial intelligence technology.

The App Store is being redesigned to improve app discovery. (Photo: Apple)
Tim Cook stressed the importance of developers. (Image: Apple)


IBM’s journey: Building blockchain in the enterprise
BY CHRISTINA CARDOZA

IBM has embarked on a journey to take blockchain beyond cryptocurrency and build blockchain software for the business. Over the last couple of years, the company has developed a permissioned blockchain, IBM Blockchain, to solve one of the biggest problems the bitcoin blockchain didn’t: protecting data privacy in the context of industry and government regulations.

“After looking at bitcoin and the blockchain craze in general, we got this idea for a blockchain for business. We put the idea to a test in building out a new style of blockchain that was well-suited for regulated companies that had to interact and follow rules as well as pass audits if an audit would occur,” said Jerry Cuomo, vice president of blockchain at IBM.

Through this work, IBM worked with “network founders” to activate blockchain networks on the company’s technology. Today, IBM wants to take what it has learned along this journey and help other network founders go from concept to active network. The company is launching the IBM Blockchain Founder Accelerator program to help enterprises and their developers take blockchain networks into production faster.

Through the accelerator, IBM will pick eight of the best ideas, or blockchain network founders, to participate in the program. Founders will come from a range of industries, such as banking, logistics, manufacturing and retail. The program will provide guidance, support and technical expertise for getting blockchain networks up and running.

“Blockchain is a team sport. With the right network of participants collaborating on the blockchain, the benefits can be exponential,” said Marie Wieck, general manager for blockchain technology at IBM. “IBM has worked on more blockchain projects than any other player in the industry and we understand the challenges organizations face and the resources needed to get blockchain networks right the first time. IBM is proactively building solutions and entire blockchain ecosystems across a broad range of industries and we are sharing our expertise and resources to help more organizations quickly set up their networks.”

Cuomo explained the accelerator program consists of different parts: IBM’s blockchain garage offerings, where it will help founders build out their ideas; access to IBM’s assets, such as its secure document store, provenance engine, process engine and member onboarding; and access to the developer team for mentorship, counseling and code review.

According to Cuomo, interest in blockchain is only going to get stronger as time goes on. In a C-suite executive study, Forward Together, the company found that one-third of the 3,000 executives surveyed use or intend to use blockchain in their business. Eight in ten of those interested say they are driven by financial pressures and the need for new business models.

“With blockchain, everyone is looking at the same thing at the same time. This new way of making trusted transactions will spawn new business models, processes, and platforms where both producers and consumers can be in a connected ecosystem to create new kinds of value,” said Brigid McDermott, vice president for blockchain business development at IBM.

Other findings of the IBM report: 100% of those exploring blockchain expect it to support their enterprise strategy, 78% of blockchain explorers are investing to respond to shifting profits, and 78% of those actively using blockchain believe customers are important to advancing the technology.

“I think 2016 was the year of blockchain experimentation,” said Cuomo. “2017 is the year of adoption, and this year we are seeing networks activated. We will measure the success of this year by the number of permissioned blockchains we see activated. As we get into 2018, we are going to see both the growth of those networks and the interoperation of those networks.”

Through the accelerator program, Cuomo hopes to see the industry start building a best practice for how to start a blockchain business, and a blueprint for how to build a blockchain business that will make money for the members of its ecosystem.



INDUSTRY SPOTLIGHT: ENTERPRISE ARCHITECTURE

EA in the cloud drives digital shift
Models that can be shared collaboratively speed creation of business value
BY DAVID RUBINSTEIN

The growth and evolution of business has required a growth and evolution in modeling, which has emerged as a way to communicate ideas and minimize complexity across the enterprise. Modeling is used to get a handle on business processes, database design, code engineering, and even the enterprise itself, through organizational charts, workflows and enterprise frameworks.

Sparx Systems, whose Enterprise Architect modeling platform enables teams to collaborate on business rules, requirements and more to create UML 2.5-based models, last month came out with Pro Cloud Server, a web-based platform that gives all project stakeholders a way to create, review, comment on and edit models, diagrams and processes from any browser and device.

According to Geoffrey Sparks, founder and CEO of Sparx Systems, “The ability to dynamically create, collaborate and integrate models over multiple domains and technical platforms is a remarkable and highly agile solution that has the capacity to radically improve the quality, accuracy and effectiveness of model-based projects.”

Sparx has 580,000 registered users of Enterprise Architect, and was listed in the Challenger category of the latest Gartner Enterprise Architecture Magic Quadrant.

The Pro Cloud Server currently boasts three main areas. The first is a comprehensive RESTful API for OSLC (Open Services for Lifecycle Collaboration); this is the platform that provides access to the back-end Enterprise Architect repository. The second is WebEA, which builds off the RESTful API and provides a mobile interface into the model, allowing the model to be consumed, commented on and discussed from any device with a web browser. The third is a URL-based approach to connecting to both these services, allowing a secure HTTP(S) connection without the need for database-specific drivers or configurations for each platform.

Having the ability to act quickly and collaboratively reflects a radical change in enterprise architecture, according to a presentation at a Gartner Enterprise Architecture Summit conference. Organizations need to address business disruptions coming from all angles by embracing digital technologies such as cloud computing, intelligent machines and more.

“EA success in the digital age is fueled by a spirit of alignment and collaboration with business and IT stakeholders throughout the enterprise,” Gartner wrote in a summary of the summit. “EA practitioners must be willing and able to reimagine traditional EA roles and responsibilities and develop a new business acumen and digital skill sets.”
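To make the RESTful API description above concrete, here is a hedged sketch of what querying an OSLC-style service over plain HTTPS can look like. The host and resource path are invented for illustration and are not Sparx’s documented endpoints; OSLC services commonly serve RDF/XML, which is why the Accept header below requests it.

```typescript
// Hypothetical query against an OSLC-style REST service; host and path are
// placeholders, not Sparx Systems' actual API surface.
import * as https from "https";

const options: https.RequestOptions = {
  host: "models.example.com",            // assumed Pro Cloud Server host
  path: "/modelrepo/oslc/requirements/", // invented resource path
  headers: { Accept: "application/rdf+xml" }, // RDF/XML is typical for OSLC
};

https
  .get(options, res => {
    let body = "";
    res.setEncoding("utf8");
    res.on("data", chunk => (body += chunk));
    res.on("end", () => console.log(`HTTP ${res.statusCode}: ${body.length} bytes`));
  })
  .on("error", err => console.error(`Request failed: ${err.message}`));
```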

Focus on the business analyst

With Pro Cloud Server and WebEA, remote users can get a real-time view of project content within Enterprise Architect, at any time, from any device.

Content provided by SD Times and Sparx Systems

Along with the release of Pro Cloud Server, Sparx worked with the International Institute of Business Analysis to create and release a public beta of a toolkit for the Business Analysis Body of Knowledge® (BABOK® Guide v3). The model, when implemented within Pro Cloud Server, is designed to give business analysts the ability to collaborate on requirements and business models, to provide better business outcomes.

“It is the first time business analysis techniques have been actualized in this way and we are excited with the outcome,” said Sparx Systems COO Tom O’Reilly. “The solution… has the capacity to revolutionize the way the BABOK Guide v3 is applied within the enterprise.”

The Tools and Techniques for BABOK Guide v3 instructs the professional on implementing business analysis standards that offer reliability, repeatability and improved productivity, according to Sparks. Better requirements capture and management result in better project outcomes.

Analysts have to reach out to various departments to assess their needs, then write them out, analyze them and deliver them in technical language to the development team. Without adequate tooling to handle requirements, this task alone can add months to a software project. Especially in today’s software development environment of agile practices and continuous integration/delivery, the need to respond quickly to customer feedback as well as the needs of the business requires that collaborative tools be in place to let stakeholders work through their needs while providing transparency into the change process.

As Sparks noted in a recent blog post: “These examples are a reflection of the changes in the Requirements Management process, which in the development model of today, supports iterative requirements gathering and continuous delivery of software. It has become an Agile practice approach, being adopted to address the challenge of digital transformation. This collaborative, iteration based business lifecycle, between requirements and stakeholders, has given rise to DevOps, a strategy for managing continuous change. Enterprise Architect is unique in its ability to support requirements throughout the development lifecycle and to deliver the benefits of the Agile practice approach. Requirements can be defined in the model, or imported from other tools including Visio.”

An organization using the WebEA view into the Enterprise Architect Cloud could have a business analyst or project manager create requirements, use cases, tests and more right in the mobile browser. This ability to capture task assets remotely, for use in a model later on, helps make the process more agile.

More detailed information on Sparx, Enterprise Architect, Pro Cloud Server and WebEA can be found at www.sparxsystems.com.

What Exactly IS Enterprise Architecture?
For a younger generation of developers and business analysts, the term ‘enterprise architecture’ might not be totally understood. Yet EA could very well help them – and their organizations – better compete in a world that is changing ever more rapidly. In fact, according to a paper titled “A Common Perspective on Enterprise Architecture” produced by the Federation of Enterprise Architecture Professional Organizations, EA was first developed to help companies deal with the shifting, sliding technology landscape and with diversity in operating systems.
Today, enterprise architecture continues to be implemented, but instead of helping organizations deal with a migration off mainframes onto distributed systems, for example, it is being used to help organizations make a digital transformation. Through the use of models, EA gives adopters repeatable techniques to map out their future – whether that’s a migration to the cloud, or implementing a strategy for mobile device access to data – and ensure its success.
According to the paper, “Organizational changes can be dramatic, with large-scale reorganization of people, systems and accountabilities. They can also be gradual and steady, involving hundreds of small, non-disruptive steps. Regardless of the approach taken, change is often complex and error-prone. Enterprise Architecture, through continuous evaluation and adaptation of the enterprise, reduces the cost of change and improves the chances for success.”
–David Rubinstein





Updating the book on AI and games
Professors write comprehensive text reflecting new developments
BY MADISON MOORE

Artificial intelligence research has seen a lot of progress over the years, moving from simply understanding images and speech to actually detecting emotions, driving cars, searching the web, and playing games. Because of these advancements, two AI experts decided to write “Artificial Intelligence and Games,” which they hope will serve as the first comprehensive textbook on the application and use of AI in, and for, games.

Georgios Yannakakis and Julian Togelius, authors of the book, have both been teaching and researching game artificial intelligence at graduate and undergraduate levels. Yannakakis is currently an associate professor at the Institute of Digital Games at the University of Malta. His research interests range from AI and affective computing to neural networks and procedural content generation. Togelius is an associate professor at the Department of Computer Science and Engineering at the Tandon School of Engineering at New York University. His research interests include artificial intelligence techniques for making computer games better, and he is researching how games can make AI smarter.

Both Yannakakis and Togelius felt that a textbook on game AI was necessary for future students and beneficial to the learning objectives of their programs. “[Yannakakis] and I have been working in this research field since at least 2005,” said Togelius. “We were active at the very beginning of the field’s formation, and did a number of influential research contributions in, for example, procedural content generation and player modeling.”

Togelius added that the book is not only built on their own research; it also features a lot of research published in the IEEE Conference on Computational Intelligence and Games, the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, and the IEEE Transactions on Computational Intelligence and AI in Games. They are also looking at work that is being done in the industry and presented at conferences like the Game Developers Conference.

While there are a couple of books out there that delve into some of these topics around artificial intelligence, Togelius said that those books are older and tend to focus more on the needs of the game industry. “Our book is fully up to date with academic research as well,” said Togelius. “The use of video games as research environments in AI research in academia (and in big companies such as Google DeepMind and Facebook AI Research) has exploded in recent years, and our book reflects those developments.”

Togelius also said that unlike other books, he and Yannakakis devote plenty of space to discussing the role of AI in content generation and player modeling, not just in game-playing. “There is an increasing use of video games to develop and test AI, and there’s also a long-standing use of AI methods in games,” said Togelius. “These fields use very different methods, and don’t always understand (or even know of) each other. While we come from the academic perspective, we are doing our best to try to bridge this divide.”

The “Artificial Intelligence and Games” book includes a set of exercises on ways to use AI in games, like this exercise from Catlike Coding where you can generate a maze and navigate through it. (Image: Jasper Flick, Catlike Coding)

The authors also represent a third perspective, and that is “the growing interest of AI in the game design process, to help designers by automating some of the design or provide automated feedback or suggestions,” said Togelius.

Right now, the book’s first public draft is available for review. Any suggestions or feedback will be accepted no later than June 20. Togelius and Yannakakis’ book will include three main chapters around playing games, generating content, and modeling players. They will also have an introductory chapter with overviews of the field and summaries of key algorithms, with a few chapters trying to “stake out the future of the research field,” said Togelius.
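For a flavor of the procedural content generation material the book covers, here is a small maze generator in the spirit of the exercise pictured above. It is a generic depth-first “recursive backtracker” sketch written for this article, not code from the book or from Catlike Coding.

```typescript
// A generic depth-first ("recursive backtracker") maze generator sketch.
type Cell = { x: number; y: number };

function generateMaze(width: number, height: number): boolean[][] {
  // walls[y][x] === true means solid rock; passages are carved on odd coordinates.
  const walls: boolean[][] = Array.from({ length: height }, () =>
    new Array<boolean>(width).fill(true)
  );
  const stack: Cell[] = [{ x: 1, y: 1 }];
  walls[1][1] = false;

  while (stack.length > 0) {
    const { x, y } = stack[stack.length - 1];
    // Uncarved neighbors two cells away; a crude shuffle is fine for a demo.
    const next = [[2, 0], [-2, 0], [0, 2], [0, -2]]
      .map(([dx, dy]) => ({ x: x + dx, y: y + dy }))
      .filter(c => c.x > 0 && c.y > 0 && c.x < width - 1 && c.y < height - 1 && walls[c.y][c.x])
      .sort(() => Math.random() - 0.5)[0];
    if (!next) {
      stack.pop(); // dead end: backtrack
      continue;
    }
    walls[(y + next.y) / 2][(x + next.x) / 2] = false; // knock down the wall between
    walls[next.y][next.x] = false;
    stack.push(next);
  }
  return walls;
}

// Print a 21x11 maze: '#' for walls, spaces for passages.
console.log(
  generateMaze(21, 11)
    .map(row => row.map(w => (w ? "#" : " ")).join(""))
    .join("\n")
);
```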



INDUSTRY SPOTLIGHT: LOW-CODE SOLUTIONS

Citizen developers are a necessity
BY LISA MORGAN

Software development team roles are changing as the pace of business continues to accelerate. Agile development, continuous integration and continuous delivery continue to become more important. At the same time, there are more low-code and no-code platforms that enable less technical “citizen developers” to build, update and enhance line-of-business applications.

“We need citizen developers because lines of business need to keep pace with rapidly changing market conditions and regulatory requirements,” said Prakash Aradhya, product management director at Red Hat. “With all of those changes, line of business professionals want more control of their applications so they can make the changes necessary, update those quickly and get to market faster.”

Toward that end, more IT departments are creating infrastructures that help abstract the technical complexity of software development so citizen developers can create, maintain and manage line-of-business applications with drag-and-drop simplicity.

Citizen development is growing
More software organizations have moved away from waterfall development because their companies can’t wait months or years for the competitive business applications they need today. Agile and lean development methods have accelerated software delivery, but they don’t ensure that all line-of-business applications are always up to date. As a result, business users continue to wait for application changes they think should be implemented faster, so more of them are looking for ways to update and build applications themselves.

“In an ideal world, IT would set up an app service around some of the existing infrastructure so it can be consumed easily by business applications,” said Aradhya. “The more plumbing IT developers can do, the less coding lines of business have to do.”

Cloud-based services help
Cloud-based development models simplify service provisioning, and they make it easier for citizen developers to consume the services their IT departments provide. Citizen developers also need a way to make sense of those services, because they tend not to understand software architecture and related issues. Low-code and no-code platforms mask all that complexity behind visual interfaces that citizen developers can easily understand and use.

Not all low-code and no-code platforms integrate equally well into existing business processes, however. If citizen developers have to change the way they work to conform to the limitations of a particular tool, they’ll either stop using it or risk losing some of the time-to-market benefits the tool is designed to provide.

“Architectural flexibility is critical in today’s dynamic environment,” said Aradhya. “Citizen developers should be able to use tools within the context of existing business operations.”

The cloud provides both IT and lines of business with other foreseeable benefits, including simple infrastructure provisioning and elasticity, which are necessary to speed application changes while controlling costs. Meanwhile, citizen developers are hearing more about the benefits of microservices, so they’re starting to ask whether IT and the tools they use support them. Using the cloud, IT can easily make microservices available that citizen developers can consume and combine at will, assuming their platform supports them.

Automation speeds processes
Robotic Process Automation (RPA) is expected to enable a lot of business process efficiencies, but many lines of business are concerned about job displacement. Still, many software development tasks, particularly those that are easily repeatable and reproducible, are already being automated. Additional tasks will be automated in the future that will enable citizen developers to accomplish more using their existing skills.

“Citizen developers aren’t expected to have a deep understanding of the code, so automation will help simplify business application changes and the creation of new applications,” said Aradhya. “The automation will range from rote, repetitive tasks to more complex and predictive cognitive process automation. Ultimately, there’s an opportunity for lines of business to identify how they can streamline their operations.”

For now, citizen developers are more concerned about timely software delivery, which is further enabled by automation and self-service capabilities. As the pace of business continues to accelerate, more lines of business will be demanding platforms and tools that enable them to make changes to their own applications quickly and simply.

Learn more at www.redhat.com.

Content provided by SD Times and Red Hat


DEVOPS WATCH

State of DevOps report released
Automation, loose coupling of architectures and teams, and leadership are predictors of organizational success
BY DAVID RUBINSTEIN

Automation is an important technique used by high-performing IT organizations. Yet medium-performing groups do more manual work than low-performing groups, according to the sixth annual State of DevOps Report, released at the DevOps Enterprise Summit in London.

The middle performers were doing less automation of processes for change management, testing, deployment and change approval, explained Alanna Brown, senior product marketing manager at Puppet, which presented the report along with DORA (DevOps Research and Assessment). “These groups have already begun automation, and are seeing benefits,” she said. “But that reveals technical debt that they didn’t realize before they started, which is a normal phase of the [DevOps] journey. It’s a J curve; the initial performance is high, but then it gets worse before it gets better again.”

Another measure of successful DevOps implementations is leadership. Transformational leaders have a clear understanding of the organization’s vision, communicate in a way that inspires and motivates, challenge their teams through intellectual stimulation, are supportive, thoughtful and caring of others, and are generous with praise, according to Nicole Forsgren, CEO and chief scientist at DORA. “It’s hard to measure the impact of leaders because they’re not doers or practitioners, but still they have a big influence over teams and architecture,” she said.

Looking at the impact of IT performance on overall organizational performance, the report found, not surprisingly, that high IT performance (as measured in throughput of code and stability of systems) results in organizational success: faster time to market, improved experiences for the customer, and the ability to respond quickly to changes in the market.

The report also found that loosely coupled architectures and teams result in higher IT performance, and that the adoption of lean product management techniques factors in as well. “There is no longer a ‘done’ in software,” Puppet’s Brown said. “Teams are working in small batches, making their work visible and using feedback to inform design.”

Jez Humble, one of the founders of DORA, tied the way teams are set up to IT performance and organizational success. “Does allowing teams to make their own tool choices and change systems predict an ability to do continuous delivery? Can they do testing on demand, without relying on other teams or services? Do teams have the autonomy to get work done without fine-grained collaboration with other teams?” These factors, he said, affect whether an organization can ensure its software is always deployable.

As for system stability and resilience, Humble measures these by how long it takes to restore the system after an outage, or what proportion of changes lead to outages or degradation of quality of service. To Humble, the important question in measuring the success of a DevOps implementation is, “Can you deploy software on demand, during business hours, at any point of the life cycle?”
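As a concrete reading of Humble’s stability measures, the sketch below computes a change failure rate and a mean time to restore from a list of deployment records. The data model and numbers are invented for illustration; they are not from the report.

```typescript
// Sketch of the two stability measures described above: time to restore service
// and the proportion of changes that lead to degraded service.
interface ChangeRecord {
  deployedAt: Date;
  causedOutage: boolean;
  restoredAt?: Date; // set only when causedOutage is true
}

function stabilityMetrics(changes: ChangeRecord[]) {
  const failures = changes.filter(c => c.causedOutage);
  const failureRate = failures.length / changes.length;
  const restoreTimesMin = failures
    .filter(c => c.restoredAt)
    .map(c => (c.restoredAt!.getTime() - c.deployedAt.getTime()) / 60000);
  const mttrMinutes = restoreTimesMin.length
    ? restoreTimesMin.reduce((a, b) => a + b, 0) / restoreTimesMin.length
    : 0;
  return { failureRate, mttrMinutes };
}

// Example: three changes, one of which caused an outage restored after 45 minutes.
const { failureRate, mttrMinutes } = stabilityMetrics([
  { deployedAt: new Date("2017-06-01T10:00Z"), causedOutage: false },
  { deployedAt: new Date("2017-06-02T10:00Z"), causedOutage: true,
    restoredAt: new Date("2017-06-02T10:45Z") },
  { deployedAt: new Date("2017-06-03T10:00Z"), causedOutage: false },
]);
console.log(`Change failure rate: ${(failureRate * 100).toFixed(0)}%, MTTR: ${mttrMinutes} min`);
```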

In other DevOps news…
• A newly released study from CA Technologies reveals that if companies really want to boost their software delivery performance, they should combine DevOps with cloud-based tools and delivery mechanisms. The study polled more than 900 senior IT professionals worldwide to decipher how they were achieving success. According to the study, 81% of respondents saw an overall improvement when they combined DevOps methodologies with cloud tools, compared to 52% who reported an improvement just using DevOps and 53% when just leveraging cloud.
• New Relic is adding new capabilities to its New Relic Digital Intelligence Platform to give enterprises better insight into their applications and infrastructure. The company’s new Health Map feature provides all the performance information a company needs to know in one unified view, and is designed to give DevOps teams the ability to quickly understand if there is an issue. Other features include integrated support with AWS strategies and New Relic Infrastructure enhancements.
• A recently announced testing solution from Sauce Labs is designed to help DevOps teams bring software to market faster. The Sauce Labs Test Analytics solution supports DevOps initiatives by providing near real-time, multi-dimensional test analytics. Key features include quick data analysis, early access to build and test statistics, the ability to identify errors with multi-dimensional filters, the ability to view levels of parallelization, and REST APIs. In addition, analytics are available to executives, developers, test engineers and quality assurance teams.



Climbing the IoT data mountain
BY CHRISTINA CARDOZA

Businesses are in the midst of a digital transformation. To transform, they must become software companies: they must turn their products and services online, and they must provide more intelligent solutions. This new and connected world has companies turning to the Internet of Things (IoT) to lead them toward new business opportunities.
The Internet of Things has been making headlines for years now, but according to Bart Schouw, director of IoT at Software AG, the area is still new and unknown. Businesses are still trying to navigate and understand how they can implement it, and how they can then monetize it.
It is important to distinguish between the Internet and the Internet of Things, which are two very different things. Bryan Hughes, CTO of IoT at SpaceTime Insight, an advanced analytics solution provider, explained that the Internet is the digital representation of digital things such as web pages, while IoT is the digital representation of physical things. These physical things, or devices, provide the ability to better understand and engage with customers, anticipate failures, and avoid costly downtime; but to do that, businesses need to be able to collect, analyze, store and comprehend data. Data is the key to deriving valuable insights.
The problem in the IoT world is that inexpensive sensors can now be attached to almost anything, letting you collect all kinds of different information. Suddenly, you end up with a mountain of data you can easily get buried in.
“We collect all this data, and as a result the data is growing like crazy,” said Svetlana Sicular, research vice president for data and analytics at Gartner. “There is a big challenge to understand which data is useful, and how to make sense of it. So far, there aren’t too many companies that are successful analyzing this data.”

Treading through deep data waters
The biggest shift enabling the IoT age is in how data is processed and stored, according to Jack Norris, SVP of data and applications at MapR Technologies. Norris explains that the first phase of IoT was focused on deploying devices and getting access to data. Today, we are in the second phase of IoT, where the focus is on the data itself: How do we collect it appropriately, how do we analyze it, and how do we act on it?
“You need an approach that can handle high scale, high speed and reliability all at the same time because it is really about understanding the context of the data as fast as possible and being able to act in real time,” he said.
The number one best practice for gaining valuable insight from IoT solutions is to utilize data analytics, according to Gartner’s Sicular. “It is not only about installing sensors and creating alerts, but it’s about understanding the long-term value of analytics,” she said. “Analytics is what brings the value of an existing IoT project to the next level.”
Sicular explains there are four ways to approach analytics: through a platform with analytical capabilities; a general-purpose analytics tool; do-it-yourself, custom-developed solutions; or a packaged application that solves a particular use case.
But before you deal with analytics, you need to handle the data itself, and that includes the speed of data, the amount of data, and new users of the data. According to Sicular, the first question you need to answer is “How are you going to store the data?”
Software AG’s Schouw explained that more IoT platforms are living in the cloud today because the cloud provides the ability to scale up and scale out, as well as the space necessary to store information. Suraj Kumar, vice president and general manager of PaaS at Axway, added: “The reason companies turn to a public or private cloud is because IoT devices and the amount of data need a place that provides high levels of scalability and elasticity. A cloud-oriented solution enables both from a compute perspective and a data storage perspective.”


Using graphics processing units for IoT data and analytics
Advanced analytics database provider Kinetica wants to tackle the Internet of Things and its data analytics tsunami with a graphics processing unit (GPU) approach. According to the company, IoT is expected to reach $470 billion in revenue by 2020, from 30 billion or more IoT devices. To access and do more at the edge, enterprises need to turn their data into actionable insights.
Kinetica started out in 2009 in the United States Army Intelligence and Security Command space. Its mission was to help the US Army and NSA track and capture terrorists as well as discover other national security threats. To do that, they needed to correlate a number of different data feeds and come up with a high-probability estimate of where a terrorist target would be in real time. To meet those needs and provide fast data with the ability to scale, Kinetica created a new database built around parallelization on the GPU.
Today, Kinetica is trying to bring the lessons learned from the US Army to the enterprise. “We started as a geospatial and temporal computational engine for any data that dealt with space and time. Through the years we became a full-fledged, highly available database. Many of the challenges we saw in the military, we are seeing now in the commercial space as this IoT phenomenon is becoming more and more prevalent,” said Amit Vij, CEO and founder of Kinetica.
By taking traditional database operations and accelerating them with GPUs, Vij explained, the company has been able to provide 100 times the performance on a tenth of the hardware. According to Vij, a GPU approach provides more than 4,000 cores per device to enable fast, real-time and predictive analytics, while traditional CPU-based solutions provide about 16 to 32 cores per device.
Kinetica can be used for connected cars, fleet management, infrastructure, smart grid, customer experience, and just-in-time inventory. The company has worked with the United States Postal Service (USPS) to help track carrier movements in real time and become more proficient and accurate in delivering on time, based on personnel, weather and traffic data.
Other features include geospatial data, time-series data, sensor data, structured data, machine learning, deep learning, location-based analytics, geospatial visualization, third-party integration and real-time visualization.

When deciding on a cloud provider, companies need to understand their business requirements. According to Kumar, there are basic security requirements and compliance requirements that need to be considered. For instance, healthcare providers need to ensure cloud providers offer HIPAA compliance, financial companies have their own compliance policies to meet, and government entities need federal compliance. “Businesses need to ask who has access to the data and data center, and what kind of security and controls do they have in place,” he said.
In addition to understanding the business requirements from a cloud perspective, knowing the desired business outcomes will help businesses handle all the data as well as generate insight. Donna Prlich, chief product officer at Pentaho, a Hitachi Group Company, explained that when so many different varieties of data and information come in from different data sources, it can be overwhelming to know what to look at. When you focus on the business use case, you are not trying to take every single data source in. Instead, you are looking at the places the data applies to the business outcome, according to Prlich. “Focusing on business outcomes and what you are trying to accomplish, starting small, and growing is going to
continued on page 32 >


INDUSTRY SPOTLIGHT: PREDICTIVE ANALYTICS

How data science improves ALM
Teams can tap into data to optimize the four stages of agile application delivery
BY ALEXANDRA WEBER MORALES

If you’re an agile team, you may still be planning, developing, testing and deploying by instinct. But what if you bring data science into the picture? Enter HPE Predictive Analytics, which can surface everything from accurate planning estimates in agile projects to efficiencies in defect detection for continuous testing. SD Times spoke with Collin Chau, a 10-year HPE veteran based in Sunnyvale, about how HPE is applying data science to historical ALM project data. Chau, senior marketing manager for HPE ALM Octane and Predictive Analytics, explained how machine learning, anomaly detection, cluster analysis and other techniques improve the four stages of the lifecycle.

SD Times: Where do you get data to improve ALM?
Chau: There’s metadata that sits within the ALM platform that’s untapped. If you look closely, a lot of this data can be used to accelerate the quality application development lifecycle. The customers we spoke to want to get more out of that data, to use it to help application project teams optimize resources and reduce risk when managing the application development lifecycle.
We have an experienced team of data scientists today developing algorithms for one thing alone: quality application delivery. Data science in the absence of domain knowledge is useless information. We have data scientists sitting in application development lifecycle teams to cross-pollinate our tools with ALM-specific data science, offering users prescriptive guidance that is pertinent.
These advanced analytics are multivariate in nature, and borrow technologies specific to machine learning that continuously learn from past data — because only with a constant learning cycle that feeds on updated data can you improve and offer better recommendations and predictions.

Where do these analytics show up?
Predictive analytics is offered and sold as a plug-in, shown through the ALM Octane dashboard, which we are positioning as a data hub that will feed into other ALM tools on the market to offer a single source of truth.

Do you have real-world examples?
We have several customers who are participating in the technology review. Predictive Analytics for ALM will go public beta next month, and we’ll have general availability shortly.

You describe four stages of predictive analytics in ALM. What’s the first one?
The first is predictive planning. Most agile projects have no proper planning; teams start out in the dark running as fast as they can. Development time frames can get over-extended or mis-resourced. In predictive planning, the tool learns from past historical data and provides teams recommendations in terms of user requirements, story points, feature size estimates, etc.

Next is predictive development?
For coders, the number-one job is to build quality code fast. The tool is intelligent enough to predict code that will break the build even prior to code check-in, to proactively analyze source code for defects or complexity. It can also recommend code to supplement the build — I’m pretty excited that it has the ability to continuously learn from different data points and classify it into information that developers can actually use to avoid rework.

Stage three is predictive testing — does that prevent QA from being squeezed on both ends by DevOps?
Yes. It’s about how you accelerate not just testing, but continuous testing. In this world of continuous delivery, predictive testing gets you to the next level. It helps identify root causes in test failures. It actually goes a step further by recommending a subset of tests to be run based on the code changes checked in. Predictive analytics helps you zoom in. It says you don’t have to run a suite of 20,000 automated tests when 100 specific ones are sufficient to cover the latest code commits.

The final stage is predictive operations?
We are taking real-world production data and leveraging it to tell customers where there are test inefficiencies. The wow about this is that, because we are taking actual production data, we are infusing application development decisions with data from real-world conditions. It’s no longer lab-based or static, as applications are consistently refined to meet needs in actual operating environments.

How can teams try this on for size?
To sign up for the public beta, go to saas.hpe.com/software/predictive. Learn to optimize your resource investments and reduce risk for agile app releases within DevOps practices. Discover how predictive analytics multiplies the power of ALM Octane’s data hub as a single source of truth.
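To make the test-selection idea from stage three concrete, here is a toy sketch of mapping changed files to the tests that historically cover them. The module names and coverage map are invented, and HPE’s actual approach is statistical machine learning rather than a static lookup table:

```python
# Hypothetical mapping from source modules to the tests that cover them,
# e.g. mined from past build and coverage data.
TESTS_BY_MODULE = {
    "billing/invoice.py": {"test_invoice_totals", "test_invoice_rounding"},
    "billing/tax.py": {"test_tax_rates", "test_invoice_totals"},
    "search/query.py": {"test_query_parser"},
}

def select_tests(changed_files):
    """Return only the tests historically linked to the changed code."""
    selected = set()
    for path in changed_files:
        selected |= TESTS_BY_MODULE.get(path, set())
    return sorted(selected)

print(select_tests(["billing/tax.py"]))
# ['test_invoice_totals', 'test_tax_rates']
```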


< continued from page 29

be super important to be successful,” she said. Rather than trying to provide every single piece of information, Axway’s Kumar suggests looking at what the customer wants and tailoring a solution to a particular customer experience.

4 KEYS TO DATA ANALYTICS
An IoT/data analytics approach should consist of four things, according to SpaceTime Insight’s Hughes:
• The collection of data
• Edge computing
• Processing in real time
• Security
For collection, it is about building a system that can withstand the real world, meaning building systems that are designed for failure. Edge computing allows you to go from the end point to the cloud. Then you need to be able to process everything in real time somehow, and reduce as many attack vectors as possible, Hughes explained.

“The challenge is that as we move towards the future, more and more things will be operating in very remote locations, or traveling through the air or across the country on the roads and rails. In most cases, connecting through cellular networks. In these cases, the amount of data generated cannot be transmitted feasibly to the cloud for processing. Instead, data collection and analytics needs to move to the edge, improving latency, reliability, and cost,” Hughes said. In the end, to provide a successful solution, Software AG’s Schouw says top management needs to have close ties with operations because IoT has a huge impact on the business. “If top management hasn’t bought into it and doesn’t understand why it is going to change, operations will never be able to make those painful decisions to reorganize and realign the organization along the new business models because organizations are built to resist change,” he said.
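The move-to-the-edge pattern Hughes describes can be sketched in a few lines: raw samples are reduced to a compact summary on the device, and the full window is forwarded upstream only when something looks abnormal. The threshold and field names below are invented for illustration:

```python
import statistics

def summarize_at_edge(readings, threshold=90.0):
    """Reduce a window of raw sensor samples to a compact summary,
    forwarding raw data only when something looks abnormal."""
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }
    # Ship the full window upstream only on an anomaly; otherwise the
    # summary alone goes to the cloud, cutting bandwidth dramatically.
    if summary["max"] > threshold:
        summary["raw"] = readings
    return summary

print(summarize_at_edge([71.2, 70.8, 72.1, 95.6]))
```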

Applying AI and machine learning
Even taking into account all of the best practices and putting the necessary tools and platforms in place, it is still almost impossible to sift through all the data, especially in real time.

Humans aren’t always capable of understanding the right questions to ask, Hughes explained. “The growth of analytics and business intelligence has always been around knowing the question to ask, and then being able to ask the question to get the answer,” he said. “In the mountains of data, it is not about knowing the question to ask. It is about discovering patterns in the data.”
To discover the patterns, businesses need to leverage machine learning. According to Schouw, machine learning learns from the data, discovers patterns that might not be clear to the human eye, and deciphers whether the user should act on the data or ignore it. Additionally, it helps make data analytics more of an automated process.
“Even if you have predictive analytics in place to spot a pattern and alert an operator to it, there might be so many alerts and so much data going on that a human doesn’t want to be confronted with that continuously. Being able to have artificial intelligence or machine learning take automated action on it is something you want to do. That is a big efficiency gain,” Schouw said.
According to Schouw, artificial intelligence is becoming the new UI. Schouw cited Amazon Alexa: a user might be able to tell Alexa to lock the door, but Alexa has to figure out which door to lock: the front door, the back door, the bathroom door? And if you want to lock the front door and someone is still inside, Alexa can ask if you still want to lock it. “In those complex environments, the question is how do you want to interact, and AI will be the way humans want to interact with it,” he said.
While machine learning and artificial intelligence are providing many productivity benefits to IoT organizations, Axway’s Kumar believes this area is still new, and he expects many improvements in the future that will let the technology automate certain decision-making and provide even deeper insights. “Machine learning and artificial intelligence can drive further improvement in getting insight, helping with decisions and automating certain decisions when it comes to IoT analysis,” Kumar said.

Software AG’s Schouw notes machine learning is not a magic bullet. You can’t just apply machine learning out of the box, connect it to things, and expect it to tell you when things will fail. You need to have an understanding, and you need to invest in data scientists who can help you build up that knowledge so you can start applying machine learning to things like predictive maintenance, he explained.
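As a toy illustration of that kind of pattern-spotting, the check below flags readings that drift far from recent behavior using a rolling z-score, the sort of test that could trigger an automated action instead of paging an operator. The window size, cutoff and sample values are invented:

```python
import statistics

def is_anomalous(history, value, z_cutoff=3.0):
    """Flag a reading more than z_cutoff standard deviations from the
    recent mean, a deviation a human watching thousands of sensors
    would likely miss."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
    return abs(value - mean) / stdev > z_cutoff

window = [20.1, 19.8, 20.4, 20.0, 19.9, 20.2]
for reading in (20.3, 34.7):
    if is_anomalous(window, reading):
        print(f"alert: {reading} deviates from recent behavior")
    window = window[1:] + [reading]  # slide the window forward
```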

Having tools support your strategy
Once you find a place for the data to live and come up with a strategy, you need a solution to execute on that strategy and apply techniques like machine learning and artificial intelligence. There is no one right answer or single tool for magically handling the Internet of Things landscape, but there are some features you can look for in a tool to make life easier.
When looking for a tool, the first thing to consider is your data pipeline, according to Pentaho’s Prlich: how are you going to manage the pipeline, and how are you going to address the different types of users in your organization? For instance, you might have data scientists, ETL engineers, analysts, and developers all working with the data in some shape or form, Prlich explained.
“You want to ask, ‘Is this something that can help me solve the end-to-end problem all the way from data engineering through data preparation to data analytics, or is it a siloed event or set of tools?’” said Prlich.
In addition, Prlich explained, the solution should be open, so team members can bring in a set of tools or platforms that coexist with one another. “You want to think about what is coming in the future, the tools you are choosing, and if things shift and change, are you prepared to manage that,” she said. This helps companies “future-proof” themselves for what is coming next.
MapR’s Norris suggests having a distributed data fabric that can extend to the edge and intelligently process data. The IoT landscape requires businesses to collect data, aggregate, and learn across a whole population of devices to understand events and situations.



How a manufacturing company became an IoT company
When you think of the Internet of Things, smartwatches, connected home devices, and smart cars typically come to mind. But the Internet of Things extends to all kinds of industries. For instance, Caterpillar Marine, a subsidiary of the construction and mining equipment provider Caterpillar, recently turned to the Internet of Things to gain real-time insight into its fleets and ships, and to provide a better customer experience.
Through the company’s data analytics service developed by ESRG Technologies, Caterpillar is able to collect information from sensors on its ships to manage its fleets. The information collected predicts machinery failure, allowing Caterpillar to schedule necessary maintenance. This type of service can provide massive savings and have a huge impact on a business.
“We identify trends toward failure before they become alerts,” Jim Stascavage, marine asset intelligence technology manager at Caterpillar Marine, said in a case study. “The deviation won’t trigger an alarm, but it should, because the trend is starting to go in the wrong direction.”
Caterpillar Marine wanted to go further with its analytical capabilities, and uncover the trends that could potentially provide the biggest cost savings or payoffs. Caterpillar chose a data integration and business analytics solution from Pentaho to help it combine its sensor data with operational data and find meaningful patterns in the equipment and solutions.

At the same time, businesses need to inject intelligence into the edge so they can react to those events very quickly. According to Norris, enterprises need to be able to converge the different data cycles, harness data flows and provide agility. Having a common data fabric can help handle all of the data in the same way, control access to the data, and apply intelligence in a high-performance, fast way.

Security and privacy aspects
Security continues to be a huge challenge with the Internet of Things. As devices become more widely used and spread out, a bigger attack surface is created. According to SpaceTime Insight’s Hughes, machine learning comes into play here as well, because businesses need to be able to perform intrusion detection.
“You can’t fully secure anything. You need to be able to understand as quickly as possible when there has been a breach. Machine learning comes into play for that and can do anomaly detection to determine whether or not a system has been breached, and then respond to it quickly,” he said.

The data Pentaho was able to collect included things like temperature, pressure, geographical coordinates and geometric angles.
“We’re mashing all this data together and trying to figure out what it means for the performance of the ship,” Stascavage said in the case study. “It’s not simple for even one ship, so you can imagine how complex it is across an enterprise. There are literally trillions of data points that need to be evaluated every year.”
According to Caterpillar, this new IoT analytical approach was able to provide better insight into equipment performance, strengthen customer relationships with ROI, and even provide savings in fuel efficiency, unscheduled downtime and environmental compliance.
“We see this convergence of the machine-generated data being able to cut into organizations. Applying the other data sources for context is really what is driving these great business outcomes. That is what we see in the early IoT market. It is moving quickly and there is a lot of opportunity to take advantage of,” said Pentaho’s Prlich.

In addition, security has to be granular, according to MapR’s Norris. Any time data is moving, it has to be encrypted so it is not easily accessed. There also has to be some intelligence to how data flows, where the data moves, and where it is processed.
Axway’s Kumar believes API management plays a big role in IoT and data analytics because most devices leverage APIs in some way or another. An API management solution can help ensure the data being passed is securely opened up and transmitted. However, an API management solution will only take you so far when it comes to security. Businesses also need to ensure the policies implemented for API management are solid, provide governance, and enforce a set of corporate security policies that don’t enable data to be accessed by people who don’t need to access it, Kumar explained. “API management combined with best practices, policy management, and governance help essentially both securing and putting the data into places that it needs to for further analysis or storage,” he said.
As for the privacy aspect of all of this, there are two pieces of it, according to Kumar: the user aspect and the company aspect. From the user perspective, we typically just click OK on disclosures when we sign up for a service and connect our devices online. Users trust the business to protect them, or to look out for their best interest. “People are either open or don’t have the full knowledge of privacy, so on the business side there are stricter rules they need to follow,” said Kumar. While businesses typically share data with third parties for further analysis, there are strict data privacy laws on how the data is stored and who has access to it that businesses need to adhere to in most cases, Kumar explained.
“People are realizing it is not just about the devices, and not about the fast proliferation,” said MapR’s Norris. “It is about being able to deploy and leverage IoT effectively.”


TOP 10 Considerations When Planning Docker-Based Microservices
BY AATER SULEMAN
Aater Suleman, Ph.D. is CEO & co-founder at Flux7, an Austin-based IT consulting company recognized by AWS for its expertise in DevOps.

Replacing monolithic apps — or building greenfield ones — with microservices is a growing consideration for development teams that want to increase their agility, iterate faster and move at the speed of the market. By providing greater autonomy to different teams, allowing them to work in parallel and accomplish more in less time, microservices offer code that is less brittle, making it easier to change, test and update.
Docker containers are a natural fit for microservices, as they inherently feature autonomy, automation, and portability. Specifically, Docker is known for its ability to encapsulate a particular application component and all its dependencies, enabling teams to work independently without requiring the underlying infrastructure or substrate to support every single one of the components they are using. In addition, Docker makes it easy to create lightweight, isolated containers that can work with each other while being very portable; because the application is decoupled from the underlying substrate, it is portable and easy to use. Last, it is very easy to create a new set of containers: Docker orchestration solutions such as Docker Swarm, Kubernetes, or AWS ECS make it easy to spin up new services composed of multiple containers, all in a fully automated way. Thus Docker becomes a natural substrate on which to run microservices.
All that said, there are several process and technology design points to consider when architecting a Docker-based microservices solution. Thinking them through will help avoid costly rework and other headaches down the road.



Process Considerations
1. How will an existing microservice be updated?
The fundamental reason developers use microservices is to speed development, which increases the number of updates they have to perform to a microservice. To leverage microservices fully, it is critical that this process be optimized. However, there are several components that make up this process, and there are decisions that come with each step. Let us explain with the help of three examples.
First, there is the question of whether to set up continuous deployment or a dashboard where a person presses a button to deploy a new version. The tradeoff is higher agility with continuous deployment versus tighter governance with manually triggered deployment. Automation can allow security and agility to co-exist. Developers need to decide their workflows and what automation they require, and where.
Second, it is important for businesses to consider where the actual container will be built. Will it be built locally, pushed, and travel through the pipeline? Or will the code first be converted into artifacts, and then into a Docker image that travels all the way to production? If you go with a solution where the container is built in the pipeline, it is important to consider where it will be built and what tools will be used around it.
Third, the actual deployment strategy must be thought through. Specifically, you can update a microservices architecture through a blue-green deployment setup, where a new set of containers is spun up and then the old ones are taken down. Or you can opt for a rolling update, going through the multiple service containers, creating one new container and putting it in service while you take out one of the old ones.
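The difference between the two strategies can be stated in a few lines of logic. In the sketch below, start, stop and route stand in for whatever orchestrator API is actually in use; they are invented helpers, not a real client:

```python
def blue_green(old, new, route):
    """Spin up the full new set, flip traffic, then retire the old set."""
    for c in new:
        start(c)
    route(new)           # one atomic cutover
    for c in old:
        stop(c)

def rolling(old, new, route):
    """Swap containers one at a time, keeping the service in play."""
    live = list(old)
    for retiring, incoming in zip(old, new):
        start(incoming)
        live[live.index(retiring)] = incoming
        route(live)      # traffic shifts gradually
        stop(retiring)

# Stand-ins for orchestrator calls (ECS, Swarm, Kubernetes, etc.).
def start(c): print("start", c)
def stop(c):  print("stop", c)

blue_green(["app-v1-a", "app-v1-b"], ["app-v2-a", "app-v2-b"], print)
```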



These decisions are multifaceted and require consideration of several factors, including current flows, the skill levels of operators, and any technology inclinations.
2. How will developers start a brand new service?
Starting a new service is a fundamental requirement of microservices. As a result, the process for starting a brand new service should be made as easy as possible. In this vein, an important question to ask is: How will you enable developers to start a new service in a self-service fashion without compromising security and governance? Will it require going through an approval process such as filing an IT request? Or will it be a fully automated process? While I recommend erring on the side of using as much automation as possible, this is definitely a process point development teams will want to think through in advance to ensure you correctly balance the need for security, governance and self-service.
3. How will services get a URL assigned?
This question goes hand-in-hand with starting a brand new service. A new URL or subcontext (e.g., myurl.com/myservice) needs to be assigned to a new service each time one is created, and the process for assigning them should ideally be automated. Options can include a self-service portal for assigning URLs manually, or a process whereby the URL is automatically assigned, pulled from the name of the Docker container and any tags applied to it (a minimal version of this is sketched below). Again, just as with starting a new service, I recommend erring on the side of using as much automation as possible — and therefore spend ample time thinking through this important design point well in advance.
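A minimal sketch of that automatic option, deriving a subcontext from the container name unless an explicit tag overrides it; the naming convention here is invented for illustration:

```python
def assign_subcontext(container_name, tags):
    """Derive a routing path like myurl.com/myservice from the
    container name, honoring an explicit 'url' tag if present."""
    if "url" in tags:
        return "/" + tags["url"].strip("/")
    # e.g. "Orders-Service:2.3" -> "/orders-service"
    service = container_name.split(":")[0]
    return "/" + service.lower()

print(assign_subcontext("Orders-Service:2.3", {}))                    # /orders-service
print(assign_subcontext("Orders-Service:2.3", {"url": "orders/v2"}))  # /orders/v2
```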

4. How will container failure be detected and dealt with?
A key requirement of modern infrastructure is that it doesn’t require ‘babysitting’; it can self-heal and self-recover if it goes down. As a result, it is paramount to have a process to detect failure and a plan for how it will be handled when it does occur. For example, it is important to have a pre-defined process to detect a container application that is no longer running, whether through a networking check or log parsing. Additionally, there should be a defined process for replacing the container with a new one. While there are many approaches to this process, the design point is to make sure that the requirements are met, ideally via automation.
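As a rough illustration of the detect-and-replace loop, here is a sketch that probes liveness through the Docker CLI and restarts a dead container. In practice an orchestrator’s own health checks and restart policies would do this for you; the polling loop is illustrative only:

```python
import subprocess, time

def is_running(container):
    """Liveness probe via the Docker CLI (one of several possible checks)."""
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{.State.Running}}", container],
        capture_output=True, text=True)
    return out.returncode == 0 and out.stdout.strip() == "true"

def babysit(container, poll_seconds=10):
    """Detect a dead container and replace it with a fresh one."""
    while True:
        if not is_running(container):
            subprocess.run(["docker", "restart", container])
        time.sleep(poll_seconds)
```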
5. How will the code for each microservice be structured?
We want a fully automated process for building and deploying new services. Yet if the number of services is going to be large, it can quickly become cumbersome to manage. As a result, multiple versions of the process — one for each service — should be created. In these cases, it is imperative that each process is kept homogeneous.
A very important decision in this is how each microservice is to be structured. For example, the Dockerfile should always appear in the exact same place, and whatever is specific to the service should be contained within the Dockerfile. In this way, the process can be made microservice-agnostic. Similarly, other files such as a Docker Compose file or a task definition for AWS ECS should consistently be put in the same place — across all services — so that processes can run consistently, in a homogeneous fashion.

Technology Considerations
6. What tool will be used to schedule containers on compute nodes?
Schedulers are important tools: they allocate the resources needed to execute a job and assign work to those resources, while orchestrators ensure that the resources necessary to perform the work are available when needed. There are many tool choices for container orchestration. Those typically considered are ECS for customers in AWS, and Docker Swarm or Kubernetes for those who would like a vendor-agnostic solution. There are several angles for organizations to weigh in making this decision, including portability, compatibility, ease of setup, ease of maintenance, the ability to plug and play, and having a holistic solution.
7. What tool will be used to load balance requests between the containers of the same service?
High availability and the ability to have multiple container services in the environment make it critical to support more than one container per microservice. For services that are non-clustered, for example web-based microservices developed in house, there is a need for an external load balancer to balance incoming traffic between different containers on the same server. For load balancing within the same service, there are several options, from taking advantage of AWS ELB in Amazon to open source tools that can act as load balancers, such as NGINX or HAProxy. This is an important technology decision that should be thoroughly evaluated. Some salient design points to consider in your evaluation: requirements for session stickiness; the number of services you plan to have; the number of containers you have per service; and any load balancing algorithms you would like to have.
8. What tool will be used to route traffic to the correct service?
This design point goes hand-in-hand with load balancing, as it directly addresses application load balancing. As pointed out earlier, individual URLs or subcontexts are assigned per service. When traffic hits the microservices cluster, another task is to ensure that the traffic coming in is routed to the
continued on page 39 >



Thank You for Not Adopting Microservices
Unless you’ve already adopted modern development practices
BY MOSHE KRANC
Moshe Kranc is CTO at Ness Digital Engineering.

Microservice architectures are all the rage these days, and with good reason. In a nutshell, microservices is a software architecture pattern which decomposes monolithic applications into smaller single-purpose services that are built and managed independently. The benefits of a microservices architecture are:
• Quality: Separation of concerns minimizes the impact of one service’s bugs on another service.
• Agility: Upgrading an existing service or adding a new one can be done on a granular level, without impacting other services.
• Scalability: Each service can be provisioned individually so it has the hardware and software resources it needs.
• Reuse: Services can be shuffled and combined in new ways to provide new functionality.
As CTO of Ness Digital Engineering, I frequently meet customers who want to implement a microservices architecture so they can enjoy all these benefits. But I’ve learned that a microservices architecture is not for everyone, and adopting it will not succeed unless the organization has already embraced several other modern software development practices and technologies:

• Agile development: In a microservices architecture, each service is single-purpose, and should therefore be developed by a fairly small team. If a problem is encountered, a fix can be quickly deployed for that service alone, without affecting the stability of other services. That’s the essence of agile development: small teams delivering frequent releases in short iterative cycles. If your development process is still waterfall or “fragile” (faux agile, i.e., waterfall disguised by daily standups), you won’t be able to develop or maintain microservices at the required pace.
• Automated testing: To maintain and upgrade a microservice quickly, you need to be able to quickly assess the impact of a change. Multiply this by hundreds or thousands of services, and you quickly realize that there aren’t enough manual testers in your company to keep up. The only solution is test automation, where unit tests and end-to-end system tests are automatically run and the results are automatically validated.
• DevOps: A microservices architecture vastly increases the velocity and volume of deployments, which can present severe operational challenges. To solve this problem, you’ll need to break down the walls that separate development from operations. That is precisely the goal of DevOps, which is the practice of operations and development engineers participating together in the entire product lifecycle, from design through development to production support. The core values of DevOps are summed up by the acronym CAMS, which stands for:

Culture: DevOps is first and foremost about breaking down the barriers between development and operations, fostering a safe environment for innovation and productivity, and creating a culture of mutual respect and cooperation. This value may sound “soft,” but it is the most important (as Peter Drucker once observed, “Culture eats strategy for breakfast”), and it is the most difficult to implement, because there are no shortcuts or workarounds for the hard work of changing people’s attitudes.
Automation: To develop high-quality microservices at scale, you’ll need to perform continuous integration, where regression tests are automatically run each time a piece of software changes. To deploy at scale, you must eliminate the possibility of human error by automating the deployment process, to the point where your infrastructure is code, i.e., recipes that have been proven correct and can be run on demand across a myriad of machines.
Measurement: You need to capture metrics about each stage of the development and deployment process, and analyze those metrics in order to create an objective, blameless path of improvement.
Sharing: A key to the success of DevOps in any organization is sharing the tools, techniques and lessons across groups, so that duplicate work can be eliminated and so that teams can make new mistakes instead of constantly repeating other teams’ mistakes.
• Docker: You’ll need a well-defined, consistent, self-contained environment to run each of your microservices.



The most popular environment today is Docker, a Linux-based container which provides fast provisioning and low performance overhead. Implementing some microservices will require running several cooperating containers, e.g., one container for a NoSQL database and another container that generates data and stores it in the NoSQL database. To manage these container swarms, you’ll need to master a toolchain that includes tools like Kubernetes (for orchestration), Spring Boot (for deployment), Consul (for service discovery) and Hystrix (for failure recovery).
• Cloud computing: The cloud is the ideal environment for deploying a microservices architecture, because the cloud provides the needed scalability and automation, with higher reliability than on-premise infrastructure. But you’ll need to choose a cloud provider and then familiarize yourself with its tools and interfaces for deploying and monitoring cloud-based applications.
If your organization is comfortable with all or most of these practices and technologies, then you are ready to enter the world of microservices architectures and reap the benefits. If not, then your organization is not ready for microservices, and the pain of attempting such a project will probably far outweigh the benefits. You would be best served in the short term by adopting a more traditional non-monolithic architecture that is better suited to your development culture, e.g., a Service Oriented Architecture (SOA) based on coarse-grain services.
For the longer term, you’ll need to up your game, because the rate at which applications are released and updated will only increase over time, and you’ll need to embrace modern software development practices that support the new pace of business. Unfortunately, there are no shortcuts to cultural change. Get help from a partner who can guide your organization through adoption of Agile and DevOps practices, so that you can then painlessly benefit from a microservices architecture and whatever comes after it in the ever-evolving world of software engineering best practices.

< continued from page 36

right microservice given the URL that the traffic is addressed to. Here we can apply HAProxy, NGINX or AWS Application Load Balancing (ALB). AWS ALB was introduced in August 2016, and in the short time it’s been available, a debate has emerged as to which tool is best for application load balancing. Two key questions to ask in making the right decision are how many microservices you plan to have, and how complex you want your routing mechanism to be.
9. What tool will be used for secrets?
With the number of microservices in a given application expected to increase over time, and modern applications relying more and more on extended SaaS solutions, security simultaneously becomes more important and more difficult to manage. For microservices to communicate with each other, they typically rely on certificates and API keys to authenticate themselves with the target service. These API keys, also known as secrets, need to be managed securely and carefully. As they proliferate, traditional solutions, such as manually interjecting secrets at time of deployment, don’t work. There are frankly just too many secrets to manage, and microservices require automation. Organizations need to settle on an automated way to get secrets to the containers that need them. A few potential solutions include:
• An in-house solution built for saving secrets in encrypted storage, decrypting them on the fly and injecting them inside the containers using environment variables.
• AWS IAM roles, which can interject Amazon API keys. However, this solution is limited to Amazon API keys and can only be used to access secrets stored in other Amazon services.
• HashiCorp Vault, which uses automation to effectively handle both dynamic and static secrets. Vault is a very extensive solution with several features unavailable in other solutions, and we are finding it to be a more and more popular choice going forward.
Your answer to this technology question depends on how many secrets you have; how you expect that number to grow; your security and compliance needs; and how willing you are to change your application code to facilitate secret handling.
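To make the environment-variable option concrete, here is a sketch of injecting secrets at container launch. The hard-coded dictionary stands in for a real backend such as an encrypted store or Vault, and the image and secret names are invented:

```python
import subprocess

def fetch_secret(name):
    """Stand-in for a secrets backend (an in-house encrypted store,
    HashiCorp Vault, etc.); returns the decrypted value."""
    return {"DB_PASSWORD": "s3cr3t", "API_KEY": "abc123"}[name]

def run_container(image, secret_names):
    """Inject secrets as environment variables at launch time, so they
    never land in the image or in source control."""
    env_flags = []
    for name in secret_names:
        env_flags += ["-e", f"{name}={fetch_secret(name)}"]
    subprocess.run(["docker", "run", "-d", *env_flags, image])

run_container("orders-service:2.3", ["DB_PASSWORD", "API_KEY"])
```

Note that plain environment variables remain visible to anyone who can inspect the container, which is part of why dedicated tools layer leasing, rotation and auditing on top.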


10. Where will SSL be terminated?
One question that arises frequently, especially around microservices that serve web traffic, is: where should SSL be terminated? Typical design factors to consider include your security and compliance requirements. Typical options are to terminate at the application or network load balancer, for example at AWS ELB or ALB. A second option is to terminate SSL at an intermediate layer such as NGINX, or at the application container itself.
Certain compliance initiatives, like HIPAA, require that all traffic be encrypted. Thus, even if you decrypt at the load balancer, traffic needs to be re-encrypted before it is sent to the containers running the application. On the flip side, the advantage of terminating at the load balancer is that you have a central place for handling SSL certificates, and fewer things have to be touched when an SSL certificate expires or needs to be rotated. Elements to consider as you make a design decision include your specific compliance and security requirements; the ability of your applications to encrypt and decrypt data; and your container orchestration platform, as some have the ability to encrypt data seamlessly. The combination of all of the above should be the basis for your SSL termination decision.
While all these design and technology points may feel overwhelming, making the right choices will have long-term implications for your organization’s success with its microservices architecture. Like painting a house, more than half the work is in the preparation. From choosing the right primer to properly taping the wall, setting the right foundation and process boundaries is of significant importance in planning Docker-based microservices. Don’t short-change your preparatory process, and you’ll end up with an end product that delivers on your organization’s most critical microservices goals.


Buyers Guide

Getting an end-to-end perspective
DevOps requires a global view to successfully implement continuous integration and delivery
BY MADISON MOORE

Velocity. Quality. Traceability. Scalability. For companies, it’s these characteristics that form the definition of “DevOps.” But it doesn’t stop there; doing DevOps right also means considering the processes, the culture, and the tools to deliver software and solutions for today’s value-hungry customers. And it still doesn’t stop there, because DevOps is all about having an end-to-end perspective while embracing continuous integration and continuous delivery.
As an organization matures, it will most likely adopt some modern CI and CD practices, but to do this right, it’s going to take more than just a few automated tasks to make development easier. To be truly end-to-end, with visibility and constant feedback, the business needs to realize it’s not about one single tool; it’s about a chain of solutions and how to pull them all together into a giant tool that operates as one.


This is what will fuel a continuous integration and continuous delivery workflow, with feedback, visibility, and the ability to deliver services to customers quickly.

Accepting DevOps
Regardless of the industry, change is always difficult, whether it’s changing a process, culture, the methodology, or a paradigm. It’s natural, and often expected, to feel a sense of resistance or hesitance. Getting teams to accept change, especially when it comes to workflow or culture, is part of the overall challenge of DevOps.
According to Rod Cope, CTO of Rogue Wave, some teams will embrace it quickly, agreeing that it is the key to efficiency and scalability. Those that resist, however, fear that automation will make them obsolete, and so they inject manual steps along the way to make sure they seem valued in the organization, even if this does slow things down, said Cope.
“It can be challenging to get those people over the hump to say well, there’s always more work that can be done so it’s not like we are going to need less people,” said Cope. “We are just going to move faster.” Instead, these automation tools allow organizations to be more flexible, keep up with rapidly changing technology, and move faster, he added.
The reality is, DevOps is not about the tools. The tools are the enabler of the “holy grail” of what companies are trying to get to, which is delivering more to customers faster with better quality, said Nicole Bryan, vice president of product management at Tasktop. DevOps is trying to broaden the scope of the conversation to include this culture shift, which Bryan believes will take some time. However, she does see the conversation becoming inclusive of more than just the development side. “Ten years ago people didn’t realize delivering a software product was about more than just the code,” said Bryan. “And I think people do realize that now.”

How does continuous delivery actually make development easier?
Just like DevOps is about more than just tools and processes, continuous integration and continuous delivery are about more than just automating code and deploying changes. CI and CD can make development easier, but it’s going to take more than an automation tool to do so.
Getting CI and CD to make development easier means developers are checking in code daily, and when the CI/CD pipeline fails, new work stops, said John Jeremiah, IT and software marketing leader at HPE. Diagnosing and correcting the build becomes the top priority in this scenario, and either the code is fixed or it’s pulled from the repository to correct the build. “Disruptive? Perhaps, but in the long run, quality improves, velocity improves, as does productivity,” said Jeremiah. “Consider how security improves when every build is reviewed, scanned and validated from an app security perspective. Letting the build go red and not immediately correcting the issues is the recipe for a mess.”
Having this discipline to correct build issues before introducing new changes is the key to making development easier with CI and CD, said Jeremiah. And of course, automation is a huge benefit of CI and CD. Automating all the individual pieces makes it easier for the developer, who can focus on what they need to develop without having to worry about the external pieces, said Stephen Feloney, vice president of products for application delivery at CA Technologies.
Also, a good CD platform will remove unnecessary manual tasks and error-prone processes, which means developers can focus on functionality instead of non-coding tasks, according to vice president of products at XebiaLabs, Tim Buntel. “Things like provisioning, deployment, compliance, approvals, status reports: all of that still happens, but developers don’t have to deal with them because they’re automated and standardized,” said Buntel.
—Madison Moore
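Jeremiah’s stop-the-line rule is ultimately a policy a merge gate can enforce mechanically. A toy sketch, with invented status values and flags:

```python
def may_merge(build_status, change):
    """Stop-the-line policy: while the pipeline is red, only fixes or
    reverts for the breakage may go in; everything else waits."""
    if build_status == "green":
        return True
    return change.get("fixes_build", False) or change.get("revert", False)

print(may_merge("red", {"title": "add coupon codes"}))                     # False
print(may_merge("red", {"title": "fix flaky test", "fixes_build": True}))  # True
```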

Don’t be a ‘fool with a tool’
Unlike the days of waterfall development, continuous delivery refers to the process of improving the overall release pipeline, which includes deployment automation, provisioning, testing, and continuous integration. Developer changes on a shared repository are merged daily, sometimes several times a day, giving teams the ability to work fast and stay involved all the time.
According to Thomas Hooker, vice president of marketing at CollabNet, the very ideas of continuous integration and continuous delivery flow naturally into DevOps. The people and the processes are working rapidly together, and the proper tooling supports these interactions, he said. He said the right tool can have a tremendous impact on the process, not just for the developers, but for the QA test team, the build team, and everyone else involved. It’s these tools that give teams the visibility, the scalability, and the traceability that are needed to deliver value to the customer.



“At the end of the day we work to serve the customer,” said Hooker. “The way we best serve the customer is to provide them value, and when we provide them real consumable value, they will buy more from us — not because they have to, but because they want to.”
Simply using a continuous integration tool or an automation tool does not mean you are doing DevOps or doing CI. There is no set of tools that companies can buy that will give them DevOps. Continuous delivery supports the promise of DevOps, according to Tim Buntel, vice president of products at XebiaLabs, but it’s the companies that “must be willing to change their organizational structures, communications and processes to be successful with DevOps,” he said.
According to John Jeremiah, IT and software marketing leader at HPE, continuous delivery tools are insufficient if the company doesn’t have the discipline to commit frequently, build often, test often, and correct build issues. “The tools help immensely, but without the right discipline, you can become a fool with a tool,” said Jeremiah.
Companies can easily become a “fool with a tool” if they think that there is a one-size-fits-all solution. That doesn’t exist, and frankly, it never did exist, said Bryan. For each team, there is often a mix of tools automating their continuous integration or their continuous delivery workflow.

Focus on integration
While there is no tool to rule them all, leaders in the continuous delivery and continuous integration tool space say there is one important aspect that a good tool chain must include, and that is the ability to integrate third-party tools. That means, according to Stephen Feloney, vice president of products for application delivery at CA Technologies, if you have a continuous integration tool or orchestration tool, it needs to be able to integrate with, say, Jenkins, and it should integrate with other third-party tools across the board, he said.
At XebiaLabs, Buntel said that integration is the most important part of a solid continuous delivery tool.

teams have it easier, but the reality of most organizations is there are several different tech stacks involved in building software — lots of different environments and a mix of skills and languages can easily complicate things. The best platform will be flexible enough to accommodate all these different user types, these features, these integration points, as well as different parts of the organization, he said. Cope agrees, and said that integration is key for whatever tool is chosen. The tool should be able to work with the other tools in an environment, whether that’s Puppet, Docker, Jenkins, or other tools in the marketplace, including open-source tools. “You are going to use a

July 2017

SD Times

be hard coded or scripted for every one of the applications,” said Feloney. “And you need something that follows dependencies.” The scaling aspect of continuous delivery and continuous integration tools is not to be underestimated, says Tasktop’s Bryan. While there is the fear aspect of how a new tool might impact a job, the bigger concern should be with scaling, she said. Rob Elves, Tasktop’s senior director of product management, said “when you scale up, it’s not just simply volume; it’s scaling out to include inbound and outbound processes that are coming into that DevOps ‘infinity loop.’ ” Scale is the biggest challenge Tasktop hears from its customers, because it’s a “whole different ball game” trying to do DevOps with 100 people versus 10,000 people. This is where Bryan sees a lot of tools falling short. At a large organization, a tool needs to offer that collaboration, that communication, and have all of the right information flow.

There is no set of tools that companies can buy that will give them DevOps. bunch, and I like to say open-source is like potato chips, you can’t have just one,” said Cope. “You’re going to end up with a whole bag [of tools] at one point, so make sure they place nicely with each other.” And, flexibility is crucial here, said Cope. Every company has different packaging processes for their software, so preserve your choices and be modular. The only way to adapt, he said, is to not be locked into one process or tool chain.
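To make the integration point concrete, here is a small, hypothetical sketch (in Python, using the requests library) of one common pattern: an orchestration script triggering a parameterized Jenkins job over Jenkins' remote-access API and polling for the result. The server URL, job name and credentials are invented placeholders, and this illustrates the general pattern rather than any vendor's product.

    import time
    import requests

    # Illustrative values; swap in your own server, job and credentials.
    # Assumes API-token auth; CSRF crumb handling is omitted for brevity.
    JENKINS = "https://jenkins.example.com"
    JOB = "app-pipeline"
    AUTH = ("ci-bot", "api-token")

    def trigger_and_wait(params):
        """Queue a parameterized Jenkins build, then poll until it finishes."""
        resp = requests.post(
            f"{JENKINS}/job/{JOB}/buildWithParameters",
            params=params, auth=AUTH, timeout=30,
        )
        resp.raise_for_status()
        queue_url = resp.headers["Location"]  # points at the queue item

        # Wait for the queue item to become an actual build...
        while True:
            item = requests.get(f"{queue_url}api/json", auth=AUTH, timeout=30).json()
            if "executable" in item:
                build_url = item["executable"]["url"]
                break
            time.sleep(2)

        # ...then wait for the build itself to report a result.
        while True:
            build = requests.get(f"{build_url}api/json", auth=AUTH, timeout=30).json()
            if build.get("result"):  # SUCCESS, FAILURE, ABORTED...
                return build["result"]
            time.sleep(5)

    print(trigger_and_wait({"GIT_BRANCH": "main"}))

The same shape applies whatever the neighboring tool is: one system in the chain exposes an API, and the orchestrator drives it and reads status back, rather than every tool being hard-wired to every other.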

Scaling for the enterprise

When it comes to enterprises, which are large and often complex in nature, organizations need to find a tool chain that can actually scale. According to Feloney, scripting, coding and maintaining one application is different than scaling for an enterprise of, say, 600 applications. "From an enterprise point of view, you need something that can actually fully scale to that level and not have to be hard coded or scripted for every one of the applications," said Feloney. "And you need something that follows dependencies."

The scaling aspect of continuous delivery and continuous integration tools is not to be underestimated, says Tasktop's Bryan. While there is the fear of how a new tool might impact a job, the bigger concern should be with scaling, she said. Rob Elves, Tasktop's senior director of product management, said "when you scale up, it's not just simply volume; it's scaling out to include inbound and outbound processes that are coming into that DevOps 'infinity loop.'" Scale is the biggest challenge Tasktop hears about from its customers, because it's a "whole different ball game" trying to do DevOps with 100 people versus 10,000 people. This is where Bryan sees a lot of tools falling short. At a large organization, a tool needs to offer that collaboration, that communication, and have all of the right information flow.

"It boils down to the ability to have that value stream inclusive of full traceability, with full streams going back and forth, real-time, all the time, to be able to satisfy that," she said. "Scale is the short answer."

The need for speed

Today's software delivery business initiatives are centered primarily around the idea of speed. The faster the continuous delivery pipeline runs, the faster the team gets feedback, which ultimately means the consumer is getting their value — their product or service — faster.

According to HPE's Jeremiah, enterprise IT has been "chronically" unable to deliver at the speed of business. Part of this is due to a legacy of monolithic applications, brittle architecture, and regulatory and compliance requirements, he said. Regardless, software is critical to digital-first businesses, and so software delivery teams are going to have to embrace this pressure to increase speed, quality and security.

Since things are moving fast through the system, teams need to manage the entire release pipeline, said Hooker. Now that there are so many releases happening each day, a tool chain can help the team understand the realities of this new way of development. And while businesses want a tool to help them keep the continual stream of releases moving, business leaders and technical people care about different metrics. The technical teams want to see technical information on code and commits, Hooker said, but this means nothing to the business owner, who most likely wants to know whether an application is delivering value to the customer.

"The ability to show what is going on across your continuous delivery or continuous integration toolchain in terms of what is now being called a value stream is very important," said Hooker. "We can show how all this work taking place is delivering value to the business and value to the customer, and those are the big things."

Feloney also hears customers saying they need to go faster, and his question to them is always "why?" Once you drill down, it's obvious it's not just about going faster; it's about the company trying to achieve better business outcomes, he said. What is inhibiting these companies from going faster? Feloney said it's a multitude of things, but first and foremost, it's the culture. Tools can certainly help customers "go faster," but tools will not fix the problems.

"I can provide customers with tools all day long, but if they can't fix their culture it's not going to help," said Feloney. "Most enterprises are so used to doing waterfall and having these handoffs and all these check boxes, and everyone has a say about things being released; there is all this bureaucracy, and that is what's inhibiting this faster delivery."

XebiaLabs' Buntel said all of their customers' initiatives aim for that goal: to deliver value to their customers faster. And it's these high-level business goals, centered around digital transformation, that are translated down to IT and development, said Rogue Wave's Cope. Those teams need to be more open, move faster and become more flexible, he said, which gets translated into having continuous integration and continuous delivery streamline development through production.


What continuous integration/delivery challenges does your company's solution solve?

Tim Buntel, vice president of products at XebiaLabs: For small companies or greenfield projects where you're starting from scratch, or where you're working with one team and a narrow set of technologies, implementing DevOps is fairly straightforward. Where companies begin to run into difficulties is when they start to scale their DevOps operations across disparate teams, and when projects start involving a lot of different individuals with varied skill sets, different technologies for building software, and deployment to many different environments. The XebiaLabs DevOps Platform is built with all these variations and this complexity in mind. Our platform offers many integration points that make it easy for you to bring all your tools and teams into a Continuous Delivery pipeline, with full visibility for both technical and non-technical team members. It allows you to proactively spot bottlenecks and potential failures anywhere in the pipeline. It lets you automate deployment to all kinds of environments, changes to workflows, compliance requirements, audit reports, and release management tasks.

Rod Cope, CTO of Rogue Wave: Rogue Wave Software's Continuous Delivery Assessment provides a blueprint for continuous delivery, helping companies develop an effective roadmap for adopting automation in software delivery processes, including architecting and optimizing open source-powered build and delivery pipelines. Open-source tools like Jenkins, Maven, Puppet, Chef, Docker, and Kubernetes drive continuous integration and delivery in the modern enterprise, but getting them all working well together can be difficult. Each of these tools has many deployment options, dozens of configuration settings, and a large range of performance-tuning capabilities. Developers may get something running, but is it optimized? Will it break at scale? Is it secure? We guide teams to implement a successful continuous delivery process, adopt DevOps best practices, and implement open-source tools for maximum efficiency.

John Jeremiah, IT and software marketing leader at HPE: One of the problems facing enterprises is gaining visibility across pipelines and integrating different tools to get a handle on overall speed, quality and security. We're innovating on a number of CI/CD-related initiatives; I'll highlight a couple. We've been building ALM Octane as a platform to help enterprises manage and maintain visibility into the health of their delivery. Octane helps to address the enterprise needs for mature CI/CD-based delivery. Because automated testing is so critical, UFT Pro extends open-source tools like Selenium and makes it easier for teams to create and maintain their battery of automated functional test scripts. These tools are critical enablers in achieving the speed, quality and security of faster app delivery, but they have to do the work.

Stephen Feloney, VP of products for application delivery at CA Technologies: Despite agile practices in many companies today, there are still bottlenecks within the release cycle that severely impede the ability of companies to deliver more value to their customers. In a recent survey by Computing Research, DevOps pros stated that 63% of delays were coming from the Test/QA stage of the cycle. Again and again, we hear from our customers that they are missing the mark on continuous delivery because they still have manual testing silos, manual release processes and a disconnected DevOps toolchain. It is evident that companies cannot achieve continuous delivery if they do not modernize their testing and release practices, and CA solutions are purpose-built to solve these bottlenecks. We enable teams to generate test scripts from requirements, simulate test environments anywhere, access robust test data when it's needed, and execute performance and security testing early and often. We orchestrate continuous "everything" — development, testing, release and improvement — with robust integrations to open-source, commercial and homegrown solutions across the DevOps toolchain, including planning, CI, testing and deployment tools.

Nicole Bryan, VP of product management at Tasktop: Many CI/CD tools focus on solving "the right side" of the DevOps delivery pipeline, i.e., connecting releases to builds, automating deployments and monitoring production applications — in other words, improving the time to value from code completion to code in production. Of equal importance is "the left side" of the pipeline, i.e., efficiently turning customer requests into requirements, features, epics and stories. These two sides must be interlinked, because the success of the left side influences the success of the right side (and vice versa). Both must be integrated and communicating to create a value stream that continuously builds software that meets customers' needs. An organization's main problem is knowledge workers' access to the right information at the right time, and Tasktop solves that. Integration relies on the real-time flow of project-critical information.

Thomas Hooker, vice president of marketing at CollabNet: When selecting a solid continuous integration or continuous delivery tool, one of the things it has to have is visibility, so managers and teams can see what is going on at all times. I think that is key, and you have to be able to provide that visibility based on the persona of the user. Our products give customers that visibility across the "whole forest," so you get to see all the different work steps from a common dashboard. This is what CollabNet's DevOps Lifecycle Manager provides: a single view of all the Value Streams in their portfolio, as well as insight into the ways in which these applications contribute to the value the organization delivers.



A guide to continuous integration and delivery tools

FEATURED PROVIDERS

• CA Technologies: Comprised of the CA Application Test, CA Mobile Cloud, CA Release Automation and CA Service Virtualization solutions, the CA Continuous Delivery product portfolio addresses the wide range of capabilities (from pre-production to release) necessary to compete in today's evolving digital economy. CA's highly flexible, integrated solutions allow organizations to fully embrace the evolving requirements of the software-driven business landscape, enabling rapid development, automated testing, and seamless release of mission-critical applications.

• CollabNet: CollabNet helps enterprises and government organizations develop and deliver high-quality software at speed. CollabNet is a Best in Show winner in the application lifecycle management and development tools category of the SD Times 100 for 14 consecutive years. CollabNet offers innovative solutions, consulting, and Agile training services. The company proudly supports more than 10,000 customers with 6 million users in 100 countries.

• HPE: HPE's DevOps services and solutions focus on the people, process and toolchain aspects of adopting and implementing DevOps at large-scale enterprises. Continuous Delivery and Deployment are essential elements of HPE's DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model.

• Rogue Wave: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk. From API management, web and mobile, embeddable analytics, and static and dynamic analysis to open-source support, we have the software essentials to innovate with confidence.

• Tasktop: Transforming the way software is built and delivered, Tasktop's unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop's pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.

• XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases.

• Atlassian: Atlassian offers cloud and on-premises versions of its continuous delivery tools. Bamboo is Atlassian's on-premises option, with first-class support for the "delivery" aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. It gives developers, testers, build engineers, and systems administrators a common space to work and share information while keeping sensitive operations like production deploys locked down. For cloud customers, Bitbucket Pipelines offers a modern continuous delivery service that's built right into Atlassian's version control system, Bitbucket Cloud.

• Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef's three open-source projects — Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation — as well as associated tools. Chef Automate provides commercial features on top of the open-source projects, including end-to-end visibility across your entire fleet, tools to enable continuous compliance, a unified workflow to manage all change, enterprise-grade support, and more.

• CloudBees: CloudBees is the hub of enterprise Jenkins and DevOps, providing companies with smarter solutions for automating software development and delivery. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support. By making the software delivery process more productive, manageable and hassle-free, CloudBees puts companies on the fastest path to transforming great ideas into great software and returning value to the business more quickly.

• Dynatrace: Dynatrace provides the industry's only AI-powered application monitoring, addressing the challenges human beings struggle with in managing complex, hyper-dynamic, web-scale applications. Bridging the gap between enterprise and cloud, Dynatrace helps dev, test, operations and business teams light up applications from the core with deep insights and actionable data. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.

• Electric Cloud: Electric Cloud is a leader in enterprise Continuous Delivery and DevOps automation, helping organizations deliver better software faster by automating and accelerating build, test and deployment processes at scale. Industry leaders like Cisco, E-Trade, Gap, GE, Qualcomm and SpaceX use Electric Cloud's solutions to boost software productivity. The ElectricFlow DevOps Release Automation Platform allows teams of all sizes to automate deployments and coordinate releases.



• JetBrains: TeamCity is a continuous integration and deployment server that takes moments to set up, shows your build results on the fly, and works out of the box. It will make sure your software gets built, tested, and deployed, and that you get notified about it appropriately, in any way you choose. TeamCity integrates with all major development frameworks, version control systems, issue trackers, IDEs, and cloud services.

• Micro Focus: Micro Focus offers solutions to help businesses successfully implement DevOps and Continuous Delivery. Silk Central allows users to gain control, collaboration and traceability across all areas of software testing. Atlas is Micro Focus' agile requirements delivery platform that enables development teams to gather and define business requirements in alignment with agile delivery. StarTeam Agile provides support for Scrum-based sprint planning, backlog management and tracking.

• Microsoft: Visual Studio Team Services, Microsoft's cloud-hosted DevOps service, offers Git repositories; agile planning tools; complete build automation for Windows, Linux and Mac; cloud load testing; Continuous Integration and Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party DevOps tools. Visual Studio Team Services supports any development language, works seamlessly with Docker-based containers, and supports GVFS, enabling massive scale for very large Git repositories. It also integrates with Visual Studio and other popular code editors.

• Puppet: Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what's happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.

• Redgate Software: Including SQL Server databases in Continuous Integration and Continuous Delivery, and stopping them from being the bottleneck in the process, is the mission at Redgate. Whether version-controlling database code, including it in continuous integration, or adding it to automated deployments, the SQL Toolbelt from Redgate includes every tool necessary. Many, like ReadyRoll, SQL Source Control, SQL Compare and DLM Automation, integrate with and plug into the same infrastructure already used for application development. Think Git or Team Foundation Server, Jenkins or TeamCity, Octopus Deploy or Bamboo, for example, and the database can be developed alongside the application.

• TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.



Guest View BY CIARAN DYNES

Subscription models fuel innovation

With the immense success of cloud platforms and Software-as-a-Service (SaaS) models on one hand, and the increase of subscription services on the other, it's clear that software offered as a subscription is becoming the new standard. In fact, as early as 2015, Gartner estimated that by 2020, 80 percent of vendors would adopt a subscription model.

This change in the way companies use software has frequently been said to reflect users' demand for flexibility. Indeed, companies are no longer willing to lay out a major investment to get equipped. They are looking to prioritize variability in their spending based on usage, and to ensure they benefit from the value of the software before making a long-term commitment. As business models rotate toward subscription services, if technical developments don't keep pace, a crucial piece of the puzzle will be missing.

Subscription services

Let's look at another essential aspect: the value software offers both IT and business users. This is where the real challenge is: the ability to provide IT managers with frequent releases that encapsulate current technology innovation and customer demands. For business users, cloud solutions provide quick access to the computing resources and apps they need, when they need them, to help deliver results faster and more cheaply, with higher quality, in order to create a competitive advantage.

A perpetual license model also allows for periodic software updates. However, the rhythm of these updates and the frequency at which they are available to users cannot be compared with the ongoing agility and innovation offered by providers of subscription services. This is not related to how the software is marketed, but rather to the vendor's ability to establish a continuous cycle of innovation for its products.

Big Data and Cloud

The continued growth in the use of big data and cloud technologies is in and of itself a compelling proposition for continuous innovation. The speed at which these technologies become obsolete requires users to adapt at an unprecedented rate. Consider how the platforms adopted by customers today can become obsolete in as little as 12 to 18 months — Spark replaced MapReduce in record time, and Spark 2.0 is a revolution compared to Spark 1.6.

It is essential for the integration, processing and operating software vendors responsible for these massive volumes of data to get as close to the market as possible, which means complying with key standards such as Hadoop, Spark and Apache Beam — not to mention aligning themselves with the open-source communities defining them. In practical terms, a company needs to anticipate the product roadmap required to align with these innovative technologies and keep pace with customer demands.

Open-source technologies — backed by a technically adept developer community — are particularly well-suited to a continuous innovation model. Additionally, subscription services are a logical way to embrace and foster a continuous innovation model. Previously dominant or legacy software models, marked by "proprietary" software solutions and perpetual licensing, take 18 to 24 months to deliver new features. Delivering new versions every 18 months is simply not viable for businesses.

Ciaran Dynes is vice president of products for Talend.


To support the emergence of new data uses

Modern solutions for big data and cloud integration must be at the front lines of technology innovation — not only to address customers' various and rapidly evolving challenges, including customer intimacy, business sustainability, agility and economies of scale, but also to encourage the emergence of new data uses like streaming, real-time insights and self-service in order to gain a competitive advantage.




Analyst View BY PETER THORNE

Quantifying software quantities

If you can estimate — to the right order of magnitude — the significant quantities in your software system, you will have a much better chance of making good decisions about architecture, algorithms, data structures and deployment.

By the numbers

The single point I want to focus on is the role of numbers in decision-making about software systems. I'm talking about the numbers that define characteristics of the finished system — number of connected users or machines, scope, scale, speed, data throughput, acceptable downtime and so on. My belief is that you don't need to know these numbers accurately, but it is absolutely vital to know them at the right scale, the right order of magnitude — tens, hundreds, thousands, millions?

To make good decisions at any level, one of the many things needed is the ability to identify which numbers are relevant, then estimate them. From the number of records for a sorting algorithm to the number and complexity of moving objects in a video game scene-rendering system, the numbers guide your decisions. Most of the time, being within a factor of 10 will be good enough. But most of the time isn't all the time — safety-critical software usually needs better estimates!
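As a back-of-the-envelope illustration (a sketch of my own, not something from the column), a few lines of Python express this order-of-magnitude discipline; the function names and example numbers, which anticipate the sensor story below, are illustrative assumptions:

    import math

    def order_of_magnitude(x):
        # Nearest power of ten: 850 -> 3, 150,000 -> 5.
        return round(math.log10(x))

    def magnitude_gap(estimate, measured):
        # How many orders of magnitude separate the estimate from reality?
        return abs(order_of_magnitude(estimate) - order_of_magnitude(measured))

    # Estimated ~150,000 data points/sec; measured roughly 1,100 events/sec.
    print(magnitude_gap(150_000, 1_100))  # -> 2, i.e. a factor of ~100 off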

New rules for Internet of Things

IoT is changing expectations, and introducing new relevant quantities. One example is sensor data. How much data should you expect when you have five thousand sensors being scanned once per second? Most people would say 5,000 data points per second. This is not wrong, but in practice it is an upper limit. Why? Because some sensors, or the first level of processing of sensor signals, report changes, not actual values.

Imagine a lighting control system in which software scans the on/off switches, then decides what to do. On average, the switch setting stays the same for long periods of time. But everyone expects fast response of a light to its switch. So the switch setting must be scanned 10 or more times per second. Yet there is no need to report long sequences of 'no change' — just react to a few switch 'events' per day.

I recently saw an example involving over 150 thousand sensors. The 'typical' scan rate was believed to be once per second, so the rough estimate for the data rate was 'one or two hundred thousand data points per second.' When measured, this system generated between 850 and 1,350 events per second. So the reality — one or two thousand events per second — was two orders of magnitude, a factor of 100 times, smaller than the estimate.

The point is that a communications system optimized for 100K data points per second is probably suboptimal for 1K data points per second in almost every respect, including the hardware required, store-and-forward queue sizes, data structures used for event reporting, error detection/correction strategy, failover sequences and the interface to the management and reporting systems. Even choices of development environment, software technology and test strategy may be wrong. So the designers of this system need serious conversations about what they should do. Do they need to handle a theoretical peak of 100 times 'normal' capacity? Or perhaps the estimate was just wrong.
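To see the scan-rate versus event-rate distinction in code, here is a small Python simulation of a change-reporting sensor fleet in the spirit of the lighting example. The parameters (sensor count, scan rate, change probability) are assumptions for illustration, not measurements from the system described above:

    import random

    SENSORS = 5_000        # switches being watched
    SCANS_PER_SEC = 10     # each switch is scanned 10 times per second
    CHANGE_PROB = 1e-5     # assumed chance a switch flips between two scans

    def simulate(seconds):
        """Return (raw scan readings, change events actually reported)."""
        state = [False] * SENSORS
        scans = events = 0
        for _ in range(seconds * SCANS_PER_SEC):
            for i in range(SENSORS):
                scans += 1
                if random.random() < CHANGE_PROB:  # switch flipped since last scan
                    state[i] = not state[i]
                    events += 1                     # only changes are reported
        return scans, events

    scans, events = simulate(seconds=10)
    print(f"{scans:,} raw readings vs {events:,} reported events")
    # Typical output: 500,000 raw readings vs a handful of events.
    # The event rate, not the scan rate, sizes the communications system.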

It’s not just speed!

Peter Thorne is director at analysis firm Cambashi.


There are many quantities that may not be specified in requirements — for example, dataset size, range of screen resolutions, number of simultaneous users, time before an inactive user is logged out, and max and min update frequency for the code. Software engineers need to decide what numbers matter to the development method and the software design. Sometimes it's right to pass these questions back to the requirements author. Dig inside the code, and other numbers become significant — how many times will that code be triggered, is there a maximum iteration count for that loop, how many ways should we index that data, and so on. This is almost always territory where there's no way of passing the problem back.

Ask, then answer or estimate

So in all cases, it's right to question: how many, how big, how frequent, how long? Then think about what would change if your estimate changed by an order of magnitude. This will help you build insights into the problem and the solution, and these insights will lead you to build better software.
