SD Times DevOps Showcase 2019


DevOpsShowcase2019.qxp_Layout 1 8/28/19 1:18 PM Page 1

INSIDE

2   Going ‘lights out’ with DevOps
4   Redgate: Starting a DevOps initiative requires cultural and technology shifts
7   Tasktop Illuminates the Value Stream
9   Broadcom: Your DevOps Initiatives Are Failing — Here’s How to Win
10  Scaled Agile: The Most Important Tool in DevOps — Value Stream Mapping
13  CircleCI: Avoiding The Hidden Costs of Continuous Integration
14  Bringing Rich Communication Experiences Where They Mattermost
17  Instana Monitoring at DevOps Speed
18  DevOps Showcase


Going ‘lights out’ with DevOps
BY NATE BERENT SPILLSON

People sometimes describe DevOps as a factory. It’s a good analogy. Like a factory, code goes in one end of the DevOps line. Finished software comes out the other. I’d take the idea one step further. In its highest form, DevOps is not just any factory, but a “lights-out” factory. Also called a “dark factory,” a lights-out factory is one so automated it can perform most tasks in the dark, needing only a small team of supervisors to keep an eye on things in the control room. That’s the level of automation DevOps should strive for.

In a lights-out DevOps factory, submitted code is automatically reviewed for adherence to coding standards, static analysis, security vulnerabilities and automated test coverage. After making it through the first pass, the code gets put through its paces with automated integration, performance, load and end-to-end tests. Only then, after completing all those tests, is it ready for deployment to an approved environment. As for those environments, the lights-out DevOps factory automatically sets them up, provisions them, deploys to them and tears them down as needed. All software configuration, secrets, certificates, networks and so forth spring into being at deploy time, requiring no manual fidgeting with the settings. Application health is monitored down to a fine-grained level, and the actual production runtime performance is visible through intuitive dashboards and queryable operator consoles (the DevOps version of the factory control room). When needed, the system can self-heal as issues are detected.

This might sound like something out of science fiction, but it’s as real as an actual, full-fledged lights-out factory. Which is to say, “real, but rare.” Many automated factories approach lights-out status, but few go all the way. The same could be said of DevOps. The good news is that you can design a basic factory line that delivers most of the benefits of a “lights-out” operation and isn’t too hard to create. You’ll get most of the ROI just by creating a DevOps dark factory between production and test.

Here is a checklist for putting together your own “almost lights-out” DevOps solution. Don’t worry. None of these decisions are irreversible. You can always change your mind. It will just take some rework.

1. IaaS or PaaS or containers: I recommend PaaS or containers. I’m a big fan of PaaS because you get a nice price point and just the right amount of configurability, without the added complexity of full specification. Containers are a nice middle ground. The spend for a container cluster is still there, but if you’re managing a large ecosystem, the orchestration capabilities of containers could become the deciding factor.

2. Public cloud or on-premises cloud: I recommend public cloud. Going back to our factory analogy, a hundred years ago factories generated their own power, but that meant they also had to own the power infrastructure and keep people on staff to manage it. Eventually centralized power production became the norm. Utility companies specialized in generating and distributing power, and companies went back to focusing on manufacturing. The same thing is happening with compute infrastructure and the cloud providers. The likes of Google, Amazon and Microsoft have taken the place of the power companies, having developed the specialized services and skills needed to run large data centers. I say let them own the problem while you pay for the service. There are situations where a private cloud can make sense, but it’s largely a function of organizational size. If you’re already running a lot of large data centers, you may have enough core infrastructure and competency in place to make the shift to private cloud. If you decide to go that route, you absolutely must commit to a true DevOps approach. I’ve seen several organizations say they’re doing “private cloud” when in reality they’re doing business as usual and don’t understand why they’re not getting any of the temporal or financial benefits of DevOps. If you find yourself in this situation, do a quick value-stream analysis of your development process, compare it to a lights-out process, and you’ll see nothing’s changed from your old Ops model.

3. Durable storage for databases, queues, etc.: I recommend using a DB service from the cloud provider. Similar to the decision between IaaS and PaaS, I’d rather pay someone else to own the problem. Making any service resilient means having to worry about redundancy and disk management. With a database, queue, or messaging service, you’ll need a durable store for the runtime service. Then, over time, you’ll not only have to patch the service but take down and reattach the storage to the runtime system. This is largely a solved problem from a technological standpoint, but it’s just more complexity to manage. Add in the need for service and storage redundancy and backup and disaster recovery, and the equation gets even more complex.

4. SQL vs. NoSQL: Many organizations are still relational database-centric, as they were in the 90’s and 00’s, with the RDBMS the center of the enterprise universe. Relational still has its place, but cloud-native storage options like table, document, and blob provide super-cheap, high-performance options. I’ve seen many organizations that basically applied their old standards to the cloud and said, “Well, you can’t use blob storage because it’s not an approved technology,” or “You can’t use serverless because it’s an ‘unbounded’ resource.” That’s the wrong way to do it. You need to re-examine your application strategy to use the best approach for the price point.

5. Mobile: Mobile builds are one of the things that can throw you for a loop. Android is easy; Mac is a little more complicated. You’ll either need a physical Mac for builds, or if you go with Azure DevOps, you can have it run on a Microsoft Mac instance in Azure. Some organizations still haven’t figured out that they need a Mac compute strategy. I once had a team so hamstrung by corporate policy, they were literally trying to figure out how to build a “hackintosh” because the business wanted to build an iOS app but corporate IT shot down buying any Macs. Once we informed them we couldn’t legally develop on a “hackintosh,” they just killed the project instead of trying to convince IT to use Mac infrastructure. Yes, they abandoned a project with a real business case and positive ROI because IT was too rigid.

6. DB versioning: Use a tool like Liquibase or Flyway. Your process can only run as fast as your rate-limiting step, and if you’re still versioning your database by hand, you’ll never go faster than your DBAs can execute scripts. Besides, they have more important things to do.

7. Artifact management, security scanning, log aggregation, monitoring: Don’t get hung up on this stuff. You can figure it out as you go. Get items in your backlog for each of these activities and have a more junior DevOps resource ripple each extension through to the process as it’s developed.

8. Code promotion: Lay out your strategy to go from Dev to Test to Stage to Prod, and replace any manual setup like networking, certificates and gateways with automated scripts.

9. Secrets: Decide on a basic toolchain for secrets management, even if it’s really basic. There’s just no excuse for storing secrets with the source control. There are even tools like git-secret, black-box, and git-crypt that provide simple tooling and patterns for storing secrets encrypted.

10. CI: Set up and configure your CI tool, including a backup/restore process. When you get more sophisticated, you’ll actually want to apply DevOps to your DevOps, but for now just make sure you can stand up your CI tool in a reasonable amount of time, repeatedly, with backup.

Now that you’ve made some initial technology decisions and established your baseline infrastructure, make sure you have at least one solid reference project. This is a project you keep evergreen and use to develop new extensions and capabilities for your pipelines. You should have an example for each type of application in your ecosystem. This is the project people should refer to when they want to know how to do something. As you evolve your pipelines, update this project with the latest and greatest features and steps.

For each type of deployment — database, API, front end and mobile — you’ll want to start with a basic assembly line. The key elements of your line will be Build, Unit Testing, Reporting, and Artifact Creation. Once you have those, you’ll need to design a process for deploying an artifact into an environment (i.e. deploying to Test, Stage, Prod) with its runtime configuration. From there, keep adding components to your factory. Choose projects in the order that gets you the most ROI, either by eliminating a constraint or reducing wait time. At each stage, try to make “everything as code.” Always create both a deployment and a rollback, and exercise the heck out of it all the time. When it comes to tooling, there are more than enough good open-source options to get you started.

To sum up, going lights-out means committing to making everything code, automated, and tested.

Nate Berent Spillson is a technology principal at software services provider Nexient.
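The database-versioning advice above (use a tool like Liquibase or Flyway rather than hand-run scripts) comes down to one idea: record which migrations have been applied and apply anything newer, in order. The sketch below is a hedged, minimal illustration of that pattern using SQLite and made-up migration statements; it is not how either tool works internally (Flyway, for instance, reads versioned SQL files and keeps its own schema history table).

```python
import sqlite3

# Hypothetical migrations keyed by version number. Real tools read these
# from files with names like V1__create_users.sql.
MIGRATIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

def migrate(conn):
    """Apply every migration newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies versions 1 and 2
migrate(conn)  # idempotent: nothing new to apply
```

Because the version ledger travels with the database, the same script is safe to run in every environment on every deploy, which is exactly what lets the DBA step out of the critical path.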



Starting a DevOps initiative requires cultural and technology shifts
SD Times, September 2019, www.sdtimes.com

Redgate Software, which builds tools for developers and data professionals, is about to celebrate the 20th anniversary of its SQL Compare tool. “This product has evolved quite a lot over the last 20 years, but the technology is still at the heart of a lot of our products, because a lot of what SQL Compare originally did was just help people compare databases,” said Kendra Little, DevOps advocate at Redgate. She added, “We want to help people develop in a simple, intuitive way, and know how to adapt this tooling with other tools as much as possible to create a solution that helps people avoid manual work and create quality code for databases.” GDPR has had a big impact on the solutions in the space. The company has created technology that helps people implement guardrails for their databases around sensitive data. “Guardrails have become more common in this area since GDPR has started because we need to be able to foster innovation more than ever, but we can’t just let the data be at risk,” Little pointed out. “We have to protect it by design. In the software development lifecycle, we enable ways for people to use realistic data that has been masked so that you can do things like on demand, create environments, provision databases, but you’re not just copying around risky data.” The mission of delivering a compliant database service for Redgate is guided by the philosophy to meet people where they work and support them throughout their DevOps journey. It knows starting a DevOps initiative is tough and there are a lot of cultural and technology changes that have to happen. Little explained, “We want to help people continue to use familiar tools that they like,

and we want our solution to map into that. We also recognize that the way they work is going to evolve over their journey.” The cultural changes that have to happen for people to start DevOps are significant. Developers can be resistant to DevOps, even though it’s a very developer-focused discipline. The database area is particularly siloed. It’s common for database specialists to be in a gatekeeper position for production, and for developers to try to throw changes over the walls. The cultural changes in DevOps require shifting this relationship dramatically and finding ways to bring these specialists into the process early in the development lifecycle. Little said, “A lot of what we help folks with is


to identify the places in their process where they can bring these people in and what’s the most effective way to make the best use of everyone’s time. You can’t have your specialist just attend six or seven meetings every day. That doesn’t work.” Redgate’s Compliant Database DevOps stands out from its competitors because it does not require developers to learn to use a new development environment. It creates extensions that hook into environments and tools that its customers already use. The automation components enable people to work with scripting and provides graphical extensions for tools like Octopus Deploy or Azure DevOps so developers can use their orchestration functionality. “One of

the latest things we’re working on that’s in our early access program right now, is a way that developers who prefer to use Microsoft Visual Studio can work in Visual Studio and collaborate on the same project with database administrators who are using SQL Server Management Studio,” adds Little. Developers tend to be quite nervous about changes to a database and one of the philosophies of DevOps is, if something causes you pain you should do it more so you get used to it. Little said that the guardrails in Compliant Database DevOps reduce risk and allow developers to keep all of the benefits of practices that they’ve been using to produce quality code for applications for years, but now they have the ability to do that for databases as well. Little attended a Gartner Architecture conference and found that people were surprised that you can do DevOps to the database. She believes this is an area where Agile has been taking over. The problem she sees is when people implement Agile they tend to implement the lowest hanging fruit first, and at the application level only, without realizing there are other areas where they can implement. “So essentially the central problem that people hit even after doing an Agile initiative is they end up having what we call a two-speed culture, where they can deploy these application changes really fast but as soon as they have to touch the database, everything slows down. When you explain you can apply Agile methodologies to the database, and you can do DevOps to the database, people are amazed.” Tough problems can now be solved using Agile with DevOps. There’s been such a fixed mindset about this that even explaining to people that, “Yes, you really can do this successfully!” is absolutely a mind-blowing, mind-opening thing.
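Little’s point about provisioning environments with “realistic data that has been masked” can be illustrated with a toy pseudonymization routine: replace each sensitive value with a stable stand-in, so masked copies remain internally consistent (the same customer always maps to the same token) while the real data never leaves production. The function name, salt, and approach below are invented for illustration and are not how Redgate’s masking tooling actually works.

```python
import hashlib

def mask(value, salt="demo-salt"):
    """Deterministically pseudonymize a sensitive value.

    Same input -> same output, so foreign keys and joins in a masked
    copy still line up, but the original text is not recoverable
    without brute force. Illustrative sketch only.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}"

row = {"name": "Ada Lovelace", "email": "ada@example.com"}
masked = {k: mask(v) for k, v in row.items()}
```

A real solution also has to preserve formats (valid-looking emails, dates in range) and handle referential integrity across tables, which is where dedicated masking tools earn their keep.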



Compliant Database DevOps
Deliver value quicker while keeping your data safe

Redgate's Compliant Database DevOps solution gives you an end-to-end framework for extending DevOps to your database while complying with regulations and protecting your data.

Find out more at www.red-gate.com/DevOps



CI/CD is just the beginning.

DevOps accelerates time from code commit to deploy. But what about everything that happens before and after?

Connect your DevOps tools and teams to all work upstream.
Automate the flow of work from ideation to operation and back.
Visualize your end-to-end workflow to spot bottlenecks.
Accelerate the value delivery of the awesome products you build.

No more duplicate entry, no more waste. Just value.

Coming 2020: Finally, a business metrics tool for software delivery. Tasktop.com



Tasktop Illuminates the Value Stream

Software delivery teams are delivering software faster with the help of Agile and DevOps, but are they delivering value faster? Although release frequency is increasing, most teams lack the visibility across the product lifecycle — and the associated toolchain — to help identify the obstacles to value stream flow. “Value stream management is not just about delivery; it’s also about protecting business value,” said Carmen DeArdo, senior VSM strategist at Tasktop. “Enterprises need to have a ‘True North,’ and challenge how they can work more closely with the business so they can be more responsive to the market and disruption. We help customers navigate that, which also means moving from a project to a product model to change the perception of IT as a cost center, rather than a revenue driver.”

Understanding the Value Stream

Flow is essential to value stream optimization and management. To understand value stream flow, the teams that plan, build and deliver software need a single source of truth into the flow of events, from the earliest stages of product ideation through production — including customer feedback. While a product life cycle seems like a continuous flow conceptually, Tasktop reveals the otherwise hidden wait-state points that interfere with value delivery. “High-performing companies are focused on products because products are sustainable. Projects come and go,” said DeArdo. “A product model prioritizes work, ensuring you have the right distribution (across all work), as well as paying close attention to technical debt which is critical. Debt is what causes companies to go under, hurting their ability to compete with disruptive forces in the marketplace.”

Value streams begin and end with customers, but what happens in between is often a mystery. Although specialist teams have insight into artifacts within the context of a particular tool, they tend to lack visibility across the artifacts in other tools in the value stream to understand where the obstacles to value delivery reside. “Enterprises spend a lot of time assuming they’re improving the way we work, but how do we really know?” said DeArdo. “It’s hard to determine how long it takes for a feature or defect to go through the value stream if you lack the data to quantify it. Quite often, I talk to teams that are focused on delivery speed, but they tend not to think about the toolchain as a product that is intentionally architected for speed of delivery.” Tasktop enables teams to measure work artifacts in real-time across 58 of the most popular tools that plan, build and deliver software. They’re also able to visualize value stream flow to pinpoint bottlenecks. “It’s about making work visible,”

said DeArdo. “It’s not just the Scrum team’s board, it’s how you manage the entire value stream — features, defects, risks, debt. When you understand the value stream, you’re in a better position to prioritize work and optimize its flow.”

Optimize Value Stream Flow

Understanding value stream flow is crucial. Start small, with a few teams that might represent a slice of some given product and that have a supportive IT leader and a business leader. Then, observe the flow, considering the following five elements:

- Flow load — the number of flow items (features, risks, defects, debt) in progress
- Flow time — time elapsed from when a flow item is created to delivery
- Flow velocity — number of flow items completed in a given time
- Flow distribution — allocation of flow items completed across all four flow item types
- Flow efficiency — the proportion of time flow items are actively worked compared to the total time elapsed

“If you ask a developer or operations what’s slowing them down, they will typically say they are waiting — for work to flow, for infrastructure, for approval,” said DeArdo. “Those holdups are what you want to discuss and get metrics around. Pick an experiment, run it, learn from it and then use what you learn as a model. That model can be used to scale and sustain what you’re doing across the organization.”

Tasktop helps enterprises extract technical data from the toolchains that underpin product value streams and translates it into Flow Metrics, presenting them in a common language and context that the business can understand. This view includes the value delivered, cost, quality and the team’s happiness, since a lack of happiness is an indicator that something is amiss — e.g., too much debt that’s impeding a team’s ability to work. This helps the teams doing the work to better understand the value stream flow and the business outcomes they are aiming to achieve. Conversations can center around these metrics to support strategic investment decisions in IT.

Learn more at www.tasktop.com.
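The flow elements described above reduce to simple arithmetic once you have timestamps for work items. The sketch below uses invented work items and field names (Tasktop’s actual data model will differ) just to show how flow time, velocity, and efficiency relate:

```python
from datetime import date

# Hypothetical completed flow items:
# (created, delivered, days of active work)
items = [
    (date(2019, 6, 3), date(2019, 6, 17), 4),
    (date(2019, 6, 5), date(2019, 6, 12), 5),
    (date(2019, 6, 10), date(2019, 6, 30), 5),
]

flow_velocity = len(items)  # items completed in the window
flow_times = [(done - created).days for created, done, _ in items]
avg_flow_time = sum(flow_times) / flow_velocity
# Flow efficiency: active time as a share of total elapsed time;
# the remainder is the wait state DeArdo describes.
flow_efficiency = sum(active for *_, active in items) / sum(flow_times)
```

Here efficiency works out to roughly a third: two of every three elapsed days were spent waiting, which is the kind of signal that points the improvement conversation at queues rather than at how fast people type.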



Your DevOps Initiatives Are Failing: Here’s How to Win

Everyone wants to do DevOps, but not everyone understands how to do it. While most organizations understand the benefits successful DevOps brings to the business, they don’t understand how to get there. Harvard Business Review recently found 86 percent of respondents want to build and deploy software more quickly, but only 10 percent are successful at it. Additionally, Gartner predicted by 2023, 90 percent of DevOps initiatives will fail to meet expectations. So despite the desire to do DevOps, there is a disconnect between wanting to do it and actually doing it. Businesses that don’t want to become one of these statistics need to understand why they are not meeting expectations and how to address the problem.

Change is hard

According to Stephen Feloney, head of products for continuous testing at Broadcom, it all comes down to the culture, skills and tools. When businesses say they want to do DevOps, they mean they want to release applications faster with higher quality. “When we say these DevOps initiatives fail, they fail because they are failing to meet the needs of the customers,” he said. “They are failing to meet the goal of giving their customers, their users, what they want in a timely fashion. They might provide the feature customers want, but the quality is poor, or they take too long and the feature isn’t differentiating.”

“DevOps is a big culture change, and if everyone is not on the same page, things can get lost in translation. It is not that people don’t want to change; they don’t know how to change,” Feloney explained. “So even if a DevOps initiative is being mandated from the top, businesses need to provide the necessary resources and training to deploy it.”

Different teams use different tools

In addition to culture and skills, businesses have to take into account that historically, developers and testers use different tools. According to Feloney, you don’t want to make it harder on teams by forcing them to use tools that don’t suit the way different teams work. Agile testers are going to lean towards tools that work “as code” in the IDE, and are compatible with open source. Traditional testers tend to prefer tools with a UI. The key is to let teams work the way they want to work, but ideally with a single platform that allows teams to collaborate; share aspects like tests, virtual services, analytics, and reports; and ultimately break down the silos so your company no longer has Dev + Ops, but truly has DevOps.

Broadcom recently announced BlazeMeter Continuous Testing Platform, a shift-left testing solution that handles GUI functional, performance, unit testing, 360° API testing, and mock services. It delivers everything you need in a single UI, so organizations can address problems much faster. With support for mock services, developers can write their own virtual, or mock, services and deploy a number of tests to a service without having to change the application. The solution will store those requests and responses so a tester can go in, see what happened, and enhance it. BlazeMeter CT also supports popular open-source tools developers are accustomed to using, such as Apache JMeter, Selenium and Grinder. “You are learning from what the developers have done as opposed to reinventing the wheel,” said Feloney. “BlazeMeter CT enables development and Agile teams to get the testing they need to get done much easier and much faster while allowing the collaboration that DevOps requires.”
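The mock-service idea described above, serving canned responses while recording every request and response for a tester to review later, can be approximated in a few lines. This is a rough in-process sketch with invented routes and payloads, not how BlazeMeter’s mock services are actually built or configured:

```python
class MockService:
    """Stand-in for a real dependency: replays canned responses and
    records traffic so a tester can inspect what happened."""

    def __init__(self, responses):
        self.responses = responses  # route -> canned response
        self.log = []               # recorded (route, payload, response)

    def handle(self, route, payload=None):
        response = self.responses.get(route, {"status": 404})
        self.log.append((route, payload, response))
        return response

# The team under test codes against the mock instead of the live
# billing service, so tests run without touching the real system.
billing = MockService({"/invoice": {"status": 200, "total": 42}})
reply = billing.handle("/invoice", {"customer": "acme"})
```

The recorded log is the part that matters for collaboration: developers define the expected behavior, and testers can replay, audit, and extend it without redoing the developers’ work.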

There is no insight into what is happening

As mentioned earlier, DevOps activities often fail because they take too much time. If teams are unable to find where the bottlenecks are, they are unable to come up with a solution. Having insight into what is happening and how it affects the overall success of the application and business is crucial, according to Uri Scheiner, senior product line manager for Automic Continuous Delivery Director. To this end, Automic Continuous Delivery Director provides a real-time workflow for monitoring and managing features and fixes throughout your entire pipeline. Teams have full visibility into app progress, can easily manage multi-app dependencies, map development efforts to business requirements, and more. Automic Continuous Delivery Director is more than release planning. It’s end-to-end orchestration and pipeline optimization that empowers a culture of shared ownership and helps you deliver higher-quality applications faster.

Businesses are impatient

“You can’t go into DevOps saying it is all or nothing,” Feloney explained. DevOps happens in baby steps and with the understanding that it is going to take time to learn and build upon that learning. “A lot of companies are impatient,” Feloney said. “You have to do this in bite sizes and find a team that is willing to do it, work through the problems, know you are going to fail and work on those failures. Once you get that success, and it will be a success if you have the right mentality, then you can share that — make that team and that project the example — and grow that success across your company.” Learn more at www.blazemeter.com and www.cddirector.io.



The Most Important Tool in DevOps: Value Stream Mapping
SD Times, September 2019, www.sdtimes.com

“We want to change the conversation about tooling in DevOps. Everyone is ‘doing’ DevOps, but only a handful are getting the value they expected. Why? They’re using the wrong tools or applying tools in the wrong ways. The solution is to apply the right tools at the right times, and for the right reasons,” said Marc Rix, SAFe Fellow and Curriculum Product Manager at Scaled Agile. To help organizations achieve the best customer-centric results, Scaled Agile now urges teams to start out with Value Stream mapping in order to get both the business and tech sides fully involved in DevOps. While some engineering teams are achieving technically impressive results doing “pieces” of DevOps, DevOps is about much more than just “Dev” and “Ops,” according to Rix, who joined Scaled Agile in January 2019 after five years of leading large-scale Agile and DevOps transformations. The Scaled Agile Framework (SAFe) is a knowledge base of proven, integrated principles and practices for Lean, Agile, and DevOps, aimed at helping organizations deliver high-quality value to customers in the shortest sustainable lead time. In addition to providing SAFe free of charge, Scaled Agile offers guidance services for implementation and licenses its courseware directly and through third-party partners for coaching customers in SAFe. The SAFe DevOps course also prepares participants for the SAFe DevOps Practitioner (SDP) certification exam. “DevOps technology is really cool. But it’s not for winning science-fair prizes,” quipped Rix. “DevOps is for solving real business problems. It’s our mission to help everyone investing in DevOps to achieve the culture of continuous delivery they’re looking for so they can win in their markets.” Scaled Agile frequently advises customers licensing its courseware to kick

things off by using the Value Stream mapping within SAFe DevOps, focusing on the following three learning objectives.

“How you get work into the deployment pipeline is equally as important as how you move work through the pipeline,” said the Scaled Agile product manager.

Mindset over practices

As Gene Kim and his co-authors pointed out in The DevOps Handbook, “In DevOps, we define the value stream as the process required to turn a business hypothesis into a technology-enabled service that provides value to the customer.” Business value is the ultimate goal of DevOps, and value begins and ends with the customer. DevOps needs to optimize the entire system, not just parts of it. Flow should be Lean across the entire organization, and Value Stream mapping is a Lean tool, said Rix. “DevOps is the result of applying Lean principles to the technology value stream,” attested The DevOps Handbook.

Everyone is essential

“If someone touches the product or influences product delivery in any way, they are involved,” according to Rix. Participation in DevOps shouldn’t be offered on an opt-in/opt-out basis, he added. DevOps must involve both IT leaders and business leaders such as corporate executives, line managers, and department heads, Rix said. Non-technical participants should also include product managers, product owners, program managers, analysts, and Scrum Masters, for example. Technical folks should include testers, architects, and info-security specialists, along with developers and operations engineers. “An IT team could be deploying a hundred times per day, but if their work intake is not connected to the business, the results will not materialize,” observed Mik Kersten in the book Project to Product.

Plan the work, work the plan (together)

By embracing DevOps, organization-wide teams need to face the realities of the current system. Teams should avoid simply “automating for automation’s sake,” or automating a broken system. By mapping and baselining the current system, team members can “think outside the box” and discover the true bottlenecks. Then, they can work together on designing the target-state Value Stream, re-engineering the current system based on business needs, and quantifying the expected benefits. “DevOps then evolves incrementally and systematically, with everyone committed, participating, and learning as one team,” Rix maintained.

Applying Value Stream mapping

The Value Stream mapping exercises in SAFe DevOps facilitate all three learning objectives. Initially, they should be applied to fully understand the current situation, from the customer point of view, align on the problem across all roles in the organization, and identify the right solutions and metrics, Rix said. In the SAFe DevOps experiential class, attendees from throughout the organization use Value Stream mapping to visualize their end-to-end delivery process, pinpoint systemic bottlenecks, and build an action plan around the top three improvement items that will provide the best results in their environment. Learn more at www.scaledagile.com/devops/
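Baselining the current system, as the exercises above describe, boils down to comparing touch time against wait time at each step of the delivery process. A hedged sketch with invented steps and numbers, just to show the arithmetic behind a current-state Value Stream map:

```python
# Hypothetical current-state map: (step, process hours, wait hours)
steps = [
    ("code review",    4, 40),
    ("test env setup", 2, 72),
    ("regression",    16, 24),
    ("approval",       1, 80),
]

total_process = sum(p for _, p, _ in steps)
total_wait = sum(w for _, _, w in steps)
lead_time = total_process + total_wait
activity_ratio = total_process / lead_time   # share of lead time spent working
bottleneck = max(steps, key=lambda s: s[2])  # the longest wait dominates
```

With a map like this, the “top three improvement items” tend to fall out naturally: in this made-up data, less than 10 percent of lead time is actual work, and the approval queue, not any engineering step, is the first thing to attack.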



Is Your Value Stream Optimized?

Achieve customer centricity and fast flow with SAFe® DevOps

Mindset over practices: Business value is the ultimate goal, and value starts and ends with the customer.

Everyone is essential: Technical, non-technical, and leadership roles come together to optimize the end-to-end Value Stream.

Relentlessly improve: Understand your real workflow and bottlenecks. Then design your target-state Value Streams.

Learn more at scaledagile.com/devops

© Scaled Agile, Inc.





Avoiding The Hidden Costs of Continuous Integration

In 2018, 38 percent of infrastructure decision-makers that implemented DevOps and automated their continuous deployment and release automation efforts saw revenue growth of 10 percent or more over the prior year. In contrast, only 25 percent of those that had not adopted DevOps reported comparable growth, according to Forrester Research, Inc. Continuous integration and continuous delivery give teams faster feedback, higher confidence in their code, and the agility that can provide a competitive advantage.

The overlooked reality is that implementing and managing a CI/CD platform for any reasonably sized organization can tally up to huge expenses in training, operations, and rollout. These costs exist whether you’re leveraging a SaaS or running it yourself. Several key factors in CI/CD implementation and management will keep costs in check and help teams optimize their software delivery, dramatically increasing the value they get out of CI. There are also key strategies that shorten the time to value for teams embarking on new or updated CI/CD pipelines. Here are the hidden costs to watch out for, some tradeoffs worth thinking about, and ways to mitigate common pitfalls to optimize your CI/CD expenditure.

Reduce people spend

DevOps teams are expensive, and often include some of the most knowledgeable people in a company when it comes to the intricacies of your software and infrastructure. Despite this wealth of knowledge, many of these specialists spend their days configuring delivery pipelines for other developers on the team to use. “When we're talking about people spend, that's obviously a very sensitive subject and so, I think the point here is really that it’s lovely to reallocate your people spend to the areas that it can have the most impact,” says Edward Webb, Director of Solutions Engineering at CircleCI. Find a CI/CD vendor that can abstract common operational concerns, so that these folks can be freed to work on higher-leverage projects. Although your team might only have a few designated DevOps engineers, consider the opportunity costs of what they could be working on instead.

Leverage SaaS intelligently

People talk about SaaS offerings and working in the cloud as being much more cost-effective, but that isn't automatically the case. Webb points out, “Teams don't end up seeing cost savings when they try to follow this paradigm of lift and shift. You can’t continue doing things the same way you've always done them, in the cloud, and save money. That means, in order to achieve the cost savings, you need to find ways to reinvest the people.” Organizations transitioning to the cloud should seize the opportunity to take a close look at their pipeline for wholesale opportunities for improvement, rather than just migrating a legacy setup to new infrastructure.

Reduce infrastructure spend

According to Webb, “Servers individually don't cost a ton of money, but when you look at them in aggregate, having a large number of servers that run day in and day out over the course of days, months, and years, those actually add up to be pretty substantial.” If you have teams writing in multiple languages, or different versions of the same language, a common practice is to maintain separate servers running each language or language version. The result is that maintaining a large fleet of heterogeneous CI agents can be prohibitively expensive from a pure infrastructure cost perspective. Instead, consider running jobs within isolated containers, leveraging commodity compute power. This gives teams the ability to define exactly what languages and frameworks they need as they need them, without carrying the overhead of pre-provisioning the maximum number and variety of machines. On top of the effort saved by not maintaining a huge fleet of designated servers, you’ll also be able to run more work on fewer servers total, saving infrastructure costs.

Increase agility and speed

By working with systems that shift configuration tasks and provisioning away from specialized DevOps engineers to the developers working closest to the code, you’ll save infrastructure cost and optimize your people spend. This shift can be a huge cost savings, but it means that less specialized team members are taking on responsibility for operations. It will require a CI/CD platform that is intuitive to your application teams, and that limits the amount of product- or domain-specific knowledge they must absorb to get started. Finding a platform that leverages a simple configuration format like YAML or HCL over a custom DSL can reduce this burden. Bonus points if you find a provider that lets teams replicate best practices and patterns through shareable configuration. Once teams have the autonomy to update pipelines on their own, you will start to see a multiplier effect of increased agility, iteration, and responsiveness across the entire organization. You will have removed the bottlenecks to experimentation and making change, which means the team will learn faster in service of gaining a crucial competitive advantage. While getting started with CI/CD can seem daunting, the benefit of implementing it is worth the effort. Ultimately, the biggest cost is the cost of not taking action at all.
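The container-per-job model described here is usually expressed as config-as-code. A hedged sketch in CircleCI-style YAML follows; the job names and dependency steps are illustrative, not a definitive setup:

```yaml
# .circleci/config.yml (illustrative sketch)
version: 2.1
jobs:
  test-python:
    docker:
      - image: cimg/python:3.10   # job-scoped container, no dedicated server
    steps:
      - checkout
      - run: pip install -r requirements.txt
      - run: pytest
  test-node:
    docker:
      - image: cimg/node:18.17    # a second language on the same commodity compute
    steps:
      - checkout
      - run: npm ci && npm test
workflows:
  build-and-test:
    jobs:
      - test-python
      - test-node
```

Each job declares exactly the runtime it needs, so there is no fleet of per-language build servers to pre-provision or maintain.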



DevOpsShowcase2019.qxp_Layout 1 8/28/19 1:18 PM Page 14


Bringing Rich Communication Experiences Where They Mattermost

SD Times, September 2019, www.sdtimes.com

One of the basic premises of DevOps is to break down barriers between development and IT teams that historically functioned in relative silos. Collaboration is the core tenet that makes DevOps work. As teams start to collaborate in real time, the idea is they will see fewer errors and more opportunities for innovation. Getting teams to work together is entirely different from getting teams to simply share status updates with each other.

Corey Hulen, CTO and co-founder of the open source messaging and collaboration platform Mattermost, explained that developer and operations teams have silos of information where they are looking at different metrics, reports and log files, and sometimes monitoring completely different things. To gain the true value of DevOps, they need a common space where they can not only easily connect with each other (chat, video and more), but also share files, systems and workflows.

Mattermost provides a central communications hub where everyone in an organization can come together, share updates and critical messages, work together to resolve incidents and outages, integrate DevOps tools and create a single shared view of ‘all the things.’ Its notifications hub keeps everyone updated and on the same page, and social coding features enable teams to collaborate on code snippets. Integrations with popular tools like Jenkins, Jira, GitHub, Bitbucket, GitLab and PagerDuty allow teams to see all of the real-time notifications from across the development lifecycle — all without having to log in to each of the systems. The information gets placed in private or shared channels (depending on a team’s security needs) where developers, operations and even non-technical team members can participate in the conversation and work together, with everyone having the same access to information.

Sometimes, what ends up happening is there will be a team collaborating on a performance issue or system outage in the “war room channel,” and someone unexpected is listening in who has a solution to the problem, Hulen explained. “I always describe it as a cooperative board game. You are all cooperatively trying to solve this problem, and every person brings a unique piece to the puzzle to solve it,” he said. “This really can only happen when everyone can see all of the conversations, files and data that are relevant to the issue.”

Making sure the conversation isn’t too loud

Starting out, the ability to freely communicate is a very valuable experience, but as the conversations continue to build up, it can start to feel like information overload. Mattermost enables bot integrations so teams can work better and faster. Some of the bots include the ability to monitor and debug clusters, receive best practices, respond to messages, and send notifications. Webhooks can be added to post messages to public, private and direct channels. Additionally, organizations can remove some of the cruft by removing certain bots and webhooks that they find aren’t providing valuable information over time, or by moving them to another channel so they don’t block productivity. Teams can also use multiple channels to put all the webhook and bot information in one channel and then have a secondary channel to interact with all that information.

“Mattermost has taken an approach where we’ve built a rock-solid platform and we give developers the API and many integration options so they can extend the platform to fit their needs. We’ve found developers, in particular, really appreciate the ability to customize their collaboration tools.”
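As a concrete illustration of the webhook integration described above: a Mattermost incoming webhook accepts a plain HTTP POST with a JSON body. This minimal sketch builds such a payload; the helper function is ours, and the webhook URL would be the one Mattermost generates when the integration is created:

```python
import json

def build_webhook_payload(text, channel=None, username=None):
    """Build the JSON body for a Mattermost incoming-webhook post.

    `text` is the message; `channel` and `username` optionally
    override the webhook's defaults. Only keys that are set are sent.
    """
    payload = {"text": text}
    if channel:
        payload["channel"] = channel
    if username:
        payload["username"] = username
    return json.dumps(payload)

# Sending it is a single HTTP POST of this JSON to the webhook URL, e.g.:
#   requests.post(WEBHOOK_URL, data=build_webhook_payload("Build #42 failed"))
# where WEBHOOK_URL is a placeholder for your generated integration URL.
```

Because the contract is just JSON over HTTP, any CI job, monitoring alert or script can post into a channel with a few lines of code.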

Staying connected

Remote work is another reason why a central communication hub is essential to a business. According to Hulen, Mattermost itself is a remote-first company, and having the messaging platform enables people to stay connected and feel part of the team. Video conferencing plugins for Zoom, Skype, BigBlueButton and other popular services enable teams to have face-to-face meetings. Mattermost also supports voice and screen-sharing capabilities to expand remote teams’ ability to work together. In addition, channel-based or topic-based communication features become an essential communication tool. “When you are in a remote environment, those types of channel-based systems really take off and lend to that experience. You may have really needy channels where people are doing real work or monitoring outages, but you can also have some fun social channels,” Hulen said. “Minor features” like emoji reactions or Giphy integration enable team members to convey emotion and have some fun with their posts. “Those things really make remote culture thrive,” said Hulen. “It is about keeping that human connection beyond work.” Learn more at https://mattermost.com/





Automated APM for DevOps. Turbocharge your CI/CD pipeline.

Try it for yourself: www.instana.com/sdtimes-free-trial



Instana Monitoring at DevOps Speed

There is no one-size-fits-all approach when it comes to successfully implementing DevOps, but there are some concrete methods you need in place to help get you there. The “2019 Accelerate State of DevOps” report found that efforts like automation and monitoring can help organizations improve the speed of software delivery and create value. “For example, teams can monitor their own code, but will not see full benefits if both application and infrastructure are not monitored and used to make decisions,” the report stated.

Kevin Crawley, DevOps evangelist for the APM company Instana, added that deployment frequency, lead time for changes, time to restore services, and change fail rate are leading indicators of how mature a given company’s software development life cycle is. Crawley explained that successful CI/CD or DevOps cannot happen without monitoring and observability in place. Without it, rapid introduction of performance problems and errors, new endpoints causing monitoring issues, and lengthy root cause analysis will occur as the number of services expands. “In order to successfully have a true continuous integration pipeline where you are continuously deploying changes, you have to have solid monitoring around those systems so you can understand when you do push a change that does break something, you have immediate feedback,” he said in a recent webinar.

The problem is that most traditional monitoring tools require manual effort for many tasks, such as:
- writing and configuring data collectors
- instrumenting code for tracing
- discovering dependencies
- deciding how to create data
- building dashboards to visualize correlation
- configuring alerting rules and thresholds
- building data collection to store metrics

All of this can be quite a bit of work and very time consuming, Crawley explained. The loop of CI/CD and DevOps is a never-ending delivery process, and any manual steps will slow teams down. “Without monitoring, the SRE or DevOps teams really have no visibility into operations and how the application is performing in production,” he said. “In this world where we talk about continuous integration and continuous deployments, manual steps really prevent the velocity and the speed that your organization needs to get software out the door.”

When evaluating how an organization's monitoring solutions are working, some of the questions teams need to ask are: how many services are they capable of monitoring, and what are they collecting from that monitoring? “There are a lot of questions we can ask here that will give you an idea of how much value Instana can bring to your operation teams,” he added. “How much effort are the engineers spending to build this visibility, and if you are using an APM tool, what are you using and does it help you automate some of these steps?”

Crawley went on to say, “If you don’t have automation in your monitoring, you likely won’t have good visibility, and therefore you won’t have the velocity needed to get new services and new changes confidently out the door. What this ends up resulting in is unhappy customers and loss of revenue. Without having an automated monitoring solution, you are left only with limited visibility and turtle-pace speed.”

When looking for a tool to automate as much as possible, organizations should look for automated monitoring solutions that provide zero or minimal configuration for the automatic discovery of infrastructure and software components, automatic instrumentation and tracing of every component, pre-existing alerts, and high-resolution metrics and analytics. “With our solution, all you will need to do is install a single agent per virtual host and Instana will continually discover every technology. It will automatically collect the metrics and the traces for every app request, and will automatically map all the dependencies so that when an issue does occur we can correlate that issue back to a root cause or a service which initiated that issue,” said Crawley.

“At the end of the day, what we determined is dynamic applications need automatic monitoring, and what that ultimately translates to is that we need the ability to automatically detect technology as it is deployed and/or scaled. We will need to automatically capture time-series metrics and automatically capture distributed traces between your services, and then we also need to utilize machine learning to analyze all that data and give you actionable insights for your environment.”

Instana can also help DevOps teams use AI techniques to identify and resolve performance issues, achieve zero-effort monitoring for every service's health and availability, and accelerate delivery through automatic observability and analysis. “We don’t want to slow you down, we want to let the Instana robot do all the work,” said Crawley. Learn more at www.instana.com.
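Two of the maturity indicators Crawley names, change fail rate and deployment frequency, reduce to simple arithmetic over a deployment log. A minimal sketch under an assumed record format (the field names are ours, not Instana's):

```python
def change_failure_rate(deploys):
    """Fraction of deployments that caused a failure.

    `deploys` is a list of dicts with a boolean 'failed' key
    (an illustrative schema, not any vendor's data model).
    """
    if not deploys:
        return 0.0
    return sum(1 for d in deploys if d["failed"]) / len(deploys)

def deployment_frequency(deploys, window_days):
    """Average deployments per day over the observation window."""
    return len(deploys) / window_days

# A week with four deploys, one of which failed:
deploys = [{"failed": False}, {"failed": True},
           {"failed": False}, {"failed": False}]
print(change_failure_rate(deploys))        # 0.25
print(deployment_frequency(deploys, 7))
```

The hard part in practice is not this arithmetic but collecting the deploy and incident events automatically, which is exactly the gap the monitoring automation discussed above is meant to close.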



FEATURED COMPANIES

Broadcom: With an integrated portfolio spanning the complete DevOps toolchain from planning to performance, Broadcom delivers the tools and expertise to help companies achieve DevOps success on platforms from mobile to mainframe. We are driving innovation with the BlazeMeter Continuous Testing Platform, Intelligent Pipeline from Automic, Mainframe DevOps with Zowe, and more.

CircleCI: The company offers a continuous integration and continuous delivery platform that helps software teams work smarter, faster. CircleCI helps teams shorten feedback loops, and gives them the confidence to iterate, automate, and ship often without breaking anything. CircleCI builds world-class CI/CD so teams can focus on what matters: building great products and services.

Instana: Agile continuous deployment practices create constant change. Instana automatically and continuously aligns to every change. Instana’s APM platform delivers actionable information in seconds, not minutes, allowing you to operate at the speed of CI/CD. AI-powered APM delivers the intelligent analysis and actionable information required to keep your applications healthy.

Mattermost: The open-source messaging platform built for DevOps teams. Its on-premises and private cloud deployment provides the autonomy and control teams need to be more productive while meeting the requirements of IT and security. Organizations use Mattermost to automate workflows, streamline coordination, and increase organizational agility. It maximizes efficiency by making information easier to find and increases the value of existing software and data by integrating with other tools and systems.

Redgate: Its SQL Toolbelt integrates database development into DevOps software delivery, plugging into and integrating with the infrastructure already in place for applications. It helps companies take a compliant DevOps approach by standardizing team-based development, automating database deployments, and monitoring performance and availability. With data privacy concerns entering the picture, its SQL Provision solution also helps to mask and provision database copies for use in development so that data is preserved and protected in every environment.

Scaled Agile: To compete, every organization needs to deliver valuable technology solutions. This requires a shared DevOps mindset among everyone needed to define, build, test, deploy, and release software-driven systems. SAFe DevOps helps people across technical, non-technical, and leadership roles work together to optimize their end-to-end value stream. Map your current-state value stream from concept to cash, identify major bottlenecks to flow, and build a plan that will accelerate the benefits of DevOps in your organization.

Tasktop: Transforming the way software is built and delivered, Tasktop’s unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop’s pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.

Atlassian: Atlassian offers cloud and on-premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. For cloud customers, Bitbucket Pipelines offers a modern Continuous Delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud.

Appvance: The Appvance IQ solution is an AI-driven, unified test automation system designed to provide test creation and test execution capabilities. It plugs directly into popular DevOps tools such as Chef, CircleCI, Jenkins, and Bamboo.

Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.

CloudBees: CloudBees is the hub of enterprise Jenkins and DevOps. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support. The company also provides CloudBees DevOptics for visibility and insights into the software delivery pipeline.

CollabNet VersionOne: CollabNet VersionOne’s Continuum product brings automation to DevOps with performance management, value stream orchestration, release automation, and compliance and audit capabilities. In addition, users can connect to DevOps tools such as Jenkins, AWS, Chef, Selenium, Subversion, Jira and Docker.

Compuware: Our products fit into a unified DevOps toolchain enabling cross-platform teams to manage mainframe applications, data and operations with one process, one culture and with leading tools of choice. With a mainstreamed mainframe, any developer can build, analyze, test, deploy and manage COBOL applications.

Datical: Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process.

New Relic: Its comprehensive SaaS-based solution provides one powerful interface for Web and native mobile applications, and it consolidates the performance-monitoring data for any chosen technology in your environment. It offers code-level visibility for applications in production across six languages (Java, .NET, Ruby, Python, PHP and Node.js), and more than 60 frameworks are supported.

Dynatrace: Dynatrace provides the industry’s only AI-powered application monitoring. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.

GitLab: GitLab aims to tackle the entire DevOps lifecycle by enabling Concurrent DevOps, a new vision for how we think about creating and shipping software. It unlocks organizations from the constraints of the toolchain and allows for better visibility, opportunities to contribute earlier, and the freedom to work asynchronously.

JFrog: JFrog’s four products, JFrog Artifactory, the Universal Artifact Repository; JFrog Bintray, the Universal Distribution Platform; JFrog Mission Control, for Universal DevOps Flow Management; and JFrog Xray, the Universal Component Analyzer, are available as open-source, on-premise and SaaS cloud solutions.

JetBrains: TeamCity is a Continuous Integration and Delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services.

Micro Focus: Continuous Delivery and Deployment are essential elements of the company’s DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model.

Microsoft: Microsoft Azure DevOps is a suite of DevOps tools that help teams collaborate to deliver high-quality solutions faster. The solution features Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping.

Neotys: Neotys is the leading innovator in Continuous Performance Validation for Web and mobile applications. Neotys load testing (NeoLoad) and performance-monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders.

OpenMake: OpenMake builds scalable Agile DevOps solutions to help solve continuous delivery problems. DeployHub Pro tackles traditional software deployment challenges with safe, agentless software release automation to help users realize the full benefits of Agile DevOps and CD. Meister build automation accelerates compilation of binaries to match the iterative and adaptive methods of Agile DevOps.

Perfecto: A Perforce company, Perfecto enables exceptional digital experiences and helps you strengthen every interaction with a quality-first approach for web and native apps through a cloud-based test environment called the Smart Testing Lab. The lab is comprised of real devices and real end-user conditions, giving you the truest test environment available.

Puppet: Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what’s happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.

Rogue Wave Software by Perforce: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk.

Sauce Labs: Sauce Labs provides the world’s largest cloud-based platform for automated testing of Web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.

SOASTA: SOASTA, now part of Akamai, is the leader in performance measurement and analytics. The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices — in real time, and at scale.

TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite includes these technologies.

Tricentis: Tricentis Tosca is a Continuous Testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry’s most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+), enabling them to deliver the fast feedback required for Agile and DevOps.

XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases.

