MAKING THE CASE FOR ON-PREM
CLOUD NATIVE MAY GET THE HEADLINES BUT TECHNOLOGISTS CAN’T OVERLOOK THE NEED TO OPTIMIZE PERFORMANCE WITHIN ON-PREMISE ENVIRONMENTS, WRITES
GREGG OSTROWSKI, CTO ADVISOR, CISCO APPDYNAMICS
For all of the focus on cloud native technologies over the last couple of years, it’s sometimes easy to forget that many organisations continue to run most of their IT estate on-premise. And this is likely to remain the situation for some years to come.
Across all industries, IT departments have embraced no-code and low-code platforms to accelerate release velocity in response to rapidly changing customer and employee needs. Modern application stacks provide organisations with speed, agility and resilience, and undoubtedly represent the future of innovation.
However, the reality is that the transition to cloud native technologies isn’t going to happen overnight and, in some cases, it won’t happen at all due to the unique sensitivities of the data organisations manage. This is why it’s essential for IT teams to ensure they have the right tools and insights to oversee and optimise availability and performance within traditional, on-premise environments.
On-premise computing isn’t going anywhere
Government is perhaps the most obvious example of a sector that will continue to rely on traditional, on-premise computing, as strict requirements mean agencies must run air-gapped environments with no access to the internet. Beyond the public sector, industries such as banking and insurance need to negotiate major data sovereignty issues: organisations must ensure that customer data resides within the borders of the country where they operate. As recent cases have shown, those who fail to comply with these rules face significant fines and reputational damage.
However, it isn’t just regulation that is driving organisations to maintain their on-premise environments. Some business and IT leaders also favour the additional control that on-premise provides in comparison to the cloud, with complete visibility of where data sits and the ability to handle their own upgrades within their own four walls.
This is particularly the case for major global brands that possess large volumes of sensitive intellectual property (IP) — think big tech and semiconductor companies — and are choosing not to place their data in the cloud. They simply don’t want to take the risk of sharing or storing their IP outside of their organisation. So, for all of the hype surrounding the move to cloud native technologies, there is always going to be a need for some business-critical applications to remain on-premise.
This is also causing some organisations to rethink their cloud migration strategies. With the economic slowdown continuing to impact businesses, and concerns about the rising costs of cloud computing, IT leaders are making difficult decisions about how and what they migrate to the cloud in order to keep costs in line. Increasingly, we’re seeing organisations moving to the cloud at their own speed, rather than with the haste we saw throughout 2020 and 2021 in response to the pandemic.
Evidently then, there are and will continue to be huge numbers of organisations that still need to manage and optimise legacy applications and infrastructure, whether that is solely on-premise or within a wider hybrid environment.
Managing scale and speed in an on-premise environment
One of the major benefits of cloud computing is that it allows organisations to automatically and dynamically scale their use of IT, with minimal or zero human input. In an on-premise environment, however, organisations need to manage scale and speed themselves.
This is particularly challenging when there are pronounced fluctuations in demand. Within most sectors, there are spiking events at certain times of the year. An obvious example is retail, with big shopping days such as Yellow/White Friday and Cyber Monday. And the impact of these events extends to other industries such as financial services, where banks see a massive spike in payment transactions.
IT teams need to be prepared for these fluctuations in demand, ensuring their on-premise applications and infrastructure are able to handle major spikes. They simply cannot afford any disruption or downtime, otherwise they risk losing customers, reputation and revenue.
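To make that preparedness concrete, here is a minimal, hypothetical sketch of the kind of back-of-the-envelope capacity check a team might run ahead of a known spiking event. The figures, function names and headroom factor are illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch: checking whether provisioned on-premise capacity can
# absorb a forecast demand spike. All numbers and names are illustrative.

def required_capacity(baseline_tps: float, peak_multiplier: float, headroom: float = 0.3) -> float:
    """Estimated transactions per second the estate must sustain at peak,
    including a safety headroom on top of the expected spike."""
    return baseline_tps * peak_multiplier * (1 + headroom)

def can_absorb_spike(provisioned_tps: float, baseline_tps: float, peak_multiplier: float) -> bool:
    """True if currently provisioned capacity covers the forecast peak."""
    return provisioned_tps >= required_capacity(baseline_tps, peak_multiplier)

if __name__ == "__main__":
    # Example: an average day runs at 1,200 tps; past events have peaked at 4x.
    baseline, multiplier, provisioned = 1_200.0, 4.0, 5_500.0
    print(f"Required at peak: {required_capacity(baseline, multiplier):,.0f} tps")
    print("Spike covered:", can_absorb_spike(provisioned, baseline, multiplier))
```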
IT teams need unified visibility across all IT environments
In order to always deliver seamless digital experiences, technologists need real-time visibility into IT availability and performance, up and down the IT stack, from customer-facing applications right through to core infrastructure. This allows technologists to quickly identify the causes and locations of incidents and degraded performance, rather than playing catch-up and spending valuable time trying to understand an issue.
Crucially, technologists need to be able to correlate IT data with real-time business metrics so they can identify those issues which have the potential to do most damage to end user experience. And when it comes to managing surges in demand, they need tools which provide dynamic baselining capabilities to trigger additional capacity within their hyperscale environment. This takes the pressure off IT teams during the busiest times of the year.
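As an illustration of what dynamic baselining means in practice, the following is a minimal sketch assuming a simple rolling mean-and-deviation model; real observability platforms use far more sophisticated, seasonal baselines, and the class name, window size and threshold here are hypothetical.

```python
# Minimal sketch of dynamic baselining: learn a rolling baseline for a metric
# and flag when an observation breaches it, which could be used as a signal
# to trigger additional capacity.

from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    def __init__(self, window: int = 60, sensitivity: float = 3.0):
        self.samples = deque(maxlen=window)   # recent metric values, e.g. requests/sec
        self.sensitivity = sensitivity        # deviations above normal that count as a breach

    def observe(self, value: float) -> bool:
        """Record a sample and return True if it breaches the current baseline."""
        breach = False
        if len(self.samples) >= 2:
            baseline = mean(self.samples)
            threshold = baseline + self.sensitivity * stdev(self.samples)
            breach = value > threshold
        self.samples.append(value)
        return breach

baseline = DynamicBaseline(window=30)
for rps in [100, 104, 98, 101, 99, 103, 100, 350]:   # last value simulates a spike
    if baseline.observe(rps):
        print(f"{rps} rps breaches the learned baseline -- trigger additional capacity")
```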
Increasingly, as organisations move to hybrid environments, with application components running across cloud and on-premise environments, IT teams need to ensure they have unified visibility across their entire IT estate. They need an observability platform which provides flexibility to span across both cloud native and on-premise environments — with telemetry data from cloud native environments and agent-based entities within legacy applications being ingested into the same platform. This unified visibility and insight are crucial for technologists to cut through complexity and manage soaring volumes of data.
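To illustrate what that unified ingestion can look like at the data level, here is a hedged sketch in which an OpenTelemetry-style data point and a legacy agent payload are normalised into one common record; all field names and payload shapes are hypothetical, not a real vendor schema.

```python
# Hypothetical sketch: telemetry from a cloud native (OpenTelemetry-style)
# source and from a traditional agent-based source normalised into one
# common record, so both land in the same platform and can be correlated.

from dataclasses import dataclass

@dataclass
class UnifiedMetric:
    source: str        # "otel" or "agent"
    service: str
    metric: str
    value: float
    timestamp_ms: int

def from_otel(point: dict) -> UnifiedMetric:
    """Map a simplified OTLP-style data point to the unified model."""
    return UnifiedMetric("otel", point["resource"]["service.name"],
                         point["name"], point["value"], point["time_unix_ms"])

def from_agent(event: dict) -> UnifiedMetric:
    """Map a simplified legacy agent payload to the same model."""
    return UnifiedMetric("agent", event["tier"], event["metricPath"],
                         event["metricValue"], event["eventTimeMs"])

stream = [
    from_otel({"resource": {"service.name": "checkout-api"}, "name": "http.server.duration",
               "value": 182.0, "time_unix_ms": 1700000000000}),
    from_agent({"tier": "order-db", "metricPath": "Average Response Time (ms)",
                "metricValue": 240.0, "eventTimeMs": 1700000000500}),
]
for record in stream:
    print(record)
```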
Currently, most IT departments deploy separate tools to monitor on-premise and cloud applications, which means they have no clear line of sight of the entire application path across hybrid environments. They are forced to work in a split-screen mode, unable to see the complete path up and down the application stack. As a result, troubleshooting becomes incredibly difficult, and MTTR and MTTX inevitably go up.
While the shift to modern application stacks will undoubtedly grow in popularity, the reality is that some organisations will continue to run most of their IT estate on-premise for some years to come. And within some industries, the shift may never happen at all.
Organisations, therefore, need to keep one eye on the present, rather than focusing all of their attention on the future. They have to ensure that they have the tools and insights they need to optimise availability and performance at all times, and the capabilities to predict and respond to spikes in demand.
Crucially, where organisations are running applications which span both on-premise and cloud native environments, they need to adopt a hybrid observability strategy, correlating new telemetry data with the applications they have already instrumented through traditional agent-based monitoring. This unified visibility across on-premise and cloud environments will provide the foundation for seamless digital experiences at all times, and ensure that technologists are able to support their organisations’ moves to the cloud, as, when and if they happen.