
SAVING THE PLANET WITH TECHNOLOGY

SUSTAINABLE CLOUD-DRIVEN INNOVATION

SUSTAINABLE IT IS A KEY PRIORITY FOR CIOS TODAY. ABDUL RAHMAN AL THEHAIBAN, MANAGING DIRECTOR, TURKEY, MIDDLE EAST & AFRICA, GOOGLE CLOUD, EXPLAINS HOW THE COMPANY IS REDUCING ENVIRONMENTAL IMPACT WITH THE CLEANEST CLOUD IN THE INDUSTRY.


Why should CIOs and CTOs care about sustainability?

Today, sustainability is a top priority for almost all businesses, and rightly so. Multinational corporations account for 20% of all global CO2 emissions, and executives want to make an impact in this area. Not only that, but it is clear customers care about sustainability as well. According to research from the National Retail Federation and IBM, 57% of consumers are willing to change their spending habits if it means reducing their environmental impact. Organisations are also increasingly finding that sustainability efforts make business sense. According to a study by the United Nations Global Compact and Accenture, between 2013 and 2019, companies with consistently high environmental, social and governance (ESG) performance enjoyed 4.7x higher operating margins and lower volatility than low ESG performers over the same period.

Organisations want to support a cleaner, healthier, more sustainable world and look for tools, solutions, and new technologies to help them. Cloud providers will play a critical role in helping organisations achieve these goals - 75% of global IT leaders say sustainability is important when selecting a cloud provider, with 51% stating it is a major consideration and an additional 24% stating it is a must-have.

What sustainability initiatives are you embracing throughout 2021?

Sustainability objectives are, and always have been, central to every area of Google’s operations. In 2007, we were the first major company to become carbon neutral. Today, we have matched 100% of our electricity consumption with renewable energy purchases on an annual and global basis - what is often referred to in the market as a 100% renewable energy target.

We remain the largest annual corporate buyer of renewable energy and the only major cloud provider to purchase enough renewable energy to match our electricity consumption, a feat we achieved for the fourth year running in 2021. Moreover, 90% of Google Cloud waste is currently diverted from landfills, and we have achieved a Power Usage Effectiveness (PUE) of 1.10, compared to the global average of 1.58. PUE is the ratio of total facility energy to the energy delivered to computing equipment, so a PUE of 1.10 means only about 10% of the energy entering our data centres goes to overheads such as cooling and power distribution.

Through innovations in artificial intelligence and machine learning, our data centres are twice as energy efficient as an average enterprise data centre. We’ve also been able to cut the energy used to cool our data centres by 30%. For necessary energy consumption, our engineers have designed a first-of-its-kind system to shift the timing of non-urgent customer compute tasks to when carbon-free power sources are most plentiful, optimising hour-by-hour guidelines to increase the level of lower-carbon energy consumed. We’re also introducing data-driven decarbonisation and deploying sustainability bonds. We recently announced our most ambitious sustainability goal yet: by 2030, we intend to be the first major company to operate on carbon-free energy, in all locations, at every hour of the day. Of course, this is far more challenging than the traditional approach of matching global annual energy usage with renewable energy, but we’re working on getting this done by 2030.
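The core idea behind that time-shifting system can be illustrated with a short sketch. This is not Google's internal scheduler; it is a minimal illustration, assuming a hypothetical 24-hour forecast of carbon-free energy (CFE) percentages, that picks the greenest window for a deferrable batch job.

```python
# Minimal sketch of carbon-aware scheduling (illustrative only; not
# Google's actual system). Assumes a hypothetical 24-hour forecast of
# carbon-free energy (CFE) percentages for a data centre region.

def best_start_hour(cfe_forecast: list[float], job_hours: int) -> int:
    """Return the start hour whose window has the highest average CFE%."""
    best_hour, best_avg = 0, -1.0
    for start in range(len(cfe_forecast) - job_hours + 1):
        window = cfe_forecast[start:start + job_hours]
        avg = sum(window) / job_hours
        if avg > best_avg:
            best_hour, best_avg = start, avg
    return best_hour

# Hypothetical forecast: solar generation pushes CFE% up around midday.
forecast = [31, 30, 29, 28, 30, 35, 42, 55, 68, 79, 86, 91,
            93, 92, 88, 80, 70, 58, 47, 40, 36, 34, 33, 32]

start = best_start_hour(forecast, job_hours=3)
print(f"Run the 3-hour batch job starting at hour {start}")  # hour 11
```

A non-urgent job that would have run overnight on mostly fossil-fuelled power instead runs when carbon-free sources are most plentiful, with no change to the work itself.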

How can customers work with Google Cloud to help realise their own sustainability goals?

Cloud technology presents a tremendous opportunity to accelerate meaningful and positive environmental change. Public cloud technology is leading efforts to streamline and decarbonise business processes, but the drive for sustainability within technology is far from over. From reducing the emissions of digital applications and infrastructure to getting smarter about how you source and trace materials, the technologies available on Google Cloud can help organisations achieve their own sustainability goals.

We make sure our customers can use our technology and information to help build their own climate action plans. We work alongside our customers to pinpoint their unique opportunities for becoming increasingly sustainable and how they can use our technologies at every stage of the supply chain. To help our customers decarbonise the electricity consumed by their cloud applications, we share the average hourly Carbon-Free Energy Percentage (CFE%) for most of our Google Cloud regions. In addition, we offer a tool providing this data - a Google Cloud region picker - that helps customers assess key inputs like price, latency to their end users and carbon footprint as they choose which Google Cloud region to run on (a simplified sketch of this trade-off appears after this answer).

Customers can also use products like BigQuery and Vision AI to manage their data in real time and glean insights that can help them decarbonise their operations, improve defect detection accuracy, eliminate waste, reduce production time, and increase customer satisfaction.

Google Cloud’s partnership with consumer goods brand Unilever is just one example of how organisations can build their own plan. Announced in September last year, the partnership will combine the power of cloud computing, satellite imagery, and AI to build a more complete picture of the ecosystems intersecting with Unilever’s supply chain. In providing a more holistic view of these environments, we hope to raise sustainable sourcing standards and directly support Unilever’s existing work with other technology partners to achieve a deforestation-free supply chain by 2023.
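The sketch below shows the kind of trade-off the region picker helps with. The regions, figures, and weights are all hypothetical, and this is not the actual tool's logic; it simply scores candidate regions on price, latency, and CFE%.

```python
# Illustrative sketch of weighing price, latency, and carbon-free energy
# (CFE) when choosing a cloud region. Regions, figures, and weights are
# hypothetical; the real Google Cloud region picker works differently.

regions = {
    # region: {"price": relative cost, "latency": ms, "cfe": percent}
    "region-a": {"price": 1.00, "latency": 20, "cfe": 90},
    "region-b": {"price": 0.85, "latency": 60, "cfe": 55},
    "region-c": {"price": 0.95, "latency": 35, "cfe": 75},
}

WEIGHTS = {"price": 0.3, "latency": 0.3, "cfe": 0.4}

def score(r: dict) -> float:
    """Higher is better. Price and latency are lower-is-better, so each
    is scored relative to the best (lowest) value among all regions."""
    best_price = min(v["price"] for v in regions.values())
    best_latency = min(v["latency"] for v in regions.values())
    return (WEIGHTS["price"] * best_price / r["price"]
            + WEIGHTS["latency"] * best_latency / r["latency"]
            + WEIGHTS["cfe"] * r["cfe"] / 100)

best = max(regions, key=lambda name: score(regions[name]))
print(f"Pick {best}")  # region-a wins on latency and CFE% despite its price
```

Shifting the weights changes the answer, which is the point: a latency-critical workload and a carbon-sensitive batch workload may belong in different regions.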

As we enter our most ambitious decade yet, collaborative innovation will be at the forefront of our sustainability efforts at Google Cloud. We will work to empower every one of our partners, whether businesses, governments, or individuals, to achieve a more sustainable tomorrow.

THE WHY AND HOW OF LINUX PATCHING

TAREK NAJA, SOLUTION ARCHITECT – MIDDLE EAST, QUALYS, ON HOW TO KEEP YOUR LINUX SYSTEMS SECURE

Since the region’s governments initiated their economic-diversification initiatives, Middle East enterprises have been digitising at a robust pace, putting them squarely in the crosshairs of cybercriminals. But when COVID-19 struck, and businesses and governments flocked to the cloud for its promise of continuity, things got worse. In the UAE, for example, the nation’s top cybersecurity official revealed a 250% increase in attacks from 2019 to 2020. This is what bad actors do. They take advantage of circumstances, any circumstances, to pounce.

And what a circumstance the pandemic turned out to be for digital malefactors. To settle quickly into their new homes in the cloud, regional organisations had to accept new, untested ecosystems. Multiple network domains that fell outside the control of IT, coupled with a mushrooming of Shadow IT, dumped alien environments on the heads of thousands of under-resourced tech teams.

Among the many bugbears resulting from this technology sprawl was the issue of unpatched vulnerabilities. Their management is a key focus for IT and security professionals tasked with protecting their infrastructures from incursion. In the world of cloud, at a global level, much of this effort goes into ensuring that Linux-based ecosystems are tended to appropriately. The open-source OS accounts for the lion’s share of public cloud infrastructure (nine of the top 10 public clouds, according to the Linux Foundation). And considering that its kernel is the heart of Android, Linux can also be found in 82% of the world’s smartphones.

Not invulnerable

While Linux stacks up well against other operating systems for security, it has its vulnerabilities. And these need to be managed. Prompt reaction to known, fixable issues is the hallmark of sound cybersecurity, but because of its non-proprietary nature, Linux does not have a Patch Tuesday. Instead, a global community of vendors, White Hats and freelance coders discovers issues for itself and shares them freely with others.

While this open community system is worthy of much praise, it does have some drawbacks. The greatest of these is the widespread assumption that Linux is secure. While the attack stats show it as a less frequent victim, it would be unwise to assume it is invulnerable. When vulnerabilities are found, they are shared, and the community being what it is - a global family of empirically minded devotees - proofs of concept are required. This means that not only is the vulnerability made public, but so is the playbook on how to exploit it.

And everyone knows the same information at the same time, from vendors to cybercriminal cabals. So, the criminals have an advantage. The customisability of Linux has led to many “flavours” that may need their own variant of a patch when one is issued. That means the community is in a race with attackers to develop and release several workarounds before the bad actors can duplicate the exploit.

The patch playbook

The Linux end-user, therefore, is in dire need of their own patch playbook. And three basic actions can form a good foundation. Before anything else, build a comprehensive IT asset inventory. Hardware, OS, and applications, listed together with any cloud services and their up-to-date statuses, will allow security and IT teams to visualise their environments.
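As a flavour of what that first step involves, here is a minimal sketch of collecting basic inventory facts from a single Linux host. It is illustrative only - real asset discovery spans hardware, applications, and cloud services across the whole estate - and the fields gathered here are just a starting point.

```python
# Minimal sketch: collect basic inventory facts from one Linux host.
# Real asset-inventory tooling covers far more (hardware, cloud
# services, installed applications) across the entire estate.

import json
import platform

def read_os_release(path: str = "/etc/os-release") -> dict:
    """Parse the standard os-release file into a dict."""
    info = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

def host_inventory() -> dict:
    os_info = read_os_release()
    return {
        "hostname": platform.node(),
        "kernel": platform.release(),   # e.g. "5.15.0-91-generic"
        "distro": os_info.get("PRETTY_NAME", "unknown"),
        "arch": platform.machine(),
    }

print(json.dumps(host_inventory(), indent=2))
```

Run across a fleet and fed into a central store, records like this give teams the environment-wide visibility the inventory step is meant to provide.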

Enterprises that build asset inventories should avoid “tool clutter”. Some may argue that asset discovery is only comprehensive if purpose-built tools are used to identify each kind of asset, but this can lead to precisely the complexity they were trying to avoid, through duplication of work and data across teams. Different tools may classify the same asset in different ways, leading to inaccuracy. Tool clutter is easily dispelled by adopting a single-dashboard system capable of discovery, scanning, prioritisation and even remediation.

Second comes the triage of issues. Classifications of risk vary widely with industry and enterprise, but teams should be looking at factors such as the age of the vulnerability, whether a fix exists, how common the issue is, and what the results of an exploit are likely to be. For example, a vulnerability that is easy to exploit but leaves no vital assets exposed to compromise is probably a low priority, as is a zero-day that involves a lot of man-hours and expense (on the part of the bad actors) to leverage. Conversely, a well-known, old, dangerous vulnerability for which a patch exists is a higher priority. A simple scoring sketch of this kind of triage follows.
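The factors above can be combined into a crude priority score. The weights and thresholds below are hypothetical, purely to illustrate the shape of such a triage rule; any real programme would tune them to its own risk classifications.

```python
# Crude vulnerability-triage sketch. Weights and factors are hypothetical
# illustrations of the criteria discussed above, not a real scoring standard.

from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    age_days: int        # how long the vulnerability has been known
    patch_available: bool
    prevalence: float    # 0..1: how common the affected software is
    impact: float        # 0..1: damage if exploited on exposed assets

def priority(v: Vuln) -> float:
    """Higher score = patch sooner."""
    score = 0.0
    score += min(v.age_days / 365, 1.0) * 2   # older, well-known issues rise
    score += 3 if v.patch_available else 0    # a waiting fix is low-hanging fruit
    score += v.prevalence * 2                 # widespread software = bigger target
    score += v.impact * 3                     # exposed vital assets dominate
    return score

vulns = [
    Vuln("old-known-dangerous", 900, True, 0.9, 0.9),
    Vuln("hard-to-use-zero-day", 5, False, 0.2, 0.4),
]
for v in sorted(vulns, key=priority, reverse=True):
    print(f"{v.name}: {priority(v):.1f}")  # 9.5 vs 1.6
```

The old, patchable, widely deployed vulnerability comes out far ahead of the exotic zero-day, matching the intuition described above.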

Race against time

Doing this triage well also involves adopting the right tools, because just as tool clutter can duplicate and mislabel assets, it can lead to slow triage and a laborious patching process (the third step), slowing the very process that is in competition with well-equipped, well-informed cybergangs. While the Linux community has published several tools to overcome this challenge, their need for manual intervention can often limit their effectiveness.

Again, a single-console solution that scans, prioritises, and remediates is more efficient, and frees up beleaguered teams to concentrate on more analytical and innovative tasks. It saves the dozens of man-hours otherwise spent trying to determine the most appropriate action for each asset. Add in the capabilities to patch with a single button press and automatically compile up-to-date reports on remediated vulnerabilities, and we arrive at a highly optimised solution.
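Pulling the three steps together, the overall loop looks something like the sketch below. The patch_host function is a hypothetical stand-in for whatever remediation mechanism an organisation actually uses; the point is the flow from inventory to triage to automated patching and reporting.

```python
# Hypothetical end-to-end loop: inventory -> triage -> patch -> report.
# patch_host() is a stand-in for a real remediation mechanism; the
# structure, not the stub, is the point.

def patch_host(hostname: str, vuln_name: str) -> bool:
    """Stub: apply the fix for vuln_name on hostname."""
    print(f"patching {vuln_name} on {hostname}")
    return True

def remediation_cycle(inventory: dict, threshold: float = 5.0) -> list:
    """inventory maps hostname -> list of (vuln_name, priority_score)."""
    report = []
    for host, findings in inventory.items():
        # Patch highest-priority findings first.
        for vuln, score in sorted(findings, key=lambda f: f[1], reverse=True):
            if score >= threshold and patch_host(host, vuln):
                report.append((host, vuln, score))
    return report

inventory = {
    "web-01": [("old-known-dangerous", 9.5), ("low-risk-issue", 1.6)],
    "db-01": [("old-known-dangerous", 9.5)],
}
for host, vuln, score in remediation_cycle(inventory):
    print(f"remediated {vuln} ({score}) on {host}")
```

Automating this loop, with human review reserved for the judgment calls, is what turns patching from a race the defenders lose into one they can win.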

The region leverages a vast range of OS platforms, applications, and clouds to innovate for the next trend or crisis. Designing a workflow that optimises the patching of Windows and Linux and everything in between, as well as internal cloud assets, is vital. Cyber incidents stop innovation in its tracks, so we need to be ready to discover, remediate and prevent digital incursions.

An asset inventory that gives a rich overview of the environment; a risk-based assessment of what to address and in what order; and a patching process that is capable of both proactive and reactive operations – these are the pillars of effective patch management. Without them, we leave open the doors, windows, and chimneys of our digital estates. It is a short hop from there to real harm.
