
Back to the future: Putin’s return to classical geopolitics

Ezra Sharpe gives an overview of Russia’s modern military advancements and what they mean for Europe.

The Russo-Ukrainian border has been conflict-ridden for over a century. An estimated 100,000 Russian troops now lie in wait on the eastern frontier of Ukraine, ready to test the limits of Western lip service. A diplomatic frenzy has ensued; Joe Biden and Vladimir Putin discussed tensions and exchanged warnings over Ukraine on 30th December, whilst US National Security Advisors continue to urge dialogue with Russian foreign policy aides. This is nothing new; a Russian presence on Ukraine’s eastern-most border has become a routine exercise since the annexation of Crimea in 2014. The strategic importance of Ukraine to Putin’s regime cannot be overstated. Since the formation of the USSR in 1922, the insatiable Russian bear has always looked westwards for its next meal. The answer to conflict prevention lies in asking why this happens, and how we might prevent it.


Most of the grand theories of classical geopolitics were sequestered at the end of the Cold War. They were deemed too totalising, too generalising, and too universal to explain modern phenomena. The new neoliberal world, with all its messy contradictions and complexities, was simply too vast and too unforeseeable to be captured by grand theories, most argued. But Putin’s Russia has proven itself to be an exception, reviving the age-old, dusty theories of Halford Mackinder’s ‘Heartland’ and Nicholas Spykman’s ‘Rimland’ from the shadows. If the recent actions of Moscow are explicable, that is where the answer lies.

The inspired military mood of Moscow has prompted much debate amongst geopolitical strategists. Should the West adopt a line of appeasement, nodding to Putin’s unwavering demand that the US rule out the eventual admission of Ukraine and Georgia to NATO? For many analysts, this is just another one of Putin’s bluffs to add to the large catalogue of unrealised threats. To others, Moscow is slowly curating a milieu to exploit as a pretext for military invasion. Either could be possible.

That is why it is essential that the US, amongst other Western powers, takes the initiative to mobilise active troops within Ukraine - albeit without the intent to ever raise a fist. If the US is seen to flinch when clarion calls are issued and violence is threatened, the consequences for global geopolitics could be fatal. Wars occur not when aggression is snuffed out early, but when peace is no longer deemed to be worth fighting for.

The best way to prevent war is not to deploy troops once it has already started – it is to ensure that the guns are never loaded in the first place. To achieve this, however, politicians and strategists must learn to identify the precursors of war when they lie brazenly before us, much like a canary in a coal mine. History proves that large-scale conflicts do not erupt out of thin air. They occur when flickers of unchecked aggression become the status quo. And they also occur when pacifists become blind to division and identity politics, which sow the seeds of hatred, blame, and anger. Recognising the rationale behind Putin’s foreign policy is good, but understanding the common denominator in the outbreak of war alongside it is even better.

The most famous translation of geopolitical hypothesis into geopolitical reality has been through Halford Mackinder’s ‘Heartland’ theory. Mackinder postulated that control over the core of Eurasian territory would be the key to global power:

“Who rules Eastern Europe commands the Heartland

Who rules the Heartland commands the World-Island

“Who rules the World-Island commands the world”

The ‘Heartland’ would be the most advantageous geopolitical location: situated at the pivot of Eurasia, inaccessible to hostile sea vessels, and impregnable thanks to its harsh winters and vast land fortress. He argued that power would lie in the victory of the dominant land powers over the sea powers. This was built upon by Spykman’s ‘Rimland’ theory, which argued that the strip of coastal land surrounding Eurasia was more significant. The ethos of these theories can be seen in the repudiation of the Molotov-Ribbentrop pact and Hitler’s invasion of the USSR in 1941. Despite the diplomatic promise that Nazi Germany and the Soviet Union would not invade one another during the Second World War, Hitler chose to do so anyway. Rather than being a symptom of power-hungry petulance, it is likely that gaining control of Eastern Europe - the ‘Heartland’ - was always in the Nazi blueprint. After all, the chief Nazi geopolitician, Karl Haushofer, was an avid disciple of Mackinder’s work, which explicitly outlined that the successful invasion of Russia by a Western European nation could serve as a catalyst for the reclamation of global hegemony.

Putin is the most recent leader to follow suit, but with a new flavour. Of course, these theories are grossly outdated. They were written at a time before airpower had come to fruition, and when the power of the digital world would have been nothing other than a figment of one’s imagination. Moscow has chosen to rewrite them instead.

Amongst other enticements, Putin’s desire to irreversibly absorb Eastern Ukraine into his desired territory can be reduced to two main factors relating to these theories: access to warm water ports, aligning with Spykman’s ‘Rimland’, and the expansion and protection of Eastern land power, reflecting Mackinder’s ‘Heartland’. In a globalised world, the ability to trade with ease brings economic leverage, and leverage brings power. For a country with such vast coastal territory, Russia has appallingly bad access to global sea routes and trade, with many ports frozen year-round. The Crimean port of Sevastopol is a missing piece in Putin’s strategic puzzle, providing warm water access to global shipping routes and allowing the Russian military to extend its control into the Black Sea and beyond. Secondly, in traditional Cold War fashion, any westwards territorial expansion is deemed advantageous by a Russian regime which sees the US and NATO as omnipresent, ever-looming threats.

To understand the actions of Putin, it is critical that we attempt to analyse his motives. These examples do not tell us that Putin will invariably stick to Mackinder and Spykman’s geopolitical blueprints. But, crucially, they demonstrate that diplomacy over the new ‘Eastern Question’ only serves to kick the can down the road. If the well-thumbed geopolitical playbook continues to be followed with increasing resolve, we should pre-emptively prepare for escalated flare-ups along Ukraine’s eastern border. Just as much as it is important to recognise Putin’s raison d’état, it is equally important to learn the signs of warmongering before conflict is allowed to ensue. Large-scale wars are not momentary spasms in the peacekeeping status quo; rather, they emerge when small-scale escalations of violence are left unchecked.
The First World War was not a global squabble over who was responsible for the assassination of Archduke Franz Ferdinand; it was the culmination of decades of colonial jostling, battles for naval supremacy, and military sabre-rattling. By the same token, the outbreak of the Second World War was steeped in years of uncurbed aggression from Nazi Germany, in both its domestic and foreign affairs. Appeasement does not work when you are sat across the table from warmongers. The placement of Russian troops on the Ukrainian border may seem like only a momentary spasm in the otherwise smoothly running peacekeeping operations of Europe. But it is these very glitches which, when left unchallenged, can mutate into actions far more deleterious.

Biden’s claim that stationing US troops in Ukraine was “not on the table” is therefore a serious diplomatic blunder, severely weakening NATO’s standing by ruling out preventative military responses to Russian aggression. Global security cannot be left strictly to the realm of rhetoric. When world leaders declare their unwavering support for the retention of autonomy, sovereignty, and democracy, boundaries must be drawn and the red line must be enforced.

Artwork by Ben Beechener.


Is the Oxford collegiate system financially fair for all students?

Isobel Lewis

There are certainly big disparities between costs of accommodation at different colleges. I’m at St Peter’s, and biased as I am in its favour even I can see that it might not be as well known to applicants as somewhere like Christ Church. Inevitably, a lot of students get pooled here in the interview process, meaning offer holders are faced with the choice between stumping up the exorbitant costs of living out in second year, because there isn’t enough room for us in college, and turning down a place at Oxford. That’s just one example of how the collegiate system can be uncompromising and unfair.

Zoe Lambert

Not at all! The wealthiest Oxford colleges are St John’s, Magdalen, and Christ Church - colleges renowned for attracting students from elite private schools. They then offer subsidised accommodation and meals alongside generous grants, providing financial support to those who least need it and thereby perpetuating the cycle of privilege.

Sonya Ribner

While I recognize that colleges have different financial circumstances and different accommodation available on site, colleges that allocate housing based on students’ financial resources unnecessarily differentiate between their students. I attend Magdalen, where students cannot pay more for better accommodation and, thus, all room allocation is ballot-based. However, at a college such as St John’s, which also provides accommodation for the full duration of undergraduate courses, there is a range of pricing options. Though this system may allow for lower prices than Magdalen can offer, the differential pricing of housing is inappropriate because a student’s room should not depend on their ability to pay.

Vlad Popescu

The disparity between college resources and support creates a paradoxical situation in which concerns about access become meaningless - particularly when the colleges providing some of the best support for students from access initiatives are also among those with the highest proportion of private school students. The financial disparity in the collegiate system is not only unfair but also counter-productive to creating a more accessible Oxford University.

Science Snippets

Panda camouflage found: A panda rolling down a hill - or is it…? You probably won’t be able to tell… Researchers have finally uncovered the counterintuitive reason why pandas have the iconic fur patterning we all know and love: it’s keeping them hidden. Using image analysis on photographs of pandas in both captive and wild settings, it was found that though we may be able to pick out these high-contrast furballs in captivity, the panda’s coat can evade detection by predators in natural habitats. The different regions of a panda’s coat blend in with various elements of the background: black fur breaks up the body outline and conforms with dark shadows; white fur seamlessly morphs into snowy landscapes; ‘intermediate’ (brownish) fur matches the earthy ground. This mishmash of tones seems to provide pandas with multi-setting camouflage! - Taylor Bi

Tech Tidbits

Brain cells in dish beat AI in video game. Scientists from Cortical Labs found that neurons in a dish learned to play ‘Pong’ faster than AI technology.

James Webb Telescope prepared for mission. NASA’s giant telescope reached its final stage of deployment ahead of setting out to measure infrared radiation in space.

Could artificial intelligence disrupt our world?

L. Sophie Gullino discusses why you should work on AI safety.

Every time that Netflix recommends you a movie, or you ask Alexa for today’s weather, you are using an artificial intelligence (AI) designed to perform a specific function. These so-called “narrow” AIs have become increasingly advanced, from complex language-processing software to self-driving cars; however, they are only capable of outperforming humans in a relatively narrow range of tasks.

Following the intense technological race of the last few decades, experts state that there is a significant chance that machines more intelligent than humans will be developed in the 21st century. Whilst it is difficult to forecast if or when this kind of “general” AI will arise, we cannot take lightly the possibility of a technology that could surpass human abilities in nearly every cognitive task.

AI has great potential for human welfare, holding the promise of countless scientific and medical advances, as well as cheaper, high-quality services, but it also involves a plethora of risks. There is no lack of examples of failures of narrow AI systems, such as AIs showing systematic biases, as was the case for Amazon’s recruiting engine, which in 2018 was found to systematically favour male candidates over female ones.

AI systems can only learn from the information they are presented with; hence, if the Amazon workforce has historically been dominated by men, this is the pattern the AI will learn, and indeed amplify.
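This feedback loop can be sketched in a few lines of code. The example below is a purely illustrative toy, not Amazon’s actual system: the records, the 80/20 split, and the naive scoring rule are all invented for the sake of the example.

```python
# Hypothetical historical hiring records: each tuple is (gender, hired).
# The data is skewed: most past hires were men.
history = ([("M", True)] * 80 + [("F", True)] * 20 +
           [("M", False)] * 20 + [("F", False)] * 80)

def hire_rate(records, group):
    """A naive 'model' that scores candidates by the historical hire
    rate for their group - it has no notion of merit, only past patterns."""
    hired = sum(1 for g, h in records if g == group and h)
    total = sum(1 for g, h in records if g == group)
    return hired / total

print(hire_rate(history, "M"))  # 0.8 - men favoured
print(hire_rate(history, "F"))  # 0.2 - women penalised, mirroring the past
```

A system that ranks candidates by such scores simply reproduces the historical imbalance, and if its decisions feed back into the data, it amplifies it.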

Science fiction reflects that our greatest concerns around AI involve AI turning evil or conscious; in reality, however, the main risk arises from the possibility that the goal of an advanced AI could be misaligned with our own. This is the core of the alignment problem: even if AIs are designed with beneficial goals, it remains challenging to ensure that highly intelligent machines will pursue them accurately, in a safe and predictable manner.

For example, Professor Nick Bostrom (University of Oxford) explains how an advanced AI with a limited, well-defined purpose could seek and employ a disproportionate amount of physical resources to intensely pursue its goal, unintentionally harming humans in the process. It is unclear how AI can be taught to weigh different options and make decisions that take potential risks into account.

This adds to the general worry about losing control to machines more advanced than us, which, once deployed, might not be easy to switch off. In fact, highly intelligent systems might eventually learn to resist our efforts to shut them down - not out of any biological notion of self-preservation, but simply because they cannot achieve their goal if they are turned off.

One solution would be to teach AI human values and program it with the sole purpose of maximising the realisation of those values (whilst having no drive to protect itself), but achieving this could prove to be quite challenging. For example, a common way to teach AI is by reinforcement learning, a paradigm in which an agent is “rewarded” for performing a set of actions, such as maximising points in a game, so that it can learn from repeated experience. Reinforcement learning can also involve watching a human perform a task, such as flying a drone, with the AI being “rewarded” as it learns to execute the task successfully. However, human values and norms are extremely complex and cannot simply be inferred and understood by observing human behaviour, hence further research into frameworks for AI value learning is required.
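The reward-driven learning described above can be sketched as a tiny toy: an agent facing two actions, one of which pays off more often, learns which to prefer purely from repeated rewards. The payoff numbers and parameters are invented for illustration; this is not any real AI system.

```python
import random

random.seed(0)
rewards = {"A": 0.2, "B": 0.8}   # hidden payoff probabilities (assumed)

q = {"A": 0.0, "B": 0.0}         # the agent's estimated value of each action
alpha, epsilon = 0.1, 0.1        # learning rate, exploration rate

for _ in range(5000):
    # Explore occasionally; otherwise pick the best-known action.
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(q, key=q.get)
    reward = 1.0 if random.random() < rewards[action] else 0.0
    # Nudge the estimate towards the observed reward.
    q[action] += alpha * (reward - q[action])

print(max(q, key=q.get))  # the higher-paying action wins out over time
```

Note that the agent optimises exactly what it is rewarded for and nothing else - which is precisely why specifying rewards that capture complex human values is so hard.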

Whilst AI research has been getting increased media attention thanks to the engagement of public figures such as Elon Musk, Stephen Hawking, and Bill Gates, working on the safety of AI remains a rather neglected field. Additionally, the solvability of the problem, as well as the great scale and seriousness of the risks, makes this a very impactful area to work on. Here, we have discussed problems such as alignment and loss of control, but we have merely scratched the surface of the risks that could arise and should be addressed. For example, there are additional concerns associated with the use of AI systems with malicious intent, such as for military and economic purposes, which could include large-scale data collection and surveillance, cyberattacks, and automated military operations.

In Oxford, the Future of Humanity Institute has been founded with the specific purpose of working “on big picture questions for human civilisation” and safeguarding humanity from future risks, such as those resulting from advanced AI systems. Further research into AI safety is needed; however, you don’t necessarily need to be a computer scientist to contribute to this exciting field, as contributions to AI governance and policy are equally important. There is a lot of uncertainty about how best to transition into a world in which increasingly advanced AI systems exist, hence governance structures, scientists, economists, ethicists, and policymakers alike can contribute towards positively shaping the development of artificial intelligence. This article is part of a collaboration with Oxford WIB‘s Insight Magazine.

Getting the right information

Mauricio Alencar considers science’s challenges against misinformation.

Science saved the world, and it will save the world. 2021 saw the wide distribution of COVID vaccinations, increased research into the state of the climate crisis and how to remedy it, the installation in Iceland of the world’s largest machine for sucking carbon dioxide from the air, the approval of a vaccine against malaria by the WHO, the landing of a NASA rover on Mars, and more. But the fight is on. Misinformation, ignorance, and shadowy statistics are resisting progress.

Fake accounts, for one, are still something social media companies are trying to get a grip on. The exact aims of the people behind these accounts are equally ambiguous and ominous. It seems everyone is getting fooled. Take top football clubs, such as Chelsea FC and Aston Villa, whose deals with Asian football gambling firms are promoted by people with fake LinkedIn accounts whose identities cannot be traced. And how easy it has become to create a credible fake account - some have garnered huge online traction to promote questionable causes, as is the case with KateStewart 22, an admirer of Saudi Arabia who is thought to be one of the country’s online propaganda tools. While conspiracy theories are just as unnerving, popular shady accounts can get away with not being held to account, free to stir up untruths.

And even where initiatives are right-minded, honest, and in favour of positive change, discrepancies can be found. The Netflix hit documentary ‘Seaspiracy’, led by Ali Tabrizi, shows just how vital it is that we protect our oceans just as much as we seek to protect our land. Yet, as a BBC article asked, “Is Netflix’s Seaspiracy film right about fishing damaging oceans?”. At times, the documentary jeopardises the reliability of all the information it provides by not being as rigorous as expected - using outdated, hyperbolic statistics from 2006 in some places and providing too little detail in others. The documentary ends by directing the viewer towards the possibility of eating only plant-based food. That’s great, but I do wonder how accessible purely plant-based food actually is in cities, towns, and villages across the world. There’s still a long way to go before you can successfully advertise something which is simply not widely available. The overall message of Seaspiracy risked being lost somewhere between the scenes, and ultimately thrown away altogether.

Information must not be caught up between these lines. It is as important that media companies are able to filter truth from falsehood as it is that new scientific breakthroughs continue to be made. If such complex issues of information and data are not managed correctly, the scientific solutions and corporate decisions which could save the world risk being delayed, or perhaps wrongly made, until a fraction of a degree Celsius catastrophically tips the balance.

Image Credit: Bob Mical / CC BY 3.0 via Optimus.
