Digital Coloniality and Privatizing the Degradation of the First Principles Rights cum Economic Conditions
Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001
Digital coloniality, for the purposes of this report, is a tendency to gain influence in cyberspace through digital technologies, which become an important means to organize and systemically distribute whatever is considered “power” through the control, by a dominant group, of access to knowledge and to moral and artistic resources. Since coloniality is a modern concept related to colonialism, implying some entity of foreign origin and basis (vicariously as well as absolutely), this sub-section does not specifically cover digital feudalism: the phenomenon whereby companies in the realm of digital technologies feudalise and derogate or undermine the national sovereignty of their host state through corporate actions that supersede the scope defined by the law of the host state and, to some limited extent where necessary, international law. The role of international law in such affairs is not clearly defined anywhere; apart from calls to adopt business practices which are ethical, fair and reasonable, there is nothing specific on digital coloniality in the public and private international law literature. We have therefore assessed the phenomenon carefully in this sub-section, with a view to suggesting legal, principle-based and policy solutions in the further sections of the report. Digital coloniality is based on specific and planned actions, which can not only undermine state sovereignty but can also deprive the people of the host state subjected to it, through commercial and other kinds of operations, of their constitutional rights recognized by their basic law and even of their human rights, of individual, social and economic nature, recognized under public international human rights law.
The UNESCO Recommendation on the Ethics of Artificial Intelligence of 2021 (UNESCO, 2021) takes into consideration the adverse impacts of algorithms from four perspectives: ontology, pragmatism, epistemology and explainability. The preservation of human rights as recognized constitutional rights is always central to the onus of the host state whose citizens are impacted by any activity. It must also be determined whether the actions of private actors, such as technology companies of foreign origin, have anything to do with the onus of the state. We have adopted two possible interpretations:

• The first approach is the one various constitutional courts in India, including the Supreme Court of India, have adopted: agreeing that in certain matters, public duty (Supreme Court of India, 2000; Supreme Court of India, 1989) and public utility resemble the necessity of enforcing the role of the state in protecting the fundamental rights enshrined in the Indian Constitution. In cases concerning education, service law, environmental law and other legal issues, the courts have taken a rights-centric approach. Of course, the appropriateness and proximity of the interpretation have been taken into consideration, which is why our agreement with this interpretation is in principle only and remains subject to the case studies and illustrations given in the report, recognizing that the judgments themselves have taken nuanced approaches to these issues in the past.

• Determining public duty might be considered a legal tool to endorse judicial overreach. We do not endorse any such approach: in our estimate, public duty should be constituted by quantifying the role of artificial intelligence, estimating the effects of algorithms, and ascertaining the design and default features behind the juridical persona of the AI-based technology put into use.

Beyond the question of “public duty” and “public utility”, we have also adopted a second approach: self-regulation (a part of soft law) is a tendency among companies which, in the name of abiding by ESG compliances (for example), or at least at face value, indicates a sense of corporate governance. This sub-section does not directly delve into corporate governance issues, but it critiques, by assessing examples, the ontological and therefore foundational premises behind the corporate governance approaches companies may be keen to adopt. Companies using such approaches can show, or pretend, that they are filling gaps in the services they provide, gaps which transcend into the periphery of public duty owing to their large-scale impact (the omnipresence and omnipotence discussed in the further sections of the report) and the technocratic nature of
the proceduralist manifestations which cause the transcendence. There are also politico-commercial and other strategic-tactical aspects related to such phenomena of transcendence, which have been assessed accordingly.
How Digital Coloniality Privatizes the Advanced and Ever-Evolving Components of First Principles Rights
We are living in strange times, and one of the most agreeable sentiments about our life today is that “something big is going on with data” (Couldry, et al., 2019 pp. 336-349). Many academics are of the view that Big Data processing is ushering in a new stage of capitalism (Cohen, 2018; Srnicek, et al., 2017; Zuboff, 2015). This new world order has incessant amounts of data supporting its foundations, and the companies developing such all-encompassing influence happen to have resources surpassing those of middle-income countries (Pinto, 2018). Their ability to stockpile such colossal resources can be boiled down to three reasons. First, the control they exercise over the digital space because of their ownership of state-of-the-art hardware and the best intellectual and human capital. Second, the overall international legal order, especially intellectual property and international free trade agreements, has facilitated the rise of Big Tech, creating a chilling effect on smaller countries attempting to domesticate the complex services Big Tech performs. Third, these Big Tech companies mostly belong to countries of the Global North and had access to cheap financial capital, venture capital or public-private partnerships, since those countries invest not only to maintain their lead in the industry but also to benefit from such companies by helping them expand into newer markets (Pinto, 2018). The monopoly these companies exercise over data, and the influence they exercise in shaping our present, needs to be explained through the lens of the phenomenon to which it can best be correlated: ‘data colonialism’.
Data colonialism is the term for the extension of a global process of extraction that started under colonialism and
continued through industrial capitalism, culminating in today’s new form: instead of natural resources and labour, what is now being appropriated is human life through its
conversion into data (Couldry, et al., 2020). The basic colonial move of appropriating data from human life works hand in hand with social arrangements and technological infrastructures, some that emerged during earlier capitalism and some new, that enable that data to be transformed into a commodity. An entire industry whose means of profit reside in the extraction of human data has developed, termed the ‘social quantification industry’ (Couldry, et al., 2020). The original context was industrial capitalism’s progressive globalization in the late twentieth century through trade liberalization and extended supply chains, as well as its financialization through an explosive growth of debt (both corporate and personal) and the acceleration of global capital flows. Information infrastructures were developed which aided people and processes to be connected to each other under conditions that facilitate data extraction
(Couldry, et al., 2019). Although social media companies are the first example that comes to mind when data processing is discussed, such data extraction extends well beyond recommending products to users based on their behaviour on a social media platform. The sector includes companies like Apple, Microsoft and Samsung which manufacture IoT devices; data brokers such as Acxiom, Equifax, Palantir, and TalkingData (in China) that collect, aggregate, analyze, repackage, and sell data of all sorts while also supporting other organizations in their uses of data; and companies which use consumer data to personalise their services, such as Netflix and Amazon (Couldry, et al., 2020). A key characteristic of the business model of these companies is that when users agree to the end-user agreements of the applications built by them, they subject themselves to constant surveillance, the polar opposite of the basic autonomy which is a fundamental right of every human. The technology built by these
companies does not just stop at mass surveillance but has even more grave and far-reaching consequences. One such example is the ZunZuneo case: the United States Agency
for International Development (USAID) deployed a social media program called ZunZuneo in Cuba, one of whose objectives was to create ‘smart mobs’ to protest Castro’s rule and eventually lead to a Cuban uprising (Anderson, 2014). This example is a testament to how far Silicon Valley has come in its operations, as its technology is now being used to influence global politics. In light of the above example, it is also pertinent to note Shoshana Zuboff’s words in her article ‘Dark Google’, where she mentions how governments in a rush to connect the global poor would use that technology to their own geopolitical advantage (Zuboff, 2014). Marx presumed that communism would be the obvious next stage for society after capitalism, but considering the past decade, McKenzie Wark in Capital Is Dead ponders the question, “Has our economy transformed into something even worse than capitalism?”, and Joel Kotkin, answering in the affirmative, terms this new economic phenomenon ‘neo-feudalism’ (Dean, 2020). Neo-feudalism marks the birth
of this new world order which is governed by multinational
technology and finance giants. Jaron Lanier, the author of the book ‘You Are Not a Gadget’, observed the emergence of the peasants and lords of the internet. Such an environment has
been the result of the mass accumulation of wealth by organisations, which then work in cahoots with the governments of the nation states where they have a market, using every politico-legal instrument to exercise control over the society they have created. Such influence upon the world is only possible when the wealth they have accumulated is situated beyond the reach of the governments in whose jurisdictions they are
working. For example, Apple, Amazon, Microsoft, Facebook, and Google/Alphabet have been trying to act as if they were themselves sovereign states, with governments negotiating with, trying to attract, and cooperating with them on their terms (Dean, 2020). These big corporations have managed to create a feudal system: they have created platforms over which they hold a monopoly, generated profits by capitalising on the data provided to them by their consumers, and created the notion that such data exists freely in nature when it is in fact created by
them, essentially equating data with the concept of ‘terra nullius’ or ‘no man’s land’, the justification used by the British to plunder land and resources according to their whims and wishes (Cohen, 2018).
If historical colonialism occupied territories, their resources, and the bodies that worked on them, data colonialism is the capture and control of human life itself through appropriating the data that can be
extracted from it for profit. If that is the premise on which data colonialism is built, then just as historical colonialism created the fuel for industrial capitalism’s eventual rise, so too is data colonialism paving the way for a capitalism based on the exploitation of data. Human life is quite literally being invaded and transformed into capital: extensive amounts of data on human activity are collected and used as direct fodder to expand the capital of these Big Tech companies (Couldry, et al., 2020).
This vision of big tech companies is further being realised through IoT devices.
The most appropriate explanation of this nexus between IoT devices and data colonialism comes from IBM, which suggests that by turning the human environment into a network of listening devices that capture data about all activities, it can “liquify” areas previously inaccessible to capital. The company put it this way: “Just as large financial marketplaces create liquidity in securities, currencies and cash, the IoT can liquify whole industries, squeezing greater productivity and profitability out of them than anyone ever imagined possible.” In this view, every layer of human life, whether on social media platforms or not, must become a resource from which economic value can be extracted and profit generated.
Data colonialism is concerned with the external appropriation of data on terms that are partly or wholly beyond the control of the person to whom the data relates. This external appropriation is what makes possible such data’s exploitation for profit. This progressive opening up of human life to externally driven data extraction is what we mean by the capitalization of human life without limit. It is important to analyse the above
statement in light of Marx’s established insight that capitalism has always sought to manage human life for the maximization of profit; at the same time, colonialism absorbs new aspects of human life streams directly into the productive process. It is not that social limits to life’s capitalization can no longer be imagined, but as things currently stand, much corporate discourse fails to recognize any limits except those that it sets itself. Thus, resisting data colonialism becomes the only way to secure a human future not fused indissolubly with capitalism; indeed, the only way to sustain the very value capitalism claims to promote: human freedom.
Bearing in mind the harms of ‘data colonialism’, this portion analyses the naturalisation of data, which is a precondition of ‘data colonialism’; elaborates upon digital domination and how such domination is used by big tech firms to collect data; and articulates the considerations which need to be kept in mind when discussing the notion of ‘decolonising data’.
The Naturalization of Data
Personal data of many sorts is appropriated for ends which are not themselves “personal.” Personal data includes all data of actual or potential relevance to persons, whether collected from them or from other persons or things.
For personal data to be freely available for appropriation, it must first be treated as a natural resource, a resource that is just there. Extractive rationalities need to be naturalized or normalized, and, even more fundamentally, the flow of everyday life must be reconfigured and represented in a form
that enables its capture as data (Couldry, et al., 2019). Jason Moore argues that capitalism historically depended on the availability of cheap nature: natural resources that are abundant, easy to appropriate from their rightful owners, and whose depletion is seen as unproblematic, but whose “availability to capital” itself had to be constructed through elaborate means of marketization. So too with what we now call “personal data,” but which is the outcome, not the precondition or prior target, of a newly “computed sociality” (Alaimo, et al., 2016 pp. 175-191).
Natural resources were and are not cheap per se, but legal and philosophical frameworks were established to
rationalize them as such, on the basis that they were “just there.”
Only later did the costs to humanity of treating natural resources this way come to be appreciated. The apparent naturalness of data colonialism’s appropriations relies also on a large amount of ideological work, just as historic colonialism did. Consider the business cliché that data are “the new oil,” lost to humanity until corporations appropriate it for some purpose. This rests on the construction of data as a “raw material” with natural value, framing data as a valuable resource of the 21st century, a new type of raw material on par with capital and labour. Through this informal comparison, the links of data back to a prior process of data collection (i.e., appropriation) are obscured. A blurring is achieved metaphorically through the common idea that data are “merely” the “exhaust” exuded by people’s lives, and so not capable of being owned by anyone.
To accomplish the appropriation of personal data, data colonialism relies on other extractive rationalities as well.
As many critics have noted (Fuchs, 2017; Scholz, 2013), and this is what we share with earlier debates, there is a social rationality that treats much of the labour that contributes to data extraction as value-less, as “just sharing.” There is also a practical rationality that frames corporations as the only ones with the power and capacity to process (and thus appropriate) data. Simultaneously, a political rationality
operates to position society as the natural beneficiary of corporations’ extractive efforts, just as humanity was supposed to benefit from historical colonialism as a “civilizational” project.
Digital Coloniality and Corporate Exceptionality in Utility: Corporate Hostage of Specialities
Digital Domination and Modes of Extraction
Under digital colonialism, foreign powers, led by the US, are planting infrastructure in the Global South engineered for their own needs, enabling economic and cultural domination while imposing privatised forms of governance. To accomplish this task, major corporations design digital technology to ensure their own dominance over critical functions in the tech
ecosystem. This allows them to accumulate profits from revenues derived from rent in the form of intellectual property or access to infrastructure and surveillance in the form of Big Data. It also empowers them to exercise control over the flow of information (such as the distribution of news and streaming services), social activities (like social networking and cultural exchange), and a plethora of other political, social, economic and military functions mediated by their technologies. The control of code is foundational to digital domination. In Code and Other Laws of Cyberspace, Lawrence Lessig famously argued that computer code shapes the rules, norms and behaviours of computer-mediated experiences in ways similar to architecture in physical space (e.g. imperial railways designed for colonisation) (Reidenberg, 1998). ‘Code is law’ in the sense that it has the power to usurp legal, institutional and social norms impacting the political, economic and cultural domains of society. This critical insight has been applied in fields like copyright, free speech regulation, Internet governance, blockchain, privacy, and even torts (Kwet, 2019 pp. 3-26). What has been missed, however, is how US dominance of code – and other forms of digital architecture – usurps other countries’ sovereignty. Digital forms of power are linked through the three core pillars of the digital ecosystem: software, hardware and network connectivity (Kwet, 2019). Software is the set of instructions that define and determine what your computer can do. Hardware is the physical equipment used for computer experiences. The network is the set of protocols and standards computers use to talk to each other, and the connections they make. Software functions as the coded logic that constrains and enables particular user experiences. 
For example, software determines rules and policies such as whether or not users can post a message anonymously at a website, or whether or not users can make a copy of a copyright-restricted file like an e-book. The rules that a programmer codes into the software largely determine technological freedoms and shape users’ experiences using their devices. Thus, software exerts a powerful influence on the behaviour, policies and freedoms of people using digital technology. Control over software is a source of digital domination primarily exercised through software licences and hardware ownership. Free Software
licences allow people to use, study, modify and share software as they see fit (the GNU Project, for example). By contrast, non-free software licences grant a software designer control over users by precluding the ability to exercise those freedoms. With proprietary software, the human-readable source code is closed off to the public, and owners usually restrict the ability to use the software without paying. In the case of Microsoft Windows, for example, the public must pay for the programme in order to use it; they cannot read the source code to understand how it works, they cannot change its behaviour by changing the code, and they cannot share a copy with others. Thus, with proprietary licensing, Microsoft maintains absolute control over how the software works. The same goes for other proprietary apps, like Google Play or Adobe Photoshop. By design, non-free software gives the owner power over the user experience. It is authoritarian software. Control over hardware is a second source of digital domination. This can take at least three forms: software run on third-party servers, centralised ownership of hardware, or hardware designed to prevent users from changing the software. In the first scenario, software is executed on someone else’s computer. As a result, users are dispossessed of their ability to control it. This is typically accomplished through Software as a Service (SaaS) in the cloud. For example, when you visit the Facebook website, the interface you are provided executes on third-party hardware (i.e. on Facebook’s cloud servers). Since users cannot change the code running on Facebook’s servers, they cannot get rid of the ‘like’ button or change the Facebook experience. ‘There is no cloud’, the saying goes, ‘just someone else’s computer’. Corporations and other third parties design cloud services for remote control over the user experience. This gives them immense power over individuals, groups and society (Stallman, 2018).
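The “code is law” point above can be made concrete with a minimal sketch: the rules Lessig describes are nothing more than conditionals a programmer writes, and whoever controls those lines controls the user’s freedoms. All names below (PlatformPolicy, can_post, can_copy) are hypothetical illustrations for this report, not any real platform’s API.

```python
# Illustrative sketch: platform policy enacted as ordinary code, assuming
# hypothetical names. The vendor's defaults, not any statute, decide what
# users may do; users of proprietary or server-side software cannot change them.

from dataclasses import dataclass
from typing import Optional


@dataclass
class PlatformPolicy:
    # Defaults chosen by the software owner, not by law or by the user.
    allow_anonymous_posts: bool = False
    allow_copying_drm_files: bool = False


def can_post(policy: PlatformPolicy, user_id: Optional[str]) -> bool:
    """Anonymous posting is permitted only if the code says so."""
    if user_id is None:  # no account supplied -> anonymous user
        return policy.allow_anonymous_posts
    return True


def can_copy(policy: PlatformPolicy, drm_protected: bool) -> bool:
    """Copying a DRM-restricted file is refused in software, before any court is involved."""
    if drm_protected:
        return policy.allow_copying_drm_files
    return True


policy = PlatformPolicy()  # the vendor's defaults
print(can_post(policy, None))    # anonymity denied by code: False
print(can_copy(policy, True))    # copying denied by code: False
```

When such logic runs on the vendor’s own servers (the SaaS scenario above), users cannot even inspect these conditionals, let alone change them, which is precisely the asymmetry of control the text describes.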
In the second scenario, people become dispossessed of hardware ownership itself. With the rise of cloud computing, it is possible that hardware manufacturers will soon only offer low-powered, low-memory devices similar to the terminals of the 1960s and 1970s, with computer processing and data storage conducted primarily in centralised clouds. With end-users dispossessed of processing power and storage, software and data would be under the absolute control of the owners and operators of clouds (Pierce, 2018). In the third scenario, hardware is
manufactured with locks that prevent users from changing the software on the devices. By locking down devices to a
predetermined set of software choices, the hardware manufacturer determines which software is allowed to run
when you turn on your device (Sullivan, 2008). Thus, hardware restrictions can prevent the public from controlling their devices, granting device manufacturers power over users. Control over
network connectivity is a third source of digital domination. Net neutrality regulation proposes that Internet traffic should be ‘neutral’ so that Internet Service Providers (ISPs) treat content flowing through their cables, cellular towers
and satellites equally. According to this philosophy, those who own the pipes are ‘common carriers’ and should almost never be allowed to manipulate the data that flows through them. This constrains the ability of wealthy media providers to pay for faster content delivery speeds than less wealthy providers (such as grassroots organisations, small businesses, and common people). More importantly, by treating traffic equally, net neutrality prevents network discrimination against various forms of traffic critical to civil rights and liberties. Yet political lobbyists manage to persuade governments around the world to abolish this principle, which protects users from paying extra to access the Internet as it exists now; its abolition would hand immense control to ISPs, which could then censor the content watched by the general population. This monopoly over resources enables the big tech firms to freely extract mass amounts of data through the software which they employ and distribute to their consumers. For example, social media platforms such as Facebook and Twitter act as a domain where people can interact with other people, essentially giving these companies a window into what happens in people’s everyday lives, much of which was outside the scope of these companies earlier. What they are doing is the appropriation of the social itself (Arvidsson, 2016 pp. 3-23). Since these social media platforms, to improve
the engagement of their existing users, encourage them to put out endless amounts of content, such a philosophy does away with any limit on the facets of human life which can be appropriated. As mentioned earlier in this portion regarding the extraction of data via IoT devices, this has given rise to a new trend of self-data, or voluntary data
collection as a requirement of labour or other important contractual commitments, such as insurance or social security (Levy, 2015 pp. 160-74). For example, Amazon has been planning to introduce a four-camera system which will track its drivers and inform management when drivers look away from the road, speed or even yawn; a 2019 Insider report claimed that Amazon is working on devices to track its workers’ activities with respect to the time they spend off-task, so that the company can fire employees who do not match the productivity standards it sets. Amazon has been able to get away with these invasive practices without anyone batting an eye because the data it collects does not come under the definition of “private data” and does not amount to a violation of its workers’ privacy under law, so it faces no liability for such unscrupulous practices. Frances Haugen, the whistleblower behind the Facebook revelations, stated in an interview that there were conflicts of interest between what was good for the company and what was good for the public, and that Facebook over and over again chose to favour its own interest (CBS, 2021). In a report she revealed, Facebook described itself as the best in the world at filtering out posts which incite hate and violence, yet chose to do nothing about such posts because they increase user engagement. A 2019 internal Facebook report obtained by Haugen claimed that several parties complained to Facebook that its algorithm was forcing them to add more negative words to their communications on the platform, pushing them into more extreme policy positions and thus creating a more polarised world, so long as this benefits the interests of Facebook.
The Epistemic Monopolization of “Needs” and “Wants”: Privity and Public Vicinity
Decolonizing Data
Even before the eruption of the Facebook/Cambridge Analytica scandal, there were signals of a growing willingness of regulatory bodies, especially in Europe, to challenge the great powers of data colonialism (for example, Google and Facebook). However, the Cambridge Analytica scandal provoked a crisis of
higher intensity: instability in tech-sector share prices, a popular movement (on social media, of course) to #leavefacebook, calls in the mainstream press to learn again the lesson of how the nineteenth century restrained the raw injustices of early capitalism, and even an editorial in the Financial Times that entertained the case for “everyone to leave Facebook” (Couldry, et al., 2020). One could propose to step aside and let government fix the problem with robust regulation. But this response risks missing the bigger picture entirely. Suppose regulators did tame the raw force of data colonialism into a more measured pattern of data extraction. Suppose that users generally did become less trusting of data corporations’ motives, puncturing the more obviously self-serving ideologies of the social quantification sector. Suppose that Facebook’s or even Google’s data-harvesting power was opened up to various forms of public use and benefit. What these changes still would not touch is the strategy of data colonialism as a whole: to build a new social and economic order based on data appropriation. It is this larger narrative of transforming life through data that we are told is impossible to halt, because it is driven by a “technological momentum” that is “inevitable” (Kelly, 2016). The notion that datafication is inevitable is, as we noted previously, a myth of data colonialism. But how to resist it is another policy question altogether. As already stated earlier in this portion, colonialism is about appropriation; whereas historical colonialism appropriated land, resources, and bodies, today’s new colonialism appropriates human life by extracting value from data. Platforms play
a key role in making our participation in data relations seem
natural. In the initial portions, it was emphasised that there is something even more fundamental to digital colonialism: the drive to capitalize human life itself in all its aspects and to build through this a new social and economic order that installs capitalist management as the privileged mode of governing every aspect of life. This annexation of human life is what links data colonialism to the further expansion of capitalism. It is the fundamental cost of interacting with the world through the technology present in our hands.
Confronting data colonialism as a whole is seriously inconvenient. That is because data colonialism’s model for organizing things underlies countless business models and
everyday resources. But then, the point of opposing colonialism was never immediate economic convenience. Over the past two decades, billions of people have started to organize much of their personal lives around the infrastructures of digital platforms and other services that depend on seamless flows of data.
This problem of inconvenience drives the temptation toward partial solutions—for example, the idea that we just need better networks, perhaps networks with an element of public purpose or, indeed, that we should follow big tech’s own proposals for how to use their products in ways that enhance
“digital wellness.” (Srnicek, et al., 2017). Better data-driven networks will not save us. By attempting to reform a particular network within a wider system of platforms, we are not challenging the foundations of the system but merely finding alternative ways to replicate it. It is no part of our argument to suggest that people should instantly and completely disconnect from the infrastructures of connection that have been built over the past two decades. Hundreds of millions of people have adapted their lives in response to the existence of platforms, and much use of social media and data processing is productive and well-meaning. But if the larger
outcome of data infrastructures’ use of us is a wider order that over time dismantles human autonomy, there can be no neutral use of those tools; there can be no benefits from the use of platforms that do not, at the same time, reproduce that order at its root. This is the larger concern that motivates the development of risk-sensitive approaches.
Hence, to bring about any substantive change, we need to reimagine our data relations with the devices we choose to interact with. The order of data relations relies on the unlimited possibilities of data processing generated by the infinite connectivity of the contemporary world. Data relations rely on removing all limits to data appropriation and thus on building an expanding, knowledge-based social and economic order. From simple starting points (computers’ data capture, the connectability of computers, and the information processing and monitoring that together they enable), institutions and systems are acquiring the capacity to govern life in a completely new way.
The unlimited targeting of persons by marketing messages is not “personalization”. The pursuit of continuous automated surveillance does not really bring the democratization of health or the educational promotion of
digital citizenship.
This reimagining of our existing relations to data is much
more than saying no. Rejecting the idea that no obstructions must be allowed to the flow of data can be formulated positively as affirming what Janet Vertesi calls the principle of seamfulness (Vertesi, 2014). This is the idea that, instead of prioritizing the seamless movement of data, transfers of data must always first be responsible and accountable to those affected by that data.
Affirming “seamfulness” as a positive principle also suggests the wider possibility of building a whole set of ecological principles that would challenge the naturalness of data
colonialism’s so-called ecology. These would include defending the possibility of autonomous human agency and human relations, on which data practices have relentless effects (Couldry, et al., 2020). Data colonialism proposes a connected world that appears to know itself through its absolute connectability and through unfettered data flows between all its points (whether living or not). But this clashes methodologically with a much older vision of how human life should be: an ecological view of human life, unchallenged until very recently, which assumes that human life, like all life, whatever its limits, constraints, and deficits, is a zone of open-ended connection and growth. This view argues that humans should have complete autonomy over their actions, and that it should be societal institutions and human forces, rather than a few corporations that have managed to monopolise this world’s data, which influence the development of humans.
Human-centric Rationality
How can human beings exercise freedom of choice in societies in which processes of discrimination and opportunity segregation operate in an algorithmic shadow zone? In this short part, we have presented a general non-Eurocentric approach to rationality, purely for critical review and not to signal any policy acceptance of it.
Critically, Amartya Sen considers that the account of freedom starts from the question of whether daily life gives an individual “the actual ability to achieve those things that she has reason to value.” (Sen, 1999). Economic thinking, he argues, assumes away much of the complexity of individual reasoning, just as do the hollowed-out models of human agency in much contemporary social science. Sen argues that we need a better and more inclusive model of human rationality if we are to avoid endangering “the conditions and circumstances that ensure the range and reach of the democratic process.” (Sen, 2002). In line with this, data science, and the wider “science” of Big Data, can be construed as in need of a similar challenge. The method and goal, then, is not to abandon the idea of rationality (or order) but to reanimate it in terms of different values. The goal is not to abandon rationality, order, or even the claim to universality but to reject the highly distinctive claim to absolute universality that characterizes European modernity (Couldry, et al., 2020). The West’s much-heralded liberal “pluralism” has always, as Quijano explains, ruled out the pluralities that it found inconvenient (Quijano, 2007 pp. 168-178). The ideologies of data colonialism are no different, insisting that the whole world, every part of the social world, on every possible scale and at every possible layer of meaning, can be organized in accordance with a single integrated scheme or totality that categorizes all people, acts, and possibilities singly and in opposition to one another. There is no empirical test that could verify this vision of social knowledge: it has authority only by virtue of being imposed on a world reconfigured in its own image. The practical starting point for resistance to data colonialism at last becomes clear.
It is to articulate a vision that, against the background of Big Data reasoning, will appear counterintuitive to many: a vision that rejects the idea that the continuous collection of data from human beings is a rational way of organizing human life. From this perspective, the results of data processing are less a natural form of individual and societal self-knowledge than they are a commercially motivated “fix” that serves deep but partial interests and that, as such, should be rejected and resisted. It may help here to listen to those who have fought