The Proposed Regulatory Approach
Regularizing Artificial Intelligence Ethics in the Indo-Pacific, GLA-TR-002
involved. New technologies require standardization, such as technical standards and certification, to increase economic efficiency and reduce asymmetrical data sharing among consumers and stakeholders. Regulatory intervention is also essential where AI technologies can produce risks or harms by degrading human safety or the human environment. In such cases, rights and interests must be assessed, and regulations must be formulated that protect rights and evaluate safety concerns, as in the case of lethal autonomous weapons, cybersecurity, privacy, and data protection. Consequently, instead of placing primary focus on the emerging technology of AI itself, the regulatory system and the rationale behind the regulatory ecosystem should rely on the socioeconomic impact or changes effectuated through the deployment of AI technologies or applications (Moses, 2017). The regulatory ecosystem should rely more on the sociotechnical effects that occur, replacing an overbroad reliance on a technocentric approach tied to a particular technology, which renders the regulatory regime obsolete for future technologies subject to advancement and research.
1. Gaining Politico-Legal Consensus for Commonalities on the Regulatory Approach
It is imperative that legal and political agreement be reached in the Indo-Pacific region to ensure that the appropriate developments arrive, take hold, and evolve for the better. Three primary themes emerged which need to be examined, namely: domestic division, uncertainty, and hedging.
Domestic Division: Every country has had the constant concern that its own national policies, or those of potential partners, are split between two lobbies. The perception has been that large sections of the political and economic communities look towards closer ties with China, while the defence, intelligence and security communities are concerned with Beijing's influence and intentions, both domestically and internationally. This fracture has impeded decisive and focused decision-making.
Uncertainty: Looking ahead to 2024, participants often mentioned the extraordinary amount of prevailing uncertainty in international affairs, with one US participant saying, 'there are more balls in the air than at any time since World War II'. Factors mentioned included Brexit, US commitment to allies (and vice versa), economic stability, the role of artificial intelligence in warfare, advances in Chinese military technologies, the speed and depth of India's strategic engagement in the wider Indo-Pacific, and Beijing's intentions in places such as Hong Kong, the South China Sea and Taiwan.
Hedging: More often than not, domestic divisions and uncertainty have resulted in hedging, whereby countries attempt to manage their relationships with the US and China in a manner that leaves their options as wide open as possible. The general sentiment, however, has been that by 2024 hedging would come to an end as both Beijing and Washington increase pressure. These three themes have manifested differently in the six countries and, over the course of the project, have evolved to create an increasingly tense and dynamic strategic environment in the Indo-Pacific.
These themes are common to all the regions in question and must therefore be addressed.
As a solution to the domestic divide, frequent and efficient talks and conferences are the best way to guarantee early resolution of the issue. For the factor of uncertainty, only the passage of time and the announcement and implementation of policies by the relevant actors can cast away the ambiguities and concerns. With regard to hedging, increased pressure from relevant actors can achieve the objective of specific alignment, but the executives and legislatures of the respective regions need to take the initiative upon themselves to make up their own minds and decide on specific alignments.
2. Create a Space of Regulatory Authority by Common Juridical Mechanisms and Regulatory Authorities
In April 2021, the European Commission (EC) published its awaited draft AI regulation, referred to as the Artificial Intelligence Act. The regulation proposes a uniform legal and regulatory approach for the use, development, and marketing of AI that conforms with the rules laid down in the regulation, and thus urges member states not to impede the growth and development of AI unless it is prohibited by the regulation. The legal framework takes a risk-based, horizontal approach, such that legal intervention is mandated when there is a present concern or an anticipated future risk. Enforcement and regulatory responsibility for the application of the Act is shared between the member states, the competent national courts or other bodies the member states deem applicable, and:
• the European Artificial Intelligence Board;
• the national supervisory authorities, for ensuring the application of the Act and exercising supervisory powers at the national level;
• the market surveillance authorities, which test high-risk AI systems on the market and conduct conformity assessments to determine whether any incidents resulting in breaches have occurred.
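The draft Act's risk-based, horizontal structure can be illustrated with a minimal sketch. The four tier names follow the 2021 proposal, but the example use cases and the `obligations` helper are illustrative assumptions, not the Act's actual annex lists:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: banned from sale, use and marketing"
    HIGH_RISK = "high risk: conformity assessment and market surveillance"
    LIMITED_RISK = "limited risk: transparency obligations"
    MINIMAL_RISK = "minimal risk: voluntary codes of conduct"

# Illustrative mapping of AI use cases to tiers under the draft Act;
# the authoritative classification sits in the Act's annexes, not here.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV-screening for recruitment": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
    "spam filtering": RiskTier.MINIMAL_RISK,
}

def obligations(use_case: str) -> str:
    """Return the regulatory consequence attached to a known use case."""
    return EXAMPLE_CLASSIFICATION[use_case].value
```

The point of the sketch is that legal intervention scales with risk: only the top tier removes a technology from the market altogether, while the lower tiers attach progressively lighter obligations.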
Japan, through its guidelines for international discussion on AI networks, has published a set of non-binding AI R&D guidelines for promoting and fostering innovation within the country; these are based on a human-centric society and soft-law guidance, and they avoid imposing increasing obligations on developers. Singapore, on the other hand, has a broad national AI strategy in place to develop guidelines, ethics, and governance frameworks for human-centric AI.
From the various approaches adopted by countries in the Indo-Pacific, it is relevant to understand that AI frameworks take shape in the form of either: 1. national AI strategies, as in the case of Singapore and the EU; 2. sector-specific AI principles; or 3. foundational guidelines, standards, and approaches to AI (European Parliament, 2021). This categorical grouping of AI legal frameworks is neither compact nor exclusive; categories can run in parallel or be a culmination of both. Several countries, such as South Korea, are still in the process of drafting national AI strategies, while Australia and Thailand have taken a sector-specific AI strategy complemented by foundational AI principles, Thailand doing so through the Thailand Personal Data Protection Act and the Cybersecurity Act (International Institute of Communications, 2020).
The juridical underpinning behind the various AI approaches taken by countries in the Indo-Pacific depends on the vulnerabilities and risks AI poses. These include physical or technical risks, such as poor design or quality in the development of AI algorithms, and societal risks, such as the lack of public information about the AI technologies in place, and policy recognition that either hypes AI realities or overestimates the risk factor of AI and takes a precautionary, state-centric, interventionist regulatory approach (Rodrigues, 2020). The lack of an effective legislative and enforcement mechanism is also an issue, but legal intervention should foster innovation. In this sense, the framework should be a foundational approach containing principles and guidelines for the various AI approaches adopted under countries' complementary national strategies, assessing the several risks posed, such as social, political, and regulatory risks.
As countries adopt AI strategies, it is important to determine the stakeholders impacted by the approach. These include governments, enforcement and regulatory authorities, consumers and citizens, and companies or corporations. Focus should also be placed on defining these stakeholders and the interests and rights they would hold in a regulatory regime.
The EU, though it adopted a strategy, has left member states with the powers to impose penalties and conduct legal affairs, requiring each member state to establish a competent authority at the state level. The Act therefore only proposes a foundation for adopting guidelines at the state level, with the oversight mechanism at the Union level. It adopts a risk-based approach that classifies several technologies as high risk and prohibits several AI technologies from sale, use, and marketing within the member states of the EU, while the enforcement mechanism remains at the state level.
Similarly, general guidelines and foundational principles arrived at through common consensus and deliberation are necessary for establishing international standards (Cihon, 2020) in the Indo-Pacific. These can focus on creating an approach that not only favours the nation states but also fosters innovation and the development of technologies within companies and corporations, and promotes effective dispute resolution in terms of civil law liabilities and IPR claims. The common guidelines can act as a reference point for the various guidelines and principles to be adopted by each country, depending on a country-specific sociotechnical and socioeconomic approach that efficiently addresses country-based challenges.
Such a composite regime can also readily be negotiated by the various Indo-Pacific countries, which can adopt a risk-centric approach (RCA) that is effective not only in addressing risks but also in facilitating intervention depending on the practical risks that deployment imposes through interaction with its environment. The risk-centric approach will be instrumental in shaping the rights-based approach, depending upon the risks exposed by the deployment of AI technologies, and thereby shaping the human-centric approach to AI. At the state level, a regulator such as an AI board can be set up that exercises oversight authority, monitors the effective implementation of the basic guidelines and foundational principles at the company level, gathers and analyses periodic compliance reports, and imposes penalties where there is contravention of the guidelines or principles laid down.
Companies should also adhere to these guidelines and set up an AI ethics board or an institutional review board that examines the contextual usage of AI within the company and its impact on consumers, to assess the AI risks and the ethical concerns raised. The guidelines discussed above will attempt a classification of various AI technologies and their risk levels. Depending upon the risk level of its AI research and development, the company should frame a specific set of guidelines, principles, and standards (Cihon, 2020), monitored by the ethics board, which ensures regulation, compliance, and due diligence before any development of AI technology. Effective internal compliance and regulation will prevent regulatory intervention, reduce the costs of dispute resolution, and ensure the flow of knowledge management systems without disruption. The board should, however, be composed of both legal and technical members who are impartial and who work together as an oversight board to ensure maximum compliance.
1. For companies that are big corporations or MNCs, the AI ethics board should additionally include a government member from the AI board, granted voting rights and decision-making power.
2. Internal audit and compliance reports should be periodically submitted to the AI ethics board and assessed by its members to ensure that compliance requirements are met.
3. Apart from MSMEs, MNCs and other companies and corporations should submit their internal or external reports periodically (quarterly or bi-annually) to the regulatory authority at the state level.
4. For MSMEs, the foundational principles of AI and their guidelines would be sufficient, along with an internal report conducted within the company, which is mandated to be submitted only if the regulatory authority compels or requests disclosure of the AI technologies the MSME is working on. For reference, a medium enterprise has total assets valued between $3 million and $15 million and employs about 50-300 people (International Financial Corporation, World Bank Group, 2021).
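The tiered reporting scheme above can be summarized as a small decision rule. The asset and headcount thresholds are the ones cited in the report (IFC/World Bank Group, 2021); the `Enterprise` type and function names are hypothetical illustrations, not part of any existing framework:

```python
from dataclasses import dataclass

@dataclass
class Enterprise:
    total_assets_usd: float
    employees: int
    is_msme: bool  # assumed to be determined under the host country's law

def is_medium_enterprise(e: Enterprise) -> bool:
    # Thresholds as cited in the report: total assets between
    # $3 million and $15 million, roughly 50-300 employees.
    return (3_000_000 <= e.total_assets_usd <= 15_000_000
            and 50 <= e.employees <= 300)

def report_submission_due(e: Enterprise, regulator_demands: bool = False) -> bool:
    """Whether a compliance report must go to the state-level
    regulatory authority under the scheme proposed above."""
    if e.is_msme:
        # MSMEs file only when the regulator compels disclosure.
        return regulator_demands
    # MNCs and other companies submit periodically
    # (quarterly or bi-annually).
    return True
```

The rule captures the proposal's asymmetry: periodic submission is the default for larger companies, while MSMEs bear only an internal reporting duty unless disclosure is compelled.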
3. Alignment of Self-Regulated Approaches on the Principled Approaches to AI Ethics and Knowledge Management
Specific AI ethics approaches, in line with the analysis made in the domain of knowledge management (KM), can explain where a proper alignment of self-regulated approaches can be reached. Such alignment is beneficial for companies who invest and transact in the Indo-Pacific region, and can be overseen by governments accordingly. The principled approaches are:
• Explainable Artificial Intelligence Ethics
• Social AI
• Non-Exploitative AI
• Bias-Alleviating AI
• Common-sense AI
• Trust-centric AI
These are the principled approaches on which companies must attempt to build self-regulated approaches. Each is described as follows:
Explainable Artificial Intelligence Ethics: The explainability of AI technologies is critical to building the credibility of the stakeholders interested in sharing technologies, and to ensuring that the implications of such technology are assessed closely. The technocratization of manifestly available AI must be closely scrutinised, and companies must work on keeping the technology they use explicable in its patterns of activity as it provides services.
Social AI: In the information age, disruptive technologies can become pseudo or partial means of social control over human lives. Anthropomorphism as a technological expectation of AI is a serious concern raised by AI scientists and ethicists around the world, because manifestly available AI technologies are not "conscious" and "sensitive" enough to avoid unforeseen and disruptive implications of their actions on humans, as data subjects and objects, and on human-based environments: economically, socially, individually, technologically, and even ecologically. It therefore becomes imperative for companies to be conscious of this. The Indo-Pacific as a normative construct must, in this context, be seen through two geoeconomic-geopolitical aspects: (a) the Asian and African countries which share the maritime and landlocked routes that complete the geopolitical construct of focus (the Indo-Pacific itself); and (b) those actors who have special interests in that geopolitical construct. Companies from category (b) countries are required to adhere to and perpetuate norms of self-regulation which are coherent and do not denigrate the economy or development of the Global South countries, most of which fall under category (a). Companies from category (a) countries, in turn, have to develop self-regulation norms with affirmative competency in comparison with those utilized by companies of category (b) countries. Another important issue must not be ignored: the exploitative effects of algorithmic anthropomorphism (subject to the manifest availability of AI) will be risky, multi-layered, and multi-directional. In that case, the Indo-Pacific must be seen through a risk-centric approach (RCA), but the RCA has to be practical, with a sociotechnical approach. Understanding technology distancing and the relationship between society and the technology itself, from both individual and collective aspects, is a necessity. Instead of adopting ideological approaches, the approach has to be realist and give space to optimal regulatory intervention.
Non-Exploitative AI: Since the RCA is becoming a strategic need for interested public actors across the globe, with companies assuming their own means, the question of exploitation due to AI technologies must be looked into seriously. This does not mean stifling innovation via state-centric regulatory interventionism, which has no rationale. Yet the reverse can happen as well: the exploitation of human data subjects and objects is a problem, and companies and entities can resort to creative approaches that stifle innovation in the Global South countries, which has to be taken seriously. There should be regional standards to estimate compliance on the maximum avoidance of algorithmic activities and operations that cause deliberate, or indeliberate yet spontaneous, forms of exploitation of human data subjects and objects. Questions related to intellectual property law, technology politics and cyber law might be raised. However, the element of exploitation must be alleviated and gradually removed, so that (a) the cycle of innovation in KM is not affected; and (b) sustainable avenues of cooperation in resource-support utilization can be achieved, adhering to better compliance and CSR frameworks, which lubricate the interests of the companies.
Bias-Alleviating AI: Algorithmic bias is a generic problem in AI studies, where biases existing in manifestly available AI technology, specifically or systemically, affect the operational capabilities of the technology. If we take a critical approach to AI-based anthropomorphism, then biases have a deeper role in becoming systemic or specific, discriminating or delivering consequential actions which are undesirable towards data subjects and objects and which carry risks. Evaluating those risks matters, and handling biases becomes complex in reality. Companies can therefore be treated with more permeable policy and regulatory interventions by the governments of the Global South, considering the geoeconomic and sociotechnical relevance of the Indo-Pacific region. Here, companies have to be careful in democratizing and structuring their knowledge management.
Common-sense AI: There is no doubt that many AI technologies in commercial use, or which could be put to commercial use, might be limited to menial or routine tasks which do not have large effects. However, as manifestly available AI technologies can exert indirect means of social control through technology distancing, it becomes important that, as self-regulation transforms, these technologies embody common sense, for consumer benefit and better compliance. This simply means that their functionality must bear optimal simplicity in terms of its explicability (explainability) and impact, considering the degree of the tasks involved on the basis of their anthropological impact. Anthropomorphism plays an important role here: if the characteristics of AI technologies, which are regularly anthropomorphic by design, are not controlled reasonably, then issues could arise. Hence, simplifying and harmonizing the impact must be given special regard.
Trust-centric AI: Technology distancing can often lead to a decline in human contact and interaction in many ways. This is not limited to physical exchanges but can extend to cyber, digital and even psychological levels, if the disruptive technology is doing the job of the data subjects per se. Exploitation is an issue, and abrupt means of technology distancing have led to a heavy decline in human trust. Moreover, in the way accessibility has transformed, with several paths marginalized mostly by design at a strategic level due to AI technologies, the element of human trust is certainly missing. Thus, AI technologies must be developed to enhance human trust and fully support human evolution, instead of anthropomorphising technology to the point where the element of human nuance and experientiality is virtually lost. Accountability and accessibility must be designed so that trust becomes the key constituent of the technology's success.
Alignments can thus take these feasible forms:
• Incremental alignments: step-by-step alignments to create a sustainable market that utilizes compliance and resource dynamics in line with emerging regulatory standards.
• Specialised alignments: if no solution exists by gathering everything, or as much as possible, then a slower approach can generally be adopted to align, provided one or two areas can be taken up to build bridges for leveraging incremental alignments further.
• Target-based alignments: targets on which alignments can be developed and differentiated, such as the place of manufacturing or retailing, managing and optimizing regulatory costs, or the enhanced democratization of CSR-oriented activities.
4. Making Convergences on AI Ethics Approaches in Practice and Regularization
For governments, converging to develop AI ethics approaches will be affected by reasons to differ. There are nonetheless potential avenues to collaborate on which AI ethics approaches can be adopted in reality, and how, leading to eventual regularization. Instead of covering which AI ethics approaches can be agreed upon, we provide methods for converging on practicable and regularizable AI ethics approaches:
• Multilateralism could be considered a "moral" approach to negotiating and designing recommendations. However, multilateralism for AI ethics could be considered a top-down mechanism, which might be agreed to in spirit but not in practice. There are also questions of regulatory competence and leverage which governments would ask, propelling more plurilateral approaches to further negotiation.
• The Indo-Pacific region is dominated by the Global South countries, especially India, Nigeria, Israel, Saudi Arabia, Japan, Singapore and others. It therefore becomes a rather contentious question whether there would be any strategic alignment among countries like these. Even legal development will be subject to the strategic pluralism which countries reasonably adopt. Hence, it is recommended that multilateral forums be considered merely as forums to develop the recipe of principled AI ethics, which can be shaped incrementally for years to come.
• India will play a major role in the Indo-Pacific, especially when it comes to AI ethics, provided it ushers in an approach unique to Indian value and knowledge systems. The role of Indic knowledge systems becomes essential for India to make the Indo-Pacific an India-centric construct. Even RCAs should have enough space to give India a special status of primacy and leadership, provided that India manages its resources and regulatory systems at the best level and exerts its economic and knowledge-economy weight, with specific and distinctive issues cleared. There is no sight of this happening for now, but this should be India's position.
• Indigenization, Localization and Economic Rights (ILER) will matter a great deal in shaping every step from AI-related manufacturing to AI-based knowledge management in the Indo-Pacific region. A step-by-step approach can be taken wherein the manifest availability of artificial intelligence is closely examined, and regional and local consensuses are developed gradually. For example, in AI education, an estimate could be made as to the respects in which the AI under consideration acts as a Subject, an Object or a Third Party (the SOTP Classification) (Indian Society of Artificial Intelligence and Law, 2021); once either of these entitative classifications is applied, government authorities can audit the economic impact, and avenues of cooperation can then be built.
The economic aspects of RCA must therefore be taken into account.
• Any human-centric approach (HCA) to AI cannot be unrealistic. Further, the HCA must not conflict with the RCAs adopted, which can be reasonably agreed upon by the Indo-Pacific. HCAs also should not limit the scope of review and decision-making to a rights-based approach (RiCA), where the effort would be invested merely in creating an infrastructure of rights enforcement without any weight. Instead, the centrality of human beings can be understood through the risks of algorithmic anthropomorphism, which compel governments to adopt quicker, permeable and interventionist RCAs. Hence, the focus of sensitivity must not be on investing in weightless or incoherent RiCAs which have no real relevance to the strategic and risk considerations per se. A simple formation of any approach can be adopted by governments in any of the following ways, non-exhaustively:
o RiCAs must be central to the RCAs adopted, which then can shape the HCAs;
o HCAs can be based on the RCAs, which can then shape RiCAs;
o RCAs should focus on the element of anthropomorphism, a core component of HCAs, which can shape the RiCAs.
• There will be a baggage of other risks which may emerge in other fields, for example environmental sciences, cybersecurity, telecommunications, and commercial and economic law. For each of them, HCAs based on countering and understanding algorithmic anthropomorphism can be very instrumental in shaping the RiCAs and RCAs comfortably.
• It is therefore an interesting question whether there could be convergence on the RiCAs, RCAs and HCAs together, in simultaneity. That is a contentious issue, since there is no guarantee it can happen. The practicality and strategic relevance of each approach will largely decide the grounds for collaboration. RiCAs therefore need to converge to ensure that a comprehensive AI-related rights-based regulatory and foresight network can be established. That can potentially happen when RCAs have a larger scope of alignment and the anthropomorphic element of HCAs becomes the optimal and larger quotient of risk (OLQR) realization. In such circumstances, RiCAs can be formidably adopted. Of course, the enforcement mechanisms would have limited aberrations, since RCAs are, ideally, not the same anyway. However, effective feedback in the form of jurisprudence, policy assertions and analyses can be put to good use.
• The case of the anthropological element of the HCAs becoming the optimal and larger quotient of risk is tricky, because it stems from R&D, skill and many other manufacturing and service-sector compliance issues. How governments study and act robustly is their business, but even there, a special focus should be on ILER. That would realistically shape the OLQR accordingly.
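As a sketch of how the SOTP Classification referenced earlier in this section might operate in practice, the following hypothetical register tags each AI deployment with its entitative role so that an authority can attach an economic-impact audit to it. All names here are illustrative assumptions, not the cited classification's actual machinery:

```python
from enum import Enum

class SOTP(Enum):
    """SOTP Classification (Indian Society of Artificial Intelligence
    and Law, 2021): the entitative role the AI is considered to play."""
    SUBJECT = "AI considered as a Subject"
    OBJECT = "AI considered as an Object"
    THIRD_PARTY = "AI considered as a Third Party"

# Hypothetical register: each deployment (e.g., in AI education) is
# tagged with its entitative classification for later auditing.
audit_register: dict[str, SOTP] = {}

def register_deployment(name: str, role: SOTP) -> None:
    """Record a deployment so its economic impact can be audited."""
    audit_register[name] = role

def deployments_for_audit(role: SOTP) -> list[str]:
    """List deployments under a given entitative classification."""
    return [n for n, r in audit_register.items() if r is role]
```

For instance, a hypothetical tutoring system might be registered as a Subject and a plagiarism detector as a Third Party, after which an authority could audit each classification's economic footprint separately.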
5. Hold Negotiations for Convergence in the Indo-Pacific Region on "Inclusive AI"
The last, or repeating, stage of the proposed regulatory approach centres on the mobility to negotiate and come to reasonable conclusions, developing a diplomatic, legal, ethical and even political constellation of convergence in the Indo-Pacific for Inclusive AI. The inclusivity of AI is a constructive question in the genealogy of technology policy, which can be churned through negotiations among governments and even companies. How these negotiations happen, and what relevance they bear, is again a contentious question.
6. Proposing Dispute Resolution Methods
An efficient dispute resolution mechanism is of the utmost importance, not only to address the rights and interests of the stakeholders but also to harness the benefits of AI technologies under a sociotechnical approach. The dispute resolution proposed here is devised according to the specific stakeholders involved in the process. Several combinations of disputes can arise in the usage and deployment of AI technologies. However, we are not focusing on the subject matter of dispute resolution, since any list would not be exhaustive: given AI's permeation of every sphere, it creates an interface with an enormous range of laws (civil law, cybersecurity, IPR, criminal law, competition law, etc.). Our focus is therefore on devising an appropriate method of dispute resolution depending upon the stakeholders in the dispute. The possible combinations of disputes that can arise in the sphere of AI technologies are between: (i) Government v. Business; (ii) Business v. Business; (iii) Consumers/Citizens v. Business.
In accordance with the actors in the dispute, we propose how the disputes should be resolved efficiently. Primarily, for disputes arising between the government and businesses dealing with the production, development and research of AI technologies or software, the most effective method would be adjudication by an ad hoc authority dealing with disputes arising out of AI technologies. The ad hoc dispute resolution centre will specifically deal with AI-related matters and concerns: disputes arising between the government and a business, or between consumers and a business. The decision of the ad hoc tribunal will be binding upon the parties. The ad hoc dispute resolution centre should consist of both legal professionals and members with relevant technology expertise, and the decision of the majority of its members will be binding upon the parties.
For disputes arising between businesses themselves, however, it is more viable to address the disputes using ADR methods such as online dispute resolution, arbitration, negotiation, expert determination, and mediation. Any of these modes of dispute resolution can be favoured by businesses as a method of private settlement. The essential requirement for arbitration is a pre-existing arbitration clause, and the willingness of the parties to settle the dispute favours speedier disposal and resolution.
Expert determination can also be utilized between companies: an independent expert deals with an issue between the parties, who agree beforehand whether the expert's prospective decision will be binding. It is a speedier and more informal way of resolving disputes. Since private parties favour informal yet speedy resolution of disputes, unencumbered by procedural obligations such as those of arbitration, this can be an attractive option between private parties such as MNCs and companies. However, the parties should specify an expert determination clause prescribing the conduct and participation of the parties, to make the decision enforceable against a party.
In this way, the various methods of the dispute resolution framework are proposed keeping in mind the parties or stakeholders, their negotiating and bargaining power, the economic costs, and the formal or informal mode of dispute resolution suggested accordingly. A combination of formal and informal recourses is also possible as we advance creative concepts that favour the parties and their interests.
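The stakeholder-based routing described above can be sketched as a small decision function. The `Party` enum and `resolution_forum` name are hypothetical illustrations of the proposal, not an existing mechanism:

```python
from enum import Enum, auto

class Party(Enum):
    GOVERNMENT = auto()
    BUSINESS = auto()
    CONSUMER = auto()

def resolution_forum(a: Party, b: Party) -> str:
    """Route an AI-related dispute to the forum proposed in this report."""
    parties = {a, b}
    if parties == {Party.BUSINESS}:
        # Business v. Business: private ADR, given a pre-existing
        # arbitration or expert determination clause.
        return "ADR (arbitration / mediation / expert determination / ODR)"
    if Party.BUSINESS in parties:
        # Government v. Business and Consumer v. Business: the binding
        # ad hoc AI dispute resolution centre.
        return "ad hoc AI dispute resolution centre (binding, majority decision)"
    raise ValueError("combination not covered by the proposal")
```

The sketch makes the report's asymmetry explicit: disputes involving the state or consumers go to the binding ad hoc centre, while purely commercial disputes stay in private settlement.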
Recommendations
The following are the recommendations provided in this technical report:

• Companies should develop their own autonomous Risk-Centric Approaches (RCAs) and establish confidence-building measures to address any substantive and procedural issues related to knowledge management of and via AI technologies.

• Companies, whether MNCs or MSMEs, must constitute autonomous and impartial AI Ethics and Oversight Bodies. The body so constituted should assess the seminal infrastructure of AI technologies within the corporate structure and how improvements can be implemented on a regular basis. Such coordination would help future companies and MNCs develop reasonable approaches to innovation and knowledge management.

• Companies must gradually align with emerging international and national-level AI ethics standards, provided those standards have legal tangibility. At the same time, they can develop self-regulated AI ethics approaches, which can be audited and subjected to compliance review. Hence, their soft-law approaches to AI ethics standards must also have some policy and ethical tangibility.

• Emerging MSMEs must be considered critical stakeholders as far as the business environments for the proliferation, democratization and regularization of AI technologies are concerned. Acknowledging them as critical stakeholders must be based on their host country in the Indo-Pacific. In second countries, where they wish to expand further, they should be considered optimal stakeholders at a suggestive level, so that their status as optimal stakeholders develops gradually on the basis of their performance in foreign business environments, in due cooperation with the government of their host country. Foreign governments must therefore have a larger say, but they must be conducive to and conscious of the interests of the MSME and its host country's government.

• Dispute resolution mechanisms should be handled through Risk-Centric Approaches (RCAs), but they must be fundamentally oriented towards a stakeholder-centric approach. We do not recommend exhaustive methods and suggest that future research be conducted on this matter; the basic recommendation provided here should, however, be given high consideration in that future research.

• Algorithmic anthropomorphism and its pertinent risks must define Human-Centric Approaches (HCAs), and based on the fiduciary relationship between HCAs and RCAs, more mature Rights-based Approaches (RiCAs) can be accepted on specific issues. The commonalities on which RiCAs can transpire, derived from the commonalities of the RCAs, can provide avenues for constructive engagement towards enshrining more global RiCAs.

• Instead of defining global or regional aspects of what constitutes Inclusive AI under the AI ethics principles discussed in the section on the Regulatory Approach, governments should focus on the cyclicality of the purpose and democratisation of AI technologies, distinctively based on the components of ILER.

• Resilience of global supply chains is a necessity. To avoid repeating the mistakes of erstwhile models of technology transfer, the Indo-Pacific countries must ensure that RCAs are sensitive and conscious towards enabling the resilience of global supply chains.

• The Indo-Pacific as a conception, with its policy and geopolitical considerations and the neorealist tendencies of the normative construct, must be practically inclined towards two geopolitical and geostrategic schemes of policy: Indo-Europeanism and India-centricity. The former means that cooperation between India and European countries is necessary not only at governmental levels, but also at community and corporate levels, with a permeable aspect. The latter means that India is an eligible and highly responsible stakeholder in the Indo-Pacific region, not merely in terms of the opportunity costs that India can afford, but also because of India's geoeconomic role, which extends to the impact and role of information, development and knowledge economies not just within India, but across the Indo-Pacific region per se.