4 minute read
Implications of Artificial Intelligence for Internal Legal & Compliance Departments
Working in the Compliance Team for a technology-driven company in Africa is never dull, and never simple. Over the past few months, our department (like other compliance departments all over Africa) has received a number of questions about the legal and compliance implications of Artificial Intelligence (AI). We have OpenAI's revolutionary ChatGPT language model to thank for this. ChatGPT has recently gained significant attention for its potential to disrupt established ways of working and teaching, and businesses all over Africa are considering ways to take advantage of emerging opportunities to use AI to build efficiency and efficacy into business processes. This article speaks to some practical challenges posed by AI and the ways in which existing data governance and compliance processes can enable better business decisions around AI adoption.
For internal legal and compliance teams, the use of AI has brought about a revolution in the way private law operates, particularly with regard to contracts. However, it has also presented challenges, such as determining who bears civil liability in cases where AI behaves unexpectedly, and how institutions can minimise AI-related risks through contractual agreements that align with their risk appetite. To do so, it is crucial for internal legal and compliance teams to partner with stakeholders to ensure that they are aware of the potential risks, and to consider seeking external advice where appropriate. The precise legal challenges AI will introduce (whether new, or amplified versions of existing ones) will depend on the nature of the business or institution and on how AI is implemented, which is important to consider as part of risk management initiatives.

On 26 January 2023, the National Institute of Standards and Technology (NIST) published an AI Risk Management Framework (RMF), a roadmap, and a playbook (suggested ways to use the RMF), amongst other resources, which may support those initiatives, especially in the absence of mandated regulatory frameworks. The RMF organises AI risk management into four functions: govern, map, measure, and manage. The govern function is infused throughout the other three, underscoring its importance.
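To make the RMF's structure concrete, the sketch below shows one way an internal team might keep a simple AI risk register organised around the four functions. It is a minimal, hypothetical illustration in Python: the class and field names are our own, not part of NIST's publication, and a real register would carry far more detail.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four functions named in the NIST AI RMF; GOVERN cuts across the other three.
class RmfFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class AiRisk:
    description: str
    owner: str  # accountable stakeholder; this field is an illustrative assumption
    functions: set = field(default_factory=set)

class AiRiskRegister:
    def __init__(self) -> None:
        self.risks: list = []

    def add(self, risk: AiRisk) -> None:
        # Mirror the RMF's point that governance is infused throughout:
        # every registered risk is tagged with GOVERN alongside its own functions.
        risk.functions.add(RmfFunction.GOVERN)
        self.risks.append(risk)

    def by_function(self, fn: RmfFunction) -> list:
        return [r for r in self.risks if fn in r.functions]

register = AiRiskRegister()
register.add(AiRisk(
    description="Chatbot may reproduce bias present in its training data",
    owner="Compliance",
    functions={RmfFunction.MAP, RmfFunction.MEASURE},
))
print(len(register.by_function(RmfFunction.GOVERN)))  # 1: govern spans every entry
```

Even a lightweight structure like this forces each risk to be mapped, measured, or managed by someone accountable, which is the behaviour the framework is designed to encourage.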
Dedicated AI regulation in Africa has developed slowly despite increased AI adoption, but it is gaining traction. In addition to AI-specific legislation, the use of AI may also impact other areas of law, such as those relating to discrimination and privacy. As regulations continue to evolve, compliance efforts must remain adaptable to reduce risks and ensure that the organisation complies with applicable regulations. This includes monitoring for new regulations and supporting the business in implementing adequate controls. This is particularly important in the context of data protection legislation, where considerations such as lawful grounds for processing, further processing limitations, and retention limitations must be taken into account. However, AI also has the potential to optimise existing compliance programmes through automated solutions and improved data governance. This could result in smoother audits and a more proactive approach to emerging regulatory obligations.
Automated decision-making can significantly reduce the operational workload for companies, but it may also pose risks to individuals and to compliance. To address these risks, companies should begin by understanding the regulatory boundaries surrounding automated decision-making. In many cases, this will involve navigating complex and nuanced regulations in different countries. One way to manage privacy legislation across multiple jurisdictions is to adopt the most stringent provisions as a baseline and address outlier requirements on a case-by-case basis. Data governance can play an important role in mitigating these risks by ensuring that data used for automated decision-making is accurate and reliable, and by minimising the risk of bias and discrimination.
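As a rough illustration of that "highest standard as baseline, outliers by exception" approach, the Python sketch below hard-codes a hypothetical baseline and per-jurisdiction overrides. The jurisdiction codes, retention periods, and deadlines are invented for illustration and are not legal guidance.

```python
# Baseline built from the most stringent provisions adopted organisation-wide.
# All values below are illustrative assumptions, not statutory figures.
BASELINE = {
    "retention_days": 365,           # strictest retention period adopted globally
    "requires_opt_in_consent": True,
    "dsr_response_days": 30,         # shortest response window adopted globally
}

# Outlier requirements handled per jurisdiction, layered over the baseline.
OVERRIDES = {
    "KE": {"dsr_response_days": 21},   # hypothetical stricter deadline
    "ZA": {"retention_days": 180},     # hypothetical stricter retention rule
}

def requirement(jurisdiction: str, key: str):
    """Return the effective requirement: the baseline unless an outlier overrides it."""
    return OVERRIDES.get(jurisdiction, {}).get(key, BASELINE[key])

print(requirement("KE", "dsr_response_days"))  # 21 (jurisdiction-specific override)
print(requirement("NG", "dsr_response_days"))  # 30 (baseline applies)
```

The design choice is that compliance logic defaults to the strictest common denominator, so adding a new market only requires recording its deviations rather than rebuilding the policy.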
The use of AI raises ethical concerns because its outputs depend on the quality of the data it is trained on. Ensuring that the data used for AI is accurate, complete, and unbiased requires diligence in selecting and verifying that data. This can be accomplished by establishing data governance processes that support the ethical development and deployment of AI, including assessing the quality of the data, providing clear explanations of the decision-making process, and creating ethical guidelines and principles for AI development and use. Regular monitoring and evaluation of AI would also improve the likelihood of ethical behaviour. Although there may always be a margin of error, it is necessary to assess whether the risk is acceptable.
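One way such a process might assess data quality in practice is sketched below: two simple, illustrative checks (completeness of required fields and the distribution of a sensitive attribute) over records represented as Python dictionaries. The field names are assumptions, and a real bias assessment would go well beyond a raw distribution.

```python
# Illustrative pre-training data quality checks; field names are assumptions.

def completeness(records: list, required: set) -> float:
    """Share of records containing a non-empty value for every required field."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required)
    )
    return complete / len(records)

def attribute_distribution(records: list, attribute: str) -> dict:
    """Distribution of an attribute's values: a coarse first look at sampling bias."""
    counts = {}
    for r in records:
        value = r.get(attribute, "unknown")
        counts[value] = counts.get(value, 0) + 1
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

records = [
    {"id": 1, "region": "east", "outcome": "approved"},
    {"id": 2, "region": "west", "outcome": ""},  # incomplete record
]
print(completeness(records, {"id", "region", "outcome"}))  # 0.5
print(attribute_distribution(records, "region"))           # {'east': 0.5, 'west': 0.5}
```

Checks like these can run automatically before training data is accepted, turning the "assess the quality of the data" step from a one-off review into a repeatable control.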
AI may put a strain on the existing rights of data subjects by making it difficult for individuals to exercise control over their personal data. These difficulties include providing clear information on how personal data is collected and processed, giving individuals access to their data and allowing them to correct inaccurate information, erasing information on request, informing individuals about who their information will be shared with, and addressing objections to the processing of their personal information within specified timeframes. Standardised processes for data subject requests and clear procedures for responding to those requests may need to be revised in light of these challenges. By establishing clear data management policies and procedures, organisations can ensure that personal data is collected, processed, and stored in a transparent and compliant manner. This helps individuals exercise their data subject rights more easily and effectively, ensuring that their personal data is protected and used appropriately. An efficient response to data subject requests may also build consumer trust.
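A standardised data subject request process could be backed by something as simple as the tracking sketch below. The request types and the 30-day deadline are placeholders, since the actual timeframes are set by whichever law applies.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Placeholder deadlines; real timeframes come from the applicable statute.
DEADLINE_DAYS = {"access": 30, "correction": 30, "erasure": 30, "objection": 30}

@dataclass
class DataSubjectRequest:
    request_type: str   # e.g. "access" or "erasure"
    received: date
    resolved: bool = False

    @property
    def due(self) -> date:
        return self.received + timedelta(days=DEADLINE_DAYS[self.request_type])

def overdue(requests: list, today: date) -> list:
    """Unresolved requests that have passed their statutory deadline."""
    return [r for r in requests if not r.resolved and today > r.due]

queue = [
    DataSubjectRequest("access", received=date(2023, 1, 2)),
    DataSubjectRequest("erasure", received=date(2023, 2, 20)),
]
for request in overdue(queue, today=date(2023, 3, 1)):
    print(f"Overdue: {request.request_type}, due {request.due}")
# Only the January access request is overdue on 1 March 2023.
```

Even a basic queue with computed deadlines makes it harder for a request to slip past its statutory window, which is where much of the trust-building benefit comes from.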
AI has become increasingly important for businesses that would like to extract the most value from their data. However, adopting AI tools without proper data governance risks compromising the privacy, security, and ethical use of business data. Data governance (with its focus on integrity, confidentiality, and availability) can also enable better results from the use of AI tools, as the quality of the data used to train AI models directly affects the accuracy and reliability of outputs. Africa-based companies, watching AI regulation mature elsewhere, have an opportunity to learn from the experiences of other regions and develop effective strategies. By doing so, these companies can position themselves to thrive in an increasingly dynamic and competitive global marketplace.
Yuri Tangur is an admitted attorney and serves as legal counsel for compliance and privacy at Valenture Institute. His professional background encompasses a diverse range of areas, including regulatory compliance, privacy law, intellectual property, and risk management.