What’s the risk for employers using AI?
Sianatu Lotoaso, Associate at Dundas Street Employment Lawyers, looks at the steps employers need to take when using generative artificial intelligence (AI) tools in business.
Generative AI is transforming businesses, and the way we work, at a rapid pace and shows no sign of slowing down. Many employers are increasingly using AI to automate business systems and processes to create efficiencies or shortcuts, but this comes with legal risks and implications that employers must manage.
The recent case of two US lawyers who used generative AI to prepare legal submissions relying on fake court cases provides a cautionary tale for employers on the pitfalls of AI. The Office of the Privacy Commissioner (OPC) has also released guidance for employers on the potential privacy risks associated with generative AI tools. It sets out simple and practical steps employers can take to mitigate the risks of using generative AI.
WHAT IS GENERATIVE AI?
AI is machine or software intelligence that attempts to mimic human intelligence. Generative AI refers to tools or apps that use vast amounts of information, including personal information, to generate content such as audio, code, essays, images, videos and human-like conversations. Common generative AI tools include ChatGPT, Microsoft’s Bing Search and Google’s Bard.
WHAT COULD GO WRONG?
AI can (over)confidently generate seemingly legitimate content that is in fact inaccurate (known as ‘hallucinations’), including citations to non-existent sources. A recent example concerns two US lawyers who were fined US$5,000 for submitting fake court cases generated by ChatGPT (a chatbot that produces plausible text responses to human prompts).
Steven Schwartz said he used ChatGPT to research cases to support his client’s claim against the Colombian airline Avianca for an injury suffered on a flight. In the Mata v Avianca proceedings, he said he “just never thought it could be made up”. Peter LoDuca, who also worked on the case, said that he did not review any of the cases cited by Schwartz. Rather, he simply trusted the work of Schwartz, a colleague of more than 25 years, and said it “never crossed my mind” that the cases were bogus.
US District Judge Castel held that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance”, lawyers still had a “gatekeeping role” to ensure the accuracy of their work.
He found that Schwartz and LoDuca had “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question”.
OFFICE OF THE PRIVACY COMMISSIONER’S GUIDANCE ON GENERATIVE AI
The OPC has released guidance setting out its position that the “responsibility of complying with the requirements of the Privacy Act lies with agencies (whether in the public, private, or not-for-profit sectors)”. This means that, ultimately, the employer (and not the AI tool provider) is responsible for compliance with New Zealand privacy laws.
The OPC also sets out the top risks for employers using AI, including the following.
1. Generative AI relies on inputted data to operate. This presents a risk that, to the extent businesses share confidential and personal information with an AI tool, the tool may not have adequate privacy protections in place, or the information may be disclosed.
2. Generative AI can perpetuate bias and discrimination and can produce “confident errors of fact” (as seen in the US case above). This can include recruitment processes using generative AI where certain candidates are favoured or disfavoured on the basis of their race, gender or other protected grounds.
3. Generative AI tools may not allow businesses to comply with their privacy obligations.
While the development of AI is inevitable and has significant benefits for the workplace, there are also potential pitfalls and risks to employers, which must be actively managed.
The OPC’s guidance also sets out practical steps for New Zealand employers to take when using generative AI tools in their business, including the following.
1. Review whether the use of generative AI is necessary and proportionate.
2. Only use a generative AI tool after conducting a privacy impact assessment to identify and mitigate privacy risks.
3. Be transparent with customers and clients about how their personal information will be used and how potential privacy risks are being addressed.
4. Have a human review the outputs of any generative AI tool before taking any action to help mitigate the risk of acting on inaccurate or biased information.
5. Only share personal or confidential information with an AI tool if the provider has given explicit confirmation that inputted data is not retained or disclosed.
Sianatu Lotoaso is an Associate at Dundas Street Employment Lawyers. Sianatu advises on all aspects of employment law and the employment relationship, and regularly advises a range of clients in the public and private sectors.