REGULATING THE USE OF AI
In April, the European Commission published its draft regulation on artificial intelligence. Its aim is to protect the safety and rights of individuals and organisations while helping to foster innovation. Although the UK has left the European Union, there are clear implications for UK companies that develop or are likely to use AI-based applications in the recruitment sector.
The international legal practice Osborne Clarke explains that the legislation envisages a full regulatory framework, including new EU and national bodies with strong enforcement powers and heavy fines for non-compliance. It adds that the proposed legislation is shaped around the level of risk created by different applications of AI. Three levels are identified under the headings of:
Prohibited AI systems
High-risk AI systems
Codes of conduct and transparency for all other AI systems
The legal practice anticipates that the draft provisions will be subject to extensive lobbying and does not expect the framework to become law before 2023 at the earliest. That said, it is important that recruiters become aware of how the legislation could affect their practices sooner rather than later.
John Buyers, Osborne Clarke’s head of AI and machine learning, answers some of the key questions for recruiters.
What are the main areas of concern for recruiters in the framework?
The main concern in the draft EU framework is the classification as ‘High Risk’ of AI used to ‘select individuals for recruitment; for filtering applications or evaluating candidates’ (see Annex III to the Regulation, Section 4(a)), and of AI used to make decisions on ‘task allocation and for monitoring or evaluating performance’ (see Annex III, Section 4(b)). High-Risk AI is subject to a raft of mandatory requirements too extensive to list here, but in short it will require considerable investment in appropriate tools and people to ensure compliance, including, for example, ensuring appropriate demographic representation in the datasets used by AI systems and guarding against bias on an ongoing basis.
Which applications of AI in recruitment could most lead to non-compliance or unethical behaviour?
Automated (biometric) facial recognition systems used to detect autonomic responses in AI-led interviews – particularly to pick up unconscious responses and determine whether or not the candidate is telling the truth. These are more typically used in the US. Such systems are ethically questionable, variable in how well they function and arguably unlawful in GDPR-governed countries without specific user consent – which would seem very difficult to obtain on a lawful basis given the circumstances of an interview.
Automated filtering of CVs, especially using deep neural networks (which are opaque ‘black boxes’). These systems can create real bias and discrimination issues, particularly where they make ‘false correlations’ – as was shown when one automated system equated membership of golf clubs with success.
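To make that failure mode concrete, here is a minimal, hypothetical Python sketch (not drawn from Osborne Clarke’s commentary; the data and feature names are entirely synthetic and illustrative). It shows how a screening model trained on biased historical hiring decisions learns to reward an irrelevant attribute:

```python
# A minimal, hypothetical sketch of how an automated screening model can
# learn a spurious correlation from biased historical hiring data.
# All data is synthetic; feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# A genuinely job-relevant signal.
skills_score = rng.normal(0, 1, n)

# An irrelevant attribute (e.g. golf club membership) that happens to
# correlate with past hiring decisions because those decisions were biased.
golf_member = rng.binomial(1, 0.3, n)

# Historical "hired" labels: driven partly by skill, partly by bias.
logits = 1.0 * skills_score + 1.5 * golf_member - 1.0
hired = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Train a screening model on the biased historical outcomes.
X = np.column_stack([skills_score, golf_member])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: the weight on the
# irrelevant feature is comparable to the weight on actual skill.
print(dict(zip(["skills_score", "golf_member"], model.coef_[0].round(2))))
```

With a simple linear model like this, the learned bias can at least be read off from the coefficients; with a deep neural network, the same bias would be absorbed invisibly – which is exactly the ‘black box’ problem described above.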
In your experience so far, what are recruiters’ main concerns when using AI in the recruitment process?
Currently the industry is, rightly, very focused on personal data and the GDPR. There is little or no awareness of the risks – whether legal or ethical – in the use of AI.
How can they ensure they behave ethically and compliantly?
Only use AI in demonstrable and verifiable cases where it is really needed – not as a ‘nice-to-have’.
Run proper GDPR Data Protection Impact Assessments (DPIAs) to ensure this is the case, and run equivalent risk assessments on the AI side.
Make it clear to candidates precisely what technology is being used.
Understand the pitfalls of machine learning and the ‘black box paradigm’ (for example, you don’t necessarily understand how or why such a system reaches the decisions it does).
Invest in independent ethical and legal advice, and not exclusively from software providers, who often have a vested interest in selling such systems.