Taking the “HUMAN” out of “HUMAN RESOURCES”? Mitigating the Risks of AI in HR
By AUGUST JOHANNSEN
To overcome the challenges that AI tools present, HR professionals must actively collaborate with AI experts and data scientists to develop AI systems that are fair, transparent, and compliant. They need to invest in training and education to understand the intricacies of AI technology, its limitations, and the potential biases it can introduce. Establishing clear policies and guidelines for AI tool usage, including regular audits and reviews, is crucial to identifying and rectifying legal compliance issues. Ultimately, HR professionals should maintain a human-centric approach, using AI tools as aids rather than replacements for human judgment and ethical decision-making, in order to strike a balance between efficiency and legal compliance in the ever-evolving landscape of HR.
This article first briefly addresses the budding regulatory landscape relating to AI tools for employment decision-making. It concludes with some suggestions on risk management and positive approaches that HR professionals can consider as innovative AI tools continue to improve and be incorporated into HR processes.
Recent Regulatory Action
AI tools are increasingly the subject of legislation and litigation. In October 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative relating to the understanding and regulation of AI use in employment decision-making. The EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative is intended to examine how technology impacts the way employment decisions are made, and give applicants, employees, employers, and technology vendors guidance to ensure these technologies are used lawfully under federal equal employment opportunity laws. Over the past year and a half, the EEOC has held listening sessions with key stakeholders about algorithmic tools and their employment ramifications; gathered information about the adoption, design, and impact of hiring and other employment-related technologies; and begun to identify promising practices.
In May 2022, the EEOC issued its first AI-related technical assistance document, which addressed how employers' use of algorithmic decision-making tools can run afoul of the Americans with Disabilities Act (ADA), for example by screening out applicants or employees with disabilities. Employers can take steps to mitigate this outcome; implementing human review of AI decisions is one possible solution.
EEOC technical assistance does not have the force of law and is not binding, but it provides helpful insight into potential enforcement actions the agency may take in the future. Indeed, the significance of the May 2022 technical assistance document was underscored in January 2023, when the EEOC published its draft Strategic Enforcement Plan (SEP), outlining where and how the EEOC will direct its resources. The EEOC plans to focus on "the use of automatic systems, including artificial intelligence or machine learning, to target advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups."
In May 2023, the EEOC further demonstrated its commitment to its enforcement plan by issuing another AI-related technical assistance publication, this time about how AI tools can implicate Title VII compliance. The May 2023 Title VII technical assistance encourages organizations to evaluate AI-powered employment screening tools under the 1978 Uniform Guidelines on Employee Selection Procedures, which provide guidance about how to determine if decision-making procedures are compliant with Title VII disparate impact analysis. Though significantly narrower than the ADA publication, the EEOC’s May 2023 Title VII technical assistance may signal that the EEOC believes AI tools can and should fit within existing regulatory structures.
Whether the EEOC’s regulatory enforcement strategy will be workable remains to be seen. Regardless, the May 2022 ADA technical assistance, January 2023 SEP, May 2023 Title VII technical assistance, and any future guidance issued by the EEOC should be carefully heeded by HR professionals to mitigate regulatory liability and to ensure their organizations’ compliance with legal duties.
What to Do?
The proliferation of AI tools has the potential to ease administrative burdens, but it also raises new legal compliance concerns. HR professionals are likely familiar with exposing and avoiding bias in human decision-making, but how can that bias be avoided when decisions are outsourced to AI? Organizations should take a proactive approach to scrutinizing and integrating AI tools into their operations with legal compliance in mind. The following are several recommendations for how to do so:
1. Identify how AI will be used. Organizations can use AI tools in myriad ways, but before stakeholders can make an informed decision on implementing such tools, they should understand exactly how and why the technology will be used. This will allow them to ensure compliance with legal requirements, evaluate risks, and employ risk-mitigation measures.
2. Test the AI. An ounce of prevention is worth a pound of cure. AI tools should be tested prior to implementation to determine whether they exhibit bias or other negative characteristics that could affect their usability. There are a variety of pre-implementation measures an organization can take to reduce potential bias in AI outputs, including relying on diverse and representative training data, regularly reviewing and testing outputs for bias, and disclosing training data and algorithms to enhance transparency and accountability. (One common way to test outputs for adverse impact is sketched after this list.)
3. Establish guidelines. Once an organization defines how it wants to use an AI tool, and tests it to ensure it is unbiased, it should prepare a policy establishing guidelines for the use of the tool. Such guidelines might include identifying who may use the tool, the types of data that may be entered into the tool, the level of decision-making that can be based on the output (i.e., none, preliminary subject to human review, final), how the output will be described to its users, and information about the organization’s governance and oversight of the AI tool.
4. Actively and frequently address legal compliance. Organizations should rely on their legal counsel to check for recent legal developments and their implications before launching any AI tool. Organizations should continue to periodically re-assess the situation post-implementation as well. Needs may have changed, or the vendor may have expanded, reduced, or otherwise changed the tool such that it is no longer an appropriate fit for the organization. Risks should be frequently re-evaluated to understand how they have changed and to ensure previous mitigation measures are still effective.
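As a concrete illustration of recommendation 2, the sketch below applies the "four-fifths rule" drawn from the Uniform Guidelines on Employee Selection Procedures referenced earlier, which flags any group whose selection rate falls below 80% of the highest group's rate. This is a minimal, hypothetical example, not legal advice and not the EEOC's prescribed methodology; the group labels, sample data, and function names are illustrative assumptions, and a real adverse-impact analysis should be conducted with counsel and appropriate statistical rigor.

```python
# A minimal sketch of a "four-fifths rule" adverse-impact check.
# Assumes you can export per-applicant screening outcomes from the AI tool;
# the group labels and sample data below are hypothetical.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ("Group A", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(outcomes):
    """Return each group's selection rate and whether it is at least 80%
    of the highest group's rate (the Uniform Guidelines' rule of thumb)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: (rate, rate / highest >= 0.8) for group, rate in rates.items()}

# Hypothetical results from an AI resume-screening tool.
sample = ([("Group A", True)] * 48 + [("Group A", False)] * 52
          + [("Group B", True)] * 30 + [("Group B", False)] * 70)

for group, (rate, passes) in four_fifths_check(sample).items():
    print(f"{group}: selection rate {rate:.0%}, within four-fifths of top rate: {passes}")
```

In this hypothetical, Group B's 30% selection rate is only 62.5% of Group A's 48% rate, so the tool would warrant closer scrutiny before deployment.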
Conclusion
HR professionals are pioneers in implementing cutting-edge AI technology for use in employment decisions. It is therefore crucial that they be intentional and thoughtful about integrating these tools into their decision-making processes. An ongoing commitment to monitoring and assessing both the technology and the law is critically important to maintaining legal compliance once AI is incorporated into HR decisions. Informed, deliberate implementation of AI technology can effectively and compliantly maintain the inimitable "human" aspect of "human resources."
August Johannsen, Attorney at Law
AJohannsen@littler.com
Littler Lexington
333 West Vine Street | Suite 1720
Lexington, KY 40507
www.Littler.com
Every program or project needs a target. Do you agree? If you do, please read on.
What is the target for your first day of orientation with your new team members? Is it to make it to lunch without anyone in the New Hire Class quitting? NO! Your target should be that the new team member brags and brags to his family at dinner that evening about how very meaningful the day of orientation was and how excited he is to begin his career with your organization.
How do you get there? In this article you will learn how to make your new hires' first day memorable (for positive reasons) and powerful (for retention reasons). Did you know that one of the first retention strategies is orientation? Crazy, isn't it? Your new team member will begin forming her view of your organization during the recruitment process; however, her view will be solidified during the orientation and on-boarding programs.
For the purpose of this article, let’s clarify the difference between orientation and on-boarding. In my professional opinion, orientation is part of the on-boarding process.
Orientation is the first day (or up to a week) of the team member’s employment experience. This article will supply you with everything you need to know and do during your new hire orientation program to set the hook!
On-boarding refers to the first couple of weeks (or up to a year) of employment, during which the new team member becomes fully trained and ready to be productive. Perhaps the next article can cover Championship On-boarding.
Below are a few statistics that prove that your time reading this article will be well spent:
1. Only 10% of companies do a good job of on-boarding. Tragic!
2. Between 33% and 67% of new hires leave the company within 12 months. That is a ZERO production rate.
3. Companies that do a great job at orientation have employees who are twice as prepared for the job.
4. A solid, purposeful program increases employee engagement, which decreases turnover.
Have you ever heard the statement, "Every system is perfectly designed for the results it gets"? The same goes for your Orientation Program. Do you want new team members who are confused, bored, and frustrated at the end of the first day, or do you want folks itching to come back the next day to see what is in store for them? After reading this article, you will have the tools you need to make raving fans out of your new team members. What a GREAT retention tool!