The possible uses of artificial intelligence (AI) have received much coverage lately. Now the risks of using AI to assist in the hiring process are in the spotlight since the EEOC just settled its first suit alleging discrimination in hiring through the use of AI.
In the lawsuit, the EEOC alleged that iTutorGroup programmed its tutor application software to automatically reject female applicants age 55 or older and male applicants age 60 or older, resulting in more than 200 qualified U.S. applicants being rejected because of their age. To settle, the parties entered a five-year consent decree that includes injunctive relief, training requirements, ongoing reporting obligations, and a payment of $365,000 to the applicants.
The lawsuit stems from the EEOC's initiative to ensure that AI and other emerging technologies used in employment decisions do not violate federal employment laws. In May 2023, the EEOC released technical assistance on this topic.
For the EEOC’s purposes, AI is a machine-based system that can “make predictions, recommendations or decisions influencing real or virtual environments.” In the employment context, AI typically involves relying partly on the computer’s own analysis of data to determine which criteria to use when making decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
Examples of AI in the selection and hiring process include:
- resume scanners that prioritize applications using keywords;
- employee monitoring software that rates employees on the basis of their keystrokes or other factors;
- "virtual assistants" or "chatbots" that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
- video interviewing software that evaluates candidates based on their facial expressions and speech; and
- testing software that provides "job fit" scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived "cultural fit" based on their performance on a game or test.
The technical assistance references a four-fifths "rule of thumb" for determining whether the selection rate for one group is substantially different from the selection rate for another group. One rate is substantially different from another if their ratio is less than four-fifths (80%). For example, if a personality test scored by AI produces a selection rate of 30% for Black applicants and 60% for White applicants, the ratio is 30% ÷ 60% = 50%, which falls below the four-fifths threshold; the selection rate for Black applicants is therefore substantially different from the selection rate for White applicants, which could be evidence of discrimination against Black applicants. However, use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance. This is especially true where a significant number of individuals are affected by the AI screening.
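For readers who want to see the arithmetic, here is a minimal sketch of the four-fifths check in Python. The function name and the applicant counts are our own illustrative assumptions, not anything prescribed by the EEOC guidance:

```python
def four_fifths_check(selected_a, applicants_a, selected_b, applicants_b):
    """Compare two groups' selection rates under the four-fifths rule of thumb.

    Group B is treated as the reference (higher-selected) group; the check
    flags the comparison when group A's rate is less than 80% of group B's.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    ratio = rate_a / rate_b
    return rate_a, rate_b, ratio, ratio < 0.8

# Hypothetical numbers mirroring the 30% vs. 60% example above:
rate_a, rate_b, ratio, flagged = four_fifths_check(30, 100, 60, 100)
print(f"Rates: {rate_a:.0%} vs. {rate_b:.0%}, ratio {ratio:.0%}, flagged: {flagged}")
# -> Rates: 30% vs. 60%, ratio 50%, flagged: True
```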
Employers who use AI in the selection process should take the following steps to avoid discriminatory outcomes:
- Check whether use of the tool produces a selection rate for individuals in a protected group that violates the four-fifths rule or reflects a statistically significant difference;
- Make sure any vendor used to develop or administer an AI tool has evaluated and addressed the possibility of discriminatory selection rates;
- Audit AI tools on an ongoing basis to assess their impact and proactively change any practices that violate the four-fifths rule or indicate a statistically significant difference (a simple sketch of such a check appears after this list).
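Because the guidance treats the four-fifths rule as a rule of thumb rather than a substitute for a significance test, an ongoing audit might pair the ratio check with something like a two-proportion z-test. The sketch below shows one common approach using hypothetical applicant counts; it is an illustration only, not a method specified by the EEOC:

```python
from math import sqrt, erfc

def two_proportion_z_test(selected_a, applicants_a, selected_b, applicants_b):
    """Two-sided z-test for a difference between two groups' selection rates."""
    p_a = selected_a / applicants_a
    p_b = selected_b / applicants_b
    pooled = (selected_a + selected_b) / (applicants_a + applicants_b)
    se = sqrt(pooled * (1 - pooled) * (1 / applicants_a + 1 / applicants_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal approximation
    return z, p_value

# Hypothetical audit data: 30 of 100 applicants selected in one group,
# 60 of 100 in the other.
z, p = two_proportion_z_test(30, 100, 60, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests the gap is unlikely to be chance
```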