New EEOC Guidance: Use of artificial intelligence may discriminate against workers or applicants with disabilities | Foley & Lardner LLP
On May 12, 2022, the Equal Employment Opportunity Commission (EEOC) issued new, comprehensive technical guidance titled The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Evaluate Applicants and Employees. The guidance, which covers a number of areas, defines algorithms and artificial intelligence (AI); gives examples of how AI is being used by employers; answers the question of employer liability for the use of vendor AI tools; requires reasonable accommodations when using AI in this context; addresses the “screen-out problem,” where AI rejects candidates who could otherwise perform the job with reasonable accommodation; imposes restrictions to avoid disability-related and medical inquiries; promotes “promising practices” for employers, applicants, and employees alike; and gives many specific examples of pitfalls of discriminating against people with disabilities when using AI tools.
Here are some key takeaways from the new guidance:
Employers may be subject to ADA liability for AI vendor software:
- Vendor software risk exposure. Employers who use AI-powered decision-making tools to evaluate employees or job applicants may be held liable under the Americans with Disabilities Act (ADA) for the shortcomings of that technology. Even if the AI tool is developed or maintained by a third party, the employer can be on the hook—especially if the employer has “given [the vendor] authority to act on the employer’s behalf.”
- Risk on both the testing side and the accommodation side. This means that employers must manage the risk arising from the AI vendor’s action or inaction both in administering the assessment and in providing reasonable accommodation. If an individual requests a reasonable accommodation because of a disability and the vendor denies the request, the vendor’s inaction can expose the employer to liability even if the employer never knew of the request.
- Review vendor agreements. Employers should carefully review indemnification and other liability-limitation and liability-allocation provisions in their AI vendor agreements.
AI tools may unlawfully “screen out” qualified individuals with disabilities:
- Screen-outs. “Screen-outs” in the AI context can occur when a disability affects an individual’s performance on an AI-driven hiring assessment, or prevents a candidate from being considered in the first place because they do not meet AI-driven threshold criteria. Under the ADA, a screen-out is unlawful if the tool screens out a person who is able to perform the essential functions of the job with reasonable accommodation.
- Examples. AI tools can screen out those with limited manual dexterity (needed to use a keyboard); who are visually, hearing, or speech impaired; who have employment gaps due to past disability-related issues; or who suffer from PTSD (thus skewing the results of, for example, personality tests or gamified memory tests).
According to the guidance: “A disability could have that [screen-out] effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether.”
- Bias-free? Some AI-based decision-making tools are marketed as “validated” to be “bias-free.” That sounds good, but this labeling may not extend to disabilities, as opposed to gender, age, or race. Disabilities—physical, mental, or emotional—span a wide range, can be highly individualized (including the accommodations they require), and are therefore less amenable to bias-free software adjustments. For example, learning disabilities can often go undetected by human observers because their severity and characteristics vary so widely. Employers need assurances that AI can do better.
AI screens can generate unlawful disability-related and medical inquiries:
- Unlawful inquiries. AI-driven tools can generate unlawful “disability-related inquiries” or seek information constituting a “medical examination” before extending conditional job offers to applicants.
According to the guidance: “An assessment includes ‘disability-related inquiries’ if it asks job applicants or employees questions that are likely to elicit information about a disability, or directly asks whether an applicant or employee is an individual with a disability. It qualifies as a ‘medical examination’ if it seeks information about an individual’s physical or mental impairments or health. An algorithmic decision-making tool that could be used to identify an applicant’s medical condition would violate these restrictions if administered prior to a conditional job offer.”
- Indirect violations. Not all health-related inquiries from AI tools qualify as “disability-related inquiries or medical examinations”—but they can still run afoul of the ADA in other ways.
According to the guidance: “[E]ven if a request for health-related information does not violate the ADA’s restrictions on disability-related inquiries and medical examinations, it may still violate other parts of the ADA. For example, if a personality test asks questions about optimism, and someone with Major Depressive Disorder (MDD) answers those questions negatively and thereby loses a job opportunity, the test may ‘screen out’ the applicant because of MDD.”
Best practices: clearly communicate what is being measured—and that reasonable accommodations are available:
There are a number of best practices employers can follow to manage the risk of using AI tools. The guidance calls them “promising practices.” Key points:
- Disclose subject matter and methodology. Regardless of whether a third party developed the AI software, tool, or application, as a best practice employers (or their vendors) should inform employees or job applicants—in plain, understandable terms—what the assessment entails. In other words, disclose in advance the knowledge, skill, ability, education, experience, quality, or trait that the AI tool will measure or test. Likewise, disclose how the assessment is administered and what it requires—using a keyboard, answering questions orally, interacting with a chatbot, or what have you.
- Invite accommodation requests. Armed with this information, an applicant or employee has a better opportunity to speak up in advance if they believe a reasonable accommodation is needed. Accordingly, employers should consider asking employees and job applicants whether they need a reasonable accommodation to use the tool.
- Apparent or known disability: If an employee or applicant with an apparent or known disability requests an accommodation, the employer should respond to that request promptly and appropriately.
- Non-apparent disability: If the disability is not otherwise known, the employer may request medical documentation.
- Provide reasonable accommodation. Once the claimed disability is confirmed, the employer must provide reasonable accommodation, even if this means offering an alternative test format. This is where the guidance may truly come into tension with the use of AI. As such tools become ubiquitous, alternative tests may seem inadequate by comparison, creating potential for discrimination between individuals assessed by AI and those assessed the old-fashioned way.
According to the guidance: “Examples of reasonable accommodations may include specialized equipment, alternative tests or test formats, permission to work in a quiet environment, and exceptions to workplace policies.”
- Protect medical information. As always, all medical information obtained in connection with accommodation requests should be kept confidential and stored separately from the employee’s or applicant’s personnel file.
With the increasing reliance on AI in the private employment sector, employers need to expand their proactive risk management to address the unintended consequences of this technology. The legal standards remain the same, but AI technology can test the boundaries of compliance. Employers should not only make best efforts on that front, but also carefully consider other means of risk management, such as contract terms and insurance coverage.
This article was created with the support of Ayah Housini, Summer 2022 Contributor.