U.S. warns of discrimination in using AI to screen job candidates : NPR

Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference on Aug. 5, 2021. The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor their productivity can unfairly discriminate against people with disabilities.

Andrew Harnik/AP



The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The U.S. Justice Department and the Equal Employment Opportunity Commission jointly issued guidance urging employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects, tools that could also run afoul of the Americans with Disabilities Act.

“We’re sounding an alarm about the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s Civil Rights Division told reporters Thursday. “The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face.”

Among the examples given of popular work-related AI tools were resume scanners, employee monitoring software that ranks workers based on keystrokes, game-like online tests to assess job skills, and video interviewing software that measures a person’s speech patterns or facial expressions.

Such technology could potentially screen out people with speech impediments, severe arthritis that slows typing, or a range of other physical or mental impairments, the officials said.

Tools built to automatically analyze workplace behavior can also overlook on-the-job accommodations, such as a quiet workstation for someone with post-traumatic stress disorder or more frequent breaks for a pregnancy-related disability, that allow employees to modify their work conditions and perform their jobs successfully.

Experts have long warned that AI-based recruitment tools, while often pitched as a way of eliminating human bias, can actually entrench bias if they take their cues from industries where racial and gender disparities are already prevalent.

The move to crack down on the harms these tools can bring to people with disabilities reflects a broader push by President Joe Biden’s administration to foster positive advancements in AI technology while reining in the opaque and largely unregulated AI tools that are being used to make important decisions about people’s lives.

“We totally recognize that there’s enormous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot let these tools become a high-tech pathway to discrimination.”

A scholar who has researched bias in AI hiring tools said holding employers accountable for the tools they use is a “great first step,” but added that more work is needed to rein in the vendors that make those tools. Doing so would likely be a job for another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of its AI Decision-Making Research Program.

“There is now a recognition of how these tools, which are often deployed as an anti-bias intervention, might actually result in more bias – while also obfuscating it,” Ajunwa said.

A Utah company that runs one of the best-known AI-based hiring tools, the video interviewing service HireVue, said Thursday that it welcomes the new effort to educate workers, employers and vendors, and highlighted its own work studying how autistic applicants perform on its skills assessments.

“We agree with the EEOC and DOJ that employers should have accommodations for candidates with disabilities, including the ability to request an alternate path by which to be assessed,” said the statement from HireVue CEO Anthony Reynolds.
