US civil rights officials warn employers of biased AI

The federal government said Thursday that artificial intelligence technology to screen new job applicants or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that commonly used hiring tools could violate civil rights laws.

The U.S. Department of Justice and the Equal Employment Opportunity Commission jointly released guidance warning employers to be careful before using popular algorithmic tools intended to streamline the evaluation of employees and job prospects, but which could also potentially violate the Americans with Disabilities Act.

“We are sounding the alarm about the dangers of indiscriminate reliance on AI and other technologies that we see increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s civil rights division told reporters Thursday. “The use of AI compounds the long-standing discrimination faced by job seekers with disabilities.”

Examples given of popular work-related AI tools included resume scanners, employee monitoring software that categorizes workers based on keystrokes, and video interviewing software that measures speech patterns or a person’s facial expressions. Such technology could potentially weed out people with speech impairments or a range of other disabilities.

The move reflects a broader push by President Joe Biden’s administration to foster positive advances in AI technology while limiting the opaque and potentially dangerous AI tools that are used to make important decisions about people’s livelihoods.

“We totally recognize that there is huge potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing workplace discrimination laws. “But we can’t let these tools become a high-tech route to discrimination.”

A researcher who has studied bias in AI hiring tools said holding employers accountable for the tools they use is an “important first step,” but added that more work is needed to rein in the vendors who make these tools. That would likely fall to another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a law professor at the University of North Carolina and founding director of its AI Decision-Making Research Program.

“There is now a recognition of how these tools, which are typically deployed as an anti-bias intervention, could actually lead to more bias – while obscuring it,” Ajunwa said.

Copyright © The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
