Amazon AI employment bias claims add to growing concerns about technology-driven hiring discrimination



AI hiring tools promise to streamline the hiring process and open doors to a wider pool of future employees.

AI hiring tools are ubiquitous: 492 of the Fortune 500 companies used an applicant tracking system to streamline recruitment and hiring in 2024, according to Jobscan. While these tools help employers screen more job seekers and identify relevant experience, human resources and legal experts warn that improper training and implementation of the technology can perpetuate bias.

Research offers stark evidence of AI hiring discrimination. A University of Washington Information School study published last year found that in AI-assisted resume screenings across nine occupations using 500 applications, the technology favored names associated with white applicants in 85.1% of cases and favored names associated with female applicants in only 11.1% of cases. In some settings, Black male candidates were disadvantaged compared with their white male counterparts in up to 100% of cases.

“You’re getting this positive feedback loop of training a biased model on increasingly biased data,” Kyra Wilson, a doctoral student at the University of Washington Information School and the study’s lead author, told Fortune. “We really don’t know where the cap is on how bad it will get before these models stop working correctly.”

Some workers claim they are seeing evidence of this discrimination outside of experimental settings. Last month, five plaintiffs over the age of 40 filed a class-action lawsuit against Workday, the workplace management software company, alleging that its job-applicant screening technology is discriminatory. Plaintiff Derek Mobley alleged in the initial lawsuit last year that the company’s algorithms caused him to be rejected from more than 100 jobs over seven years because of his race, age, and disabilities.

Workday denied the discrimination claims, telling Fortune in a statement that the lawsuit is “without merit.” Last month the company announced it had received accreditations from two third parties for its “commitment to developing AI responsibly and transparently.”

“Workday’s AI recruiting tools do not make hiring decisions. Our customers maintain full control and human oversight of the hiring process,” the company said. “Our AI capabilities only consider the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job. They do not use or even identify protected characteristics.”

Hiring tools are not the only technology workers are taking issue with. A letter sent to Amazon executives, including CEO Andy Jassy, on behalf of 200 employees with disabilities claimed the company flouted the Americans with Disabilities Act, The Guardian reported this week. Amazon allegedly made decisions on employee accommodations through an AI process that does not adhere to ADA standards. Amazon told Fortune that its AI does not make final decisions about employee accommodations.

“We understand the importance of responsible AI use, and ensure our AI integrations are built thoughtfully and fairly, with robust guidelines and review processes,” a spokesperson told Fortune in a statement.

How can AI employment tools be discriminatory?

Like other AI applications, the technology is only as smart as the information it is fed. Most AI hiring tools work by screening resumes or CVs, according to Elaine Pulakos, CEO of talent assessment developer PDRI by Pearson. They are trained on a company’s existing model of evaluating candidates, which means the models are likely to perpetuate existing biases and can produce “strange outcomes,” such as demographic skews indicating a preference for male candidates or Ivy League universities.
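The feedback loop Pulakos and Wilson describe can be made concrete with a toy sketch (all data here is hypothetical): a screener "trained" on a company's past decisions simply learns whichever features correlated with past hires, so a historical skew toward one type of resume becomes a learned preference.

```python
# Toy illustration with hypothetical data: a screening model learns
# P(hired | feature) from past outcomes. If historical hiring favored
# one group of resumes, the learned scores reproduce that skew.

from collections import defaultdict

# Historical decisions as (school_type, hired) pairs. The skew in this
# fabricated history (ivy hired 8/10, other hired 3/10) IS the bias.
history = [("ivy", True)] * 8 + [("ivy", False)] * 2 \
        + [("other", True)] * 3 + [("other", False)] * 7

def train(records):
    """Estimate the hire rate for each feature value from past outcomes."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [hired, total]
    for feature, hired in records:
        counts[feature][0] += hired
        counts[feature][1] += 1
    return {f: h / n for f, (h, n) in counts.items()}

model = train(history)
print(model)  # the historical skew, reproduced: ivy 0.8 vs. other 0.3
```

If the model's scores then shape which candidates get hired next, the next round of training data is even more skewed, which is the "increasingly biased data" loop Wilson warns about.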

“If you don’t have guardrails around the data you’re training your AI on, and you’re not making sure the AI doesn’t go off the rails and start hallucinating or doing weird things along the way, you’re going to get weird outcomes,” she told Fortune. “It’s just the nature of the beast.”

Because much of AI’s bias comes from human bias, AI hiring discrimination largely stems from the human hiring discrimination that remains prevalent today, according to Pauline Kim, a law professor at Washington University in St. Louis. A landmark 2023 Northwestern University meta-analysis of 90 studies across six countries found persistent and pervasive bias, including that employers called back white applicants on average 36% more than Black applicants and 24% more than Latino applicants with identical resumes.

The rapid scaling of AI in the hiring process can fan these flames, according to Victor Schwartz, associate director of technical product management at Bold, a remote-work job search platform.

“It’s much easier to build one fair AI system and scale it to do the equivalent work of 1,000 people than it is to train 1,000 people to be fair,” Schwartz told Fortune. “Then again, it’s much easier for it to be extremely discriminatory than to train 1,000 people to be discriminatory.”

“You’re flattening the natural curve that you would otherwise get across a lot of people,” he added. “So there’s an opportunity there. There’s also a risk.”

How HR and legal experts are fighting AI employment bias

Employees are protected from workplace discrimination by the Equal Employment Opportunity Commission and Title VII of the Civil Rights Act of 1964, but “there are no formal regulations specifically addressing AI employment discrimination,” Kim said.

Existing laws prohibit both intentional discrimination and disparate-impact discrimination, which arises when a facially neutral policy disproportionately harms a protected group, even if no harm was intended.

“If an employer builds an AI tool with no intent to discriminate, but it turns out the applicants screened out of the pool are disproportionately over 40, that would have a disparate impact on older workers,” Kim said.
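Kim's over-40 scenario maps directly onto the EEOC's long-standing "four-fifths rule" of thumb from the Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the most-selected group's rate is treated as evidence of adverse impact. A minimal sketch, using hypothetical applicant counts:

```python
# Minimal disparate-impact check using the EEOC's "four-fifths rule"
# of thumb. All applicant counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical AI screening results: applicants under 40 vs. 40 and over.
under_40 = selection_rate(selected=120, applicants=200)  # 0.60
over_40 = selection_rate(selected=30, applicants=100)    # 0.30

ratio = impact_ratio(under_40, over_40)  # 0.30 / 0.60 = 0.50
print(f"Impact ratio: {ratio:.2f}")
print("Potential adverse impact" if ratio < 0.8 else "Within four-fifths threshold")
```

Here the over-40 group is selected at only half the rate of the under-40 group, well below the 0.8 threshold, so the screen would be flagged for adverse impact regardless of the employer's intent. The four-fifths rule is a rough screening heuristic, not a legal test in itself.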

Disparate-impact theory is well established in law, but President Donald Trump has revealed his hostility toward this form of discrimination claim by seeking to eliminate it through an executive order in April.

“Agencies like the EEOC are not going to be pursuing disparate-impact cases, or trying to understand the ways these technologies could have disparate impacts,” Kim said. “They are really pulling back from efforts to understand these risks and educate employers about them.”

The White House did not immediately respond to Fortune’s request for comment.

With little federal-level effort to address AI employment discrimination, some local politicians have sought to tackle the technology’s potential for bias, including through a New York City ordinance that prohibits employers and employment agencies from using an “automated employment decision tool” unless the tool has passed a bias audit within a year of its use.

Melanie Ronen, an employment lawyer and partner at Stradley Ronon Stevens & Young, told Fortune that other state and local laws have focused on increasing transparency when AI is used in the hiring process, “including the opportunity [for prospective employees] to opt out of the use of AI in certain circumstances.”

The firms behind AI hiring and workplace assessment tools, including PDRI and Bold, said they have taken it upon themselves to mitigate bias in the technology, with PDRI’s Pulakos advocating for human evaluators to vet AI tools before implementation.

Schwartz, of Bold, argued that while guardrails, audits, and transparency are key to ensuring AI can carry out fair hiring practices, the technology could also diversify a company’s workforce if applied appropriately. He cited research showing that women tend to apply to fewer jobs than men, and often only when they meet all the listed qualifications. If AI tools for job seekers can streamline the application process, they could remove hurdles for those less likely to apply to certain positions.

“We can level the playing field a little bit by removing that barrier, using these auto-apply tools, or expert-apply tools,” Schwartz said.
