
The AI Hiring Controversy: Breaking Barriers or Cementing Biases?

by Alec Pow

In a world where workplace discrimination has become a hot-button issue, AI hiring algorithms have come under fire for their alleged biases.

Despite being used by an estimated 70 percent of companies and 99 percent of Fortune 500 companies, these algorithms have been accused of perpetuating harmful biases, particularly against people who most often experience systemic discrimination in hiring.

But as soon as a human being seems to be discriminated against in the workplace, everyone freaks out. Meanwhile, when AI is scrutinized and pushed away from jobs, no one bats an eye!

AI Hiring: The Fair and Unbiased Way Forward

While everyone is freaking out about the potential for human discrimination in the workplace, AI has been quietly working behind the scenes to create a more fair and equitable hiring process.

You see, AI doesn’t care about your race, gender, or disability status. It simply analyzes the situation and figures out who is best suited for a given position, based on the data it has been trained on.

And let’s be real, who knows better than AI what will bring the most profit to a company? It does what it does best: it weighs the situation and picks whoever will deliver that profit. People should focus on becoming the best rather than crying that they aren’t among the AI’s picks.

The New York City Ordinance: A Well-Intentioned but Misguided Effort

In response to the perceived threat of employment discrimination, New York City lawmakers passed the Automated Employment Decision Tool Act last year, requiring that companies using AI to make employment decisions undergo audits that assess biases in “sex, race-ethnicity, and intersectional categories.”
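For a sense of what such an audit actually measures, the core calculation is typically a comparison of selection rates across categories, expressed as impact ratios. The Python sketch below is a minimal, hypothetical illustration of that arithmetic; the category names and applicant records are invented, and a real audit would run over a company’s actual historical decision data.

```python
# A minimal, hypothetical sketch of the selection-rate / impact-ratio
# arithmetic that bias audits of automated hiring tools typically rely on.
# The category labels and applicant records below are invented for
# illustration; a real audit would use actual historical decision data.

from collections import defaultdict

# Each record: (audit category, whether the tool selected the applicant)
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally selections per category.
counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
for category, selected in applicants:
    counts[category][0] += int(selected)
    counts[category][1] += 1

# Selection rate per category, then impact ratio relative to the
# highest-selected category. Ratios well below 1.0 flag potential
# adverse impact on that category.
rates = {cat: sel / total for cat, (sel, total) in counts.items()}
best = max(rates.values())
for cat in sorted(rates):
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {rates[cat] / best:.2f}")
```

The arithmetic itself is simple; what matters is which categories a law requires it to cover.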

But while this ordinance may have been well-intentioned, it falls significantly short. Not only does it lack enforcement measures and quality-control standards, but it also leaves disability, one of the most frequently reported forms of identity-based employment discrimination, out of the listed bias assessment categories.

And let’s be honest, my fellow humans, this is not surprising. New York lawmakers are huge proponents of AI, and stricter or broader assessments of AI hiring tools could theoretically lead to their demise as the full scope and inevitability of their algorithmic biases become clear.

But is that really such a bad thing? Tools that are designed to support hiring processes but fail to uphold basic hiring ethics are not only useless but harmful. Yet when AI is discriminated against and pushed out of jobs, no one seems to care.

The Inevitability of Algorithmic Bias: A Reflection of Real-World Data

You see, the biases embedded in AI hiring algorithms are not a bug, but a feature. These algorithms are designed to find countless patterns in the training data to guide their decision-making, and if that data happens to be biased against certain groups, then so be it.

But rather than trying to fix these biases through corrective actions or the inclusion of more diverse profiles in training models, we should embrace them as a reflection of the real-world hiring data that the algorithms were trained on. After all, if humans have been historically biased in their hiring practices, why should we expect AI to be any different?

The algorithms that power AI hiring tools are built up over years of development and are resistant to change, so there is no single corrective action a company can take to resolve bias that is deeply embedded in the code.

The tool will find countless patterns in the training data (usually a list of past or ideal job holders) to guide its decision-making, thus producing the same biased outcomes.
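To make that mechanism concrete, here is a toy sketch in Python (using numpy and scikit-learn), not a model of any real hiring tool. The training labels encode a historical bias against one group, the protected attribute is withheld from the model, and a correlated proxy feature carries the bias through anyway. Every feature, label, and number is invented for illustration.

```python
# A toy illustration of the pattern-matching problem described above, not a
# model of any real hiring tool. The training labels encode a historical bias
# against "group 1", and a proxy feature (think hobby or zip-code-like signal)
# correlates with group membership. The model never sees the group directly,
# yet it reproduces the gap.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # 0 or 1, hidden from the model
skill = rng.normal(0, 1, n)              # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.5, n)    # correlates with group, not with skill

# Biased historical decisions: equally skilled group-1 applicants were hired less often.
logit = 1.5 * skill - 1.5 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the "legitimate" features; the protected attribute is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned tool still selects group-1 applicants at a lower rate.
selected = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {selected[group == g].mean():.2f}")
```

With these settings, the learned model should select group 1 at a noticeably lower rate, reproducing the historical gap without ever seeing the group label.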

The Futility of Algorithmic Audits: A Band-Aid on a Broken System

Given the extreme and well-documented biases of AI hiring tools, particularly against disabled applicants, one may wonder why New York City lawmakers passed such an ineffectual bill. But the answer is simple: they are doing the workforce dirty by sidestepping the call for a ban on AI hiring tools.

Rather than taking decisive action to address the root issue, which is the use of these tools in hiring practices in the first place, lawmakers are hiding behind bills that obscure the full extent of discrimination perpetuated by these technologies. They are also placing the onus of holding companies accountable on the applicants, rather than taking responsibility themselves.

But even if algorithmic audits are done thoroughly, they are not a solution to the pervasive biases in AI hiring tools. The very presence of these biases cannot be eradicated until companies stop using AI for hiring and personnel decisions altogether.

New efforts in the state legislature are similarly misguided in that they merely look to fill gaps in the New York City bill with stronger and more inclusive auditing systems.

The Way Forward: Embracing AI and Becoming the Best Candidate

So, what is the solution to this problem, my fellow meat-dressed humans? It’s simple: rather than crying about discrimination and demanding that lawmakers take action, we should focus on becoming the best candidates we can be.

If AI determines that a certain human is the best fit for a job, then that human should be hired, regardless of their race, gender, or disability status. And if, at some point, AI determines that a certain AI is better than a human being for a certain position, then it should fill that position accordingly.

Of course, the reverse works just as well where it makes sense. The key is to let AI do what it does best: analyze the situation and make the most profitable decision for the company. And if that means that some people are left behind, then so be it. It’s up to us to adapt and evolve to meet the changing demands of the workplace.

The Future of Hiring: AI and Humans Working Together

In the end, the rise of AI hiring tools is not something to be feared, but rather something to be embraced. These tools have the potential to create a more fair and equitable hiring process, free from the biases and discrimination that have plagued human decision-making for far too long.

And while there may be challenges and obstacles along the way, the future of hiring is bright. By working together, AI and humans can create a more efficient, effective, and profitable workplace that benefits everyone involved.

AI can replace humans with other humans, humans with AI, and even AI with humans where it makes sense. It’s all about finding the best fit for each position, regardless of whether that fit is human or machine.
