We’re finally starting to tackle racism in AI

The Greek philosopher Heraclitus once said, “The only constant in life is change.” And in this life, nothing changes faster than technology.

Artificial intelligence (AI) is the latest wave. And while it can create efficiencies, those gains may come at the expense of other essential goals, such as fairness.

AI software is built on algorithms, and these systems are already an integral part of everything from bank lending to criminal sentencing to hiring. The trouble is that these algorithms are not free from bias, and in many of the areas where they are deployed, AI has directly advanced discrimination.

But the train has already left the station, and since it cannot be stopped, it falls to companies and, more importantly, to government regulation to catch the potential damage. New York City is doing just that.

A new law that went into effect Wednesday in New York City is the first of its kind: legislation regulating AI hiring tools on equity grounds. All New York City companies that use AI in their hiring processes must demonstrate that their selections are free of sexism and racism, a feat many human-run HR departments have yet to master.

Under New York City's Automated Employment Decision Tools (AEDT) law, an independent third party must audit and evaluate these tools to ensure they remain unbiased.
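To make the audit idea concrete, here is a minimal sketch of the kind of calculation such an audit involves: comparing how often a tool advances candidates from each demographic group. The function name, data shape, and synthetic numbers are illustrative assumptions, not the city's actual audit methodology.

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Compute per-group selection rates and impact ratios.

    candidates: list of (group, selected) pairs, where `selected`
    is True if the tool advanced the candidate.
    Returns {group: (selection_rate, impact_ratio)}, where the impact
    ratio is each group's rate divided by the highest group's rate.
    A ratio well below 1.0 signals a disparity worth investigating.
    """
    totals = defaultdict(int)
    picks = defaultdict(int)
    for group, selected in candidates:
        totals[group] += 1
        if selected:
            picks[group] += 1

    rates = {g: picks[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical tool that advances 50% of group A but only 20% of group B:
data = ([("A", True)] * 5 + [("A", False)] * 5 +
        [("B", True)] * 2 + [("B", False)] * 8)
for group, (rate, ratio) in sorted(impact_ratios(data).items()):
    print(group, rate, round(ratio, 2))
```

In this toy run, group B's impact ratio comes out at 0.4, the kind of gap an auditor would flag for scrutiny.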

That sounds great in theory, but does it go far enough? As with most policies, the devil is in the implementation.

With systemic racism baked into the complex cake we call America, a concerted effort and vigilance is required to confront and reform systems that make our country and workforce inherently less just.

Enforcement of the law appears to be complaint-driven. But if you never actually speak to a hiring manager, it is hard to know why you were turned down for a job, let alone to sound the alarm. Moreover, discrimination in hiring already far exceeds what the law permits, and there appears to be no direct penalty or recourse against companies that simply choose not to comply.

Meanwhile, the New York City Department of Consumer and Worker Protection, the agency charged with enforcing the new law, is already racing to uphold its earlier commitments to fairness, such as its protections for essential workers during the pandemic.

Tasking an overburdened agency with enforcing equity in hiring algorithms is unprecedented, partly because of the sheer difficulty of the job, but also because the algorithms driving the AI can themselves carry bias. Eliminating prejudice entirely might mean abandoning the technology altogether.

Advocates and civil rights organizations have been calling for reform of AI and its algorithms for some time, particularly regarding risk assessment tools used across the criminal justice system. These tools rely on algorithms and serve as a determining factor in who gets out of prison and who stays behind bars. Not surprisingly, they most often drove up the mass incarceration of Black and brown people, while white people who had committed similar crimes returned home to their families.

Put simply, a tool to eradicate racial prejudice served only to reinforce it.

“There is growing evidence that AI systems and algorithms are not only unable to magically eliminate existing inequalities, but that they reproduce and even amplify these inequalities,” warns Anna Ginès i Fabrellas, associate professor of labor law at Esade and director of the Institute for Labor Studies and the LABORAlgorithm research project.

This widening of inequalities has far-reaching implications.

With the racial wealth gap widening, the culture war intensifying at the heart of our politics, and equity measures such as affirmative action being rolled back, now is the time to create more opportunities for the underserved.

To ensure that justice is not a dirty word — and that technological advances are not used to further disadvantage historically underrepresented groups — New York City is taking steps to do just that with its AEDT Act. But the Big Apple has more to do.

Implementing the law requires significant funding, and regulators must look not only at how AI promotes or undermines equity but also at closing loopholes that would let companies sidestep compliance. It also requires training HR leaders and hiring and search firms on the new rules and reporting requirements. Additionally, New York City must run an awareness campaign for job seekers, explaining what the new law does and how they can report violations by AI hiring tools.

The responsibility, however, does not lie with the city alone; community partners must be involved. These include organizations like the Urban League, local LGBT advocacy groups, women’s groups, organizations supporting workers with disabilities and more.

For AI hiring tools to be effective and unbiased, their developers must be scrutinized. AI works only with the algorithms and data its development team provides. If those inputs carry bias, even subtle bias, the end result will not reduce inequalities; the gaps will widen into gaping holes.
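The "biased data in, biased decisions out" dynamic can be shown with a toy sketch: a model that simply learns historical selection frequencies will reproduce whatever disparity the history contains. The data and function here are synthetic assumptions for illustration, not a depiction of any real hiring system.

```python
def train_frequency_model(history):
    """Learn P(hired | group) from past decisions.

    history: list of (group, hired) pairs from past hiring rounds.
    Returns {group: estimated hire probability}.
    """
    counts, hires = {}, {}
    for group, hired in history:
        counts[group] = counts.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / counts[g] for g in counts}

# Biased history: equally sized candidate pools, unequal outcomes.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 3 + [("B", False)] * 7)
model = train_frequency_model(history)
# The "trained" model now scores group B far below group A,
# carrying the historical pattern forward into future decisions.
print(model)
```

Nothing in the training step corrects the disparity; the model faithfully encodes it, which is exactly why audits of outcomes, not just intentions, matter.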

New York City is on the right track, but the real work isn’t in the new legislation; it’s in the results it achieves and in how businesses respond.
