With all the excitement about artificial intelligence (AI) and digital transformation, are we too quick to assume that the answer to all our problems lies in technology?
And more importantly, are we so excited to embrace technology – especially AI – that we overlook the societal and human problems it can cause?
Bias in the Bytes: Unraveling the Myth of Neutral AI
This is the argument made by Meredith Broussard in her latest book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
The book is the latest in a series of contemporary explorations of bias and the broader social implications of our eagerness to embrace AI, and it joins a number of other important works, such as Cathy O’Neil’s Weapons of Math Destruction, Safiya Noble’s Algorithms of Oppression, and Broussard’s own earlier book, Artificial Unintelligence.
Broussard recently joined me on my podcast to discuss some of the ideas raised in the book, as well as her advice for business leaders interested in working with AI or adopting it in their organizations.
Central to her argument is the concept of “technochauvinism” – the belief that technological solutions are always superior to social or other methods of promoting change.
What is technochauvinism?
In the book, Broussard cites the example of the stair-climbing wheelchair, often proposed by technologists and engineers as an innovation that could improve the lives of disabled people.
“Designers love to create things…because it’s cool – let’s create this novel solution.”
“But if you actually ask someone who uses a wheelchair … they’ll generally say no – ‘It looks scary. It doesn’t look like it’s going to work. I’d rather have a ramp or an elevator.’”
“Then you realize there’s this really simple solution that works really well, and we don’t have to add a lot of extreme computing technology; we can just build a ramp.”
“Until we’ve actually made the world accessible, we don’t need to reinvent solutions.”
Broussard says this concept – and many others like it – is an example of a “disability dongle.” The term is succinctly described in this blog post as an idea from a (usually) able-bodied engineer that appeals to our love of a technological “quick fix” over the complex, structural, societal change that is actually needed.
The antidote to technochauvinist thinking, Broussard says, is often simply choosing the right tool for the job – and that is not always the most advanced technology or the most sophisticated data-processing algorithm.
Broussard tells me: “We kind of have the idea that technical solutions will somehow be superior to others. And that in itself is a kind of bias … sometimes the right tool is something simple, like a book … it’s not a competition; one is not inherently better than the other.”
Mathematically fair versus socially fair
Another intriguing idea that Broussard explores is the difference between mathematical fairness and social fairness. When we use computers to help with challenges related to equality and fairness, the solution we get is usually a mathematical one.
She offers a simple illustration: “A story that I think illustrates this concept – it’s about a cookie. When I was little, my brother and I would fight over who got the last cookie.”
Ask a computer to solve this simple but urgent problem, and there’s an obvious answer: each kid gets half a cookie.
“But in the real world, if you split a cookie in half, you end up with a big half and a small half. And then we would argue about who has the bigger half.”
She suggests that the solution lies in socially constructed negotiation and compromise.
“So if I wanted the big half, I’d say, ‘Give me the big half and I’ll let you choose which TV show to watch after dinner.’”
“Mathematically fair decisions and socially fair decisions are not the same … which explains why we run into problems when we try to make socially fair decisions with computers.”
The bottom line is that we should use computers for the mathematical problems they excel at, and be wary of relying on them too heavily for societal challenges.
AI and human jobs
A similar principle emerges when we think about how computers will be used to replace human labor. Broussard’s own profession as a writer and journalist is widely seen as threatened by the emergence of applications like ChatGPT. After all, when a tool can quickly and easily create articles, essays, and even entire books from a simple prompt, who needs authors?
However, anyone who has attempted to use ChatGPT to write a book, or even an essay, of any real quality will quickly tell you that this threat is somewhat overstated.
Although AI-generated content is impressive at first glance, it still lacks many essential human qualities – most notably the real ability to generate new ideas or truly creative thoughts. That’s because it really only reproduces the language and ideas contained in its training data.
“If you’re the kind of person who thinks they can replace human labor with generative AI, you’re in for a nasty shock,” says Broussard.
“AI is mediocre. Mediocre writing is absolutely useful in a lot of situations … and it seems like it’s going to be incredibly useful and flexible … One of the things you quickly realize after using generative AI for a while is that it’s kind of boring … it just gives you the same thing over and over again … and that’s not what you want to offer your customers.”
Her thoughts reflect my own belief that AI is not a substitute for creativity – it is a tool that enables people to augment their own creative skills and apply them more efficiently.
The Dangers of AI
However, one aspect of AI that Broussard finds particularly concerning is computer vision — and specifically the way it treats people differently based on race, gender, and other factors.
“Face recognition depends on skin tone,” she tells me.
“In general, it’s better at detecting light skin than dark skin, and better at detecting male faces than female faces … and it doesn’t detect trans and non-binary people at all.”
This has caused problems when AI-powered computer vision systems have been used for policing and facial recognition in public areas. In several instances, police use of the technology has been found unlawful and unethical, leading to its ban in some jurisdictions.
Broussard says: “We shouldn’t use facial recognition at all in policing. It is disproportionately used as a weapon against people of color and communities that are already disproportionately surveilled.
“We will not achieve justice by continuing to use these powerful technologies that work very poorly and have a disproportionate impact on certain groups.”
More profound than fire?
“AI is sophisticated, and generative AI in particular is a lot of fun, but it’s not going to change the whole world,” Broussard says. “Things will change, but it’s not the invention of fire.”
Broussard alludes to comments made by Google CEO Sundar Pichai a few years ago when he described AI as “more profound than fire or electricity or anything we’ve done in the past.”
It’s a refreshingly down-to-earth counterpoint to the sentiments I often hear as someone who works closely with companies that sell AI, as well as companies whose reputations are built on the changes it can deliver.
Personally, based on my own experiences and observations, I’m a little more excited and optimistic about the upsides than Broussard is. That doesn’t mean I’m any less cautious or concerned about the downsides.
Broussard cites the work of organizations, institutions, and campaign groups, including the Algorithmic Justice League, Equal AI, and NYU’s Center for Critical Race and Digital Studies, as voices that will play a critical role in the continued evolution of AI.
As we wrap up our conversation, she tells me, “What worries me is that the conversations about AI don’t focus on the actual harm that real people suffer … for example, when you put biometric locks on apartment or office doors, people with darker skin will … not be able to get into their homes or offices as easily as other people.
“And that seems discriminatory and unnecessary; why not just use a key?”
You can click here to watch my conversation with Meredith Broussard, associate professor of data journalism at NYU and author of the books Artificial Unintelligence and More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.