Artist Stephanie Dinkins has long been a pioneer in combining art and technology in her Brooklyn-based practice. In May, she received $100,000 from the Guggenheim Museum for her innovations, including an ongoing series of interviews with Bina48, a humanoid robot.
For the past seven years, she has been experimenting with AI’s ability to realistically depict Black women smiling and crying, using a variety of word prompts. The initial results were underwhelming, if not alarming: the algorithm produced a pink-hued humanoid shrouded in a black cloak.
“I was expecting something a little more reminiscent of Black femininity,” she said. And although the technology has improved since her first experiments, Dinkins has used workaround terms in her text prompts to help the AI image generators produce her desired image, “to give the machine a chance to give me what I want.” But whether she uses the term “African American” or “Black woman,” distortions of facial features and hair texture remain common.
“Improvements obscure some of the deeper questions we should be asking about discrimination,” Dinkins said. The Black artist added: “The biases are embedded deep in these systems, so they become ingrained and automatic. If I’m working within a system that uses algorithmic ecosystems, then I want that system to know who Black people are in a nuanced way, so that we can feel better supported.”
She’s not the only one asking tough questions about the troubling relationship between AI and race. Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies appear to ignore or distort artists’ text prompts, affecting how Black people are portrayed in images; in others, they appear to stereotype or censor Black history and culture.
The discussion of racial bias within artificial intelligence has grown significantly in recent years. Studies show that facial recognition technologies and digital assistants have trouble identifying the images and speech patterns of nonwhite people, raising broader questions of fairness and bias.
Big companies behind AI image generators — including OpenAI, Stability AI, and Midjourney — have committed to improving their tools. “Bias is a major industry-wide issue,” Alex Beck, a spokeswoman for OpenAI, said in an email interview, adding that the company is continually trying to “improve performance, reduce bias, and mitigate harmful outcomes.” She declined to say how many employees were addressing racial bias or how much money the company had allocated to the problem.
“Black people are used to being invisible,” wrote Senegalese artist Linda Dounia Rebeiz in an introduction to her exhibition In/Visible for Feral File, an NFT marketplace. “If we’re seen, we’re used to being misrepresented.”
To make her point, Rebeiz, 28, asked OpenAI’s image generator DALL-E 2 during an interview to imagine buildings in her hometown, Dakar. The algorithm produced arid desert landscapes and ruined buildings that Rebeiz said looked nothing like the coastal homes of the Senegalese capital.
“It’s demoralizing,” Rebeiz said. “The algorithm tends towards a cultural image of Africa created by the West. It defaults to the worst stereotypes that already exist on the internet.”
Last year, OpenAI announced that it was introducing new techniques to diversify the images generated by DALL-E 2, allowing the tool to “generate images of people that more accurately reflect the diversity of the world’s population.”
Minne Atairu, an artist featured in Rebeiz’s exhibition, is a doctoral candidate at Columbia University’s Teachers College who proposed using image generators with young Black students in the South Bronx. But now, she explained, she worries “that students could be tempted to create offensive images.”
Included in the Feral File exhibition are images from her “Blonde Braids Studies,” which explore the limitations of Midjourney’s algorithm in producing images of Black women with naturally blonde hair. When the artist asked for an image of Black identical twins with blonde hair, the program instead returned a sibling with lighter skin.
“That tells us where the algorithm is pulling images from,” Atairu said. “It’s not necessarily drawing from a pool of Black people, but one that caters to white people.”
She said she worries that young Black children might try to create images of themselves and see children whose skin has been lightened. Atairu recalled some of her earlier experiments with Midjourney before recent updates improved its abilities. “It would produce images that looked like blackface,” she said. “You would see a nose, but it wasn’t a human nose. It looked like a dog’s nose.”
In response to a request for comment, David Holz, founder of Midjourney, replied in an email: “If anyone finds a problem with our systems, we ask them to send us specific examples so that we can investigate the matter.”
Stability AI, which provides image-generation services, said it planned to collaborate with the AI industry to improve bias-evaluation techniques across a greater diversity of countries and cultures. Bias, the company said, is caused by “over-representation” in its general data sets, though it did not specify whether over-representation of white people was the issue.
Earlier this month, Bloomberg analyzed more than 5,000 images generated by Stability AI and found that its program reinforced stereotypes about race and gender, typically portraying people with lighter skin tones as holding high-paying jobs while labeling those with darker skin tones “dishwasher” and “housekeeper.”
These problems have not stopped the investment frenzy in the technology industry. A recent rosy report from consulting firm McKinsey predicted that generative AI would boost the global economy by $4.4 trillion annually. According to the GlobalData Deals Database, nearly 3,200 startups received $52.1 billion in funding last year.
Technology companies have battled allegations of bias in depicting dark skin since the dawn of color photography in the 1950s, when companies like Kodak used white models to develop their color technology. Eight years ago, Google disabled its AI program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was misclassifying Black people into those categories. As of May of this year, the problem had still not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the AI system with enough images of Black people.
Other experts studying artificial intelligence have said that bias runs deeper than just datasets, citing the early development of this technology in the 1960s.
“The problem is more complicated than data distortion,” said James E. Dobson, a cultural historian at Dartmouth College and the author of a recent book on the birth of computer vision. According to his research, there was little discussion of race in the early days of machine learning, and most of the scientists working on the technology were white men.
“It’s difficult to separate today’s algorithms from historical ones as engineers build on previous versions,” Dobson said.
To reduce the appearance of racial bias and hateful imagery, some companies have banned certain words from text prompts that users send to generators, such as “slave” and “fascist.”
But Dobson said companies hoping for a simple fix, like censoring the kinds of prompts users can submit, would avoid the more fundamental problems of bias in the underlying technology.
“It’s a worrying time as these algorithms continue to get more complicated. And when you see garbage coming out, you have to ask yourself what kind of garbage process is still in the model,” the professor added.
Auriea Harvey, an artist featured in the Whitney Museum’s recent exhibition on digital identities, “Refiguring,” encountered these prohibitions in a recent project with Midjourney. “I wanted to ask the database what it knew about slave ships,” she said. “I received a message that Midjourney would suspend my account if I continued.”
Dinkins ran into similar problems with NFTs she created and sold showing how okra was brought to North America by enslaved people and settlers. She was censored when she tried to generate images of slave ships with the generative program Replicate. She eventually learned to outwit the censors by using the term “pirate ship.” The image she received was an approximation of what she wanted, but it also raised troubling questions for the artist.
“What does this technology do with history?” Dinkins asked. “You can see someone trying to correct bias, but at the same time erasing a piece of history. I find these erasures just as dangerous as any bias because we will just forget how we got here.”
Naomi Beckwith, chief curator at the Guggenheim Museum, credited Dinkins’ nuanced approach to issues of representation and technology as one reason the artist received the museum’s first Art & Technology Award.
“Stephanie has become part of a tradition of artists and creators drilling holes in these overarching and totalizing theories of how things work,” Beckwith said. The curator added that her own initial paranoia about AI programs replacing human creativity abated when she realized these algorithms knew next to nothing about black culture.
But Dinkins isn’t quite ready to give up on the technology. She continues to use it for her artistic projects, albeit with skepticism. “Once the system can produce a truly faithful image of a Black woman crying or smiling, can we rest?”