Keeping Real-World Bias Out of Artificial Intelligence Will Take Work, Speaker Tells Conference
March 27, 2017 | Read Time: 2 minutes
WASHINGTON
Machine learning sounds futuristic, but it’s already at work when Google Maps gives directions or Netflix recommends a movie. Yet real-world cultural bias is already a problem in artificial intelligence, and without careful solutions the issue is likely to grow, Camille Eddy told participants at the Nonprofit Technology Conference.
Ms. Eddy, a mechanical-engineering student at Boise State University and a robotics intern at HP Labs, said algorithms can be distorted when the data sets used to train them aren’t diverse.
Take, for example, an exercise in word associations using Word2Vec, a data set used to train search engines. Given the word pairing “man” and “computer programmer,” Word2Vec matched “woman” with “homemaker,” Ms. Eddy said. The pairing of “father” and “doctor” led to “mother” and “nurse.”
“This data set is brazenly sexist,” Ms. Eddy said. It’s not that someone set out to create a biased data set, she explained. Word2Vec is based on how people use words online. “That’s an important thing to know. Not all models are going to be designed inappropriately. It also comes from how we use language in our everyday lives.”
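The word-pairing exercise Ms. Eddy described rests on a simple idea: word embeddings like Word2Vec represent each word as a vector, and an analogy “man is to programmer as woman is to ?” is answered by the vector arithmetic programmer − man + woman. The sketch below illustrates that arithmetic with tiny made-up vectors (real Word2Vec vectors are learned from billions of words and have hundreds of dimensions; the words and numbers here are purely illustrative, chosen to mimic the biased associations she cited).

```python
import numpy as np

# Toy 3-dimensional "embeddings" -- invented for illustration only.
# Real Word2Vec vectors are learned from large text corpora.
vecs = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([0.0, 1.0, 0.2]),
    "programmer": np.array([1.0, 0.1, 0.9]),
    "homemaker":  np.array([0.1, 1.0, 0.9]),
    "doctor":     np.array([0.9, 0.2, 0.8]),
    "nurse":      np.array([0.2, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the offset b - a + c,
    returning the most similar word other than the inputs."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("man", "programmer", "woman"))  # -> homemaker
```

Because the vectors reflect how words co-occur in the training text, any gendered patterns in that text surface directly in the analogy answers, which is exactly the effect Ms. Eddy described.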
Diversity in Hiring
Other times, bias is the result of actions taken — or not taken — by a technology’s creators.
Ms. Eddy pointed to the work of Joy Buolamwini, a graduate researcher at the MIT Media Lab. Ms. Buolamwini, who is black, was working with facial-recognition software when she realized that the program couldn’t detect her face. She was invisible to the software because the programmers didn’t train the algorithms with images of people with different skin tones and facial structures.
One answer to the problem is to raise awareness, said Ms. Eddy. For example, Ms. Buolamwini founded the Algorithmic Justice League to test software for bias and build more inclusive data sets to train algorithms.
Increasing the number of women and people of color working in technology — and making them more visible — will also help, Ms. Eddy said. Something as simple as a set of stock photos, released in 2015, showing women of color working in technology could make a difference. The existence of the images and the online discussion spurred by their introduction could be fed into search engines to help close the perception gap, she said.
Ultimately, the solution to bias in algorithms is greater openness in technology, said Ms. Eddy: “Transparency is understanding why algorithms make the decisions that they make and being able to go in there and change it.”
The conference drew more than 2,300 nonprofit leaders, fundraisers, consultants, and technology providers.