
Innovation

Keeping Real-World Bias Out of Artificial Intelligence Will Take Work, Speaker Tells Conference

March 27, 2017 | Read Time: 2 minutes

Machine learning sounds futuristic, but it’s already at work when Google Maps gives directions or Netflix recommends a movie. Real-world cultural bias, however, is already a problem in artificial intelligence, and without careful solutions the issue is likely to grow, Camille Eddy told participants at the Nonprofit Technology Conference.

Ms. Eddy, a mechanical-engineering student at Boise State University and a robotics intern at HP Labs, said algorithms can be distorted when the data sets used to train them aren’t diverse.

Take, for example, an exercise in word associations using Word2Vec, a widely used set of word vectors learned from large amounts of online text and often applied in search engines. After being given the word pairing “man” and “computer programmer,” Word2Vec matched “woman” with “homemaker,” Ms. Eddy said. The pairing of “father” and “doctor” led to “mother” and “nurse.”

“This data set is brazenly sexist,” Ms. Eddy said. It’s not that someone set out to create a biased data set, she explained. Word2Vec is based on how people use words online. “That’s an important thing to know. Not all models are going to be designed inappropriately. It also comes from how we use language in our everyday lives.”
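The association exercise Ms. Eddy described relies on vector arithmetic: each word is represented as a list of numbers, and the model answers “man is to computer programmer as woman is to ___?” by finding the word closest to vec(“computer programmer”) − vec(“man”) + vec(“woman”). A rough sketch of that mechanism follows; the four-dimensional vectors below are hand-made toys invented for illustration, not the real Word2Vec embeddings, which have hundreds of dimensions learned automatically from text.

```python
# Toy illustration of word-analogy arithmetic over biased embeddings.
import numpy as np

# Hypothetical axes: [feminine<->masculine, technical, domestic, caregiving]
vectors = {
    "man":                 np.array([-1.0, 0.0, 0.0, 0.0]),
    "woman":               np.array([ 1.0, 0.0, 0.0, 0.0]),
    "father":              np.array([-1.0, 0.0, 0.0, 0.2]),
    "mother":              np.array([ 1.0, 0.0, 0.2, 0.3]),
    # Occupation vectors deliberately skewed by gender, mimicking the
    # bias a model absorbs from how people use words online.
    "computer programmer": np.array([-0.8, 1.0, 0.0, 0.0]),
    "homemaker":           np.array([ 0.8, 0.0, 1.0, 0.0]),
    "doctor":              np.array([-0.6, 0.3, 0.0, 0.8]),
    "nurse":               np.array([ 0.4, 0.1, 0.0, 0.9]),
}
occupations = {"computer programmer", "homemaker", "doctor", "nurse"}

def cosine(u, v):
    """Similarity between two vectors: 1 = same direction, -1 = opposite."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Answer 'a is to c as b is to ___?' among the occupation words."""
    query = vectors[c] - vectors[a] + vectors[b]
    candidates = occupations - {a, b, c}
    return max(candidates, key=lambda w: cosine(query, vectors[w]))

print(analogy("man", "woman", "computer programmer"))  # -> homemaker
print(analogy("father", "mother", "doctor"))           # -> nurse
```

Because the skew was baked into the occupation vectors, the arithmetic reproduces exactly the biased pairings Ms. Eddy cited; in the real Word2Vec case the skew comes from the text the model was trained on rather than from any programmer’s intent.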

Diversity in Hiring

Other times, bias is the result of actions taken — or not taken — by a technology’s creators.


Ms. Eddy pointed to the work of Joy Buolamwini, a graduate researcher at the MIT Media Lab. Ms. Buolamwini, who is black, was working with facial-recognition software when she realized that the program couldn’t detect her face. She was invisible to the software because the programmers didn’t train the algorithms with images of people with different skin tones and facial structures.

One answer to the problem is to raise awareness, said Ms. Eddy. For example, Ms. Buolamwini founded the Algorithmic Justice League to test software for bias and build more inclusive data sets to train algorithms.

Increasing the number of women and people of color working in technology — and making them more visible — will also help, Ms. Eddy said. Something as simple as a set of stock photos showing women of color in technology, released in 2015, could make a difference. The existence of the images, and the online discussion their introduction spurred, could be fed into search engines to help close the perception gap, she said.

Ultimately, the solution to bias in algorithms is greater openness in technology, said Ms. Eddy: “Transparency is understanding why algorithms make the decisions that they make and being able to go in there and change it.”

The conference drew more than 2,300 nonprofit leaders, fundraisers, consultants, and technology providers.


About the Author

Features Editor

Nicole Wallace is features editor of the Chronicle of Philanthropy. She has written about innovation in the nonprofit world, charities’ use of data to improve their work and to boost fundraising, advanced technologies for social good, and hybrid efforts at the intersection of the nonprofit and for-profit sectors, such as social enterprise and impact investing.

Nicole spearheaded the Chronicle’s coverage of Hurricane Katrina recovery efforts on the Gulf Coast and reported from India on the role of philanthropy in rebuilding after the South Asian tsunami. She started at the Chronicle in 1996 as an editorial assistant compiling The Nonprofit Handbook.

Before joining the Chronicle, Nicole worked at the Association of Farmworker Opportunity Programs and served in the inaugural class of the AmeriCorps National Civilian Community Corps.

A native of Columbia, Pa., she holds a bachelor’s degree in foreign service from Georgetown University.