Earlier this month, three major tech companies publicly distanced themselves from the facial recognition tools used by police: IBM said it would stop all such research, while Amazon and Microsoft said they would push pause on any plans to give facial recognition technology to domestic law enforcement. And just this week, the city of Boston banned facial surveillance technology entirely.
Why? Facial recognition algorithms built by companies like Amazon have been found to misidentify people of color, especially women of color, at higher rates—meaning when police use facial recognition to identify suspects who are not white, they are more likely to arrest the wrong person.
Some CEOs are calling for national laws to govern this technology, or for engineering fixes to remove the racial biases and other inequities from their code. But others want to ban it entirely—and completely re-envision how AI is developed and used in communities.
In this SciFri Extra, we continue a conversation between producer Christie Taylor, Deborah Raji from NYU’s AI Now Institute, and Princeton University’s Ruha Benjamin about how to pragmatically move forward to build artificial intelligence technology that takes racial justice into account—whether you’re an AI researcher, a tech company, or a policymaker.