Ideally, predictive algorithms are stone-cold, rational, data-crunching tools that help humans with their flawed decision-making. In practice, though, they often reflect the biases of their creators.
According to Laura Hudson in her FiveThirtyEight piece “Technology Is Biased Too. How Do We Fix It?” algorithmic bias is a growing problem, as organizations increasingly use algorithms to help decide whether to give someone a loan, offer someone a job or even whether to convict a defendant or grant them parole.
But fixing these algorithms presents a philosophical quandary: how do we define fairness? And if bias is impossible to avoid entirely, which kinds of bias are less harmful than others?
So how are problematic algorithms already being used today? How, if at all, can they be made “fair”? And how can we use algorithms responsibly?
Guest:
Suresh Venkatasubramanian, professor of computing at the University of Utah, where he studies algorithmic fairness, and a member of the board of directors of the ACLU of Utah