Do Algorithms Have a 'White Guy' Problem?
Algorithms are coming to dominate many aspects of our lives, yet most people are completely unaware of it. They are being used to determine which job applicants actually get called for an interview, which people are awarded a mortgage or business loan, which stores will open in a community, which news headlines and opinions we're likely to see on social media, and much more.
The question that has made headlines recently is whether these algorithms are biased and discriminatory. Writers like ProPublica's Julia Angwin and Microsoft's Kate Crawford have argued that algorithms and Artificial Intelligence have a "white guy problem" - meaning that, since algorithms learn by being fed data and build their models of the world from that data, the people doing the feeding matter enormously. When those people make selections informed by their own biases, the resulting software comes to incorporate the same biases.
They cite these recent examples:
- Women were less likely than men to be shown ads on Google for highly paid jobs, according to a study at Carnegie Mellon University.
- Amazon’s same-day delivery service was made unavailable for ZIP codes in predominantly black neighborhoods.
- An investigation by ProPublica analyzed more than 7,000 risk scores assigned by the company Northpointe, whose tool is used in Wisconsin for criminal sentencing, and compared predicted recidivism to actual recidivism. It found that the scores were wrong 40 percent of the time and were biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants.
- "Predictive policing" in places like New York is used to forecast where crime is most likely to occur and directs police to focus on those neighborhoods. Crawford states that,
At the very least, this software risks perpetuating an already vicious cycle, in which the police increase their presence in the same places they are already policing (or overpolicing), thus ensuring that more arrests come from those areas. In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less. Predictive programs are only as good as the data they are trained on, and that data has a complex history.
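The feedback loop Crawford describes can be sketched in a few lines. In this deliberately simplified model (all numbers invented), two neighborhoods have identical true crime; the only difference is a small historical imbalance in recorded arrests - and patrols go wherever the data says crime "is".

```python
# Two neighborhoods with the SAME underlying crime; only the starting
# arrest counts differ (a small historical imbalance).
arrests = {"A": 12, "B": 8}
crimes_per_year = 10  # identical actual crime in each neighborhood

for year in range(5):
    # Patrol where the recorded data points...
    focus = max(arrests, key=arrests.get)
    # ...and only patrolled crime gets recorded as arrests.
    arrests[focus] += crimes_per_year

print(arrests)  # {'A': 62, 'B': 8}
```

A small initial gap of 4 arrests grows to 54, not because neighborhood A is more criminal, but because it was watched more - the data generated by policing is fed back in as evidence of where to police.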
From my own anecdotal perspective, I remember my wife and me sitting next to each other on the couch - both using Facebook on our laptops - when she was shown ads for diaper coupons while I was shown ads for getting a second job; never mind that I'm the stay-at-home parent and she's the one who works full-time. How's that for gender stereotypes being ingrained in an algorithm?
While these biases clearly exist, realistic remedies are harder to prescribe. One proposal is to make algorithms more transparent and accountable - the idea of "algorithmic auditing". Another notes that since algorithms reflect the values of their creators, we must address not only the people who design them but also the people who sit on company boards, and ask which ethical perspectives are to be included.
It seems to me that the real danger isn't that algorithms are being used in the areas that they are, but that they are being used in ways that overlook the merits of the individual in favor of larger group associations. For instance, credit scores (long determined by algorithms) are based on whether you pay your bills on time, your payment history, your access to more credit as needed, and so on. While still flawed, these scores are at least based on what you do as a person, so you have some level of control over them.
By contrast, these other algorithms in the news recently are more problematic because your "scores" depend entirely on what neighborhood you live in and what racial, ethnic, or religious group you were born into. In other words, unlike credit scores, these algorithms are explicitly designed and programmed to make judgments about you based on group stereotypes rather than on the actions you take and the decisions you make as an individual.
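The contrast can be sketched as two scoring functions. This is a hypothetical illustration - the field names, weights, and group "risk" numbers are all invented - but it shows the structural difference: one score responds to your record, the other ignores it entirely.

```python
def individual_score(person):
    """Credit-style score driven by the person's own actions (invented weights)."""
    score = 600
    score += 100 if person["pays_on_time"] else -100
    score -= 50 * person["recent_defaults"]
    return score

# Invented group-level "risk" priors keyed by neighborhood.
GROUP_RISK = {"zip_90210": 0.1, "zip_10451": 0.6}

def group_score(person):
    """Score driven only by where the person lives -- their record is ignored."""
    return 800 - 400 * GROUP_RISK[person["zip"]]

# A responsible borrower with a spotless record, living in the "risky" ZIP.
alice = {"pays_on_time": True, "recent_defaults": 0, "zip": "zip_10451"}

print(individual_score(alice))  # 700   -- rewards her actual behavior
print(group_score(alice))       # 560.0 -- penalizes her neighborhood alone
```

Alice's own conduct never enters the second function: she could be the most reliable borrower in the country and her group score would not move.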
All of us would like to be defined by what we do as individuals, not by what other people "similarly situated" to us do. There should be universal condemnation of a system in which an upstanding and responsible person can't get a mortgage or business loan solely because their neighbor defaulted on theirs.
The good news is that since algorithms are programmed by people, people can program them differently. If values like meritocracy and individualism are important, then they can become the defining principles of the algorithms we build going forward. Right now, though, these algorithms are a problem, reinforcing the worst stereotypes and acting as an obstacle to individual advancement. As Alistair Croll has famously said, "this is our generation's civil rights issue, and we don't know it".