Why you should read “Weapons of Math Destruction”

Review of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” by Cathy O’Neil

Over the last few years, the study of statistics has taken on new meaning and importance with the emergence of “data science,” an imprecisely defined discipline that merges aspects of statistics with computer science. Google, Amazon, and Facebook are among the most famous companies putting data science to use, looking for trends among their users.

How does Facebook know who your friends might be, or which advertisements you want to watch? Data science. How does Amazon know which books you’re likely to buy, or how much to charge you for various products? Data science. How does Google know which search results to show you? Data science. How does Uber know when to implement surge pricing, and how much to charge? Data science.

A key part of data science is machine learning, in which a computer is trained on past examples to recognize patterns and make predictions. If you have ever tried to make a legitimate credit-card payment, only to have your card denied because the transaction looked suspicious, you can be sure that it didn’t “look” bad to a human. Rather, a machine-learning system, having been trained on millions of previous transactions, did its best to put you into the “good” or “bad” category.
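
To make that concrete, here is a minimal sketch of the kind of classifier described above, using Python and scikit-learn. The features (amount, distance from home, hour of day), the data, and the labels are all invented for illustration; real fraud-detection systems train on millions of transactions and far richer inputs.

```python
# Toy "good vs. bad transaction" classifier -- an illustration only,
# not how any real card network works.
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, km_from_home, hour_of_day]; label 1 = flagged as suspicious.
X = [
    [12.50,     3, 14],
    [8.99,      1, 19],
    [45.00,     5,  9],
    [2300.00, 4200, 3],
    [1800.00, 3900, 2],
    [15.75,     2, 13],
]
y = [0, 0, 0, 1, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X, y)

# A legitimate purchase made while traveling can land in the "bad" bucket
# simply because it resembles the examples that were labeled suspicious.
print(model.predict([[60.00, 4100, 4]]))        # likely [1]
print(model.predict_proba([[60.00, 4100, 4]]))  # class probabilities
```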

Today, machine learning affects everything from what advertisements we see to translation algorithms to automatic driving systems to the ways in which politicians contact voters. Indeed, the secret weapon of Barack Obama’s two presidential campaigns was apparently his finely tuned data science system, which provided a shockingly accurate picture of which voters might change their minds, and of the best ways to reach them. (A great book on the subject is The Victory Lab, by Sasha Issenberg.)

In recent years, I’ve become both excited and optimistic about the ability of data science to improve our lives. Every week, I hear (often on the Partially Derivative podcast) and read amazing new stories about how data science has helped to solve problems that would otherwise be difficult, time-consuming, or impossible to tackle.

Cathy O’Neil’s new book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” has dampened my optimism and enthusiasm — and for that, I thank her. The book came out at just the right time, pointing out where data science is causing more harm than good, and warning us to think about it, and even regulate it, before we let it take over even more of our lives.

Even before reading this book, I admired O’Neil: Her blog, mathbabe.org, has useful insights about the use of math in everyday life, her book “Doing Data Science” is a great introduction to the subject, and she’s a panelist on the “Slate Money” podcast, which I thoroughly enjoy each week.

While O’Neil is easygoing and funny in her writing and speaking, her book is deadly serious. In it, she argues that the widespread use of data science is causing serious problems in society, and she blames several things for these failures. In particular, the opacity of many of the algorithms makes them impossible to understand or evaluate. Their use across huge populations for important decisions, combined with the frequent inability to find and appeal to a human who could insert some (pardon the term) common sense into the equation, means that mistakes can have far-reaching effects. Even if the results are good for most of the people most of the time, they can be bad for some of the people (and sometimes even most of the people) quite a bit of the time.

In statistics, you’re always dealing with averages, generalities, and degrees of confidence. When you’re letting a computer make decisions about people’s jobs, health, education, and court cases, you need to err on the safe side. Otherwise, many people could end up having their lives destroyed because they were statistical outliers, or didn’t quite match the profile you intended.

O’Neil points out, early on in the book, that data science involves creating statistical models. Models represent reality and help us to understand it, but they aren’t themselves reality. The designer of a model needs to decide which factors to include and exclude, and that decision-making process is riddled with the potential for error. This is particularly true if the thing you’re trying to measure isn’t easily quantified; in such cases, it’s common to use a proxy value.

For example, let’s say that you want to know how happy people are. You can’t directly measure that, so you use a proxy value for it — say, how much money people spend on luxury items. Not only is this a lousy proxy because there are lots of other reasons to buy luxury goods, but it’s likely to show that poor people are never happy. By choosing a bad proxy, you have made the model worthless. Combine a few bad proxies, unleash the resulting model on a large population, and you’re likely to do real harm.
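
A tiny simulation makes the trap vivid. The numbers below are entirely made up; the only point is that a proxy which tracks income will rate the poor as unhappy no matter how they actually feel.

```python
# Simulate the "luxury spending as a proxy for happiness" trap.
import random

random.seed(0)
people = []
for _ in range(10_000):
    income = random.uniform(10_000, 200_000)         # annual income
    happiness = random.uniform(0, 10)                # true happiness: independent of income here
    luxury_spend = max(0, 0.02 * (income - 30_000))  # proxy: scales with income, not happiness
    people.append((income, happiness, luxury_spend))

poor = [p for p in people if p[0] < 30_000]
rich = [p for p in people if p[0] > 150_000]
avg = lambda xs: sum(xs) / len(xs)

print("true happiness, poor vs. rich:",
      round(avg([p[1] for p in poor]), 2), round(avg([p[1] for p in rich]), 2))
print("proxy 'happiness' (luxury spend), poor vs. rich:",
      round(avg([p[2] for p in poor]), 2), round(avg([p[2] for p in rich]), 2))
# The true scores come out roughly equal; the proxy says the poor are never "happy".
```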

Even if you choose your inputs (direct and proxy) correctly, your model will still make mistakes. That’s why it’s crucial to refine and improve the model over time, checking it against real-world data. As O’Neil points out, this is why it makes sense for sports teams to model their players’ techniques: over time, they analyze many players and games, and learn which factors are correlated with winning and losing. But in the case of a classroom teacher’s performance, how many inputs do you have? How often does a fired teacher’s performance at another school get fed back into the model? And what if the inputs themselves aren’t reliable? Put those three factors together, and you end up with a model that’s effectively random — but that still gets good teachers fired and bad ones retained.
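
To see how quickly such a model degrades, here is a rough simulation of my own (not O’Neil’s) of a teacher score built from a handful of noisy student outcomes. The class size, noise level, and quality spread are invented; the point is only that when noise swamps signal, the same teacher scores very differently from year to year.

```python
# Why a teacher-scoring model fed a handful of noisy inputs is close to random:
# with ~25 students per class, year-to-year noise can swamp the quality signal.
import random

random.seed(1)

def observed_score(true_quality, n_students=25, noise=5.0):
    # Average of noisy per-student outcomes -- the only "input" the model sees.
    return sum(random.gauss(true_quality, noise) for _ in range(n_students)) / n_students

teachers = [random.gauss(0, 0.5) for _ in range(500)]   # true quality, modest spread
scores_year1 = [observed_score(t) for t in teachers]
scores_year2 = [observed_score(t) for t in teachers]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# If the score measured real quality, a teacher would score similarly in
# consecutive years. With these parameters the correlation is weak (~0.2),
# far from the consistency you'd want before firing anyone.
print(round(corr(scores_year1, scores_year2), 2))
```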

(I should point out that the software I developed for my PhD dissertation, the Modeling Commons, is a collaborative, Web-based system for modeling with NetLogo. I developed it with the hope and expectation that by sharing models and discussing them, quality and understanding would both improve over time.)

As O’Neil points out, updates to models based on empirical data are rare, often because it is hard or impossible to collect such information. But, she argues, that’s no excuse: a model that is never updated is basically useless. A model trained on a tiny number of inputs is equally useless. And if the input data can be fudged, then you’re truly in terrible trouble. Given the choice between no model and a bad model, you’re probably better off with no model.

The thing is, these sorts of poorly designed, never-updated algorithms are playing a larger and larger role in our lives. They’re being used to determine whether people are hired or fired, whether insurance applications are accepted or rejected, and how people’s work schedules are set.

Some of O’Neil’s most damning statements have to do with race, poverty, and discrimination in the United States. By using inappropriate proxies, police departments might reduce crime, but they do so by disproportionately arresting blacks. O’Neil isn’t saying that these data science algorithms are inefficient; rather, their efficiency leads to outcomes that are bad for many individuals, and harmful to society in the long run.

Sure, the “broken windows” style of policing might bring police to a neighborhood where they’re needed — but it will also produce more arrests in that neighborhood, leaving more residents in trouble with the law simply because officers were there to witness minor offenses. Add to that the fact that many courts give longer sentences to defendants judged likely to return to a life of crime, and that this likelihood is measured partly by the neighborhood in which the defendant was raised — and you can easily see how good intentions lead to a disturbing outcome.

Moreover, we’ve gotten to the point where no one knows or understands how many of these models work. This leads to the absurd situation in which everyone assumes the computer is doing a good job because it’s neutral. But it’s not neutral; it reflects the programmers’ understanding of its various inputs. The fact that no one knows what these models do, and that the public isn’t allowed to examine them, means that we’re being evaluated in ways we don’t even know about. And these evaluations are affecting millions of people’s lives.

O’Neil suggests some ways of fixing this problem; conservatives will dislike her suggestions, which include government monitoring of data usage and restrictions on organizations sharing the demographic data they collect. In Europe, for example, she points out, companies not only have to tell you what information they have about you, but are also prohibited from sharing such information with other companies. She also argues that because data scientists have the potential to do great harm, and even to kill people, it’s high time they adopted a “Hippocratic oath” for data, mirroring the famous oath that doctors take. And her idea that many more of these algorithms should be open to public scrutiny and criticism is a wise one, even if I believe it’s unrealistic.

Now, I don’t think all of O’Neil’s targets deserve her scorn. For example, I continue to find it fascinating and impressive that a modern political party can model a country’s citizens in such detail, and then use that data to decide whom to target, and how. But her point that US elections now effectively come down to a handful of areas in a handful of states, because only those are likely to decide the outcome, did give me pause.

I read a lot, and I try to read things that will impress and inform me. But “Weapons of Math Destruction” is the first book in a while to really shake me up, forcing me to reassess my enthusiasm for the increasingly widespread use of data science. O’Neil convinced me that I had fallen into the same trap that has lured so many technologists before me — namely, believing that a technology that makes us more efficient, and that can do new things to help so many, has no dark side. I’m not a Luddite, and neither is O’Neil, but it is crucial that we consider both the positive and negative influences of data science, and work to reduce the negative ones as much as possible.

The main takeaway from the book is that we shouldn’t get rid of data science or machine learning. Rather, we should think more seriously about where it can help, what sorts of models we’re building, what inputs and outcomes we’re measuring, whether those measures accurately reflect our goals, and whether we can easily check and improve our models. These are tools, and like all tools, they can be used for good and evil. Moreover, because of the mystique and opacity associated with computers and math, it’s easy for people to be lured into thinking that these models are doing things that they aren’t.

If you’re a programmer or data scientist, then you need to read this book, if only to think more deeply about what you’re doing. If you’re a manager planning to incorporate data science into your organization’s work, then you should read this book, to increase the chances that you’ll end up having a net positive effect. And if you’re a policymaker, then you should read this book, to consider ways in which data science is changing our society, and how you can (and should) ensure that it is a net positive.

In short, you should read this book. Even if you don’t agree with all of it, you’ll undoubtedly find it thought-provoking, and a welcome counterbalance to our all-too-frequent, unchecked cheerleading of technological change.

Book Review: The Undercover Economist Strikes Back

In order to get an undergraduate degree from MIT, at least when I was there, you needed to take a certain number of humanities and social-science courses. This was to stop you from coming out a completely one-dimensional student; the idea was that rounding out your education with knowledge from other fields was good for you as a person, and also good for you as an engineer or scientist. (And yes, I realize that not everyone at MIT studied science or engineering, but those were the overwhelming favorites.) One of the most popular social sciences was economics — which at MIT actually included a great deal of math.

At the time, I tended to be quite dismissive of economics. I didn’t think it could possibly be interesting, and I couldn’t understand why so many of my friends were taking so many courses in the field. What insights could they possibly gain?

And then, just before I graduated, MIT Press had one of its amazing sales on overstock books. I bought a book by a then-MIT professor named Paul Krugman, called “The Age of Diminished Expectations.” Reading it was quite a revelation for me; I suddenly realized that economics was complex, fascinating, and described the world in all sorts of interesting ways. For years, I read and followed Krugman’s writing, in Slate and then (of course) the New York Times, gleaning what I could about economics. (I also happen to subscribe to many of his political views, but that’s secondary.) Whenever I could find an interesting, well-written book about economics, I would get it, because I found the subject so compelling.

Several years ago, a friend asked if I had read “The Undercover Economist,” by Tim Harford. I hadn’t, but decided it was worth a read, and found it delightful, though in a different way from Krugman’s work. Harford isn’t an economics researcher, but he knows just how to put economics research into words and a perspective that everyone can understand. His examples are often drawn from pop culture, and he’s able to distill academic debates and intrigue to their essence. The fact that he’s very funny only adds to his appeal. I’ve since become quite a fan of Harford’s, listening (among other things) to “More or Less,” the BBC podcast he hosts, a sort of Mythbusters of statistics (Mathbusters?).

So it should come as no surprise that I ordered his latest book, “The Undercover Economist Strikes Back,” almost as soon as it came out earlier this year.  I just read it cover to cover over the weekend, and came away delighted.  As someone who has been reading Krugman’s work for years, and who also listens to NPR’s excellent Planet Money podcast, I can’t say that there was a huge amount of new information in this book.  But it was written so well, and put things into such nice context, that this doesn’t matter.

Harford has a gift for making economics not only understandable, but also interesting and relevant to our own lives. In “The Undercover Economist,” he covered microeconomics, the study of how businesses and individuals respond to incentives. In this new book, he takes on macroeconomics, a different kettle of fish altogether: how governments and whole economies work. If you think of a macroeconomy as a complex system, it’s no surprise that the aggregate behaves differently from its individual, constituent agents. (This, it took me many years to learn, is a much better explanation than the one economics instructors usually give their students, which is simply that “macro is different from micro.”)

The book covers all sorts of great stuff, starting with recessions, moving on to unemployment, and touching on a large number of topics that are in the newspaper each day. These topics affect every one of us, and probably seem strange or detached from our daily reality, but they are actually quite important, particularly if you live in a democracy and need to separate real economics from crazy talk.

Harford includes a great definition and discussion of what money is, and brings up the famous story of the island of Yap, which used huge, largely immovable stones as money.  He also introduces the different schools of thought on the subject, and where (and how) they differ — and how much of what politicians in the US and Europe have been saying and doing over the last five years has been foolish or misplaced.

The question-and-answer format in which he wrote the book is a little tedious, but much less so than I expected. Really? Yes, really.

To my mind, the topic most obviously missing from the book was currency, and how exchange rates can affect an economy. If you live in the US, or even in the UK or the eurozone, you can largely ignore currency issues. Sure, there are exchange rates, and yes, they affect you to some degree, but they aren’t a huge deal.

In Israel, by contrast, the exchange rate is a huge deal, because Israel imports and exports so much. The dollar’s rise and fall affects everyone, from high-tech software companies to people shopping at the supermarket. The ways in which the Bank of Israel has intervened, buying dollars just to keep things relatively stable (while claiming it was doing no such thing), are impressive, and they point to the sorts of challenges that small, trade-oriented economies face but that large ones don’t. I’m not sure whether this omission was due to time or space constraints, or whether, as someone living in England, Harford simply hasn’t had to think or worry much about currency issues.

I’ve changed my tune 100 percent since I was an undergrad; I now find economics totally fascinating, and very much enjoy reading the sorts of books that Harford puts out. If you’ve always wondered what macroeconomics is, or what the newspapers are talking about when they mention recessions, or whether the politicians suggesting budget cuts during the latest recession were saying the most obviously brilliant thing or the most foolish thing imaginable, Harford’s book is a fun, interesting read, and is highly recommended.