When science reporters write about technology that isn’t really new, they should say so up front.
In Sunday’s Boston Globe, reporter Carolyn Y. Johnson SM’04 described an exciting project conducted by two MIT alumni who claim to be able to identify gay men based on their Facebook friends.
That work was even more exciting in Fall 2007, when the alumni were students doing a class project for Ethics and Law on the Electronic Frontier (6.805).
Johnson’s article reverberated across the Internet, maybe because it involved sexuality and secrets, or maybe because it was a good read, treated its subject fairly, and captured readers’ imaginations.
But the article buried “this isn’t new” some 944 words in, causing a serious problem. Bloggers and news services re-reported the story without noting that the work was the computer research equivalent of a moldy loaf of bread — interesting to look at, but aging and easily replaced. Some rewrites missed the facts. A New York Daily News photo caption said “A new project at MIT claims to be able to predict sexual orientation based on cues from a person’s Facebook page.”
OK, Daily News caption editors, if that’s a “new project”, then the following facts are also new: MIT’s dean of admissions has just resigned after the shocking revelation that she lied on her resume; a group of naysayers is predicting a coming crash in the stock market, but most reasonable people think the “housing bubble” is here to stay; and a fresh-faced senator from Illinois is mulling a presidential candidacy.
Non-novel stories anger experts. My space physicist friend gripes every few weeks that a newspaper has reported cool but well-understood facts about, say, the aurora as though they were a recent discovery. And have you read about the “$150 space camera”? MIT students have been taking photos using inexpensive weather balloons for years — although it was very cool to see CNN feature one such project this fall.
Non-novel stories mislead readers. Truly great science stories aren’t just fun to read: they help people understand how the latest discoveries might be used to change human life. It’s not okay for news stories to gloss over timeliness.
Johnson herself is a graduate of MIT’s science writing program, so she has spent time with experts on the bleeding edge of technology. Given those advantages, I think her article was good but not great: it hit “interesting” right on the head but missed “timely” and grazed “relevant.”
I see two more problems. First, Johnson’s article omitted a crucial statistic that would let us gauge the work’s importance. And second, a startup founded six years ago has been using similar tactics to improve international security. Shouldn’t that work have figured in this story?
First problem: the article doesn’t mention the work’s false positive rate. Some 1,133 words in, we get: “Although the researchers had no way to confirm the analysis with scientific rigor, they used their private knowledge of 10 people in the network who were gay but did not declare it on their Facebook page as a simple check.”
OK. But how many “possible gay men” did the project predict? We can’t even begin to guess what percentage of the predictions were correct. We don’t know whether this research is any good.
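The point can be made concrete with invented numbers (a sketch, not figures from the project): confirming that 10 known cases were correctly flagged means very different things depending on how many people the model flagged in total.

```python
# Hypothetical numbers only -- the article never reports how many people
# the project flagged, which is exactly the problem.

def precision(true_positives: int, total_flagged: int) -> float:
    """Fraction of flagged individuals who were correctly identified."""
    return true_positives / total_flagged

# The researchers' check confirmed 10 known cases among the flagged set.
confirmed = 10

# If the model flagged only 12 people, it looks impressively accurate...
print(precision(confirmed, 12))   # roughly 0.83

# ...but if it flagged 500, the vast majority of predictions may be wrong.
print(precision(confirmed, 500))  # 0.02
```

Without the denominator, the “10 people” check is uninterpretable, which is why the missing statistic matters.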
Second problem: isn’t someone doing this stuff for a living? When I took 6.805 in Fall 2007, I saw this work proposed and asked: OK, so you can speculatively identify gay people. But what useful things can you do?
Two years later, Johnson’s article dodges that same question: what useful things can you do? I think useful applications are out there, but they aren’t brand new.
Everyone knows that you can analyze networks to find hidden characteristics. Can that change the world? A company called Palantir, founded in 2004, has spent the five years since mining network information like PayPal data to find terrorists. (See the Sept. 4, 2009 Wall Street Journal article “How Team of Geeks Cracked Spy Trade”, or find out about the project the way I did — at their Career Fair booth a few years ago.)
Johnson quotes a 2009 conference paper where scientists warn: “Using friends in classifying people has to be treated with care,” because the classifications can be weak. Sounds like someone ought to check this against the social-network-terrorist-sniffers whose software has, the Journal reports, “foiled a Pakistani suicide bombing plot on Western targets and discovered a spy infiltration of an allied government.” What’s their false positive rate? Do similar network analysis principles apply to Facebook friendships and PayPal transactions? How does the MIT work relate to this kind of industry work?
My uninformed guess is that the MIT work was a solid feat of engineering, likely to improve the way people do this kind of analysis. I hope they get a published paper out of it.
Science reporters should strive to represent the state of the art, not just the juiciest parts of the last few years’ results. I commend Johnson for writing something that I enjoyed reading and that helped engage people in parts of an important kind of research. I just wish the news story had mentioned what was, and what wasn’t, new.