Sunday 29 June 2014

Facebook is learning the hard way that with great data comes great responsibility

Facebook CEO Mark Zuckerberg in front of the company's logo

Facebook released the results of a study in which its data scientists skewed the positive or negative emotional content that appeared in the news feeds of nearly 700,000 users over the course of a week in order to study their reactions. The study found evidence of “emotional contagion”: in other words, the emotional content of posts bled into users’ subsequent actions.


The study was almost certainly legal: a line in the terms of service users agree to when they sign up cedes control of data for “data analysis, testing, [and] research,” and the study was subject to a review process. But the reaction from many users and commentators has been pretty negative. The reminder of how heavily engineered the ubiquitous news feed is, the sensitivity of examining psyche and emotion, and the knowledge that Facebook can run these sorts of studies without people’s knowledge or explicit permission seem to have combined to strike a chord.


There are a couple of points worth making in defense of the study. First, tech companies are constantly making changes that apply to some people and not others.


Many people at big Web companies have told me: If you use our product you are in an experiment. (They usually mean an A/B test.)
— Farhad Manjoo (@fmanjoo) June 29, 2014
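
An A/B test, for readers unfamiliar with the term, simply splits users into groups and shows each group a slightly different version of the product. Below is a minimal, purely illustrative sketch of how that assignment might work; the experiment name, hashing scheme, and 50/50 split are assumptions for the example, not a description of Facebook’s actual systems.

# Toy sketch of A/B test assignment. Hypothetical names and numbers throughout.
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically place a user in 'treatment' or 'control' for an experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the hash onto [0, 1) so the split is stable across visits and servers.
    position = int(digest[:8], 16) / 0x100000000
    return "treatment" if position < treatment_share else "control"

print(assign_bucket("user_12345", "news_feed_ranking_v2"))  # 'treatment' or 'control'

Hashing the user ID rather than flipping a coin on each visit keeps a given user in the same group for the life of the experiment, which is why people “in an experiment” typically stay in it.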



Second, Facebook’s primary business is advertising. The very point of ads is to influence emotions and behavior, and they often use data and psychological insights to do so. The news feed isn’t a product that’s somehow “neutral”; it’s constructed with a particular goal in mind.


“FB experiments with what it shows you in order to understand how you will react,” sociologist Elizabeth Popp Berman writes. “That is how they stay in business.”


University of Texas research associate Tal Yarkoni has written a detailed defense of the study, arguing that it’s not far outside what companies usually do, and that using this kind of data for basic research is valuable.


And despite what some pieces seem to imply, Facebook was not really manipulating its users’ emotions, Yarkoni writes:


In particular, the suggestion that Facebook “manipulated users’ emotions” is quite misleading. Framing it that way tacitly implies that Facebook must have done something specifically designed to induce a different emotional experience in its users. In reality, for users assigned to the experimental condition, Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words.


Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions. Now, given that a large amount of content on Facebook is already highly emotional in nature–think about all the people sharing their news of births, deaths, break-ups, etc.–it seems very hard to argue that Facebook would have been introducing new risks to its users even if it had presented some of them with more emotional content. But it’s certainly not credible to suggest that replacing 10% – 90% of emotional content with neutral content constitutes a potentially dangerous manipulation of people’s subjective experience.
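
To make that concrete, here is a toy sketch of the kind of filtering Yarkoni describes: posts automatically flagged as containing emotional words are withheld from a user’s feed with some probability, and nothing new is added. The word lists, removal rate, and function names below are invented for illustration; the actual study used automated word-counting software and removal rates between 10% and 90%.

# Toy sketch only: not the study's code or Facebook's. All names are hypothetical.
import random

POSITIVE_WORDS = {"happy", "love", "great"}
NEGATIVE_WORDS = {"sad", "angry", "terrible"}

def is_emotional(post: str, words: set) -> bool:
    """Crude stand-in for automated detection of emotional words."""
    return any(w in post.lower().split() for w in words)

def filter_feed(posts, target_words, removal_rate=0.3, seed=None):
    """Drop each flagged post with probability removal_rate; keep everything else."""
    rng = random.Random(seed)
    kept = []
    for post in posts:
        if is_emotional(post, target_words) and rng.random() < removal_rate:
            continue  # withheld from this user's feed; nothing is added in its place
        kept.append(post)
    return kept

feed = ["So happy today!", "Meeting at 3pm", "Terrible news about the game"]
print(filter_feed(feed, POSITIVE_WORDS, removal_rate=0.5, seed=1))

The point the sketch is meant to underline is the one Yarkoni makes: the experimental feed is a filtered version of what the user would have seen anyway, not a feed seeded with new emotional content.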



Still, as Elizabeth Berman points out, the study raises some pretty massive questions that are going to be important for Facebook, similar companies, and the world of big data researchers.


“Does signing a user agreement when you create an account really constitute informed consent?” Berman writes.


Informed consent is the idea that people whose data is collected should understand what is being done and agree to it in “reasonably understood” language. Some argue Facebook’s terms of service don’t meet that standard.


A second question has even bigger implications for these kinds of datasets. The research is potentially valuable, and the amount of information involved is enticing for the company. But ubiquity brings its own questions.


“Do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms?” Berman asks.


Facebook’s sheer size and ubiquity mean that it has access to more data, and that any change it makes has far more impact, than just about any other researcher could imagine. The intimacy and scale of that relationship are among the reasons the reaction has been so intense.


We reached out to Facebook for further comment – here’s what a spokesperson passed on to The Atlantic:


“This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.”





