The Facebook Emotion Study in a Broader Context

Which line matches the first line, A, B, or C? In the Asch conformity experiments, people frequently followed the majority judgment, even when the majority was (objectively) wrong.

Author note: This post contains both some criticisms of the PNAS paper's claims of 'emotional contagion' and some reasons why we should debate the ethics of the study in the broader context of the experimental studies that have come before it. In response to some comments on Twitter, I should point out that my discussion of the study's limited effects is not a post-hoc justification of whether the study was ethical. However, IRB approval of social science studies does take into consideration the expected impacts (how big and in what direction) of the experimental manipulation.

Another PNAS paper is inciting some controversy, specifically over whether the experimental manipulation involved was ethical. There are some legitimate concerns over the ethics of the study, but I think we have to put it in the context of both the experimental manipulations that are happening on the web all the time and the specifics of how this study was conducted.

Here are a few news articles to give you the basics on what the study found, and why some are considering the study to be unethical:

And a few blog posts:

The news articles linked above are in general consensus that the study, while legal, was probably not ethical. This is mainly because the researchers didn’t get “informed consent” from participants – in other words, because participants didn’t know they were subjects of an experiment.

But let’s consider the history of this study, as well as the specific manipulations. Did the experiment pose more than minimal risks to participants?

This study is nothing new under the sun

Experimental manipulations of real web content, also known as field experiments, are nothing new, and large field experiments on web content are also largely conducted without informed consent. A huge number of experiments predate this Facebook study, from experimental manipulations of your search engine results, to a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections (the Facebook experiment in social influence and political mobilization), to Google ad experiments (and you can bet that emotional outcomes are often the target of such ad manipulations).

One study published in the Journal of Politics conducted a field experiment on Facebook that purposely induced “feelings of anger and anxiety” in people through experimentally manipulated Facebook ads, to see whether users would click through to a website as a result.

And last year, Facebook changed a news feed algorithm to more prominently highlight Buzzfeed-like news headlines.

In other words, Facebook’s latest “massive psychological experiment” is one in a long line of experimental manipulations of web content. Facebook scientists didn’t wake up one day in 2012 to think about manipulating people’s news feeds to see what would happen. These types of manipulations have been happening for a while and, if we are honest, we knew it. We LIKE filter bubbles and personalized ads. But guess how Google and Facebook have become so good at giving us what we want, what we will click on? You got it, field experiments with random variations of digital content.

The Methods

So what did this ‘unethical’ Facebook study really do? The following is from the abstract of the PNAS paper by Kramer, Guillory, and Hancock:

We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

But emotional contagion is a rather strong description of what the study actually measured.

The study manipulated the extent to which people (specifically 689,003 people) were exposed to emotional expressions in their News Feed. It did so in the form of two week-long experiments: in one, each time you loaded your News Feed, Facebook filtered out between 10% and 90% of your friends’ statuses containing positive words; in the other, it filtered out between 10% and 90% of friends’ statuses containing negative words. As the paper points out, “[i]t is important to note that this content was always available by viewing a friend’s content directly by going to that friend’s “wall” or “timeline,” rather than via the News Feed. Further, the omitted content may have appeared on prior or subsequent views of the News Feed. Finally, the experiment did not affect any direct messages sent from one user to another.”
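To make the manipulation concrete, here is a minimal sketch of that filtering step: on each feed load, every post containing a targeted emotion word is independently omitted with a fixed per-user probability between 10% and 90%. The post format and word lists below are hypothetical stand-ins (the paper classified posts using LIWC word counts), and this is of course not Facebook’s actual code.

```python
import random

# Hypothetical stand-ins for the LIWC-style word lists used in the paper.
POSITIVE_WORDS = {"happy", "great", "love", "awesome"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def has_any(text, words):
    """True if the text contains at least one word from the given set."""
    return any(w in words for w in text.lower().split())

def render_feed(friend_posts, omit_prob, condition="reduced_positive"):
    """Simulate one News Feed load for one user in one condition.

    Each post containing a targeted emotion word is independently dropped
    with probability `omit_prob` (fixed per user, between 0.10 and 0.90).
    Omitted posts stay visible on the friend's own wall/timeline and may
    still appear on other feed loads.
    """
    target = POSITIVE_WORDS if condition == "reduced_positive" else NEGATIVE_WORDS
    return [p for p in friend_posts
            if not (has_any(p, target) and random.random() < omit_prob)]

# One user's omission probability, drawn once for the week (toy example).
omit_prob = random.uniform(0.10, 0.90)
posts = ["had a great day", "traffic was awful", "made soup tonight"]
print(render_feed(posts, omit_prob, condition="reduced_positive"))
```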

As the outcome, the researchers measured the percentage of the words each person produced during that week that were positive or negative.
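A minimal sketch of that outcome measure, again with toy word lists standing in for the LIWC dictionaries:

```python
# Same toy word lists as in the sketch above.
POSITIVE_WORDS = {"happy", "great", "love", "awesome"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def emotion_word_percentages(week_of_statuses):
    """Percentage of all words a user posted during the week that were
    positive or negative, according to the toy word lists above."""
    words = [w for status in week_of_statuses for w in status.lower().split()]
    if not words:
        return 0.0, 0.0
    pos = 100 * sum(w in POSITIVE_WORDS for w in words) / len(words)
    neg = 100 * sum(w in NEGATIVE_WORDS for w in words) / len(words)
    return pos, neg

print(emotion_word_percentages(["feeling happy today", "work was awful"]))
# -> roughly (16.7, 16.7)
```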

The results? When positive posts were reduced in the News Feed, the percentage of positive words in participants' status updates decreased by 0.1 percentage points compared with the control group (where some statuses were filtered at random), and the percentage of negative words increased by 0.04 percentage points.

Tal Yarkoni, on his blog [citation needed], gives a great metaphor for how we can understand the size of the effects this study found on users’ emotional states:

“the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.”
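The metaphor implies a standardized effect of roughly 0.05 inches / 2.8 inches ≈ 0.02 standard deviations. A quick simulation (with entirely made-up means and spreads for the “% positive words” measure, and a hypothetical 150,000 users per condition) shows why an effect that small still comes out highly “significant”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150_000  # hypothetical number of users per condition, for illustration only

# Made-up per-user "% positive words" values: the treated group's mean is
# shifted down by 0.1 percentage points, i.e. about 0.02 standard deviations.
control = rng.normal(loc=5.2, scale=5.0, size=n)
treated = rng.normal(loc=5.1, scale=5.0, size=n)

t, p = stats.ttest_ind(treated, control)
d = (treated.mean() - control.mean()) / np.sqrt((treated.var() + control.var()) / 2)
print(f"p = {p:.1e}, Cohen's d = {d:.3f}")
# With samples this large, even a ~0.02-SD shift is "statistically significant".
```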

In summary, the results are tiny – they reach statistical significance only because of the study's enormous sample size (and hence statistical power). Also, the measured outcome is how many positive or negative words you expressed in your status updates during the week of the manipulation – not your mood or emotional state directly. So we should probably explore other reasons why seeing a greater percentage of negative posts from your friends would make you write more negative words than normal, apart from changes in your ‘meatspace’ mood.

Our real gripe is that Facebook intentionally made us sad, not that they ran an experiment

Given the many examples of other online content experimental manipulation studies, including former studies conducted using Facebook, it seems the biggest reason people are complaining about the ethics of this study in particular is that Facebook apparently intentionally made us sad.

But did they really? They didn’t make our Facebook friends post any more negative statuses than they would have otherwise. Those of us in the ‘negative’ experimental condition simply saw somewhat fewer of our friends’ positive statuses that week (how many fewer varied from user to user). It certainly wasn’t all ‘puppies’ for one group and all ‘death’ for the other.

Most Facebook users are probably aware that their news feed isn’t simply a collection of everything their friends post. As the PNAS study pointed out, “[b]ecause people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends.” Facebook has probably experimented with many different ways to filter the content that shows up on your news feed.

But did we really feel sadder?

Yarkoni also brings up a point in his blog post that occurred to me last night when I first glanced at the study: the fact that users in the experimental conditions produced content with very slightly more positive or negative emotional content doesn’t mean that those users actually felt any differently.

This is a very good point. How positive or negative our Facebook statuses are might often, for a variety of reasons, not reflect our actual moods or emotional states. Yarkoni gives a great example where you might think twice about writing ‘had an awesome day today with my besties!’ if the last update on your news feed was from a friend who just had a death in the family.

There are other reasons why you might write positive statuses even when you are actually feeling down. Seeing more of your friends’ honestly negative posts in your news feed might prompt you to speak honestly about your own current mood, while seeing mostly positive posts from your friends might create social pressure to post a positive status as well.

http://youtu.be/QxVZYiJKl1Y

So something we have to consider when reviewing the claims of the PNAS Facebook study: how good an indicator is a Facebook status of the actual mood or emotional state of the person posting? At best, your Facebook status is only a partial, imperfect indicator of your actual mood or emotional state, so the results of the PNAS study may say more about social pressure than about emotional contagion.

Informed Consent

This excerpt from Wikipedia shows why the lack of informed consent doesn’t necessarily ‘break’ the study:

… while informed consent is the default in medical settings, it is not always required in the social sciences. Here, research often involves low or no risk for participants, unlike in many medical experiments. Second, the mere knowledge that they participate in a study can cause people to alter their behavior, as in the Hawthorne Effect: "In the typical lab experiment, subjects enter an environment in which they are keenly aware that their behavior is being monitored, recorded, and subsequently scrutinized."[28]:168 In such cases, seeking informed consent directly interferes with the ability to conduct the research […] In cases where such interference is likely, and after careful consideration, a researcher may forgo the informed consent process. This is commonly done after weighing the risk to study participants versus the benefit to society […]

In social science settings where an experiment presents no more than minimal risks to the subjects, and where their anonymity is conserved, informed consent is not necessary.

A note: Cornell’s institutional review board (IRB) found the Facebook study to be ethical on the grounds that “Facebook filters user news feeds all the time, per the user agreement”.

Update June 30, 2pm: Matt Pearce at the Los Angeles Times (@mattdpearce) tweets:

Facebook apparently ran the emotion-manipulation experiment on its own in 2012, under consent given by its own sweeping terms of service. [...] Facebook then gave the results of that experiment to two outside academic researchers, who worked on the research with an FB data scientist. [...] The research was at least notionally approved by Cornell's review board on the grounds that Facebook had already run its own experiment. [...] Additionally, according to Cornell, the academics apparently had "access only to [Facebook's] results – and not to any data at any time."

Summary

Do you think the recent PNAS Facebook study was largely unethical? If so, what about the other experiments recently conducted with Facebook content? Often, we tend not to complain about the ethics of experiments whose outcomes are “positive” – greater information seeking, positive emotions, voting, etc. The PNAS study certainly had some slightly “negative” outcomes – making people express slightly more negative words in their Facebook statuses when they saw fewer positive words in friends’ statuses. But did the study involve any more than minimal risks to users’ emotional states? The study also lasted only one week. A longer-term study would likely have made an IRB more wary, but a week is most likely not enough time to chronically alter mood or emotional health.

So I leave you with the following points to think about. Feel free to disagree.

1) Online experiments without informed consent happen all the time

2) The experimental manipulation in the PNAS study was based on what your friends on Facebook were already saying

3) Facebook is a free service, but it is run by a very commercial and competitive company – it can ultimately do whatever it wants with its algorithms, etc.

4) What you write in your status may not be a good proxy for your actual mood anyway

5) The experiment lasted only a week – a longer-term emotional manipulation experiment would have been potentially more harmful to participants