Experimental evidence of massive-scale emotional contagion through social networks

We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

In 2014, Facebook published the results of a secret social experiment titled "Experimental evidence of massive-scale emotional contagion through social networks", conducted with the intention of understanding whether emotions can be transferred in the absence of in-person interaction and non-verbal cues. Although we know today, to a large extent, that fake news and emotionally charged content can transfer emotions through social media, as evidenced in the run-up to the 2016 US elections, at the time of the study it was unclear whether emotions could be transferred through social media interactions.

The Study:


Working with Cornell University's Departments of Communication and Information Science, Facebook investigated whether emotional contagion (the transfer of emotional states) occurs in massive social network interactions in the absence of in-person contact. The study examined whether exposure to positive or negative posts in Facebook's News Feed changed people's emotions, as measured through the language of their subsequent posts and status updates. It was conducted on approximately 700,000 individuals, identified by their User IDs, in two parallel experiments run over a one-week period. In one experiment, a user's exposure to positive emotional content was reduced by anywhere between 10% and 90% for a given News Feed viewing; in the other, the user's exposure to negative emotional content was reduced by the same range.

The subsequent status updates and posts were then analysed using the Linguistic Inquiry and Word Count (LIWC) software to classify them as 'positive', 'negative', or 'neutral'. When positive posts were reduced in the News Feed, the percentage of positive words in people's status updates decreased by 0.1% while the percentage of negative words increased by 0.04%. Conversely, when negative posts were reduced, the percentage of negative words decreased by 0.07% and the percentage of positive words increased by 0.06%. The study concluded that this is indicative of emotional contagion based purely on textual interaction, without the need for in-person interaction or non-verbal behaviour (REF 1). It also found that people exposed to fewer emotional posts (positive or negative) were less expressive in the days following the exposure, indicating that emotional exposure affects people's social media engagement. While people go through a range of experiences every day that affect their mood, the study concluded that, despite the small effect sizes observed, this is strong experimental evidence that emotions can spread through social networks. It further argued that, given the massive scale of Facebook's network, these small effects would aggregate to larger consequences.
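To make the study's metric concrete, here is a minimal sketch of the word-percentage measure it relied on: the share of words in a status update that match a "positive" or "negative" dictionary. The word lists below are illustrative stand-ins of my own, not LIWC's actual lexicon.

```python
# Toy version of the metric reported in the study: percentage of words in a
# post that appear in a positive or negative emotion dictionary.
# These word sets are illustrative placeholders, not LIWC's real dictionary.
POSITIVE = {"happy", "great", "love", "nice", "awesome"}
NEGATIVE = {"sad", "angry", "hate", "awful", "terrible"}

def emotion_word_percentages(text: str) -> tuple[float, float]:
    """Return (% positive words, % negative words) in the text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

pos_pct, neg_pct = emotion_word_percentages("I love this awesome day, not sad at all")
```

Note that in this sketch "not sad" still counts as a negative word, because each word is scored in isolation; the same context-blindness in LIWC itself is discussed below.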

Research Design Neglect:

It remains unclear whether a rigorous scientific approach was taken to designing the study and analysing its results. To put things into perspective, consider the two models shown below.

Two models of research design. Image by author.

The ideal is the 'Linear Model': you come up with a theory or model, form a hypothesis, collect data on a sample, analyse and interpret the data, validate the findings, and present them. But anyone who has worked in research, or even run a lab experiment, knows that this is far from how reality works, which is why we need the 'Iterative Model'. In reality, the researcher goes through iterations of self-doubt, analysis, new theories, and further data gathering, and even revisits the original research question to check that the right question has been asked and the right approach taken. This does not seem to have happened in the Facebook study. The paper admits that the study was done over a period of one week: one week to observe and gather data, which was then analysed without subsequent corrections or further data gathering before declaring that emotions can be transferred through social media interactions. While it is possible that a large chunk of work was done after that week to clean, analyse, and interpret the data, no details of issues encountered with the collected data, challenges with data quality, or challenges to the initial hypothesis were presented or even hinted at in the paper. It is almost as if the data was collected, analysed, and presented without any evaluation.

Sample Size Neglect:


Given the small size of the population on which this experiment was conducted (roughly 700,000 users split across two parallel experiments) relative to Facebook's vast user base, it seems presumptuous to extrapolate the results to all Facebook users and suggest a link between the emotions a user is exposed to through their News Feed and how they feel afterwards. It should be noted that the study only found a very small change in the emotions users subsequently expressed (a maximum of 0.1%; we will get to the 'emotion' part next), and no context was provided on the backgrounds of the subjects or the events occurring around them, or on whether these could have biased the study. The study also fails to analyse or discuss the difference in emotion between a user whose exposure to emotional content was reduced by 90% and one whose exposure was reduced by only 10%.
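This tension between a huge sample and a tiny effect is worth making concrete. The simulation below uses invented numbers (mean 5% positive words, standard deviation 2, a 0.1 percentage-point shift, 100,000 users per arm), not the study's data; it only illustrates that at this scale even a negligible shift is statistically significant while the standardized effect size stays tiny.

```python
# Illustrative simulation (invented numbers, not data from the study):
# with ~100k users per condition, a 0.1 percentage-point drop in mean
# "% positive words" yields a huge z statistic, yet Cohen's d stays tiny.
import math
import random
import statistics

random.seed(0)

n = 100_000  # rough order of magnitude of one experimental condition
# Hypothetical per-user "% positive words" scores: mean 5%, sd 2.
control = [random.gauss(5.0, 2.0) for _ in range(n)]
treated = [random.gauss(4.9, 2.0) for _ in range(n)]  # mean 0.1 pp lower

diff = statistics.mean(control) - statistics.mean(treated)
se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
z = diff / se   # two-sample z statistic: far beyond the 1.96 threshold
d = diff / 2.0  # Cohen's d against the assumed sd of 2: around 0.05
```

Statistical significance here says nothing about practical significance, which is exactly why the study's small percentages deserve more scrutiny than they received.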

Algorithm Bias Neglect:


As for the 'emotions' analysed in the study, the software used was the Linguistic Inquiry and Word Count (LIWC) tool, which scanned the status updates of the 'test subjects' and classified each text as laden with 'positive' or 'negative' emotion. It should be noted that LIWC merely looks for the presence of pre-programmed keywords to classify a sentence as positive or negative (REF 2). With its limited vocabulary (915 words, against roughly 170,000 words in the English language (REF 2)) and its built-in approach of analysing words in isolation, LIWC is inherently incapable of comprehending context, slang, idioms, irony, or sarcasm, and produces a context collapse through its analysis (REF 2).

To put this in perspective, I tried out LIWC's sample analysis using the text "Oh great! Now I have to spend $500 fixing my motorcycle because someone couldn't stop at a light".

It is obvious to anyone that this sentence has undertones of sarcasm and annoyance in it because I am miffed that someone damaged my motorcycle without stopping at a light. But not LIWC. LIWC classified this text as ‘positive’ with a score of 97.6 and with zero negative emotions. You can give it a try at LIWC’s website.
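A toy keyword-based classifier in the spirit of LIWC's approach makes the failure mode easy to see. The word lists here are my own illustrative placeholders, not LIWC's real dictionary, and the classifier is a sketch, not LIWC's actual scoring.

```python
# Minimal keyword-matching classifier (illustrative word lists, not LIWC's
# dictionary). Because each word is scored in isolation, the sarcastic
# "Oh great!" sentence comes out labelled positive.
POSITIVE = {"great", "happy", "love", "good"}
NEGATIVE = {"bad", "sad", "hate", "broken"}

def classify(text: str) -> str:
    words = [w.strip(".,!?$").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

label = classify("Oh great! Now I have to spend $500 fixing my motorcycle "
                 "because someone couldn't stop at a light")
```

The only dictionary hit in the whole sentence is "great", so the sarcasm is read as genuine positivity, mirroring the behaviour I saw from LIWC itself.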

LIWC further clarifies that the higher the score for emotional tone, the more positive the message is.

In addition to these drawbacks, the study ignored any photos accompanying the texts, which could have given the researchers context for the emotions being classified. LIWC's output was taken at face value, without backing from validation data. The research studies quoted in support of LIWC (REF 1) either contain no validation work or provide evidence that LIWC cannot distinguish between positive and negative emotions (REF 2). This neglect of the algorithm's bias is striking, and it is even more shocking that the study did not address any issues encountered during the 'analysis' phase with regard to the classifications given by LIWC.

Social Sciences Neglect:


The lead author of the study was involved in another study which found that people generally self-censor their posts before finally sharing them on Facebook (REF 2). By ignoring the social context in which status updates are shared and simply counting positive or negative words, the authors failed to account for underlying emotions or societal contexts that could alter or bias the experiment. While they acknowledge that people go through a range of experiences that affect their mood (REF 1), little information is given on how this was accounted for in the experiment or how it affected the results. The authors suggest that the findings have importance for public health, given the well-documented connection between emotions and physical well-being. However, this is an exaggerated extrapolation from a minuscule sample relative to the number of users on the platform. Without proper context for the emotions users feel, or for the posts they were exposed to, a 'context collapse' (REF 2) occurs, which can influence one's emotions in varied ways that cannot simply be classified in a black-and-white manner by the LIWC software.

Ethical Concerns with the study:


Facebook was at the center of a social media outrage following the publication of this study, which raises a number of ethical concerns that have been voiced by social media advocates. First and foremost, Facebook conducted the study on roughly 700,000 participants without informed consent: participants were neither told about the study nor given the chance to decide whether they wanted to take part. Let's tackle the ethical concerns using the Belmont Report as the yardstick. The study goes against the accepted principle of 'Respect for Persons', under which individuals are treated as autonomous humans rather than test subjects. The study cites Facebook's Data Use Policy to argue that such research is covered by the terms and conditions users agree to when signing up for Facebook (REF 1). But it is impractical to assume that all users globally have read every line of those terms and conditions. While the terms and conditions may serve as a legal defense, the ethics of using consumers as unsuspecting test subjects adds more fuel to the criticism that Facebook treats its consumers like products (REF 2), using consumers' data, emotions, and interactions to target them with more advertisements and content.

It also remains unclear from the study who benefitted from it. Was it performed altruistically, to understand the emotional toll a person feels after exposure to varying emotions on Facebook, with the results used to better fund mental health programs? Or was it conducted so that Facebook could better understand its users' emotions and how they can be micro-nudged into patterns of behavior that are beneficial and profitable to the company (REF 3)? Finally, the absence of institutional review board oversight makes it unclear whether this was an academic study or a study run by Facebook for its own profit. Facebook would not release a new feature to users without thorough review; the lack of review of this study and its approach by an institutional review board is a gross oversight.

Conclusion:

This study raises many questions about the neglect shown for research design and ethical considerations. Without a proper design, can the study even be taken at face value and accepted? How does informed consent work once you accept the terms and conditions? Being legal is far from being ethical, because the law is always catching up to advances in technology. Somewhere in our quest to 'be first', we may have lost the ability to have some empathy.
