Research, Privacy and Facebook

  • Author: Noah Berlatsky
  • Published: Monday, August 25th, 2014

“I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck who was rapidly approaching a point of nervous collapse. He constantly pulled on his earlobe and twisted his hands. At one point he pushed his fist into his forehead and muttered, ‘Oh, God, let’s stop it.’”

An account of torture? Not quite. The quotation here is Stanley Milgram’s disturbingly triumphal description of his famous 1963 experiment on the nature of evil. Milgram, a Yale psychologist, was trying to test theories that suggested atrocities like the Holocaust were possible because people tend to obey the commands of authority figures, even if those commands are evil. Milgram had lab-coated assistants order research participants to administer what they thought were dangerous electric shocks. In fact the shocks were fake; the recipients were actors, who pretended to be distressed. Milgram was startled to discover that the participants, told they were participating in important research, would turn the “shocks” up and up until the actors screamed in distress.

Milgram proposed that his experiments, which showed people were willing to inflict pain when ordered to do so, demonstrated how the Nazis had convinced people to commit atrocities. He felt he had discovered a moral truth, but in doing so, he himself arguably violated ethical practices. He had lied to his research subjects in order to put them in a situation where they experienced, on his own account, severe emotional distress, even pushing them to the point of “nervous collapse.” As a result of Milgram’s experiment and ones like it, ethical standards for experiments have been considerably tightened; institutional review boards (IRBs) have to sign off on proposals, and will reject experiments which seem likely to traumatize participants, or which use undue deception. Milgram’s experiment would not have passed such a threshold, and could not be conducted today — or so the scientific community likes to think, anyway.

Today, social media has created new ways for researchers to follow in Milgram’s ethically dubious footsteps. Recently, it was revealed that Facebook had manipulated the News Feed of some of its users — the News Feed being the main scroll of updates and messages that you see when you log into the site. The Atlantic’s Robinson Meyer describes the situation:

For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.

Meyer says the experiment (published here) is legal given Facebook’s terms of service, which give the company broad rights to use user data for research purposes. However, the experiment prompted a major backlash, as users (predictably) objected to Facebook surreptitiously manipulating their emotional states in the name of science. Ilana Gershon, a social media researcher and professor of anthropology at Indiana University, told me that Facebook users were right to be angry. Gershon argues:

I think Facebook scientists did do something ethically wrong by experimenting on its users with the intention of producing scientific results without having any oversight on their experiment and without ever giving the users the option to opt out of the experiment. And I also think it is significant that to this day, the users studied still don’t know if they were part of this experiment or not.

Social media expert and researcher danah boyd expands this critique, arguing that the problem is not merely the manipulation of data for research purposes, but the fact that Facebook routinely manipulates data without user consent, input, or knowledge. Boyd explains:

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involve curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Basically, Facebook is always experimenting on you, trying to keep you happy and interested by filtering the information you see without ever telling you that they’re filtering the information. Boyd queries, “what would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles?” Should sad or unhappy things be kept from you because they’re likely to upset you? Who gets to decide that? An algorithm?

Gershon commented, “I find it fascinating that this experiment, the substance of it, is to treat people as manipulable machines – that the experiment suggests that people can figure out an algorithmic way to manipulate others’ emotions.” The problem is not just Facebook’s research methods, but the vision of human nature behind those research methods. Facebook’s business model, and the business model of social media in general, views people not as human beings, but as inputs to be exploited and controlled.

There is a clear parallel between Facebook’s business model of aggressive manipulation and Milgram’s experiments. In the name of researching how people can be manipulated, Milgram used and exploited people, plugging them into his thesis and carefully recording their emotional distress. He did conduct follow-up interviews to make sure that there was no long-term psychological damage or distress, and similarly it seems unlikely that the Facebook experiment would result in long-term damage. But even if there is no easily quantifiable harm, there remains a question of whether it is acceptable to treat people as things or lab rats, whether in the name of research into great moral truths or in the name of marketing.

The promise of social media is that it will connect us to each other, and make it possible to share more closely with the people we love and care about. The threat, it seems, is that users of social media turn into a consumable content mill for everyone else: millions of people-engines grinding out insight, uplift, outrage, and demographic data at the synchronized command of algorithmic monetization. Facebook’s experiment suggests that it sees users as something to be used — which is unsettling, even if they are using us to inflict pleasure on each other instead of pain.

Noah Berlatsky writes for the Atlantic, Salon, and Splice Today; he is the editor of the Hooded Utilitarian, a comics and culture blog.