It has been well over a year now since Facebook enabled its almost two billion users to stream live video. During the roll-out, Facebook unabashedly encouraged its users to embrace the opportunity to “create, share and discover live videos.” Unlike Twitter, which required users to download the Periscope app separately before they could livestream, Facebook offered fully integrated streaming. (Twitter has since been working to eliminate that extra roadblock.)
Using Facebook Live is as easy as pushing the “Live” icon on the Facebook app. First-time users are greeted by a short set of basic instructions – which they can skip – explaining how to get started, how the view counter works and how to interact with viewers. Other than a cheerful reminder that reads, “Remember: They can see and hear you!” nothing alerts users to the ethical minefield that livestreaming video can open. Instead, the sign-off message reads, “Try it! Just relax, go live and share what’s happening.” What could go wrong?
When livestreaming apps such as Periscope and Meerkat first burst onto the scene a couple of years ago, journalism professionals embraced their potential but also debated the ethical pitfalls of these apps at length. Professionals trained and experienced in the moral questions raised by broadcasting live footage to large audiences saw the need to examine the potential harm posed by this technology. Yet Facebook’s developers trust teenagers to figure out the harms on their own, through sometimes costly trial and error.
According to Mark Zuckerberg, Facebook Live marked a “shift in how we communicate, and it’s going to create new opportunities for people to come together.” There is no doubt that Facebook Live has done exactly that, as it has produced its predictable parade of viral stars and shareable content. But Facebook Live has also been used to broadcast murder, torture, rape, beatings and other violent content, presenting some serious ethical concerns.
My point is not that the technology caused these events, or even enabled them. That type of dead-end ethical analysis is highly speculative and amounts to blaming technology for heinous acts committed by individuals. But the ethical analysis does not end there. As the platform on which these videos are posted, Facebook aids in distributing this upsetting content, especially because the content remains available after the livestream has ended if a user chooses to post it. (On Instagram, by comparison, live videos used to disappear once the recording stopped.) This is a choice that was made by the developers at Facebook, and it’s one that carries moral weight, as it gives these acts (and their actors) a notoriety they would otherwise lack. While the availability of this disturbing content raises a smorgasbord of ethical concerns for its creators, hosts, moderators, audiences and subjects, I want to narrow the focus here to one particularly troublesome type of content: livestreamed suicides.
The Livestreamed Suicides
In recent months, several people have broadcast their suicides on streaming services.
As with the violent incidents mentioned above, we cannot assess what role the presence of livestreaming technology played in the tragic decisions these people made. Experts warn against attributing suicide to a single cause. Even if we could somehow demonstrate that the existence of a livestream functioned as a trigger in one case, there might be separate instances in which livestreaming allowed others to see the cries for help and intervene.
Facebook has taken some laudable initiatives regarding this issue. It has an ongoing partnership with reputable suicide prevention programs that work on identifying and reaching out to users displaying suicidal thoughts. It is even contemplating the use of artificial intelligence and pattern recognition to flag content indicative of suicidal tendencies. In the wake of the recent suicides, the social network announced it would extend these measures to its livestreaming function. However, this was not an unforeseeable problem, and one can’t help but wonder why it took a number of people taking their lives before Facebook would take this step.
Compounding the ethical quagmire is the fact that Facebook tends to be slow in removing these types of videos. It took Facebook two hours to remove the video of “Facebook killer” Steve Stephens murdering Robert Godwin. (Contrary to some initial reports, this video was not livestreamed; it was uploaded after the fact.) When the suicide video of the 12-year-old from Georgia began making the rounds on Facebook, the company denied initial requests to remove it, according to a BuzzFeed report. Kyle MacDonald, a New Zealand-based psychotherapist, got a similarly sluggish response when he requested the removal of links to the suicide video. In the opinion pages of The Guardian, he took Facebook to task: “Facebook also claimed that because it is not hosting the video, it is not responsible,” he wrote. “This is despite the fact that due to its inaction the links were widely available on Facebook for anyone to see long after I reported the problem. It has not been verified that the video is authentic but whether it is or it isn’t, the content of the video shows a child committing the most serious act of self harm and is not appropriate for public viewing.” According to the New York Daily News, the video of the Alabama man committing suicide stayed up for two hours and generated more than 1,000 views. A recent video of a Thai man killing his 11-month-old daughter before taking his own life stayed up for 24 hours. Because of livestreaming, Facebook has at times become a platform for snuff movies usually confined to the dark recesses of the internet.
Journalism organizations generally won’t report on individual suicides unless they are newsworthy, and when they are, journalists follow a set of guidelines developed by experts in the field of suicide prevention and reporting. These guidelines include the stipulations that the method used by the victim should not be disclosed, that the word “suicide” should not appear in headlines about individual suicides and that coverage should not be prominent or extensive. While one arguably could find examples where these guidelines are not followed, most responsible news organizations tend to abide by them.
Why? Because experts have established that suicide is contagious, in a sense – one suicide can prompt others to harm themselves. Irresponsible media coverage is one of the contributing factors to this so-called contagion. While I am no expert in the subject matter, graphic and realistic depictions of peers committing suicide seem to combine all the elements that experts agree should be avoided: these videos present the act as a way out to troubled viewers who might identify with the victim, they generate considerable media attention, they show the method used in great detail and they lack context. In other words, this content puts people struggling with suicidal inclinations at risk in a very direct and tangible way.
In February, Zuckerberg addressed the problem, claiming that future artificial intelligence could help detect troublesome content in the long-term. But for the time being, he said, it will be up to the Facebook community to provide a safe environment. This response does not cut it ethically. Facebook and other social media platforms have not caused suicides, but they are responsible for the suicide videos being captured by their technology and distributed across their networks. Moreover, Facebook has not been successful in removing this dangerous content in a timely fashion. This issue cannot be addressed by yet-to-be-developed technology.
Here is what I believe Facebook ought to do:
The technological and economic feasibility of these suggestions can be questioned. But the approach taken so far by Facebook – and tech companies in general – has been to release technology first and worry about ethics later. (This approach led Donald Heider, founder of the Center for Digital Ethics & Policy, to argue that Facebook should hire a chief ethicist.) When human lives are at stake, it might be time to change this modus operandi.
Bastiaan Vanacker is an Associate Professor at the School of Communication at Loyola University Chicago and Program Director of the Center for Digital Ethics and Policy.
Colleges weigh a variety of factors when deciding whether to admit an applicant. Students know the importance of test scores, grades, recommendations, extracurricular activities, and the college application essay. But there’s another factor that may carry real weight as well: an applicant’s social media presence.
According to a recent Kaplan Test Prep survey, the number of college admissions officers who say social media affects an applicant’s chances of being accepted has increased. Currently, only 35% of college admissions officers turn to social media for more information on an applicant. However, 42% say what they find online negatively impacts their decision, up from 37% last year. On the other hand, 47% say it has positively affected their decision, also up from 37% last year. Applicants can change their privacy settings so their social media data can’t be accessed. But what if – hypothetically – a college asked a prospective student for his or her login information?
In some states, it is illegal for public colleges and universities to ask college applicants for password information. According to data from the National Conference of State Legislatures (NCSL), this practice is no longer permitted in Arkansas, California, Delaware, Illinois, Maryland, Michigan, New Hampshire, New Jersey, New Mexico, Rhode Island, Utah, Virginia, and Wisconsin.
As an example, Wisconsin’s statute states that no educational institution may, “Request or require a student or prospective student, as a condition of admission or enrollment, to disclose access information for the personal Internet account of the student or prospective student or to otherwise grant access to or allow observation of that account.”
The statute also states that no institution may, “Refuse to admit a prospective student because the prospective student refused to disclose access information for, grant access to, or allow observation of the prospective student’s personal Internet account.”
However, the NCSL list covers only a handful of states and does not apply to private schools. It should be noted that I could not find any instances of colleges actually engaging in this practice. Whether this is a hypothetical situation or not, a law that forbids a school from asking for login credentials does not stop the institution from using other means. For example, Wisconsin’s statute also states that an institution is not prohibited from, “viewing, accessing, or using information about a student or prospective student that can be obtained without access information or that is available in the public domain.”
There are no laws against Google searches, and it would appear that many schools are utilizing this tool and other means. Bradley Shear, managing partner at Shear Law, specializes in social media, privacy, reputation, and technology, and he believes that social media searches are widespread among higher-ed institutions. “Regardless of the number of college admissions officers who say they don’t check social media, and in spite of the statutes prohibiting schools from asking for log-in data, the vast majority of schools are indeed searching online for any incriminating posts or photos,” Shear explains. With or without a password, he says, some admissions officers are doing the searching themselves, or the schools are hiring former investigators and police officers to identify applicants online.
And Shear believes that, ethically, this is a slippery slope. For one, he says, the information is unauthenticated. How many people in any given city share the same name? Even narrowing the search to high school seniors or recent grads could yield several duplicates.
Mistaken identity is a serious enough problem that attorneys general in over 30 states complained that liens and civil judgments were being erroneously reported on consumer credit reports. According to the new guidelines effective July 1, 2017, liens and civil judgments cannot be added to a credit report unless (1) the name, (2) the address, and (3) either the birth date or the social security number have been verified.
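That rule is, at bottom, a simple conjunction: the name and the address must both match, plus at least one of the two stronger identifiers. A minimal sketch of the matching logic might look like the following Python; the `Record` type, its field names and the exact-string comparisons are illustrative assumptions, not the credit bureaus’ actual matching code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    name: str
    address: str
    birth_date: Optional[str] = None  # e.g. "1999-05-14"
    ssn: Optional[str] = None

def may_be_reported(lien: Record, consumer: Record) -> bool:
    """2017 rule: name AND address must match, plus birth date OR SSN."""
    if lien.name.casefold() != consumer.name.casefold():
        return False
    if lien.address.casefold() != consumer.address.casefold():
        return False
    dob_ok = lien.birth_date is not None and lien.birth_date == consumer.birth_date
    ssn_ok = lien.ssn is not None and lien.ssn == consumer.ssn
    return dob_ok or ssn_ok
```

Note that a name-and-address match alone – the situation that produced so many mistaken-identity complaints – now returns False.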
Hopefully, this level of personal information would not be included in an applicant’s social media profile. However, a Pew Research Center report reveals that 93% of teens between the ages of 14 and 17 share their real name, 94% share a photo, and 83% include their birthdate. Also, among this age group, 76% share their school’s name, and 72% share their city or town.
Shear also explains that applicants can be discriminated against because of their connection to others. In other words, they’re being judged by their friends and family members.
Shear relays one incident that stands out. “There was an applicant who had top scores – he was a great kid, with a very clean digital profile.” The applicant did not mention anything about his parents on social media. However, the interviewer stated that he had found some tweets by the parents, and by connecting the dots he figured out that the applicant’s family was wealthy and held political beliefs the interviewer did not agree with. “The conversation veered off topic very quickly – but what did the family’s wealth, their vacation photos, and their political beliefs have to do with the student’s application?” Shear asks.
When students complete an application, they can’t be asked about their religion, politics, sexual orientation, etcetera, because this information could be used against them. However, Shear says that colleges can go online to discover this and other types of information, which nullifies the original intent of those privacy protections.
Suppose the school is able to verify that the social media account belongs to the correct applicant, and that it does not glean information from friends and family members. Shear still believes the practice is problematic. “We’re talking about kids and they are going to say dumb things and do dumb things, and we shouldn’t hold it against them.” He questions the logic of deciding that individuals at this young age are irredeemable based on social media posts. “Instead, let’s hope they grow from these experiences,” Shear says. “Schools need students from different backgrounds and experiences, and you hope that these individuals leave college a better person than they started.”
As teens transition to college, many of them will probably make a lot of mistakes regarding how they manage their money, how much time they spend studying, etcetera, because until now their parents have doled out the money, handled their finances and monitored their schoolwork and study time.
As a result, there’s an understanding – and, at least temporarily, an acceptance – that young college students may overspend their budgets, oversleep for classes and spend more time partying than studying.
But, when schools check the social media accounts of these applicants, does this imply that there is no mercy, no room for growth, and no opportunity for development in this area? And if so, is that fair when many parents, partially out of respect for their teen’s privacy – and also because many of them may not be digitally savvy – don’t monitor social media activity as closely as other areas of a teen’s life?
I’m a member of the “email generation,” so that was – and still is – one of my primary ways of communicating professionally and personally. And while my email account doesn’t contain any crazy photos or outrageous comments, even I would be uncomfortable if someone said, “Give me your password so I can read your email communication.” On one level, I understand that anything I transmit digitally could be read by someone else, but there’s still an assumption that my communication will only be read by the intended recipients.
For teens, social media is the primary means of communication. And they share anything and everything: what they ate for breakfast, how they can’t decide which pair of jeans to wear, why there’s a long line at McDonald’s. They post such selfies as “This is me, sitting in my room, bored.”
And since social media is as natural to them as breathing, they also tend to share their passions, disappointments, complaints, and various levels of silliness via this medium. For many of them, a “filter” is a special effect for a selfie, not the ability to use discretion or self-censor what they post. “Most K-12 schools don’t have the ability to provide digital education to our kids,” Shear laments. “And because they’re not being provided the tools to deal with these digital issues, and then for colleges to hold it against them, that raises some questions, such as ‘What is the real mission of a college?’”
However, Grant Cooper, a career coach and resume writer, believes the use of social media in determining an applicant’s suitability is both fair and ethical. “Universities use a wide range of assessment tools and practices to ensure that applicants possess the appropriate extracurricular, academic, and psychological profiles to succeed within their institutions.”
According to the Kaplan Test Prep survey, examples of negative information found through social media searches included an applicant making questionable, borderline-racist comments and an applicant brandishing weapons. From “Girls Gone Wild” to drunk frat brothers and overly aggressive athletes, college students can pose a public relations nightmare for colleges and universities. And while the names of the offenders may be forgotten, such incidents can haunt schools for a long time, damaging a school’s reputation and its ability to recruit and retain students.
“One unfortunate social media photo or a single questionable comment is generally not enough to bar a candidate from consideration,” Cooper says. “But a series of media posts or photos showing a pattern of immature or inappropriate behavior would absolutely be a red flag.”
Another example from the survey involved an applicant who was a felon and did not disclose this information on his application. According to the admissions officer, the individual was not admitted because he lied to the school – although for some reason, he felt the need to reveal the entire story on social media.
According to an article in the New York Times, Auburn is one of 16 universities that ask applicants if they’ve ever been charged with, convicted of, or pled guilty or no contest to a crime (besides minor traffic violations). Also, the University of Alabama asks applicants if they’ve ever received “a written or oral warning not to trespass on public or private property?”
But is there a rationale behind this line of questioning? The Times article also reports that Virginia Tech added a question about arrests and convictions as a result of the April 2007 shooting at that school, in which a student killed 32 people and wounded 17 more before taking his own life. It turned out that the individual had been accused of stalking in the past.
To what extent are these schools asking these questions and scouring social media profiles searching for potential warning signs? Applicants posting inappropriate messages about sexual assault, sharing videos of themselves drinking and driving, texting and driving, and engaging in other reckless behavior could give admission counselors pause. While it’s debatable if past behavior is the best indicator of future behavior, to be fair, at least colleges consistently apply this standard to applicants. That’s why high school grades and entrance exam scores are so important: it is assumed that students with good grades and high scores will continue this behavior in college.
According to The Hechinger Report, some colleges are using social media in yet another way. For example, Ithaca College created a private social networking site for the school’s applicants, where they can interact with fellow applicants, along with student ambassadors, faculty, and staff. However, the school analyzes such data as the number of photos students upload to the site and how many contacts they make, to determine who is more or less likely to enroll at Ithaca.
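The Hechinger Report does not describe how Ithaca weighs these signals, but the general technique – rolling engagement counts into a single enrollment-likelihood score – can be sketched. Everything below (the signals, weights and caps) is a hypothetical stand-in, not the college’s actual model.

```python
def enrollment_score(photos_uploaded: int, contacts_made: int,
                     logins_per_week: float) -> float:
    """Combine engagement signals into a rough 0-to-1 likelihood score.
    Weights and caps are invented for illustration."""
    return (0.4 * min(photos_uploaded / 10, 1.0)
            + 0.4 * min(contacts_made / 20, 1.0)
            + 0.2 * min(logins_per_week / 5, 1.0))

# A very active applicant vs. one who barely touches the site:
print(round(enrollment_score(12, 25, 6.0), 2))  # -> 1.0
print(round(enrollment_score(1, 2, 0.5), 2))    # -> 0.1
```

Whatever the real formula, the worry raised below applies: an applicant with limited internet access scores near zero through no fault of their own.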
On the one hand, college is expensive for the student, the student’s family, and the taxpayers who ultimately back student loans. And it’s costly for schools when students drop out, resulting in a loss of tuition and fees. But that’s not the only loss. Colleges and universities are ranked based on a variety of factors, including graduation rates. So, schools want students who are more likely to fit into their environment and have the greatest chance of achieving academic success.
In that respect, it seems logical that schools would want to analyze social media data to recruit the best students. However, it’s not clear how much weight is given to these interactions. Would students with limited Internet access be unfairly overlooked? What about students who just don’t engage a lot on social media? (And yes, while small in number, I’m sure those students exist.)
Social media plays an increasingly important role in society. But is that role too large when evaluating the potential of young applicants? Perhaps. Still, I also believe that a school has the right to determine what it deems acceptable versus unacceptable behavior. In the 21st century, colleges have become businesses selling a product to consumers. And managing the company’s brand is job #1. It’s a hard lesson for careless teenagers to learn. As former baseball player Vernon Law said, “Experience is a hard teacher because she gives the test first, the lesson afterward.”
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
The permeation of personal technology devices in American culture suggests that people have a deepening desire to be constantly connected to the world around them. The ease of information access and sharing created by smartphones and other personal technology devices helps to sustain a seamless integration of physical and digital selves. Americans tend to eagerly embrace the benefits of these personal technologies without giving much consideration to the right to information privacy, despite the threats found in a burgeoning American surveillance society. According to a Pew Research study, 68 percent of Americans are “not very concerned” or “somewhat concerned” about government surveillance of American data. These responses indicate that there is a general lack of knowledge in modern American society concerning how digital information is collected and used by governments and businesses.
One striking use of surveillance that Americans have largely overlooked is the implementation of dirt box technologies by the military and the government. This practice, which does not require civilian consent for data collection, involves an aircraft flying over a specific area and intercepting data, calls and text messages from thousands of people at once, according to an investigation by Ali Winston for Reveal. The captured data is primarily used to track criminal activity, but there are no constraints on how the government can further use those data sets, and thousands of innocent civilians are included in each dirt box sweep. Are Americans aware of the data they freely give away when they consent to carrying personal technology devices at all times? Should there be constraints placed not just on government entities but also on businesses and apps that collect big data to build products and develop marketing? Considering the prevalence of personal device usage in American culture, it’s time to establish rights in surveillance and big data collection for the digital selves generated by personal technology devices. An ethic of personalized technologies would allow government organizations and businesses to start a conversation (and eventually implement policies) about the best practices for protecting the digital representations of Americans. In turn, Americans would greatly benefit from a philosophical conversation about their digital lives and both the risks and benefits of using personal technologies.
To comprehend why more and more people embrace enhanced technology in their daily lives, and why people are generally willing to trade their personal data for the conveniences afforded by these technologies, we must first examine how consumers grew to accept human augmentation. Scientists and philosophers have long dreamed of a world where advances in technology would improve human society. One major proponent of human augmentation by technology was computer scientist J.C.R. Licklider. In his famous 1960 essay, “Man-Computer Symbiosis,” Licklider defined man-computer symbiosis as the cooperation between humans and machines in an effort to make technological advancements. Licklider’s goal was for technology to facilitate computer solutions to problems while being guided by the creative flexibility of the human mind. Licklider recognized the power of human intelligence in helping machines to solve problems. Humans would no longer need to foresee all problems and their potential solutions; problems “would be easier to solve, and they could be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperated, turning up flaws in the reasoning or revealing unexpected turns in the solution,” he wrote. This idea of human-computer symbiosis has inspired many computer programmers and inventors over the past 60 years and moved companies to design devices that would support platforms for creating digital selves.
There are now several modern examples of human-computer symbiosis that are worth exploring. First, a love of personal technology devices has created a culture of tethered selves and the tracking of digital footprints. The theory of tethered selves is another way of explaining how we sustain a 24/7 connection to technology, making our devices something like phantom limbs. According to Sherry Turkle, the director of the MIT Initiative on Technology and Self, technology is “the architect of our intimacies, but this means that as we text, Twitter, e-mail, and spend time on Facebook, technology is not just doing things for us, but to us, changing the way we view ourselves and our relationships.” Tethering ourselves to technology means becoming more dependent on those technologies in every aspect of life. As we interact with technology in a seamless way, we create digital footprints, the composite actions of treading through the World Wide Web, according to Ryan Greysen.
Examples of human-computer symbiosis and digital footprints can be found in all types of digital landscapes, from social media platforms to companies that mine big data. One study by personality technology researchers Wu Youyou, Michal Kosinski and David Stillwell created algorithms from digital footprints. These algorithms were used to predict the preferences and personality traits of individuals based on their posts and likes on Facebook. The study found that digital footprints could predict the Facebook users’ preferences better than their friends could. On a broader level, digital footprints create endless data ripe for analysis. This vast amount of information is harvested through techniques such as metadata analysis, with the goal of using the big data to problem-solve. This utility is another instance of human-computer symbiosis.
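As a toy illustration of what such an algorithm does, the sketch below fits a sparse regression from a user-by-likes matrix to a personality score, in the spirit of the study’s regression-on-likes approach. All of the data are synthetic and the dimensions arbitrary; the published study worked at a vastly larger scale.

```python
# Toy version of predicting a personality trait from Facebook-style "likes".
# All data are synthetic; users, likes and the trait score are made up.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_users, n_likes = 200, 50

X = rng.integers(0, 2, size=(n_users, n_likes))    # user x page "like" matrix
true_w = rng.normal(0, 1, size=n_likes)            # hidden trait loadings
y = X @ true_w + rng.normal(0, 0.5, size=n_users)  # e.g. an extraversion score

model = Lasso(alpha=0.1).fit(X[:150], y[:150])     # train on 150 users
r = np.corrcoef(model.predict(X[150:]), y[150:])[0, 1]
print(f"held-out correlation: {r:.2f}")            # strong on this easy synthetic task
```

Sparse regression is a natural choice for this kind of problem because only a fraction of the many possible likes carries signal for any one trait.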
As digital selves continue blending with physical selves, we will become transmediated selves, or people whose online and offline selves blend seamlessly into one identity, as proposed by technology and religion philosopher J. Sage Elwell, who also maintains that humans will continually embrace deepening relationships with personal technology devices. The theory of transmediated selves suggests that people are comfortable with technology because it is not a separate reality anymore; we no longer hop “online” and then go “offline.” Because of personalized devices, we are only a few clicks away from the online world at all times, and these devices constantly track our habits through various applications. Therefore, theories such as the transmediated self must be translated into formal law and governmental definitions of people, and new codes of conduct (and even policies) are needed to protect individuals in modern culture’s transmediated reality. An ethic of personal technologies would help form these definitions and protections.
Licklider’s man-computer symbiosis, Elwell’s transmediated self and Turkle’s tethered selves help us understand our eagerness and willingness to accept ubiquitous technologies and share our private lives online. If technology philosophers are correct, society will continue to rely heavily upon human-computer interactions. Even if the majority of Americans do not care about their information privacy, does that indifference give governments and businesses the authority to use that information as they please? Absolutely not. Our deepening human-computer symbiosis necessitates more protection for the consumers involved in such a relationship, as companies (and governments) stand to gain a lot of money and information from a public embracing a transmediated reality.
An ethic of personal technology is needed to help create that consumer protection. In order to help safeguard the users of personal technology, this ethic should consider how the tenets of Licklider and others have influenced the progression, acceptance and usage of technology. An ethic of personal technology should also acknowledge the benefits provided to both users and creators of digital cultures, and it should more clearly establish the rights of the individuals who participate in such environments. These rights need to explain very clearly how personal data will be gathered and used by businesses and government entities. Furthermore, an ethic of personal technology should help define how data from personal devices is stored and perhaps even protect a person’s right to disappear online. One example of protecting this right to dissolve the digital self is the European Union’s “right to be forgotten.”
Complete human-computer symbiosis is on the horizon. The symbiosis as we currently see it might not yet involve chips implanted in our bodies or a fully integrated human-robot interface (although these technologies are certainly not out of the question). Yet, in many virtual and tangible ways, we have begun to integrate computers into nearly everything we do. The human-computer relationship will only deepen as new life-enhancing technologies emerge and gain traction in our culture. Ultimately, an ethic of personal technology must consider the complicated roles of computers and humans as they become increasingly intertwined. It seems that individuals who want to be part of mainstream society, engage in business and achieve social and personal success must integrate with technology. In light of this cultural framework, an ethic of personal technology should define the rights of all humans to protect and define their digital selves. If the line between digital selves and physical selves is dissolving, then the basic tenets of democracy must guide the development of human-computer symbiosis and the decisions affecting a society that is wholly dependent on technology.
Rhema Zlaten is a Ph.D. student in the Journalism & Media Communication department at Colorado State University in Fort Collins, CO. Her academic work focuses on how theories and findings from neuroethics, moral psychology, and sociology are shifting media ethics as well as our understandings of virtual spaces. Her professional experience includes reporting, layout design, photography and freelance writing.
The results of the 2016 presidential election have proven to be something of a Rorschach test for the politically conscious. President-elect Donald Trump pulled off an Electoral College victory and a stunning upset against Democratic candidate Hillary Clinton through narrow wins in Michigan, Wisconsin and Pennsylvania. Notably, Trump lost the popular vote to Clinton by more than 2.8 million votes.
Reading the tea leaves after this major event is obviously a partisan exercise for some, but one often cited and notably pesky culprit for Clinton’s loss presents a unique challenge for seekers of truth in the digital age, and it’s one that strikes at the heart of journalism itself.
On Nov. 5, mere days before the general election, the Denver Guardian declared – in all caps – “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE.” There are a few problems here, not the least of which is that the Denver Guardian doesn’t exist. As The Denver Post dutifully noted shortly after the “story” broke, the Denver Guardian is not a legitimate news source. The aforementioned murder-suicide story is, quite simply, a piece of fake news. The story didn’t happen. It’s patently false. Nonetheless, stories such as this one spread like wildfire on social media sites, namely Facebook. Fake news is exactly what it sounds like: misinformation styled as news. Today, it is manufactured and optimally designed to get clicks. As such, fake news stories tend to be both hyperpartisan and highly sensational. One such story claimed that journalist and conservative political commentator Megyn Kelly had been fired by Fox News after endorsing Clinton during the general election. In reality, Kelly never endorsed Clinton, and she was never fired by Fox News (though she has since accepted an offer to move to NBC).
This disturbing trend of fake news recently made real headlines due to the troubling prospect that it might have swayed the election in favor of Trump. Did fake news really have that influence? At this point, it’s unclear. Not surprisingly, Facebook founder and CEO Mark Zuckerberg downplayed the notion of fake news having such an effect. However, an eye-opening analysis by Craig Silverman at BuzzFeed found that Facebook users were more engaged with fake news than with real stories from 19 major news outlets in the last three months of the presidential campaign. So, it wouldn’t be a huge leap in logic to assume that fake news had some kind of impact on the election results. Plus, across the three most contested states, the combined margin was a mere 80,000 votes. Even if the impact of fake news was minuscule, it could have made all the difference in a race that close. The general significance of false news stories, however, extends beyond the scope of elections.
While fake news has been around at least since the dawn of the printing press, it has only recently become a steady source of income for unscrupulous entrepreneurs. The formula for its production is rather simple, involving only three steps.
Step one: Create a sensational story with no regard for the truth. Step two: Publish said story online, and sell ad space on the page. Step three: Collect ad revenue generated from the story.
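The incentive behind those three steps is easy to quantify with back-of-the-envelope numbers. In the sketch below, every figure – shares, readers per share, ad rate – is a hypothetical assumption for illustration, not a measured value.

```python
shares = 800_000          # shares of one viral story (assumed)
views_per_share = 5       # page views generated per share (assumed)
rpm = 2.50                # ad revenue per 1,000 page views, in dollars (assumed)

page_views = shares * views_per_share
revenue = page_views / 1000 * rpm
print(f"{page_views:,} views -> ${revenue:,.0f}")  # 4,000,000 views -> $10,000
```

Multiply that by a portfolio of such stories and the appeal to an unscrupulous publisher is obvious.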
As long as there is economic incentive to fabricate sensational stories, the plague of fake news will continue. So, how do we combat such hastily crafted misinformation? Considering potential conflicts with the First Amendment, government censorship is a path we don’t want to take. But, perhaps there are ways to disincentivize the creation and spread of fake news. Google is reportedly taking steps to ensure that fake news culprits are not able to use its ad-selling software. This is an admirable first step, but it is imperative for people to continue applying pressure on Google to ensure that the problem doesn’t fall by the wayside.
Some journalists are calling for readers to practice caution and more thoroughly scrutinize news stories. Brian Stelter of CNN coined the phrase “refuse to be confused,” a desperate plea for journalists and consumers alike to be more vigilant about the spread of misinformation. It’s an admirable sentiment. Edward Snowden recently echoed the plea, saying, “The answer to bad speech is not censorship. The answer to bad speech is more speech. We have to exercise and spread the idea that critical thinking matters now more than ever, given the fact that lies seem to be getting very popular.” Again, Snowden’s rhetoric is admirable. In an ideal world, intelligent readers armed with critical thinking skills would be plentiful, and they would be quick to combat misinformation. But the real world is fraught with complications, partisan sources, confirmation bias and prejudices that work in a myriad of ways to shut down critical thinking and productive discussion.
It’s difficult to conceive a complete, accurate profile of the average American, but researchers have discovered telling details about U.S. citizens in general. A study by the Organization for Economic Cooperation and Development found that the reading skills of American adults are significantly lower than those of adults in most other developed countries. Here’s another detail: Americans tend to work longer hours than people in other large countries. American adults in full-time positions reported working 47 hours a week on average – nearly six eight-hour days. Despite this schedule, the United States ranks close to the 30th percentile in income inequality, meaning 70 percent of other countries have more equal income distribution. So, Americans have relatively poor reading skills and work longer hours than their counterparts in other developed countries. To top it off, the average American’s income is increasingly disproportionate relative to that of the country’s richest 1 percent. What can we discern from these details? Well, one thing is clear: Americans do not have the time, inclination or resources necessary to vet every single piece of news that appears on their Facebook feeds, and it is unrealistic to expect them to do so. A discerning readership is a great ideal to strive for, but not in place of pursuing pragmatic technological solutions to the problem of fake news.
Sites such as Facebook are largely responsible for creating the partisan environment that allows false information to spread online like a contagious virus. British filmmaker Adam Curtis aptly describes the process in his 2016 documentary, “Hypernormalisation,” telling how the algorithms and filters on social media have gravely limited the content people see.
“In the process, individuals began to move, without noticing, into bubbles that isolated them from enormous amounts of other information,” Curtis says. “They only heard and saw what they liked. And their news feeds increasingly excluded anything that might challenge people’s preexisting beliefs.”
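The mechanism Curtis describes can be reduced to a caricature: rank stories by predicted agreement with what the user already likes. The toy sketch below does exactly that; the leaning scores and the scoring rule are invented for illustration and bear no relation to any platform’s actual ranking system.

```python
user_lean = 0.8  # a user's inferred political lean, from -1 (left) to +1 (right)

stories = [
    ("celebrities_for_gun_control", -0.7),
    ("neutral_policy_explainer", 0.1),
    ("partisan_gun_rights_piece", 0.9),
]

def predicted_engagement(user: float, story_lean: float) -> float:
    """Score is highest when a story matches the user's existing lean."""
    return 1.0 - abs(user - story_lean) / 2.0

feed = sorted(stories, key=lambda s: predicted_engagement(user_lean, s[1]),
              reverse=True)
print([title for title, _ in feed])
# ['partisan_gun_rights_piece', 'neutral_policy_explainer',
#  'celebrities_for_gun_control'] -- the challenging story sinks to the bottom
```

Notice that optimizing for engagement this way requires no explicit political agenda; the bubble is an emergent side effect.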
Jon Keegan of the Wall Street Journal goes even further and creatively demonstrates the profound effect of partisan filtering on Facebook. His interactive graphic allows readers to pick certain hot-button issues, such as “guns” and “abortion,” and view side-by-side versions of liberal and conservative news feeds on Facebook to see how those topics are represented. The comparisons are striking. For instance, a cursory search of the word “guns” reveals a certain kind of result in the liberal Facebook feed: a video from Upworthy in which celebrities make the case for gun control. Conversely, the conservative feed yields a Breitbart article called “Debbie Wasserman Schultz: Federal Government May Ban Passengers from Checking Guns in Baggage.” This disparity demonstrates how social media can work to further divide Americans.
For a time after the presidential election, Zuckerberg went on the defensive against the idea that Facebook influenced the results. He refused to call Facebook a media company and seemed perplexed at the notion that anyone would even consider it that. Despite Zuckerberg’s reluctance to acknowledge the influence of the social networking platform, it is where an astounding number of people get their news. Indeed, 44 percent of the general population of the United States claimed to get news from the site. Zuckerberg recently walked back his defensive statements, saying that Facebook is, in fact, a media company – just not a “traditional” one. Whatever label you want to assign this behemoth corporate entity, the goal of a company such as Facebook is abundantly clear: to create a totally immersive online environment. Understandably, Facebook doesn’t want users leaving, and it is therefore designed to keep them engaged through an endless stream of photos, videos, news articles and, yes, likely some fake news. The ideal Facebook user would never leave the site. And, naturally, the company wants everyone using Facebook as a basic amenity. Everything the company does is in pursuit of this ubiquitous ideal, and its efforts are working. CNBC reports that Facebook, with 1.35 billion users worldwide, has more monthly active users than WhatsApp (500 million), Twitter (284 million) and Instagram (200 million) combined. It has about 1 billion more users than Twitter and about as many monthly users as there are people in China.
Facebook dominates our culture in ways that are impossible to fully articulate. To claim with certainty that it didn’t influence the 2016 presidential election, or many other major events, is specious. The platform undoubtedly influences the world by virtue of its market and cultural dominance. If such domination is indeed Facebook’s goal, the company has an ethical obligation to ensure that its users are not totally misinformed. When Facebook’s product is utilized to such a great extent, and when the company operates as the de facto media aggregator for its consumers, it puts itself in a position to be responsible for the stories shared by its users. Unlike the average American, Zuckerberg is uniquely poised to face this challenge head-on. If Facebook wishes to continue using the term “news feed” to describe its platform, it had better take all possible steps to ensure that what appears on said feed is not grossly inaccurate. But ethical appeals are rarely convincing to faceless corporations, whose financial obligations to shareholders and the bottom line have historically taken precedence over common decency.
Perhaps it would be better to frame the issue in pragmatic terms. If Facebook doesn’t want the public’s perception of its company to turn sour with the idea that Facebook is a fringe website fraught with dubious information, perhaps the company will take significant action to help stop the spread of fake news. Despite Zuckerberg’s initial downplaying of the potential impact of fake news on the election, Facebook is taking steps to address the problem. It is implementing a new system that allows users to flag stories they suspect to be false, and those stories are then referred to third-party fact checkers. This, too, is an admirable step in combating the spread of fake news. But is it just window dressing? As long as our social networks serve to reinforce partisan divides through algorithms, fake news will find a way to linger in the American consciousness. Now, more than ever, it is imperative that we as a society use technological means to combat the problem of misinformation. Moreover, it is imperative that those in positions to effect real change consider the consequences of allowing hyperpartisanship and, in turn, misinformation to thrive. It is for the benefit of humanity as a whole that innovative thinkers find new ways to connect individuals who are not ideologically similar. After all, isn’t that the supposed purpose of social networking — to better connect people?
No matter where you’ve stuck your pin on the political map, everyone can agree that the 2016 U.S. presidential election was not business as usual for American democracy.
Fingers pointed a thousand different directions on Nov. 9, looking for something to valorize or vilify for their victories and defeats. But through all of the infighting and name-calling, it quickly became clear that the real winner in this campaign was not a person or a movement, but a tool: fake news. It was so well used in this election that PolitiFact, a Pulitzer Prize-winning fact-checking website, named fake news its 2016 Lie of the Year, saying the concept consists of nothing more than “made-up stuff, masterfully manipulated to look like credible journalistic reports that are easily spread online to large audiences willing to believe the fictions and spread the word.”
In our Orwellian mediaverse, where doublespeak masquerades as hashtags and trending topics, #FakeNews certainly provides good content fodder and the occasional straw man, but the term also muddles the truth that it’s nothing more than propaganda with a Google AdWords account. Intentional or not, obfuscating the specter of propaganda through these doublespeak strategies ultimately distracts from the ethical implications of “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” (That’s a dictionary definition of propaganda, by the way).
The first step in regaining ethical control over fake news is to call it what it is: propaganda. This puts the onus on us, the public, to wade through the mess of the modern media landscape, which, now more than ever, is full of trap doors and mazes without exits. It’s only going to get worse, largely because the man whom fake news helped elect to one of the most powerful offices in the world is guilty of disseminating propaganda himself, while turning “mainstream media” into an insult – in much the same way Nazi Germany used “Lügenpresse” to discredit and ultimately silence any media opposing the regime.
Putting the burden on the public to be discerning goes against the emerging idea that Facebook, Twitter and other social platforms are at least partially responsible for the spread of disinformation. After all, you can’t have fake news if there’s no way to discover or share it. Plus, some 62 percent of adults get their news from social media, so if we can blame these platforms for the proliferation of fake news, then we’re exempt from ethical responsibility. Calling propaganda a rose by any other name and blaming social media platforms for circulating fake news renders us mere bystanders, scot-free and light as a feather.
Blaming Facebook and Twitter for fake news is like blaming roads for bad drivers. It distracts from the fact that the public took its own discernment and intelligence for granted. By shifting the blame in this way, and blindly sharing and clicking through content that reinforced our own opinions, we contributed to the viral nature of such propagandist lies as “Obama Signs Executive Order Banning the Pledge of Allegiance in Schools Nationwide” (2.7 million shares), “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement” (961,000 shares), “Trump Offering Free One-Way Tickets to Africa & Mexico for Those Who Wanna (sic) Leave America” (802,000 shares) and hundreds more instances of tactical misinformation deemed “news.”
While the million-dollar question of whether foreign interference impacted the U.S. election remains open, there’s no doubting the influence propaganda had on its outcome (and on the continued affirmation of that result). In the past few months, several outlets have conducted their own investigations into the culprits behind fake news websites, exposing opportunistic individuals generating salacious clickbait for the promise of earning a few extra bucks from advertising and private sources.
What fake news creators have in common, aside from their unabashed cynicism, is their intuitive understanding of the public’s vulnerability to misinformation – and the understanding that propaganda only works when people lack the interest or diligence to explore the provenance of claims. Because it’s easy to make information on the internet look authentic, it’s even easier for people to accept and share it as such. At that point, fake news creators like to wash their hands of the situation, claiming, like gun sellers, that what people do with the information is not their responsibility, even if it results in a man bringing an AR-15 into a pizza restaurant.
Fake news wouldn’t be so prevalent if there was not already a willing, receptive audience raised entirely on media that caters to pre-established biases and opinions. This idea relates to what communications scholars call cognitive dissonance, which is the discomfort we experience when faced with new beliefs or ideas that contradict our own. The discomfort leads to confirmation bias, or the idea that – when faced with a dissonant concept – we’ll adjust our view of that problematic thing to make it fit with what we already believe. It’s safer, because who wants to change their ideas all the time? It’s why creationists reject the science of evolution, or why you’ll never see Kim Kardashian driving a rusty ’96 Ford Escort (because even if you did, you’d block it out). As novelist Saul Bellow once said, “A great deal of intelligence can be invested in ignorance when the need for illusion is deep.”
While there’s a neurological basis for some degree of confirmation bias in our day-to-day lives, it’s a mental vulnerability easily exploited by the mechanisms of propaganda. Because propaganda plays on our defense mechanisms against dissonance, it’s hard for us to see it – admitting we’ve been duped goes against our cognitive biases and defense mechanisms. To save face, we instead call our susceptibility “fake news” and blame social media. It seems like there would be no ethical implications of this placement of blame (except maybe the loss of some common sense), but it’s now one of the leading reasons why we’ve appointed a racist, misogynist, wage-thieving, litigious and totally unqualified man to America’s highest office.
Whatever the answer to the question of who or what is responsible for fake news, Facebook and other social media platforms are working to combat such organized propaganda efforts. Still, this effort is simply a technological Band-Aid on the open wound of modern democratic culture. Organizations such as the Media Literacy Project and Snopes are advancing media intelligence and fact checking, but propaganda isn’t going anywhere. “Propaganda is to a democracy what the bludgeon is to a totalitarian state,” as American linguist Noam Chomsky says. As it has been since the dawn of mass media, the ethical imperative is on us to sort truth from lies, to separate journalism from propaganda. As our skies darken and our new commander in chief continues lambasting the “liar media,” flaunting power over truth, our individual ability to ground ourselves in truth and sift through distracting noise might be the only skill that will stop America’s slow decline toward totalitarianism.
Benjamin van Loon is a writer, researcher and communications consultant from Chicago, Illinois. In 2016, he was awarded a Folio Award for his writing on technology. Learn more at benvanloon.com.