Colleges weigh a variety of factors when deciding whether to admit an applicant. Students know the importance of test scores, grades, recommendations, extracurricular activities, and the college application essay. But there’s another factor that may carry more weight than many applicants realize.
According to a recent Kaplan Test Prep survey, the number of college admissions officers who say social media affects an applicant’s chances of being accepted has increased. Currently, only 35% of college admissions officers turn to social media for more information on an applicant. However, 42% say what they find online negatively impacts their decision, up from 37% last year. On the other hand, 47% say it has positively affected their decision, which is also up from 37% last year. Applicants can change their privacy settings so their social media data can’t be accessed. But what if, hypothetically, a college asked a prospective student for his or her log-in information?
In some states, it is illegal for public colleges and universities to ask college applicants for password information. According to data from the National Conference of State Legislatures (NCSL), this practice is no longer permitted in Arkansas, California, Delaware, Illinois, Maryland, Michigan, New Hampshire, New Jersey, New Mexico, Rhode Island, Utah, Virginia, and Wisconsin.
As an example, Wisconsin’s statute states that no educational institution may, “Request or require a student or prospective student, as a condition of admission or enrollment, to disclose access information for the personal Internet account of the student or prospective student or to otherwise grant access to or allow observation of that account.”
The statute also states that no institution may, “Refuse to admit a prospective student because the prospective student refused to disclose access information for, grant access to, or allow observation of the prospective student’s personal Internet account.”
However, the NCSL list only covers a handful of states, and it does not apply to private schools. It should be noted that I could not find any instances of colleges that actually engaged in this practice. Whether or not this is a purely hypothetical situation, a law that forbids a school from asking for login credentials does not stop the institution from using other means. For example, Wisconsin’s statute also states that an institution is not prohibited from, “viewing, accessing, or using information about a student or prospective student that can be obtained without access information or that is available in the public domain.”
There are no laws against Google searches, and it would appear that many schools are utilizing this tool and other means. Bradley Shear, managing partner at Shear Law, specializes in social media, privacy, reputation, and technology and he believes that social media searches are widespread among higher ed institutions. “Regardless of the number of college admissions officers who say they don’t check social media, and in spite of the statutes prohibiting schools from asking for log-in data, the vast majority of schools are indeed searching online for any incriminating posts or photos,” Shear explains. With or without a password, he says that some admissions officers are either searching themselves, or the schools are hiring former investigators and police officers to identify applicants.
And, Shear believes that ethically, this is a slippery slope. For one reason, he says the information is unauthenticated. How many people are there in any given city with the same name? Even trying to narrow the information to high school seniors or recent grads could yield several duplicates.
Mistaken identity is a serious enough problem that attorneys general in over 30 states complained that liens and civil judgments were being erroneously reported on consumer credit reports. According to the new guidelines effective July 1, 2017, liens and civil judgments cannot be added to a credit report unless (1) the name, (2) the address, and (3) either the birth date or the social security number have been verified.
Hopefully, this level of personal information would not be included in an applicant’s social media profile. However, a Pew Research Center report reveals that 93% of teens between the ages of 14 and 17 share their real name, 94% share a photo, and 83% include their birthdate. Also, among this age group, 76% share their school’s name, and 72% share their city or town.
Shear also explains that applicants can be discriminated against because of their connection to others. In other words, they’re being judged by their friends and family members.
Shear relays one incident that stands out. “There was an applicant who had top scores – he was a great kid, with a very clean digital profile.” The applicant did not mention anything about his parents on social media. However, the interviewer stated that he had found some tweets by the parents and was able to connect the dots, figuring out that the applicant’s family was wealthy and held political beliefs the interviewer did not agree with. “The conversation veered off topic very quickly – but what did the family’s wealth, their vacation photos, and their political beliefs have to do with the student’s application?” Shear asks.
When students complete an application, they can’t be asked about their religion, politics, sexual orientation, etcetera, because this information could be used against them. However, Shear says that colleges can go online to discover this – and other types of information, which nullifies the original intent of privacy.
Suppose the school is able to verify that the social media account belongs to the correct applicant, and that it does not glean information from friends and family members. Shear still believes this practice is problematic. “We’re talking about kids and they are going to say dumb things and do dumb things, and we shouldn’t hold it against them.” He questions the logic of deciding that individuals at this young age are irredeemable based on social media posts. “Instead, let’s hope they grow from these experiences,” Shear says. “Schools need students from different backgrounds and experiences, and you hope that these individuals leave college a better person than they started.”
As teens transition to college, it’s expected that many of them will make mistakes in how they allocate their finances, how much time they spend studying, etcetera, because until now their parents have doled out money and handled their finances, in addition to monitoring their schoolwork and study time.
As a result, there’s an understanding – and at least temporarily, an acceptance that young college students may overspend their budgets, they may oversleep for classes, and they may spend more time partying than studying.
But, when schools check the social media accounts of these applicants, does this imply that there is no mercy, no room for growth, and no opportunity for development in this area? And if so, is that fair when many parents, partially out of respect for their teen’s privacy – and also because many of them may not be digitally savvy – don’t monitor social media activity as closely as other areas of a teen’s life?
I’m a member of the “email generation,” so that was – and still is – one of my primary ways of communicating professionally and personally. And while my email account doesn’t contain any crazy photos or outrageous comments, even I would be uncomfortable if someone said, “Give me your password so I can read your email communication.” On one level, I understand that anything I transmit digitally could be read by someone else, but there’s still an assumption that my communication will only be read by the intended recipients.
For teens, social media is the primary means of communication. And they share anything and everything. Anything and everything includes what they ate for breakfast; how they can’t decide which pair of jeans to wear; why there’s a long line at McDonald’s. They post such selfies as “This is me, sitting in my room, bored.”
And since social media is as natural to them as breathing, they also tend to share their passions, disappointments, complaints, and various levels of silliness via this vertical. For many of them, a “filter” is a special effect for a selfie, not the ability to use discretion or self-censor what they post. “Most K-12 schools don’t have the ability to provide digital education to our kids,” Shear laments. “And because they’re not being provided the tools to deal with these digital issues, and then for colleges to hold it against them, that raises some questions, such as ‘What is the real mission of a college?’”
However, Grant Cooper, a career coach and resume writer, believes the use of social media in determining an applicant’s suitability is both fair and ethical. “Universities use a wide range of assessment tools and practices to ensure that applicants possess the appropriate extracurricular, academic, and psychological profiles to succeed within their institutions.”
According to the Kaplan Test Prep survey, examples of negative information found through social media searches included an applicant making questionable, borderline-racist comments, and an applicant brandishing weapons. From “Girls Gone Wild” to drunk frat brothers and overly aggressive athletes, college students can pose a public relations nightmare for colleges and universities. And while the names of the offenders may be forgotten, negative incidents can haunt schools for a long time, damaging the school’s reputation and its ability to recruit and retain students.
“One unfortunate social media photo or a single questionable comment is generally not enough to bar a candidate from consideration,” Cooper says. “But a series of media posts or photos showing a pattern of immature or inappropriate behavior would absolutely be a red flag.”
Another example in the survey involved an applicant who was a felon and did not disclose this information on his application. According to the admissions officer, the individual was not admitted because he lied to the school – although for some reason, he felt the need to reveal the entire story on social media.
According to an article in the New York Times, Auburn is one of 16 universities that ask applicants if they’ve ever been charged with, convicted of, or pled guilty or no contest to a crime (besides minor traffic violations). Also, the University of Alabama asks applicants if they’ve ever received “a written or oral warning not to trespass on public or private property?”
But is there a rationale to this line of questioning? The Times article also reports that Virginia Tech added a question about arrests or convictions as a result of the April 2007 incident at that school in which a student killed 32 people and wounded 17 more before taking his own life. It turns out that the individual had been accused of stalking in the past.
To what extent are these schools asking these questions and scouring social media profiles searching for potential warning signs? Applicants posting inappropriate messages about sexual assault, sharing videos of themselves drinking and driving, texting and driving, and engaging in other reckless behavior could give admission counselors pause. While it’s debatable if past behavior is the best indicator of future behavior, to be fair, at least colleges consistently apply this standard to applicants. That’s why high school grades and entrance exam scores are so important: it is assumed that students with good grades and high scores will continue this behavior in college.
According to The Hechinger Report, some colleges are using social media in yet another way. For example, Ithaca College created a private social networking site for the school’s applicants. They can interact with fellow applicants, along with student ambassadors, faculty, and staff. However, the school analyzes such data as the number of photos the students upload to the site, and how many contacts they make, to determine who is more or less likely to enroll at Ithaca.
On one hand, college is expensive for the student, the student’s family, and the taxpayers who ultimately back student loans. And it’s expensive to schools when students drop out, resulting in a loss of tuition and fees. But that’s not the only loss. Colleges and universities are ranked based on a variety of factors, including graduation rates. So, schools want students who are more likely to fit into their environment and have the greatest chance of achieving academic success.
In that respect, it seems logical that schools would want to analyze social media data to recruit the best students. However, it’s not clear how much weight is given to these interactions. Would students with limited Internet access be unfairly overlooked? What about students who just don’t engage a lot on social media? (And yes, while small in number, I’m sure those students exist.)
Social media plays an increasingly important role in society. However, is that role too large when evaluating the potential of young applicants? Perhaps. But I also believe that a school has the right to determine what it deems to be acceptable vs. unacceptable behavior. In the 21st century, colleges have become businesses selling a product to consumers. And managing the company’s brand is job #1. It’s a hard lesson for careless teenagers to learn. As former baseball player Vernon Law would say, “Experience is a hard teacher because she gives the test first, the lesson afterward.”
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
The permeation of personal technology devices in American culture suggests that people have a deepening desire to be constantly connected to the world around them. The ease of information access and sharing created by smartphones and other personal technology devices helps to sustain a seamless integration of physical and digital selves. Americans tend to eagerly embrace the benefits of these personal technologies without giving much consideration to the right to information privacy, despite the threats found in a burgeoning American surveillance society. According to a Pew Research study, 68 percent of Americans are “not very concerned” or “somewhat concerned” about government surveillance of American data. These responses indicate that there is a general lack of knowledge in modern American society concerning how digital information is collected and used by governments and businesses.
One striking use of surveillance that Americans have largely overlooked is the implementation of dirt box technologies by the military and the government. This practice, which does not require civilian consent for data collection, involves an aircraft flying over a specific area and intercepting data, calls and text messages from thousands of people at once, according to an investigation by Ali Winston for Reveal. The captured data is primarily used to track criminal activity, but there are no constraints on how the government can further use those data sets, and thousands of innocent civilians are included in each dirt box sweep. Are Americans aware of the data they freely give away when they consent to carrying personal technology devices at all times? Should there be constraints placed not just on government entities but also on businesses and apps that collect big data to build products and develop marketing? Considering the prevalence of personal device usage in American culture, it’s time to establish rights in surveillance and big data collection for the digital selves generated by personal technology devices. An ethic of personalized technologies would allow government organizations and businesses to start a conversation (and eventually implement policies) about the best practices for protecting the digital representations of Americans. In turn, Americans would greatly benefit from a philosophical conversation about their digital lives and both the risks and benefits of using personal technologies.
To comprehend why more and more people embrace enhanced technology in their daily lives, and why people are generally willing to trade their personal data for the conveniences afforded by these technologies, we must first examine how consumers grew to accept human augmentation. Scientists and philosophers have long dreamed of a world where advances in technology would improve human society. One major proponent of human augmentation by technology was computer scientist J.C.R. Licklider. In his famous 1960 essay, Licklider defined man-computer symbiosis as the cooperation between humans and machines in an effort to make technological advancements. Licklider’s goal was for technology to facilitate computer solutions to problems while being guided by the creative flexibility of the human mind. Licklider recognized the power of human intelligence in helping machines to solve problems. Humans would no longer need to foresee all problems and their potential solutions; problems “would be easier to solve, and they could be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperated, turning up flaws in the reasoning or revealing unexpected turns in the solution,” he wrote. This idea of human-computer symbiosis inspired many computer programmers and inventors over the past 60 years and moved companies to design devices that would support platforms for creating digital selves.
There are now several modern examples of human-computer symbiosis that are worth exploration. First, a love of personal technology devices has created a culture of tethered selves and the tracking of digital footprints. The theory of tethered selves is another way of explaining how we sustain a 24/7 connection to technology, making our devices something like phantom limbs. According to Sherry Turkle, the director of the MIT Initiative on Technology and Self, technology is “the architect of our intimacies, but this means that as we text, Twitter, e-mail, and spend time on Facebook, technology is not just doing things for us, but to us, changing the way we view ourselves and our relationships.” Tethering ourselves to technology means being more dependent on those technologies in every aspect of life. As we interact with technology in a seamless way, we create digital footprints, the composite actions of treading through the World Wide Web, according to Ryan Greysen.
Examples of human-computer symbiosis and digital footprints can be found in all types of digital landscapes, from social media platforms to companies that mine big data. One study by personality technology researchers Wu Youyou, Michal Kosinski and David Stillwell built algorithms from digital footprints. These algorithms were used to predict the preferences and personality traits of individuals based on their posts and likes on Facebook. The study found that digital footprints could predict the Facebook users’ preferences better than their friends could. On a broader level, digital footprints create endless data ripe for analysis. This vast amount of information is harvested through techniques such as meta-data analysis, with the goal of using the big data to problem-solve. This utility is another instance of human-computer symbiosis.
As digital selves continue blending with physical selves, we will become transmediated selves, or people whose online and offline selves blend seamlessly into one identity, as proposed by technology and religion philosopher J. Sage Elwell, who also maintains that humans will continually embrace deepening relationships with personal technology devices. The theory of transmediated selves suggests that people are comfortable with technology because it is not a separate reality anymore; we no longer hop “online” and then go “offline.” Because of personalized devices, we are only a few clicks away from the online world at all times, and these devices constantly track our habits through various applications. Therefore, theories such as the transmediated self must be translated into formal law and governmental definitions of people, and new codes of conduct (and even policies) are needed to protect individuals in modern culture’s transmediated reality. An ethic of personal technologies would help form these definitions and protections.
Licklider’s man-computer symbiosis, Elwell’s transmediated self and Turkle’s tethered selves help us understand our eagerness and willingness to accept ubiquitous technologies and share our private lives online. If technology philosophers are correct, society will continue to rely heavily upon human-computer interactions. Even if the majority of Americans do not care about their information privacy, does that indifference give governments and businesses the authority to use that information as they please? Absolutely not. Our deepening human-computer symbiosis necessitates more protection for the consumers involved in such a relationship, as companies (and governments) stand to gain a lot of money and information from a public embracing a transmediated reality.
An ethic of personal technology is needed to help create that consumer protection. In order to help safeguard the users of personal technology, this ethic should consider how the tenets of Licklider and others have influenced the progression, acceptance and usage of technology. An ethic of personal technology should also acknowledge the benefits provided to both users and creators of digital cultures, and it should more clearly establish the rights of the individuals who participate in such environments. These rights need to explain very clearly how personal data will be gathered and used by businesses and government entities. Furthermore, an ethic of personal technology should help define how data from personal devices is stored and perhaps even protect a person’s right to disappear online. One example of protecting this right to dissolve the digital self is the European Commission’s ruling on the “right to be forgotten.”
Complete human-computer symbiosis is on the horizon. The symbiosis as we currently see it might not yet involve chips planted into our bodies or a fully integrated human-robot interface (although these technologies are certainly not out of the question). Yet, in many virtual and tangible ways, we have begun to integrate computers into nearly everything we do. The human-computer relationship will only deepen as new life-enhancing technologies emerge and gain traction in our culture. Ultimately, an ethic of personal technology must consider the complicated roles of computers and humans as they become increasingly intertwined. It seems that individuals who want to be part of mainstream society, engage in business and achieve social and personal success must integrate with technology. In light of this cultural framework, an ethic of personal technology should define the rights of all humans to protect and define their digital selves. If the line between digital selves and physical selves is dissolving, then the basic tenets of democracy must guide the development of human-computer symbiosis and the decisions affecting a society that is wholly dependent on technology.
Rhema Zlaten is a Ph.D. student in the Journalism & Media Communication department at Colorado State University in Fort Collins, CO. Her academic work focuses on how theories and findings from neuroethics, moral psychology, and sociology are shifting media ethics as well as our understandings of virtual spaces. Her professional experience includes reporting, layout design, photography and freelance writing.
The results of the 2016 presidential election have proven to be something of a Rorschach test for the politically conscious. President-elect Donald Trump pulled off an Electoral College victory and a stunning upset against Democratic candidate Hillary Clinton through narrow wins in Michigan, Wisconsin and Pennsylvania. Notably, Trump lost the popular vote to Clinton by more than 2.8 million votes.
Reading the tea leaves after this major event is obviously a partisan exercise for some, but one often cited and notably pesky culprit for Clinton’s loss presents a unique challenge for seekers of truth in the digital age, and it’s one that strikes at the heart of journalism itself.
On Nov. 5, mere days before the general election, the Denver Guardian declared – in all caps – “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE.” There are a few problems here, not the least of which would be the fact that the Denver Guardian doesn’t exist. As The Denver Post dutifully noted shortly after the “story” broke, the Denver Guardian is not a legitimate news source. The aforementioned murder-suicide story is, quite simply, a piece of fake news. The story didn’t happen. It’s patently false. Nonetheless, stories such as this one spread like wildfire on social media sites, namely Facebook. Fake news is exactly what it sounds like: It is misinformation styled as news. Today, it is manufactured and optimally designed to get clicks. As such, false news stories tend to be both hyperpartisan and highly sensational. One such story claimed that journalist and conservative political commentator Megyn Kelly had been fired by Fox News after endorsing Clinton during the general election. In reality, Kelly never endorsed Clinton, and the commentator was never fired by Fox News (though she has since accepted an offer to move to NBC).
This disturbing trend of fake news recently made real headlines due to the troubling prospect that it might have swayed the election in favor of Trump. Did fake news really have that influence? At this point, it’s unclear. Not surprisingly, Facebook founder and CEO Mark Zuckerberg downplayed the notion of fake news having such an effect. However, an eye-opening analysis by Craig Silverman at BuzzFeed found that Facebook users were more engaged with fake news than with real stories from 19 major news outlets in the last three months of the presidential campaign. So, it wouldn’t be a huge leap in logic to assume that fake news had some kind of impact on the election results. Plus, with respect to the general election, across the three most contested states, the difference was an extremely close 80,000 votes. Even if the impact of fake news was minuscule, it could have made all the difference given such narrow margins. The general significance of false news stories, however, extends beyond the scope of elections.
While fake news has been around at least since the dawn of the printing press, it has only recently become a steady source of income for unscrupulous entrepreneurs. The formula for its production is rather simple, involving only three steps.
Step one: Create a sensational story with no regard for the truth. Step two: Publish said story online, and sell ad space on the page. Step three: Collect ad revenue generated from the story.
As long as there is economic incentive to fabricate sensational stories, the plague of fake news will continue. So, how do we combat such hastily crafted misinformation? Considering potential conflicts with the First Amendment, government censorship is a path we don’t want to take. But, perhaps there are ways to disincentivize the creation and spread of fake news. Google is reportedly taking steps to ensure that fake news culprits are not able to use its ad-selling software. This is an admirable first step, but it is imperative for people to continue applying pressure on Google to ensure that the problem doesn’t fall by the wayside.
Some journalists are calling for readers to practice caution and more thoroughly scrutinize news stories. Brian Stelter of CNN coined the phrase “refuse to be confused,” a desperate plea for journalists and consumers alike to be more vigilant about the spread of misinformation. It’s an admirable sentiment. Edward Snowden recently echoed the plea, saying, “The answer to bad speech is not censorship. The answer to bad speech is more speech. We have to exercise and spread the idea that critical thinking matters now more than ever, given the fact that lies seem to be getting very popular.” Again, Snowden’s rhetoric is admirable. In an ideal world, intelligent readers armed with critical thinking skills would be plentiful, and they would be quick to combat misinformation. But the real world is fraught with complications, partisan sources, confirmation bias and prejudices that work in a myriad of ways to shut down critical thinking and productive discussion.
It’s difficult to conceive a complete, accurate profile of the average American, but researchers have discovered telling details about U.S. citizens in general. A study by the Organization for Economic Cooperation and Development found that the reading skills of American adults are significantly lower than those of adults in most other developed countries. Here’s another detail: Americans tend to work longer hours than people in other large countries. American adults in full-time positions reported working 47 hours a week on average – that’s nearly six days a week. Despite this schedule, the United States ranks close to the 30th percentile in the category of income inequality, meaning 70 percent of other countries have more equal income distribution. So, Americans have relatively poor reading skills and work longer hours than their counterparts in other developed countries. To top it off, the average American’s income is increasingly disproportionate relative to the country’s richest 1 percent. What can we discern from these details? Well, one thing is clear: Americans do not have the time, inclination or resources necessary to vet every single piece of news that appears on their Facebook feeds, and it is unrealistic to expect them to do so. A discerning readership is a great ideal to strive for, but not in place of pursuing pragmatic technological solutions to the problem of fake news.
Sites such as Facebook are largely responsible for creating the partisan environment that allows false information to spread online like a contagious virus. British filmmaker Adam Curtis aptly describes the process in his 2016 documentary, “Hypernormalisation,” telling how the algorithms and filters on social media have gravely limited the content people see.
“In the process, individuals began to move, without noticing, into bubbles that isolated them from enormous amounts of other information,” Curtis says. “They only heard and saw what they liked. And their news feeds increasingly excluded anything that might challenge people’s preexisting beliefs.”
Jon Keegan of the Wall Street Journal goes even further and creatively demonstrates the profound effect of partisan filtering on Facebook. His interactive graphic allows readers to pick certain hot-button issues, such as “guns” and “abortion,” and view side-by-side versions of liberal and conservative news feeds on Facebook to see how those topics are represented. The comparisons are striking. For instance, a cursory search of the word “guns” reveals a certain kind of result in the liberal Facebook feed: a video from Upworthy in which celebrities make the case for gun control. Conversely, the conservative feed yields a Breitbart article called “Debbie Wasserman Schultz: Federal Government May Ban Passengers from Checking Guns in Baggage.” This disparity demonstrates how social media can work to further divide Americans.
For a time after the presidential election, Zuckerberg went on the defensive against the idea that Facebook influenced the results. He refused to call Facebook a media company and seemed perplexed at the notion that anyone would even consider it that. Despite Zuckerberg’s reluctance to acknowledge the influence of the social networking platform, it is where an astounding number of people get their news. Indeed, 44 percent of the general population of the United States claimed to get news from the site. Zuckerberg recently walked back his defensive statements, saying that Facebook is, in fact, a media company – just not a “traditional” one. Whatever label you want to assign this behemoth corporate entity, the goal of a company such as Facebook is abundantly clear: to create a totally immersive online environment. Understandably, Facebook doesn’t want users leaving, and it is therefore designed to keep users engaged through an endless stream of photos, videos, news articles, and, yes — likely some fake news. The ideal Facebook user would never leave the site. And, naturally, the company wants everyone using Facebook as a basic amenity. Everything the company does is in pursuit of this ubiquitous ideal, and its efforts are working. CNBC reports that Facebook, with 1.35 billion users worldwide, has more monthly active users than WhatsApp (500 million users), Twitter (284 million) and Instagram (200 million) combined. It has about 1 billion more users than Twitter and roughly the same number of monthly users as there are people in China.
Facebook dominates our culture in ways that are impossible to fully articulate. To claim with certainty that it didn’t influence the 2016 presidential election, or many other major events, is specious. The platform undoubtedly influences the world by virtue of its market and cultural dominance. If such domination is indeed Facebook’s goal, the company has an ethical obligation to ensure that its users are not totally misinformed. When Facebook’s product is utilized to such a great extent, and when the company operates as the de facto media aggregate for its consumers, it puts itself in a position to be responsible for the stories shared by its users. Unlike the average American, Zuckerberg is uniquely poised to face this challenge head-on. If Facebook wishes to continue using the term “news feed” to describe its platform, it had better take every possible step to ensure that what appears on said feed is not grossly inaccurate. But ethical appeals are rarely convincing to faceless corporations, whose financial obligations to shareholders and the bottom line have historically taken precedence over common decency.
Perhaps it would be better to frame the issue in pragmatic terms. If Facebook doesn’t want the public’s perception of the company to sour into the idea that Facebook is a fringe website fraught with dubious information, perhaps it will take significant action to help stop the spread of fake news. Despite Zuckerberg’s initial downplaying of the potential impact of fake news on the election, Facebook is taking steps to address the problem. It is implementing a new system that allows users to flag stories they suspect to be false; those stories are then referred to third-party fact checkers. It is an admirable step in combating the spread of fake news. But is it just window dressing? As long as our social networks serve to reinforce partisan divides through algorithms, fake news will find a way to linger in the American consciousness. Now, more than ever, it is imperative that we as a society use technological means to combat the problem of misinformation. Moreover, it is imperative that those in positions to effect real change consider the consequences of allowing hyperpartisanship and, in turn, misinformation to thrive. It is for the benefit of humanity as a whole that innovative thinkers find new ways to connect individuals who are not ideologically similar. After all, isn’t that the supposed purpose of social networking — to better connect people?
No matter where you’ve stuck your pin on the political map, everyone can agree that the 2016 U.S. presidential election was not business as usual for American democracy.
Fingers pointed in a thousand different directions on Nov. 9 as people looked for something to valorize or vilify for their victories and defeats. But through all of the infighting and name-calling, it quickly became clear that the real winner in this campaign was not a person or a movement, but a tool: fake news. It was so well used in this election that PolitiFact, a Pulitzer Prize-winning fact-checking website, named fake news its 2016 Lie of the Year, saying the concept consists of nothing more than “made-up stuff, masterfully manipulated to look like credible journalistic reports that are easily spread online to large audiences willing to believe the fictions and spread the word.”
In our Orwellian mediaverse, where doublespeak masquerades as hashtags and trending topics, #FakeNews certainly provides good content fodder and the occasional straw man, but the term also muddles the truth that it’s nothing more than propaganda with a Google AdWords account. Intentional or not, obfuscating the specter of propaganda through these doublespeak strategies ultimately distracts from the ethical implications of “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” (That’s a dictionary definition of propaganda, by the way).
The first step in regaining ethical control over fake news is to call it what it is: propaganda. This puts the onus on us, the public, to wade through the mess of the modern media landscape which, now more than ever, is full of trap doors and mazes without exits. It’s only going to get worse, largely because the man that fake news helped to elect to one of the most powerful offices in the world is guilty of disseminating propaganda himself, while turning “mainstream media” into an insult – in much the same way Nazi Germany used “Lügenpresse” to discredit and ultimately silence any media opposing the regime.
Putting the burden on the public to be discerning goes against the emerging idea that Facebook, Twitter and other social platforms are at least partially responsible for the spread of disinformation. After all, you can’t have fake news if there’s no way to discover or share it. Plus, more than 62 percent of adults get their news from social media, so if we can blame these platforms for the proliferation of fake news, then we’re exempt from ethical responsibility. Calling propaganda a rose of another name and blaming social media platforms for circulating fake news renders us mere bystanders, scot-free and light as a feather.
Blaming Facebook and Twitter for fake news is like blaming roads for bad drivers. It distracts from the fact that the public took its own discernment and intelligence for granted. By shifting the blame in this way, and blindly sharing and clicking through content that reinforced our own opinions, we contributed to the viral nature of such propagandist lies as “Obama Signs Executive Order Banning the Pledge of Allegiance in Schools Nationwide” (2.7 million shares), “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement” (961,000 shares), “Trump Offering Free One-Way Tickets to Africa & Mexico for Those Who Wanna (sic) Leave America” (802,000 shares) and hundreds more instances of tactical misinformation deemed “news.”
While there remains the million-dollar question of whether foreign interference impacted the U.S. election, there’s no doubting the influence propaganda had on its outcome (and on the continued affirmation of that result). In the past few months, several outlets have conducted their own investigations into the culprits behind fake news websites, exposing opportunistic individuals generating salacious clickbait for the promise of earning a few extra bucks from advertising and private sources.
What fake news creators have in common, aside from their unabashed cynicism, is their intuitive understanding of the public’s vulnerability to misinformation – and the understanding that propaganda only works when people lack the interest or diligence to explore the provenance of claims. Because it’s easy to make information on the internet look authentic, it’s even easier for people to accept and share it as such. At that point, fake news creators like to wash their hands of the situation, stating, like gun sellers, that what people do with the information is not their responsibility, even if it results in a man bringing an AR-15 into a pizza restaurant.
Fake news wouldn’t be so prevalent if there was not already a willing, receptive audience raised entirely on media that caters to pre-established biases and opinions. This idea relates to what communications scholars call cognitive dissonance, which is the discomfort we experience when faced with new beliefs or ideas that contradict our own. The discomfort leads to confirmation bias, or the idea that – when faced with a dissonant concept – we’ll adjust our view of that problematic thing to make it fit with what we already believe. It’s safer, because who wants to change their ideas all the time? It’s why creationists reject the science of evolution, or why you’ll never see Kim Kardashian driving a rusty ’96 Ford Escort (because even if you did, you’d block it out). As novelist Saul Bellow once said, “A great deal of intelligence can be invested in ignorance when the need for illusion is deep.”
While there’s a neurological basis for some degree of confirmation bias in our day-to-day lives, it’s a mental vulnerability easily exploited by the mechanisms of propaganda. Because propaganda plays on our defense mechanisms against dissonance, it’s hard for us to see it – admitting we’ve been duped goes against our cognitive biases and defense mechanisms. To save face, we instead call our susceptibility “fake news” and blame social media. This placement of blame might seem to carry no ethical implications (beyond perhaps the loss of some common sense), but it’s now one of the leading reasons why we’ve appointed a racist, misogynist, wage-thieving, litigious and totally unqualified man to America’s highest office.
Whoever or whatever is ultimately responsible for fake news, Facebook and other social media platforms are working to combat such organized propaganda efforts. Still, this effort is simply a technological Band-Aid on the open wound of modern democratic culture. Organizations such as the Media Literacy Project and Snopes are advancing media intelligence and fact checking, but propaganda isn’t going anywhere. “Propaganda is to a democracy what the bludgeon is to a totalitarian state,” as American linguist Noam Chomsky says. As it has been since the dawn of mass media, the ethical imperative is on us to sort truth from lies; to separate journalism from propaganda. As our skies darken and our new Commander in Chief continues lambasting the “liar media,” flaunting power over truth, our individual ability to ground ourselves in truth and sift through distracting noise might be the only skill that will stop America’s slow decline toward totalitarianism.
Benjamin van Loon is a writer, researcher and communications consultant from Chicago, Illinois. In 2016, he was awarded a Folio Award for his writing on technology. Learn more at benvanloon.com.
When people have unpleasant experiences, they tend to tell others about them. “Others” used to include a handful of coworkers, family members and friends. But, now that social media has become the preferred communication platform, it’s only logical that people use it when they want to voice their disapproval and dissatisfaction.
A vast number of topics are broached on social media, and some rants become notorious enough to make headlines. Beyond those high-profile scenarios, there are daily complaints about customer service, sporadic complaints from servers about famous customers leaving meager tips and students complaining about being called out for supposedly violating their schools’ dress codes. The list goes on and on.
People rant on social media for various reasons, and they clearly realize the advantages of using this medium.
The UK-based Institute of Customer Service determined that the number of consumer complaints made on social media has increased eight-fold since January 2014. A VentureBeat report reveals that consumers post 2.1 million negative comments about U.S. companies on social media every day.
According to The Social Habit, 79 percent of the people who turn to Twitter to complain about a company want their friends to see what they’ve written. Only 52 percent hope the company will see the post, and roughly 36 percent expect the company to actually see and take action based on their comments.
Some people might view ranting as an emotional outlet, especially because there’s a school of thought that warns about the dangers of letting anger build up without any release. However, one study, called “Anger on the Internet: The Perceived Value of Rant-Sites,” revealed that online ranting seems to increase anger. Whether participants in the study spent five minutes reading someone else’s rant or five minutes writing their own, either activity negatively affected their emotions and made them even angrier.
Ironically, an organization’s own social media tools could also negatively impact a consumer’s view of the company. This concept was brought to light in another survey conducted by The Social Habit, in which participants divulged the response times they expected when using social media to contact a company for customer support. Of the participants, 32 percent said they expected to receive a response in 30 minutes, and 42 percent expected a response within one hour. Even at night or on weekends, 57 percent expected the same response times to apply. Most companies are not equipped to reply that quickly, especially on a 24-hour basis, and this unpreparedness can actually lead to a higher level of customer dissatisfaction.
Even if companies don’t believe that the customer is always right, they see no value in being embroiled in a public relations nightmare caused by a single customer who might have hundreds, thousands or even millions of followers.
To what extent are complainers using social media as a bully pulpit? In most instances, companies wisely choose to avoid online arguments. With high-profile complaints, they might issue statements, but, even then, companies have to be careful not to release information that could have legal ramifications.
For example, if parents rant on social media that their child was suspended for what appears to be a minor offense, in the process of defending itself, the school can’t say, “This is just the latest in a long list of infractions,” and then list the child’s offenses; doing so would entail releasing the private information of a minor.
Companies should be aware that such a defense would be an obvious breach of social etiquette and likely illegal, but individuals should also exercise caution when posting on social media.
California-based counselor Aida Vazin pointed out that social platforms can pose problems for both parties.
“One of the reasons one-sided platforms are a bit troublesome in social media is the fact that they are one-sided, and everything is written down, not just said,” she noted. So, whether the information is accurate or not, once it’s on the internet, that side of the story can live on long after the conflict has been resolved.
Vazin went on to explain that social media lends itself to unchallenged ranting. “There’s a saying that there are three sides to every story – my side, your side, and what really happened – and this scenario is incomplete when there’s a one-sided platform and it only captures a snapshot of a whole event or situation,” she said.
Sometimes, ranting might be a way to garner sympathy and online attention. April Masini, a relationship and etiquette expert and author of the “Ask April” online advice column, said she believes that social media has changed the dynamics of relationships. “We have fast friends and fast enemies because of what we say and like or give a thumbs down to on social media,” Masini said.
The allure of social approval – even from strangers – can spur some people to post sensational content. Everyone loves to hear an exciting story, and, unfortunately, bad news travels faster than good news. As any media outlet can confirm, bad news also generates more clicks and page views.
But what separates online rants from the material produced by legitimate media outlets is that personal posts aren’t fact-checked or balanced; they make no attempt to present both sides – or even to admit that all of the facts have not been collected.
Masini agreed with the findings of The Social Habit that ranting is done primarily for social media friends to see, not as a way to solve a problem. “Ranting is a symptom of not working to fix conflicts,” she said. “When someone is working on a problem — be it a relationship problem, a financial problem or a real estate problem – they’re not ranting; they’re working on the problem.”
Rather than an active attempt at solving issues, Masini said, online ranting is a way for people to express their displeasure. “Whether it’s displaced road rage, graffiti on the side of a building, or ranting on social media, they just want an outlet,” she said.
On the other hand, some social media rants can be productive. Last holiday season, I saw a post about how long one individual spent waiting to see a customer service representative at a local Apple Store before finally leaving in frustration. Several other people commented on the post with negative remarks, but one user explained that making an appointment usually leads to much faster service. A few days later, the original poster claimed she made an appointment and was served within minutes of entering the store. Assuming the other negative posters didn’t know the importance of making an appointment before arriving at an Apple Store, this negative experience was turned into a teachable moment.
Similarly, one mother’s post about her special needs son not being invited to a classmate’s birthday party was picked up by a major news outlet, and the story led to a lively debate over the right of parents and students to choose the classmates they want to invite to private functions versus the insensitivity of excluding certain classmates.
Admittedly, social media rants can be quite effective in changing company behavior. If it improves customer service or changes a ridiculous policy, perhaps complaining online can be a force for good.
But when companies have logical, well-thought-out policies and procedures, and they are forced to make exceptions to those rules because an irate customer threatens to damage the brand’s reputation, ranting becomes a type of coercion.
It can also reinforce bad behavior. Just as some parents relent when their toddler screams and hollers, ranting adults learn that throwing a temper tantrum on social media will lead some companies to surrender.
“I believe two-sided platforms such as the mutual rating system of Uber is a great balance and good rule to implement when rating or complaining about others in social media,” Vazin said.
Christopher Bauer, Ph.D., a fraud specialist and the author of “Better Ethics NOW: How to Avoid the Ethics Disaster You Never Saw Coming,” offered tips for ranters and the objects of those rants.
For ranters, Bauer said, “Remember that your rant will be seen both by the object of your rant and potentially countless others, so unless it’s an emergency, cool down before you post.”
Bauer explained that there are several reasons to take a step back and calm down. “Not only do you not want to risk libel liability, but you don’t want to risk your personal reputation,” he said. Once a ranter develops a reputation as a rash person or someone who can’t separate objective information from perceptions and feelings, Bauer said this individual will lose credibility.
“One of the pleasures of social media is that it can feel like any other conversation with rapid-fire back-and-forth – but it’s exactly that immediacy, especially when we’re excited or riled up, that can lead to posting thoughts and perceptions we’ll later regret having said in public.”
For the objects of the rant, Bauer suggests taking a different approach. “If you’re a business, respond immediately, but take it off social media as quickly as possible with a call (because it’s both more direct and personal), or, if absolutely needed, via a text or email,” he said.
However, Bauer didn’t recommend that companies try to ignore the negative posts. “Besides the fact that responding is simply better customer service, anyone ranting today is a whole lot more likely to keep ranting tomorrow if you don’t respond,” he said.
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.