The results of the 2016 presidential election have proven to be something of a Rorschach test for the politically conscious. President-elect Donald Trump pulled off an Electoral College victory and a stunning upset against Democratic candidate Hillary Clinton through narrow wins in Michigan, Wisconsin and Pennsylvania. Notably, Trump lost the popular vote to Clinton by more than 2.8 million votes.
Reading the tea leaves after this major event is obviously a partisan exercise for some, but one often cited and notably pesky culprit for Clinton’s loss presents a unique challenge for seekers of truth in the digital age, and it’s one that strikes at the heart of journalism itself.
On Nov. 5, mere days before the general election, the Denver Guardian declared – in all caps – “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE.” There are a few problems here, not the least of which is the fact that the Denver Guardian doesn’t exist. As The Denver Post dutifully noted shortly after the “story” broke, the Denver Guardian is not a legitimate news source. The murder-suicide story is, quite simply, a piece of fake news. It didn’t happen. It’s patently false. Nonetheless, stories such as this one spread like wildfire on social media sites, namely Facebook.

Fake news is exactly what it sounds like: misinformation styled as news. Today, it is manufactured and optimally designed to get clicks. As such, false news stories tend to be both hyperpartisan and highly sensational. One such story claimed that journalist and conservative political commentator Megyn Kelly had been fired by Fox News after endorsing Clinton during the general election. In reality, Kelly never endorsed Clinton, and the commentator was never fired by Fox News (though she has since accepted an offer to move to NBC).
This disturbing trend of fake news recently made real headlines due to the troubling prospect that it might have swayed the election in favor of Trump. Did fake news really have that influence? At this point, it’s unclear. Not surprisingly, Facebook founder and CEO Mark Zuckerberg downplayed the notion of fake news having such an effect. However, an eye-opening analysis by Craig Silverman at BuzzFeed found that Facebook users were more engaged with fake news than with real stories from 19 major news outlets in the last three months of the presidential campaign. So, it wouldn’t be a huge leap in logic to assume that fake news had some kind of impact on the election results. Plus, Trump’s combined margin of victory across the three most closely contested states was a mere 80,000 votes. Even a minuscule effect could have made all the difference in a race that close. The general significance of false news stories, however, extends beyond the scope of elections.
While fake news has been around at least since the dawn of the printing press, it has only recently become a steady source of income for unscrupulous entrepreneurs. The formula for its production is rather simple, involving only three steps.
Step one: Create a sensational story with no regard for the truth. Step two: Publish said story online, and sell ad space on the page. Step three: Collect ad revenue generated from the story.
As long as there is economic incentive to fabricate sensational stories, the plague of fake news will continue. So, how do we combat such hastily crafted misinformation? Considering potential conflicts with the First Amendment, government censorship is a path we don’t want to take. But, perhaps there are ways to disincentivize the creation and spread of fake news. Google is reportedly taking steps to ensure that fake news culprits are not able to use its ad-selling software. This is an admirable first step, but it is imperative for people to continue applying pressure on Google to ensure that the problem doesn’t fall by the wayside.
Some journalists are calling for readers to practice caution and more thoroughly scrutinize news stories. Brian Stelter of CNN coined the phrase “refuse to be confused,” a desperate plea for journalists and consumers alike to be more vigilant about the spread of misinformation. It’s an admirable sentiment. Edward Snowden recently echoed the plea, saying, “The answer to bad speech is not censorship. The answer to bad speech is more speech. We have to exercise and spread the idea that critical thinking matters now more than ever, given the fact that lies seem to be getting very popular.” Again, Snowden’s rhetoric is admirable. In an ideal world, intelligent readers armed with critical thinking skills would be plentiful, and they would be quick to combat misinformation. But the real world is fraught with complications, partisan sources, confirmation bias and prejudices that work in a myriad of ways to shut down critical thinking and productive discussion.
It’s difficult to conceive a complete, accurate profile of the average American, but researchers have discovered telling details about U.S. citizens in general. A study by the Organization for Economic Cooperation and Development found that the reading skills of American adults are significantly lower than those of adults in most other developed countries. Here’s another detail: Americans tend to work longer hours than people in other large countries. American adults in full-time positions reported working 47 hours a week on average – that’s nearly six days a week. Despite this schedule, the United States ranks close to the 30th percentile in the category of income inequality, meaning 70 percent of other countries have more equal income distribution. So, Americans have relatively poor reading skills and work longer hours than their counterparts in other developed countries. To top it off, the average American’s income is increasingly disproportionate relative to the country’s richest 1 percent. What can we discern from these details? Well, one thing is clear: Americans do not have the time, inclination or resources necessary to vet every single piece of news that appears on their Facebook feeds, and it is unrealistic to expect them to do so. A discerning readership is a great ideal to strive for, but not in place of pursuing pragmatic technological solutions to the problem of fake news.
Sites such as Facebook are largely responsible for creating the partisan environment that allows false information to spread online like a virus. British filmmaker Adam Curtis aptly describes the process in his 2016 documentary, “Hypernormalisation,” explaining how the algorithms and filters on social media have gravely limited the content people see.
“In the process, individuals began to move, without noticing, into bubbles that isolated them from enormous amounts of other information,” Curtis says. “They only heard and saw what they liked. And their news feeds increasingly excluded anything that might challenge peoples’ preexisting beliefs.”
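The filtering Curtis describes can be made concrete with a deliberately simplified sketch. The code below is purely illustrative – it is not Facebook’s actual ranking algorithm, and every name in it (`rank_feed`, the sample stories) is hypothetical – but it shows how a feed that scores stories by their overlap with a user’s past likes inevitably pushes dissonant content out of view:

```python
# Toy sketch of engagement-based feed ranking (NOT any platform's real
# algorithm): stories matching topics the user has already liked rise to
# the top, while everything else sinks out of sight.

def rank_feed(stories, liked_topics):
    """Order stories so those matching previously liked topics come first."""
    def score(story):
        # Count how many of the story's topics the user has liked before.
        return sum(topic in liked_topics for topic in story["topics"])
    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Gun-control op-ed", "topics": {"guns", "politics"}},
    {"title": "Local bake sale",   "topics": {"community"}},
    {"title": "Partisan hot take", "topics": {"politics", "election"}},
]

# A user who has only ever engaged with political content...
feed = rank_feed(stories, liked_topics={"politics", "election"})

# ...sees political stories first; the bake sale sinks to the bottom.
print([s["title"] for s in feed])
# → ['Partisan hot take', 'Gun-control op-ed', 'Local bake sale']
```

Run in a loop, with `liked_topics` updated from whatever the user clicks, a ranker like this converges on a single topic cluster – the bubble Curtis describes.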
Jon Keegan of the Wall Street Journal goes even further and creatively demonstrates the profound effect of partisan filtering on Facebook. His interactive graphic allows readers to pick certain hot-button issues, such as “guns” and “abortion,” and view side-by-side versions of liberal and conservative news feeds on Facebook to see how those topics are represented. The comparisons are striking. For instance, a cursory search of the word “guns” reveals a certain kind of result in the liberal Facebook feed: a video from Upworthy in which celebrities make the case for gun control. Conversely, the conservative feed yields a Breitbart article called “Debbie Wasserman Schultz: Federal Government May Ban Passengers from Checking Guns in Baggage.” This disparity demonstrates how social media can work to further divide Americans.
For a time after the presidential election, Zuckerberg went on the defensive against the idea that Facebook influenced the results. He refused to call Facebook a media company and seemed perplexed at the notion that anyone would even consider it that. Despite Zuckerberg’s reluctance to acknowledge the influence of the social networking platform, it is where an astounding number of people get their news: 44 percent of the general population of the United States report getting news from the site. Zuckerberg recently walked back his defensive statements, saying that Facebook is, in fact, a media company – just not a “traditional” one.

Whatever label you want to assign this behemoth corporate entity, the goal of a company such as Facebook is abundantly clear: to create a totally immersive online environment. Understandably, Facebook doesn’t want users leaving, and it is therefore designed to keep them engaged through an endless stream of photos, videos, news articles and, yes, likely some fake news. The ideal Facebook user would never leave the site. And, naturally, the company wants everyone using Facebook as a basic amenity. Everything the company does is in pursuit of this ubiquitous ideal, and its efforts are working. CNBC reports that Facebook, with 1.35 billion users worldwide, has more monthly active users than WhatsApp (500 million), Twitter (284 million) and Instagram (200 million) combined. It has about 1 billion more users than Twitter and roughly the same number of monthly users as there are people in China.
Facebook dominates our culture in ways that are impossible to fully articulate. To claim with certainty that it didn’t influence the 2016 presidential election, or many other major events, is specious. The platform undoubtedly influences the world by virtue of its market and cultural dominance. If such domination is indeed Facebook’s goal, the company has an ethical obligation to ensure that its users are not totally misinformed. When Facebook’s product is utilized to such a great extent, and when the company operates as the de facto media aggregator for its consumers, it puts itself in a position to be responsible for the stories shared by its users. Unlike the average American, Zuckerberg is uniquely poised to face this challenge head-on. If Facebook wishes to continue using the term “news feed” to describe its platform, it had better take all possible steps to ensure that what appears on said feed is not grossly inaccurate. But ethical appeals are rarely convincing to faceless corporations, whose financial obligations to shareholders and the bottom line have historically taken precedence over common decency.
Perhaps it would be better to frame the issue in pragmatic terms. If Facebook doesn’t want the public’s perception of its company to turn sour with the idea that Facebook is a fringe website fraught with dubious information, perhaps the company will take significant action to help stop the spread of fake news. Despite Zuckerberg’s initial downplaying of the potential impact of fake news on the election, Facebook is taking steps to address the problem. It is implementing a new system that allows users to flag stories they suspect to be false, and those stories are then referred to third-party fact checkers. This, too, is an admirable step in combating the spread of fake news. But is it just window dressing? As long as our social networks serve to reinforce partisan divides through algorithms, fake news will find a way to linger in the American consciousness. Now, more than ever, it is imperative that we as a society use technological means to combat the problem of misinformation. Moreover, it is imperative that those in positions to effect real change consider the consequences of allowing hyperpartisanship and, in turn, misinformation to thrive. It is for the benefit of humanity as a whole that innovative thinkers find new ways to connect individuals who are not ideologically similar. After all, isn’t that the supposed purpose of social networking — to better connect people?
No matter where you’ve stuck your pin on the political map, everyone can agree that the 2016 U.S. presidential election was not business as usual for American democracy.
Fingers pointed in a thousand different directions on Nov. 9 as people looked for something to valorize or vilify for their victories and defeats. But through all of the infighting and name-calling, it quickly became clear that the real winner of this campaign was not a person or a movement, but a tool: fake news. It was so well used in this election that PolitiFact, a Pulitzer Prize-winning fact-checking website, named fake news its 2016 Lie of the Year, saying the concept consists of nothing more than “made-up stuff, masterfully manipulated to look like credible journalistic reports that are easily spread online to large audiences willing to believe the fictions and spread the word.”
In our Orwellian mediaverse, where doublespeak masquerades as hashtags and trending topics, #FakeNews certainly provides good content fodder and the occasional straw man, but the term also muddles the truth that it’s nothing more than propaganda with a Google AdWords account. Intentional or not, obfuscating the specter of propaganda through these doublespeak strategies ultimately distracts from the ethical implications of “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” (That’s a dictionary definition of propaganda, by the way).
The first step in regaining ethical control over fake news is to call it what it is: propaganda. This puts the onus on us, the public, to wade through the mess of the modern media landscape which, now more than ever, is full of trap doors and mazes without exits. It’s only going to get worse, largely because the man that fake news helped to elect to one of the most powerful offices in the world is guilty of disseminating propaganda himself, while turning “mainstream media” into an insult – in much the same way Nazi Germany used “Lügenpresse” to discredit and ultimately silence any media opposing the regime.
Putting the burden on the public to be discerning goes against the emerging idea that Facebook, Twitter and other social platforms are at least partially responsible for the spread of disinformation. After all, you can’t have fake news if there’s no way to discover or share it. Plus, more than 62 percent of adults get their news from social media, so if we can blame these platforms for the proliferation of fake news, then we’re exempt from ethical responsibility. Calling propaganda by another name and blaming social media platforms for circulating fake news renders us mere bystanders, scot-free and light as a feather.
Blaming Facebook and Twitter for fake news is like blaming roads for bad drivers. It distracts from the fact that the public took its own discernment and intelligence for granted. By shifting the blame in this way, and blindly sharing and clicking through content that reinforced our own opinions, we contributed to the viral nature of such propagandist lies as “Obama Signs Executive Order Banning the Pledge of Allegiance in Schools Nationwide” (2.7 million shares), “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement” (961,000 shares), “Trump Offering Free One-Way Tickets to Africa & Mexico for Those Who Wanna (sic) Leave America” (802,000 shares) and hundreds more instances of tactical misinformation deemed “news.”
While the million-dollar question of whether foreign interference swayed the U.S. election remains open, there’s no doubting the influence propaganda had on its outcome (and on the continued affirmation of that result). In the past few months, several outlets have conducted their own investigations into the culprits behind fake news websites, exposing opportunistic individuals generating salacious clickbait for the promise of earning a few extra bucks from advertising and private sources.
What fake news creators have in common, aside from their unabashed cynicism, is an intuitive understanding of the public’s vulnerability to misinformation – and the understanding that propaganda only works when people lack the interest or diligence to explore the provenance of claims. Because it’s easy to make information on the internet look authentic, it’s even easier for people to accept and share it as such. At that point, fake news creators like to wash their hands of the situation, stating, like gun sellers, that what people do with the information is not their responsibility, even if it results in a man bringing an AR-15 into a pizza restaurant.
Fake news wouldn’t be so prevalent if there was not already a willing, receptive audience raised entirely on media that caters to pre-established biases and opinions. This idea relates to what communications scholars call cognitive dissonance, which is the discomfort we experience when faced with new beliefs or ideas that contradict our own. The discomfort leads to confirmation bias, or the idea that – when faced with a dissonant concept – we’ll adjust our view of that problematic thing to make it fit with what we already believe. It’s safer, because who wants to change their ideas all the time? It’s why creationists reject the science of evolution, or why you’ll never see Kim Kardashian driving a rusty ’96 Ford Escort (because even if you did, you’d block it out). As novelist Saul Bellow once said, “A great deal of intelligence can be invested in ignorance when the need for illusion is deep.”
While there’s a neurological basis for some degree of confirmation bias in our day-to-day lives, it’s a mental vulnerability easily exploited by the mechanisms of propaganda. Because propaganda plays on our defense mechanisms against dissonance, it’s hard for us to see it – admitting we’ve been duped goes against our cognitive biases and defense mechanisms. To save face, we instead call our susceptibility “fake news” and blame social media. It seems like there would be no ethical implications of this placement of blame (except maybe the loss of some common sense), but it’s now one of the leading reasons why we’ve appointed a racist, misogynist, wage-thieving, litigious and totally unqualified man to America’s highest office.
Setting aside the question of who or what is responsible for fake news, Facebook and other social media platforms are working to combat such organized propaganda efforts. Still, this effort is simply a technological Band-Aid on the open wound of modern democratic culture. Organizations such as the Media Literacy Project and Snopes are advancing media intelligence and fact checking, but propaganda isn’t going anywhere. “Propaganda is to a democracy what the bludgeon is to a totalitarian state,” as American linguist Noam Chomsky says. As it has been since the dawn of mass media, the ethical imperative is on us to sort truth from lies; to separate journalism from propaganda. As our skies darken and our new Commander in Chief continues lambasting the “liar media,” flaunting power over truth, our individual ability to ground ourselves in truth and sift through distracting noise might be the only skill that will stop America’s slow decline toward totalitarianism.
Benjamin van Loon is a writer, researcher and communications consultant from Chicago, Illinois. In 2016, he was awarded a Folio Award for his writing on technology. Learn more at benvanloon.com.
When people have unpleasant experiences, they tend to tell others about them. “Others” used to include a handful of coworkers, family members and friends. But, now that social media has become the preferred communication platform, it’s only logical that people use it when they want to voice their disapproval and dissatisfaction.
A vast number of topics are broached on social media. There are daily complaints about customer service, sporadic complaints by waitresses about famous customers leaving meager tips and students complaining about being called out for supposedly violating their schools’ dress codes. The list goes on and on.
People rant on social media for various reasons, and they clearly realize the advantages of using this medium.
The UK-based Institute of Customer Service determined that the number of consumer complaints made on social media has increased eight-fold since January 2014. A VentureBeat report reveals that consumers post 2.1 million negative comments about U.S. companies on social media every day.
According to The Social Habit, 79 percent of the people who turn to Twitter to complain about a company want their friends to see what they’ve written. Only 52 percent hope the company will see the post, and roughly 36 percent expect the company to actually see and take action based on their comments.
Some people might view ranting as an emotional outlet, especially because there’s a school of thought that warns about the dangers of letting anger build up without any release. However, one study, called “Anger on the Internet: The Perceived Value of Rant-Sites,” revealed that online ranting seems to increase anger. Whether participants in the study read someone else’s rant for a period of five minutes or spent five minutes writing their own rants, it negatively affected their emotions and made them even angrier.
Ironically, an organization’s own social media tools could also negatively impact a consumer’s view of the company. This concept was brought to light in another survey conducted by The Social Habit, in which participants divulged the response times they expected when using social media to contact a company for customer support. Of the participants, 32 percent said they expected to receive a response in 30 minutes, and 42 percent expected a response within one hour. Even at night or on weekends, 57 percent expected the same response times to apply. Most companies are not equipped to reply that quickly, especially on a 24-hour basis, and this unpreparedness can actually lead to a higher level of customer dissatisfaction.
While companies might not believe that the customer is always right, they find no value in being embroiled in a public relations nightmare caused by a single customer who might have hundreds, thousands or even millions of followers.
To what extent are complainers using social media as a bully pulpit? In most instances, companies wisely choose to avoid online arguments. With high-profile complaints, they might issue statements, but, even then, companies have to be careful not to release information that could have legal ramifications.
For example, if parents rant on social media that their child was suspended for what appears to be a minor offense, the school can’t defend itself by saying, “This is just the latest in a long list of infractions,” and then listing the child’s offenses; doing so would entail releasing the private information of a minor.
Companies should be aware that such a defense would be an obvious breach of social etiquette and likely illegal, but individuals should also exercise caution when posting on social media.
California-based counselor Aida Vazin pointed out that social platforms can pose problems for both parties.
“One of the reasons one-sided platforms are a bit troublesome in social media is the fact that they are one-sided, and everything is written down, not just said,” she noted. So, whether the information is accurate or not, once it’s on the internet, that side of the story can live on long after the conflict has been resolved.
Vazin went on to explain that social media lends itself to unchallenged ranting. “There’s a saying that there are three sides to every story – my side, your side, and what really happened – and this scenario is incomplete when there’s a one-sided platform and it only captures a snapshot of a whole event or situation,” she said.
Sometimes, ranting might be a way to garner sympathy and online attention. April Masini, a relationship and etiquette expert and author of the “Ask April” online advice column, said she believes that social media has changed the dynamics of relationships. “We have fast friends and fast enemies because of what we say and like or give a thumbs down to on social media,” Masini said.
The allure of social approval – even from strangers – can spur some people to post sensational content. Everyone loves to hear an exciting story, and, unfortunately, bad news travels faster than good news. As any media outlet can confirm, bad news also generates more clicks and page views.
But, what separates online rants from the material produced by legitimate media outlets is that personal posts aren’t fact-checked, balanced stories that attempt to present both sides – or even admit that all of the facts have not been collected.
Masini agreed with the findings of The Social Habit that ranting is done primarily for social media friends to see, not as a way to solve a problem. “Ranting is a symptom of not working to fix conflicts,” she said. “When someone is working on a problem — be it a relationship problem, a financial problem or a real estate problem – they’re not ranting; they’re working on the problem.”
Rather than an active attempt at solving issues, Masini said, online ranting is a way for people to express their displeasure. “Whether it’s displaced road rage, graffiti on the side of a building, or ranting on social media, they just want an outlet,” she said.
On the other hand, some social media rants can be productive. Last holiday season, I saw a post about how long one individual spent waiting to see a customer service representative at a local Apple Store before finally leaving in frustration. Several other people commented on the post with negative remarks, but one user explained that making an appointment usually leads to much faster service. A few days later, the original poster claimed she made an appointment and was helped within minutes of entering the store. Assuming the other negative posters didn’t know the importance of making an appointment before arriving at an Apple Store, this negative experience was turned into a teachable moment.
Also, a post by the mother of a special needs student who was not invited to a classmate’s birthday party was picked up by a major news outlet, and the story led to a lively debate over the right of parents and students to choose the classmates they want to invite to private functions versus the insensitivity of excluding certain classmates.
Admittedly, social media rants can be quite effective in changing company behavior. If it improves customer service or changes a ridiculous policy, perhaps complaining online can be a force for good.
But, when companies have logical, well thought-out policies and procedures, and they are forced to make exceptions to those rules because an irate customer threatens to damage the brand’s reputation, ranting becomes a type of coercion.
It can also reinforce bad behavior. Just as some parents relent when their toddler screams and hollers, ranting adults learn that throwing a temper tantrum on social media will lead some companies to surrender.
“I believe two-sided platforms such as the mutual rating system of Uber is a great balance and good rule to implement when rating or complaining about others in social media,” Vazin said.
Christopher Bauer, Ph.D., a fraud specialist and the author of “Better Ethics NOW: How to Avoid the Ethics Disaster You Never Saw Coming,” offered tips for ranters and the objects of those rants.
For ranters, Bauer said, “Remember that your rant will be seen both by the object of your rant and potentially countless others, so unless it’s an emergency, cool down before you post.”
Bauer explained that there are several reasons to take a step back and calm down. “Not only do you not want to risk libel liability, but you don’t want to risk your personal reputation,” he said. Once a ranter develops a reputation as a rash person or someone who can’t separate objective information from perceptions and feelings, Bauer said this individual will lose credibility.
“One of the pleasures of social media is that it can feel like any other conversation with rapid-fire back-and-forth – but it’s exactly that immediacy, especially when we’re excited or riled up, that can lead to posting thoughts and perceptions we’ll later regret having said in public.”
For the objects of the rant, Bauer suggests taking a different approach. “If you’re a business, respond immediately, but take it off social media as quickly as possible with a call (because it’s both more direct and personal), or, if absolutely needed, via a text or email,” he said.
However, Bauer didn’t recommend that companies try to ignore the negative posts. “Besides the fact that responding is simply better customer service, anyone ranting today is a whole lot more likely to keep ranting tomorrow if you don’t respond,” he said.
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
I am a huge “Star Wars” fan. My parents took me to see the first film when I was 10 years old. The queue stretched around the block, and I’ve never forgotten the frisson of excitement I felt when the star destroyer slowly filled the screen. I spent my teenage years wanting to be Han Solo. (Actually, I still do. My husband is more of a Chewbacca, and my kids cosplay Rey and Kylo Ren; the fights are impressive.)
One of the performances that stayed with me from that first viewing was Peter Cushing as Grand Moff Tarkin, Governor of the Empire’s fearsome battle station, the Death Star. Cushing was already a wonderfully sinister legend, even to a 10-year-old British girl. Watching “Rogue One,” I was delighted to see the character of Governor Tarkin appear on screen almost forty years on, apparently unchanged. However, rather than blanket praise for the work of the Industrial Light and Magic team (the special effects wizards behind the whole “Star Wars” franchise), the character’s physical appearance, so famously linked to that of the deceased actor, caused a babble of concern across the press and social media. Catherine Shoard, film editor of The Guardian, declared this resurrection “a digital indignity.” I beg to differ.
The technology of CGI – computer-generated imagery – is already familiar to most cinemagoers, and has made possible the effective realisation of sci-fi and fantasy films, including the “Lord of the Rings” trilogy and the “Harry Potter” series. We’ve applauded the rapid improvement from jerky approximations of fantastic creatures to smooth and seamless character integrations. So, why did so many people find “Rogue One’s” digital resurrection morbid, disrespectful or downright unethical?
Resurrecting actors is nothing new
First, let’s debunk the myth that this is the first time a movie has involved the CGI portrayal of a deceased actor. Of course it isn’t! I first recall CGI being used this way in “Gladiator” (2000), when Oliver Reed died three weeks before the end of filming. Director Ridley Scott’s careful use of offcuts and a bit of (at the time) new-fangled CGI, patching Reed’s features onto body doubles, ensured that he gave a complete performance from beyond the grave. It was quite fun watching the film and trying to spot the CGI elements. Even before that, in 1988, the accidental death of Roy Kinnear during the filming of “The Return of the Musketeers” required the producers to get creative. The technical wizardry we have now was in its infancy, so Kinnear’s role was completed by a double and a sound-alike, who both went uncredited. Although the mechanics were different, the principle was the same: The show must go on!
A number of TV commercials have also been produced and aired after their stars died. These advertisements have attracted more attention than films that have undergone role-completion editing, possibly because it’s more obvious to audiences that the star had no physical part in the production. Steve Bennett-Day, Executive Creative Director for Havas Helia, voiced popular concerns about CGI in his 2016 article for Campaign, a branding and marketing publication. He articulates the perceived difference between ‘respectful’ ads and what he calls the ‘creepy’ use of CGI, contrasting accidents of broadcast timing where the shooting had been completed, and genuine tributes such as the re-broadcast of a much-loved British ad in response to demand after the passing of its star, with poorly judged CGI depictions. He specifically criticises the use of images of the late Audrey Hepburn in a Galaxy chocolate ad, and also notes a ‘downright awful’ Saatchi & Saatchi print ad that featured an image of the late Kurt Cobain showing off Dr Martens boots under his angel robes. Bennett-Day’s criticism once again draws attention to the key ethical question raised by “Rogue One”: When the CGI performance we’re watching is a completely new creation, has a line been crossed?
Protecting our identities
Photographers are all aware of the rules around image copyright. A photographer who takes a picture legally owns that image, with certain exceptions: in particular, if one takes a picture of artwork, the copyright rests with the creator of the original artwork. Publicity rights apply in a similar vein to an individual’s likeness. So, if the likeness of an individual is used to create a CGI character, where do the rights reside? This is an issue both of privacy and of earnings.
Legal protection of Publicity (Personality) Rights differs from country to country and from state to state, a complex patchwork of legislation even in the United States alone. We know that the “Star Wars” team sought and was granted permission from Peter Cushing’s estate to create the CGI representation of Governor Tarkin as portrayed. The late, great Robin Williams used his will to restrict the use of his name, signature, photograph and likeness for 25 years from his death, according to the legal documentation published by The Hollywood Reporter. Did the shrewd star anticipate that CGI technology could eventually be used to resurrect dead celebrities? The provision in Williams’ will avoids reliance upon inconsistent publicity protection against an increasingly complex and global technological backdrop.
Why CGI for Rogue One?
We might wonder why the character of Grand Moff Tarkin was given the CGI treatment where others were not. Rebel leader Mon Mothma was played by Caroline Blakiston in the original trilogy, and by Genevieve O’Reilly in “Episode III: Revenge of the Sith” and in “Rogue One” – a seamless transition, as the actresses were of the same build, and the character’s neat dark bobbed hair, robes and chains of office were straightforward to recreate. However, Peter Cushing’s fabulous cheekbones and sunken face were his trademark, and as such gave a unique feel to the Tarkin character. Lookalike Australian actor Wayne Pygram successfully portrayed Tarkin in a short cameo in “Episode III: Revenge of the Sith,” and there have been questions about why Pygram did not reprise the role. We don’t know the answer, but Pygram has done very little acting work since “Star Wars” and the TV series “Farscape,” and it’s possible that he was simply not in a position to take the part. For continuity of character, and without an easy lookalike solution, it would appear that Rogue One’s producers had no choice but to turn to Industrial Light and Magic.
Actor Guy Henry, the performer you see behind the CGI overlay of Cushing’s features and who is rightfully credited for the role of Governor Tarkin, thinks that creating more completely new performances by deceased actors through CGI is an unlikely scenario. When interviewed by The Hollywood Reporter’s Aaron Crouch, he said, “I can’t really see why they would [do the same again] … This was very specifically to recreate this character in a way that served the story of Rogue One.” In this context, it seems that the CGI work in “Rogue One” was really a continuation of the performance from “A New Hope,” a way of ensuring that the show did go on. It’s interesting to note that while the Carrie Fisher CGI cameo that closed “Rogue One” was in any case produced during her lifetime, a statement has recently been issued by the Star Wars team to reassure fans that for future films “Lucasfilm has no plans to digitally recreate Carrie Fisher’s performance as Princess or General Leia Organa.” This seems to bear out Guy Henry’s assertion, and will come as a relief to those who found the Rogue One Tarkin character disturbing.
Animation vs. reality
If anything underlines the realistic nature of the character in “Rogue One” as a technical accomplishment rather than a new Cushing performance, it’s the nomination for Outstanding Animated Performance in a Photoreal Feature in the upcoming Visual Effects Society Awards. Grand Moff Tarkin is up against Newt Scamander’s Niffler, the cutest of Fantastic Beasts in the latest excursion into the Harry Potter universe.
We praise CGI when it delivers fantastic creatures to our screens with such verisimilitude. We are delighted that they integrate seamlessly with the real and interact with human actors in a way that, previously, we could only imagine. We recognize that there are actors creating those personalities behind CGI masks. Andy Serkis, for example, is an accomplished character actor: his live performance as Gollum, shown alongside the final CGI version in Weta Digital’s video of its work, is worth watching, for the performance as much as for the insight into the creative animation process.
For years, we have seen realistic animations of the human form in films and video games. We’ve also witnessed crossovers in the other direction: for instance, Angelina Jolie played a live-action version of the animated character Lara Croft in the “Tomb Raider” films. It seems that discomfort with the CGI representation of Grand Moff Tarkin stems purely from the fact that he is not a fantastic beast, and we recognize the features of the fictional character as those of a real deceased fellow human. With Tarkin, we are committing the ultimate sin of confusing the actor with the character.
There is nothing unethical about the CGI character of Grand Moff Tarkin: it is appropriate to the story, fundamentally the continuation of an existing performance, and it’s a grandly successful creation by Guy Henry and the Industrial Light and Magic team. However, the controversy serves as a warning that the digital world is moving fast, and that considerations of privacy may need to take this technology into account, too.
Kate Baucherel is a published author, speaker, trainer and coach, and co-founded community software company Ambix. She has two young children, and lives in the north of England. Find out more at www.katebaucherel.com, or follow @katebaucherel on Twitter.
To those who think Snapchat is just for silly selfies: Think again.
Originally an app for sending photos and short videos (“snaps”) that permanently disappear after being viewed, the tool has evolved considerably since its 2011 launch.
Here’s a quick synopsis: In 2013, the company created “Snapchat Stories,” a feature that lets users stitch together multiple snaps that can be viewed an unlimited number of times in a span of 24 hours. In 2014, it added “Our Stories” (now called “Live Stories”), enabling people at events to submit their snaps to a common story curated by Snapchat itself. And in January 2015, Snapchat introduced “Discover,” a channel for media companies to push content to the app’s users, who now number more than 150 million daily — more than Twitter, though a far cry from Facebook. Those audiences are not just teenagers and college students. At a conference in February 2016, Snapchat said more than half of its new users are over the age of 25.
While many other changes have marked the app’s short history (2014 also brought “geofilters,” which stamp a time or place on a snap, and “lenses,” which overlay masks on selfies), it is the “Stories” and “Discover” features that have piqued the interest of journalists and media companies most.
Indeed, the New York Times published more than a dozen articles about Snapchat and its parent company, Snap, Inc., this past November and December. “If you secretly harbor the idea that Snapchat is frivolous or somehow a fad, it’s time to re-examine your certainties,” wrote the New York Times tech columnist Farhad Manjoo. “In fact, in various large and small ways, Snap has quietly become one of the world’s most innovative and influential consumer technology companies.”
It’s true: Snapchat is a tool allowing journalists to report and distribute news in completely original ways. It’s also a platform empowering consumers to engage with news like never before. But the company, notorious for its secretive culture, is tight-lipped about its editorial policies, making it difficult to answer the ethical questions raised by the app’s journalistic features: Who is responsible for validating crowdsourced content? Who should decide what news is important for audiences to see? And is a company like Snapchat bound by the principles of journalism?
Yusuf Omar, mobile editor at Hindustan Times and a Snapchat enthusiast, is one journalist pushing the boundaries of Snapchat in his work. In July, for example, he used the app to interview victims of sexual abuse. He had his sources shield their faces using a Snapchat filter. The move wasn’t a gimmick. Rather, it gave a voice to women who felt they must remain anonymous in a country that stigmatizes rape survivors.
At a conference in Chicago this October, Omar discussed how he and his staff use Snapchat more broadly as a content creation tool. A journalist could simply use a smartphone’s built-in camera app to take photos and record video, but Omar said there are unique benefits to using Snapchat instead. The obvious plus is that the app lets journalists send content directly to their followers, in real time. Snaps can also be manipulated with text, drawings and icons. So, a journalist could take a photo of a scene, manually circle a point of interest and describe it with a caption. Omar emphasized Snapchat’s time and place geofilters as a valuable feature for journalists because they can’t be manipulated by the person taking the snap. “These are layers of verification that help us add authenticity,” he said.
He admitted that Snapchat is not a perfect tool. It can be hard to build a following on the app because individual journalists can only share content with Snapchat users who have chosen to add them as a friend. What’s more, there are no “likes,” “shares” or “retweets” on Snapchat, so content does not flow from user to user as it does on Facebook or Twitter. Journalists can measure the reach of their snaps only to a degree: users can see how many times their stories have been viewed by their friends and by whom, but they can’t see how many friends they have in total or a full list of the friends following them. Not even celebrities have this power.
Omar also talked about how a team of journalists out in the field can take snaps and send them to an editor back at the office, who can then combine the pictures into a package. Savvy journalists can even ask their followers at events to send them snaps for this purpose. It’s worth noting that these “citizen journalists” are members of the public and therefore not bound to any journalistic principles. In cases where citizen journalists are utilized, Omar believes that the editor is responsible for the integrity of the final product. “It’s the same when you hire a freelancer to do a story,” he said. “It’s still going to be the news editor that’s going to be on the chopping board if that story doesn’t make sense.”
Snapchat uses citizen reporters to crowdsource content, too. In the past, the app called on its users to cover political debates, Ramadan in Mecca, Hurricane Matthew and battles in Iraq, although less serious events such as concerts and sporting events are more often showcased in Snapchat’s Live Stories.
This is how Live Stories work: At events deemed newsworthy, Snapchat activates a feature that allows users on location to submit photos and videos. (Those users grant Snapchat the right to use their content in the app’s terms of service.) Then, a team of curators at the company chooses footage from the submissions, stitches scenes together and adds context such as graphics and captions. The results are unprecedented, often intimate snapshots of events from a diverse array of perspectives many journalists could only dream of attaining.
As alluring and innovative as Snapchat’s Live Stories are, it’s important to note they come from an entity that identifies itself as a camera company, not a media company. A camera company makes no promise that its content will be factual and balanced. Maybe Snapchat’s curators are held to those principles internally, but maybe they’re not. When asked what is known about the people making decisions at Snapchat, Omar is blunt: “We know so little. I don’t think there is a startup that is more mysterious than Snapchat. We know so little about their direction and where they’re going.”
So, how does Snapchat decide what topics to cover in Live Stories? Do curators follow rules about how many sources and perspectives to include in a story? What voices does Snapchat miss by relying solely on its own users to submit content? Does Snapchat fact-check? Should it?
After Facebook’s fake news scandal broke in November, journalist Jessica Lessin wrote an editorial for the New York Times arguing that Facebook should not be responsible for policing news. “…hiring editors to enforce accuracy — or even promising to enforce accuracy by partnering with third parties — would create the perception that Facebook is policing the ‘truth,’ and that is worrisome,” she wrote. “I’m not comfortable trusting the truth to one gatekeeper that has a mission and a fiduciary duty to increase advertising revenue, especially when revenue is tied more to engagement than information.”
Does the same sentiment hold for Snapchat, a company that builds its own news packages while relying on advertising revenue? Of course, traditional news organizations count on advertising, too, but “are checked by the power of our competitors and … by readers who stop paying us if we fail them,” wrote Lessin.
Snapchat makes money in a couple of ways: from ads (it costs $350,000 to $600,000 for a branded geofilter and up to $700,000 for a lens) and from its Discover tool — a distinct space in the app where media companies ranging from Cosmopolitan to the Wall Street Journal share their content with Snapchat users.
Snapchat introduced Discover in a post on its website in January 2015, calling it “a new way to explore Stories from different editorial teams” and “the result of collaboration with world-class leaders in media to build a storytelling format that puts the narrative first.” This line is perhaps the most striking: “Social media companies tell us what to read based on what’s most recent or most popular. We see it differently. We count on editors and artists, not clicks and shares, to determine what’s important.”
Again, the question arises: How does Snapchat ultimately decide what’s important? And how much does that answer depend on who is paying the bills? Only Snapchat’s paying media partners can use the Discover space, and a partner can reportedly be booted at any time. As with Live Stories, content in Discover appears to be fully at the mercy of Snapchat, which does not openly detail its policies for branded content. “The messaging and media app has no formal branded content program, and enforces rules arbitrarily about what is and isn’t permitted,” said advertisers and publishers contacted by Digiday.
Lately, it seems that Snapchat has made a new move every day: In September, the company debuted Spectacles, which are sunglasses with a built-in camera hooked up to Snapchat. In November, it filed paperwork for an initial public offering in early 2017 (the company is said to be valued at up to $25 billion, which would make its offering one of the largest stock-market debuts in years). Finally, in December, the company announced new partnerships with Disney and Turner to create original TV shows hosted on the app.
Outsiders don’t know what Snapchat’s plans for its news division entail, but it’s clear that as this so-called camera company grows, so will its power to influence what kind of content it exposes to its millions of users.
Nora Dunne is a Chicago-based writer and editor. She earned a bachelor’s degree in journalism from Boston University in 2010.