Last month, a video emerged online of a University of Mississippi student biting off the head of a hamster during a spring break party. Soon enough, the young man was identified and has since withdrawn from the university; he may also face animal cruelty charges. Should his alcohol-fueled spring break misdeed cause him embarrassment and woe decades later? Should future employers, lovers, family members, or potential in-laws be able to dredge up this incident? Or does the young man have a right to be forgotten, to define himself as someone other than the guy who bit off a hamster’s head?
While the European Union (EU) has, for more than a decade, embraced the view that individuals should be able to have certain personal information removed from the internet, the United States has been slow to adopt the concept.
In May 2014, a ruling by the European Court of Justice (ECJ) brought the topic to the forefront. The fact that the defendant was Google Spain, a subsidiary of the U.S.-based Google Inc., brought the conversation close to home for Americans. In the ECJ case, the plaintiff, a Spanish gentleman whose home foreclosure (since resolved) had been publicized online, asked for the removal of the content “since it was no longer relevant.” Previously, the Spanish Data Protection Agency had denied his request, determining that the content was legal and accurate. The ECJ disagreed, however, concluding that the data, while lawful, was “…inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed.” The decision held that search engines are responsible for the links they point to, forcing Google to comply with EU data privacy laws and remove links to pages where the gentleman’s information appeared. The ECJ further stipulated that Google must allow others to request information removal. For clarification, the official decision noted that future removal requests could be denied if retaining the information were justified by a significant question of public safety or interest; the data controller, in this case Google, would be required to check every deletion request against this test.
Wikipedia founder Jimmy Wales—who is serving on a Google advisory committee that will help the data giant determine which removal requests to grant—has been outspoken about the ruling. He believes that allowing individuals to dictate which links are removed is wrong. He states: “In the case of truthful, non-defamatory information obtained legally, I think there is no possibility of any defensible right to censor what other people are saying. It is important to avoid language like data because we aren’t talking about data—we are talking about the suppression of knowledge.” In most cases, this “suppression of knowledge” is requested by individuals concerned with covering up nefarious activities. After Google’s removal request form became available online, the company received over 12,000 requests for data removal in a single day. Since the form was published, more than 250,000 links have been removed. In a bold attempt to champion journalistic freedom in the face of at least some of these removals, the British news source The Daily Telegraph maintains a list of Telegraph stories that have been removed from search results and makes it available on its site.
Mr. Wales’ concerns about censorship are justified, as there is plenty of gray area in determining which links should stay and which should go. According to the BBC, some of the first requests to be forgotten included a convicted pedophile’s appeal to remove links to pages about his conviction; a doctor’s request to delete links to negative patient reviews; and a politician’s bid, while running for re-election, to delete links to an article referencing his conduct in office. Most people would agree that in these cases, the information should not be removed from search results. But what about shocking yet far more juvenile offenses, such as that of the hamster-biting fraternity boy mentioned earlier?
Andrew McLaughlin, a lawyer and former Director of Global Public Policy at Google, finds the ECJ ruling alarming. In discussing it, he imagines a dystopian political system in which the well-connected rewrite history to whitewash their misconduct while amassing personal information and surveillance on private individuals. Such control over information, used to maintain the status quo and discourage individual thought, was rife in the totalitarian governments of pre- and post-World War I Europe. Lack of privacy protection makes it easy for the elite or politically powerful to coerce individuals into silence. Erasing people’s ability to reference past events is a tactic dictators and authoritarians have used throughout history, and it is part of the reason the EU is so protective of privacy rights.
American hesitancy toward the right to be forgotten finds its basis in the tension between privacy and freedom of speech. Our nation’s political and cultural background has predisposed us to be more concerned with protecting speech than privacy. However, there are certainly strong advocates of privacy protection in the U.S. They argue that in the past, adverse events in a person’s life would be expunged after a certain time period—criminal records would be sealed; credit reports cleared of bad debt; bankruptcies removed—so that individuals seeking to “start over” had the opportunity to move beyond earlier bad decisions or regrettable events.
Unfortunately, the internet has ensured that if such events are posted online, they can be found via a simple search long after the records have lost their relevance. This ability to store and retrieve data indefinitely can significantly impair people’s ability to recover, move forward, or rehabilitate their lives. What’s more, with advances in big data collection and data mining techniques, every “Like” you click on Facebook and every tweet or retweet can potentially be aggregated into a road map of your actions and opinions over time. Privacy advocates appeal to free speech groups by arguing that the absence of right to be forgotten protections is itself a deterrent to free speech. For example, knowing that links to politically unpopular views will remain searchable indefinitely could limit participation in political activism, as people worry about posting views that clash with the mainstream.
On the other hand, opponents of the right to be forgotten claim that this type of legislation allows discretionary censorship of individual information that may be important to consumers or individuals making professional or personal choices. This “cleaning” of data could preclude people from being able to protect themselves from fraud and personal injury when researching potential employees, service providers, friends, and even family. Visionary author George Orwell famously stated: “He who controls the past controls the future.” Allowing revisionists to “sanitize” events, reviews, and personal histories to reflect the reality that they want pushed to the forefront may not be in the best interest of the public. With the adoption of right to be forgotten legislation, the state could allow individuals to remove links to factual information for their benefit or the benefit of others that they designate. Even though the editing of information is self-directed, it still amounts to censorship.
Questions that need answering as consideration of a right to be forgotten continues include: How much time must pass before relevance is determined? Who will weigh public interest against individual rights when examining requests for deletion? Finally, there is the issue of compliance and its cost to companies and consumers. Marc Dautlich, a lawyer at Pinsent Masons, recognizes the difficulty of asking search engines to manage hundreds of thousands of requests. He asks: “If they get an appreciable volume of requests what are they going to do? Set up an entire industry sifting through the paperwork?” It’s a good question, and one that lawmakers and courts should consider closely when contemplating similar legislation.
So which is the more important right to protect—freedom of speech or privacy? I suggest we need both. We need privacy so we can voice our opinions free from political or personal reprisals. We should be free to speak our minds without fear that our words will be held against us in perpetuity. People grow, change, and mature in their personal and professional lives. Something said or done as a child or young adult probably does not reflect the skills, thoughts, or opinions of the same person twenty years later. We should not allow youthful indiscretions to be a blot on someone’s character forever. But we also need freedom from censorship in order to function as a truly free society. Allowing professionals to cherry-pick and remove reviews from search results, for example, or to eliminate evidence of wrongdoing, is to suppress the lawful truth. Some behaviors fall within the scope of public interest, and records of them should be retained for reasons of public safety. Information that is gossipy, hateful, libelous, slanderous, or defamatory, and also outdated, is a different matter, and individuals should be allowed to remove that type of data from public view.
Nikki B. Williams is a freelance writer based in Houston, TX. She has written for a variety of clients from the Huffington Post and D.C.-based political action committees to Celtic jewelry designers in Ireland. Her first nonfiction book, The One Size Does NOT Fit All Guide to Stress Management will be available on Amazon soon. Check for updates at nikkibeewilliams.com.
Digital disease detection (DDD) has been gaining momentum over the last fifteen years. The internet has become a resource for clinicians and health officials looking for new ways to determine the strength and breadth of disease outbreaks and to communicate this information to the general public. Increasingly, disease-related data is being dispersed and collected through both formal and informal channels, from chatrooms and blogs to web-search analyses. This shift to web-based information mining will fundamentally change how public health information is reported. Adopters of DDD hope this information will provide early warnings so that health precautions can be taken promptly. These alerts could potentially prevent epidemics and save lives.
Like many digital technologies, the burgeoning field of DDD, also called infodemiology, is expanding rapidly. The assumption is that this new and more personal way of collecting health information and disease dispersal rates will yield a number of public health benefits, including more timely detection, faster response, better health and safety readiness, and a reduction in fatalities. However, these supposed assets of DDD come with a profusion of ethical challenges regarding privacy, accuracy, verification, and compliance.
Before addressing the ethical ramifications of DDD, we need a firm understanding of how the data is collected and disseminated. There are several major platforms currently in use, with many more in development and testing. ProMED-mail, arguably the first and oldest DDD player, was established in 1994 as an email service that now reaches over 188 countries. It disseminates reports of disease outbreaks by retrieving online data from blogs, emails, and various local online news outlets. The World Health Organization’s (WHO’s) Global Public Health Intelligence Network (GPHIN) is a similar news-crawling service created soon after, in 1997; its software allows GPHIN to collect data every 15 minutes. ProMED and GPHIN were instrumental in keeping health officials well informed during the 2002 SARS outbreak in China.
One of the most widely publicized DDD systems, Google Flu Trends, uses aggregated Google search data to predict global trends in influenza cases. Famously, Google Flu Trends overpredicted flu severity in 2009, when public confusion surrounding H1N1 caused faulty data generation. A newer system, a dengue tracker developed for Sri Lanka in 2013, relies on a computer model that combines information on current dengue cases, weather patterns, and mosquito data to predict the spread of the disease. The information, in the form of hotspot maps, is disseminated to public health workers and laypeople alike. The model encourages the public to report disease symptoms, mosquito breeding sites, and mosquito activity. Citizens can submit the information on cellphone-friendly reporting forms, which also capture their location.
Once an individual has made a report, she receives a health alert tailored to her location that includes helpful information, which can easily be shared on Twitter, Facebook, and other social networks. Challenges identified by the program’s creators include poor participation owing to a reluctance to share disease information; confusion of dengue with diseases that have similar symptoms; and reports skewed by demographics, limiting the effectiveness of the data.
Most recently, researchers at Penn State University developed a system that uses Twitter streams along with medical records to see if they could correctly predict who had the flu. They found that about half of the people whose Twitter accounts they examined discussed their illness on Twitter. For the rest, the team mined the data for subtle clues about users’ health status. For example, users who posted that they were going to a party were less likely to be sick, while a declining tweeting rate suggested a higher likelihood of illness. The study’s authors found that, using tweets alone, they could predict the correct medical diagnosis 99 percent of the time. They are currently working out how to apply this method to the spread of HIV. Using Twitter-based data mining to track disease spread introduces the possibility of labeling and tracking individuals based on casual public comments, which raises critical ethical questions given the stigma attached to HIV and other sensitive diseases.
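The kind of reasoning described above can be caricatured as a simple scoring heuristic. The sketch below is purely illustrative: the weights and the rate-drop threshold are invented for demonstration and bear no relation to the Penn State study’s actual statistical model, which combined many signals from tweets and medical records.

```python
# Toy illustration only: weights and thresholds below are invented, not
# taken from the Penn State study. Symptom mentions raise the odds of
# illness, social plans lower them, and a sharp drop in tweeting
# frequency raises them.

def illness_score(baseline_rate, current_rate,
                  mentions_symptoms, mentions_social_plans):
    """Return a crude 0-1 score suggesting how likely a user is to be ill."""
    score = 0.0
    if mentions_symptoms:        # explicit self-report, e.g. "I have the flu"
        score += 0.5
    if mentions_social_plans:    # e.g. "going to a party" -> probably well
        score -= 0.25
    if current_rate < 0.5 * baseline_rate:
        score += 0.25            # tweeting rate fell by half or more
    return max(0.0, min(1.0, score))

# A newly quiet user who mentions symptoms scores high:
print(illness_score(baseline_rate=20, current_rate=5,
                    mentions_symptoms=True, mentions_social_plans=False))  # 0.75
```

A real system would learn such weights from labeled data rather than hand-pick them, but the sketch shows why casual public posts alone can carry a surprising amount of health signal.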
Issues arising from privacy standards in a globally shared model
One of the greatest concerns raised by DDD is the lack of a level playing field among the nations that use this data. Some countries have stricter privacy and transparency laws than others. How will this be reconciled if the agency collecting and disseminating the data resides in a country that does not place a high value on protecting personal information? The difference between current uses of “big data” and DDD is that DDD data is being used for a common, public good and not for the benefit of a corporation or individual—yet. As DDD technology becomes more viable, corporate players will evolve to capitalize on the trend, potentially taking it out of the nonprofit arena and placing it where the emphasis is on capitalistic competition.
A holistic approach to these issues is needed. Initially, there should be a global governance system that ensures the privacy of all individuals whose data is used for public health reasons, as well as an emphasis on preserving the data for public health use only. Global governance would protect privacy throughout all areas of the world and preclude international corporations from using the data in ways other than initially intended. There should also be some level of participatory agreement or consent before data of a private nature is shared and participants’ identity should be protected.
Issues arising from incorrect data
DDD relies on spatial analysis of cases for both data collection and outbreak reporting, but spatial event data has been found to be widely inaccurate due to the geocoding process. Spatial analysis entails monitoring cases within a given geographic area and looking for discernible patterns from which the probability of disease spread can be extrapolated. Geocoding supplements spatial data by matching geographic coordinates to an address, postal code, or other location identifier in order to pinpoint specific outbreak locations. Accuracy varies with population density and data quality, and even a small number of errors, with roughly 10 percent of records incorrectly geocoded, can produce incorrect disease distribution maps. In addition, when DDD entities concerned with privacy try to de-identify individuals while compiling data sets, there is a noted trend toward further inaccuracy. Both the technology behind geocoding and that used to cloak individual identities within the mass of data need to be improved before we can rely fully on DDD predictions while remaining safe from privacy invasion.
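The sensitivity to geocoding error is easy to demonstrate with a toy simulation. The districts, case counts, and error rate below are hypothetical, chosen only to show how mis-geocoded records scatter phantom cases across a map:

```python
import random

def simulate_counts(true_district, n_cases, districts, error_rate, seed=0):
    """Count cases per district when each record may be mis-geocoded.

    Every case really occurs in `true_district`, but with probability
    `error_rate` the record is assigned to a random wrong district.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    counts = {d: 0 for d in districts}
    wrong_districts = [d for d in districts if d != true_district]
    for _ in range(n_cases):
        if rng.random() < error_rate:
            counts[rng.choice(wrong_districts)] += 1  # mis-geocoded record
        else:
            counts[true_district] += 1
    return counts

# With ~10 percent of records mis-geocoded, cases appear in districts
# that actually had none, distorting the resulting hotspot map.
print(simulate_counts("A", 100, ["A", "B", "C", "D"], error_rate=0.10))
```

The hotspot itself usually survives a 10 percent error rate, but the spurious counts in neighboring districts are exactly the kind of noise that can trigger false alerts or misdirect resources.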
Issues arising from incorrect and delayed outbreak notification
Incorrect data can lead to erroneous public notification or warning and unnecessary panic among the populace. If the error is not caught early in the process, it can result in a strain on the public health system as people use clinical personnel, medicines, and other resources unnecessarily. Disseminating incorrect information can also have a serious impact on the public’s perception of an industry and the government, which can result in disregard of subsequent notices. Failsafe and validation requirements need to be in place to prevent wasting public health resources on invalid predictions and to curtail widespread panic.
Not surprisingly, availability of data also influences the speed of information dissemination. Countries with a free press and a high proportion of internet users were able to get information on outbreaks to the public in a more timely manner, but even these countries experienced a 12-17 day lag between data collection and outbreak announcement. Areas with a partially free press (17-24 days) and countries with no press freedom (24-37 days) experienced the greatest lags in outbreak reporting. The countries with the greatest lags also tended to embrace government-produced propaganda that clouds reporting of emerging outbreaks. Both lag time and ambiguous reporting can result in unsuccessful or inaccurate information spread.
DDD must undergo a great deal of structuring and regulation before it can be considered a real, rather than rogue, technology. Most people are familiar with the leper colonies of the past, where infected persons were shunned, and even harmed, for the risk they represented to society. A global communal entity that consistently uses personal information to track potentially life-threatening illnesses could likewise identify and segregate sick individuals to their detriment. A watchdog organization must be created to oversee the correct and private handling of personal data. There must also be significant oversight and monitoring, including checks and rechecks, before warnings and predictions are released publicly. Strict empirical control is needed to prevent false alarms and widespread panic that can threaten and endanger lives. In the event of a false alarm, there should be a standard plan available to world health officials that would allow them to mitigate the damage and offer immediate remediation to areas negatively impacted. Finally, there should be transparency for consumers who are active online: they should be allowed to choose whether to participate in DDD programs and have options regarding consent for data usage. DDD represents a leap forward in the science of predicting and, ultimately, preventing local and global health disasters, but it should not come at the expense of privacy.
Nikki B. Williams is a freelance writer based in Houston, TX. She has written for a variety of clients from the Huffington Post and D.C.-based political action committees to Celtic jewelry designers in Ireland. You can contact her through her website, nikkibeewilliams.com.
Wallets will be a relic of the past if the forces behind mobile payment apps have their way. Services like Apple Pay, Samsung Pay, Google Wallet, and seemingly countless others aim to nullify the need for physical credit and debit cards – the payment systems that made cash and checks obsolete just a couple of decades ago.
On the surface, paying for things with your phone seems convenient, but not absolutely necessary. It’s not really a hassle to pull out a credit card while waiting in line at the grocery store. But the smartphone already encompasses so many aspects of everyday life. Why not consolidate one more feature onto it?
Let’s walk through a hypothetical day: You’re out for a run in the morning, listening to music via headphones attached to the phone strapped to your bicep. You stop to purchase a cold water bottle at a convenience store. No need to store a card or cash in your waistband or tucked in your shoe. Tapping the phone against an electromagnetic reader at the register completes the purchase. Later, at work, you owe a coworker $20, but have no cash. But you don’t have to find an ATM during lunch: Instead, you send the money over Venmo, an app that’s hooked to your bank account. The colleague receives the payment instantly. That night, you buy your mom a birthday present from an online store. Instead of fishing out your credit card to type in its number, expiration date, and so on, you click one button that’s already hooked to your Google Wallet account. Done.
Acknowledging the potential convenience of mobile payments is relatively easy: They’re fast and they prevent you from having to carry around as much stuff. But there are ethical issues the industry needs to resolve before consumers will completely accept the concept, notably those related to accessibility and privacy: First, how easy is it to use these apps in the real world? Do they work anywhere, for anyone with any phone? Second, how do the companies behind these apps protect consumers’ private payment-related data? And how are they using that data for their own purposes?
Let’s focus on a major player in the mobile payments sphere, Google Wallet. The digital wallet app holds not only a person’s credit and debit card information, but also gift and loyalty cards. It can also be used to make purchases at online stores partnered with Google or to send money peer-to-peer. People make purchases in stores using near field communication, or NFC, a technology that uses electromagnetic induction to let a device communicate or exchange data with another device.
Some background: The Google Wallet FAQ page says that the app works “anywhere MasterCard PayPass is accepted,” which includes “millions of merchant locations in the United States.” Searching MasterCard’s locator for grocery stores within one Chicago ZIP code reveals 42 locations that accept this payment method, including chains and independent mom-and-pops, but not every chain or mom-and-pop grocery store. A user would need to keep track of which locations accept the service and which don’t, or else carry backup payment options – but there goes some of the convenience factor.
So Google Wallet does not work everywhere. But a bigger issue is that it doesn’t work for everyone. In fact, to use the app in stores users must have NFC-enabled Android phones – not the cheap ones or the older models. Consider CNET’s list of the best Android phones of 2015. The top-rated, NFC-capable Galaxy Note 4 retails for about $700. The “best budget” Android, the Motorola Moto G, costs about $200, but does not have NFC. To enjoy the benefits of Google Wallet, people must be both willing and able to buy an expensive phone that has the required technology.
If apps truly become the preferred mode for payment, those who won’t or can’t invest in pricier phones may be at a disadvantage – even locked out of certain stores if apps become not just the preferred option, but the only option they accept for purchases. Or, less drastic, maybe a shop offers special discounts to customers who use a payment app. Is it fair to link something as ubiquitous as shopping to a technology that is not attainable to all?
These scenarios are not likely to become realities anytime soon. In a 2014 Deloitte survey, just 7 percent of consumers in the United States said they had used their smartphone to make an in-store payment. About half of the respondents said that they didn’t even know if their phone had NFC capabilities. Another survey, conducted by MEF, a global trade association focused on mobile commerce, reported that 79 percent of U.S. respondents were not comfortable sharing their personal information over an app and 35 percent said they don’t believe mobile payment systems are secure.
Despite these concerns, the security features of payment apps may actually be their primary perk. When paying with Google Wallet, a consumer’s stored credit and debit card data is not passed on to the merchant. This is a significant plus as cyberattacks that compromise customer accounts become more common. Google Wallet also provides fraud protection, lets users lock the app with a security PIN, and allows people to remotely disable the app should a phone be lost or stolen.
Consumers can rest assured that their personal information is probably safe from hackers and thieves. It’s not, however, safe from Google’s grasp. It’s well known that the company has the ability to monitor surfers’ online behavior – site visits, locations, and so on. With Google Wallet, it has access to arguably some of the most personal of personal data: your finances.
Another mobile payment app, Apple Pay, has many of the same pros and cons as Google Wallet.
It requires NFC technology, which works only on the iPhone 6 and iPhone 6 Plus ($649 and $749 without a contract). But it also provides security benefits that surpass those of traditional payment methods. Apple Pay even has a security feature that Google Wallet does not: a fingerprint identification sensor called Touch ID.
Ultimately, most people won’t make a conscious decision to choose Google Wallet over Apple Pay. Instead, it will come down to the phone and carrier they already have. Those who use neither Google nor Apple will soon have another option: In February, Samsung acquired LoopPay, which allows the smartphone maker to launch its own mobile payment system, one that also features a fingerprint reader. Samsung Pay will use a different kind of technology, known as Magnetic Secure Transmission, which lets the phone emit a signal that mimics a card swipe, so it works with the magnetic stripe readers already built into credit card terminals.
All of this competition in the mobile payment sphere may be confusing for consumers, many of whom have never used one of these apps. But the competition among Apple, Google, Samsung, and many others is actually making mobile payment apps more secure, as each company tries to one-up the others in an area they believe is here to stay.
Is it inevitable that consumers will eventually embrace it, too? People already use their phones to send highly compromising selfies, to write sensitive business emails, and to bank online. Uploading all your credit card and checking account information to Google or Apple accounts doesn’t seem so dangerous in that light. Indeed, mobile payment transaction values doubled between 2012 and 2013, according to a report by eMarketer. It’s already a multibillion-dollar industry despite hesitance from some consumers and stores.
Last fall, CVS made headlines when it turned off NFC readers in its credit card terminals, disabling the ability for customers to use Apple Pay and other mobile payment apps at its stores. Instead, CVS joined a consortium of some 40 merchants led by Walmart called Merchant Customer Exchange to develop their own mobile payment app, CurrentC. While the service is accessible to anyone with a smartphone, it has its own set of issues.
For starters, CurrentC is hooked up to consumers’ bank accounts, not their credit cards, allowing companies to sidestep the 2 to 3 percent processing fee they must pay, say, Visa and MasterCard when customers use those forms of payment. But will Walmart and CVS pass those savings on to their shoppers or minimum-wage employees?
Another problem for consumers is that these stores feature CurrentC exclusively, a move that seems self-serving. Walt Mossberg, of tech news site Re/code, summed it up by writing: “I simply believe that people who respect their customers and have faith in their own technology products should welcome competition, and that consumer choice should be a paramount value in retailing.” Customers seem to agree: Reviews in Apple’s App Store and in Google Play give the CurrentC app an average one star rating out of five, noting faulty security features and cumbersome usability.
It’s reasonable that other companies would want to enter the mobile payment app game – why should the big names win the whole market? Right now, they don’t. There are many other relatively well-known mobile payment apps, including PayPal, Venmo, LevelUp, Square, and store-specific ones like the Starbucks App, plus many less prevalent options. But even with all these choices, it’s not yet easy to forgo cash and plastic completely.
While mobile payment apps may be safer than using credit cards, ethical issues remain. They may perpetuate a dichotomy between those who can afford compatible phones, and those who can’t. The apps also carry the same privacy concerns that all digital products do: They enable companies to track deeply personal information. The details of how they are using this data now, and how they will use it later as the technology evolves, are unsettlingly unclear.
And even if every logistic and ethical issue is eventually reconciled, a smartphone can still break or run out of battery.
Nora Dunne is a Chicago-based writer whose work has appeared in the Boston Globe Sunday Magazine, the Christian Science Monitor, Metro newspapers and Kirkus Reviews. She earned a bachelor’s degree in journalism from Boston University in 2010.
A smiling pile of poop. A rainbow. A cat with hearts for eyes.
Do these belong in the courtroom?
Because they’ve been showing up there lately. Two months ago, 17-year-old Osiris Aristy was arrested for making a threat against the New York Police Department. His threat included a police officer emoji – a tiny cartoon symbol you can text someone; in this case, a man wearing a police cap – and three gun emoji. The Brooklyn resident posted them to his Facebook profile.
Of course, that wasn’t his entire message. Aristy also wrote “[Black man] run up on me, he gunna get blown down,” and, hours earlier, had posted a photo of a gun with the caption “feel like katxhin a body right now.” That was enough to get him charged with making a terroristic threat.
Aristy’s charges were dropped last month, but it’s pretty significant that he was charged at all. He’s not the first, either. Emoji have been used as evidence in a handful of recent court cases, raising the question, is it ethical to use these tiny, seemingly harmless cartoons as evidence? Especially since their meaning can be so murky?
Basically, do emoji count?
To answer that, let’s take a look at their prevalence, usage, and meaning.
First, emoji are definitely part of language. Currently, statistics about how many emoji have been texted are not available; however, on Twitter, people use them more often than hyphens, the number 5, or capital V. A dizzying real-time emoji tracker reports that the most popular emoji on Twitter, a face crying tears of joy, has been used more than 626 million times. Since there are some 720 emoji, total use on Twitter alone is probably in the hundreds of billions.
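How does a tracker tally emoji at all? A rough sketch: every emoji is just a Unicode character, so counting them is a matter of checking code points against the emoji blocks. The ranges below cover only a few of the main blocks and are a simplification; a real tracker would use the full Unicode emoji data files.

```python
from collections import Counter

# Simplified: only a few main emoji blocks; real trackers use the
# complete Unicode emoji data files.
EMOJI_RANGES = [
    (0x1F300, 0x1F5FF),  # misc symbols and pictographs
    (0x1F600, 0x1F64F),  # emoticons (includes "tears of joy", U+1F602)
    (0x1F680, 0x1F6FF),  # transport and map symbols
    (0x2600, 0x27BF),    # misc symbols and dingbats
]

def is_emoji(ch):
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in EMOJI_RANGES)

def emoji_counts(tweets):
    counts = Counter()
    for tweet in tweets:
        counts.update(ch for ch in tweet if is_emoji(ch))
    return counts

tweets = ["so funny \U0001F602\U0001F602", "love it \U0001F602 \u2764"]
print(emoji_counts(tweets).most_common(1))  # [('😂', 3)]
```

Run over a stream of millions of tweets, a tally like this is all it takes to produce the “face crying tears of joy leads” statistic cited above.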
If their popularity alone isn’t proof enough that they’ve become part of the lexicon, well, ask officials at the Library of Congress and the Oxford English Dictionary. The former accepted a copy of Moby Dick made up completely of emoji in 2013. Two years earlier, the heart symbol became part of the Oxford English Dictionary, meaning “to heart” or “to love.” You can try to brush emoji off as fringe teen slang – their main home is on an iPhone screen, after all – but they’re increasingly becoming mainstream. Legal experts even say emoji are covered under our First Amendment right to freedom of speech.
So whether we like it or not, a cartoon pizza slice now counts as language.
Make no mistake, emoji are open to interpretation. A winky face, for instance, can be flirtatious, or, at the end of a text that reads “I hate you,” signal that the sender is joking. Some think the emoji of praying hands is actually a high five. You have to consider the relationship between the sender and the receiver, the context of the message, and typical use of the emoji itself. (A gun is less ambiguous than a wink.) Thanks to irony, sarcasm, and plain ol’ variations in usage, language is no straightforward thing.
So yes, it would be ridiculous to base an entire court case on emoji. As Wired writer Julia Greenberg writes, “None of these cases [that mentioned emoji] relied solely on the emoji, of course. Evidence, arrests, and prosecutions are far more complicated than that.”
But sometime soon, courts will have to answer Eli Hager, who asks on the criminal justice news site The Marshall Project, “[Are] emoji significant and unambiguous enough to be presented to the jury the same way the words are? Are some emoji significant, but others, not?” The gun emoji, for instance, seems especially incriminating and straightforward.
In the case of 22-year-old Christopher Levi Jackson, however, using the gun emoji a whopping 27 times wasn’t enough to get him charged with murder. A few hours after someone shot and killed 25-year-old Travis Mitchell, Jackson texted Mitchell’s sister, “It’s a chess game. I’m up two moves a head … try again. Bang bang, bang,” followed by 27 gun emoji. Detectives on the case believed that Jackson’s text meant Mitchell wasn’t the intended victim, and that Jackson planned to kill whoever was. Police arrested Jackson for first-degree murder, but without further evidence, they had to release him.
And that’s how it should be. As D.C. attorney John Elwood told Buzzfeed, “Words that could be construed as threatening are enough to make an arrest, but they shouldn’t be enough to convict someone.” Emoji should be examined with as much context as possible, in light of the sender’s criminal record, past behavior, and other factors. Time will tell how much value juries place on them.
For now, the emoji is in its infancy; words are still our main units of language. You wouldn’t build a court case on body language, even though it’s a huge part of communication (55 percent, according to researcher Albert Mehrabian, who came up with the famous “93 percent of language is nonverbal” statistic). There isn’t a benchmark yet for emoji use in court. Lawyers can’t even agree on whether emoji should be read or shown in the courtroom.
In a recent court case, for example, the defendant’s lawyer argued that emoji should be included as evidence because they shed light on the rest of the message. The defendant’s lawyer not only asked the judge to include an emoji after a particular statement, but to show the emoji to the jury. In a letter to the judge, the lawyer argued that describing emoji aloud wasn’t sufficient because they “cannot be reliably or adequately conveyed orally.”
The bottom line is, just because you can say or text something doesn’t mean it’s free of consequences. Emoji are part of how we communicate, so they have repercussions. At the risk of sounding like an after-school special, think before you text.
After all, interpretation often matters more than intention. Do you really want to wind up in jail for threatening arson because you used 12 angry faces and 16 fire emoji? Didn’t think so. Because even if you’re [smiling emoji], jail is [crying emoji].
During the week of November 20, 2014, The New York Times ran a story called “Cities Energized: The Urban Transition.” From its title, one might assume the piece concerns a notable shift in how cities consume and/or generate energy. After all, it’s a vital issue—the kind that a paper such as The New York Times might cover. President Obama himself has been calling for a decided shift to renewable resources throughout his presidency, though in his 2014 State of the Union speech he celebrated green energy initiatives while praising the oil boom of the preceding year in nearly the same breath. It’s a divisive topic. But one would hope, by virtue of the story’s placement in the paper, that the author strove for objective journalism. Furthermore, one would hope that the editorial staff aimed for pointedness and relevance in selecting the piece. Lastly, one would expect that the piece offered a critical perspective on such an issue. As it turns out, one would be categorically wrong about all of these things.
Despite all traditional indications to the contrary, The New York Times was paid by Shell to place “Cities Energized” in its paper. The piece is, definitively, a piece of “native advertising.” Though Shell is only mentioned a handful of times throughout, the story’s placement is all part of a focused effort to depict the company as a leader in energy. Notably, the Shell ad marked the first time in the history of The New York Times that a piece of native advertising had been featured in print. While the paper has increasingly experimented with native ads in its online format, this recent move seems to indicate a strategic shift, both for the paper and perhaps for the advertising industry as a whole. Why now? Meredith Levien, EVP of advertising at the Times, explained that while advertisers had expressed interest in running native print ads in the past, it had been determined that such ads weren’t befitting of the paper.
Defining the practice has proven tricky. Solve Media made a valiant effort: “Native advertising refers to a specific mode of monetization that aims to augment user experience by providing value through relevant content delivered in-stream”. But this is a definition based on rhetorical posturing and advertising jargon. In a Huffington Post op-ed, Fahad Khan offers a broader, yet considerably more tempered definition: “Native ads are ads in a format that is native to the platform on which they are run, bought or sold. Native advertising is the activity of producing, buying and selling native ads.”
The question is ultimately contextual. By definition, native advertising appears alongside editorial content in a clandestine fashion. Native advertising mimics the form and function of the platform on which it appears. Advertising content can be viewed as “native” when it appears alongside other content and media that appears on said platform, hence the term. But editorial content, it is not. An important ethical distinction between the two involves the exchange of money—more precisely, which way said money is directed. Editorial content is generally an expenditure of the entity that produces it. Creating content costs time, money, manpower and other resources. Conversely, advertorial content is another stream of revenue for said entity. Another party supplies the work—a party with the financial means to buy editorial space in a paper such as The New York Times.
The Shell piece received quite a bit of coverage in ad industry publications. Moreover, the practice as a whole has generally been heralded by industry insiders as the latest and greatest innovation in marketing communications. Likewise, The New York Times piece received praise of this nature. But despite the hype, it’s unclear if native advertising delivers a worthwhile return on investment. As such, the practice hasn’t really taken off quite yet. Primarily, the analytics needed to measure its success haven’t been fully developed. Strategists also haven’t been able to hone audience targeting enough to make native placements truly worth it over other forms of digital advertising. But there are some major signs that mark the practice’s rise.
Take Forbes’s Brand Voice, for example. Brand Voice is a platform for company-sponsored long-form content. Last year, the platform accounted for 30 percent of Forbes’s total ad revenue. And as the hype of native advertising wears off, efforts to measure its effectiveness will become more reliable. Selina Petosa from Ad Age predicts “an expansive shift toward native long-form content in the years ahead”. But of course, while many questions about the effectiveness of native advertising have yet to be answered, the medium’s viability is not precisely relevant to its ethical validity.
When it comes to understanding the emergence of native advertising, it helps to have some historical context. Modern advertising grew out of innovative techniques introduced by the tobacco industry in the 1920s, particularly the campaigns of Edward Bernays, a man widely considered the founder of “Madison Avenue” style advertising. He is also known as the father of public relations. The “Madison Avenue” style is typified by creative use of language and graphics employed specifically to manipulate people’s emotions on a mass scale, usually for the purpose of promoting a product or service. Bernays drew heavily from the works of his uncle, Sigmund Freud, whose psychoanalytic theory provided Bernays with a framework for his methods. He was also influenced by crowd psychology, a fledgling field of study at the time. While Bernays held the position that “herd instinct” had caused a dangerous tendency in society to be prone to manipulation, he also maintained that such manipulation was necessary. By understanding how this group behavior worked, Bernays hypothesized that one could manipulate people without their conscious knowledge of what was happening. Such is the mentality of the modern school of advertising. With this in mind, it’s really quite obvious that native advertising isn’t a trade anomaly. The practice is the logical end of such a mentality.
One essential factor in the success of a native ad piece is its ability to be indistinguishable from the editorial content with which it is featured. While The New York Times piece conceivably passed in this capacity, an earlier ad featured in the Washington Post arguably missed the mark. In fact, Kevin Gentzel, the paper’s chief revenue officer, made the point that the piece was “designed and labeled to be differentiated from the newsroom-generated journalism on the page, hence the coloration and slugging”. But this brings up an interesting, albeit admittedly semantic point. Is an ad truly native if it is clearly distinct from the editorial content? Does the mere virtue of its advertorial nature make the content invalid from an editorial perspective? While—yes—The Washington Post ad is clearly labeled, it occupies space formerly reserved for editorial content, thereby undermining the entire reason the newspaper exists in the first place.
While native advertising may mark an important advancement in the way we use communication media to market products, it may in turn be the death knell of journalism as we know it. One should note that the Times and Post pieces are just the initial ventures of major newspapers into the business of native. If they prove successful, it’s a given that more and more companies will get in the game. An arms race in the native playground may lead to an increase in the quality of advertorial content. Though this point may be lost upon old-fashioned, honest-to-goodness journalists, whose editorial space will surely dwindle if native becomes mainstream. But perhaps there is a threshold beyond which consumers will not tolerate the overstepping of advertisements into editorial space. After all, people can only take so many ads before abandoning a product wholesale. As analytics evolve, this metric may present itself in interesting ways. It very well could be that native, if proven effective, will only serve to replace traditional forms of advertising rather than continuing to encroach upon editorial territory. It’s a slightly less bleak outcome, but still one that does not address the underlying problem.
Native ad champions tout one essential perk of the practice: it’s a mode of advertising that does not interrupt the medium on which it’s featured. The ad does not disrupt the user experience. Unfortunately, that very perk presents an ethical problem. While a newspaper’s primary function is to inform, the essential nature of native advertising is to deceive: if a piece of native content is successful, the consumer is fooled about the work’s intent, deceived in the service of promoting a brand. Of course, consumers aren’t so easily fooled. Generally, one knows an advertisement when one sees it. There are numerous, rather blatant indicators that give it away. However, as the practice of native advertising becomes commonplace, advertisers will become more adept at masking the true nature of the content. The price, unfortunately, is a gradual undermining of editorial work and, sadly, of journalism as a whole.
It is perhaps a sad reality in today’s socioeconomic environment that a reputable paper like The New York Times requires new streams of revenue that categorically undermine the editorial content, i.e., the very reason people buy it in the first place. But this doleful exchange isn’t anything new, nor is the controversy surrounding native advertising. In fact, the Federal Trade Commission settled its first case on the matter in 1917, in which an ad for a vacuum cleaner was presented as a favorable review. Furthermore, most of the major points against native advertising, at least in relation to journalism, are extensions of the same ideological points one can make against advertising in general. When we come to think about the ethics of native advertising, we inevitably reach a paradoxical juncture: how do we reconcile ourselves to a potentially harmful practice, one in which the manipulation of others is so readily lauded? If a certain level of deception is required to promote a brand or a product, is it really worth promoting?
David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch. Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at email@example.com, and his URL is http://davidstockdale.tumblr.com/.