The recent violence in Charlottesville, Virginia has underscored the increase in racist hate groups in the United States. Heidi Beirich of the Southern Poverty Law Center (SPLC), an advocacy organization that monitors extremist groups in the United States, said, “Since the era of formal white supremacy — right before the Civil Rights Act when we ended [legal] segregation — since that time, this is the most enlivened that we’ve seen the white supremacist movement.” The SPLC reports that, as of 2016, the number of hate groups in the United States, both black and white, had increased for the second year in a row.
In an era of equal-opportunity hate, the internet has become a proving ground for new tactics in pursuing social justice. In the wake of the Charlottesville debacle, web domain hosting service GoDaddy terminated its services to the neo-Nazi website Daily Stormer. After attempting to secure services with Google and, later, a Russian site, Daily Stormer has since retreated to the dark web.
Other internet-based companies have taken similar actions in an effort to control the proliferation of hate speech and violence in the online world. OkCupid banned white supremacist Chris Cantwell from its site for life, Airbnb deactivated the accounts of members it believed were headed to the Charlottesville rally, and GoFundMe removed crowdfunding campaigns for the legal defense of James Fields, the man accused of driving his car into counter-protesters at the rally.
On the surface, this seems like a good thing. But beneath the noble notion of stopping hate speech lurks a far more insidious issue: The decline of free speech in the face of internet capitalism.
The Truth Behind the Stance
Dan Race, a representative for GoDaddy, said the company terminated its relationship with Daily Stormer in August 2017 due to a terms of service violation. However, one month prior, GoDaddy made a seemingly incongruous decision to protect the hate site even after Daily Stormer published an article threatening children and family members of CNN employees as well as graphic images of various CNN journalists being shot in the head. When asked why GoDaddy continued service to the hate site, Ben Butler, GoDaddy’s director of network abuse, said, “We do not see a reason to take any action under our terms of service as [the article] does not promote or encourage violence against people. While we detest the sentiment of this site and the article in question, we support First Amendment rights and, similar to the principles of free speech, that sometimes means allowing such tasteless, ignorant content.”
What happened between July and August to cause GoDaddy to change its stance? An outpouring of extreme public pressure precipitated its move to finally deny service to Daily Stormer. This kind of policy flip-flop is indicative of a much larger problem facing proponents of free speech online — the use of vague terms of service to justify business decisions driven by consumer opinion.
In essence, GoDaddy is not acting against hate speech; it’s acting in its own best interest and the interest of its shareholders.
Most people were happy with the decision to deny service to Daily Stormer, a publication that consistently spews hateful and vile messages. But we must consider what happens if GoDaddy decides its customer base doesn’t like it hosting pro-Christian, pro-Muslim, or pro-LGBT channels. The terms of service policies for GoDaddy and many other online companies are intentionally vague. Ambiguous language allows them to make quick pivots in policy to stay ahead in an industry known to elicit lightning-fast changes in public perception. The problem with this ability to self-determine action against expression is that companies may easily slip from censoring violent hate speech to removing speech that is only controversial to some, such as the Black Lives Matter movement.
In other words, what’s best for online companies’ business positions is at odds with the protection of free speech. As major intermediaries in the online environment, large private corporations have the ability to take swift action in the suppression of online speech because, unlike governments, their actions don’t require a court order. But this unchecked and opaque system of self-regulation presents a serious roadblock for free expression.
Further Complications in a Limited Online World
Currently, the internet has enough domain registrars that GoDaddy and Google’s response might subvert, but not destroy, Daily Stormer’s and other similar channels’ ability to communicate. But there are far fewer online payment processors, and withdrawal of this type of service can cost an organization the ability to raise the funds necessary to support and communicate its cause, as when PayPal froze donations to WikiLeaks in 2010. The difficulty with this kind of corporate activism is that it results in information being moderated not by the courts and due process, but according to the whim of a few companies that control a chokepoint for public information.
As the online economy progresses, it has boiled down to a few major players — an oligopoly of technology, if you will. The top tier, dubbed “The Frightful Five” by the New York Times, comprises Amazon, Apple, Facebook, Google and Microsoft. Not surprisingly, the internet is experiencing consolidation across the board. In 2007, 50 percent of North American internet traffic came from several thousand websites. In 2016, only nine years later, 35 websites accounted for more than half of the traffic.
When just a few companies control hosting, payment and social media venues, they have the potential to make it increasingly hard for the public to access the rich, vibrant, culturally and politically diverse universe of information. Some of this is due to manipulative algorithms, exemplified by the way Facebook filtered news of the Ferguson riots in 2014, but some is a reaction to public outcry that puts pressure on their bottom lines. For example, in 2015 the House Foreign Affairs Committee wrote to Dick Costolo, Twitter’s CEO, to urge him to combat groups like the Islamic State. In a bold move, Twitter’s general counsel, Vijaya Gadde, responded with a pledge to preserve “…the ability of users to share freely their views — including views that many people may disagree with or find abhorrent.” However, once the public hue and cry was raised, Twitter changed course and began to suspend purported ISIS accounts.
Thankfully, there are some outliers. In a remarkable stand against the pulse of public opinion, Tobi Lutke, the CEO of Shopify, an Ottawa-based e-commerce company, decided to continue hosting a store for Breitbart News, an extreme right-wing media outlet in the United States, despite receiving more than 10,000 messages urging him to drop the outlet. Lutke said, “To kick off a merchant is to censor ideas and interfere with the free exchange of products at the core of commerce. When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code?”
Who, indeed. When considering the rights and wrongs of online expression, it’s important to consider that what is offensive to one person may not be to another, particularly in the realm of politics and religion. All views must be protected and represented, regardless of what a vociferous few, or even many, might think. As Forbes contributor Kalev Leetaru writes, “. . . what one culture or religion might view as parody or satire might be viewed by another as hate speech. To a secular Frenchman, Charlie Hebdo’s cartoons lampooning the Prophet Mohammed might be viewed as legitimate political satire, while to a Muslim they could be viewed as hate speech inciting violence.”
Corporate entities very often model the opinions expressed by their consumer base, rather than seeking fair representation for all views. In 2010, political cartoonist Mark Fiore’s app was rejected by Apple for satirizing public figures. Four months later, he won the Pulitzer Prize and Apple reversed course after irate consumers emailed Steve Jobs about the decision, underscoring not only the problem with companies determining what is and is not offensive, but also the hold that public opinion exerts over company policies.
A Complex Solution
Private companies should be allowed to act according to their own set of policies, but those policies must have a foundation in law. For the United States, this means the government needs to work toward firmer oversight of online ethical issues as well as the development of distinct guidelines for corporate policies that are firmly anchored in constitutional law. At the least, online corporate user agreements, terms and conditions, and other policies should be clearly stated and transparent. Currently, only Google, Microsoft and Twitter, out of the 22 companies featured in the 2017 Corporate Accountability Index, divulge the type of data restricted in their terms of service. Besides having clear guidelines for users, online companies should have grievance procedures in place to address potential violations of freedom of expression.
While private companies should be allowed to retain their autonomy to determine the direction and position of their businesses, they should also be compelled to follow overarching parameters that have been determined through due process. The independent policing of speech and content by a handful of large organizations, if allowed to continue, could easily shift these corporations into a quasi-governmental censorship role, letting the reins of information slip from public hands for good.
In the meantime, the antidote for hate speech, online or otherwise, is not censorship. As the late Supreme Court Justice Louis Brandeis wisely said in his Whitney v. California opinion in 1927, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”
Nikki B. Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at www.gottabeewriting.com.
A study, “College Students’ Drinking and Posting About Alcohol: Forwarding a Model of Motivations, Behaviors, and Consequences,” by researchers at the University of North Carolina and Ohio University, reveals an interesting correlation between drinking alcohol and posting about it on social media. Specifically, the study of 364 college students, published in the Journal of Health Communication, found that those who had an “alcohol identity” were more likely to post about their alcohol consumption on social media.
The students participated in an online survey. Each participant was over the age of 18, was active on at least one social media site and had consumed at least one alcoholic drink in the past month. The students were asked a variety of questions about their drinking habits and any related issues, their social media habits, and the connections between drinking and posting.
Strikingly, posting on social media was a greater predictor that students would have alcohol-related problems than actually drinking alcohol. Dr. Charee Thompson, one of the co-authors and an assistant professor of communication studies at Ohio University, wrote, “This might be because posting about alcohol use strengthens a student’s ties to a drinking culture, which encourages more drinking, which could lead to problems.”
Another of the study’s co-authors, Lynsey Romo, an assistant professor of communication at North Carolina State, also made several particularly interesting points.
Alcohol use is clearly a problem on college campuses, as the sobering annual statistics from the National Institute on Alcohol Abuse and Alcoholism make clear.
The symptoms of an Alcohol Use Disorder include the inability to stop drinking or reduce alcohol consumption, forgoing other activities to drink, having memory blackouts, engaging in risky activities as a result of drinking, needing to drink more to obtain the desired effect and experiencing withdrawal symptoms.
However, while alcohol use – and misuse – among college students is problematic, the idea of student leaders and college administrators trying to identify at-risk students by scanning social media sites for alcohol-related texts and photos is just as troubling.
First Do No Harm
Whether intentional or not, this type of behavior has the potential to cause harm to targeted students. Danielle Keenan-Miller, Ph.D., director of the UCLA Psychology Clinic and an adjunct assistant professor in the UCLA Department of Psychology, said, “The ethics code for the American Psychological Association has a number of guiding, or aspirational, principles that are meant to guide the behaviors of psychologists, several of which seem related to this case.”
According to Keenan-Miller, “First, psychologists strive for beneficence and non-maleficence, which means that we aim to benefit those with whom we work and not do harm, and clearly, there is a risk for harm to students if colleges use data about students’ social network postings in a way that is stigmatizing, jeopardizes their student status, or creates risk of legal harm.”
She admits that the students who are accurately identified would perhaps benefit from receiving assistance, but concludes, “The study is not an intervention study, so it does not itself show that there is any benefit to students who are identified this way.”
The implications can extend far beyond searching for students to “help.” Some employers already use negative social media posts against job candidates. This includes posts that exhibit, among other things, drunkenness, racism and overt sexuality. Colleges are also using social media to determine admittance and scholarship offers. What happens if employers and colleges resort to labeling applicants as having alcohol problems because they posted photos that included alcoholic drinks?
Carol A. Prescott, Ph.D., professor of psychology and gerontology at the University of Southern California, believes these indicators are probabilistic and not necessarily causal. “You can think about it like how auto insurance companies assign rates to drivers based on age and sex,” Prescott said. “Demographics are statistically associated with how people drive, but it doesn’t mean every person in a category (e.g., males under 26) is going to be riskier, just that on average there is an association and it is useful for predicting behavior in the aggregate.”
And that, intrinsically, could be problematic. According to Keenan-Miller, “There is an ethical difference between identifying through research that some kind of marker can be used to predict an adverse outcome, and a policy that says that we should then deploy using that marker in the general population.”
For example, she says that genetic markers can help predict who is at an increased risk of developing a particular health condition. Genetic markers have been used to predict Alzheimer’s, postpartum depression, the fatality of prostate cancer, and melanoma survival rates. “However, employers and schools can’t force their employees or students to get genetic tests and then put them under various kinds of medical treatments to try to keep them healthy,” Keenan-Miller said.
Taking any type of action based on social media posts is extremely troubling to Robin Schooling, a Baton Rouge-based HR executive and strategist. “College students are, technically, ‘adults,’ and therefore, have the freedom to broadcast their online personas and identities,” Schooling said.
While some posts may not reflect the best judgment, Schooling believes it’s important for schools and employers to put these unfortunate selfies or comments in perspective. “This is merely one aspect of a young adult’s interests and capabilities,” Schooling said, warning against jumping to conclusions. “The student council president may be at a party standing next to a keg, but that doesn’t mean she has a drinking problem.”
There are other issues, which could include a predisposition against those who drink. “There is a real concern of bias setting in when individuals, acting on behalf of a school or employer, make judgments based on pictures or posts viewed in isolation,” Schooling said.
“The HR professional conducting a pre-job offer media scan may be a teetotaler and adamantly opposed to the consumption of alcohol, which is fine if that’s her choice; however, an ingrained human bias to anyone who drinks alcohol should not factor into a determination of whether a candidate has the necessary knowledge, skills and abilities to perform a job.”
Informed Consent Issue
The lack of informed consent is another potential problem. James Amirkhan is a psychology professor at California State University Long Beach, and chair of CSULB’s Institutional Review Board, which examines the ethics of human subject research. “If a researcher proposed to examine social media sites and diagnose the posters, we would have some ethical concerns,” Amirkhan said. “First of all, we would insist that the possible subjects of this project have provided informed consent — that is, we would insist that the students know that their postings are being scrutinized for the purpose of identifying possible substance abuse problems.” Are colleges prepared (and willing) to do this if they decide to scan the social media sites of their students?
Another issue is how the posts would be accessed and whether this access would constitute an invasion of privacy. “If the privacy setting was the general public setting, this would be less of a concern than if it were set to a group of friends — since these searches would represent, in essence, an invasion of privacy,” Amirkhan said. And, he questions how the information would be used. “If the intent was to send a list of alcohol treatment resources to the poster alone, this would be less of a concern than providing the information to the university administration or even the university counseling center without the poster’s consent.” Additionally, he believes a risks/rewards assessment must be conducted. “You would need to weigh the likelihood that a poster is actually in trouble and would benefit from an intervention versus the possible damage (to reputation or to one’s feelings and self-esteem) that this might incur.”
Schooling agrees that privacy concerns could pose another level of problems. “The viewing of ‘public’ content on channels such as Instagram or Twitter is one thing; scanning and requesting access to text messages, which are intended as private messages, takes us down a slippery slope towards Big Brother monitoring what private citizens do in their own time.”
Also, who is assuring the privacy of this information? Just last year, a university in the UK was investigated for leaking personal student information, including mental health and medical details. The information was erroneously uploaded to the school’s website, and discovered by a student conducting a Google search. Several years ago, Stanford Hospital accidentally uploaded the medical records of 20,000 patients to Student of Fortune, a website that provides paid homework tutorials to college students. In addition to these errors, there’s the possibility that students (and possibly, staff) charged with “searching” for individuals with alcohol problems might willfully share this information with others.
The Bottom Line
I don’t drink, but I have many friends who do, and some of them may post a photo of their alcoholic beverage or make a comment about it. To my knowledge — although I could be wrong — none of these particular individuals have problems with alcohol.
It should also be noted that my alcoholic-drink-posting friends tend to post photos of their meals as well, whether they ate at a restaurant or grilled out in the backyard. But does this mean they have some sort of eating disorder, since they frequently post about their food?
These same individuals also post about how many miles they ran or how many steps they took on a particular day, and some even post the information from their sleep trackers to inform everyone that they went to sleep or woke up on time. So, what does this reveal about these people? Do they have exercise or sleep addictions?
They might. Perhaps these are all warning signs of existing or potential problems. But it’s possible that they might just have a social media addiction. However, I couldn’t say conclusively, because I’m wary of diagnoses based on social media posts.
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
Will big business compromise the ethics of artificial intelligence?
We are entering a new era of technology adoption. We have passed the point of using new tools to perform old tasks, and our behavior is changing. I grew up with a 1970s television that had four push-buttons on it; these were laboriously and manually tuned to U.K. channels BBC1, BBC2, and ITV. (The mysterious fourth button was 1970s redundancy-in-design at its finest, and came into use with the launch of Channel 4.) Two decades later, I had a remote control in my hand, enjoying the novelty of flicking through an ever-growing list of cable channels each evening. In 2017, the dizzying array of shows and multiple streaming services has changed the game. You can’t simply flick through the evening’s listings. When does that episode stream? Is that U.K. or U.S. release time? Which provider is the new show on? These days, I rely on voice control, asking Alexa to find a title, an episode, or a genre. Artificial intelligence (AI) has come to the rescue: it is a tool for its time, and a behavioral shift in my home.
There is no obvious ethical problem around a company producing a clever TV remote. This, however, is the tip of the artificial intelligence iceberg, and it’s not at all clear if ethics is a priority for those developing AI tools.
Do you always know if you’re talking to a machine?
AI comes in many forms, from game-winning, game-changing AI Go player AlphaGo, to machine learning in the background of software, to chatbots at the forefront of interaction. In the last three months, I have watched robots discussing sushi and sashimi with startled humans around a table (in Austin, at South by Southwest’s Japan Factory). I’ve talked to a chatbot about accounts payable on a financial system (in London, at Sage Summit). I’ve engaged with several customer service chat representatives that clearly identify themselves as chatbots. It’s also very likely that I’ve commented, liked, or shared a post by a chatbot on a social media channel, without being aware of its source. Whether you know it or not, you probably have too.
Most interactions are via a keyboard, which instantly removes the voice and visual cues that identify humans. We judge intelligence by verbal reasoning: If machines were to answer questions in a manner indistinguishable from humans, they might be considered intelligent. This is the fundamental test of ‘judging intelligence’ devised by Alan Turing in 1950, and it has been passed. We can no longer be sure whether we are dealing with people or machines in our interactions across the World Wide Web.
Research published in March 2017 about online human-bot interactions suggests that as many as 15 percent of Twitter accounts are not, in fact, run by humans. The MIT Technology Review suggested in November 2016 that around 20 percent of all election-related tweets on the day before the U.S. presidential election were made by an army of influential chatbots. The sheer volume can distort the online debate and may have influenced the outcome of the election. Similar concerns have been raised about the U.K.’s vote on Brexit earlier in the year.
How can we trust the entities with whom we interact to uphold human values and exercise ethical judgement? With whom does the responsibility lie? Will ethics be their priority in development?
Teaching Artificial Intelligence to think
AI started with a very narrow focus: Think of IBM’s Deep Blue beating Kasparov at chess. The goal was to select the right chess move from hundreds of thousands of possibilities with strategic implications. Experts agree that, due to this niche programming, the same machine would struggle to win a round of tic-tac-toe with a five-year-old. Artificial intelligence has evolved to cover a broader spectrum. The continued success of AlphaGo, consistently winning against champions in a more complex gaming environment, is a product of machine learning, the development of artificial intuition, and a process of mimicking human practice and study. The responsibility of programmers is to kick-start the machine learning and give the new mind proper direction. This is standard practice, and much faster than learning from scratch. It is worth noting, however, that the co-creator of AlphaGo, artificial intelligence researcher, neuroscientist, and co-founder of DeepMind, Dr. Demis Hassabis, believes that AI could learn from a zero state rather than being “supervised” to start its learning in a particular direction.
Guiding the learning of a new intelligence is an onerous responsibility. Dr. Hassabis recently spoke about the challenges facing the builders of AI. The majority of AI is built with benevolent intent, he said, adding that since there are so few people able to do this, the current risk of overtly negative programming is small. I am still uneasy, having grown up with the fiction of ‘evil scientist in hidden volcano lair’ from “Thunderbirds” to James Bond films. The growing body of evidence that social media is rife with chatbots, indistinguishable from human accounts and influencing popular opinion, suggests that ethical behavior is not always a priority.
Corporations are leading the way in bringing artificial intelligence into the public domain, using chatbots to enhance interaction with customers. In a social-media-dominated world, we have an expectation of responsiveness that a human workforce cannot meet: Chatbots allow an instant rapport to develop, keeping customers on your site, building loyalty, and improving satisfaction. There is significant effort on the part of large developers to build ethics in from the start. IBM Watson’s CTO Rob High recently outlined several key areas that ethical developers must consider, including basic trust, openly identifying as a chatbot, and managing data that is shared in a human-bot interaction. It’s a legal minefield. A simple parallel can be found in flawed goal-setting: human behavior changes according to the goals that are set, often in ways management never expected. A goal to hit a raw sales target can lead unscrupulous teams to discount for volume, losing margin even as they reach their target. We recognize this as unethical because we are human, and ingrained ethics ensure that such behavior is short-lived, whether through the actions of management or as a result of peer pressure. A chatbot needs to have that ethical ‘gut feeling’ programmed in, and that takes time, effort, and money.
Investment in ethics requires diverse and ethical investors
Much of the innovation in emerging technology is coming from the exciting tech startup sector. Ideas are flying around from talented millennials, grabbing the imagination and hitting the tech and investment headlines. The skills of these young founders are not in question, but business models historically leave much to be desired. We are currently watching the struggles of one of the early tech successes, Uber, as its ethical stance comes into question from all sides: The CEO, Travis Kalanick, has recently stepped aside in the face of growing criticism of the business culture and values. The Financial Times cites “reckless pursuit of increased shareholder value” as a dangerous habit. Unfortunately, the rapid success of Silicon Valley businesses over the past 15 years has led to aggressive venture capital investment based on growth and users, rather than reliability and revenue, and is weighted heavily towards white male founders. According to AOL founder Steve Case, speaking at South by Southwest in Austin this year, only 10 percent of U.S. tech investments went to women, and 1 percent to African-Americans. This ‘bro-culture’ is unhealthy — Dan Lyons’ book “Disrupted: My Misadventure in the Startup Bubble” describes the “grow fast, lose money, go public, cash out” process and the male-and-pale majority in the industry. At what point in this gold rush do founders take a sober look at ethical business and diverse, ethical technology?
There is a joint responsibility for developers and for those non-technical parties to ensure that artificial intelligence retains its “benevolent intent” and reflects the best of our diverse human society. Ethical AI can only be guaranteed by ethical business practices. We hope that these can evolve fast enough to keep up with the rapid advances in technology.
Kate Baucherel is a published author, speaker, trainer and coach, and co-founded community software company Ambix. She has two young children, and lives in the north of England. Find out more at www.katebaucherel.com, or follow @katebaucherel on Twitter.
As volatile and unpredictable as President Trump’s first months in office have been, he has been consistent in his derision of leaks and in calling for the prosecution of those responsible for them. (As a candidate, he conveniently held a different position.) With the first Trump-era leak prosecution now underway, it seems as if the Department of Justice has taken the president’s marching orders to heart. This is unlikely to be the only leak prosecution we will witness under this administration and Attorney General, bringing to the forefront the question of journalists’ responsibility towards leakers in a digital age.
The Winner Leak Investigation
The recent prosecution has been linked to a June 5 story on The Intercept that suggested Russian interference with the elections might have been more profound than was previously known. The news outlet published a redacted version of the top-secret NSA document upon which the story was based, which it had received through an unknown leaker. Two days before the story went online, 25-year-old government contractor Reality Leigh Winner was arrested and charged with violating 18 U.S.C. Section 793(e) for removing classified material from a government facility and mailing it to a news outlet.
Unsealed court records reveal that the arrest came after a reporter for The Intercept had contacted another NSA government contractor and officials at the NSA in an attempt to verify the document’s authenticity. (The identities of the news outlet and government agency have not been officially released, but there is little doubt they are The Intercept and the NSA.) The reporter also mentioned in at least one of those exchanges that he had received the documents through the mail and that they had been postmarked in Augusta, Georgia, which happens to be where Winner lives. He also shared photographs and copies of the document with them. This information prompted an investigation that quickly pointed to Winner as the potential leaker. She confessed shortly afterwards, without a lawyer present, when FBI agents showed up at her door with a search warrant.
Assessing The Intercept’s Actions
The Washington Post’s media blogger Erik Wemple analyzed The Intercept’s actions and argued, as many others have since, that the news outlet’s effort to reach out to government officials in order to assess the authenticity of the document provided authorities with an important lead in their investigation. The reporter revealed where the documents were sent from, and the documents he shared contained important clues as to the leaker’s identity, as can be gleaned from the affidavit: “The U.S. Government Agency examined the document shared by the News Outlet and determined the pages of the intelligence reporting appeared to be folded and/or creased, suggesting they had been printed and hand-carried out of a secured space.” Authorities learned that only six workers had printed the report, including Winner. Winner was the only one of the six whose computer showed email exchanges with The Intercept. On June 6, The Intercept posted a statement claiming that the information in the government’s affidavit and search warrant contained unproven allegations about Winner and about how the FBI had come to arrest her.
In hindsight, the actions of The Intercept seem troubling. But Wemple mitigates his criticism by stating that the actions stemmed from the legitimate need to verify the documents and by arguing that Winner would have been found out anyway: “Yet the mistakes of the leaker before the Intercept even received the document would likely have sealed her fate, regardless of any clumsiness by the reporter in verifying the scoop.” I am not convinced, however, that the information contained in the affidavit warrants this conclusion. The email exchange she had with The Intercept from her work computer dated from March and contained a request for the transcript of a podcast episode, hardly a smoking gun.
None of the stories I have seen so far emphasizing the ease with which investigators caught Winner have made a convincing argument that the same would have been true had the document not been made available to them by The Intercept. They often seem to assume that ultimately the document would be published, dooming Winner’s chances of remaining anonymous. But news outlets routinely report on classified information without sharing the actual documents with their readers or authorities. Why did this happen here? Making these documents available in their original form amounted to providing the FBI with a roadmap to its target.
As one security blogger points out, the document posted on The Intercept contained enough metadata to determine the serial number of the printer used and the exact time when the document was printed. Like most modern printers, Winner’s printer leaves nearly invisible yellow dots on a document that encode this information: “The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20.” Simply scanning the document in black and white before posting it would have eliminated this problem: “To fix this yellow-dot problem, use a black-and-white printer, black-and-white scanner, or convert to black-and-white with an image editor.”
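The logic behind that last recommendation can be sketched in a few lines of code. The following is a minimal, dependency-free illustration (not The Intercept’s actual workflow, and the pixel values are hypothetical): because the tracking dots are printed in a very light yellow, thresholding each pixel on its luminance pushes them to pure white along with the background, while dark text survives as black.

```python
def to_black_and_white(pixels, threshold=128):
    """Map a list of RGB pixels to pure black (0) or white (255).

    Faint yellow tracking dots (e.g. around (250, 245, 190)) have high
    luminance, so they are pushed to white; dark text is pushed to black.
    """
    bw = []
    for r, g, b in pixels:
        # Standard luminance approximation (ITU-R BT.601 weights).
        luminance = 0.299 * r + 0.587 * g + 0.114 * b
        bw.append(0 if luminance < threshold else 255)
    return bw

# Illustrative pixels: dark text, white background, faint yellow dot.
page = [(20, 20, 20), (255, 255, 255), (250, 245, 190)]
print(to_black_and_white(page))  # the yellow dot becomes plain white
```

In practice an image editor or scanner’s black-and-white mode applies the same kind of thresholding across the whole page, which is why the quoted advice works: once every pixel is forced to black or white, the yellow dot pattern, and the serial number it encodes, is gone.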
It would be false and unfair to state that The Intercept does not care about the fate of people leaking to them. On its site, it offers potential leakers advice on how to become a whistleblower without being detected. However, as one security expert noted, the guidelines focus on sending information without being caught, but say nothing about covering your tracks in obtaining the information.
It also makes the following commitment: “At The Intercept, our editors and reporters are committed to high-impact reporting based on newsworthy material. If we decide to go forward with a story, we will have a discussion with you about what risks of retaliation you might face and whether you want to remain anonymous. We will be explicit with you about the parameters of our agreement to protect your anonymity, and we will honor our commitments.”
However, in this case, the news outlet did not know the identity of the leaker and therefore could not engage in this back-and-forth. But of course, this does not absolve reporters and editors from their obligation to do everything in their power to prevent others from identifying the source. Reporting on the document without sharing it in its original form with authorities and readers would have been a more prudent course of action. Perhaps investigators still would have been able to identify Winner as the source, but this speculative assessment has little bearing on the ethical analysis. The SPJ Code of Ethics requires journalists to minimize harm, and The Intercept failed to meet this requirement.
The Positive Duty to Protect Leakers
The leaker-reporter relationship is fraught with inequality. Leakers put everything on the line when leaking classified information: they risk losing not only their jobs when found out, but also their freedom. When they leak classified national security information, as Winner did, they are typically prosecuted under the 1917 Espionage Act and face experienced federal criminal prosecutors, long prison sentences and astronomical legal bills.
Take the case of Stephen Jin-Woo Kim, a State Department contractor who in 2009 leaked highly classified information about North Korea to Fox News reporter James Rosen. The journalistic community was up in arms when Rosen was named in an affidavit as a co-conspirator so that a judge would approve a warrant to search his emails (Rosen was never indicted, nor was he ever going to be), but did not bat an eye when Kim pleaded guilty and received a 13-month prison sentence. (Ironically, The Intercept was one of the few media outlets that reported on the devastating effect of the episode on Kim.)
Reporters, on the other hand, have relatively little skin in the game. As a matter of prosecutorial tradition, journalists are not prosecuted for printing classified information. No administration (until now) has wanted to wage a war against the press by putting reporters in jail. Whether the First Amendment precludes them from doing so is not clearly established, and I would not be surprised to see developing case law on this issue under the current administration. However, by and large, reporters do not face the same risks leakers do.
Given this unequal distribution of risk, media organizations should be aware of all the digital trails that leakers can leave behind, and help them erase those trails, instead of adopting a leaker-beware approach. As is often the case for leakers facing prosecution, Winner was not the most sophisticated leaker. She was not as well-versed in the game of leaks as many D.C. insiders are. But this made her more deserving of moral consideration, not less. And while The Intercept was unaware of who she was, this ignorance should have led them to assume she was a source vulnerable to detection. Instead, its actions compounded the mistakes Winner allegedly made.
Digital technologies have made it easier for media outlets to obtain troves of information through leaks and data dumps, but they have also made it easier for the government to follow the digital trail back to the leaker. Traditionally, the source-reporter relationship has mainly required that reporters keep their promises of confidentiality. Nowadays, a digital trail is more likely to reveal an anonymous source than a reporter’s loose lips, and journalists need to do more than just keep their mouths shut in order to protect the identity of their informants.
Whereas leak investigations were a rarity before this century, the Bush and -especially- the Obama administrations have been much more aggressive in going after leakers, prosecuting more of them than all the previous administrations combined. This changed landscape leaves leakers in a more vulnerable position than ever before, making it not only the media’s positive duty to protect their identity at all cost, but also to advocate for, rather than blame, leakers once they have been caught. As the story unfolds and more facts become known, our assessment of what happened might alter, but this lesson won’t.