The widespread use of social media and live-streaming apps like Periscope and Meerkat (which evolved from earlier video-sharing technologies such as Vine and, of course, YouTube) has turned ordinary citizens into often unwitting journalists. Long before news crews can reach the scene of a crime, traffic accident or hostage situation, anyone with a smartphone can capture graphic images of a potentially violent or personal situation and broadcast video live to thousands, even millions. For both journalistic and legal purposes, the ethical ramifications of using eyewitness footage are complicated, touching on the responsibilities and rights of both the filmmaker and the subjects of the film.
Madeleine Bair is the program manager at WITNESS, an international organization that provides training and support to people using video in human rights advocacy. She told the International Journalists Network: “The emergence of eyewitness footage in reporting has happened largely without specialized training or best practices for the reporters and news outlets who find themselves using citizen footage.”
One difficulty in determining the ethics of using eyewitness footage is a lack of clear guidance about what can be used. In addition, the nature of social media encourages sharing without regard for the implications for privacy, or in some cases, safety and human rights.
Constitutional attorney Dan Barr told KPHO that in most cases involving public places, privacy laws don’t protect individuals, which means if someone captures images of a person at the beach, or a concert, privacy law is not applicable. However, sharing images of a person or group of people who are situated at a private business, home or doctor’s office could be problematic.
“If you take a photo in a private area where people have a reasonable expectation of privacy, you’re going to run afoul of the privacy laws,” Barr said in the report.
CNN reported that journalist Stephanie Wei had her PGA Tour credentials revoked after live-streaming her account of professional golfers practicing shots from a tour event on Periscope. “Everything just felt so natural, almost as if not live-streaming it would be missing an opportunity to do my job in a more informative way,” she wrote in a blog post. “The response to the streams (from fans) was tremendous and overwhelmingly positive. I thought about the possibilities and how Periscope could be a major game-changer in enhancing media coverage of practice rounds leading up to the tournament days.”
Wei acknowledged that the Tour had a responsibility to its financial partners, who pay for television rights; however, she felt it was time to “adjust to the current, ever-changing media landscape and how people—its customers—consume golf content.”
Wei added that having been thrown off the PGA Tour and stripped of her professional credentials, she would have to “live with the consequences,” which would affect her “professionally and financially.”
While money and sports broadcasting are one element in the larger ethical landscape of live footage, other factors affect the issue as well. Even geography can play a part: America has a different set of standards and expectations for what is appropriate to broadcast online than many other countries. So when an American travels, footage shot and broadcast abroad might have human rights implications unknown to the person filming it.
For journalists, documentarians and investigators alike, videos posted or linked online raise questions about how to apply ethical and safe human rights practices. Minimizing the potential for harm or violent repercussions to any community when filming, editing and posting footage should be a paramount consideration. At a minimum, journalists have a responsibility to protect the identity of a crime victim so that the person’s privacy isn’t violated on social media. Video subjects, who often don’t even know they’re being filmed, shouldn’t be forced to explain events they otherwise would not have had to, potentially undergoing additional trauma at the cost of a sensational news story.
While technology makes it easy to link to a YouTube video in an online report, article, or documentary film, filmmakers have a responsibility to consider the potential implications of doing so for the individuals and social or community groups being filmed.
Eyewitness footage shouldn’t be used as the sole source of a media report. Citing a Tow Center report on the legal and ethical issues of eyewitness footage, Eyewitness Media Hub co-founder and lead researcher Pete Brown told MediaShift: “I don’t think news organizations should be reaching the conclusion that eyewitness media can somehow replace professional journalism. It’s an amazing and invaluable addition to the newsgathering process, but audiences still need journalists to unpack and make sense of eyewitness content and to provide vital context about the story.”
In order to help citizens and activists determine when and how to use eyewitness footage, the international human rights advocacy organization WITNESS released ethical guidelines on the issue.
Responsibility to Individuals Filmed
One overriding concept within the ethics of eyewitness footage is simply minimizing harm to the subject of the documentation. Gaining consent from the person or persons being filmed is the easiest way to avoid future ethical problems. If the subjects are aware they are being documented, and for what purpose and audience, the creator of the film has met a certain level of ethical responsibility. Once again, the setting of the filming becomes important: If the setting is a public place rather than a private home or office, privacy problems are far less likely to arise later on.
Those shooting footage also need to consider cultural differences. For example, when filming a protest or rally in the U.S., the identities of the protestors are not usually a secret, so footage appearing on social media carries little risk of additional retribution; in other countries, protestors could be punished for their activism.
WITNESS guidelines recommend that filmmakers should consult with someone inside the relevant community being filmed in order to determine whether sharing the footage could potentially create any harm.
A film’s creator often weighs the benefits of documenting a subject in terms of a perceived advancement of the greater social good. However, the risk that sharing the eyewitness video could harm its participants is a far greater consideration. Facial- and voice-blur tools are available on YouTube to hide the identities of individuals in a video if privacy or lack of consent is a potential issue. It is also the filmmaker’s responsibility to ensure that identifying information (nametags, license plates, addresses) is not visible in the video.
Responsibility to the Filmer
One ethical issue of eyewitness video is protecting the identity of a filmmaker if that individual has chosen to remain anonymous. Sometimes this choice is made for safety reasons, so the footage is posted under another account or an anonymous account. Anyone using this footage then has a responsibility to respect this desire for anonymity.
Another important issue a filmer faces is the provenance of the footage. News organizations have stumbled before by reporting on footage that turned out to be counterfeit. Reporters should always embed or link to footage from its original source, stating the name or organization of the filmer. They should also describe as much information as possible about the circumstances under which the video was obtained and why its legitimacy is assumed. Copyright issues may come into play and are a separate legal determination.
Responsibility to the Audience
When presenting eyewitness videos to an audience, the content curator has a responsibility to provide context for the footage, particularly if it is controversial or potentially offensive. Simply dropping a video, or a link to one, into the middle of a report without explaining why it is there makes no sense; background information on the footage and its place within the report or social media channel is important. If the footage is graphic or violent, for example, a clear warning should precede it.
Unfortunately, eyewitness videos can sometimes be used to further agendas of hate, fear, rumor and stereotype. In its ethics guide, WITNESS advises taking steps to ensure that a video does not provide a platform for the advancement of hateful beliefs or false rumors.
Everyone has a smartphone and a social media account, so it’s easy to shoot a video and share it. If suddenly the world is going to be full of “citizen journalists” though, we’d all benefit from learning some of the basic tenets of responsible journalism and work to respect the rights and privacy of those being filmed.
Mary McCarthy is Senior Editor of SpliceToday.com and a bestselling author. Find her at marytmccarthy.com.
In March 2016, jurors awarded ex-wrestler Hulk Hogan $140 million in his case against Gawker for posting a video of him having sex with the wife of his then-best-friend Todd Clem. In a smackdown between privacy and free speech, the former seems to have won. As one of the jurors told ABC News, “[Hogan is] still a human being just like everyone else, no matter how many people know his name and his face.”
The same month saw two other celebrity privacy verdicts with a similar sentiment. Fox Sports reporter Erin Andrews won $55 million in a lawsuit against a stalker who filmed a nude video of her in a hotel room without her knowledge. And Ryan Collins, who hacked into 50 or more celebrities’ cloud services and obtained nude photos of them that were later leaked, faces up to five years in prison.
What do these cases tell us about the state of internet privacy today? Have these scandals taught us something? Has anything changed?
Anything can go public. So what?
Today, celebrities know that if they’re doing something unethical, chances are the public will find out. The internet’s existence alone means that information can travel around the world faster than gossip in a school cafeteria.
But it’s not just the internet; it’s technology as a whole. Social media accounts (with their vulnerabilities to hacking), smartphones with cameras, and streaming video all make for instant, personal access to people who were formerly protected by managers and the limitations of physical film. The barrier to access is much lower now—so low, it’s easy to demolish said barrier using only an iPhone and excessive booze. Just ask John Galliano, the fashion designer whose career at Dior tanked after a video surfaced of him drunkenly making anti-Semitic remarks.
Public figures know a single remark or faux pas can and will be front-page news, and yet that doesn’t stop them. “We need to resign from this company immediately…At any moment, the police arrive, and we end up in the newspapers,” wrote Jurgen Mossack, founder of the law firm at the heart of the Panama Papers info leak, in a cautionary email to other top staff. The clincher? Mossack sent that email in 2014. Despite freaking out about it two years ahead of time, he failed to prevent this fear from coming true.
The lesson here: We’ve learned everything and nothing. Knowing that something shameful could become headline news hasn’t stopped high-profile people from doing those things. Whether or not Hulk Hogan knew he was being taped having sex with his friend’s wife, he still slept with her.
With entitled or deluded impunity, society’s elite think they’re immune from the consequences of their actions—and up to a point, they’re right. A quick swing by rehab or half-hearted apology scripted by a publicist—like Johnny Depp and Amber Heard’s recent bizarre video after smuggling their dogs into Australia—is often all the public needs in order to move on. Terry Richardson, Woody Allen, Chris Brown, and R. Kelly, each with well-documented instances of sexual harassment, molestation, abuse, and/or rape, continue to enjoy professional careers, fame and fortune. Without any accountability, why should celebrities care that their private misdeeds could become public online?
The privacy scandal as PR tool
One thing has changed: Celebrities have quickly learned to use privacy scandals to their advantage. What should perhaps serve as a cautionary tale, an incentive to clean up one’s act, has instead become a PR tool for the famous to shrewdly wield.
Exhibit A: Kim Kardashian’s “accidentally” leaked sex tape that propelled her to fame. The highly effective “misstep” that got her national recognition and launched her entire career has been copied over and over by other celebs seeking to raise their profiles or shed a squeaky-clean Disney image. Obviously, sometimes celebrity photo or video leaks are crimes; hackers shouldn’t get a free pass. But when the leak happens to emphasize how sexy or well endowed someone is (ahem, Justin Bieber) rather than them committing a crime or other socially unacceptable behavior, one wonders if the “leaked on purpose” rumors are true.
Beyond just boosting their sex appeal, celebs are using internet privacy scandals to incite the public on other villains: paparazzi and the media. After actress Kristen Bell had her first child, she and her husband started a campaign to ban websites and magazines from publishing photos of celebrities’ children, resulting in Entertainment Tonight, People and Just Jared all agreeing not to. But at the same time, other celebs court the paparazzi, even alerting the photographers to their whereabouts in order to get some tabloid coverage.
In the recent Hulk Hogan privacy scandal, Gawker Media—already a love-to-hate site—became a scapegoat and public example. As juror Shane O’Neil told ABC News, “Gawker made it clear to everyone…that they were all about crossing the line.” ABC added that the jurors were hoping “to send a message” with their verdict. O’Neil continued: “It just wasn’t about punishment of these individuals and Gawker. You had to do it enough where it makes an example in society and other media organizations.” Suddenly, the site that millions eagerly read every day for celeb gossip has gone too far, free speech be damned. It’s hackers, photographers and smut-peddlers who have gone too far, not us!
We’re conflicted about privacy and fame
I daresay this reflects America’s past Puritan ideals and current conflicted relationship with privacy, sex and fame. We want to take nude selfies and label certain people “sluts.” We want to see Jennifer Lawrence naked while blaming hackers and photographers for it, not our own curiosity. We shun celebs who are too perfect (like Anne Hathaway), but when someone actually breaks the law (say, Vanessa Hudgens carving her name into a rock in Sedona), suddenly it’s OK to call her a “stupid a$$ bitch” and a moron. We’re fallible, but celebrities should be perfect: sexy and accessible and flawed—but only a little bit, and only in ways we relate to.
Ultimately, we see celebrities as our more-successful stand-ins, so we accept a certain amount of imperfection. If someone famous like Hulk Hogan can’t have the freedom to have an affair and tape it, then that means we can’t cheat on our spouse, and what is the world coming to? We draw the line at pedophilia (Subway Jared), double-digit rapes (Bill Cosby), and failing to be sufficiently patriotic (Ariana Grande and Donutgate)…but not much else. Because celebs, and by extension the American public, should be able to have our Snapchat cake and eat it too.
We give away more and more of our privacy in the name of convenience and the latest technology, yet we’re surprised when someone steals our banking information. We want ever more intimate access to celebrities, but shame on the media for giving it to us. We want to support the whistleblowers and Edward Snowdens of the world…as long as they expose pre-established villains, but not anyone we relate to.
By all means, hold hackers, intrusive paparazzi, and tabloids accountable for invading celebrities’ privacy. But famous or not, we all have responsibilities too: Start being more realistic about the illusion of digital privacy or stop being so ashamed of our sexual expression and offshore bank accounts.
Holly Richmond is a Portland writer who follows celeb gossip WAY too closely. Learn more at hollyrichmond.com.
The San Bernardino attack that resulted in the deaths of 14 people last December continues to evolve into the polarizing yet familiar battle over the balance between privacy and national security. For those who have lost track of how it all started, the story began when the FBI was unable to unlock an iPhone belonging to one of the attackers, Syed Rizwan Farook, and approached Apple for assistance. Drama ensued as Apple refused to help the FBI break into the phone, arguing that the methodology it was asked to use was unwarranted and a threat to public security. In what many have argued is an unethical, unprecedented request, the FBI ordered Apple to create software that would disable privacy settings used in select iPhone models. In addition to existing disputes over the acceptable extent of access to private information, the order gave rise to a new question: Does the FBI have the right to demand security backdoors that could compromise the safety of uninvolved civilians?
The trouble began soon after the FBI found that it could not unlock Farook’s phone, which was secured with a four-digit code set to erase the phone’s contents after ten incorrect password attempts. The task was further complicated by a setting that increases the time between failed password attempts, a particularly frustrating obstacle in investigations where time is of the essence. In fact, Apple’s iPhone encryption was so advanced that the company itself claimed it did not possess the technology needed to unlock the phone. Frustrated with Apple’s refusal to comply with its requests, the FBI asked Magistrate Sheri Pym to issue a court order demanding that Apple create a new operating system that would allow the bureau to bypass the security measures.
The order was unique both because it asked for nonexistent software and because it requested a security ‘backdoor’ that could be used to unlock myriad devices. So was it ethical, not to mention legal, for the FBI to ask for software that had the potential to override broadly applicable security measures? According to Apple, the answer is a big, fat, theatrical no. Apple not only refused to comply but also published an open letter to the public, advising people of the ‘chilling’ implications of a security backdoor, writing that “this demand would undermine the very freedoms and liberty our government is meant to protect.” Apple warned that the technology could be detrimental if misused, stating: “In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession … while the government may argue that its use would be limited to this case, there is no way to guarantee such control.” The letter went on to outline several alarming scenarios that could result from giving the government access to this technology, among them the right to ask for software that intercepts texts or photos, health records, financial data and locations.
Though the letter was a bit artful, it raised important questions that deserve careful consideration. For one, the request for nonexistent software could set a legal precedent for permitting additional nonstandard, privacy-compromising demands. Apple’s fear stemmed in part from the approach the FBI took to seeking out the iPhone’s contents. Rather than issuing a standard subpoena for information found on one device, the government requested a court order under the All Writs Act, which allows federal courts to issue all “necessary or appropriate” legal writs (i.e., court orders) compelling citizens to undertake certain actions. The Act is a component of the Judiciary Act of 1789, and its creators could not have possibly predicted cell phones, let alone the links between individual phone software and the security of technologies belonging to the greater public. Because the Act is so broad, it could, in theory, be applied to more extensive requests for technology that would jeopardize our privacy.
Whether major fears about abuses of power are symptomatic of public paranoia or a forward-thinking dedication to protecting the public is debatable. The government’s stance on the issue is not. Soon after the open letter was published, the FBI filed a motion to compel Apple to comply with the court order and accused the company of misrepresenting facts for marketing purposes. Government prosecutors wrote: “Rather than assist the effort to fully investigate a deadly terrorist attack by obeying this Court’s Order of February 16, 2016, Apple has responded by publicly repudiating that order … The Order does not, as Apple’s public statement alleges, require Apple to create or provide a ‘back door’ to every iPhone; it does not provide ‘hackers and criminals’ access to iPhones … It does not give the government ‘the power to reach into anyone’s device without a warrant or court authorization …” The motion also went on to imply that Apple misled the public about the dangers of the All Writs Act, claiming that Apple had previously complied with the Act and that using the law for such purposes was not unprecedented.
While Apple and the FBI clearly stand on opposite sides of the argument, the public’s opinions on whether the government is dangerously overstepping boundaries are mixed. Based on a March phone poll of over 1,000 individuals, CBS revealed that 50 percent of those polled thought Apple should unlock the iPhone, while 45 percent thought it should refuse the order. Despite the split, eight in 10 respondents still believed it was at least somewhat likely that a decision to unlock the phone could set a legal precedent for mandates to unlock additional devices in the future. In other words, the belief that the government will continue to push privacy boundaries is widespread.
Luckily for the FBI, it is unlikely that the bureau will be forced to defend itself on a public stage. Nor will Apple get the chance to testify in court as a stalwart battling the government to protect collective privacy. What could have set the stage for a Hollywood movie has begun to devolve into a background narrative. After asking for a delay on its court date with Apple, the FBI fully retracted its demands. Instead of fighting the tech giant, it secured the services of professional hackers who were able to find and exploit flaws in the iPhone’s security system, allowing the government to unlock the phone without erasing its contents.
Not only has the dramatic storyline come to an abrupt halt, the ball is back in the FBI’s court. Now that it possesses information about Apple’s security flaws, it has the opportunity to minimize accusations about unethical intentions to infiltrate additional devices. If the FBI chooses to provide Apple with details about its operating system’s failings, the bureau may quell some public suspicions, but it will also risk losing valuable information that could be utilized for future searches. The path it chooses will likely be determined by the White House in the coming weeks, but uncertainties over its intentions will inevitably remain.
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.
Revenge Porn Legislation — Too Little, Too Late?
Nonconsensual pornography distribution has been around for a while. One of the first litigated instances of what we now know as “revenge porn” occurred in 1984, when Hustler magazine published a nude photograph of a woman submitted by a partner under a forged consent form. Since then, scorned lovers, ex-spouses and nefarious internet extortionists have continued to cause personal distress and wreak havoc on reputations in an ever-increasing spiral of bullying and public shaming.
Legislation has been slow to catch up with the escalation of this practice, but countries are beginning to put laws in place to protect individuals from online defamation stemming from the dissemination of intimate images. The question is: Are they doing enough?
In 2015, revenge porn was criminalized in the UK under the 2015 Criminal Justice and Courts Act. To penalize a perpetrator under this Act, however, prosecutors must find proof of intent to cause malicious harm or distress. Because the law lacks a firm definition of what constitutes such proof, it allows the justice system a broad interpretation of actions, which lets many get away with the crime. That being said, the law, introduced in April 2015, was successfully applied the following month when the first individual was charged, convicted and sentenced under the new legislation.
Canada released its version of revenge porn legislation in the same year, enacting the Protecting Canadians from Online Crime Act in April of 2015. This law aims to halt cyberbullying of all types, including nonconsensual sharing of intimate images. Canada’s law takes revenge porn legislation a step further by expanding its definition of protected images to include any image where a person or persons had a reasonable expectation of privacy such as a photo of two people kissing in a park. Despite the more encompassing nature of its law, Canada’s first conviction came only this month in a landmark case tried by the Ontario Superior Court. The victim, under the pseudonym of Jane Doe, was awarded $141,708.03 from her ex-boyfriend, who uploaded a sexually explicit video of her to Pornhub. Canadian legislators have continued to define and expand their laws, with one province, Manitoba, reaching a decision in January of 2016 to allow victims to not only sue the initial perpetrators but also anyone who further distributes this type of pornography.
In the United States, revenge porn has been a hot topic for several years, with a federal revenge porn bill under discussion throughout 2015. At this writing, 26 states have individual revenge porn laws, but they vary widely from state to state. For example, the “Revenge Porn Law” (SB 255), signed into California law in 2013, excludes intimate selfies from protection: if you pressed the “record” or “photo” button yourself, the image is automatically excluded. Intimate photos stored on your computer and distributed by a hacker are excluded as well. California’s law is also notably lenient on re-distributors, who are protected from prosecution.
Contrast this with Illinois’ law, SB 2694, which, unlike the UK law, does not require motive to be proved for conviction, and, unlike California’s law, does not protect re-distributors of the images.
The trouble with state laws is that perpetrators intent on causing harm can seek out states with lenient laws, and half of the states still have no protection whatsoever. For example, this year, when Chicago Blackhawks draftee Garret Ross was charged with disseminating sexual images in Illinois, it was determined that he resided in Michigan at the time of the crime, so the case had no basis in Illinois. While Illinois has one of the toughest revenge porn laws in the nation, Michigan has no legislation on the books aimed specifically at revenge porn, though Ross could be charged with a misdemeanor there if the case goes to court. Under Illinois law, by contrast, Ross would have faced Class 4 felony charges carrying a one- to three-year prison sentence, a fine of up to $25,000, and restitution to the victim for costs incurred. The Blackhawks suspended Ross until resolution of the charges; he was reinstated on March 29, 2016, after the Illinois charges were dropped based on his residency. The victim may still file charges in Michigan.
This kind of disparity in state laws is what prompted California Representative Jackie Speier to work on introducing federal legislation to make it illegal to distribute nonconsensual explicit images on a nationwide level. The bill used input from lawyers, technology companies, constitutional scholars and advocacy groups to more clearly define what constitutes an intimate image. In addition, the bill refines evidentiary requirements to create a law that is constitutionally sound while providing the highest level of protection. Called the Intimate Privacy Protection Act, it was scheduled to be brought before Congress in the fall of 2015, although further research failed to find mention of it after mid-2015. A call to Congresswoman Speier’s D.C. Communications Office confirmed that the bill is no longer under consideration. However, the respondent from Speier’s office was clear that she could “not speak for the Congresswoman.” No details regarding why the bill was set aside were forthcoming.
In the meantime, California’s Attorney General Kamala Harris formed a technology and leadership subcommittee that includes members of large technology companies such as Facebook, Twitter, Google, Pinterest, Microsoft, Tumblr and Yahoo. With input from these companies, the subcommittee provided a list of suggestions for publication on a cyber exploitation website run by Ms. Harris’ office. The site provides tools for law enforcement agencies, links to privacy and removal policies for major online companies, and resources for victims. It also offers a best-practices white paper that other technology firms can use to lessen the chances that revenge porn will find a home on their sites.
As long as the topic of revenge porn remains a hot-button issue that garners both media and public attention, governing bodies all over the world will continue to explore and refine their policies regarding it. Unfortunately for some, these efforts are too little or too late. In February of 2015, the Los Angeles Times reported that Kevin Christopher Bollaert, operator of the revenge porn websites UGotPosted.com and ChangeMyReputation.com, was convicted of six counts of extortion and 21 counts of identity theft. While he may serve up to 23 years in prison for his crimes, his many victims were left with long-term damage. Some became suicidal; others lost jobs, spouses, partners, and the love and respect of friends and families. In all cases, reputations were irreparably harmed.
Many women and young girls who are victims of revenge porn pay the ultimate price: their lives. Widely profiled suicides, including those of Amanda Todd, Audrie Pott, Kacie Palm, Rehtaeh Parsons and Brazilian teen Julia Rebecca, underscore the need for firm, irrefutable legislation worldwide to put a stop to this widespread harassment and invasion of privacy. Legislation should be comprehensive, with detailed definitions and penalties that keep perpetrators from hiding behind vagaries and loopholes. The wide variance in laws from state to state and nation to nation also highlights the need for an international internet governance group that could create overall standards for internet regulation and make online crimes such as revenge porn more difficult to carry out. Until then, advocacy groups such as the Cyber Civil Rights Initiative continue to work hard to bring attention and focus to the insidious and devastating crime of revenge porn.
Nikki B. Williams is a freelance writer based in Houston, TX. She has written for a variety of clients from the Huffington Post and D.C.-based political action committees to Celtic jewelry designers in Ireland. You can contact her through her website, nikkibeewilliams.com.
Stopping crime before it happens is the perfect law-enforcement dream. It can save time, resources, and even lives. But for the average citizen, the idea of preventive crime monitoring is more like a science fiction nightmare from Steven Spielberg’s 2002 tech thriller/Tom Cruise vehicle Minority Report.
The pitfalls of pre-crime monitoring are central to the film’s underlying horror: in its future, clairvoyant beings “previsualize” violent crimes before they happen. All is well until they previsualize a crime nobody expects, setting off a 145-minute chain of Academy Award-nominated events.
Minority Report’s claims about free will could keep a philosophy class going for hours, but the real relevance of the film, as with any serious science fiction, is in its prophetic power. No, we don’t have superhuman psychic mutants, but we do have big data, and as early as 2005, some U.S. police departments were using predictive tech to identify crime trends and reduce crime in cities such as Memphis and Minneapolis. But that was more than a decade ago. A lot has changed since then, and the pace of change shows no sign of slowing. We’re more connected now, and more and more of our lives are being sent to the cloud. As a result, we’ve laid strong groundwork for a total surveillance society.
Though some people are okay with the techno-Faustian bargain we’ve struck, most are still unsettled by the idea and potential of digital surveillance. Even with pre-crime tech entering its teen years, news of China’s foray into pre-crime monitoring is ruffling feathers and bringing the field to a pivotal ethical crossroads. The tech isn’t going away, and it’s only going to get better. The challenge will be: how do we use it without all ending up like Tom Cruise on the run?
China’s pre-crime monitoring program, developed by state-run defense contractor China Electronics Technology Group, reportedly captures data on “jobs, hobbies, consumption habits, and other behavior of ordinary citizens” to predict potential crimes, writes Bloomberg reporter Shai Oster. There’s nothing notable about the data capture itself (just look at the digital advertising industry), but put to crime-surveillance purposes, its intentions are far more suspect. That is especially true of China’s program, where “there are no safeguards from [Chinese] privacy protection laws and minimal pushback from civil liberty advocates and companies,” Oster adds.
Surveillance is a mechanism of power, and without legal safeguards or civil, corporate or public pushback, the technology can evolve unchecked. The U.S. has its own safeguards in place—at least on paper—which is why Apple was able to refuse the FBI access to the San Bernardino shooter’s iPhone a few weeks ago (until a hacker came by to help the agency circumvent the issue). Those safeguards exist to protect the privacy of the American public, but in the eyes of the state, they’re duct tape over its camera lens. And because privacy laws in China overwhelmingly favor the state over its public, writes Patrick Tucker at Defense One, “China is poised to emerge as a leader” in pre-crime monitoring technology.
China’s growing leadership position in pre-crime tech is founded on a military paradigm that favors domestic security over military spending. According to Tucker, China increased its domestic security spending in 2011 by 13 percent to 624 billion yuan (roughly $95 billion), surpassing military spending of 601 billion yuan (roughly $92 billion). The increase in spending allowed the Chinese government to launch a national program “requiring 650 Chinese cities to reform their public security and safety infrastructures with state-of-the-art technologies,” according to a 2013 report from Homeland Security Research. The overhaul includes tracking technologies, video surveillance, physical identity and access management, cyber security, physical security information management, and other surveillance hardware and software.
This tech ramp-up is part of a greater Chinese effort towards “social governance,” or “social management,” which—though difficult to define in English—is distinct from government oversight of economic and state governance: Instead, it speaks to how “the government manages and regulates social affairs, social organizations and social life, with the guidance of law,” according to East Asia Forum. The push comes from the changes spurred by China’s increased urbanization, where the government is increasingly expected to maintain social stability. Folding general social affairs into this state oversight effort is one piece of the larger surveillance pie, and with digital tech integral to modern social life, it makes practical sense for states to drive resources towards social surveillance.
To formalize the strength and scope of these resources, China drafted a new cybersecurity law last year authorizing “broad powers to control the flow of information,” writes Austin Ramzy at the New York Times. Wary of the democratizing ideology of an open Internet, China already has restrictive Internet laws in place, and the new draft law says that the state’s Internet information department is “responsible for comprehensively planning and coordinating network security efforts and related supervision and management efforts.” Rather than creating new cybersecurity initiatives, the draft law elevates extant practices and regulations to the state level, ensuring the centralization and efficacy of state surveillance power.
Asked whether China’s increased spending on domestic security is part of a greater global trend, Adam Segal, the Maurice R. Greenberg senior fellow for China studies and director of the Digital and Cyberspace Policy Program, said the tech is instead being driven by China’s specific “concerns about social protests and threats to domestic control”—or what it calls terrorism. In a March 4 article for Defense One, Segal argued that despite the locality of its efforts, China is looking to the global stage as a reference and defense for its anti-terrorism surveillance, stating that the provisions of the cybersecurity and data collection laws are in accordance with “international common practices.”
In his article Segal adds, “The desire for data may only intensify under Xi Jinping’s leadership; the Chinese Communist Party appears increasingly worried about domestic stability and the spread of information within the country’s borders.” It’s not something China takes lightly, either, if you recall Xinjiang, where the state cut off regional Internet access for ten straight months following deadly, racially charged riots between Muslim Uighurs and Han Chinese, ending the blackout only in 2010. China blamed overseas groups for using the Internet to incite the violence and shut down regional access to curb information sharing. The riots left 197 people dead and another 1,600 injured, and they fit China’s definition of terrorism, which its pre-crime monitoring program is now attempting to preempt.
“When the Chinese refer to cyber terrorism,” Segal added in an interview, “they are referring to the spread of extremist ideas as well as the promotion of violence—say, sharing of how to construct IEDs.” China’s pre-crime monitoring program will flag terrorist-like behavior, such as sudden influxes of cash or unusual frequencies of international calls, allowing authorities to target specific instigators, freeze their accounts, and open further information inquiries—to stop any terrorist acts before they happen. “The issue for the U.S.,” adds Segal, “is that some forms of speech the Chinese consider terrorist—‘splittism’ from Uighur or Tibetan activists—the U.S. would likely consider legitimate public discourse.”
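The flagging described above can be pictured as simple threshold rules run over behavioral data. The sketch below is purely illustrative, assuming hypothetical field names and thresholds; it is not based on any detail of China Electronics Technology Group's actual system, only on the signals the article names (sudden cash influxes, spikes in international calls).

```python
# A minimal, hypothetical sketch of rule-based behavioral flagging.
# All field names and multipliers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ActivityProfile:
    weekly_cash_inflow: float    # money deposited this week
    baseline_cash_inflow: float  # typical weekly deposits
    intl_calls_this_week: int    # international calls this week
    baseline_intl_calls: int     # typical weekly international calls

def flag_profile(p: ActivityProfile) -> list:
    """Return the list of rules this profile trips."""
    tripped = []
    # Rule 1: sudden influx of cash (here: over 5x the usual weekly amount)
    if p.weekly_cash_inflow > 5 * max(p.baseline_cash_inflow, 1.0):
        tripped.append("sudden_cash_influx")
    # Rule 2: spike in international call frequency (over 3x baseline)
    if p.intl_calls_this_week > 3 * max(p.baseline_intl_calls, 1):
        tripped.append("intl_call_spike")
    return tripped

if __name__ == "__main__":
    suspect = ActivityProfile(60_000, 2_000, 15, 2)
    ordinary = ActivityProfile(2_100, 2_000, 1, 2)
    print(flag_profile(suspect))   # both rules trip
    print(flag_profile(ordinary))  # no rules trip
```

Even this toy version makes Segal's point concrete: someone has to choose the signals and the multipliers, and those choices are where politics enters the algorithm.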
And here’s the rub: one person’s terrorism is another’s free speech. “Since all algorithms and data gathering are inherently political,” Segal says, “the system, if possible, would seem ripe for abuse.” Power is concerned with self-preservation, and pre-crime monitoring, backed by big data and analytics, is another tool in its arsenal. But the technology won’t stop, and it will only get better, especially as more of modern life moves to the cloud. For pre-crime monitoring to advance, be effective, and avoid Minority Report-scale misapplication, it will need to prioritize ethics over returns.
Benjamin van Loon is a writer and researcher from Chicago, IL. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.