Cutting-edge security technology is a spy-movie staple. In films like Charlie’s Angels, Minority Report, and Avengers, gadgets unlock with a fingerprint or retinal scan. The protagonist freezes as a red laser slowly scans her eyes, and presto, the vault opens.
Crossing the U.S.-Mexico border could soon be nearly that high-tech (albeit less dramatic).
Using someone’s unique biological traits for access control is called biometric authentication. And it’s made the leap from the silver screen to border checkpoints like Otay Mesa. The busy San Diego-area pedestrian crossing just launched a $2 million pilot program that verifies people’s identities using biometrics, specifically iris scanning and facial recognition.
The program, begun in December 2015, is being tested on non-U.S. citizens entering the U.S. But the goal isn’t to keep people out. According to the Center for Migration Studies, a bigger problem is people who enter the U.S. legally and remain after their visa expires. So starting in February 2016, the cameras will also snap facial and eye photos of people leaving the U.S. That way, immigration authorities can determine who hasn’t left the country yet—and therefore whose visa has expired. The pilot program ends in June 2016, and if it’s a success, the technology may spread to other pedestrian checkpoints.
How It Works
Facial and iris scanning isn’t as quick as spy thrillers would have you believe, but it’s convenient nonetheless. Sean Allocca of Forensic Magazine explains: “The cameras are positioned at six kiosks. The migrants simply walk up to the kiosk, scan their documents on a reader, and look into a camera. The process takes seconds, according to reports, and the migrants are then ready to be questioned by immigration officials.”
With securing the border being such a hot topic among 2016 presidential candidates–particularly conservative ones–biometric access control is a timely development, if not a new one.
Biometrics at the border actually trace back to George W. Bush and America’s surge of antiterrorism sentiment following 9/11. In 2004, Bush signed the Intelligence Reform and Terrorism Prevention Act, which mandated a “biometric entry and exit data system [to] facilitate efficient immigration.” In other words, Homeland Security should use biometric tests like iris scans, fingerprints, and facial photographs to speed up border crossings and prevent terrorism.
Today, all U.S. airports use biometric authentication on international visitors. U.S. Customs and Border Protection agents check foreigners’ fingerprints against a database of criminal and immigration records before granting them entry.
Some U.S. citizens even volunteer their biometric data. If you’ve enrolled in TSA’s PreCheck program, yours is already in use. PreCheck lets low-risk fliers bypass long security lines and pesky shoe removal in exchange for their fingerprints, a background check, and $85 a year. Jetsetters love the convenience biometric data affords.
So do some Mexican citizens crossing the U.S. border, like Cesar Quezara. Facial and iris scanning “was two or three seconds. It’s very easy,” he told San Diego public media outlet KPBS. But is it worth trading your privacy for convenience?
Those who say no have two main concerns: identity theft and civil liberties.
Facial recognition and eye-scanning technology is typically used to prevent identity theft, but ironically, it can also enable it. Researchers have found that if your face can unlock something, so can a photo or video of you used without your permission.
With the abundance of photos and videos available online, particularly on social media sites like Facebook and Instagram, someone could steal your likeness without your knowledge. According to a study by Carnegie Mellon researchers: “It is possible…to identify strangers and gain their personal information – perhaps even their Social Security numbers – by using face recognition software and social media profiles.” Security researcher Duc Nguyen found that he could crack facial recognition security on Asus, Toshiba and Lenovo laptops with a photograph of the authorized user.
As facial recognition software advances, it increasingly requires the person to blink, proving it’s not a photo. However, Popular Science writer Dan Moren was able to bypass his banking app’s security measures by using a video of himself blinking. Walk-up facial scanners at pedestrian border crossings would obviously be harder to fool, but clearly, the technology is vulnerable to exploitation–whether by someone posing as you to cross the border or by a hacker who obtains your face from a government database.
The human iris contains some 5,000 data points, making it harder to fake than a face…right? Not necessarily. In 2012, researchers in Madrid created fake irises that successfully fooled biometric scanners 80 percent of the time. Making a fake iris isn’t new, but the researchers were replicating real people’s irises for the first time—meaning identity theft is possible.
John Verdi of the Electronic Privacy Information Center is a vocal critic of biometrics due to the technology’s vulnerability to identity theft. “If your passport is stolen, the government can reissue you a new passport,” he told CNN. “If your biometrics are stolen, short of hacking off your finger, there’s no way for consumers or travellers to reacquire their identity.” An iris or face is, needless to say, similarly impossible to replace.
Civil Liberties and Racism
With news outlets from The Guardian to The New York Times lamenting “the death of privacy,” some people worry more about how government agencies and businesses will use their biometric data than about hackers and identity theft.
Almost 90 percent of people “are willing to share their biometric details when traveling across international borders,” according to a 2014 Accenture survey that polled 3,000 people worldwide. However, more than two-thirds would first need to know how their personal data would be used.
Massachusetts resident John Gass discovered firsthand that, rather than detecting criminal activity, facial-recognition technology can unintentionally make an innocent person’s life miserable. In 2011, Gass got a letter out of the blue from his state’s Registry of Motor Vehicles revoking his driving privileges. It turns out that an algorithm incorrectly flagged Gass’s photo as a “potentially criminal false identity” because he looked too much like another driver. It took Gass 10 days of fighting with the RMV to get his driver’s license reinstated. This facial-recognition technology is used in about 35 states.
Perhaps a bigger concern than bureaucratic headaches is racial profiling. That’s what American Civil Liberties Union attorney Mitra Ebadolahi worries will come from facial-recognition software being used at border checkpoints. She told KPBS: “If you pay for convenience by giving up a photo of yourself, and that photo then gets into a database–not just at the border, but potentially used elsewhere–and then is used to connect you to a crime you didn’t commit because you sort of look like a guy who the victim of that crime has identified as being the perpetrator, how would you feel about the government having that information on you indefinitely?”
Racism may in fact be subconsciously programmed into biometric technology. Author Shoshana Amielle Magnet writes in When Biometrics Fail that iris scanners don’t sufficiently read the irises of people of color. She concludes: “[Biometric] technologies work differently, and fail to function more often, on women, people of color, and people with disabilities.” (This makes sense, considering white men are overrepresented in the tech world.)
Our faces give away more than our racial and ethnic background. Researchers have discovered a significant link between personality traits and iris appearance, as Andrew Patrick, an IT research analyst for the Canada Privacy Commissioner, points out. Specifically, researchers could identify aspects like trust, warmth, and impulsiveness based on the characteristics of someone’s eye. Thanks to an iris scan at the U.S.-Mexico border, the government could learn how trusting or impulsive you are and potentially share that information with interested parties like insurance companies.
“In many ways, biometrics represent a wrong direction in solving identity theft,” Patrick writes on his site. “Instead of a universal identifier that can be used, and abused, everywhere…why not make multiple private credentials that people can use selectively…while maintaining their overall security and privacy?” Unfortunately, non-U.S. citizens seeking to enter America may not have much of a choice.
Holly Richmond is a Portland-based writer interested in pop culture and social justice.
Our current atmosphere of digital connectedness has spawned innovative new ways to help citizens feel secure in their surroundings. Unsure of the safety of an area you’re visiting? There’s an app for that. See a suspicious character lurking around your neighbor’s home? There’s an app for that. Witness a crime taking place? Never fear—a new wave of mobile apps allows you to be aware of, and alert authorities to, suspicious characters in neighborhoods, stores and other venues through real-time tracking and user-reported incidents. Their goal is to keep you safer by providing notice of unsafe areas and criminal activity, as well as alerting authorities and other community members to crime-related incidents. Upon initial examination, this seems like a useful and efficient way to deter crime and increase personal safety, but it appears these crime-fighting superhero apps have a dark side.
There’s no doubt that apps like CrimePush can help citizens get fast assistance in the midst of a crime. This particular app sends authorities the location, photo, video, audio and text description of the crime at the push of a button. Similar technology is also in use at law enforcement agencies like the Virginia State Police to encourage citizens to submit anonymous tips about suspicious behavior for follow-up. Apps like SketchFactor (which has since been removed from the market due to controversy concerning racial profiling) and GhettoTracker (also removed) mainly targeted geographical areas with unsafe reputations, although SketchFactor allowed reporting of individuals. Other mobile tools like Nextdoor and GroupMe help connect community and business members with one another and with local authorities to monitor local criminal activity and perceived threats.
Unfortunately, “perceived” is the operative word. While these apps allow members to alert one another to suspicious activity, they have also seemingly opened the door to a McCarthy-era level of racial bias. In a recent example, businesses and residents of Georgetown, an affluent neighborhood in Washington, D.C., used the GroupMe app in an attempt to curtail the area’s growing shoplifting problem. CBS News reported allegations in October of this year that the group was racially profiling African American shoppers, since over 72 percent of the “suspicious individual” GroupMe reports targeted African Americans. Joe Sternlieb, a representative of the Georgetown Business Improvement District, defended the group, noting that less than 5 percent of the African American individuals identified on GroupMe were arrested. As further evidence of his community’s neutrality, he explained that group members who post inappropriate content are either told to work within the specified rules or kicked off the app. He did not mention what “inappropriate content” was, and he did not expand on whether the more than 95 percent of African Americans tagged in reports but never arrested were approached by the police. After the controversy was reported in the media, the group discontinued use of the app.
Georgetown is one of the “whitest” neighborhoods in the D.C. area: over 85 percent of its population is reported as Caucasian and just over 3 percent as African American, compared with the District of Columbia as a whole, which is roughly 38 percent Caucasian and 50 percent African American. With so few African American residents in Georgetown, a black individual is easy to notice and might seem out of place. However, the African Americans who live in Georgetown are, like their white neighbors, affluent, well educated, and law-abiding. Leslie Hinkson, a Georgetown University associate professor of sociology, explains: “Crime does occur in Georgetown. And quite often when people describe the perpetrators of those crimes, they’re usually young men of color. But that doesn’t mean every person of color is an automatic suspect.” One February incident underscores her statement. An employee at a Georgetown retail establishment took a photo of a tall, well-dressed African American man whom he described as “…Very suspicious, looking everywhere.” Later, an employee at another store responded, “He was just in Suitsupply. Made a purchase of several suits and some gloves.”
As the prior example demonstrates, apps like these can quickly become a forum for unfairly categorizing members of another race or socioeconomic status as dangerous or sketchy. For instance, riders on San Francisco’s Bay Area Rapid Transit system (BART) can use a BART-created app for iOS and Android called BART Watch that allows them to report suspicious activity, crimes, and other unwanted behavior to authorities instantly. When a local newspaper, the East Bay Express, requested a month’s worth of these complaints, it found a disproportionate number of reports aimed at blacks. Approximately 68 percent of the complaints that included a description referenced blacks, yet only 10 percent of BART ridership is attributed to blacks, with whites and Asians making up most of the remainder. Also, many of the “offenses” included in the reports were relatively benign activities such as playing loud music, smelling bad, and taking up more than one seat. Zachary Norris, the executive director of the Ella Baker Center for Human Rights, decries the app, noting that, “By encouraging passengers to report these types of complaints, BART is furthering our punishment economy, wherein we find punitive solutions to social problems that actually require reinvestment in communities.”
While many of these apps have a polarizing effect on demographically separate groups, at least one was created in response to an already sensitive social situation. Hollaback—an app designed to reduce street harassment aimed at women, people of color, and the LGBT community—allows real-time reporting of incidents with a location map of the occurrence. The problem is that there is no strict definition of street harassment: a well-meaning compliment to one person may be a serious infraction to another. Reports may also be fabricated, exaggerated, or created in an attempt to hassle another individual and even purposefully get them in trouble with authorities. In fact, the argument that Hollaback may overlook the harassment of men, mainly white or straight men, has surfaced on forums like Reddit, demonstrating that apps that call out (or leave out) some segment of society risk fomenting social discontent. Finally, any app that relies upon communal reporting may also contribute to the proliferation of vigilante-style justice, where community members take matters into their own hands based on a mobile report.
Evidence such as the Georgetown incidents shows that in some cases these apps have a way of marginalizing certain members of society. They can also depersonalize the impact that anonymous reports, and the follow-up investigations they trigger, can have upon innocent persons. The apps amplify people’s tendency to fear what is different and allow individuals with deep-seated anger toward another ethnicity, religion, age, gender or sexual preference to harass others through erroneous reports of dangerous activity.
While they may seem like a reasonable way to keep an eye on crime, there are flaws in the design of these mobile group reporting apps, and those flaws can contribute to a more significant racial divide. In a speech to Georgetown University students earlier this year, F.B.I. Director James Comey spoke candidly about racial tension and overcoming bias. Importantly, he noted that racial bias and misunderstanding run both ways, and that to overcome them, people need to see and understand one another. “It’s hard to hate up close,” he explained in a question-and-answer session following his speech. Unfortunately, apps like these—with their snarky digital anonymity—allow prejudice and misunderstanding to snowball as the accuser, accused, and authority figures are even further disconnected from one another. Anonymity may protect the informant, but it can also enable emotional distance and contribute to incident exaggeration. An article in Forbes noted that the CrimePush application lets users report crime anonymously “so that they may continue with their busy lives knowing that with a push of a button, police will know and have everything to pursue the criminal.” This cavalier attitude toward situational reporting minimizes the impact reports like these can have on innocent individuals.
We mustn’t forget the pluses in this crime-prevention equation: sometimes a mobile reporting app saves lives and property. A neighborhood in Arizona that used the Nextdoor app to keep tabs on criminal activity saw its burglary rate plummet. A community in Indianapolis helped authorities apprehend a group of burglary suspects through the use of Nextdoor.
There is no doubt that there is a need for apps that can send help to victims in distress, allow crime reporting on the fly, keep neighbors and businesses aware of suspicious activity in their area, and let travelers know where it is safe to trek. But app developers need to be aware of the social risks and costs of this type of anonymous, instantaneous reporting. They should engage with lawmakers, citizens and law enforcement authorities to determine and build in fail-safes that reduce false reports and discourage, perhaps even penalize, biased targeting.
Nikki B. Williams is a freelance writer based in Houston, TX. She has written for a variety of clients from the Huffington Post and D.C.-based political action committees to Celtic jewelry designers in Ireland. You can contact her through her website, nikkibeewilliams.com.
It doesn’t seem controversial to suggest that the creator of an original artistic work, such as a song, film or piece of literature, deserves the right to control how that work is used. Indeed, the notion of copyright is something of a cultural and socioeconomic fixture in Western culture—though mostly, it is a point of bemused spectacle for average people.

Consider the story of Sam Smith, a soulful singer-songwriter from England. He recently came under scrutiny after some listeners noted that his song “Stay With Me” was remarkably similar to Tom Petty’s “I Won’t Back Down.” The two songs clearly share similar (yet notably simple) chord progressions and certain (again, very simple) melodic motifs, albeit with considerably disparate instrumental arrangements and stylistic treatments. Since the release of “Stay With Me,” countless individuals have created videos and audio tracks mashing the two songs together to illustrate their similarities; such is the bizarre, technology-driven world we live in today, where this kind of feat is not only possible but commonplace. Smith’s label quickly settled with Petty’s lawyers, awarding Petty royalties and a songwriting credit.

Aside from highlighting just how unoriginal and derivative most pop music really is, the situation also serves to underscore the truly absurd role of copyright law in modern times. It is most often employed by affluent people and corporate entities—e.g., Tom Petty and his label, and likewise Sam Smith and his label—parties with the financial means and legal resources to negotiate a matter such as this and potentially mount a costly and time-consuming lawsuit if it were financially beneficial to do so. This is something most Americans simply cannot do. Ironically, though, they can illegally download both Petty’s and Smith’s entire discographies with meager chance of legal repercussions. And yet, the notion of copyright is clearly still ingrained in our national consciousness today.
But with the advancement of technology, the world is rapidly evolving. Unfortunately, laws are not so quick to do the same. Does this set of rules and conventions actually serve the public good? Is our notion of copyright law worth extending to other parts of the world?
Copyright is largely governed by territory, meaning that rights and protections such as those afforded to us by U.S. law are generally limited to, well, the United States. But international agreements between countries can extend these rules to other territories. It is undeniable that, for better or worse, such agreements have shaped the world around us into what we know today. They have spurred the globalization of markets and dramatically shifted the economic paradigms of entire countries, including—in large part—the United States. The architects of such deals have naturally used these agreements, massive both in scale and scope, as the prevailing method for advancing global economic agendas in the modern age. Today, such an agreement is nearing completion. After five years of negotiations, countries involved in the Trans-Pacific Partnership (TPP) reached an agreement on October 5, 2015. The deal serves as an extension of the Trans-Pacific Strategic Economic Partnership Agreement (TPSEP or P4), signed by New Zealand, Singapore, Brunei and Chile in 2005. This time, twelve countries were involved in negotiations: the U.S., Canada, Mexico, Japan, Australia, Vietnam, Peru, Chile, Malaysia, New Zealand, Singapore and Brunei Darussalam. The partnership contains many components indicative of traditional trade agreements, such as the lowering of tariffs between countries. But the TPP has been subject to intense scrutiny and criticism. And one highly controversial part of the agreement may change the way we make and interact with entertainment, art, and indeed all creative works in the future.
The matter of intellectual property rights has been a specific point of contention among countries involved with the TPP negotiations; it’s one of the major issues that led to an approximately three-year delay of the agreement’s finalization. There have been at least 19 points of disagreement regarding intellectual property during the negotiation process, with the U.S. applying pressure on other countries to give in to its demands. On October 9, 2015, WikiLeaks released the final draft of the intellectual property chapter of the TPP. Julian Assange, the editor-in-chief of WikiLeaks, had previously criticized the intellectual property component of the TPP. In 2013, after releasing a preliminary draft of the IP chapter, Assange said: “We released today…the secret intellectual property chapter, what they call ‘intellectual property,’ but it’s actually all about how to extend monopoly rights of companies like Monsanto, which has genetic patents over wheat and corn. They are extending the ability of Disney to criminally prosecute people for downloading films, prosecuting internet service providers, and introducing something they call a ‘patent prosecution highway.’” While Assange’s comments tend to drift towards sensation and spectacle, his grim appraisal of the deal echoes that of many serious critics of the TPP. Such critics commonly argue that the deal disproportionately benefits and protects corporate entities, rather than the average people of the involved countries. The spirit of this criticism extends in no small part to the aspects of this partnership affecting copyright law.
There are numerous philosophical justifications for the enforcement of copyright. Today, this set of conventions is such a foundational component of our legal system that it seems like common sense that the creator of a work ought to have exclusive control over its use. Pragmatically, it is commonly thought that this system incentivizes the creation of new, original works, thus driving industries that depend on the creative process. Of course, over the last couple of decades, the development and spread of the internet has further complicated matters in previously unimaginable ways. Now, replicating and distributing a work is as easy as the click of a mouse, and committing copyright infringement is far easier in turn. In relation to intellectual property, the authors of Understanding the Trans-Pacific Partnership note: “Countries differ on the appropriate level of obligation in several areas, including patent rights for pharmaceuticals, copyrights, and enforcement.” Representatives of Canada, for instance, have notably objected to the U.S.’s proposals for copyright protection, a major part of which involves the increased criminalization of copyright infringement. Essentially, the U.S. is looking to broaden the definition of criminal infringement to include noncommercial violations, whereas previously only commercial infringement was considered a criminal offense. The distinction is crucial and significant to those countries that oppose the stringent criminalization of copyright infringement. Advocacy groups in Canada are asking the recently elected Prime Minister Justin Trudeau to push against such changes, which fly in the face of recent copyright reforms in their country. These reforms were aimed specifically at accounting for noncommercial use of a copyrighted work in a straightforward, common-sense manner.
For instance, Canada’s so-called mash-up exception allows for the creation of a new work from a copyrighted work, so long as the new work is not sold for commercial gain. Should the creators of such works be criminally prosecuted for merely sharing a file? When the definition of criminal infringement extends to noncommercial violations, absurd things are set in motion. What you get are obtuse manifestations of law under which it is technically illegal to share something as innocuous as an internet meme.
While it is clear that the mandate of the U.S. has been to extend the policies of strict copyright enforcement, it is interesting to note that the TPP does not explicitly extend one reasonable part of U.S. copyright conventions: fair use. In essence, copyright law grants the creator of an original work the exclusive right over said work in relation to its use, distribution and reproduction, among other rights. As such, other parties cannot legally use the work without the explicit permission of the copyright holder. But the conventions of fair use do allow for the copying of a protected work, insofar as such a use adheres to a certain reasonable set of criteria. If the copying of a work constitutes a “transformative” act, that will greatly help the case for fair use. Creating a parody of a well-known song, for instance, arguably constitutes fair use. Song parodist Weird Al Yankovic, for example, does not have to ask permission from rights holders in order to sell and perform his song parodies, although he does so as a courtesy. But fair use emerged out of our common law system, and its application by the courts is complicated and unpredictable. Perhaps that is why these same conventions cannot be extended to other countries; they aren’t even really nailed down here. And after all, why would they be? There is no real money in protecting the conventions of fair use. Extending the scope of such conventions might even affect the bottom line of major corporations. Such is the dysfunction within our legal system. But perhaps even more problematic is the issue of copyright terms, yet another troubling nuance of the TPP.
Central to the very idea of copyright is the stipulation that it is limited to a given time period, after which the work enters the public domain. This time period is conventionally referred to as a term. When a work’s copyright term has expired, the work is said to have fallen out of copyright and is thus in the public domain, meaning prior legal restrictions on its use no longer apply. But in the U.S., copyright terms are convoluted and arbitrarily long. Any work published prior to 1923 is in the public domain. In addition, any work published with a copyright notice between 1923 and 1963 whose copyright was not renewed has effectively fallen out of copyright. Any work published between 1923 and 1977 without a proper copyright notice is also in the public domain. And finally, works published between 1978 and March 1, 1989 without copyright registrations are also in the public domain.
While works published in the past are subject to an arbitrarily complicated set of year ranges, exceptions and other various nuances, works published today are plainly subject to copyright for the life of the creator plus 70 years. Let’s presume a work is published today and the original copyright holder dies 35 years from now. Such a work would not enter the public domain until the year 2120. Many consider these conventions to be excessive, and not particularly beneficial in the way of protecting artists or promoting creativity. Such laws do, however, work to protect the financial interests of corporations that obtain and hold the rights to such works. It wasn’t always that way. The Copyright Act of 1790 set the term for copyright protection at 14 years, renewable for another 14, after which the work would fall out of copyright. A maximum of 28 years is a considerably modest term compared to the unmitigated behemoth we have today. So what happened? The answer is perhaps more depressing than alarming, and points to a larger, systemic problem within the U.S. government. Businesses habitually engage in lobbying efforts in order to influence Congress and the legislative agenda. As such, Congress has continually extended copyright terms, such that works that ought to have been public domain decades ago are being held captive by wealthy corporations. These corporate entities grow wealthier through this very manipulation, without the necessary creation of new works. That last part is important. Current conditions do not promote creativity; in actuality, they promote an atmosphere of complacency and decadence. A business with a vast collection of intellectual property can simply rest on its laurels, collect royalties, and effectively do nothing of value for society.
The song “Happy Birthday to You” has recently made headlines amid litigation over the legitimacy of Warner/Chappell Music’s copyright on the song, which has now been ruled invalid. Many have interpreted the court’s judgment to mean that the ubiquitous Happy Birthday song is now in the public domain. And yet, for decades, Warner/Chappell Music has profited greatly from the licensing of this song, due in no small part to its popularity in American culture. In fact, the song has earned over $50 million in its lifetime, and has recently brought in $2 million per year for Warner/Chappell. It isn’t a stretch of the imagination to say that Warner/Chappell has been withholding this work from its rightful place in the public domain, and has indeed defrauded would-be content producers of both time and money by enforcing strict control of the work. The sad part is that this isn’t terribly unusual. What has happened with “Happy Birthday to You” over the years is a shining example of the lengths corporations will go to in order to profit from a work they arguably have little claim over, but the very same problem exists with films, literature, art and other mediums.
It is clear that certain aspects of copyright law, as it exists today, are impractical and perhaps even harmful to society as a whole. The issue is not that the entire notion of copyright is invalid; of course artists and creators are entitled to dominion over their own work. It is rather a question of the scope and term of that entitlement. Does it extend to other parties, namely corporations that claim to own a work many years after the original creator’s death, in virtual perpetuity? Do noncommercial violators of copyright deserve to be treated as criminals in a court of law? If so, what function does that convention serve? Who really benefits? Copyright law as it exists today is a sad perversion of what it was originally intended to do: protect creators and promote innovation. The conventions that in principle ought to foster artistry and innovation now serve to hinder those very things. Instead, corporations and their cohorts in government use copyright law to protect their wealth. They do so not because they are inherently evil, but because the profit motive drives their activity; that is, by definition, what corporations do. But by allowing these outdated conventions to continue, we are endangering the cultural and artistic wellbeing of future generations, thus undermining the collective creativity of humanity as a whole.
David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch. Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at firstname.lastname@example.org, and his URL is http://davidstockdale.tumblr.com/.
On September 11, 2015 the European Data Protection Supervisor (EDPS) Giovanni Buttarelli released an Opinion about digital privacy and dignity entitled Towards a new digital ethics. The document was published in an effort to encourage open discussion about privacy concerns facing the European Union and to emphasize that regulations should focus on preserving human dignity. The Opinion outlines technologies the EDPS believes to pose the greatest threat to privacy and discusses the entities responsible for preventing infringement. It also announces his plans to create an Ethics Board responsible for analyzing the ethical effects of defining and using private data.
The EDPS is an independent supervisory authority appointed by the European Parliament and Council in 2014 to advise the European institutions and bodies on privacy legislation and to cooperate with authorities to ensure that personal data is protected. Increasing concern about the proliferation of privacy-threatening technology drove the EDPS to release a statement on the relationship between ethics and digital security. This Opinion is a follow-up to the EU Data Protection Reform, and further opinions are expected as the EDPS carries out his five-year strategy to constructively improve and monitor data security.
Much of the Opinion outlines modern digital technology the EDPS believes to pose the biggest threat to preserving private information in an ethical manner. In addition to examining current innovations, Buttarelli extends his research to analyze the potential for security breaches in up-and-coming developments. Big data, the Internet of Things (IoT), cloud computing, data business models and autonomous devices were of particular interest. The EDPS expressed his concern that personal, sometimes inaccurate, information collected using these technologies is being used to establish profiles that undermine our dignity and make us vulnerable to discrimination.
Big data, the practice of collecting large volumes of data from various sources and processing it with the help of algorithms, poses a significant obstacle to preserving privacy. In addition to the best-known use of big data–personalized advertising–the process is also put to more consequential purposes, such as determining insurance rates and loans. The EDPS worries that business models that rely on summarizing a person by pulling together information, especially from sources unknown to the individual, undermine that person’s dignity. Among Buttarelli’s goals is curtailing the practice of reducing people to data, and establishing steps to regulate big data is part of that process.
Another trend that warranted attention was the Internet of Things (IoT), networks of devices that remotely collect and exchange information. Some gadgets that make up the IoT network are extremely beneficial to human wellbeing, but their services rely on private information. Wearable health monitors and accident-prevention technology that fall under the IoT umbrella can potentially save lives, but they also rely on gathering, storing and transferring personal information. Advanced collection technology, such as heat sensors and authentication applications, is often used in combination with cloud storage to carry out IoT processes. The collected data may include IP addresses, passwords, health conditions and physical locations: details that must be handled in an ethical way.
The Opinion suggests that obtaining access to such data puts users at risk of stereotyping, especially in the health and auto industries. Nevertheless, the EDPS maintains that it is possible for security checks to protect the dignity of the public without stifling remarkable innovation. He plans to bring experts together to discuss what steps can be taken to guard data collected by IoT devices without hindering their usefulness.
While he did not offer sample legislation to counter threatening trends, the EDPS did identify technologies that warrant further analysis. The Opinion served primarily as a jumping-off point for further, more specific discussion about security regulation. Equipped with information about technologies of concern, business leaders and IT technicians could work together with legislators and privacy experts to propose ethical solutions.
In addition to highlighting areas of concern, the EDPS identified parties that must be held accountable for securing privacy measures. He indicated that an ‘ecosystem’ made up of legislators, corporations, IT developers and individuals was responsible for maintaining ethical privacy standards.
Not exactly known for shouldering responsibility for company ethics, IT developers were challenged to seek out solutions to digital privacy concerns. In particular, developers were asked to implement personalization tools to safeguard private information in devices and networks. According to Buttarelli, technological design decisions should “support our values and fundamental rights.” He suggested that further research on privacy and auditing technology would play a role in achieving these goals.
Businesses that utilize private data were naturally directed to use such information for necessary functions only. This was the most challenging of the directives, as using personal content in a variety of ways is often part of a company’s business model. It will be interesting to see whether the discussions the EDPS hopes to initiate produce realistic alternative profit models and suggestions for reducing reliance on personal data.
Nevertheless, Buttarelli stressed that businesses should be using private information to meet clear objectives. He also called on them to enforce strict and clear auditing procedures that involve oversight by independent regulators. The EDPS suggested that corporations implement auditing regulations, introduce audit certifications, and set up company codes of conduct.
Without fear of repercussions, it is unlikely that companies will make privacy a top concern, especially in cases where it interferes with profits. That is why legislators were named as a part of the ecosystem responsible for ensuring information security and personal dignity. It is worth noting that EU laws already prohibit the use of information in unlimited ways, even in cases where individuals offer full consent. Therefore, EU legislators have already set in place basic data protection regulations they can build upon after more direct EDPS protocols are proposed.
Buttarelli asked that IT developers, businesses and legislators ensure privacy and offer clear guidance to those who do not understand how their data is collected and used. But he did not ignore the responsibility of individuals to monitor their own behavior and make sure their information is gathered and used properly. According to the EDPS, “individuals are not passive beings requiring absolute protection against exploitation.” He cited research suggesting that inaccurate information is not uncommon in credit reports, and directed individuals to challenge questionable results that might lead to discrimination. Consumers who are unsatisfied with corporate practices can pressure businesses to step up by shopping around and buying from more reputable companies.
The parties in Buttarelli’s ecosystem each have unique motives for managing private information, but according to the EDPS, one factor must underlie all of their goals: human dignity. It is difficult to create a one-size-fits-all plan for processing private data because future technology is not easy to police. The Opinion thoroughly analyzes trends that may evolve into privacy threats, but regulations written for the unknown cannot address every imminent problem.
What can guide leaders, developers and the public in maintaining ethical approaches to data privacy is ensuring that new technologies and regulations uphold an individual’s dignity. In other words, producers and processors of technology should regularly ask themselves whether they are using private data in ways that can lead to stereotyping, stigmatization or exclusion. They should also consider whether personal data is used as a profiting tool rather than an imperative aspect of operation. If dignity is compromised in exchange for technology, we should debate whether that technology is worth the price.
Despite his concerns over unethical use of private data, the EDPS remained optimistic about the EU’s ability to preserve privacy without stifling innovation. He challenged developers to create technology that limited the ability to single out individuals and to concentrate on methods of collecting unidentifiable data, assuming the private information was necessary in the first place.
To help him evaluate how a balance between human dignity, innovation and business models can be achieved, the EDPS announced he would establish an Ethics Advisory Board in the coming months. The board will include several experts in the fields of technology and economics. Given the board’s focus on ethics, members will also include specialists who can provide information about the social implications of privacy risks. Among them will be sociologists, psychologists and ethics philosophers. When needed, additional authorities will be invited to weigh in on solutions to privacy obstacles and their ethical compromises.
An ethics board proposal and the list of threatening technologies and responsible parties were all laid out to establish a framework for further discussion. The Opinion of the EDPS echoed his belief that personal dignity did not have to hinder innovation as long as the EU challenged itself to come up with solutions that prioritized dignity in technology. According to the EDPS, now is a prime time for the EU to adopt a fresh approach to handling private data in an ethical manner. Although technological trends are not all predictable, proactive collaboration between experts and the public can ensure that security and dignity will become an integral part of future developments.
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.
Though user-generated content (UGC) is now a standard part of the journalist’s newsgathering toolbox, the ready availability and proliferation of such content—and our bottomless appetite for breaking news—introduces a familiar ethical dynamic in the modern newsroom. However, rather than inhibiting the journalistic project, these evolving ethics of social newsgathering may also empower the voice of the raucous court of public opinion, which has historically kept the press in check.
The increased proliferation of UGC in news media is a byproduct of our culture’s increasingly symbiotic relationship with social media. The best social media platforms allow for a customized newsfeed that effectively blends current events with status updates, food pictures, and smarm. In other words, our ubiquitous newsfeed is a digital-age, individualized evolution of the broadsheet. Although within these feeds there is enough context to distinguish a CNN headline on Syrian refugees from pictures of your aunt’s vacation to Denver, the fact that these two messages—one ‘newsworthy’ and one social—share the same visual real estate signals an emerging precedent and expectation for UGC and news media: the news must be relatable, relevant, and—most importantly—it needs to come from someone like us, not them.
This much is confirmed by the Online News Association (ONA), which last year became the first group to formally engage with the ethics of this new paradigm in the two-part article, “Social newsgathering: Charting an ethical course.” Regarding newsgathering, the authors write, “You’d better leave lots of room for social tools, given the powerful role social newsgathering now plays in discovering important information and content, especially when news breaks where there isn’t a professional journalist in sight.” On one hand—depending on who you talk to—the increasingly powerful role of social newsgathering has put professional journalists in a precarious position (just ask anyone who used to work for the Chicago Sun-Times). But on the other hand, UGC allows us to see, hear and learn things we might otherwise have gathered only through hearsay back in the days of print. There are certainly rightful grounds for the news utility of UGC (and the corresponding reduction of overhead à la the Sun-Times).
For its first look at the ethics of social newsgathering, Eric Carvin and Fergus Bell of the ONA—both social media editors at The Associated Press—formed a social newsgathering working group that cooperatively identified five key ethical challenges of social and digital newsgathering: verification and accuracy, contributor safety, rights and legal issues, social journalist wellbeing, and the less obvious issues of workflow and resources (i.e., “How does a newsroom with tight resources develop the expertise to make strong ethical decisions about social newsgathering?”). While these are worthwhile exercises, in many ways they simply restate the ethical challenges journalism has faced since the dawn of the free press: Can these facts be verified? Will someone be put in harm’s way if this information is shared? Are we breaking any laws? How do we know if what we’re doing is the right thing?
These and other ethical questions are, of course, essential for good journalism. But what these conversations leave out is that, despite the nobility of such ethical pontification, they take place under the umbrella of commerce that is at once amoral and discriminating. You can’t run a profitable paper without selling a few ads. And isn’t that the whole reason ‘news’ exists, anyway—to make money? Idealists say no, but without the news media industry, there would be no news—and without financing, there would be no news media. But the nature of UGC and the tendency of the digital age to subvert once privileged news access undermines the business precedents—and associated ethical contexts—of traditional news media. It’s a double-edged sword.
One recent corrective on UGC and the ‘business’ of news comes again from Fergus Bell who, in a keynote address at the news:rewired ‘in focus’ conference this past October, said: “There are ways to be competitive and ethical at the same time. I think that it requires the industry to work together. There are certain standards that we can come to – just because this is new, it doesn’t mean that we can’t get together and talk about it.” As Alli Shultes reports at Journalism.co.uk, the focus of the conversation on UGC and journalism should be sustainability. For Shultes, this means “building confidence so that newsrooms and journalists continue to be trusted to handle UGC in an ethical and professional manner.” But it also means sustaining business, audiences and reliable reportage, which is the essential product of the news media industry. On a basic level, Bell has a method for fostering this sustainability via UGC:
– Find the earliest example
– Check the source’s history
– Ask the source about the information/image
– Verify the source
– Secure permission for the AP to use it
– Compare the content (with other images that might date it)
– Verify the content
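Bell’s checklist lends itself to a simple workflow model. The sketch below is purely illustrative (the class, field and step names are my own paraphrase, not AP tooling) and shows how a newsroom might track which steps a piece of UGC has cleared before publication.

```python
from dataclasses import dataclass, field

# Paraphrase of Bell's verification steps, listed above.
VERIFICATION_STEPS = [
    "find earliest example",
    "check source history",
    "ask source about the content",
    "verify the source",
    "secure permission to use",
    "compare with content that might date it",
    "verify the content",
]

@dataclass
class UGCItem:
    """A piece of user-generated content moving through verification."""
    url: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in VERIFICATION_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    @property
    def publishable(self) -> bool:
        # Only items that have cleared every step are ready to run.
        return self.completed == set(VERIFICATION_STEPS)

item = UGCItem(url="https://example.com/eyewitness-clip")
for step in VERIFICATION_STEPS:
    item.complete(step)
print(item.publishable)  # -> True
```

The point of modeling the checklist as data rather than prose is the same one Bell makes about industry standards: an explicit, shared list is something newsrooms can agree on, audit, and enforce.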
Because UGC is now a journalistic standard, and because this standard is both received and combined with our personalized, microcosmic social news, Bell’s ethical methodology has value for journalists and newsrooms as well as for users themselves. The ubiquity of UGC means that users are now more explicitly contributing to the news generation and collection process, whether we know it or not. A haphazard tweet or Instagram post can be front-page news, despite the generally self-serving intentions dominating our online lives. In this way, there is at least a baseline moral incentive for users to be aware of the ethical dynamics of newsgathering, especially as the lines between laity and industry continue to blur.
At one time, the hard line between the public and the press—while perhaps more beneficial to business—nonetheless gave the public a certain power over the press. If we don’t like what you’re printing, we’re not going to buy it. And if we really hate it, we will boycott it or do everything we can to put you out of business. A democratic system that values free speech grants sway to the majority; this is the Court of Public Opinion. And as the digital conversation of social media has empowered the voice of the laity, it has likewise changed the dynamics of this Court. The UGC that is changing the dynamics and ethics of newsgathering and journalism is (or can be) the same content that serves to hold the press accountable. And in situations where the press is state controlled, social media as UGC even has the capability of inspiring revolution (as in Egypt in 2011).
With UGC now a newsroom standard, it’s important to update the ethics of newsgathering and journalism. By the same token, it’s perhaps even more important to understand the commercial context from which these ethics emerge. For UGC has transformed not only the newsgathering process but also the industry that has historically commanded the mechanisms of that process. In this new milieu there emerges a responsibility that both journalists and ‘users’ would do well to apply. It makes for better news, reliable content, and—even though we hate to say it—a better bottom line.
Benjamin van Loon is a writer and researcher from Chicago, IL. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.