Stopping crime before it happens is law enforcement's perfect dream. It can save time, resources, and even lives. But for the average citizen, the idea of preventive crime monitoring is more like a science fiction nightmare out of Steven Spielberg's 2002 tech thriller and Tom Cruise vehicle Minority Report.
The pitfalls of pre-crime monitoring are central to Spielberg’s underlying horror where, in the future, clairvoyant beings “previsualize” violent crimes before they happen. All is well until the beings previsualize a crime nobody expects to happen, setting off a 145-minute chain of Academy Award-nominated events.
Minority Report’s claims about free will could keep a philosophy class going for hours, but the real relevance of the film, as with any serious science fiction, is in its prophetic power. No, we don’t have superhuman psychic mutants, but we do have big data, and as early as 2005, some U.S. police departments were using predictive tech to identify crime trends and reduce crime in cities like Memphis and Minneapolis. But that was more than a decade ago. A lot has changed since then, and the pace of change shows no sign of slowing. We’re more connected now, and more and more of our lives are being sent to the cloud. As a result, we’ve laid a strong groundwork for a total surveillance society.
Though some people are okay with the techno-Faustian bargain we’ve bought into, most are still unsettled by the idea and the potential of digital surveillance. Even with pre-crime tech entering its teen years, news of China’s recent foray into pre-crime monitoring is ruffling feathers and bringing the field to a pivotal ethical crossroads. The tech isn’t going away, and it’s only going to get better. The challenge will be: how do we do it so we don’t all end up like Tom Cruise on the run?
China’s pre-crime monitoring program, developed by state-run defense contractor China Electronics Technology Group, reportedly captures data on “jobs, hobbies, consumption habits, and other behavior of ordinary citizens” to predict potential crimes, writes Bloomberg reporter Shai Oster. There’s nothing notable about the data capture itself—just look at the digital advertising industry—but for crime surveillance purposes, the intent is far more suspect. That’s especially true of China’s program, where “there are no safeguards from [Chinese] privacy protection laws and minimal pushback from civil liberty advocates and companies,” adds Oster.
Surveillance is a mechanism of power, and without legal safeguards or civil, corporate or public pushback, the technology can evolve unchecked. The U.S. has its own safeguards in place—at least on paper—which is why Apple was able to refuse the FBI access to the San Bernardino shooter’s iPhone a few weeks ago (until a hacker came along to help the agency circumvent the issue). The safeguards exist to guard the privacy of the American public, but to the state, they’re duct tape over its own camera lens. And because privacy laws in China are overwhelmingly favorable to the state over its public, writes Patrick Tucker at Defense One, “China is poised to emerge as a leader” in pre-crime monitoring technology.
China’s growing leadership position in pre-crime tech is founded on a military paradigm that favors domestic security over military spending. According to Tucker, China increased its security spending in 2011 by 13 percent, to 624 billion yuan (roughly $95 billion), edging past military spending at 601 billion yuan. The increase in spending allowed the Chinese government to launch a national program, “requiring 650 Chinese cities to reform their public security and safety infrastructures with state-of-the-art technologies,” according to a 2013 report from Homeland Security Research. Technologies in the overhaul include tracking technologies, video surveillance, physical identity and access management, cyber security, physical security information management, and other surveillance hardware and software.
This tech ramp-up is part of a greater Chinese effort towards “social governance,” or “social management,” which—though difficult to define in English—is distinct from government oversight of economic and state governance: Instead, it speaks to how “the government manages and regulates social affairs, social organizations and social life, with the guidance of law,” according to East Asia Forum. The push comes from the changes spurred by China’s increased urbanization, where the government is increasingly expected to maintain social stability. Including general social affairs in this larger state oversight effort is one piece of the larger surveillance pie, and with digital tech integral to modern social affairs, it makes practical sense for states to drive resources towards social surveillance.
To codify the strength and scope of these resources, China drafted a new cybersecurity law last year authorizing “broad powers to control the flow of information,” writes Austin Ramzy at the New York Times. Wary of the democratizing ideology of the open Internet, China already has restrictive Internet laws in place, and the new draft law says that the state’s Internet information department is “responsible for comprehensively planning and coordinating network security efforts and related supervision and management efforts.” Rather than creating new cybersecurity initiatives, the draft law elevates existing practices and regulations to the state level, ensuring the centralization and efficacy of state surveillance power.
Asked whether China’s increased spending on domestic security is part of a greater global trend, Adam Segal, the Maurice R. Greenberg senior fellow for China studies and director of the Digital and Cyberspace Policy Program, said the tech is instead being driven by China’s specific “concerns about social protests and threats to domestic control”—or what it calls terrorism. In a March 4 article for Defense One, Segal argued that despite the locality of its efforts, China is looking to the global stage as a reference and defense for their anti-terrorism surveillance, stating that the provisions of the cybersecurity and data collection laws are in accordance with “international common practices.”
In his article, Segal adds, “The desire for data may only intensify under Xi Jinping’s leadership; the Chinese Communist Party appears increasingly worried about domestic stability and the spread of information within the country’s borders.” It’s not something China takes lightly, either. Recall Xinjiang, where deadly, racially charged riots between the Muslim Uighurs and the Han Chinese prompted the state to cut off the region’s Internet access entirely; blaming overseas groups for using the Internet to incite the violence, China kept the blackout in place for ten straight months, restoring access only in 2010. The riots left 197 people dead and another 1,600 injured, and they fit squarely within China’s definition of terrorism, the very thing its pre-crime monitoring program is now attempting to curb.
“When the Chinese refer to cyber terrorism,” Segal added in an interview, “they are referring to the spread of extremist ideas as well as the promotion of violence—say, sharing of how to construct IEDs.” China’s pre-crime monitoring program will flag any terrorist-like behavior, such as sudden influxes of cash, frequency of international calls, and other analyzed trends, allowing authorities to target specific instigators, freeze their accounts, and open up further information inquiries—to stop any terrorist acts before they happen. “The issue for the U.S.,” adds Segal, “is that some forms of speech the Chinese consider terrorist—‘splittism’ from Uighur or Tibetan activists—the U.S., would likely consider legitimate public discourse.”
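None of the program's actual rules or thresholds are public, but the mechanics of this kind of rule-based flagging are simple to illustrate. Here is a purely hypothetical sketch in Python; every field name and threshold is invented for illustration:

```python
def flag_profile(monthly_deposits, intl_calls, spike_ratio=3.0):
    """Flag behavior that spikes past a multiple of its own baseline.

    monthly_deposits: list of monthly cash-deposit totals, oldest first.
    intl_calls: list of monthly international-call counts, oldest first.
    Returns the list of rules tripped (empty if none).
    """
    flags = []
    for label, series in [("sudden influx of cash", monthly_deposits),
                          ("spike in international calls", intl_calls)]:
        if len(series) < 2:
            continue  # no history, so no baseline to compare against
        # Baseline is the average of every month except the latest.
        baseline = sum(series[:-1]) / (len(series) - 1)
        if baseline > 0 and series[-1] > spike_ratio * baseline:
            flags.append(label)
    return flags
```

A real system would fuse far more signals and weight them statistically, but the sketch makes the underlying point: once behavior is reduced to data, "terrorist-like" becomes whatever threshold the state writes down.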
And here’s the rub: one person’s terrorism is another’s free speech. “Since all algorithms and data gathering are inherently political,” Segal says, “the system, if possible, would seem ripe for abuse.” Power is concerned with self-preservation, and pre-crime monitoring, backed by big data and analytics, is another tool in that arsenal. But the technology won’t stop, and it will only get better—especially as more of modern life gets sent to the cloud. For pre-crime monitoring to advance, be effective, and avoid Minority Report-scale misapplication, it will need to prioritize ethics over returns.
Benjamin van Loon is a writer and researcher from Chicago, IL. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.
For over 200 years, the devious practice known as “gerrymandering” has undermined our democratic process and instilled a sense of political despondency in the American electorate. What is gerrymandering? The term is a blend of “salamander” and the name of Elbridge Gerry, governor of Massachusetts in the early 19th century. Gerry presided over the drawing of a new state senate voting district in 1812, and its creation was widely believed to favor his party, the Democratic-Republicans. The map was said to resemble the shape of a salamander, complete with various appendages, and was depicted in the Boston Weekly Messenger with claws and fangs. Simply put, “gerrymandering” has come to mean the intentional manipulation of electoral maps to favor a particular party or interest. Many solutions have been proposed to stop the practice, but one of the most promising may turn out to be a computer program.
One can usually spot a gerrymandered electoral district by its contorted shape, and a district map illustrates better than any definition how the manipulation of electoral boundaries can hand a disproportionate advantage to a given party. Take Tennessee’s nine congressional districts as they appeared in 2004. The distinct shapes of districts seven and three are not arbitrary; they were drawn up specifically to favor House Republicans. In fact, not a single House member in Tennessee lost a bid for re-election from 1980 to 2005. This is gerrymandering in full effect. Whenever a politician at the state level has remained in office for multiple terms, it’s not unreasonable to suspect that gerrymandering has played a role in some way. Job security is usually a good thing—but when it comes to politicians, accountability should be part of the job description. Electoral districts are said to be based on “communities of interest,” defined by Washington Post reporter Christopher Ingraham as “people who share a common demography, culture, class, etc.” That’s a pretty broad term. How do we decide what a community consists of? The law doesn’t define it. Who gets to decide?
The United States has become increasingly polarized in the past few decades. Thomas Mann from the Brookings Institution writes, “The parties in Congress are as polarized—internally unified and distinctive from one another—as any time in history.” The increased polarization, both in rhetoric and in actual representation within Congress, is perhaps a symptom of a larger systemic problem: Americans by and large feel they are alienated from their representation in government. This dramatic presidential race is perhaps a manifestation of these trends. And it certainly makes for a lot of great headlines. At least presidential elections are generally competitive. But what happens in Congress is arguably more consequential for the lives of average Americans. And it’s directly related to the heart of the matter. Congress currently has an approval rating of 11 percent. That’s only 2 points higher than the all-time low of 9 percent in November 2013, directly after the government shutdown. A whopping 86 percent of the public says they disapprove of Congress. Yet in 2012, the re-election rate for incumbents in Congress was 90 percent for House members, and 91 percent for the Senate. Now at this point, you might be wondering how it is that representatives in Congress remain in office when the public disapproves of them in record numbers. That’s a great question.
Part of the problem is that most folks tend to like their own elected representatives but think that Congress as a whole is ineffective. That explanation holds water with Senate elections, which are not subject to the vices of gerrymandering. But what about the House of Representatives? Take a look for yourself. Ingraham concludes that Democrats are underrepresented by at least 18 seats in the House. It would be irresponsible to blame only one party when it comes to gerrymandering, but Republicans have benefited disproportionately in recent elections. So it isn’t particularly surprising that Democrats have lately pushed legislation to end the practice; one wonders whether, if the shoe were on the other foot, they would be as eager to do so.
So gerrymandering is a big problem. What can we do to fix it? Is it fixable? The drawing of electoral boundaries has been controversial for as long as electoral boundaries have existed in the US. The formation of independent, nonpartisan redistricting commissions is a potential solution. California’s map in the 2000s is a prime example of the influence of gerrymandering: The bipartisan redistricting that followed the 2000 census virtually guaranteed incumbent victories. In 2002, not a single seat in the California state legislature changed party hands, and districts were safely dominated by either Democrats or Republicans, with very few competitive races. The state’s solution is often touted as an ideal model. In 2008, voters passed Proposition 11, which took the drawing of state legislative districts away from the legislature (a follow-up measure, Proposition 20, extended the reform to congressional districts in 2010). Surely that’s a step in the right direction. The power to redraw was given to the 14-member California Citizens Redistricting Commission. As a result, the commission created a map that gave rise to some of the most competitive congressional districts in the country. But redistricting commissions, as good as they may sound, are still subject to corruption. It shouldn’t be a surprise that as soon as the new commission in California was created, politicians were already trying to figure out how to exert influence over the process. Maybe it’s time to take humans out of the equation.
A recent study conducted by marketing research firm Intentions Consulting and Nikolas Badminton shows that some Canadians have a distinct lack of trust in, well, humanity. Roughly a quarter of participants thought that an unbiased computer program would be more trustworthy and even more ethical than their own managers. They preferred a computer program to be responsible for screening, hiring and assessing job performance. Younger adults shared these beliefs in higher numbers, with about a third of them preferring a computer program over an actual person. It’s safe to say that at least some Americans share this preference in some capacity. What if, as a solution to gerrymandering, we let a computer program redraw electoral districts?
It’s true that there will always be losers when it comes to drawing electoral boundaries, which makes it all the more important that it’s done in an impartial way. Maybe, in this instance, simple is best. The “shortest split-line” method of redrawing districts is a clear, straightforward solution, touted for its impartial nature. Essentially, shortest split-line is based on an algorithm with clear, transparent parameters, and creating a mathematical definition of an electoral district inherently removes the bias of human influence. In reference to the algorithm, The Center for Range Voting (CRV) states, “Which of those people are liberal, conservative, Republican, Democrat, black, white, Christian, Jewish, polka-dotted, or whatever has absolutely zero effect on the district shapes that come out.” CRV created a detailed comparative analysis of what each state’s electoral boundaries would look like under shortest split-line. One of the major hurdles for the method is that its indiscriminate nature can actually be illegal in some circumstances, specifically with respect to the Voting Rights Act, which expressly prohibits the formation of electoral districts that dilute the votes of racial groups. That law espouses a nice sentiment, but as a consequence it mandates affirmative racial gerrymandering, sometimes with unintuitive results: It lumps minorities together, wherever possible, into a single district. As evidenced by the situation with Florida’s fifth congressional district, this can actually diminish the influence of minorities, because it gives them less influence in surrounding areas. Wouldn’t it be best for all voters if the redrawing process could be guaranteed to be impartial?
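The procedure itself is easy to state: to divide a state into N districts, split the district count as evenly as possible into A and B, find the shortest straight line that divides the population in the ratio A:B, and recurse on each half. Here is a minimal Python sketch of that recursion; it simplifies by treating the population as equal-weight points and by using the region's spread along the cutting direction as a stand-in for true chord length (CRV's actual algorithm works on real state geometry and exact populations):

```python
import math

def split_points(points, n_districts):
    """Simplified shortest-splitline districting.

    points: list of (x, y) tuples, each representing one equal-population
    unit (a person, a census block, etc.).
    Returns a list of n_districts point lists, one per district.
    """
    if n_districts == 1:
        return [points]
    a = n_districts // 2
    b = n_districts - a  # split the district count as evenly as possible
    best = None
    for k in range(36):  # try 36 candidate cut orientations
        theta = math.pi * k / 36
        nx, ny = math.cos(theta), math.sin(theta)  # normal to the cut line
        # Sort by position along the normal; the cut separates the
        # population in the ratio a : b.
        ordered = sorted(points, key=lambda p: p[0] * nx + p[1] * ny)
        cut = len(points) * a // n_districts
        left, right = ordered[:cut], ordered[cut:]
        # Stand-in for chord length: the region's spread measured along
        # the cut line's own direction (-ny, nx). Shorter is better.
        proj = [-p[0] * ny + p[1] * nx for p in points]
        length = max(proj) - min(proj)
        if best is None or length < best[0]:
            best = (length, left, right)
    _, left, right = best
    return split_points(left, a) + split_points(right, b)
```

The appeal the article describes is visible in the code: nothing about any point except its coordinates ever enters the computation, so party, race, and incumbency cannot influence the resulting shapes.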
Gerrymandering isn’t a problem that makes headlines often. Both parties do it. It’s a needlessly complicated problem, and it’s not a sexy or even a particularly interesting problem. And that is part of the reason we allow it to persist. But gerrymandering gives rise to a polarized, corrupt and complacent legislature. In previous essays, we’ve seen multiple instances in which the U.S. government has been slow to adapt to emerging technologies. Copyright laws have been slow to adapt to the emergence of digital media. Transparency, once praised as a virtue in government by candidate Obama, is all but a hopeless dream as far as President Obama is concerned. Ironically, that’s one thing the executive and legislative branches seem to agree on, despite the fact that transparency is now easier than ever with the development of new methods of electronic communication. This distinct inability of our legislative body to keep up with the advancement of technology is largely attributable to the checks and balances inherent to the way our government functions. Passing legislation isn’t an easy thing to do. The more comprehensive the legislation is, the more difficult it is to get through Congress, and rightfully so. But redistricting based on a shortest split-line algorithm is actually an instance in which technology can directly aid the democratic process, thereby potentially making our government more efficient and fair. It shouldn’t be controversial to suggest that politicians ought not be given the authority to draw their own electoral boundaries. It’s a clear conflict of interest. In principle, gerrymandering is patently undemocratic, regardless of which party is drawing the map. And rest assured, both parties are complicit in this practice. Voters should choose their representatives, not the other way around.
David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch. Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at email@example.com, and his URL is http://davidstockdale.tumblr.com/.
Last month, for the first time in its 30-year history, the South by Southwest (SXSW) festival in Austin, Texas, played host to the president of the United States. Thousands of SXSW delegates crowded into lounges, bars and ballrooms to watch the live-streamed keynote conversation, while a lucky few hundred, chosen by lottery, attended the event in person.
In discussing matters of policy, the president did not disappoint. He ranged across the wide spectrum of government tech challenges, and for the first time commented publicly on the ongoing Apple-FBI conflict around access to the encrypted iPhone of the San Bernardino shooter.
Government and Tech
There is a clear will to make government work more effectively using tech; to tackle big problems in new ways, and to improve civic participation. The concept of tech talent working hand-in-hand with government is sometimes hard to grasp: There is a massive cultural clash between the sleek tech world and outdated government systems. The president ruefully recalled the launch of Obamacare and the immediate failure of the website built to administer it. The big and bloated procurement systems, which gave rise to this failure, were created for ‘boots and pencils’ – not software! There is an immediate need to change procurement processes so that they call for innovative solutions, rather than requesting specific but outdated widgets.
To tackle the current state of affairs there is now a cross-system ‘SWAT team’ with top Silicon Valley talent, the U.S. Digital Service. As a U.K. observer, I found the technical challenges faced by the government familiar. We are also going through substantial change, with the U.K.’s Government Digital Service bringing agile development to bear in a traditionally sluggish environment.
Future-proofing tech initiatives
President Obama stressed that the new regime will need to support constant improvement, and the continued introduction of new talent and new ideas. Building engagement and public trust in government tech is a priority: A continuous pipeline of talent, thought leadership on issues such as tackling extremism online, and improved representation are key elements for success.
The U.S. is the only developed country that is making it harder to vote, said the president. In Texas, he reminded us, you can’t even register to vote online. Government needs safe, secure and smart systems, and users must be aware of the issues upon which they are voting. In a wider context, there is now an app for everything from banking to bus tickets, from fitness to photography. We should expect the same smooth experience in our primary interactions with government, whether that’s at IRS filing time or when renewing licenses. Right now this is patently not the case, and the government needs the help of the tech community as a whole to move forward effectively.
The digital ideal
One immediate concern when moving services online is to ensure that all citizens are empowered to use them. A shocking 50 percent of Hispanics and 46 percent of African-Americans do not have regular Internet access or digital skills; this has to be tackled in parallel with new initiatives. The private sector, nonprofit, and government relationship must be opened up to have a chance of success, and focus must shift from the ‘cool next thing’ to actually using that ‘thing’ to help people, and to deliver opportunities.
Apple and San Bernardino
For the first time, President Obama commented publicly on the (now moot) dispute between the FBI and Apple. The FBI has requested that Apple provide a ‘back door’ into their operating system to enable access to encrypted data. The president’s position is that deep search of personal effects is normal on grounds of suspicion and on production of a warrant, so why should a digital search be treated differently? He went on to say that the Snowden revelations had ‘vastly overstated the danger to U.S. persons’ and that both Snowden and popular culture have increased public fear of security breaches. There is a conflict of values: If it is possible to make an impenetrable device protected from hackers, then how do we enforce laws? Decisions must be made to balance risks, and there must be a way to compromise as we have in other situations, such as airport security, where we surrender ease of passage to improve our own safety.
The tech community, however, is still coming down firmly on the side of Apple because a door once opened cannot easily be closed. Two days after the president’s keynote, I spoke to Rep. Will Hurd, chairman of the information technology subcommittee of the House of Representatives. He was clear that while there should be room for compromise, there can be no back door. He also reminded us that there has been no justification from the FBI as to why they need access to the physical device: They already have all the data from the cloud. Have prosecutors been blinded by tech, and forgotten the more traditional tools at their disposal?
The privacy leaders of Google, Facebook and Microsoft echoed this view at a panel session the following morning. Any ‘back door’ created for good will be exploited by the bad guys, said Microsoft’s chief privacy counsel, Mike Hintze. Erin Egan, Facebook’s chief privacy officer for policy, expanded on this: “We all work with law enforcement,” she said. “But we have a duty to protect the security of people who use our service.”
Google’s legal director of privacy, Keith Enright, highlighted ‘Engineering for Trust’ as a fundamental principle in this space. Even if the Apple case had specific validity (and Rep. Will Hurd’s comments throw this into question), legislation to require back-door development would open the floodgates and set a very dangerous precedent. Unfortunately, the current legislation used to determine government access to data predates the technology now in common use; the Electronic Communications Privacy Act was enacted 30 years ago, in 1986, five years before the advent of the World Wide Web.
What does the future hold?
The president’s visit to the SXSW Interactive event was a solid endorsement of the importance of tech to government. By calling for the whole tech community to step forward and help, he opened the door to closer collaboration and real innovation in effective government.
The big firms gave a tantalizing glimpse of a welcome trend: the realization that privacy is infinitely nuanced, and therefore merits new ways of thinking. Ethics, they believe, will be the next development in the privacy space – should philosophers be on board from the start of feature and product planning? There is also a swing towards empowerment of users and perhaps, one day, the personalization of the privacy experience in line with individual sensitivities. An exciting time indeed for digital ethics and policy!
Kate Baucherel is a director of UK-based software startup Ambix, a qualified accountant with 25 years’ experience across a variety of industries, and an experienced digital marketer. She is the author of Poles Apart: Challenges for business in the digital age, and works with businesses of all sizes to help them use the internet effectively to achieve their goals. She has two young children, and lives in the north of England.
Smartphones kind of suck.
Mining the minerals and metals that go into phone parts creates hazardous fumes and toxic waste, and the workers who assemble iPhones also endure human rights violations. (Maybe you remember the spate of suicides by iPhone factory workers in China in 2010.) And when smartphones get tossed, hundreds of millions of them end up in landfills every year, where carcinogenic ingredients like lead and mercury can leach into soil and water. “There’s no getting around the hard truth: right now, there is no such thing as an ‘ethical smartphone,’” wrote Andrew Leonard for Salon in 2012.
Has anything changed in the past four years? Fairphone thinks so.
What is Fairphone?
In mid-2013, Dutch startup Fairphone made headlines for crowdfunding the first “ethical smartphone.” The company promised technology without the guilt: A smartphone made with fair labor practices, environmentally conscious material sourcing, and conflict-free minerals (at least for Europeans—the Fairphone isn’t available yet in the U.S.).
Two and a half years later, Fairphone has sold 60,000 of its original model and 16,500 of the next model, Fairphone 2 (which sells for about $584). Quartz’s Sarah Shearman concluded, “Fairphone’s success suggests that there are concerned customers prepared to vote for more ethical electronics with their wallets.”
But “more ethical electronics” is murky. Arguably, it’d be hard to make a less ethical smartphone than what’s available today. To determine how ethical Fairphone is and whether it’s a feasible alternative, we have to take a closer look at the company’s practices—and the larger societal issues surrounding smartphones in general.
Sourcing materials for smartphones is a double whammy: Since more than 30 minerals go into them, production requires a lot of mining; plus, that mining is often tied to violence. Caroline Winter for Bloomberg explains, “Minerals found in smartphones often come from conflict zones, most notably the Democratic Republic of Congo [DRC], where many mines are controlled by warlords and armed groups that use the profits to bankroll the country’s brutal, ongoing battles.”
In an attempt to avoid these conflict minerals, companies are turning to Indonesia instead of the DRC as a less-controversial source for the tin used in smartphones (which solders components to the circuit board). Two tin suppliers to Foxconn, Apple’s top manufacturer, get all of their tin from Indonesia. But even there, safety precautions go unheeded and miners are buried alive in pits that are illegal and dangerous. Tin mining in Indonesia may be “conflict-free,” but it’s not necessarily safe or regulated.
And then there’s pollution. In Australia, mining aluminum for smartphone casing releases coal dust and sulfur dioxide, harmful to lungs. In Inner Mongolia, processing rare earth metals produces toxic waste and “radioactive sludge” that causes cancer. And mining for copper in Chile for logic boards has created what environmental activists call “the biggest toxic waste dump in Latin America.”
So what is Fairphone doing differently? It’s a small start, but the company uses tin and tantalum from conflict-free mines in the Congo, and the mines are monitored. Recycled copper is used for the printed circuit board. Fairphone is also trying to improve its sources of tungsten and gold, stating in a fact sheet, “We are working with partners to try to reopen the conflict-free tungsten trade in Rwanda to stimulate the local economy and establish a transparent tungsten supply chain…We are working to identify and integrate sources of fair trade and fair-mined gold into our supply chain.” Four out of 30 minerals leave much to be desired, but with sourcing and processing being so complex and opaque, at least it’s something.
There’s a huge discrepancy between soaring smartphone profits and the tiny salaries and pitiful working conditions of those who actually build them. Sarah Shearman of Quartz mentions injustices at smartphone assembly factories such as sweatshop conditions, poor pay, and no breaks during long shifts. Undercover reporters at an Apple factory outside Shanghai reported 16-hour shifts and tiny dorm rooms shared by 12 workers. Meanwhile, Apple made $18 billion in profits in the first quarter of 2015 alone—“the biggest quarterly profit ever made by a public company,” the BBC reported.
Fairphone is trying to not only improve labor conditions but also give workers a voice. At Guohong, the Chinese factory it uses, the company established a Worker Welfare Fund. Employee-elected representatives decide how to use the fund (higher salaries, better food, extra training, etc.) and liaise with management. Fairphone has donated $125,000 to the fund so far. The company also boasts, “Our production partner Hi-P has already made a number of concrete improvements, ranging from fire safety and protective equipment for employees to addressing systemic challenges such as working hours.”
But there’s a lot more work to do. Fairphone Founder and CEO Bas van Abel admitted, “Conflict-free is not child labor-free. We know for sure there is child labor in our supply chain. Why? Because we work in Congo. We choose to work in Congo because we think by contributing to the work in Congo and giving people jobs there, there’s a chance to do something about child labor.” Overly optimistic sentiment? Time will tell.
Repair and Disposal
The tech industry, and particularly the smartphone industry, is based on convincing people to buy a new model at an ever-increasing clip. Companies do this with technological advances, incompatible updates, and the difficulty of repair.
Today’s smartphones are often very hard to open, with tough glue or solder cementing the battery in place. For instance, the HTC One Android smartphone, which came out in 2013, had a seamless, “zero-gap” design that made it almost impossible to fix. “This phone was not made with openability in mind,” remarked iFixit. The site rates smartphones on repairability, and several recent phones rank poorly, including the HTC One M9, the Google Nexus 6P, and the Samsung Galaxy S6 Edge. (iFixit gave the Fairphone 2 a perfect score in this category.)
Even if people repair their smartphones, most don’t dispose of them responsibly. It’s difficult to find recent figures, but estimates of smartphone recycling rates range from 20 percent to as low as 11 percent. Forrester Research analyst Doug Washburn thinks that’s because it’s much harder to recycle a smartphone than glass bottles or paper. Plus, Washburn says, smartphones are small, so it’s easy to stow them in a drawer and forget about them. At best, not recycling a smartphone prevents the reuse of precious minerals; at worst, the toxic components can seep into soil and water.
As previously mentioned, Fairphone excels here. The startup designed its modular phone with repairability in mind. Fairphone says you can “fix a broken screen in under a minute” and sells repair parts on its site. Repair instructions come with the Fairphone 2, and you don’t even need tools to replace the battery or display. Fairphone also launched an e-waste recycling program so Europeans can ship their old Fairphones back for free when they’re done with them, and other brands of smartphones are recycled as well.
Overall, the Fairphone 2 is Europeans’ most ethical smartphone choice, aside from continuing to use their existing phone. (Ethical Consumer rates it a 15 out of 20 on ethics, compared to the iPhone, which gets a lowly 6.5.) But there are other issues to consider.
Whether you love Androids, iPhones, or something else, chances are, you get a new phone fairly often. Maybe that’s because it breaks—after all, our society has shifted from a mindset of “repair” to “replace,” particularly with electronics. Maybe it’s because manufacturers have trained us to upgrade our devices every two years, if not sooner. Either way, rapid consumption is the norm. When the novelty of the Fairphone wears off, will people still keep it around?
If consumers can shift their thinking, buy a Fairphone, actually repair it, and keep it for several years, that would reduce demand and waste. After all, people in other countries keep their smartphones for four or five years, wireless analyst Tina Teng told NBC News. Tech analyst Allen Nogee added, “In developing countries, with voice-centric phones and low incomes, people keep phones a long time.” But that’s probably more due to financial restrictions than ethical concerns. I don’t know if we Americans can change our perception of using the same phone for years from “technologically resistant and out of touch” to “cool and ethical.” As that $18 billion quarter attests, the latest Apple product is a compelling status symbol.
Another problem is Fairphone’s inferiority to other smartphones in terms of price, weight and aesthetics. “The Fairphone is an average mid-range smartphone, but it can’t really compete with the likes of the Moto G, which is less than half the price,” wrote Claudia Cahalane in a review for The Guardian. “The ideal would be something that looks and works like an iPhone, but is made and sold by a company like Fairphone. And that’s unlikely in the near future.” Rather than forcing ethically minded consumers to sacrifice something shiny and fun for something expensive and hefty, we need stricter and better-enforced regulations that hold all smartphone manufacturers to higher standards.
However, no matter how ethically it’s done, mining is still destructive. Friends of the Earth Europe writes that tin mining for smartphones “is almost certainly linked to the devastation of forests, farmland, coral reefs, and communities in Indonesia.” (Destroying fields of crops to make a smartphone will be ironic to anyone who’s played Farmville.) Ultimately, it’s hard to get around mining’s destruction, pollution and waste. Regulating it is a great first step, but ultimately, rare earth minerals aren’t a renewable resource. We need to figure out what else we can build smartphones out of and incorporate renewable energy if we want a truly sustainable solution.
So yes, as Fairphone’s leaders themselves point out, the most ethical smartphone is the one you already own. Better yet, use it to pressure smartphone manufacturers to improve their practices. To again quote Andrew Leonard on Salon, “Ironically, billions of people around the world are now in possession of the most powerful tools for facilitating grass-roots organization ever invented: ethically compromised smartphones!” Put those tweets, emails, and Kickstarter dollars to good use, and maybe one day, we’ll have a truly ethical smartphone.
Holly Richmond is a Portland-based writer eagerly awaiting the renaissance of the can-and-string phone. She has an iPhone 4.
Last fall, race relations at the University of Missouri at Columbia were intensely unstable. Large protests were held across the campus, prompting the university’s president to resign over claims that he hadn’t done enough to address racist incidents that had recently occurred on school grounds. Melissa Click worked as an assistant professor of communications at the school. During a protest, student journalist Mark Schierbecker was recording the demonstrators, who had declared the inner circle of the protest a “media free zone” or “safe space.” Click blocked his camera with her hands after arguing back and forth for a few moments. “Who wants to help me get this reporter out of here? I need some muscle over here!” shouted Click. That last part would soon come back to haunt her. Schierbecker noted that he was on public grounds and well within his rights to record the demonstration. Click sardonically dismissed this notion, as other students gathered around Schierbecker and gradually pushed him out of the circle. He uploaded his recording to YouTube, and it went viral. Needless to say, outrage ensued. Calls for Click’s dismissal from the university were both abundant and fervent.
On Feb. 12, the board at U. Missouri voted 4-2 in favor of Click’s dismissal and released a statement indicating its position on the matter:
The board respects Dr. Click’s right to express her views and does not base this decision on her support for students engaged in protest or their views. However, Dr. Click was not entitled to interfere with the rights of others, to confront members of law enforcement or to encourage potential physical intimidation against a student.
This rhetoric circling around the notion of “safe spaces,” particularly with respect to protesting, is both puzzling and paradoxical. The entire point of an organized protest is to bring attention to a particular issue or cause. As such, any attempt to prevent journalists from covering such events on public grounds is not only in direct conflict with the First Amendment, and therefore completely hypocritical, but also totally counterproductive. The twisted irony that calls for Click’s dismissal came right out of the social justice warriors’ playbook is surely not lost on her. But, alas, the whole event has by now been covered from every conceivable angle and interest. Free speech advocates have said their piece. Click has been rhetorically crucified. Interestingly enough, she has also been brought up on third-degree assault charges, but that seems to be another story entirely. The truth is that the incident itself is not terribly compelling, and Dr. Click’s subsequent dismissal is not all that surprising, considering the circumstances.
The various reactions provoked by said incident, however, were deeply compelling. This is evidenced by Melissa Click’s Inbox, a piece by Steve Kolowich of the Chronicle of Higher Education. Kolowich filed a public information request for emails Click received after her skirmish with the forces of public shame, and what he found can be described as an interesting cross section of human sadism—an exposé of sorts. The additional irony that the so-called “media” she so passionately rebuked now has access to her university emails is palpable.
Of course, public information requests of this nature are perfectly ethical, and really quite necessary in a free society. “When news breaks, we try to get as much information as we can,” said Kolowich. “Professors and staffers at state universities are public employees, so the content of their emails (with some exceptions, depending on various laws) is public record. On a day when Dr. Click suddenly became a figure in the debate over free speech on college campuses, I guessed there would be a lot of interesting messages in her university email account, though I wasn’t sure who they’d be from or what they’d say.” It should be noted that the university willingly complied in a reasonable time frame.
Journalists can and should request information of this nature on a routine basis from public organizations and institutions; it is a fundamental part of proper reporting. In the case of Melissa Click, any supposed confidences are automatically nullified, as Kolowich indicated, by the fact that she works at a public institution; any communications should be treated as such. An informed professional should be aware that such communications are potentially subject to public scrutiny. And the argument that Click’s privacy has been violated in this instance would be a particularly specious point to make, considering Kolowich’s use of the information served as a tempered defense of Click. Or at the very least, Kolowich’s article serves as a plea for sanity and perhaps even mercy on her behalf—particularly with respect to threats to Click’s safety. “We’re living at a time when Internet notoriety can come suddenly and without warning,” said Kolowich. “This is true at universities, just like everywhere else. Students have video cameras, YouTube channels, Twitter accounts. What happens on campus, whether in the classroom or on the quad, is less likely than ever to stay there.” Much like the inner circle of the demonstration at U. Missouri, email correspondence of a public employee is, well, public information. It’s available to anyone who wants to access it.
One such message read, “I hope your mother dies of brain cancer.” Another read, “I hope you’re gang-raped by some of the very animals with whom you’re so enamored.” And this message really set itself apart in its vivid description: “Sport should be made of you, in which you are passed around a cell block for a week straight, then cut loose to be hunted down and killed. If hell exists, I want to be there to take part in your eternal agony. You do not deserve a marked grave.” That last one was reported to university police.
Were Dr. Click’s actions unprofessional? Well, yes—beyond unprofessional. Click absolutely should not have been ordering her students around like a mob or some kind of security force, and she should not have used her authority as a professor to do so. It was right that she was dismissed from her position. But, as these messages indicate, the reaction her transgression incited was grossly disproportionate to the offense itself, especially with respect to threats of violence. Click issued an apology, expressing regret for her actions:
I regret the language and strategies I used, and sincerely apologize to the MU campus community, and journalists at large, for my behavior, and also for the way my actions have shifted attention away from the students’ campaign for justice. My actions were shaped by exasperation with a few spirited reporters.
We ought to take such a statement in good faith. In light of the violent messages Click received, one can easily understand why safe spaces are a trend at universities. People can be horrible. Let’s be clear: threats of violence are abhorrent, and there is no excuse for them. Standing up against cruelty, even when one fundamentally disagrees with the target of such cruelty, is absolutely essential in a situation such as this. It is perhaps all the more important in such a situation. Right-wing pundits might revel in the notion of the left eating itself—liberal journalists and social justice warriors going after one another with incisive zeal. But the truth is arguably more nuanced than that.
The callous witch hunt that Professor Click has endured is deeply troubling and should be viewed as unacceptable, at least by anyone who places a value on civil discourse. At best, many of the messages amount to kicking an individual while she was clearly already down. At worst, certain messages were statements fueled by unmitigated cruelty and depravity. But what can be done? This is the nature of discourse in an age when information travels faster than a user can pause to consider whether a message is appropriate to send. A cursory look at Click’s inbox is evidence enough of that.
It would be a great oversight not to mention journalist Jon Ronson’s latest book, So You’ve Been Publicly Shamed, which discusses in depth the matter of public shaming in the age of the Internet. And while it’s a great read, the conclusion isn’t particularly shocking: Public shaming is a bad thing. It has the capacity to ruin lives. But what about emails sent directly to one’s target of hostility? It’s valid to call out Click for her hypocrisy, and let’s do journalists and free speech advocates the courtesy of not equating their messages with those of the psychotic. But at a certain point, even the most sensible messages amount to beating a dead horse. In order to be an ethical actor in today’s world, one needs the ability to appreciate nuance. One needs to avoid the logical and emotional shortcuts that come along with a hardline stance on issues such as racism and free speech. These are intensely complicated issues, making it all the more important to exercise a certain amount of restraint and caution when broaching them.
Public record requests, when employed in this way, serve as a kind of sociological examination of culture in the digital age. This is real, legitimate journalism in an age when reporting all too often panders to the lowest common denominator. The Internet has made it easier and far quicker to spread messages. Unfortunately, humans have a propensity to favor extreme, broad messages over more tempered ones. As such, the normalization of digital communication technology has spread absolutist rhetoric and broad worldviews like a plague. Extremist messages are inherently more viral, whereas nuanced ones tend to get lost in the noise. One could say we have created for ourselves a scenario even more perverse than George Orwell predicted—and all the more tragic because it arose not out of some government agenda to spy on its citizens (a concern, undoubtedly) but out of human nature manifesting itself in the digital space. One could argue that the Internet has brought about mass-market doublespeak, and much like Pandora’s box, it cannot be closed again. And really, would we want to?
David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch. Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at firstname.lastname@example.org, and his URL is http://davidstockdale.tumblr.com/.