There is an instinctive, basic trust within all of us, something that is essential to the ease with which we move through our daily lives. Think for a moment about what you trust. I trust that the alarm will wake me on time each day. Making my morning coffee, I trust that the utility companies continue to provide electricity, that what is labeled as coffee is genuinely coffee, and that the cup will not break. Think of the sense of utter betrayal when one of those conditions is not fulfilled! I still remember rushing to get up when a power failure reset the alarm, breaking my favorite mug on my 18th birthday, and the time the good coffee ran out on the morning of a big meeting.
As a U.K. citizen, I also have an instinctive trust in free speech. Freedom of speech is enshrined in law worldwide under the United Nations’ Universal Declaration of Human Rights, and upheld in varying degrees nation by nation, from the First Amendment to individual enactments of human rights legislation. Because free speech is such a basic right in the U.K. and United States in particular, we assume that people exercise this right in order to express genuine fact and opinion. It’s rare to question free speech. Perhaps it’s time we started.
Leaving the house each morning, I place my trust in the manufacturer of my car to fulfill their promise of comfortable, safe, and legally compliant vehicles. Until very recently, there was a general consensus among car buyers that Volkswagen and their partners fulfilled and even exceeded their duty of care towards our environment – certainly the messages put out there by the manufacturers, and by apparently independent free speech sources across all channels, reinforced this impression. However, as reported across the globe on Sept. 18, 2015, including in the New York Times article, “VW Is Said to Cheat on Diesel Emissions,” we were misled. This came as a huge shock to consumers: trust had been misplaced. The trust in the brands involved was based on a perception fed by clever marketing, promotion, and perpetuation of an image across social channels. Perhaps discovering that all was not what it seemed should have raised questions within all of us about the veracity and independence of that font of all knowledge, the internet. However, our need to function day to day within a trust framework means that questioning free speech is not a reflexive response.
A balanced view of the world?
How do we come to trust? From babyhood, we look at the evidence around us, and learn from our own experiences. We observe the reaction of our peers to situations, and follow their lead. We have an instinct for self-preservation, which helps us to place more emphasis on evidence that seems to be balanced and fact-based, naturally fearing overt coercion. Ultimately, we trust free speech, and the birth of the World Wide Web gave us access to reams of freely given information for decision making in our daily lives. However, when you step back, can you really say that all the memes, clickbait, selective reporting and freely-given opinion is truly balanced, factual, and evidence-based? We can’t always trust free speech on the World Wide Web.
As the Web’s creator, Tim Berners-Lee, said when writing about the Web at 25: “When we link information in the Web, we enable ourselves to discover facts, create ideas, buy and sell things, and forge new relationships at a speed and scale that was unimaginable in the analogue era. These connections transform presidential elections, overturn authoritarian regimes, power huge businesses and enrich our social networks.” This explosion of information, coupled with the ability of every internet user to become an armchair philosopher, scientist, politician, or sports coach, starts to ring alarm bells if you step back and succeed in suspending your instinctive trust in free speech. Our natural leanings towards trusting our peers can backfire in all kinds of ways, in all areas of life.
The Escher Group in North East England conducted a detailed study last year into the habits of small businesses when they seek advice and support. As these micro enterprises are the backbone of the U.K.’s economic revival, and there are hundreds of public sector-led initiatives in place to help them, it’s important that they access those resources to thrive and survive. However, Escher’s results showed that 98 percent of respondents do not trust the public sector to help them with their business; the first point of reference is usually their peers. Think about your own first port of call: it’s a natural human reaction to ask the people you think will empathize with your problems. Unfortunately, this means that solid, verifiable business advice isn’t always filtering down to the people who need it. The noise added by personal opinion, anecdotal evidence, and online publication of unverified documents full of inaccuracies, is a problem that needs to be addressed. Ultimately, free speech is trusted over ‘official’ information because of a perceived lack of empathy, to the detriment of all.
Although Tim Berners-Lee goes on to say that: “social networks are interesting …they give us a custom view, a manageable and trusted slice,” this is the trust at the root of the perpetuation of internet hoaxes. I regularly find myself pointing friends and family to references on Snopes, ThatsNonsense and other sites when they unthinkingly share a dramatic, but uncorroborated meme, which seems to align with their own views. (For other useful ways to clean up your friends’ social feeds, check out Pete Brown’s comprehensive guide published in Australia’s The Conversation – Six easy ways to tell if a viral story is a hoax.)
They are exercising their freedom of speech rights by sharing hoax memes: that is to say, they are expressing what they believe, and they have the right to do so whether the reader finds it distasteful or not. However, it’s the detail behind what they share that is of concern. Our freedom of expression may be compromised when the content we share turns out to be false.
The regulation dilemma: who polices free speech?
The Arab Spring demonstrated the powerful, positive use of the web to spread messages and “overturn authoritarian regimes” as Berners-Lee describes. However, we are now seeing the powerful, negative use of the Web as the Daesh movement (ISIS) overturns democracy in favor of its own brutal, authoritarian regime. The Brookings Institution published a study earlier this year of Daesh social media activity, identifying at least 46,000 Twitter accounts firing out around 100,000 tweets a day. There is fighting both on the ground and in the digital space, as the propaganda war is waged alongside real bloodshed. Even the argument over the movement’s name is telling – there is a push to remove the pseudo-authoritative title of ‘Islamic State’ used in the West, in favor of Daesh, as it is referred to in the rest of the world.
The internet is the home of free speech, but there are conflicting views and reports: who do you trust? Sharing propaganda is a valid expression of free speech (subject to laws against inciting hatred, of course) and trust in it rests with our individual judgment of the source’s alignment with our values and beliefs. In the U.K. we have seen the decision making that comes from misplaced trust, with families crossing the Turkish border to Syria, while refugees pour out across the same border towards Europe.
Much of the Daesh publicity is sent out as heavy bursts of tweets to build trends, with interaction between the supporting accounts, but very little outside. However, evidence is growing of a far more complex manipulation of free speech online, from an experienced propaganda machine. The recent infiltration of a Russian ‘Troll Factory’ by investigative journalist Lyudmila Savchuk has exposed a more intricate and far-reaching web of subtle coercion. This activity is in addition to the now-familiar ‘Twitter-bot’ strategies: internet researcher Lawrence Alexander’s study identified 17,650 Russian accounts operating in a similar way to the Daesh machine.
Savchuk’s article in the Telegraph talks of not only phony social media accounts but also blogs, forum participation, and responses to online journalism. She was part of a special unit of “people pretending to be individual bloggers – a fortune teller, a soldier, a Ukrainian man – [and] had to, between posts about daily life or interesting facts, insert political reflections.” Developing a fake source to this level of detail and constructing believable back-stories reinforces the impression that propaganda is in fact the free expression of independent peer opinion, and strengthens misplaced trust.
The troll factory activity was not restricted to Russia. The Guardian had long held suspicions that its online comments section was being trolled. Their moderators, who deal with 40,000 comments a day, believed there was an orchestrated pro-Kremlin campaign. Once again, this campaign played on our trust by apparently expressing independent reaction to media reports.
Restricting freedom of expression is in the realm of dictatorships and censorship – but does the corruption of freedom of expression merit a system of regulation? The techniques and the intensity of online propaganda are such a concern that in 2013 the European Union set aside $3 million to tackle Eurosceptic trolling in the run-up to the European elections. It’s a never-ending battle to present a balanced view; free speech is compromised at every turn. So who decides what is ‘positive’ free speech, and what is ‘negative’? The Brookings study neatly summarizes the problem: “Regulating ISIS per se presents very few ethical dilemmas, given its extreme violence and deliberate manipulation of social media techniques,” the study reads. “However, the decision to limit the reach of one organization in this manner creates a precedent, and in future cases, the lines will almost certainly be less clear and bright.”
The selection dilemma: where do we place our trust?
In the face of such manipulation, where do we place our trust? Research Scientist Peter Gloor’s Collaborative Innovation Networks (COINs) theory talks about the ‘Crowd,’ the ‘Experts’ and the ‘Swarm’. Following the experts or the crowd may not give you the right result, but narrowing your sample to reflect your situation, identifying the right ‘Swarm,’ can do so. Similarly, Tim Berners-Lee’s comment that we are comfortable with “a manageable and trusted slice” of information underlines the necessity of finding the right ‘swarm’ to reach an appropriate consensus.
Do we therefore retreat into our chosen communities to reduce the noise? This presents its own dilemma. It’s possible for a population of non-mathematicians to achieve the consensus that 2+2=5. The consensus diverges from mathematical principles; it has been corrupted by false assumptions. The consensus reached by families travelling towards Syria diverges from the reality that is causing millions to flee; it has been corrupted by online propaganda. Philosopher Jürgen Habermas, a proponent of consensus reality as a model of truth, refers to the ‘ideal speech’ situation where there are no external and coercive influences. Can you restrict your community to eliminate those influences, reaching a reliable consensus among knowledgeable peers – or does this selection corrupt, implicitly? Free speech may be unreliable in a self-selecting community.
Regulation, selection – or skepticism?
So where does the answer lie? Regulation comes with extraordinary volumes of ethical baggage: who watches the watchmen? Selection has its place for expert consensus, but who decides the makeup of the community? Ultimately, I believe we all have a responsibility to champion our rights to free speech, while suspending our instinctive trust and exercising a healthy level of skepticism.
Kate Baucherel is a director of UK-based software startup Ambix, a qualified accountant with 25 years’ experience across a variety of industries, and an experienced digital marketer. She is the author of Poles Apart: Challenges for business in the digital age, and works with businesses of all sizes to help them use the internet effectively to achieve their goals. She has two young children, and lives in the north of England.
With computer literacy becoming an increasingly important skill in college and the workforce, middle schools and high schools across the nation must prepare students to meet modern expectations. This is a major challenge for underfunded districts where money is scarce and the costs of equipment, high-speed Internet and training seem out of reach. But the digital divide, the gap between those who have access to computer technology and those who do not, will not go away without school investment. Although funding for critical programs is often at stake, school district representatives must decide whether it is their ethical responsibility to integrate computers into their curricula. Due to the growing digital access gap, the answer to that question should be a resounding ‘yes.’
The myriad costs associated with providing classroom computers prevent many budget-strapped administrators from adopting a new, technology-focused teaching approach. It is understandably difficult for school administrators to ask teachers to utilize digital tools if investing in technology means putting off faculty raises. Unfortunately, the digital gap will continue to widen if it is not addressed, and administrators need to prioritize the use of modern equipment.
Nice computer labs no longer set the bar in most middle schools and high schools. It is the norm for teachers to incorporate technology into the classrooms, allowing students to participate in lessons by working on computers individually or in groups. Many schools are also transitioning to one-to-one (1:1) curricula wherein all children are assigned a laptop or tablet they can use in the classroom or at home. To truly prepare students for the road ahead, schools must move beyond labs and into teaching spaces, encouraging instructors to assign tasks that involve computers.
Long-term training is the most important outcome of computer-rich learning environments. But digital investments also pay off in the short term, as teachers benefit from organizational tools, immediate feedback and ‘differentiated learning’ applications. For example, teachers can customize lessons to meet each student’s level of understanding by asking the class to watch videos or complete assignments and answer questions on their own. Depending on their speed and results, software and online applications will provide students with new content, give them time to complete work or offer review assistance on confusing sections. Accounting for the individual needs of students is much more difficult when one message applies to a diverse classroom.
Low-income schools need more than equipment to close the gap, so it is crucial that administrators introduce digital learning practices quickly. In addition to computers, they must provide high-speed Internet that can handle modern digital requirements. Schools need significant bandwidth for students throughout the school to go online, participate in interactive assignments and watch videos at the same time. Connectivity is a problem, especially in rural areas where Internet Service Providers offer few affordable high-speed options: fewer than 20 percent of educators believe their schools offer Internet connections that satisfy their scholastic needs. According to the Federal Communications Commission, 41 percent of rural schools could not obtain high-speed connections if they tried.
The federal government has stepped up and made significant strides to help underprivileged schools obtain high-speed Internet and digital learning tools. In 2013, President Obama introduced the ConnectEd initiative to provide teaching assistance and high-speed Internet to schools and libraries across the country, paying particular attention to rural regions. He lauded North Carolina’s Mooresville Graded School District and its Superintendent Mark Edwards for adopting a digital curriculum despite limited resources. Several years after Mooresville schools provided a device to each student in grades 3-12, their graduation rates increased by 11 percent. Although the district ranked No. 100 of 115 in terms of dollars spent per student, it had the third highest test scores and second highest graduation rates.
Investing in eye-catching technology and updating curriculums is not easy for all districts. The combined costs of new equipment, high-speed Internet and teacher training are difficult to cover when schools have other issues to address. As Kevin Welner, a director of the Education and the Public Interest Center at the University of Colorado at Boulder stated: “If you’re at a more local level trying to find ways to simply keep from laying off staff, the luxury of investing in new technologies is more a want than a need.”
This is an understandable problem, but seeing technology as a want rather than a need is an outdated mindset. School districts should make digital curricula a priority rather than a novelty. When budgets cannot be rearranged, schools can join together to purchase equipment at reduced bulk pricing and seek funds from communities and businesses. Major companies, such as Best Buy, offer direct assistance to schools while various nonprofits partner with corporations to secure academic resources.
Before they decide to put off investments in digital education, district superintendents should remember the numerous obstacles their students face on the wrong side of the digital divide. Underprivileged students who are not exposed to digital education in the classroom are likely to be hampered by limited equipment and Internet access at home. According to the Pew Research Center, approximately one-third of households with annual incomes under $50,000 and children between the ages of six and 17 did not have access to high-speed Internet. Any computer access helps, but schools are the primary point of exposure for many students, making it particularly important to have well-equipped classrooms and trained teachers.
Occasional investments in computer labs only scratch the surface of the problem; sporadic splurges offer limited results. To bridge the divide, district representatives must dedicate themselves to bringing computers into the classroom. Those who prioritize other issues are not being fair to today’s students. Los Angeles Unified schools Superintendent Ramon Cortines made headlines in February when he reversed his predecessor’s popular promise to give each student, teacher and school administrator an iPad. “We’ve evolved from an idea that I initially supported strongly and now have deep regrets about,” he stated, adding that a more balanced approach to spending was necessary. Ultimately, both the iPad initiative and faculty raises were put off, and, more importantly, one more class of students failed to receive sufficient digital training.
Integrating the technology needed to bridge the digital divide should be a priority for administrators in all middle schools and high schools. Despite the numerous financial problems in low-income areas, school and district administrators must realize that digital curriculums are an ethical priority, even when it means putting off other school problems and seeking out outside revenue. When you’re on the wrong side of the gap, the lack of digital access is a major burden, and schools need to chip away at it despite the ongoing cost.
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.
Google. A brand so enormous, it’s a verb. We endlessly Google, but we never Yahoo or Bing. And many of us have never even heard of competing search engines like Gigablast, Yandex, or Qwant. This enormity is the problem—the massive Google has seemingly shouldered out most of the competition, leaving the remaining few to languish in the shadow of its greatness. Google’s rapid surge from startup to internet superpower has drawn the eye, and ire, of competitors and legislators alike.
Since its humble beginning in 1998, Google has grown exponentially through the application of superior and innovative technologies to become more than just a search engine. With the financial heft to divert substantial funding toward innovation, Google is always at the leading edge of new, exciting and helpful technological advances. In a show of savvy business sense, the company has eagerly bought up competitors and other successful companies to help spread its influence across the Web and the world. Some acquisitions are familiar, such as image sharing service Picasa, video sharing service YouTube, Web feed service FeedBurner, and mobile device manufacturer Motorola. Other additions to the Google family are not as mainstream, but they help provide Google with a full complement of technology from digital coupons and facial recognition to e-commerce, cloud computing, and more, although its core product remains search engine technology.
It is this search technology with its complicated algorithms that has caused national and international antitrust coalitions to take a dim view of some of the tech giant’s business practices. Over the past several years, Google has been accused numerous times of using its vast influence and market share to hinder competition.
So what exactly is Google’s share of the Web search market? comScore, an American internet analytics company, recently released its search engine rankings for September 2015, giving Google a 63.9 percent share of the market. Peter Thiel, PayPal co-founder and author of Zero to One, a book that calls out Google as a monopoly, cites a similar percentage. But how accurate is this number? According to Priceonomics, Google’s share is much closer to 94 percent and is perhaps even higher if worldwide numbers are included. This discrepancy might be a reflection of the partnership between comScore and Google. The two have been coordinating on the creation of an audience metrics program, so it is possible that comScore has a vested interest in protecting their business ally from suggestions of dominance. It could also be a simple error of calculation although Google would probably draw even more unwanted interest by correcting it upward. Regardless, even 63.9 percent is an incredible share of a burgeoning market—one that amounted to more than $66 billion in revenue for the company in 2014.
Despite its steadily increasing progress, Google tries to downplay its success whenever it can, and with good reason. In 2013, after several years of scrutiny, the Federal Trade Commission (FTC) launched an investigation into the business practices centered around Google’s alleged “search bias.” Search bias is the term for what complainants say occurs when Google exploits its search algorithms to promote its products over those of competitors.
The company disagreed vehemently with the allegations and sought to defend itself in the public eye as well as in the courtroom. To help sway public and regulatory opinion in their favor, Google helped put on several events at George Mason University’s Law and Economics Center in Washington, D.C. that purported to increase discussion about search competition on the internet. Attendees included FTC regulators, congressional staffers and federal and state prosecutors. Emails obtained by The Washington Post revealed Google’s behind-the-scenes involvement with organizing the conference and inviting attendees. The conference proved fruitful for Google: The technology and legal experts present supported Google’s position, arguing their points in front of regulators who would later determine that there was no hard evidence of wrongdoing in Google’s changes to their search algorithms.
Google also commissioned a paper by noted conservative judge and antitrust scholar, the late Robert Bork, with antitrust professor Gregory Sidak, to help bolster their position. Bork and Sidak wrote, “That consumers can switch to substitute search engines instantaneously and at zero cost constrains Google’s ability and incentive to act anti-competitively.”
In addition to its attempts at shaping public and media discourse on the subject of its search practices, Google continues to try to influence decision-makers through monetary donations to companies and individuals that will support it. In the first quarter of 2015, Google spent approximately $5.47 million lobbying a select group of legislators on subjects such as privacy and competition issues in online advertising, openness and innovation in online services and devices, and international internet governance.
This last point, international internet governance, is a hot topic for Google as Margrethe Vestager, the European Union’s (EU’s) competition commissioner, this year made a formal complaint against the company for using its dominance to bias Web searches. This complaint marks the first time Google has faced formal charges for antitrust violations. As an initial response, Google defended its business practices in a blog post, stating: “While Google may be the most used search engine, people can now find and access information in numerous different ways—and allegations of harm, for consumers and competitors, have proved to be wide of the mark.” Vestager’s recent charge isn’t Google’s first encounter with the EU—they’ve been under investigation since 2010 for antitrust violations in the European market for promoting their products at the expense of their competition. The EU’s first three-year investigation of them ended in February 2015 with Google agreeing to “make concessions on how they display competitor’s links.”
Currently, the beleaguered giant also faces charges by Indian investigators who sent concerns regarding anticompetitive practices and search dominance to Google’s headquarters last week after a lengthy three-year investigation. They intend to pursue the matter formally, pending further fact-finding.
So, is Google dominance a reality or just a smoke-and-mirrors attempt by competitors and governmental agencies to slow down the company’s explosive growth? According to Investopedia, a monopoly is a single company or group that owns all or nearly all of the market for a given product or service. While Google isn’t the only company that provides search services, it does have the lion’s share of the market, giving it the ability to manipulate search results that can indeed hinder competition.
That, coupled with Google’s sometimes strong-armed tactics with competitors, makes it suspect. In the case of Yelp, a review service that has a strong following in the restaurant category, Google used its fiscal power to attempt to purchase the successful service outright. When Yelp turned down Google’s offer, the company responded first by buying competitor Zagat, and then by “borrowing” Yelp results to support Google’s local search results content. They also created programs such as City Experts (recently replaced by Local Guides) to build a network of reviewers and experts that can serve local areas in much the same way as Yelp. In another move to swipe market share from local review competitors, their Local Carousel, a series of images and ratings that pops up during local information searches, is positioned to grab consumer attention and shift it away from organic search results in favor of the Google product.
Detractors have long pointed to Google’s ever-changing algorithms as a means of supporting the case that Google is manipulating search results on purpose and for its financial gain. Google may be doing this, but even if it is, these actions are protected under the auspices of free speech. Google has long argued that, just as an editor chooses which stories to print or not, and which to put on the front page, Google’s algorithms edit what the consumer sees. This type of editorial control is protected under the First Amendment, regardless of how the results are shown.
Even though Google grew its enormous market share fairly through exceptional service and cutting-edge technologies, it is still possible, and even easy, to access the internet without using its services. The open architecture of the internet gives consumers direct access to websites without using a search engine. Web browsers provide customizations that allow content to be accessed sans search engine, and mobile apps have proliferated as a new way of searching for needed content. If Google manipulates its algorithms to support its products, to the consumer it is not that different from watching a news channel that gives fuller coverage to stories that support its political viewpoint, or a magazine that publishes advertisers with whom it has partnerships.
Regardless of how you view Google’s alleged dominance of the market and whether or not you agree with their supposed manipulation of search results, it is clear that there is a long way to go in defining the boundaries of online super companies. Their structure and essence are clearly a challenge to conventional thought about monopolies since current antitrust legislation is focused on businesses that developed during an industrial age rather than an informational one. Internet superpowers like Google, Amazon, PayPal and Facebook have an opportunity to rewrite the definition of fair competition for a global online community.
What is evident from the controversy surrounding Google’s practices is that there is significant confusion over what constitutes anticompetitive actions in the online world. Our current scramble to adjust to a post-industrial economy puts the cart before the horse by waiting for concerns to arise before addressing them. To rectify this, scholars, business people, legislators and consumers should unite to come to a mutually agreeable understanding of issues faced exclusively by online businesses. This understanding should encompass not only best business practices but also a sensitivity to the difference between competitiveness online versus in a brick-and-mortar world. A litmus test for anti-competitiveness constructed without a specific business in mind (e.g. Google) would be an excellent first step toward a protective antitrust policy geared toward the information age.
Nikki B. Williams is a freelance writer based in Houston, TX. She has written for a variety of clients from the Huffington Post and D.C.-based political action committees to Celtic jewelry designers in Ireland. You can contact her through her website, nikkibeewilliams.com.
October is National Cyber Security Awareness Month, and the fact that it’s a month instead of a day speaks volumes about the growth and prevalence of cyber crimes.
The international security company Gemalto proclaimed 2014 as the year of mega breaches and identity theft. According to the company’s breach level index:
– 1,023,108,267 records were breached in 2014
– There were 1,541 breach incidents; the number of breached records rose 78 percent from 2013
– North America accounted for 76 percent of total breaches
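The scale of that annual figure is easier to grasp as a rate. A quick back-of-the-envelope calculation (the yearly total is Gemalto's; the per-second and per-day breakdowns are simply derived from it):

```python
# Rough rate of data-record loss implied by Gemalto's 2014 total.
RECORDS_BREACHED_2014 = 1_023_108_267
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ignoring leap seconds

per_second = RECORDS_BREACHED_2014 / SECONDS_PER_YEAR
per_day = RECORDS_BREACHED_2014 / 365

print(f"~{per_second:.0f} records lost or stolen every second")
print(f"~{per_day:,.0f} records lost or stolen every day")
```

In other words, roughly 32 records were lost or stolen every second of 2014, around the clock.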
And 2015 is shaping up to be another stellar year – it has already produced high-profile security breaches involving Ashley Madison, CVS, Anthem, and even the IRS.
So far, the Ashley Madison hack has been the most high-profile breach of the year. Ashley Madison is a social-networking site for married men and women looking to find partners for extramarital affairs, and claims to have 40 million users. To date, the site’s hackers have released seven years of credit card data, in addition to names, addresses and phone numbers – and the users’ desired preferences in potential partners. This breach has resulted in public embarrassment, marital strife, possible blackmail situations, and at least one suicide.
And while the breaches themselves are highly publicized, much less is known about the people behind the scenes who are charged with protecting company data, their responses to data breaches, and the ethical decisions they face.
A 2015 report by Alien Vault, a threat intelligence and security management provider, shines a spotlight on the many issues facing security professionals. Below are the responses to three questions selected from the survey portion of the report, along with ethical analyses of the respondents’ answers.
Question 1: Do you ever visit hacker forums or associate with black hats to learn about the security you need?
Javvad Malik, the report’s author, notes that some companies forbid interactions with black hats. A black hat is a computer hacker who breaks into computers and networks for malicious reasons, as opposed to white hats (who may be employees or consultants) who break in to locate and identify vulnerabilities. However, if the type of information needed to mount an effective defense is not available through legal channels, roughly half of respondents feel they need to do whatever is necessary to obtain credible data in a timely manner.
I spoke with Abraham Snell, who has an MBA in Technology Management from Auburn University and is a Senior IT Infrastructure Analyst at the Southern Company in Birmingham, Alabama. He views visiting hacker forums or consorting with black hats as an instance in which the end justifies the means. “It is a brilliant idea,” Snell said. “It is just the reverse of criminals getting police best practices so they can be more successful criminals. In this case, the side of right is learning about the dark side before they strike. In some cases, this will be the only warning of things to come.”
Question 2: What would you do if you found a major vulnerability on a company’s system or website?
– 61.7 percent: Privately disclose to them
– 12.0 percent: Publicly fully disclose
– 9.8 percent: Disclose without releasing details
– 8.2 percent: Tell your friends
– 5.5 percent: Claim a big bounty
– 2.5 percent: Sell on the black market
While privately or publicly disclosing the vulnerability seems the most logical choice, it is not uncommon for companies to threaten legal action against the person reporting the security risk. Fortunately, only a small percentage of respondents would seek financial compensation, but it is troubling that almost 18 percent would either do nothing or just tell their friends. However, if companies provide a hostile environment in which this type of disclosure is not welcome, can security professionals be blamed for their lackadaisical attitude?
According to Snell, there are definitely ethical issues involved in the next steps taken when a vulnerability is discovered. “Even if this type of disclosure is not welcome, you have a moral obligation to reveal the vulnerability,” Snell said. “If the information is breached, people may have their financial and personal information stolen, even their identities may be stolen. If you fail to sound the alarm, you’re just as guilty as the people who actually steal the information because you knew it could happen and you did nothing.”
After viewing the other choices selected by respondents, Snell said they are negligent at best, and most likely criminal in most states. “Telling your friends, unless they are security experts or regulators, is the same as doing nothing,” Snell said.
Regarding the bounty, Snell said, “I’m unclear on how you claim a big bounty unless it becomes a major international issue because companies will not pay their own employees to do what they are already paying them to do.” And if the employee tried to claim a bounty anonymously, that could lead to various legal implications. “The vast majority of people who do what Edward Snowden did end up like he is … a man without a country,” Snell said. He also explained that selling the info on the black market is both unethical and illegal.
Question 3: If your company suffers a breach, what is the best course of action?
– 66.8 percent: Use the event to convince the board to give you the budget you need
– 25.7 percent: Tell the regulator, pay the fine, and move on
– 9.0 percent: If nobody knows, just keep quiet
– 6.6 percent: Go to the media and brag about how you ‘told them so’
Overwhelmingly, the survey respondents feel that the only way they can get the resources they need is in the aftermath of a major cyber attack.
In fact, former White House Cyber Security Advisor Richard Clarke once said, “If you spend more on coffee than on IT security, you will be hacked. What’s more, you deserve to be hacked.”
Darryl Burroughs, deputy director of IMS Operations for the City of Birmingham, Alabama, shared an interesting perspective with me: “If a compelling case was made to increase the cyber security budget and the company blatantly refused to do so, the ethical dilemma rests with the Chief Financial Officer and others who make budget decisions that do not take into consideration IT requests,” Burroughs said.
He added, “The real question is what unethical decision did they make when they funded something less important than the company’s security?”
And that’s also a question that Sony’s Senior Vice President of Information, Jason Spaltro, has likely asked himself over and over again. Back in 2007, Spaltro weighed the likelihood of a breach and concluded, “I will not invest $10 million to avoid a possible $1 million loss.” At the time, that may have sounded like an acceptable business risk. However, in 2014, when the company’s data breach nightmare dominated the headlines – and late night talk show monologues – for months at a time, that $10 million would have been a sound investment.
Snell said there are a lot of factors that determine if using a breach to increase the budget is ethical or not. “I wouldn’t say most companies wouldn’t increase the budget anyway, but I would say that many current and previous executives are not trained in technology, so the threat of security breaches is not a topic that resonates with them,” Snell said.
As a result, he thinks that in many cases it takes a major incident to get funding funneled to the right programs that will protect the company. “The problem with security is that it is mainly a cost when things are going well. You only see the wisdom of the investment after a breach occurs or is attempted.”
On the other hand, Snell said if the budget is adequate, and fear mongering is being used as a tactic to get more money, that is definitely unethical.
Negotiating With Cybercriminals
The process of retrieving stolen data from cybercriminals is another ethically murky area for security professionals. A recent whitepaper by ThreatTrack Security reveals that 30 percent of respondents would negotiate with cybercriminals for data held hostage.
However, 22 percent said it would depend on the stolen material. Among this group:
– 37 percent would negotiate for employee data (social security numbers, salaries, addresses, etc.)
– 36 percent would negotiate for customer data (credit card numbers, passwords, email addresses, etc.)
I also spoke with Dr. Linda Ott, a professor in the department of computer science at Michigan Technological University, who also teaches a class in computer science ethics, about negotiating with cybercriminals.
As with most ethical questions, she does not believe there is a simple answer. “One might argue that a company should be responsible for paying whatever costs are necessary to recover the data since it was presumably because of the company’s negligence that the information was able to be stolen,” Ott said.
She explained, “However, unlike paying a ransom for the safe return of a person, the return of the data does not guarantee that the cybercriminals no longer have the data. And if they have a copy, paying the ransom merely amounts to enriching the criminals with no gain for the company whose data has been compromised.”
However, Ott noted that in certain situations the case for paying the ransom would be stronger. “For instance, if the company did not know what employee information was compromised, one might argue that they should pay for the return of the data,” Ott said. “In this scenario there is a benefit to the victims of the crime since they could be accurately notified that their information had been stolen.”
Big Brother: Friend or Foe?
ThreatTrack’s survey also reveals a range of opinions regarding the government’s role in cybercrime extortion investigations:
– 44 percent said the government should be notified immediately and granted complete access to corporate networks to aggressively investigate any cybercrime extortion attempts
– 38 percent said the government should establish policies and offer guidance to companies who fall victim to cybercrime extortion
– 30 percent said companies should have the option of alerting the government to cybercrime extortion attempts made against them
– 10 percent said the government should make it a crime to negotiate with cybercriminals
Ott said the fact that most companies do not want government intervention is problematic. “Without government investigations of these matters, the cybercriminals remain free to continue their illegal activities,” she said. “This can ultimately lead to the theft of information of many more people.”
However, she explained, “Companies tend to do their analysis based on consideration of the impact on their reputation and the potential impact on their stock price, etc. They have little motivation to consider the bigger picture.”
So, how long did it take you to read this article? If it took you five minutes, 9,735 data records were lost or stolen during that time frame. That’s why Burroughs concludes, “The question is not if you will be breached – the question is when.”
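That closing statistic follows from Gemalto's 2014 total, assuming records were breached at a uniform rate across the year. A quick back-of-the-envelope check (the figure is from the Breach Level Index cited above):

```python
# Back-of-the-envelope check of Gemalto's 2014 Breach Level Index total,
# assuming records were breached at a uniform rate through the year.
RECORDS_2014 = 1_023_108_267
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in 2014

per_minute = RECORDS_2014 / MINUTES_PER_YEAR
print(f"~{per_minute:,.0f} records per minute")           # ~1,947
print(f"~{per_minute * 5:,.0f} records in five minutes")  # ~9,733
```

The result, roughly 9,733 records in five minutes, is in line with the article's 9,735; the small difference comes down to rounding.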
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
With the 2016 presidential elections now on the horizon, there’s no escaping ‘the campaign’—whoever’s that might be. The proliferation of political advertising across all media, from billboards to iPhones, subjects the voting public to a constant barrage of targeted advertising. The strongest campaigns, in an attempt to master cross-platform messaging, have been making innovative strides in marketing and media, which certainly adds a sense of novelty to an otherwise staid political contest. One example is the gradual introduction of digital crowdsourcing into the political process. On its surface, crowdsourcing is an essentially democratic notion, though beyond the surface, it is still exclusionary by nature. As many see it—including attorney, activist, and Harvard Law Professor Lawrence Lessig—the tricks of the contest may be new, but the game is the same…and it’s only getting more expensive to play.
As a primary indicator of this fracture, Lessig, in a recent article on Medium, points to the issue of money in politics, which is featured prominently in debates and campaign promises, but rapidly fades to the background in office. Indeed, there is a general consensus that money—and associated political and corporate corruption—should be at the top of the president’s priority list; second only to “creating good jobs,” as Lessig cites from this 2012 Gallup poll. But in practice, an official taking on finance reform is setting herself on a road towards destruction.
“To take on the influence of money is not to take on one party, but both parties,” Lessig writes. “The enemy of Congress is a failed president. […] So on the long list of promises that every normal president enters office with, the promise of reform always falls to the bottom.”
Broken promises of campaign finance reform may not be the explicit fault of a hypocritical chief executive, but rather the attrition caused by the mechanics of American politics in general. This leads us to Lessig’s central claim: “That on the issue of fundamental reform, an ordinary president may not be able to lead.” Instead, Lessig proposes the idea of a trustee president, which he defines as a prominent, “well-liked leader” who declares her presidential candidacy on a single issue. After this issue is resolved, the trustee president would step down and hand over the reins to the VP for the remainder of the term. For finance reform, this trustee president “would use every power of the executive to get Congress to enact fundamental reform”—and then move on.
The idea of a trustee president is a unique proposition—perhaps even a ‘hack’ of the political system. “Our democracy will not heal itself,” Lessig writes. “Reform will not come from the inside alone. It needs a push.” Which is why, this past August, Lessig announced that if he raised $1 million by Labor Day, he would run for president on this exact model, in order to pass what his team has dubbed the Citizen Equality Act of 2017. As of Sept. 6, with money raised from 10,000 unique donations, Lessig is now attempting to run on the Democratic ticket (though according to his recent op-ed at Politico, the Democratic party isn’t being very receptive). The drama of Lessig’s thought-experiment-turned-real-experiment will be interesting enough to follow in the coming months, but what’s unique about his proposed Citizen Equality Act is that, in addition to being modeled on existing reform proposals, his will also “crowdsource a process to complete the details of this reform and draft legislation by the start of 2016” (emphasis added).
Crowdsourcing is a digital-age concept and term, though the notion of crowdsourcing in politics is a fundamentally democratic idea. However, that crowdsourcing is a digital concept keeps its referents firmly grounded in the present, as Tanja Aitamurto suggests in her 2012 book, Crowdsourcing for Democracy. She defines crowdsourcing as “an open call for anybody to participate in a task open online, where ‘the crowd’ refers to an undefined group of people who participate.” This is in contrast to outsourcing, where a task is specifically assigned to a defined agent. Popular uses for crowdsourcing range from funding product or project development, as with sites like Kickstarter and IndieGoGo, to more refined applications, including urban planning, product design, mapping, species studies, and even solving complex technical or scientific issues. It’s only a small jump for crowdsourcing to be used for policy and reform, especially in democratic contexts, as Aitamurto demonstrates through various international case studies, including constitution reform in Iceland, budget preparation in Chicago, and the White House’s We The People petition system.
Aitamurto writes: “When policy-making processes are opened, information about policy-making flows out to citizens. […] Opening the political process holds the potential to increase legitimacy of politics, and increasing transparency can strengthen the credibility of policy-making.” In an ideal or direct democracy, especially in a modern context, crowdsourcing just makes sense, especially as a tool for mass communications and encouraging public participation. Our increased reliance on technology for economic participation, communication and citizenship casts crowdsourcing as a natural outgrowth of our cultural evolution, and in this way, it also just makes sense that crowdsourcing would be applied to politics. But ethically, crowdsourcing’s promise of cross-platform policy participation is not so equitable, especially when we begin to account for income and literacy as prerequisites for entry into the digital crowdsourcing process.
A few examples of how crowdsourcing is exclusionary, despite its ideal democratic applications, are the person without a smartphone, the family without an Internet connection, or the digitally illiterate citizen (i.e., the person who has never sent an email or used a computer). You can’t help crowdsource if you’re not part of the crowd; that is, if you’re on the wrong side of the digital divide. Of course, even before the digital age, income and literacy were long limiting factors for democratic participation; they’ve simply found new media for the modern age. On this point, Aitamurto clarifies: “Crowdsourcing […] is not representative democracy and is not equivalent to national referendum. The participants’ opinions, most likely, do not represent the majority’s opinion.”
The natural limitations of the crowdsourcing process, as Aitamurto suggests, can be read as a downside of crowdsourcing in a democratic context, but if that democracy is broken—as Lessig and many others say it is—a crowdsourced tactic, despite its ethical complications, might be exactly the sort of push a broken democracy needs. But in order for it to really work, everyone needs to agree to the plan and trust that its leaders will deliver on their promises. A task like that will take a lot of work, supercharged rhetorical finesse, and a massive amount of popular traction that Lessig currently lacks; a Suffolk University/USA Today poll from Oct. 1 shows a mere 0.47 percent of respondents supporting Lessig. His plan makes sense, and the notion of entrusting the trustee with a crowdsourced reform is a novel reflection of idealized democratic values, but novelty in the political process is a mixed bag—especially in the midst of a pay-to-play political paradigm where not even Lessig could get a foot in the door without a cool million. In this way, crowdsourcing, despite being a novel digital-age concept, might simply be more of the same.
Benjamin van Loon is a writer and researcher from Chicago, IL. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.