Have you ever wished you could Google your own life experience? Have you worried about what you’d find if you could? In our Cult-of-Information age, it turns out that the technology to achieve this—also known as lifelogging—isn’t far off from total market saturation.
In May 2016, Sony made waves after it received a patent for a smart contact lens that records what you see. Narrative, an independent brand, sells a wearable camera with a 30-hour charge that takes a picture every 30 seconds. Kapture, a startup based in Cincinnati, Ohio, sells a Bluetooth-connected bracelet that continuously records audio with a 60-second buffer. At the speed these and other lifelogging technologies are improving and gaining users, it’s difficult to pause and ask, is this actually what we want or need?
One such pause came in 2011, when the UK’s popular Channel 4 series, Black Mirror—an unofficial 21st century update of The Twilight Zone—aired its now well-known episode, “The Entire History of You.” The episode takes place in a contemporary reality where people have capsules implanted in their heads recording everything they see and do, with a user interface allowing for memory searching and playback. Suspecting infidelity by his spouse, the episode’s protagonist replays and obsesses over particular memories until he destroys all of his relationships and goes insane. Rod Serling of The Twilight Zone would be proud.
While sensationalistic, the technophobic anxieties laced into “The Entire History of You” are common at times of technological change. People were scared of cars, record players, and telephones, too. But fears of technology aren’t like fears of spiders and heights; they’re often grounded in uncertainty around ethical and ideological freedom. This is especially true when the technological innovations are no longer focused on reducing physical limitations—as bikes did for transportation—but are instead enhancing mental and psychological abilities, where the limits, and the dangers of exceeding those limits, remain vague.
“We know deep down inside that not everything needs to be remembered, not everything we want to remember, and not everything needs a piece of technology to be remembered,” said Kapture co-founder Mike Sarow in a recent phone interview.
Like the implanted capsule in Black Mirror, Kapture is a physical device that captures everything, and it’s up to you if you want to archive it. If you’re a songwriter and get struck by a hook, or you just heard your boss say something quotable in your weekly team meeting, you can send the audio to an app with a tap of your Kapture bracelet. So far, reviews of Kapture say the hardware is clunky. Moreover, hearing Sarow’s visions of Kapture’s eventual transformation into a total platform technology, always recording from everywhere, the bracelet seems almost anachronistic. But as an entrepreneur, Sarow also understands that the physical device softens the market to a more disruptive change. “With technology like this, you struggle with being too early or too provocative,” Sarow says. “You need to struggle with the storm until people actually become okay with it, and realize that it’s helpful.”
Sarow came up with the idea to develop Kapture because he wanted to remember something one of his friends said. Today, however, he focuses on its value for business—specifically, its potential usefulness in meetings where everyone seems distracted. “These days, there’s a decrease in value of what it means to pay attention to people,” Sarow says. Kapture is designed to correct this devaluation by using technology to compensate for a perceived deficiency in communication and interpersonal interest.
As McLuhan famously said, all technology is an extension of ourselves. While on one hand Kapture is an extension of listening and paying attention, it also extends the function of memory, like photographs and video do. But what makes Kapture different, and part of a new evolutionary wave in lifelogging technology, is that it’s A) always listening and B) extending a sense—sound—most people still prefer to keep to themselves. Modern culture is visual and the image reigns supreme, which is a relatively new historical development in a global human culture that used to prioritize oral tradition above all else. Kapture hearkens back to this tradition, but through a modern, mediated lens, designed largely around a perceived deficit in our mental ability to remember—or our interest in remembering—what we’re hearing.
This perceived sensory deficit is based on a broader and more primordial philosophy of mind that sees the mind and the brain as distinct, and views the brain’s function as a gatekeeper between everyday cognition and the paralysis of absolute consciousness. If memories “live” in the mind, it’s the brain’s job to keep them organized, chronological, and usually inaccessible. Anyone who has experimented with mind-altering chemicals, or who has had a near-death experience, can attest to the strangeness of what happens when something “extends” the brain. Your memory expands, your emotions deepen, your meaning and self-perception shift—but only temporarily, because the brain can’t handle sustained awareness of the mind without impacting our productivity and even our linguistic abilities.
Some people have the blessing of a photographic memory, and lifelogging technologies have the potential to bring average people up to at least that level. But when the process of remembering is mediated, along with the memories themselves, whose memories are we actually collecting and accessing? What about when these memories can be hacked, altered or simply deleted? These questions are central to lifelogging technology, which will eventually reach a Malcolm Gladwell-style tipping point. If you can envision intellectual property lawyers and philosophers wrestling with the same questions, you know you’re entering unexplored ethical territory. As such, there are two main ethical considerations would-be lifeloggers and developers should pay attention to with the growing Gospel of Re-Do:
Most importantly, developers and marketers need to ask which parts of our lives deserve to be “extended,” and which should be left alone. Once they have an answer, they need to ask: according to whom? Lifelogging and other technologies are engineered based on what we perceive as limitations—in this case, with memory—but without a holistic view, we can’t really know our strengths and weaknesses; we can only guess. The limiting power of the brain over the mind seems like a weakness, but it may actually be a strength: it keeps us focused, it forges will and determination, and so on. Technology-as-extension forms a perceived bridge between these weaknesses and so-called strengths, but makes it hard to see what’s on the other side. With lifelogging, at least we can remember what we’re seeing along the way.
Benjamin van Loon is a writer and researcher from Chicago. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.
The New York Times calls its custom-crafted dashboard “Stela” — which stands for “story and event analytics.” According to Shan Wang’s report for Niemanlab.org, the Times makes this user-friendly system available to staff so they can see an array of data about their articles:
“We were looking for ways to help reporters and editors get feedback on the things they were being asked to do online, such as tweaking headlines, promoting to social,” Steve Mayne, lead growth editor at the Times, said. “And we believed it would be much more effective for us to actually have a tool to show reporters how, for instance, certain actions directly resulted in more people reading their stories.”
The system as described by Wang is impressive and effective, and has become fairly well adopted inside the Times. As media organizations gain greater access to these instant report cards, several questions arise:
Loyola’s Don Heider (SoC Dean) and Jill Geisler (Bill Plante Chair in Leadership and Media Integrity) sort it out.
Don Heider: I think in this case, like so many, context is key. I can see using analytics as is described in the NY Times piece to really help reporters and editors be more responsive to the audience. I think most of us at this point realize that journalism today and in the future must be more interactive, and this gives journalists a tool set to pay attention to how readers are responding to their stories, headlines, and even photos and videos.
The worry, of course, is about the “P” word. Will journalists begin pandering to readers to try to build views and clicks? When I said context above, I meant context as in: who is in the newsroom? If you have a veteran crew of writers, reporters and editors, I think there is little risk. Managers can help by making sure the mission of the organization is clear, along with the goals for using analytics information. What are you seeing among the managers you teach and coach in news organizations?
Jill Geisler: Managers vary greatly when it comes to analytics. Some are protective of performance data – just because they like to control the flow of information in general. Some are conservative about sharing, fearing it will be misinterpreted and cause other “P” words like “panic” or “paranoia.” Some are still learning analytics themselves.
And then there are folks like my friend Marty Kady, editor of POLITICO Pro. Here’s what he told me:
“On my team, I’ve gone fully in favor of providing metrics (though we don’t judge our paywall products by total clicks). We have provided open rates for email newsletters and alerts, subscription renewal rates and a full list of subscribers to all the section editors. If you want people to feel fully bought in to the news and product mission, I think transparency in how we’re doing is essential.”
I like Marty’s transparent approach. With transparency comes additional responsibility for leaders. To share analytics effectively, think: Strategy, Success and Soul. Explain your organization’s strategy and how the metrics support it. Define clearly how the metrics do or don’t measure the success of the whole team and each individual member. Never forget that data-driven organizations can easily lose sight of values, their soul – without strong leadership.
Here’s my at-a-glance guide for sharing analytics:
Strategy
- How do the metrics we’re sharing fit with our overall strategy?
- What are our priorities?
- Knowing that digital strategy must be nimble, how do we explain a quick change in focus?

Success/Team
- How do we know we’re moving in the right direction?
- Who or what should we be judging ourselves against?
- How can we use data to work better as a team, rather than in silos?

Success/Individual
- How does data factor into the evaluation of an employee?
- How can we help employees learn to interpret data in context?
- How do we make certain that analytics aren’t the sole measure of a person’s contributions?

Soul
- How clear are we about what we stand for as an organization?
- Do we make it clear that metrics won’t hijack news judgment and values?
- Do we talk about values in the same conversations as analytics?
That said, let me ask you, Don: What are the biggest ethical land mines you’d encourage media organizations to guard against when it comes to analytics? What’s your top-five list?
I don’t know about a top five, but here are things I think about:
It sounds like Politico has an excellent approach. But do most newsrooms have the resources they need to put metrics into context?
As I was saying above, I worry that analytics without context can lead journalists to conflate popularity (impressions, page views, etc.) with journalistic importance. We always have to come back to that question: what’s our journalistic purpose? Why are we journalists, and what is our duty? I would argue, even in a digital click-through age, our duty is to inform people, serve as watchdogs, and to tell important stories well. There are times when the most important stories do not perform as well as the less important stories (such as the latest Kardashian saga). That never releases us from our obligation to try to do our best to inform.
We can use analytics to help us gain a broader understanding of what the public wants and needs to know, but we have to dig a little, examine trends and even ask the public from time to time; page impressions alone do not do that effectively. The bottom line: analytics have to be aligned with journalistic purpose. Conversely, following the wrong metrics can lead journalists in the wrong direction (Buzzfeed’s clickbait comes to mind).
As a researcher, I can also tell you that one set of data never tells you the whole story. There are always hundreds of variables that can influence an outcome, and this definitely holds true with web analytics. Most often a data set tells you what; it almost never tells you why.
Web analytics will never replace a human being’s ability to develop sources, ferret out a story or witness an event. Computers, algorithms, data analysis all become really helpful and powerful tools when paired with human intelligence.
I also think the more we look at analytics, the more we realize that the future of journalism will be based upon building relationships with our audience: engaging people in what we do, including listening to their ideas and feedback, even meeting them face-to-face. I think if we can really engage people in what we do and how we do it, there’s more chance they will financially support our endeavors.
Finally, I worry that if newsrooms become overly dependent on metrics, it may discourage risk-taking. We don’t want to get into the well-worn grooves of doing what works over and over. I have often seen a crazy idea do more to break new ground and engage people than just repeating the same kind of news over and over again.
“If you’re not paying for the product, you are the product.” This phrase has been a popular way to describe the tradeoff we make for utilizing the many free and convenient services available online. While many consumers try to fiercely guard their personal information, it would appear that these attempts are in vain. You’re only as strong as your weakest link, and every friend or colleague is a potential chink in your armor.
Contacts For Hire
For example, in the past few years, companies began checking the social media profiles of job candidates and employees – in fact, Mashable reported on this trend as far back as 2012. This practice is illegal in a handful of states. However, according to 2016 data from the National Conference of State Legislatures, some legislation designed to protect job seekers and workers failed in seven states this year, and also failed in 10 states last year. (Legislation is either pending or it has not been introduced in several other states.)
Here’s the problem with checking social media profiles. Some companies aren’t just performing a cursory search; they’re asking for login and password information so they can see everything. In fact, some online job applications won’t allow individuals to even submit their applications unless they have authorized social media access and provided their usernames and passwords.
If that type of access is downright illegal in some states, isn’t it at least unethical in the rest of the country? I asked several experts to weigh in on this subject.
According to Tim Sackett, a human resources and recruiting talent pro as well as the president of HRU Technical Resources, most employers are scouring the internet before they make a hiring decision – whether they tell you or not. “I would rather an employee just tell me this is part of the deal – plus, many candidates have their profiles locked down, so if you don’t give me access, there is nothing to see,” Sackett said. And he added that “nothing to see” can be a red flag that causes an employer to question what that person may be trying to hide.
However, from an ethical standpoint, Sackett explained that whether asking for social media login information is right or wrong depends on factors such as the employer, the clients, and the company’s culture. “The answer is to work for a company that doesn’t have issues with your vices,” said Sackett. “If you like to party and post pics with your drunken friends on Saturday night, work for a company that is cool with that. If you and your friends like to dress up like Hello Kitty on your off time, work for a company that is cool with that.”
Almost half of the companies in a recent survey by the Society of Human Resource Management admit to using social media to screen applicants, and one-third report that they have disqualified applicants based on the information they found.
Jonathan Westover, associate professor of Organizational Leadership in the Woodbury School of Business at Utah Valley University and a human resource management consultant, agrees that companies are probably looking for red flags. “Will the applicant embarrass the company? Are they engaged in behaviors that might lead to poor performance? Hiring managers want to know this before they make a decision.”
And Westover thinks it’s possible that companies are also looking for a strong professional network – especially in highly-skilled or managerial jobs. “They may leverage candidates with strong networks, such as LinkedIn, in the recruitment and headhunting of other highly-skilled potential workers (for example, in the high tech industry).” But Westover said there are still underlying privacy issues – and he thinks that this type of access can be abused and used for other purposes.
One of the major concerns is how this information is used, according to Don Mayer, J.D. chair of the Department of Business Ethics and Legal Studies, and professor-in-residence at the Daniels College of Business at the University of Denver. He questions the ethics of this practice because the candidate or employee is not given the opportunity to explain any information or associations that the company may consider to be derogatory.
“Motives may vary, but I’m not clear on what criteria companies would use to disqualify someone because of their contacts, or because of comments made to friends on social media,” Mayer said. “Are psychologists hired to do some sort of psych-analysis of patterns and ‘likes’ from Facebook?”
The possibility of disqualifying a candidate based on their list of friends is a serious ethical issue to Karen Young, SPHR, of HR Resolutions. “I’m concerned that all of a sudden, a company’s ‘valid business reason’ for not hiring an applicant is because someone looked at their Facebook page and saw that some of their connections include LGBT, Hispanic and African American friends.”
Also, Young believes the social media access requirement may reduce the number of qualified people who would actually complete the application process.
There are other ethical issues regarding this requirement, according to Kate Jones, a partner in the Kutak Rock law firm. “Providing your social media credentials to a potential employer may not only infringe on your privacy, but also the privacy of your friends and contacts on social media,” Jones said.
Jones also explained that when applicants share their login credentials, they’re making a conscious decision to do so. “But your friends and contacts on social media do not have an opportunity to make that choice.” Jones said they might have chosen to share certain information only with certain friends and contacts. “Sharing your login credentials may affect your friends’ privacy,” she warned.
But should the bulk of the ethical blame rest on the job seeker or the potential employer? After all, no one is forcing applicants to agree to these terms. They can choose to terminate the application process and seek employment elsewhere. But is that a realistic expectation?
Keith Swisher, ethics consultant at Swisher P.C., thinks it’s an abuse of the potential employer’s power. “People need jobs, and employers should not exploit that need by, for example, requiring access to private communications.” Regarding employees, Swisher says, “Performance interviews, probationary periods or on-the-job observations would provide far more accurate and less intrusive information than the screening of private, out-of-office communications and associations.”
In 2015, The Atlantic reported that Facebook secured a patent that would allow banks to determine a potential borrower’s creditworthiness by analyzing the credit ratings of the individual’s social media connections. If the average credit rating of the individual’s friends happened to be below the minimum credit score, the individual’s application would be rejected – even if that person had good credit. Fortunately, Facebook decided against proceeding with the project.
Facebook also creates “shadow profiles” based on the information provided by an individual’s friends. For example, let’s say you’re a Facebook user, but you’ve given the company the email address you use for junk mail, and you’ve never supplied other information, such as your phone number.
However, if your friends have ever used Facebook’s “find friends” feature and allowed Facebook to scan their mobile phone contacts, all of this information is stored on Facebook’s servers. In other words, Facebook may have all of your email addresses and phone numbers stored in a shadow profile.
Facebook isn’t alone in this practice. One day, M. Forrest Abouelnasr was exchanging emails with a friend, and the friend switched to his business address. A few days later, when Abouelnasr was on LinkedIn, he noticed that this friend’s name popped up as someone he may know and want to connect with – although the two were already LinkedIn connections.
Abouelnasr realized that LinkedIn assumed the new email address belonged to a different person who didn’t have a LinkedIn account, and he wanted to know how LinkedIn was able to track his email contacts. In his blog, Abouelnasr shares the transcript of his conversation with LinkedIn’s customer service department.
When I contacted Abouelnasr about his experience, he told me that, at first, the rep erroneously stated that if a user had LinkedIn open and also had their mail server open (Gmail, Yahoo, etc.), LinkedIn would grab those email contacts. “This is impossible, and the company representative later corrected the mistake, saying that instead what the company actually does is collect a user’s smartphone contacts when the LinkedIn app is installed on their smartphone.”
How many users upload their contacts to various apps without stopping to consider that their friends and colleagues may not want their personal information exposed to a third-party? How many users stop to obtain permission?
But is it really such a big deal that LinkedIn, Google, Facebook and other companies are collecting information on people from their friends and without their knowledge? Mayer said he believes it is a big deal. “In terms of trustworthiness – which is a core ethical value to most people, and even to many corporations striving to be more ethical – this is not an entirely straightforward process,” he said. Also, Mayer stresses that companies don’t really explain what they intend to do with the information.
Among other things, we now know that companies sell information to data brokers. A CBS News report revealed that Acxiom, the largest data broker, has roughly 1,000 tidbits of data on over 200 million Americans. On top of that, Acxiom – along with thousands of other data brokers – sells various types of lists to other companies. Some of these lists might include people with gambling habits, gun owners, members of LGBT organizations, or patients with specific medical conditions. These groupings, and an assortment of other information, help advertisers market to specific individuals. But not all of the information is used for advertising. The information is also sold to insurance companies, banks, hospitals, schools and other organizations to help them make risk assessments.
This brings us back to the weakest link: You can take every conceivable precaution to protect your privacy, but be advised that it only takes one friend or colleague – through sheer carelessness, willful ignorance, the desire for convenience or the lure of a job – to create a vulnerability that companies can, and will, exploit.
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.
Thanks to the internet, smartphone apps and other technological advances, we seem to be headed in the direction of a cashless society in which physical currency is replaced completely by an internet connection and a bank account. For instance, I can pay my portion of a bar tab with my phone using the payment-splitting app Venmo. I can send friends money through PayPal’s website. I can even buy Cheez-Its at work sans cash, because there’s a credit card reader attached to the vending machine. Using near-field communication, the tech-savvy among us can breeze through the retail checkout process by waving an Apple Smartwatch or Visa payWave credit card over a contactless terminal.
All these innovations have happened in the past few decades. At this rate, could cash become completely obsolete in the near future? And is that a good thing?
To answer the first question: Certain cultures are certainly heading in that direction. A 2013 article on Swedish news site The Local was headlined, “Swedes set for cashless future.” The article stated that only 27 percent of retail purchases involve cash (not including those made online), and some public buses in the country refuse paper money and coins altogether. Sweden’s central bank, the Riksbank, told The Local, “Neither retailers nor banks have any obligation to accept cash.”
Convenience-wise, a cashless society certainly sounds appealing (imagine a future with no dirty, wrinkled dollar bills or coins buried in the couch). Privacy-wise, it’s harder to say. Especially when you consider the existence of an electronic record of your most sensitive purchases, including items such as drugs.
One example: marijuana
On Oct. 1, 2015, recreational marijuana sales became legal in Oregon. Hundreds of dispensaries sprouted up; they quickly outnumbered Oregon’s Starbucks and McDonald’s locations. It was a boon for small businesses—except most banks don’t allow dispensaries to open accounts. Banks fear losing their Federal Deposit Insurance Corporation (FDIC) coverage, as marijuana is still illegal at the federal level. That means roughly 85 percent of Oregon’s dispensaries are cash-only. Many have ATMs inside, but that’s not the point. The point is someone at the federal level gets to decide marijuana is questionable even though the majority of Americans disagree.
Immediately, troublesome implications come to mind about the potential difficulties for dispensary owners. What if dispensary owners want to apply for a loan or rent retail space, but have no way to prove their creditworthiness? Paper records only go so far. And it’s inconvenient for any business to have to deal solely in cash: Tracking transactions, managing payroll and doing taxes are difficult. As Oregon Sen. Ron Wyden said last year, “It is ridiculous to make any business owner carry duffel bags of cash just to pay their taxes.” Finally, there’s theft. As Jeffrey Stinson wrote for the Pew Charitable Trusts, “The abundance of cash makes … dispensaries tantalizing targets for criminals.”
These arguments for a cashless society may be compelling, but swapping cash for plastic presents privacy concerns for consumers. Currently, cash-only marijuana purchases mean that my bank–and anyone who hacks into my bank account or subpoenas my banking activity–doesn’t know if or when I buy marijuana. If society goes cashless and I apply for a loan through a national bank, loan officers might not view those purchases favorably when considering how responsible, upstanding and financially prudent I am–even though purchasing marijuana is legal in my state. (Some lenders, such as Earnest, don’t merely use your credit score to determine your creditworthiness; they also examine your purchase history.) So yes, marijuana dispensaries should not have to rely solely on cash, but consumers shouldn’t have to rely solely on debit and credit cards.
To put it plainly, most people probably don’t want their banks to know whenever they buy drugs. The minute Americans can purchase marijuana with credit or debit cards, other people will start making assumptions, and those judgments will start being used against them.
Your bank account tells all
In an imaginary future cashless society, it’s one thing for your bank to know your complete spending history. Unless you shun banks and get paid only in cash, your bank knows almost everything you buy already. But what happens when (as previously mentioned) someone subpoenas a record of your financial transactions to determine whether you deserve custody of your child? Or whether you’re a “suspicious character” and had motive to commit a crime? In a cashless society, activities such as gambling, going to a strip club and purchasing a counterfeit handbag lose their anonymity. Not only do banks gain knowledge of your every purchase, but they open the door for others to police your morality.
Sometimes this is a good thing, like with illegal activity. “A student at Columbia University was arrested and charged with five drug-related offenses, including possession with the intent to sell,” wrote Sarah Jeong for The Atlantic. “Supposedly, his fellow students and customers had paid him through the PayPal-owned smartphone app Venmo.” No matter how unmonitored they seem, most digital apps and payment systems aren’t anonymous. That’s good news for law enforcement and cases that are black and white, but what about the gray areas?
Who decides what’s unethical?
What if the morality of a situation is not as cut-and-dried? Who gets to decide that a legal purchase, such as buying marijuana in Oregon or tipping a stripper, is unethical? What you and I may think is acceptable may be put in the same category as activities that are illegal and far less ethically ambiguous. For example, a recent initiative by the Department of Justice inadvertently grouped Ponzi schemes and adult entertainment together in the same category of suspicious, high-risk activity, despite their fundamental differences in legality.
Here’s the full story, according to Sarah Jeong in The Atlantic: In 2013, the Department of Justice launched an initiative called Operation Choke Point to crack down on high-interest payday loans. The DOJ asked banks to flag “high-risk activity.” In addition to guns, get-rich-quick schemes and pyramid schemes, other high-risk activities according to the FDIC included “tobacco sales, telemarketing, pornography, escort services, dating services, online gambling, coin dealers, cable-box descramblers, and ‘racist materials,’” wrote Jeong. As a result, banks stopped working with businesses that were actually legitimate, concluded an investigation by the U.S. House Committee on Oversight and Government Reform. As Jeong commented in her piece, “It’s strange to see a list of a handful of actually-illegal activities … alongside legal vices.” Indeed. While the goal of Operation Choke Point was to target exploitative payday lenders, not to serve as the morality police, the operation ended up revealing how easy it was to do the latter. In a cashless society, we lose the option to keep legal yet morally questionable purchases private, leaving others to label them as they will.
From hacking to hurricanes
Aside from concerns about privacy and morality, a cashless society is worrisome in light of threats such as fraud and natural disasters (two cases in which cash comes out on top).
Today, it’s arguably easier to open a fraudulent credit card in someone’s name than to snatch someone’s physical wallet; compare 326,000 reported robberies in the United States in 2014 to 333,000 reported cases of identity theft. And having your identity stolen is not merely annoying. “Identity theft is often committed to facilitate other crimes such as credit card fraud, document fraud, or employment fraud, which in turn can affect not only the nation’s economy but its security,” noted the Congressional Research Service. In contrast, cash is relatively secure, even if you don’t wear a wallet chain.
Plus, its liquidity makes it good for emergencies. Apps and online banking require not only power but also internet access. Should a large-scale natural disaster strike, no one will be taking Uber to the grocery store to buy flares with their debit card. Need a five-gallon jug of water? Don’t count on Amazon Prime.
Obviously, I’m not advocating draining your bank account and stashing all your money under your mattress. But as we slowly transition away from physical money, it’s worth first examining the sacrifices to our privacy, anonymity and security. Jeong warns, “[T]he cashless society offers the government entirely new forms of coercion, surveillance, and censorship.” Whether you care more about your freedom or your right to privacy, a cashless society should give us pause.
Holly Richmond is a Portland writer. Learn more at hollyrichmond.com.
Whether I’m searching for cheap shampoo, an expensive computer, or a spot-on Christmas present, I go straight to Amazon to compare my options and read the reviews. The more reviews I see, the more likely I am to trust the stars. Sure, some business owners may skew ratings by adding positive reviews, but I figure they can do little to overshadow negative opinions.
Unfortunately, I’m beginning to see an irritating trend that compromises the value of product reviews (and my ability to find top-notch gifts). Companies, especially unfamiliar, foreign-based ones, are unethically pumping up their search visibility by offering individuals free or discounted products in exchange for written reviews on popular marketplace sites such as Amazon. Unsurprisingly, those reviews are overwhelmingly positive. Since the number of reviews contributes to a product’s result rank, businesses can significantly increase both rankings and ratings by giving away free products in exchange for stars. This disingenuous practice misrepresents feedback from traditional reviewers who consider both quality and bang for their buck. But until the government deems such marketing practices deceitful by outlawing them, businesses can treat critiques of their approach as a matter of opinion. Seeing as the Federal Trade Commission is only beginning to regulate this evolving form of promotion and Amazon struggles to cope with biased feedback, forthcoming change is unlikely.
For years, the Federal Trade Commission has dedicated resources to protecting buyers from unethical and deceptive advertising tactics. The FTC Act prohibits businesses from misrepresenting or omitting information in a way that can mislead potential consumers. The Act states that the misrepresentation should be material enough to warrant action, “if it is likely to affect consumers’ choices or conduct regarding an advertised product or the advertising for the product.” By this definition, promotion campaigns that induce biased reviews cross the line. But the FTC Act has only recently addressed the growing trend of trading products for reviews. When it comes to reviews – especially ones associated with unfamiliar brands – the law has not kept up with evolving marketing trends.
The FTC Act was developed to single out and punish businesses, advertising agencies and catalog marketers who mislead the public, but there is insufficient precedent to legally classify reviewers on third-party websites as any of these entities. It is unclear whether reviewers fall under the umbrella of advertisers when they publish biased reviews. Nor is it clear how much responsibility businesses should take for reviewers who fail to reveal their connection to a business by omitting any mention of free products.
Concerns about the legality of promotional reviews led the FTC to supplement the Act with a 2015 Endorsement Guide Q&A. Echoing the FTC’s general theme of transparency, the guide instructs businesses who offer perks to disclose their offers because: “Knowing that reviewers got the product they reviewed for free would probably affect the weight [the] customers give to the reviews, even if you didn’t intend for that to happen … [the] customers have the right to know which reviewers were given products for free.” Considering Amazon sellers offering samples cannot disclose information about perks in product descriptions, it is safe to say a degree of deception by omission is taking place.
What isn’t clear is how much blame businesses should take for such omissions. Marketplace platforms on Amazon don’t set aside a place for information about perks, instead asking reviewers to disclose such information within their comments. The FTC Guide – which does not have the force of law – recommends that bloggers who review free products mention benefits using clear language that stands out within their comments. But on Amazon, disclosures often get lost in a sea of reviews, especially in cases where businesses give away hundreds of samples. Even though companies often contact bloggers in their search for Amazon reviews, these reviewers do not adopt the role of bloggers on Amazon, and they do not have the same opportunities to make disclosures stand out. It is hard to blame bloggers for this FTC and Amazon oversight. At the moment, buyers bear the responsibility for seeking out biased reviews and must hope that, at the very least, commenters are honest about receiving free products when they post.
Since Amazon filters have not rid the site of biased reviews, you may be comically surprised by the high rankings and bulk reviews of unknown brands. In a recent Amazon search for the keyword ‘shampoo,’ I found myself on a landing page where an ArtNaturals shampoo topped the list. Although I was not familiar with the brand, it received 4.5 out of 5 stars based on 4,407 reviews – and it had the blue #1 Best Seller Ribbon! I was intrigued. It showed up ahead of obvious results, such as TRESemmé and Pantene, which paled in comparison with a scant 519 ratings. A glance at a few reviews made it clear that many of the ratings were not written by typical consumers. Of the 10 most recent reviews, eight mentioned the shampoo was free or purchased at a discounted rate. None of those reviewers gave the product fewer than four stars. Quickly irritated, I looked at what unsatisfied commenters had to say – they were also aggravated. “I am so upset at both this product and the other reviews for this product, that I’m questioning the entire Amazon review process. I paid full price for this shampoo … and the product is awful,” wrote Lillian A.
Aware that misleading reviews mar its reputation, but unwilling to eliminate third-party review systems, Amazon took steps to minimize the most biased of reviews. It went after companies and affiliates who did not just manipulate reviews but outright purchased them. The marketplace giant filed two lawsuits in 2015, one against businesses and one against fraudulent reviewers.
In its first lawsuit, Amazon filed charges against several websites for selling fake reviews, later claiming that some of those sites were forced to shut down. In the second, the company sued 1,000 individuals who used the service exchange platform Fiverr.com to sell five-star reviews for $5 each. This was the first time Amazon went after reviewers themselves, taking a bottom-up approach not yet standardized by the FTC. According to Amazon’s complaint: “An unhealthy ecosystem developed outside of Amazon to supply reviews in exchange for payment … This action is the next step in a long-term effort to ensure these providers of fraudulent reviews do not offer their illicit services…”
Amazon and the FTC have a long way to go before they can curb misleading review solicitation. That being said, not everyone agrees that exchanging products for reviews is something they should be worrying about in the first place. Some have suggested that small businesses need a place to start, and establishing a consumer base by incentivizing reviews is an effective, commonsense way of building brand visibility.
Numerous bloggers contacted by businesses in need of Amazon reviews stressed that free samples did not play into their feedback. In a 2015 Amazon discussion thread about free samples, one user wrote: “There’s a lot of fakers out there but it runs us real reviewers good name through the mud. I have received item discount [sic] and free … And yet according to this post I’ve been bribed? The seller offers me the discount as an incentive to write the review so he or she can see how their product is performing because most folks who buy stuff online do not take the time to come back and review the item.” Another agreed, adding that, “the impulse to be ‘nice’ is stronger” when products are free, but she was conscious of the pressure, and ultimately submitted honest reviews.
Not everyone will agree that offering free products results in overwhelmingly biased reviews, or that the practice is unethical on the part of businesses, but most shoppers will agree that it misleads unsuspecting buyers. So is there a quick fix on the consumer end? Aside from digging deeper before clicking the ‘Proceed to checkout’ button, not really. But there are interesting developments that may evolve into promising tools for the future. For example, Fakespot.com, a free, no-frills website launched in 2015, claims that it can detect the authenticity of Amazon reviews by looking for patterns in the comment sections. Assessments of its effectiveness have been mixed, but the prospect of new algorithms that can detect bias (and apply to sites beyond Amazon) is worth researching. If you were wondering, the website gave ArtNaturals a 55.2 percent ‘low quality review’ rating. Then again, the shampoo producer had long since lost me at, “I got this product free in exchange for my honest and unbiased review.”
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.