In 2015, Clouds Over Sidra, the first virtual reality (VR) video created for the United Nations, drew the public’s attention both for its technological novelty and its adeptness at engaging users. With the help of a headset, viewers followed along as a young Syrian girl named Sidra took them on a tour of the Zaatari refugee camp, walking them through her daily life. According to UN project manager Kristin Gutekuns, the visual account helped individuals identify with refugees, a change from the prior disconnect she believes contributed to poor fundraising outcomes. Tearing down the barrier between viewers and refugees, Sidra brought us into her home, her school and her play lot, making us feel as if we – not the producers – could gauge her circumstances. “Instead of just feeling bad for someone, you actually feel like you might be in the same situation with them,” said Gutekuns. To the public, the short film was an indication that journalism had reached a new frontier in which it could provide high-tech experiences that merged two environments.
While the fascination with VR has not worn off, it is now time for us to move beyond marveling at its potential. Journalists and editors must work together to create guidelines for virtual reality, establishing ethical boundaries that VR should not cross. One of the most important questions we must ask ourselves is whether VR should be off-limits if it infringes upon the privacy of taped subjects.
Sidra allowed us into her home of her own accord, but virtual reality journalism is progressing quickly, and we may soon find ourselves intruding upon experiences we were not invited to share. When times of crisis give producers the opportunity to enter the worlds of wounded victims and families, journalists should make the ethical decision to set their VR equipment aside.
Before you accuse me of underhandedly promoting censorship, let me explain. As of now, VR journalism is merely an addendum to articles, photographs and limited-scope video coverage. We can refuse to trespass on people’s grief while continuing to write about tragic events and publish (sensitively selected) footage. However, optimizing the VR experience by offering users enhanced images of injured individuals or grieving families during natural disasters, shootings or other devastating events would not be a point of pride for journalists.
Creating regulations that preserve privacy is a particularly urgent matter because recent improvements in VR access have pushed news sites to quickly expand their video repertoire. Legal and ethical regulations have not kept up with video production advancements spurred on by a selection of new, affordable VR headsets. With free smartphone apps and a cardboard viewer that costs $5, you can access VR stories published by a variety of news sources. To dissuade competing journalists from invading personal spaces in an attempt to obtain the most appealing footage, ethical boundaries must be standardized as quickly as possible.
Debates over privacy infringement are not new to 360-degree footage. Google’s Street View was met with an outpouring of criticism for violating personal space and endangering minors by publishing the public’s whereabouts. In response, Google changed the angles at which people were photographed to minimize facial recognition and complied with requests to blur out faces and license plates. Now, Google’s initial infringements seem slight compared to the intrusions associated with VR journalism. A major draw of VR is its ability to grant us entry into someone’s environment and induce a rush of emotion as we gaze straight into strangers’ eyes. If a child appears in the scene, an additional boost of empathy further enriches the experience. Offering this intimate experience is not inherently unethical, so long as journalists draw an appropriate line between approved documentation and invasion of space at graphic sites.
It may be tempting to cross the fine line between entertainment and documentation when news outlets seek captivating footage to win over viewers. Fortunately, most news sources have made ethically sound judgments about limiting VR productions of breaking news tragedies. (Admittedly, equipment setup and editing limitations play a role in these decisions.)
When disaster sites are filmed, the footage is typically captured in the aftermath of tragedy. Numerous short videos of war zones are available online, but injuries and the despair that follows them are not the focus of taping and distribution. Virtual reality exploration of destroyed cities and images of struggling families who know they are being filmed can illustrate distress without compromising the privacy of the deceased and their loved ones. When striving to obtain footage that enables viewers to witness scenes of devastation up close, journalists must select platforms with sensitivity. Perhaps VR offers the best opportunity for immersion, but it is often the least ethical option.
As numerous videos have shown, VR is at its best when it takes us into foreign territory, but its potential extends far beyond the portrayal of disaster. Some journalists who have used VR have impressed audiences with short documentaries reminiscent of Discovery Channel productions. In the past year, the New York Times published fascinating VR films that included tours of national parks and a trip to the surface of Pluto. In fact, according to a list of VR do’s and don’ts published by Stanford University’s journalism program, “The vast majority of news stories are not suited for VR. … VR pieces will complement other forms of reporting rather than replace them.” Journalists should therefore resist the urge to capture and release sensitive footage under the impression that it is the only way to offer prime VR experiences.
We do not have to leave the public in the dark about events involving tragedy and grief, but it is important to be selective about the means by which stories are told. Ideally, scenes of violence would not be shot without a subject’s permission, but that is not always a viable option. At the very least, journalists should abide by the ethical privacy guidelines the law has yet to establish. In some cases, that means putting down the VR equipment.
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.
Online platforms such as Amazon’s Mechanical Turk, UpWork (formerly oDesk/Elance), TaskRabbit and Fiverr match digital workers with online or real-world jobs, offering individuals easy entry into the burgeoning workforce of freelancers. With active users around the world and more than 53 million Americans participating, these platforms are attracting a lot of attention among businesses and individuals. It seems like a win-win situation for everyone, yet the ethics of this unregulated workforce are often questionable.
These platforms give workers the ability to get paid for simple tasks or tasks that require skill but can be done at home. Some people even use these services to acquire additional skills that can help them transition to new careers. Best of all, this free market work-for-hire system is offered in an informal online platform that is easy to learn and navigate.
At the lower-paying end of these organizations are businesses such as Mechanical Turk. The company has more than 500,000 workers representing more than 190 countries. Although some of these workers live in developing countries, at least half are from the United States. Even in the U.S., many people benefit from working on MTurk. Orlando, a 26-year-old from California, wrote, “Since beginning to work on Mechanical Turk, I’ve only made $500, but to me sir, that means a lot. It means paying for three weeks of daycare; it means groceries for the month; it means car and health insurance premiums.”
At first blush, this sounds great. But some feel these services may be preying on the desperation of workers needing employment, encouraging them to join a workforce without proper representation. Trebor Scholz, an author, educator and associate professor of culture and media at The New School, where he chairs The Politics of Digital Culture conference series, has that concern. He worries: “The shift away from employment to freelancing, independent contract work and other emerging forms of labor is an affront to one hundred years of labor struggles for the 8-hour workday, employer-covered health insurance, minimum wage, workplace harassment and many other protections that were established under the New Deal to foster social harmony and keep class warfare at bay.”
The average pay for digital labor underscores these worries. In 2009, the estimated hourly wage for Turkers was $2.30, far below the U.S. minimum wage of $7.25. One Turker, Rachel Jones from Minnesota, reached just beyond minimum wage only after several years and 110,000 completed jobs. When asked about her experience, she admitted that she’d like higher wages but fears a significant increase could destroy the world of crowdworking, which is what allows her to stay home with her children. That’s a chance she said she doesn’t want to take.
However, while Mr. Scholz rightly points out the relative lack of regulation and legislation in place to protect employees in the digital workplace from exploitative practices such as depressed wages, he leaves out the fact that 30 to 45 percent of working-age individuals around the world are unemployed or employed part-time. At the same time, many business sectors, notably skilled trades and technology, struggle to fill positions. Online talent platforms such as UpWork or TaskRabbit have provided one way to connect qualified, job-seeking employees with companies or individuals that need help. James Manyika, the director of the McKinsey Global Institute, said the free-market aspect of the system should eventually take care of the wage problem: “There is a strong correlation between labor market fluidity and an increase in wages.”
Despite fears of low wages, evidence shows that people can and do make decent money through these sites. While some use them to supplement other income, others make a living from online tasks alone. Leah Busque, the former IBM software engineer who created the TaskRabbit platform, confirmed that many people make up to $60,000 a year on the site. She wrote, “We are enabling micro-entrepreneurs to build their own business on top of TaskRabbit, to set their own schedules, specify how much they want to get paid, say what they are good at, and then incorporate the work into their lifestyle.” Megan Williams, a content strategist and the owner of Locutus Health Communications, claimed she earns $100 per hour on UpWork. She even details her experience with the platform on a blog for writers, with the intention of helping others achieve similar results. One man on TaskRabbit said he earns $2,000 per week on the site and boasted that he gets to spend half his time enjoying life on his boat in Napa, California. And one clever mom, Regina Aguilar, said she earns $400 per month on TaskRabbit to supplement her income so that she can stay home with her children.
A few years ago, I signed up with oDesk (now UpWork) so I could take a few gigs when times were slow. At times, I worked for less than I would have made otherwise, but I didn’t take any extremely low-paying jobs. However, one job ended up landing me an article assignment for the Huffington Post. That’s a fantastic item for my resume, and it was instrumental in bringing me writing work the more traditional way. Since clients tend to share experiences with one another, I found myself getting consistently higher paying jobs as client connections joined the oDesk work platform.
Still, it’s clear these platforms have inherent weaknesses. According to a McKinsey Global Institute study, they can restrict women’s social and economic empowerment due to gaps in gender access to digital technology in developing countries. And Mark Graham, a fellow at the Oxford Internet Institute, pointed out that while some online workers do make more than their office-going brethren, there is a higher risk associated with working online. “Almost all of this work is precarious in some way as there isn’t much stability or security for these workers as it’s just as easy to fire them as it is to hire them,” said Graham. “It allows the clients of businesses hiring these workers a risk-free strategy—instead of taking the risk of taking these workers on more secure stable contracts, they are putting this risk onto workers themselves.”
Workers’ risks increase when platforms change operating systems and algorithms or merge with one another. The TaskRabbit community, a group of individuals offering to do small jobs such as shopping and small home repairs, recently went through an upheaval that left many employees angry. The company moved from a bidding system, which allowed Taskers to bid on jobs they liked and would fit into their daily schedules, to a system that assigns tasks via computer. Now, if a Tasker can’t commit within 30 minutes, the task is moved to someone else. This change was met with candid dismay as shown in comments on Reddit and Facebook, including this one: “I used to work every day, several tasks. I haven’t received any tasks since the change. My availability is completely open, I am highly rated and have been a Taskrabbit for a year (level 16). My rates are as low as possible, setting a task to $18/hour to be competitive means a take home pay of only $14.40. I wasn’t huge on bidding but I did do quick assign tasks all day long. Now there are no tasks to do :(.”
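For what it’s worth, the mechanics behind the Taskers’ complaints are easy to model. The sketch below is a hypothetical quick-assign loop under the assumptions those comments describe (one offer at a time, a 30-minute commitment window, the task moving on if the worker doesn’t respond); none of the names or structure come from TaskRabbit’s actual code.

```python
from typing import Callable, Optional

ACCEPT_WINDOW_SECONDS = 30 * 60  # the 30-minute commitment window described above

def assign_task(task: str,
                ranked_taskers: list[str],
                responds_in_time: Callable[[str], bool]) -> Optional[str]:
    """Offer the task to one Tasker at a time; anyone who fails to
    commit within ACCEPT_WINDOW_SECONDS loses it to the next in line."""
    for tasker in ranked_taskers:
        # A real system would push a notification and wait out the
        # window; here that wait is abstracted into a single check.
        if responds_in_time(tasker):
            return tasker
    return None  # task goes unfilled

# A Tasker away from their phone for half an hour simply never sees
# work, no matter how open their availability is.
print(assign_task("hang shelves", ["ana", "ben"], lambda t: t == "ben"))
```

Even in this toy form, the shift in power is visible: under bidding, workers chose tasks; under quick-assign, an opaque ranking chooses workers.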
The organizational makeup of the platform carries even more risks because it shields employers behind the platform’s anonymity. When workers have little or no information about who’s offering the job, it is difficult for them to react to exploitation and band together to fight inequalities. As Trebor Scholz puts it: “If you now take into consideration that this milieu that is marked by anonymity is also transnational, then the challenges to traditional unions become clear.” Also of concern is that employers can impact a freelancer’s rating and potential income with unwarranted bad reviews and by “reporting” the freelancer to the platform’s management. This risk of bad reviews causes some freelancers to refund dissatisfied clients rather than risk a hit to their ratings.
Digital talent platforms clearly offer benefits across the board to individuals in both developed and developing nations. These platforms give inexperienced and entry-level workers ways to work or gain experience while pursuing an education, with jobs that can grow into thriving businesses through persistence. They offer skilled workers casual work opportunities that fit their lifestyles. They provide a way to switch careers by gaining experience through part-time work and give stay-at-home parents a way to earn supplemental income without sacrificing their children’s care. But these benefits often come at the expense of employees’ rights.
Legislation, regulation and, importantly, oversight are necessary to protect digital workers from exploitation by both potential employers and the platforms themselves. At the very least, there should be requirements for standard minimum wages, even if these vary from our current definitions. Regulation must be sought at a global level to grant protection across the world, and it should direct particular attention to refusing to shield participating employers behind a veil of anonymity.
Digital labor is here to stay. Each year, more talent platforms are added as the business community sees the advantage of this eager, inexpensive new workforce. As during the Industrial Revolution, when new forms of production caused an upsurge in cheap labor, workers will have to rebel against poor working conditions and exploitation to effect change in the largely unregulated policies currently governing the online workforce.
Nikki B. Williams is a bestselling author based in Houston, TX. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at www.gottabeewriting.com.
The New York Times calls its custom-crafted dashboard “Stela” — which stands for “story and event analytics.” According to Shan Wang’s report for Niemanlab.org, the Times makes this user-friendly system available to staff so they can see an array of data about their articles:
“We were looking for ways to help reporters and editors get feedback on the things they were being asked to do online, such as tweaking headlines, promoting to social,” Steve Mayne, lead growth editor at the Times, said. “And we believed it would be much more effective for us to actually have a tool to show reporters how, for instance, certain actions directly resulted in more people reading their stories.”
The system as described by Wang is impressive and effective, and has become fairly well adopted inside the Times. As media organizations gain greater access to these instant report cards, several questions arise.
Loyola’s Don Heider (dean of the School of Communication) and Jill Geisler (Bill Plante Chair in Leadership and Media Integrity) sort it out.
Don Heider: I think in this case, like so many, context is key. I can see using analytics as described in the NY Times piece to really help reporters and editors be more responsive to the audience. I think most of us at this point realize that journalism today and in the future must be more interactive, and this gives journalists a tool set to pay attention to how readers are responding to their stories, headlines, and even photos and videos.
The worry, of course, is about the “P” word. Will journalists begin pandering to readers to try to build views and clicks? When I said context above, I meant context as in: who is in the newsroom? If you have a veteran crew of writers, reporters and editors, I think there is little risk. Managers can help by making sure the mission of the organization is clear, and even what the goals are when using analytics information. What are you seeing among the managers you teach and coach in news organizations?
Jill Geisler: Managers vary greatly when it comes to analytics. Some are protective of performance data – just because they like to control the flow of information in general. Some are conservative about sharing, fearing it will be misinterpreted and cause other “P” words like “panic” or “paranoia.” Some are still learning analytics themselves.
And then there are folks like my friend Marty Kady, editor of POLITICO Pro. Here’s what he told me:
“On my team, I’ve gone fully in favor of providing metrics (though we don’t judge our paywall products by total clicks). We have provided open rates for email newsletters and alerts, subscription renewal rates and a full list of subscribers to all the section editors. If you want people to feel fully bought in to the news and product mission, I think transparency in how we’re doing is essential.”
I like Marty’s transparent approach. With transparency comes additional responsibility for leaders. To share analytics effectively, think: Strategy, Success and Soul. Explain your organization’s strategy and how the metrics support it. Define clearly how the metrics do or don’t measure the success of the whole team and each individual member. Never forget that data-driven organizations can easily lose sight of values – their soul – without strong leadership.
Here’s my at-a-glance guide for sharing analytics:
Strategy: How do the metrics we’re sharing fit with our overall strategy? What are our priorities? Knowing that digital strategy must be nimble, how do we explain a quick change in focus?

Success / Team: How do we know we’re moving in the right direction? Who or what should we be judging ourselves against? How can we use data to work better as a team, rather than in silos?

Success / Individual: How does data factor into the evaluation of an employee? How can we help employees learn to interpret data in context? Do we make certain that analytics aren’t the sole measure of a person’s contributions?

Soul: How clear are we about what we stand for as an organization? Do we make it clear that metrics won’t hijack news judgment and values? Do we talk about values in the same conversations as analytics?
That said, let me ask you, Don, for your take on the biggest ethical land mines you’d encourage media organizations to guard against when it comes to analytics. What’s your top five list?
I don’t know about a top five, but here are things I think about:
It sounds like Politico has an excellent approach. But do most newsrooms have the resources they need to put metrics into context?
As I was saying above, I worry that analytics without context can lead journalists to conflate popularity (impressions, page views, etc.) with journalistic importance. We always have to come back to that question: what’s our journalistic purpose? Why are we journalists, and what is our duty? I would argue, even in a digital click-through age, our duty is to inform people, serve as watchdogs, and tell important stories well. There are times when the most important stories do not perform as well as less important stories (such as the latest Kardashian saga). That never releases us from our obligation to try our best to inform.
We can use analytics to help us gain a broader understanding of what the public wants and needs to know, but we have to dig a little, examine trends and even ask the public from time to time; page impressions alone do not do that effectively. The bottom line: analytics have to be aligned with journalistic purpose. Following the wrong metrics can lead journalists in the wrong direction (Buzzfeed’s clickbait comes to mind).
As a researcher, I can also tell you that one set of data never tells you the whole story. There are always hundreds of variables that can influence an outcome, and this definitely holds true with web analytics. Most often a data set tells you what; it almost never tells you why.
Web analytics will never replace a human being’s ability to develop sources, ferret out a story or witness an event. Computers, algorithms and data analysis all become really helpful and powerful tools when paired with human intelligence.
I also think the more we look at analytics, the more we realize that the future of journalism will be based upon building relationships with our audience: engaging people in what we do, listening to their ideas and feedback, even meeting them face-to-face. I think if we can really engage people in what we do and how we do it, there’s more chance they will financially support our endeavors.
Finally, I worry that if newsrooms become overly dependent on metrics, it may discourage risk-taking. We don’t want to get into the well-worn grooves of doing what works over and over. I have often seen a crazy idea do more to break new ground and engage people than simply repeating the same kind of news again and again.
Have you ever wished you could Google your own life experience? Have you worried about what you’d find if you could? In our Cult-of-Information age, it turns out that the technology to achieve this—also known as lifelogging—isn’t far off from total market saturation.
In May 2016, Sony made waves after it received a patent for a smart contact lens that records what you see. Narrative, an independent brand, sells a wearable camera with a 30-hour charge that takes a picture every 30 seconds. Kapture, a startup based in Cincinnati, Ohio, sells a Bluetooth-connected bracelet that continuously records audio with a 60-second buffer. At the speed these and other lifelogging technologies are improving and gaining users, it’s difficult to pause and ask, is this actually what we want or need?
One such pause came in 2011, when the UK’s popular Channel 4 series, Black Mirror—an unofficial 21st century update of The Twilight Zone—aired its now well-known episode, “The Entire History of You.” The episode takes place in a contemporary reality where people have capsules implanted in their heads recording everything they see and do, with a user interface allowing for memory searching and playback. Suspecting infidelity by his spouse, the episode’s protagonist replays and obsesses over particular memories until he destroys all of his relationships and goes insane. Rod Serling of The Twilight Zone would be proud.
While sensationalistic, the technophobic anxieties laced into “The Entire History of You” are common at times of technological change. People were scared of cars, record players, and telephones, too. But fears of technology aren’t like fears of spiders and heights; they’re often grounded in uncertainty around ethical and ideological freedom. This is especially true when the technological innovations are no longer focused on reducing physical limitations—as bikes did for transportation—but are instead enhancing mental and psychological abilities, where the limits, and the dangers of exceeding those limits, remain vague.
“We know deep down inside that not everything needs to be remembered, not everything we want to remember, and not everything needs a piece of technology to be remembered,” said Kapture co-founder Mike Sarow in a recent phone interview.
Like the implanted capsule in Black Mirror, Kapture is a physical device that captures everything, and it’s up to you whether you want to archive it. If you’re a songwriter and get struck by a hook, or you just heard your boss say something quotable in your weekly team meeting, you can send the audio to an app with a tap of your Kapture bracelet. So far, reviews of Kapture say the hardware is clunky. Moreover, hearing Sarow’s visions of Kapture’s eventual transformation into a total platform technology, always recording from everywhere, the bracelet seems almost anachronistic. But as an entrepreneur, Sarow also understands that the physical device softens the market for a more disruptive change. “With technology like this, you struggle with being too early or too provocative,” Sarow says. “You need to struggle with the storm until people actually become okay with it, and realize that it’s helpful.”
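To make that interaction concrete, here is a minimal sketch of the tap-to-save pattern as Kapture is described: audio streams into a short rolling window, and nothing persists unless the wearer taps the bracelet. The chunking scheme and the class and method names are illustrative assumptions, not Kapture’s implementation.

```python
from collections import deque

class RollingAudioBuffer:
    """Tap-to-save sketch: only the most recent window of audio
    survives, and a tap promotes that window to a saved clip."""

    def __init__(self, window_seconds: int = 60, chunk_seconds: int = 1):
        # Keep just enough fixed-length chunks to cover the window;
        # deque(maxlen=...) silently discards the oldest chunk.
        self.buffer = deque(maxlen=window_seconds // chunk_seconds)

    def on_audio_chunk(self, chunk: bytes) -> None:
        # Called continuously by the microphone pipeline; untapped
        # audio simply scrolls off the far end.
        self.buffer.append(chunk)

    def on_tap(self) -> bytes:
        # Archive the last 60 seconds; anything older is already gone.
        return b"".join(self.buffer)

buf = RollingAudioBuffer()
for _ in range(300):                   # five minutes of audio arrive...
    buf.on_audio_chunk(b"\x00" * 16000)
clip = buf.on_tap()                    # ...but only the last 60s are kept
```

Note where the agency sits even in this toy version: the decision to archive belongs entirely to the wearer, never to the people being recorded.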
Sarow came up with the idea to develop Kapture because he wanted to remember something one of his friends said. These days, however, he focuses on its value for business—specifically, its potential usefulness in meetings where everyone seems distracted. “These days, there’s a decrease in value of what it means to pay attention to people,” Sarow says. Kapture is designed to correct this devaluation by using technology to compensate for a perceived deficiency in communication and interpersonal interest.
As McLuhan famously said, all technology is an extension of ourselves. While on one hand Kapture is an extension of listening and paying attention, it also extends the function of memory, as photographs and video do. But what makes Kapture different, and part of a new evolutionary wave in lifelogging technology, is that it’s A) always listening and B) extending a sense—sound—most people still prefer to keep to themselves. Modern culture is visual and the image reigns supreme, which is a relatively new development in a global human culture that used to prioritize oral tradition above all else. Kapture hearkens back to this tradition, but through a modern, mediated lens, designed largely around a perceived deficit in our mental ability to remember—or interest in remembering—what we’re hearing.
This perceived sensory deficit is based on a broader and more primordial philosophy of mind that sees the mind and the brain as distinct, and views the brain’s function as a gatekeeper between everyday cognition and the paralysis of absolute consciousness. If memories “live” in the mind, it’s the brain’s job to keep them organized, chronological and usually inaccessible. Anyone who has experimented with mind-altering chemicals, or who has had a near-death experience, can attest to the strangeness of what happens when something “extends” the brain. Your memory expands, your emotions deepen, your meaning and self-perception shift—but only temporarily, because the brain can’t handle sustained awareness of the mind without impacting our productivity and even our linguistic abilities.
Some people have the blessing of a photographic memory, and lifelogging technologies have the potential to bring average people up to at least that level. But when the process of remembering is mediated, along with the memories themselves, whose memories are we actually collecting and accessing? What happens when these memories can be hacked, altered or simply deleted? These questions are central to the core idea of lifelogging technology, and this technology will eventually reach a Malcolm Gladwell-style tipping point. If you can envision intellectual property lawyers and philosophers answering the same questions, you know you’re running into unexplored ethical territory. As such, there are two main ethical considerations would-be lifeloggers and developers should pay attention to amid the growing Gospel of Re-Do:
Most importantly, developers and marketers need to ask which parts of our lives deserve to be “extended,” and which should be left alone. Once they have an answer, they need to ask: according to whom? Lifelogging and other technologies are engineered based on what we perceive as limitations—in this case, with memory—but without a holistic view, we can’t really know our strengths and weaknesses; we can only guess. The limiting power of the brain over the mind seems like a weakness, but it may actually be a strength; it keeps us focused, it forges will and determination and so on. Technology-as-extension forms a perceived bridge between these weaknesses and so-called strengths, but makes it hard to see what’s on the other side. With lifelogging, at least we can remember what we’re seeing along the way.
Benjamin van Loon is a writer and researcher from Chicago. He holds a Master of Arts in Communication and Media from Northeastern Illinois University. Follow him on Twitter @benvanloon and view the rest of his work online at www.benvanloon.com.
“If you’re not paying for the product, you are the product.” This phrase has been a popular way to describe the tradeoff we make for utilizing the many free and convenient services available online. While many consumers try to fiercely guard their personal information, it would appear that these attempts are in vain. You’re only as strong as your weakest link, and every friend or colleague is a potential chink in your armor.
Contacts For Hire
For example, in the past few years, companies began checking the social media profiles of job candidates and employees – in fact, Mashable reported on this trend as far back as 2012. This practice is illegal in a handful of states. However, according to 2016 data from the National Conference of State Legislatures, some legislation designed to protect job seekers and workers failed in seven states this year, and also failed in 10 states last year. (Legislation is either pending or it has not been introduced in several other states.)
Here’s the problem with checking social media profiles. Some companies aren’t just performing a cursory search; they’re asking for login and password information so they can see everything. In fact, some online job applications won’t allow individuals to even submit their applications unless they have authorized social media access and provided their usernames and passwords.
If that type of access is downright illegal in some states, isn’t it at least unethical in the rest of the country? I asked several experts to weigh in on this subject.
According to Tim Sackett, a human resources and recruiting talent pro as well as the president of HRU Technical Resources, most employers are scouring the internet before they make a hiring decision – whether they tell you or not. “I would rather an employee just tell me this is part of the deal – plus, many candidates have their profiles locked down, so if you don’t give me access, there is nothing to see,” Sackett said. And he added that “nothing to see” can be a red flag that causes an employer to question what that person may be trying to hide.
However, from an ethical standpoint, Sackett explained that whether asking for social media login information is right or wrong depends on factors such as the employer, the clients, and the company’s culture. “The answer is to work for a company that doesn’t have issues with your vices,” said Sackett. “If you like to party and post pics with your drunken friends on Saturday night, work for a company that is cool with that. If you and your friends like to dress up like Hello Kitty on your off time, work for a company that is cool with that.”
Almost half of the companies in a recent survey by the Society of Human Resource Management admit to using social media to screen applicants, and one-third report that they have disqualified applicants based on the information they found.
Jonathan Westover, associate professor of Organizational Leadership in the Woodbury School of Business at Utah Valley University and a human resource management consultant, agrees that companies are probably looking for red flags. “Will the applicant embarrass the company? Are they engaged in behaviors that might lead to poor performance? Hiring managers want to know this before they make a decision.”
And Westover thinks it’s possible that companies are also looking for a strong professional network – especially in highly-skilled or managerial jobs. “They may leverage candidates with strong networks, such as LinkedIn, in the recruitment and headhunting of other highly-skilled potential workers (for example, in the high tech industry).” But Westover said there are still underlying privacy issues – and he thinks that this type of access can be abused and used for other purposes.
One of the major concerns is how this information is used, according to Don Mayer, J.D., chair of the Department of Business Ethics and Legal Studies and professor-in-residence at the Daniels College of Business at the University of Denver. He questions the ethics of this practice because the candidate or employee is not given the opportunity to explain any information or associations that the company may consider derogatory.
“Motives may vary, but I’m not clear on what criteria companies would use to disqualify someone because of their contacts, or because of comments made to friends on social media,” Mayer said. “Are psychologists hired to do some sort of psych-analysis of patterns and ‘likes’ from Facebook?”
The possibility of disqualifying a candidate based on their list of friends is a serious ethical issue to Karen Young, SPHR, of HR Resolutions. “I’m concerned that all of a sudden, a company’s ‘valid business reason’ for not hiring an applicant is because someone looked at their Facebook page and saw that some of their connections include LGBT, Hispanic and African American friends.”
Also, Young believes the social media access requirement may reduce the number of qualified people who actually complete the application process.
There are other ethical issues regarding this requirement, according to Kate Jones, a partner in the Kutak Rock law firm. “Providing your social media credentials to a potential employer may not only infringe on your privacy, but also the privacy of your friends and contacts on social media,” Jones said.
Jones also explained that when applicants share their login credentials, they’re making a conscious decision to do so. “But your friends and contacts on social media do not have an opportunity to make that choice.” Jones said they might have chosen to share certain information only with certain friends and contacts. “Sharing your login credentials may affect your friends’ privacy,” she warned.
But should the bulk of the ethical blame rest on the job seeker or the potential employer? After all, no one is forcing applicants to agree to these terms. They can choose to terminate the application process and seek employment elsewhere. But is that a realistic expectation?
Keith Swisher, ethics consultant at Swisher P.C., thinks it’s an abuse of the potential employer’s power. “People need jobs, and employers should not exploit that need by, for example, requiring access to private communications.” Regarding employees, Swisher says, “Performance interviews, probationary periods or on-the-job observations would provide far more accurate and less intrusive information than the screening of private, out-of-office communications and associations.”
In 2015, The Atlantic reported that Facebook secured a patent that would allow banks to determine a potential borrower’s creditworthiness by analyzing the credit ratings of the individual’s social media connections. If the average credit rating of the individual’s friends happened to be below the minimum credit score, the individual’s application would be rejected – even if that person had good credit. Fortunately, Facebook decided against proceeding with the project.
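The logic the report describes is simple enough to sketch. The following is a hedged illustration only; the function, the 660 cutoff and the fallback to the applicant’s own score are hypothetical stand-ins, not details from the patent.

```python
def lender_would_approve(applicant_score: int,
                         friend_scores: list[int],
                         minimum_score: int = 660) -> bool:
    """Toy model of the described check: approval keys on the average
    credit rating of the applicant's social connections."""
    if not friend_scores:
        return applicant_score >= minimum_score
    average = sum(friend_scores) / len(friend_scores)
    # The ethical sting in one line: the applicant's own score is
    # irrelevant once their friends' average falls below the cutoff.
    return average >= minimum_score

# An applicant with excellent credit (780) is still rejected because
# their friends average 610.
print(lender_would_approve(780, [590, 620, 620]))  # False
```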
Facebook also creates “shadow profiles” based on the information provided by an individual’s friends. For example, let’s say you’re a Facebook user, but you’ve given the company the email address you use for junk mail, and you’ve never supplied other information, such as your phone number.
However, if your friends have ever used Facebook’s “find friends” feature and allowed Facebook to scan their mobile phone contacts, all of this information is stored on Facebook’s servers. In other words, Facebook may have all of your email addresses and phone numbers stored in a shadow profile.
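A minimal sketch of how such a shadow profile could accumulate follows; the name-based keying and the data structures are assumptions for illustration, not a description of Facebook’s systems.

```python
from collections import defaultdict

# Each friend's uploaded address book contributes details the subject
# never supplied; the union accumulates server-side.
shadow_profiles = defaultdict(lambda: {"emails": set(), "phones": set()})

def ingest_contact_upload(address_book: list[dict]) -> None:
    """Merge one user's uploaded contacts into the shadow profiles."""
    for entry in address_book:
        profile = shadow_profiles[entry["name"]]
        profile["emails"].update(entry.get("emails", []))
        profile["phones"].update(entry.get("phones", []))

# Two friends each run "find friends"; between them, the server now
# holds Alex's work email and phone number, regardless of what Alex
# chose to share directly.
ingest_contact_upload([{"name": "Alex", "emails": ["alex@work.example"]}])
ingest_contact_upload([{"name": "Alex", "phones": ["+1-555-0100"]}])
print(shadow_profiles["Alex"])
```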
Facebook isn’t alone in this practice. One day, M. Forrest Abouelnasr was exchanging emails with a friend, and the friend switched to his business address. A few days later, when Abouelnasr was on LinkedIn, he noticed that this friend’s name popped up as someone he may know and want to connect with – although the two were already LinkedIn connections.
Abouelnasr realized that LinkedIn assumed the new email address belonged to a different person who didn’t have a LinkedIn account, and he wanted to know how LinkedIn was able to track his email contacts. In his blog, Abouelnasr shares the transcript of his conversation with LinkedIn’s customer service department.
When I contacted Abouelnasr about his experience, he told me that, at first, the rep erroneously stated that if a user had LinkedIn open and also had their mail server (Gmail, Yahoo, etc.) open, LinkedIn would grab those email contacts. “This is impossible, and the company representative later corrected the mistake, saying that instead what the company actually does is collect a user’s smartphone contacts when the LinkedIn app is installed on their smartphone.”
How many users upload their contacts to various apps without stopping to consider that their friends and colleagues may not want their personal information exposed to a third-party? How many users stop to obtain permission?
But is it really such a big deal that LinkedIn, Google, Facebook and other companies are collecting information on people from their friends and without their knowledge? Mayer said he believes it is a big deal. “In terms of trustworthiness – which is a core ethical value to most people, and even to many corporations striving to be more ethical – this is not an entirely straightforward process,” he said. Also, Mayer stresses that companies don’t really explain what they intend to do with the information.
Among other things, we now know that companies sell information to data brokers. A CBS News report revealed that Acxiom, the largest data broker, has roughly 1,000 tidbits of data on over 200 million Americans. On top of that, Acxiom – along with thousands of other data brokers – sells various types of lists to other companies. Some of these lists might include people with gambling habits, gun owners, members of LGBT organizations, or patients with specific medical conditions. These groupings, and an assortment of other information, help advertisers market to specific individuals. But not all of the information is used for advertising. The information is also sold to insurance companies, banks, hospitals, schools and other organizations to help them make risk assessments.
This brings us back to the weakest link: You can take every conceivable precaution to protect your privacy, but be advised that it only takes one friend or colleague – through sheer carelessness, willful ignorance, the desire for convenience or the lure of a job – to create a vulnerability that companies can, and will, exploit.
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.