Will big business compromise the ethics of artificial intelligence?
We are entering a new era of technology adoption. We have passed the point of using new tools to perform old tasks, and our behavior is changing. I grew up with a 1970s television that had four push-buttons on it; these were laboriously and manually tuned to the U.K. channels BBC1, BBC2, and ITV. (The mysterious fourth button was 1970s redundancy-in-design at its finest, and came into use with the launch of Channel 4.) Two decades later, I had a remote control in my hand, enjoying the novelty of flicking through an ever-growing list of cable channels each evening. In 2017, the dizzying array of shows and multiple streaming services has changed the game. You can’t simply flick through the evening’s listings. When does that episode stream? Is that U.K. or U.S. release time? Which provider is the new show on? These days, I rely on voice control, asking Alexa to find a title, an episode, or a genre. Artificial intelligence (AI) has come to the rescue: it is a tool for its time, a behavioral shift in my home.
There is no obvious ethical problem with a company producing a clever TV remote. This, however, is only the tip of the artificial intelligence iceberg, and it is not at all clear whether ethics is a priority for those developing AI tools.
Do you always know if you’re talking to a machine?
AI comes in many forms, from game-winning, game-changing AI Go player AlphaGo, to machine learning in the background of software, to chatbots at the forefront of interaction. In the last three months, I have watched robots discussing sushi and sashimi with startled humans around a table (in Austin, at South by Southwest’s Japan Factory). I’ve talked to a chatbot about accounts payable on a financial system (in London, at Sage Summit). I’ve engaged with several customer service chat representatives that clearly identify themselves as chatbots. It’s also very likely that I’ve commented, liked, or shared a post by a chatbot on a social media channel, without being aware of its source. Whether you know it or not, you probably have too.
Most of these interactions happen via a keyboard, which instantly removes the voice and visual cues that identify humans. We judge intelligence by verbal reasoning: if machines can answer questions in a manner indistinguishable from humans, they might be considered intelligent. This is the fundamental test of machine intelligence devised by Alan Turing in 1950, and by some accounts it has already been passed. We can no longer be sure whether we are dealing with people or machines in our interactions across the World Wide Web.
Research published in March 2017 on online human-bot interactions suggests that as many as 15 percent of Twitter accounts are not, in fact, run by humans. The MIT Technology Review suggested in November 2016 that around 20 percent of all election-related tweets on the day before the U.S. presidential election were made by an army of influential chatbots. That sheer volume can distort online debate and may have influenced the outcome of the election. Similar concerns have been raised about the U.K.’s Brexit vote earlier that year.
How can we trust the entities with whom we interact to uphold human values and exercise ethical judgement? With whom does the responsibility lie? Will ethics be their priority in development?
Teaching artificial intelligence to think
AI started with a very narrow focus: think of IBM’s Deep Blue beating Kasparov at chess. The goal was to select the right chess move, with all its strategic implications, from hundreds of thousands of possibilities. Experts agree that, because of this niche programming, the same machine would struggle to win a round of tic-tac-toe against a five-year-old. Artificial intelligence has since evolved to cover a broader spectrum. The continued success of AlphaGo, consistently winning against champions in a far more complex gaming environment, is a product of machine learning, the development of artificial intuition, and a process of mimicking human practice and study. The programmers’ responsibility is to kick-start the machine learning and give the new mind proper direction. This is standard practice, and much faster than learning from scratch. It is worth noting, however, that the co-creator of AlphaGo, artificial intelligence researcher, neuroscientist, and co-founder of DeepMind Dr. Demis Hassabis, believes that AI could learn from a zero state rather than being “supervised” to start its learning in a particular direction.
Guiding the learning of a new intelligence is an onerous responsibility. Dr. Hassabis recently spoke about the challenges facing the builders of AI. The majority of AI is built with benevolent intent, he said, adding that since there are so few people able to do this, the current risk of overtly negative programming is small. I am still uneasy, having grown up with the fiction of ‘evil scientist in hidden volcano lair’ from “Thunderbirds” to James Bond films. The growing body of evidence that social media is rife with chatbots, indistinguishable from human accounts and influencing popular opinion, suggests that ethical behavior is not always a priority.
Corporations are leading the way in bringing artificial intelligence into the public domain, using chatbots to enhance interaction with customers. In a social-media-dominated world, we have an expectation of responsiveness that a human workforce cannot meet: chatbots allow an instant rapport to develop, keeping customers on your site, building loyalty, and improving satisfaction. There is significant effort on the part of large developers to build ethics in from the start. IBM Watson’s CTO Rob High recently outlined several key areas that ethical developers must consider, including basic trust, openly identifying as a chatbot, and managing the data that is shared in a human-bot interaction. It’s a legal minefield. A simple example has parallels in flawed goal-setting. Human behavior changes according to the goals that are set, often not in the way management expected: a goal to hit a raw sales target can have unscrupulous teams discounting for volume, and consequently losing margin, despite reaching their target. We recognize this as unethical because we are human, and ingrained ethics will ensure that such behavior is short-lived, whether through the actions of management or as a result of peer pressure. A chatbot needs to have that ethical ‘gut feeling’ programmed in, and that takes time, effort, and money.
Investment in ethics requires diverse and ethical investors
Much of the innovation in emerging technology is coming from the exciting tech startup sector. Ideas fly around from talented millennials, grabbing the imagination and hitting the tech and investment headlines. The skills of these young founders are not in question, but their business models have historically left much to be desired. We are currently watching the struggles of one of the early tech successes, Uber, as its ethical stance comes into question from all sides: CEO Travis Kalanick has recently stepped aside in the face of growing criticism of the company’s culture and values. The Financial Times cites “reckless pursuit of increased shareholder value” as a dangerous habit. Unfortunately, the rapid success of Silicon Valley businesses over the past 15 years has led to aggressive venture capital investment based on growth and users rather than reliability and revenue, weighted heavily towards white male founders. According to AOL founder Steve Case, speaking at South by Southwest in Austin this year, only 10 percent of U.S. tech investments went to women, and 1 percent to African-Americans. This ‘bro-culture’ is unhealthy. Dan Lyons’ book “Disrupted: My Misadventure in the Startup Bubble” describes the “grow fast, lose money, go public, cash out” process and the male-and-pale majority in the industry. At what point in this gold rush do founders take a sober look at ethical business and diverse, ethical technology?
Developers and the non-technical parties around them, the investors and business leaders, share the responsibility of ensuring that artificial intelligence retains its “benevolent intent” and reflects the best of our diverse human society. Ethical AI can only be guaranteed by ethical business practices. We must hope that those practices can evolve fast enough to keep up with the rapid advances in technology.
Kate Baucherel is a published author, speaker, trainer and coach, and co-founded community software company Ambix. She has two young children, and lives in the north of England. Find out more at www.katebaucherel.com, or follow @katebaucherel on Twitter.
As volatile and unpredictable as President Trump’s first months in office have been, he has been consistent in his derision of leaks and in calling for the prosecution of those responsible for them. (As a candidate, he conveniently held a different position.) With the first Trump-era leak prosecution now underway, it seems the Department of Justice has taken the president’s marching orders to heart. This is unlikely to be the only leak prosecution we will witness under this administration and its attorney general, bringing to the forefront the question of journalists’ responsibility toward leakers in a digital age.
The Winner Leak Investigation
The recent prosecution has been linked to a June 5 story in The Intercept suggesting that Russian interference with the 2016 election might have been more profound than previously known. The news outlet published a redacted version of the top-secret NSA document on which the story was based, which it had received from an unknown leaker. Two days before the story went online, 25-year-old government contractor Reality Leigh Winner was arrested and charged with violating 18 U.S.C. Section 793(e) for removing classified material from a government facility and mailing it to a news outlet.
Unsealed court records reveal that the arrest came after a reporter for The Intercept had contacted another NSA contractor and officials at the NSA in an attempt to verify the document’s authenticity. (The identities of the news outlet and the government agency have not been officially released, but there is little doubt they are The Intercept and the NSA.) The reporter also mentioned in at least one of those exchanges that he had received the documents through the mail and that they had been postmarked in Augusta, Georgia, which happens to be where Winner lives. He also shared photographs and copies of the document with them. This information prompted an investigation that quickly pointed to Winner as the potential leaker. She confessed shortly afterwards, without the presence of a lawyer, when FBI agents showed up at her door with a search warrant.
Assessing The Intercept’s Actions
The Washington Post’s media blogger Erik Wemple analyzed The Intercept’s actions and argued, as many others have since, that the news outlet’s effort to reach out to government officials to assess the authenticity of the document provided authorities with an important lead in their investigation. The reporter revealed where the documents had been sent from, and the copies he shared contained important clues to the leaker’s identity, as can be gleaned from the affidavit: “The U.S. Government Agency examined the document shared by the News Outlet and determined the pages of the intelligence reporting appeared to be folded and/or creased, suggesting they had been printed and hand-carried out of a secured space.” Authorities learned that only six workers had printed the report, including Winner, and Winner was the only one of the six whose computer showed email exchanges with The Intercept. On June 6, The Intercept posted a statement pointing out that the government’s affidavit and search warrant contained unproven allegations about Winner and about how the FBI had come to arrest her.
In hindsight, the actions of The Intercept seem troubling. But Wemple mitigates his criticism by stating that the actions stemmed from the legitimate need to verify the documents and by arguing that Winner would have been found out anyway: “Yet the mistakes of the leaker before the Intercept even received the document would likely have sealed her fate, regardless of any clumsiness by the reporter in verifying the scoop.” I am not convinced, however, that the information contained in the affidavit warrants this conclusion. The email exchange Winner had with The Intercept from her work computer dated from March and contained a request for the transcript of a podcast episode: hardly a smoking gun.
None of the stories I have seen emphasizing the ease with which investigators caught Winner makes a convincing argument that the same would have been true had the document not been made available to them by The Intercept. They often seem to assume that the document would ultimately be published, dooming Winner’s chances of remaining anonymous. But news outlets routinely report on classified information without sharing the actual documents with their readers or the authorities. Why not here? Making these documents available in their original form amounted to handing the FBI a roadmap to its target.
As this blogger and security expert points out, the document posted on The Intercept contained enough metadata to determine the serial number of the printer used and the exact time the document was printed. Like most modern printers, Winner’s printer leaves hard-to-see yellow dots on each page that encode this information: “The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20.” Simply scanning the document in black and white before posting it would have eliminated this problem: “To fix this yellow-dot problem, use a black-and-white printer, black-and-white scanner, or convert to black-and-white with an image editor.”
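As a concrete illustration of that last piece of advice, here is a minimal sketch of the conversion using Python’s Pillow imaging library. The filenames and the threshold value are hypothetical, and this shows the general technique rather than any outlet’s actual workflow:

```python
# Minimal sketch: strip color information (including the yellow tracking
# dots most color laser printers embed) by thresholding a scanned page
# to pure black and white. Filenames and threshold are assumptions.
from PIL import Image

scan = Image.open("scanned_page.png")   # hypothetical input scan
gray = scan.convert("L")                # drop the color channels entirely
# Threshold to a 1-bit image; 128 is an arbitrary midpoint cutoff.
bw = gray.point(lambda p: 255 if p > 128 else 0, mode="1")
bw.save("scanned_page_bw.png")          # safer-to-share version
```

Converting to grayscale discards the color channels in which the dots are embedded, and the final threshold removes any remaining faint artifacts.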
It would be false and unfair to say that The Intercept does not care about the fate of the people leaking to it. On its site, it offers potential leakers advice on how to blow the whistle without being detected. However, as one security expert noted, the guidelines focus on sending information without being caught but say nothing about covering your tracks while obtaining that information.
It also makes the following commitment: “At The Intercept, our editors and reporters are committed to high-impact reporting based on newsworthy material. If we decide to go forward with a story, we will have a discussion with you about what risks of retaliation you might face and whether you want to remain anonymous. We will be explicit with you about the parameters of our agreement to protect your anonymity, and we will honor our commitments.”
However, in this case the news outlet did not know the identity of the leaker and therefore could not engage in this back-and-forth. But of course, this does not absolve reporters and editors from their obligation to do everything in their power to prevent others from identifying the source. Reporting on the document without sharing it in its original form with authorities and readers would have been a more prudent course of action. Perhaps investigators still would have been able to identify Winner as the source, but this speculative assessment has little bearing on the ethical analysis. The SPJ Code of Ethics requires journalists to minimize harm, and The Intercept failed to meet this requirement.
The Positive Duty to Protect Leakers
The leaker-reporter relationship is fraught with inequality. Leakers put everything on the line when leaking classified information. Not only do they risk losing their jobs when found out, but also their freedom. When they leak classified national security information, as Winner did, they are typically prosecuted under the 1917 Espionage Act and face experienced federal prosecutors, long prison sentences, and astronomical legal bills.
Take the case of Stephen Jin-Woo Kim, a State Department contractor who in 2009 leaked highly classified information about North Korea to Fox News reporter James Rosen. The journalistic community was up in arms when Rosen was named in an affidavit as a co-conspirator so that a judge would approve a warrant to search his emails (Rosen was never indicted, nor was he ever going to be), but it did not bat an eye when Kim pleaded guilty and received a 13-month prison sentence. (Ironically, The Intercept was one of the few media outlets that reported on the devastating effect of the episode on Kim.)
Reporters, on the other hand, have relatively little skin in the game. As a matter of prosecutorial tradition, journalists are not prosecuted for printing classified information. No administration (until now) wants to wage a war against the press by putting reporters in jail. Whether the First Amendment precludes such prosecutions is not clearly established, and I would not be surprised to see case law develop on this issue under the current administration. By and large, however, reporters do not face the risks that leakers do.
Given this unequal distribution of risk, media organizations should be aware of all the digital trails leakers can leave behind, and help erase them, instead of adopting a leaker-beware approach. As is often the case for leakers facing prosecution, Winner was not the most sophisticated leaker; she was not as well-versed in the game of leaks as many D.C. insiders are. But this made her more deserving of moral consideration, not less. And while The Intercept did not know who she was, this ignorance should have led it to assume she was a source vulnerable to detection. Instead, its actions compounded the mistakes Winner allegedly made.
Digital technologies have made it easier for media outlets to obtain troves of information through leaks and data dumps, but they have also made it easier for the government to follow the digital trail back to the leaker. Traditionally, the source-reporter relationship has mainly required that reporters keep their promises of confidentiality. Nowadays, a digital trail is more likely to reveal an anonymous source than a reporter’s loose lips, and journalists need to do more than just keep their mouths shut in order to protect the identity of their informants.
Whereas leak investigations were a rarity before this century, the Bush and, especially, the Obama administrations were much more aggressive in going after leakers, prosecuting more of them than all previous administrations combined. This changed landscape leaves leakers more vulnerable than ever before, making it the media’s positive duty not only to protect their identity at all costs, but also to advocate for, rather than blame, leakers once they have been caught. As the story unfolds and more facts become known, our assessment of what happened might change, but this lesson won’t.
It has been well over a year since Facebook enabled its almost two billion users to stream live video. During the roll-out, Facebook unabashedly encouraged its users to embrace the opportunity to “create, share and discover live videos.” Unlike Twitter, which required users to download the separate Periscope app before they could livestream, Facebook offered fully integrated streaming functionality. (Twitter has since been working to eliminate that extra roadblock.)
Using Facebook Live is as easy as pushing the “Live” icon in the Facebook app. First-time users are greeted by a short set of basic instructions – which they can skip – explaining how to get started, how the view counter works and how to interact with viewers. Other than a cheerful reminder that reads, “Remember: They can see and hear you!” nothing alerts users to the ethical minefield that can unfold when livestreaming video. Instead, the sign-off message reads, “Try it! Just relax, go live and share what’s happening.” What could go wrong?
When livestreaming apps such as Periscope and Meerkat first burst onto the scene a couple of years ago, journalism professionals embraced their potential but also engaged in thorough debate about their ethical pitfalls. Professionals trained and experienced in the moral questions raised by broadcasting live footage to large audiences saw the need to examine the potential harm posed by this technology. Yet Facebook’s developers trust teenagers to figure out the harms on their own, through sometimes costly trial and error.
According to Mark Zuckerberg, Facebook Live marked a “shift in how we communicate, and it’s going to create new opportunities for people to come together.” There is no doubt that Facebook Live has done exactly that, producing its predictable parade of viral stars and shareable content. But Facebook Live has also been used to broadcast murder, torture, rape, beatings and other violent content, presenting serious ethical concerns.
My point is not that the technology caused these events, or even enabled them. That type of dead-end ethical analysis is highly speculative and amounts to blaming technology for heinous acts committed by individuals. But the ethical analysis does not end there. As the platform on which these videos are posted, Facebook aids in distributing this upsetting content, especially because the content remains available after the livestream has ended if a user chooses to post it. (On Instagram, by comparison, live videos used to disappear once the recording stopped.) This is a choice made by the developers at Facebook, and one that carries moral weight, as it gives these acts (and their actors) a notoriety they would otherwise lack. While the availability of this disturbing content raises a smorgasbord of ethical concerns for its creators, hosts, moderators, audiences and subjects, I want to narrow the focus here to one particularly troublesome type of content: livestreamed suicides.
The Livestream Suicides
In recent months, several people have broadcast their suicides on streaming services.
As with other such cases, we cannot assess what role the availability of livestreaming technology played in the tragic decisions these people made. Experts warn against attributing suicide to a single cause. And even if we could somehow demonstrate that the existence of a livestream functioned as a trigger in one case, there might be separate instances in which livestreaming allowed others to see the cries for help and intervene.
Facebook has taken some laudable initiatives on this issue. It has an ongoing partnership with reputable suicide prevention programs that work on identifying and reaching out to users displaying suicidal thoughts. It is even contemplating the use of artificial intelligence and pattern recognition to flag content indicative of suicidal tendencies. In the wake of the recent suicides, the social network announced it would extend these measures to its livestreaming function. However, this was not an unforeseeable problem, and one can’t help but wonder why it took a number of people taking their lives before Facebook took this step.
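Facebook has not described publicly how such flagging would work. As a purely hypothetical illustration of the simplest form of pattern recognition, the sketch below flags posts containing certain phrases and routes them to a human reviewer; the phrase list and function names are invented for the example and are not Facebook’s system:

```python
import re

# Hypothetical phrase list. A production system would rely on trained
# classifiers and clinical expertise, not a hand-written list.
WARNING_PATTERNS = [
    r"\bwant to die\b",
    r"\bend it all\b",
    r"\bno reason to go on\b",
]

def flag_for_review(post_text: str) -> bool:
    """Return True if a post should be routed to a human reviewer."""
    text = post_text.lower()
    return any(re.search(pattern, text) for pattern in WARNING_PATTERNS)

# Example: a flagged post triggers review and outreach, not just removal.
if flag_for_review("Lately I feel like I just want to end it all."):
    print("Flag for human review; surface prevention resources to the user.")
```

Even this toy example makes the ethical stakes visible: the choice of patterns determines who gets help and who is missed, which is one reason experts insist on human review.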
Compounding the ethical quagmire is the fact that Facebook tends to be slow in removing these types of videos. It took Facebook two hours to remove the video of “Facebook killer” Steve Stephens murdering Robert Godwin. (Contrary to some initial reports, this video was not livestreamed; it was uploaded after the fact.) When the suicide video of the 12-year-old from Georgia started making its rounds on Facebook, the company denied initial requests to remove it, according to a BuzzFeed report. Kyle MacDonald, a New Zealand-based psychotherapist, experienced a similarly sluggish reaction when requesting removal of links to the suicide video. In the opinion pages of The Guardian, he took Facebook to task: “Facebook also claimed that because it is not hosting the video, it is not responsible,” he wrote. “This is despite the fact that due to its inaction the links were widely available on Facebook for anyone to see long after I reported the problem. It has not been verified that the video is authentic but whether it is or it isn’t, the content of the video shows a child committing the most serious act of self harm and is not appropriate for public viewing.” According to the New York Daily News, the video of the Alabama man committing suicide stayed up for two hours and generated more than 1,000 views. A recent video of a Thai man killing his 11-month-old daughter before taking his own life stayed up for 24 hours. Because of livestreaming, Facebook has at times become a platform for snuff films usually confined to the dark recesses of the internet.
Most news organizations won’t report on individual suicides unless they are newsworthy, and when they do, journalists follow a set of guidelines developed by experts in the field of suicide prevention and reporting. These guidelines stipulate that the method used by the suicide victim should not be disclosed, that the word “suicide” should not be used in headlines about individual suicides and that coverage should not be prominent or extensive. While one could arguably find examples where these guidelines are not followed, most responsible news organizations tend to abide by them.
Why? Because experts have established that suicide is contagious, in a sense – one suicide can prompt others to harm themselves. Irresponsible media coverage is one of the contributing factors to this so-called contagion. While I am no expert in the subject matter, graphic and realistic depictions of peers committing suicide seem to combine all the elements that experts agree should be avoided: they present the act as a way out for a troubled person with whom vulnerable viewers might identify, they generate considerable media attention, they show the method used in great detail and they lack context. In other words, this content puts people struggling with suicidal inclinations at risk in a very direct and tangible way.
In February, Zuckerberg addressed the problem, claiming that artificial intelligence could help detect troublesome content in the long term. But for the time being, he said, it will be up to the Facebook community to provide a safe environment. This response does not cut it ethically. Facebook and other social media platforms have not caused suicides, but they are responsible for the suicide videos being captured by their technology and distributed across their networks. Moreover, Facebook has not been successful in removing this dangerous content in a timely fashion. The issue cannot be answered by pointing to yet-to-be-developed technology.
Here is what I believe Facebook ought to do:
The technological and economic feasibility of these suggestions can be questioned. But the approach taken so far by Facebook – and tech companies in general – has been to release technology first and worry about ethics later. (This approach led Donald Heider, founder of the Center for Digital Ethics & Policy, to argue that Facebook should hire a chief ethicist.) When human lives are at stake, it might be time to switch this modus operandi.
Bastiaan Vanacker is an Associate Professor at the School of Communication at Loyola University Chicago and Program Director of the Center for Digital Ethics and Policy.
Colleges weigh a variety of factors when deciding whether to admit an applicant. Students know the importance of test scores, grades, recommendations, extracurricular activities, and the college application essay. But there’s another factor that may matter as well: an applicant’s social media presence.
According to a recent Kaplan Test Prep survey, the number of college admissions officers who say social media affects an applicant’s chances of being accepted has increased. Currently, only 35% of college admissions officers turn to social media for more information on an applicant. However, 42% say what they find online negatively impacts their decision, up from 37% last year. On the other hand, 47% say it has positively affected their decision, also up from 37% last year. Applicants can change their privacy settings so their social media data can’t be accessed. But what if, hypothetically, a college asked a prospective student for his or her login information?
In some states, it is illegal for public colleges and universities to ask college applicants for password information. According to data from the National Conference of State Legislatures (NCSL), this practice is no longer permitted in Arkansas, California, Delaware, Illinois, Maryland, Michigan, New Hampshire, New Jersey, New Mexico, Rhode Island, Utah, Virginia, and Wisconsin.
As an example, Wisconsin’s statute states that no educational institution may, “Request or require a student or prospective student, as a condition of admission or enrollment, to disclose access information for the personal Internet account of the student or prospective student or to otherwise grant access to or allow observation of that account.”
The statute also states that no institution may, “Refuse to admit a prospective student because the prospective student refused to disclose access information for, grant access to, or allow observation of the prospective student’s personal Internet account.”
However, the NCSL list only covers a handful of states, and the laws do not apply to private schools. It should be noted, too, that I could not find any instances of colleges actually engaging in this practice. Whether the situation is hypothetical or not, a law that forbids a school from asking for login credentials does not stop the institution from using other means. For example, Wisconsin’s statute also states that an institution is not prohibited from “viewing, accessing, or using information about a student or prospective student that can be obtained without access information or that is available in the public domain.”
There are no laws against Google searches, and it would appear that many schools are utilizing this tool and others. Bradley Shear, managing partner at Shear Law, specializes in social media, privacy, reputation, and technology, and he believes that social media searches are widespread among higher-ed institutions. “Regardless of the number of college admissions officers who say they don’t check social media, and in spite of the statutes prohibiting schools from asking for log-in data, the vast majority of schools are indeed searching online for any incriminating posts or photos,” Shear explains. With or without a password, he says, some admissions officers are doing the searching themselves, while other schools are hiring former investigators and police officers to vet applicants.
Shear believes that, ethically, this is a slippery slope. For one, he says, the information is unauthenticated. How many people in any given city share the same name? Even narrowing the search to high school seniors or recent grads could yield several duplicates.
Mistaken identity is a serious enough problem that attorneys general in over 30 states complained that liens and civil judgments were being erroneously reported on consumer credit reports. According to the new guidelines effective July 1, 2017, liens and civil judgments cannot be added to a credit report unless (1) the name, (2) the address, and (3) either the birth date or the social security number have been verified.
Hopefully, this level of personal information would not be included in an applicant’s social media profile. However, a Pew Research Center report reveals that 93% of teens between the ages of 14 and 17 share their real name, 94% share a photo, and 83% include their birthdate. Also, among this age group, 76% share their school’s name, and 72% share their city or town.
Shear also explains that applicants can be discriminated against because of their connection to others. In other words, they’re being judged by their friends and family members.
Shear relays one incident that stands out. “There was an applicant who had top scores – he was a great kid, with a very clean digital profile.” The applicant did not mention anything about his parents on social media. However, the interviewer found some tweets by the parents, connected the dots, and figured out that the applicant’s family was wealthy and held political beliefs the interviewer did not agree with. “The conversation veered off topic very quickly – but what did the family’s wealth, their vacation photos, and their political beliefs have to do with the student’s application?” Shear asks.
When students complete an application, they can’t be asked about their religion, politics, sexual orientation, etcetera, because this information could be used against them. However, Shear says that colleges can go online to discover this and other types of information, which defeats the original intent of those protections.
Suppose the school is able to verify that a social media account belongs to the correct applicant, and that it gleans no information from friends and family members. Shear still believes the practice is problematic. “We’re talking about kids and they are going to say dumb things and do dumb things, and we shouldn’t hold it against them.” He questions the logic of deciding that individuals at this young age are irredeemable based on social media posts. “Instead, let’s hope they grow from these experiences,” Shear says. “Schools need students from different backgrounds and experiences, and you hope that these individuals leave college a better person than they started.”
As teens transition to college, many of them will probably make a lot of mistakes in how they allocate their money, how much time they spend studying, etcetera, because until now their parents have doled out money, handled the finances, and monitored their schoolwork and study time.
As a result, there’s an understanding, and at least temporarily an acceptance, that young college students may overspend their budgets, oversleep for classes, and spend more time partying than studying.
But, when schools check the social media accounts of these applicants, does this imply that there is no mercy, no room for growth, and no opportunity for development in this area? And if so, is that fair when many parents, partially out of respect for their teen’s privacy – and also because many of them may not be digitally savvy – don’t monitor social media activity as closely as other areas of a teen’s life?
I’m a member of the “email generation,” so that was – and still is – one of my primary ways of communicating professionally and personally. And while my email account doesn’t contain any crazy photos or outrageous comments, even I would be uncomfortable if someone said, “Give me your password so I can read your email communication.” On one level, I understand that anything I transmit digitally could be read by someone else, but there’s still an assumption that my communication will only be read by the intended recipients.
For teens, social media is the primary means of communication. And they share anything and everything: what they ate for breakfast, how they can’t decide which pair of jeans to wear, why there’s a long line at McDonald’s. They post such selfies as “This is me, sitting in my room, bored.”
And since social media is as natural to them as breathing, they also tend to share their passions, disappointments, complaints, and various levels of silliness through these channels. For many of them, a “filter” is a special effect for a selfie, not the ability to use discretion or self-censor what they post. “Most K-12 schools don’t have the ability to provide digital education to our kids,” Shear laments. “And because they’re not being provided the tools to deal with these digital issues, and then for colleges to hold it against them, that raises some questions, such as ‘What is the real mission of a college?’”
However, Grant Cooper, a career coach and resume writer, believes the use of social media in determining an applicant’s suitability is both fair and ethical. “Universities use a wide range of assessment tools and practices to ensure that applicants possess the appropriate extracurricular, academic, and psychological profiles to succeed within their institutions.”
According to the Kaplan Test Prep survey, examples of negative information found through social media searches included an applicant making questionable, borderline-racist comments and an applicant brandishing weapons. From “Girls Gone Wild” to drunk frat brothers and overly aggressive athletes, college students can pose a public relations nightmare for colleges and universities. And while the names of the offenders may be forgotten, negative incidents can haunt schools for a long time, damaging a school’s reputation and its ability to recruit and retain students.
“One unfortunate social media photo or a single questionable comment is generally not enough to bar a candidate from consideration,” Cooper says. “But a series of media posts or photos showing a pattern of immature or inappropriate behavior would absolutely be a red flag.”
Another example in the survey involved an applicant who was a felon and did not disclose this information on his application. According to the admissions officer, the individual was not admitted because he lied to the school, although, for some reason, he had felt the need to reveal the entire story on social media.
According to an article in the New York Times, Auburn is one of 16 universities that ask applicants if they’ve ever been charged with, convicted of, or pled guilty or no contest to a crime (besides minor traffic violations). The University of Alabama, for its part, asks applicants if they’ve ever received “a written or oral warning not to trespass on public or private property?”
But is there a rationale to this line of questioning? The Times article also reports that Virginia Tech added a question about arrests or convictions as a result of the April 2007 incident at that school in which a student killed 32 people and wounded 17 more before taking his own life. It turns out that the individual had been accused of stalking in the past.
To what extent are these schools asking such questions and scouring social media profiles for potential warning signs? Applicants posting inappropriate messages about sexual assault, sharing videos of themselves drinking and driving or texting and driving, and engaging in other reckless behavior could give admissions counselors pause. While it’s debatable whether past behavior is the best indicator of future behavior, to be fair, colleges at least apply this standard consistently: high school grades and entrance exam scores matter precisely because it is assumed that students with good grades and high scores will keep performing in college.
According to The Hechinger Report, some colleges are using social media in yet another way. Ithaca College, for example, created a private social networking site for the school’s applicants, where they can interact with fellow applicants, student ambassadors, faculty, and staff. The school then analyzes data such as the number of photos students upload to the site and how many contacts they make, to determine who is more or less likely to enroll at Ithaca.
On one hand, college is expensive for the student, the student’s family, and the taxpayers who ultimately back student loans. And it’s expensive to schools when students drop out, resulting in a loss of tuition and fees. But that’s not the only loss. Colleges and universities are ranked based on a variety of factors, including graduation rates. So, schools want students who are more likely to fit into their environment and have the greatest chance of achieving academic success.
In that respect, it seems logical that schools would want to analyze social media data to recruit the best students. However, it’s not clear how much weight is given to these interactions. Would students with limited Internet access be unfairly overlooked? What about students who just don’t engage a lot on social media? (And yes, while small in number, I’m sure those students exist.)
Social media plays an increasingly important role in society. But is that role too large when evaluating the potential of young applicants? Perhaps. Still, I believe a school has the right to determine what it deems acceptable versus unacceptable behavior. In the 21st century, colleges have become businesses selling a product to consumers, and managing the brand is job #1. It’s a hard lesson for careless teenagers to learn. As former baseball player Vernon Law put it, “Experience is a hard teacher because she gives the test first, the lesson afterward.”
Terri Williams writes for a variety of clients including USA Today, Yahoo, U.S. News & World Report, The Houston Chronicle, Investopedia, and Robert Half. She has a Bachelor of Arts in English from the University of Alabama at Birmingham. Follow her on Twitter @Territoryone.