Every day, the trusting public looks at the work of photojournalists online, in magazines or in newspapers, assuming those visual representations of news events are truthful. These all-important images can inspire, spark debate or incite anger, action or even rebellion.
So what happens when an image is changed, whether through setting up a scene or through (digital) manipulation? There are dozens of software applications that can easily change a photograph to show whatever the creative mind of the manipulator wants it to show, for good or for ill.
The question of how, if at all, a photo should be manipulated for public consumption was aptly debated through a recent photography show in New York City. The Bronx Documentary Center hosted a curated exhibit entitled “Altered Images: 150 Years of Posed and Manipulated Documentary Photography,” which garnered national attention for its tricky yet important subject matter.
Organizer and photographer Michael Kamber said he created the exhibit to show some of the most controversial examples of manipulated photojournalism. The photos ranged from as early as the American Civil War to this year’s World Press Photo contest, in which some 20 percent of the participants were disqualified for digitally altering their submitted images.
“The World Press Photo Contest must be based on trust in the photographers who enter their work and in their professional ethics,” said Lars Boering, the managing director of World Press Photo, in a statement about the contest controversy. “We now have a clear case of misleading information and this changes the way the story is perceived. A rule has now been broken, and a line has been crossed.”
A line, indeed, has been crossed. For a photo to be considered true, accurate and fair to the viewer, it cannot be changed digitally in any way other than cropping for size. Period. There is no way around this if you want to gain or keep the public’s trust and respect.
One notable revelation from the exhibit is that several of the manipulated photos were caught because someone other than the photographer had altered them. In those cases the photographer noticed the change right away and contacted the editor or publication to report the problem. But once an image is out there for the public’s consumption, the damage is done.
As a news reporter with more than 20 years of experience, I can say that I have worked with some of the finest photojournalists in the nation. I consider my years at The Detroit News among my most enjoyable, especially because of the photographers I worked with. The Photo Desk had a standard that was never doubted or questioned: You never set up a photo. Never.
What did “setting up a photo” mean? It meant that you didn’t send a news photographer to a ribbon cutting; that wasn’t going to end up in our newspaper. You didn’t tell a source to prepare a “fake” scene for the photographer to capture. You didn’t hand the photographer a staged moment to show off the story’s central theme. If the story didn’t happen while the photographer was there, there was no story. A photo had to happen naturally, with the photographer as a fly on the wall, capturing the image as if you and that photojournalist were watching the story unfold together.
I believed in that mantra then, and I still believe in it now. I trusted every image that I saw in the newspaper then, and I want to believe in every image that I see now. But when you see the problems that have come up within the photojournalism world because of digital manipulation, you see why this trust has been shaken.
If you think that I’m taking too strong a stance, let me back it up with comments from three photographers with whom I have worked on a regular basis, all of whom say the same thing: if you see their photos, you should trust that they were created honestly and without digital alteration. If you’re creating art, that is one thing; some changes from the original photo are to be expected. However, and they could not have been more adamant about this, if you are purporting to be a photojournalist presenting news, that is something entirely different.
Jessica Muzik comes to the subject from two points of view. She is the Vice President of Account Services for Bianchi Public Relations, Inc., as well as the owner of Jessica Muzik Photography LLC. Her photographs have been published online and used by news organizations.
“I don’t think one can lose more credibility as a photojournalist than to alter or set up photos,” Muzik said. “The public trusts photojournalists to capture real moments and timely events, not to compromise their ethics by altering an image to fit the needs of a particular media outlet.”
“In my line of work, I always say that what the media reports is considered 10 times more credible than any advertisement that can be placed because we trust that the media are objective in all matters and that includes photojournalists,” Muzik added. “If a photojournalist feels the need [to] alter or set up an image, that is not photojournalism, but rather photography.”
Asia Hamilton is the owner of Photo Sensei, a company that offers photography workshops to professionals and amateurs in several cities. Her goal in part is to help people in image-sensitive cities, including Detroit, show off their photo skills with respect to themselves and the community, demonstrating both their creativity and the city’s best assets.
Because Detroit often gets a bum rap when it comes to its “ruin porn,” or images of the city’s abandoned or burned out buildings, Hamilton often works with people to find other ways to highlight Detroit via her Photo Sensei classes. Thus, she too has a tough stance when it comes to manipulating an image within the news realm.
“I think photo altering is ok if the photography is art or editorial related,” Hamilton said. “However, photojournalism should not be altered because it is a documentation of facts. The news can only be trusted if it is completely factual.”
My favorite comment came from John F. Martin, a news photographer who has a commercial business that does work for news agencies as well as corporations.
“Staging or otherwise manipulating an image from a news event is lying, plain and simple. It’s no different than a writer making up a quote. This was instilled in us on day one of journalism school (Ohio U, ’96). It turns my stomach when I read about these seemingly increasing incidences,” Martin said.
That’s the crux of the problem, isn’t it? Photo manipulation has happened too much and too often. That’s reprehensible and cannot be allowed to stand. The situation has grown so dire that a for-profit business has been established to find and expose photo manipulation.
The company in question is Fourandsix Technologies Inc., whose founder, Dr. Hany Farid, recently introduced a new service called Izitru. Its purpose is to let anyone who posts images online prove beyond a shadow of a doubt that those images are authentic: the photos are tested and then assigned an Izitru “trust rating” for viewers.
Yes, the world of photojournalism has come to that—a trust rating—frightening and unacceptable.
Karen Dybis is a Detroit-based freelance writer who has blogged for Time magazine, worked the business desk for The Detroit News and jumped on breaking stories for publications including City’s Best, Corp! magazine and Agence France-Presse newswire.
In the wake of the Ashley Madison hack that exposed 32 million cheaters and the public ruin of dentist Walter Palmer, there is no better time to discuss charges of guilt by social media. Communication platforms have given users around the world tremendous collective power to prosecute and punish. We, the Facebookers, bloggers, tweeters (and re-tweeters), are an unstable, finicky force that sends lives into a tailspin over both alleged and verified offenses. Even as they level the playing field by keeping an eye on powerful figures, online ‘courtrooms’ lack what official jurors can provide: a finite and predictable sentence.
The inherent ethical problem with social media condemnation is its permanence. Unlike weighed, professional consequences or water-cooler gossip, online defamation lingers for an unknown period of time, wreaking unpredictable chaos. The correlation between a crime’s severity and its punishment ceases to exist on public platforms.
Take a look at the fall of Jonah Lehrer, a bestselling author caught embellishing quotes in several works, including Imagine, his book about the neurology of creativity. After an ambitious reporter dug through Lehrer’s work and found the false statements, bloggers and social media users jumped in with their takes on the matter. “Smugly self-satisfied and pseudo-intellect are not a pretty combination,” wrote one commenter. “I’ve gone from sad to angry,” tweeted a professor who examined the problem.
In his book So You’ve Been Publicly Shamed, British journalist Jon Ronson describes Lehrer’s seemingly endless personal and professional descent. Within days of the revelation, Lehrer’s publisher recalled his book and offered refunds to buyers. Lehrer had to resign from The New Yorker, and Wired severed its ties with him. Unsurprisingly, the ethics lecture he was due to deliver at Earlham College was swiftly canceled.
Losing a job and being dropped by a publisher are expected, justifiable consequences. Having a career ruined at the hands of buzzing bloggers and feisty commenters is not. When it comes to doling out punishments, every social media user gets to chime in, and comments about offenses remain visible indefinitely. Even when they have no interest in causing lifelong difficulties, Internet commenters cannot predict how long their words will follow an offender.
Unsurprisingly, fears of lifelong smears lead many digitally shamed individuals to respond with pleas for forgiveness. Public apologies by relatively unknown transgressors developed hand in hand with social media, and they are an oddly discomforting phenomenon. Like Josh Duggar, the reality TV personality caught cheating on his wife, and Justine Sacco, the PR specialist who published an inappropriate tweet, Lehrer asked us for a second chance. He offered his apology before an audience of 300 at the Knight Foundation’s Media Learning Seminar. Livestream broadcast the speech, and he spoke next to a large Twitter feed that displayed comments from those who wished to offer their two cents. He addressed the audience, saying:
My mistakes have caused deep pain to those I care about. I’m constantly remembering all the people I’ve hurt and let down. Friends, family, colleagues, my wife, my parents, my editors. I think about all the readers I’ve disappointed . . . I have broken their trust. For that I am profoundly sorry. It is my hope that some day, my transgressions might be forgiven.
This speech belongs behind closed doors, directed at focused listeners. Lehrer’s offense was not severe enough to involve a formal courtroom or a public plea for compassion; it certainly did not belong on a public stage where uninvolved tweeters could weigh in.
There is something awe-inducing yet unsettling about the collective power social media possesses. Online, posted tirades stack up, living on blogs and in tweets long after their authors have forgotten them. Individually, we can’t take back the impact of a throng, even when someone asks us to try.
The permanence of online judgment makes it all but impossible for offenders to wipe their slate clean and restart their careers. Moderators can delete particularly vile or threatening comments from old threads, but erasing the most cringe-worthy thoughts cannot undo established perceptions. It is unethical for us to punish wrongdoers indefinitely, but permanence is a core aspect of social media.
But as brutal as our criticisms can be, it would be shortsighted to completely ignore the empowerment associated with social media judgment. An online jury keeps powerful figures and oppressive unknowns in check by democratizing the justice system. You may have the money to hire great lawyers, or the backing of locals who share your prejudices, but if the greater public gets wind of your offense, your options for redemption shrink drastically. Social media is simply too large and too widespread to quiet.
Online rants ignite individuals to come together and promote a cause, albeit through aggressive means. The ongoing saga of a Kentucky county clerk who refused to issue marriage licenses to gay couples, despite the U.S. Supreme Court ruling legalizing gay marriage, exemplifies activism prompted by social media outrage. As news of Kim Davis’ refusal to sign paperwork spread, people passionately expressed their pain and anger over continued legal obstacles.
The lawsuits filed against Davis cast her into the limelight, leading readers to share reactions and respond collectively. A USA Today update on the story garnered over 96,000 Facebook connects, 3,000 tweets, 1,000 LinkedIn shares and 4,400 comments. “She needs to take her bible and go stand in the unemployment line!” wrote one person. “I’ve seen better looking heads on lettuce!” wrote another. The headlines incited mass protests and even motivated a non-profit organization to erect a billboard calling Davis out. “Dear Kim Davis,” it began. “The fact that you can’t sell your daughter for three goats and a cow means we’ve already redefined marriage.”
Davis offended a plethora of people and ignored court rulings, so neither the protests nor her jailing was unwarranted. The billboard, though, was a bit much. And once we step back from the situation and cool off, it is important to remember that social media reactions will likely haunt Davis and her family for years to come.
It is impossible to tell whether the shaming of Lehrer and Davis will prove a temporary setback or a lifelong scarlet letter. Punishments that correspond to the degree of an offense, and concessions earned through good behavior, are not privileges enjoyed by those judged online. Readers are eager to see justice restored, and that sense of balance is too often pursued through punishment.
Upon reflection, I believe that most of us don’t want to be responsible for lifelong joblessness, disgrace and the unavoidable ripple effects that public prosecutions have on families, coworkers and friends. Unfortunately, the effect of rage expressed online is ultimately out of our hands. Long after the storm has passed, guilty parties and their families are left to pick up the pieces.
The court of social media is a place to be feared. It is not structured or vetted, yet its users deliver long-term punishments that outweigh crimes. If we suspect that we have been too harsh in our words, we can always step back, log off and separate ourselves from an outrage. And we will probably never know the extent of our impact. As Ronson wrote, “The snowflake never needs to feel responsible for the avalanche.”
Paulina Haselhorst was a writer and editor for AnswersMedia and the director of content for Scholarships.com. She received her MA in history from Loyola University Chicago and a BA from the University of Illinois at Urbana-Champaign. You can contact Paulina at PaulinaHaselhorst@gmail.com.
Online petitions, boycotts and speaking out on social media are common ways to raise your voice about a particular issue or individual. But a more controversial method, hacktivism (hacker + activism), is increasingly being employed to further agendas. Hacktivism is defined as hacking, or breaking into a computer system, for political or social ends, and it is currently illegal. Proponents claim hacktivist actions mirror real-world protests but incur harsher penalties simply because they are carried out online. Are they right? Are hacktivists indeed treated in a way that violates our notions of justice and fairness?
The Computer Fraud and Abuse Act (CFAA), also known as the “anti-hacking law,” was created in 1984 to criminalize unauthorized access to computers. Since then, the law has been amended five times, with each amendment broadening the definition of what constitutes “unauthorized access.” Opponents of the CFAA argue that the expansion potentially regulates every computer in the U.S. and many more abroad. Intentionally vague language within the law allows the government to claim that something as minor as violating a corporate policy (as in United States v. Nosal) is equivalent to a violation of the CFAA, putting even minor offenders at risk of serious criminal charges. But is comparing hacktivism with real-world protests an apples-to-apples equation?
Hacking has been around for decades. Steve Wozniak and Steve Jobs first hacked the Bell Telephone System in the mid-’70s with the famous “blue box,” which placed (that is, stole) free long-distance calls. In the mid-1980s, a college student protesting nuclear weapons released a computer virus that took down NASA and Department of Energy computers. And in 1999, Hacktivismo, an international cadre of hackers who believe freedom of information is a basic human right, created software to circumvent government online censorship controls. Since then, the rapid proliferation of online groups able to shut down individual, corporate and even government computers has drawn the focus of the FBI and other concerned agencies.
Hacktivism made headlines in 2010 when the group Anonymous reacted to events arising from the arrest of WikiLeaks founder Julian Assange. Assange’s detention, which coincided with the WikiLeaks release of classified information hacked from U.S. intelligence channels, left supporters outraged. Feelings escalated when MasterCard, Visa and PayPal refused to process donations earmarked for Assange’s defense fund. Anonymous fought back by hacking into and disrupting the websites of all three financial companies, causing service outages and millions of dollars in damage.
Anonymous achieved its goal by mounting a distributed denial-of-service (DDoS) campaign. Interested parties could join the Anonymous coalition through direct participation or by downloading a tool that allowed their computer to be controlled by Anonymous operatives. Dr. Jose Nazario, a network researcher with Arbor Networks, claims that it takes as few as 120 computers linked together in this way to bring down a large corporation’s web presence. Anonymous insists this technique is not hacking; it is simply overloading a website with so much traffic that pages become impossible to load for legitimate visitors. According to Dylan K., an Anonymous representative: “Instead of a group of people standing outside a building to occupy the area, they are having their computer occupy a website to slow (or deny) service of that particular website for a short time.” But this is not equivalent to a real-world protest: hacktivists don’t need the voices of thousands for their protest to be effective. Fewer than 120 computers can suffice to take down an entity, something 120 people on a sidewalk could never manage.
The FBI soon unearthed the identities of some of the hacktivists involved in various Anonymous hits. One of them, Fidel Salinas, was initially charged with simple computer fraud and abuse. Within seven months, 44 counts of felony hacking were looming over him for his part in disrupting government servers. Salinas claims the escalating charges were the FBI’s way of pressuring him to turn informant. This kind of “encouragement” is nothing new. Cybercriminal and Anonymous hacker Hector Xavier Monsegur, known by the internet alias “Sabu,” initiated the high-profile attacks on MasterCard and PayPal in response to the Assange arrest. By 2012, Monsegur had been arrested and was busy working in concert with the FBI to unearth the identities of other Anonymous members, who were then prosecuted under the CFAA.
The Electronic Frontier Foundation (EFF), which, according to its website, is “the leading nonprofit organization defending civil liberties in the digital world,” is promoting reform of the CFAA through consumer education, petitions and other legal means. One of its central arguments for CFAA reform concerns the treatment of hacktivist Aaron Swartz, who downloaded millions of scholarly journal articles from the subscription-only JSTOR database through MIT’s campus network. Swartz’s actions were predicated on his belief that publicly funded scientific literature should be freely accessible to the taxpayers who paid for it. After his arrest, federal prosecutors charged him with two counts of wire fraud and 11 violations of the CFAA, charges carrying up to 35 years in prison and over $1 million in fines. Swartz committed suicide a few days after declining a plea bargain that would have reduced his sentence to six months in a federal prison. The EFF explains that if his act of political activism had taken place in the physical world, he would have faced only penalties “…akin to trespassing as part of a political protest. Because he used a computer, he instead faced long-term incarceration.” However, the EFF seems to gloss over the fact that, no matter how pure his reasoning, when Aaron Swartz played Robin Hood he wasn’t merely trespassing; he was stealing.
In response to Swartz’s untimely death, the EFF suggested changes in the way the CFAA calculates penalties, seeking refinement of overly broad terms and arbitrary fines. Its emphasis is on the punishment fitting the crime, and its hope is to align the CFAA’s penalty recommendations more closely with those received for the same acts when they arise during a physical political protest. The EFF is currently working on a full reform proposal that they hope will restrict the CFAA’s ability to criminalize contract violators or technology innovators while still deterring malicious criminals.
It’s true that the CFAA is too broad and may allow prosecutors to apply draconian charges to misdemeanor crimes, but the EFF is not taking into consideration the real harm done by hacktivist “protests.” A physical political protest is most often a permitted, police-monitored event. It may cause temporary disruption of business (a few hours or days at most), garner media attention and alert the public to the seriousness of the issue. The online protests staged by “Operation Payback,” Anonymous and, most recently, the Impact Team, the Ashley Madison hackers, caused far more damage and disruption to the targeted organizations than any real-world protest would. These acts are more akin to vigilantism or even terrorism, since the hacktivists rely on intimidation in pursuit of self-defined injustice, and the outcomes often harm innocent people. If a physical protest had produced the same results (a company looted, lives destroyed, money lost), it would be considered a criminal act.
Hacktivists seem hardened against the collateral damage they inflict in achieving their goals, arguing that the end justifies the means. The recent Ashley Madison scandal is a prime example of hacktivism without conscience. Hackers calling themselves the Impact Team threatened Avid Life Media, Inc., the parent company of infidelity website Ashley Madison, with releasing its customers’ information if it didn’t shut down the site. They believed that Ashley Madison was faking most of the female profiles on the site to scam more men into signing up. When the company kept operating, the Impact Team released the data, potentially ruining marriages, destroying careers, and compromising the personal data of users who now face threats of blackmail and identity theft. The company itself is facing $500 million in lawsuits, but the toll on its customers, the very people the Impact Team claimed to be helping, was heavy indeed.
Similarly, Anonymous’ hacking of the PayPal website alone cost that company $5.5 million in revenue and damaged numerous small businesses and individuals who were unable to complete financial transactions during the shut-down.
Hacktivists claim their actions are equivalent to real-world protests and, as such, should be protected from criminalization. It’s true that citizens’ right to peaceful public assembly is protected by the First Amendment to the U.S. Constitution and has been repeatedly upheld by the Supreme Court. However, the law is clear that the government can place restrictions on the time, place and manner of such a gathering to preserve order and safety.
The First Amendment does not guarantee the right to assemble when there is the danger of riot, interference with traffic, disorder, or any other threat to public safety or order. One group’s right to speak out should not conflict with rights of other individuals to live and work safely. This should be true online as well as in the physical world, but hacktivists often act outside of this stricture. Mikko Hypponen, chief research officer for F-Secure, sums it up well: “The generation that grew up with the Internet seems to think it’s as natural to show their opinion by launching online attacks as for us it would have been to go out on the streets and do a demonstration. The difference is, online attacks are illegal while public demonstrations are not. But these kids don’t seem to care.”
Online groups should not be allowed to achieve their desired results through extortion, intimidation, terror or vigilantism. But it is equally important that governments and corporations not have the right to sway, direct or otherwise channel the free will of the people toward or away from any one purpose by using force or fear of penalty. And setting laws in place that make non-violent, non-damaging civil disobedience a major infraction is tantamount to muzzling free speech. Gabriella Coleman, Assistant Professor of Media, Culture and Communication at New York University, writes that deeming DDoS attacks by hacktivists always unacceptable would be “damaging to the overall political culture of the internet, which must allow for a diversity of tactics, including mass action, direct action, and peaceful of (sic) protests, if it is going to be a medium for democratic action and life.”
Both sides are wrong to some extent. The problem with internet hacktivists is the veil of anonymity behind which they hide. Real-world political protests require that people stand up physically for what they believe. They put their faces out there, sign their names to petitions and take responsibility for their views. The Supreme Court has ruled that anonymous speech deserves protection, but hacktivism is not speech; it is action. Hacktivists can intimidate and extort individuals, corporations and governments without ever having the courage to step forward. People will sometimes take actions anonymously that they would never take under scrutiny, a truism that makes groups like Anonymous capable of causing chaos on a worldwide scale.
There can and should be many ways to speak your mind and promote your political agenda online, and you should be able to do so without fear of reprisal from law enforcement. However, intentional damage inflicted by anonymous disruptive mass action can also hurt unrelated innocent individuals. With our society’s level of reliance on internet services for business and daily living, hacktivist activity has potentially far-reaching consequences. Shutting down banking or payment capabilities doesn’t just hurt the targeted banks and credit card companies; it prevents many small businesses and individuals from conducting necessary business and impacts their daily lives in a negative way. Releasing the personal data of subscribers or customers to harm a government or company doesn’t just hurt the target—it sets thousands, sometimes millions, of lives on edge.
And let’s face it: Breaking into a store in a “real world” protest, stealing its customer lists or proprietary data and either disseminating it or destroying it is not trespassing. It’s not a misdemeanor. It’s not peaceful. It’s theft at best and terrorism at worst.
Online activists should mount an up-front, highly publicized, web-based boycott of their opponent, peacefully and legally, to exercise their freedom of public redress in the way the Constitution intended. The Impact Team could have crafted a viral message letting people know that Ashley Madison was scamming them and easily made its point without the collateral damage. And governments interested in keeping discourse alive need to take a step back from the edge of fascism by narrowing their definition of “unauthorized use” of computers, so that minor instances of online civil disobedience are not classified as criminal offenses.
Nikki Bee Williams is a freelance writer based in Houston, TX, whose diverse writing clients include political action committees, nonprofit organizations, engineering firms, Celtic jewelry designers in Ireland and more. Her work also appears in the Huffington Post, and her first nonfiction book, The One Size Does NOT Fit All Guide to Stress Management, was a #1 bestseller on Amazon.com. She can be contacted at nikkibeewilliams.com.
It’s no surprise that new threats to personal security and privacy crop up as online communities change and grow. We’ve known about “sharenting” for a while: the tendency of parents to share every milestone of their child’s life online makes a wealth of personal information about children readily available to people looking to nab a new identity. But now there’s a new game in town: digital kidnapping. Digital kidnappers take screenshots of pictures posted on social media, blogs and other online sites and use them for various activities, the most prevalent of which is online role-playing. Online role-playing has been around for decades, but it only recently sparked outrage, when a subgroup of this community, baby role-players, began stealing and repurposing online photos for their game.
Some members of the baby role-playing community are snapping up images of children from photo-sharing sites such as Instagram, Flickr and Facebook, as well as from various blogs, to use as avatars or virtual children in their game. Players either pretend to be the child or claim the baby as their own, assigning friends and other players to act as the child’s online family members. There are even virtual adoption agencies where a role-player can request a youngster with a distinct look, a request the “agency” fills by finding a matching image online. Participants search the #babyrp hashtag to find new “babies” for adoption or to get chosen as a family member.
Psychologists theorize that many of these players are teens and tweens from less-than-optimal home situations who are fantasizing about having the perfect family. When interviewed by Fox News, child psychologist Dr. Jephtha Tausig-Edwards explained why these children act out such fantasies online: “They’re going to do this maybe because they’re bored, they’re going to do this maybe because maybe they want some attention,” Tausig-Edwards said. “They’re going to do this because perhaps they really are a little envious and they would like that beautiful child to be their own.”
Other psychologists, like Dr. Justin D’Arienzo, admit that there are darker reasons why someone might be interested in these types of pictures. The internet has become a haven for fetishists and others who practice socially deviant behaviors, including those who require children, or some element of childhood, for their personal fulfillment or sexual gratification. And although the children themselves are not directly involved, the photos in many cases have been recontextualized to play out a dark or abusive fantasy. For example, a recent thread of comments on an Instagram post featuring a baby boy has one commenter asking if he or she “can have a private with a dirty baby.”
However, one of the most recent cases of digital kidnapping didn’t involve role-playing in game form. Instead, an adult male from New York, Ramon Figueroa, stole online photos of a 4-year-old girl from Dallas and posted them on his Facebook page, claiming she was his daughter. He posted numerous pictures of the little girl, with the action in each shot lovingly described by the doting “father.” Some of the captions he wrote under the pictures of the little girl were, “Girl version of me,” and “This is how she looks in the morning…she said daddy stop (taking pictures).” After being contacted by the girl’s mother about his use of the photos, he promptly blocked her from seeing his page.
Unfortunately, there is currently no law against pretending someone is related to you. This little girl’s mother had only one option: to file a complaint with Facebook, which met with little success initially. Dismayingly, Facebook merely confirmed that Mr. Figueroa’s profile met their standards and, as such, there was nothing that could be done about the pictures if he didn’t voluntarily take them down. However, after being contacted by news media, Facebook reversed course and agreed to remove posts of this nature as parents report them.
In response to the laissez-faire attitude of social media websites regarding these stolen photos, concerned parents got together and launched a petition at change.org. The hope was to force Instagram to close down all baby role-playing accounts, but whether due to lack of publicity or lack of interest, it garnered only 1,047 signatures. Of course, shutting down #babyrp accounts won’t do much to curb the other types of digital kidnapping that are cropping up worldwide. A recent investigation by Scotland’s Sunday Post uncovered numerous instances of online photo theft: over 570 selfies of Scottish girls, more than 700 selfies from girls in Northern Ireland, and thousands from young girls around the U.K. had been stolen and uploaded to a porn site. The girls were often in their school uniforms, but there were some instances where skin or underwear was showing. When confronted, a representative for the website denied that the images existed, and because the site was hosted out of the country, there wasn’t anything further that could be done.
Another young British girl had her personal images stolen from her social media account and posted on a website that offered “hot horny singles in your local area.” When her photo popped up in a sidebar advertisement for the sex site on a friend’s computer, the friend called to let her know that her pictures were being used. She has since updated her Facebook privacy settings in the hopes of preventing future occurrences.
Until this issue gets more attention from legislators and stricter privacy regulations are implemented, you are the first line of defense against this kind of identity theft. Fortunately, there are things you can do to protect yourself or your loved ones from digital kidnapping.
The first and most failsafe option is to stop posting pictures online. However, if you do choose to share them, you should monitor and adjust your privacy settings so that only people you know have access. Alternatively, you can choose a privacy app, such as Kidslink, that allows parents to determine who sees their photos across social media programs. For those who refuse to curtail their online sharing, there are also apps that will watermark your images to deter would-be photo borrowers. Finally, if you wish to continue unrestricted photo posting, it is critical to turn off the geolocation option on images so that they do not reveal your child’s real-world location.
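As a side note on how geotags travel with a photo: in a JPEG file, GPS coordinates live in the EXIF metadata, which is stored in an APP1 segment of the file. The sketch below (my own illustration, assuming a well-formed JPEG; real tools and the apps mentioned above handle many more cases) shows how those segments can be stripped before uploading, using only the Python standard library.

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with all APP1 (EXIF,
    including GPS geotags) segments removed."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS marker: entropy-coded image data follows
            out += jpeg_bytes[i:]
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep every segment except APP1 (EXIF)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The pixel data is untouched; only the metadata segments that carry the geotag (and other camera information) are dropped.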
Finally, if you’ve previously posted pictures without putting privacy protections in place and you’d like to see if any of them are being used without your permission, do a reverse image search on your photos. You can use a site like TinEye, which offers this service for free, or go to images.google.com in a browser such as Chrome or Firefox, drag an image from your computer into the search bar, and press enter. Any website on which the image appears will come up in the search results, along with visually similar images.
Ways to steal personal information are quickly outpacing protective measures granted to internet users through general legislation or attempts at self-governance by internet entities, such as Facebook and Twitter. The deficiency of guidelines regarding the acquisition of posted photos leaves the onus of providing identity protection, particularly to minors, firmly in the hands of parents. Parents should take the time to fully understand and consider all of the ramifications of posting photos online, including reading and comprehending the privacy policies of each online forum they use. While setting up appropriate safeguards is important, it is also critical to police the distribution of the photos and information around the internet through reverse image searches so images acquired and used without permission are found quickly. The earlier a child’s photo is removed from an unknown site, the more protection that child is afforded from repercussions in the offline world.
Nikki B. Williams is a freelance writer based in Houston, TX whose diverse writing clients include political action committees, nonprofit organizations, engineering firms, Celtic jewelry designers in Ireland and more. Her work also appears in the Huffington Post, and her first nonfiction book, The One Size Does NOT Fit All Guide to Stress Management, was a #1 bestseller on Amazon.com. She can be contacted at nikkibeewilliams.com.
In June, the International Association for Computing and Philosophy (IACAP) and the International Society for Ethics and Information Technology (INSEIT) held a joint conference. Bringing together members of both organizations, this conference served as IACAP’s annual meeting, as well as INSEIT’s annual conference, referred to as Computing Ethics: Philosophical Enquiry (CEPE). The conference was hosted by Tom Powers of the University of Delaware’s Center for Science, Ethics, and Public Policy and Department of Philosophy. The American Philosophical Association’s Committee on Philosophy and Computers also helped sponsor the conference.
Philosophers and technologists submitted papers and proposals in February, and a committee put together a program of 31 presentations and six symposia. Topics included the nature of computation, the role of computation in science, big data, privacy and surveillance, the dangers and benefits of AI, autonomous machines, persuasive technologies, research ethics, and the role of ethicists in computing and engineering.
Many of the conference participants displayed an underlying preoccupation with the ways our relationship with machines changes as machines acquire characteristics that we have always considered to be distinctively human. Two specific concerns were the danger posed by machines as they become more autonomous, and the potential threat to human virtue as intelligent machines become capable of playing more human-like roles in sexual activities.
Machine ethics and autonomy: Bringsjord and Verdicchio
Selmer Bringsjord and Mario Verdicchio gave presentations on the dangers of machine autonomy. The basic worry motivating these discussions is this: If machines are under the control of a person, then even if the machines are powerful, their danger is limited by the intentions of the controllers. But if machines are autonomous, they are ipso facto not under control—at least not direct control—and, hence, the powerful ones may be quite dangerous. For example, an industrial trash compactor is a powerful piece of equipment that requires careful operation. But a trash compactor that autonomously chooses what to crush would be a much more formidable hazard.
Bringsjord considered a more nuanced proposal about the relationship between power, autonomy and danger, specifically that the degree of danger could be understood as a function of the degree of power and degree of autonomy. This would be useful since most things are at least a little dangerous. From practical and ethical perspectives, we would like to know how dangerous something is. But understanding degrees of danger in this way requires making sense of the idea of degrees of autonomy. Bringsjord aimed to accomplish this while operationalizing the concept of autonomy enough to implement it in a robot. In earlier work, Bringsjord developed a computational logic to implement what philosophers call akrasia, or weakness of will, in an actual robot. His aim in his current work is to do something similar for autonomy. In his presentation, Bringsjord outlined the features of autonomy that a logic of autonomy would have to reflect. Roughly, a robot performing some action autonomously requires the following: The robot actually performed the action, it entertained doing it, it entertained not doing it, it wanted to do it, it decided to do it, and it could have either done it or not done it. Bringsjord concluded that a powerful machine with a high degree of autonomy, thus understood, would indeed be quite dangerous.
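Bringsjord’s conditions can be collected into a single formal schema (the predicate names below are my own shorthand for the conditions listed above, not Bringsjord’s notation): a robot $r$ performs an action $a$ autonomously just in case

```latex
\mathit{Auton}(r,a) \;\leftrightarrow\;
  \mathit{Did}(r,a) \,\land\, \mathit{Entertained}(r,a) \,\land\, \mathit{Entertained}(r,\lnot a)
  \,\land\, \mathit{Wanted}(r,a) \,\land\, \mathit{Decided}(r,a) \,\land\, \mathit{CouldDoOtherwise}(r,a)
```

On Bringsjord’s proposal, the danger posed by a machine would then be a function of its power together with the degree to which it satisfies conditions like these.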
Again, the special danger in autonomous machines is that, to the extent they are autonomous, they are outside of human control. When present, human control is a safeguard against danger, since humans are typically bound and guided by moral judgment. For a machine beyond human control, then, it is natural to want some substitute for morality. Machine ethics is the study of the implementation of moral capacities in artificial agents. The prospect of autonomous machines makes machine ethics particularly pressing.
However, Verdicchio argued that only some forms of what we might think of as autonomy require machine ethics. Like Bringsjord, Verdicchio understands autonomy as the conjunction of a variety of factors. They agree that autonomous action requires several alternative courses of action, consideration (or, in more natural computational terms, simulation) of those possibilities, and something like desire or goal-directedness toward the action actually performed. But it is regarding this last element that we find some disagreement between Verdicchio and Bringsjord. According to Verdicchio, even with the other elements that plausibly constitute autonomy, goal-directedness is not enough to make machines distinctively dangerous. He argued that the kind of machine autonomy that should worry us—the kind that calls for machine ethics—would be realized only when the machine sets its own goals, only when it is the source of its own desires. Without this capacity, it is simply a complex machine, perhaps one that has become more complex on its own, but still one that is directed toward the ends of its creators or operators.
Is Verdicchio right that we can dispense with machine ethics unless machines can set their own ends? It is an interesting question. Likely, the answer depends on how we understand the scope of machine ethics. A distinctive feature of AI systems is that they are capable of solving problems or achieving goals in novel ways, including ways their programmers did not anticipate. This is the point of machine learning: Not all of the relevant information and instructions need to be given to the machine in advance; it can figure out some things on its own. So, even if the machine’s goal is set by programmers or operators, the means to this end may not be.
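This gap between a fixed end and discovered means can be made concrete with a toy sketch (a hypothetical illustration of mine, not any system discussed at the conference): the operator supplies only a scoring function, and the program chooses its own sequence of moves toward a high score.

```python
import random

def find_means(score, moves, start, steps=200, seed=1):
    """Greedy local search. The goal (the score function) is fixed by the
    operator; the sequence of moves -- the means -- is chosen at run time."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        move = rng.choice(moves)
        candidate = move(state)
        if score(candidate) >= score(state):  # keep any move that doesn't hurt the goal
            state, path = candidate, path + [move.__name__]
    return state, path

# The operator asks only to get close to 100; whether the search doubles
# or increments its way there is left to the program.
target = 100
def double(x): return 2 * x
def add_one(x): return x + 1
def score(x): return -abs(target - x)

result, path = find_means(score, [double, add_one], start=1)
```

Here the end is specified in advance, but the particular route recorded in `path` is not anticipated by the operator, which is the sense in which even goal-given machines can behave in unplanned ways.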
To frame the point in familiar philosophical terms, a machine’s instrumental desires may vary in unpredictable ways, even if its intrinsic desires are fixed. If constraining these unpredictable instrumental desires within acceptable limits is part of machine ethics, then it seems clear that machine ethics is required for some machines that lack the capacity to set their own ultimate goals. But, on the other hand, putting constraints on the means to one’s given ends is a rather thin and limited part of what we usually consider ethics. And perhaps we can think of such constraints simply as additional goals given to the machine in advance. Ultimately, whether or not we consider the construction of such constraints part of machine ethics probably depends on the generality, structure and content of these constraints. If they involve codifications of recognizably ethical concepts, then the label ‘machine ethics’ will seem appropriate. If not, then we will be more likely to withhold the label.
But this semantic issue should not detract from the important point raised by Verdicchio’s presentation. The autonomy of a machine that could set and adjust its own ultimate goals would raise much deeper concerns than one that could not, since such a machine might eventually abandon any constraints specified in advance.
Symposium on sex, virtue, and robots: Ess, Taddeo, and Vallor
Sophisticated, AI-powered robots with sexual abilities, or sexbots, do not yet exist, but we have every reason to believe they will soon. It is hard to imagine the ever-booming and lucrative digital sex industry not taking advantage of new advances in personally interactive devices. Sexbots were the focus of a symposium called “Sex, Virtue, and Robots.” John Sullins moderated a discussion between the audience and a panel composed of Charles Ess, Mariarosaria Taddeo, and Shannon Vallor. The panelists applied the framework of virtue ethics to address the question of whether having sex with intelligent robots is morally problematic.
Compared with competing theories of normative ethics, virtue ethics places special emphasis on human character traits. Specifically, virtue ethicists hold that actions are to be evaluated in terms of the character traits—the virtues and vices—that the actions exemplify and produce. Given that people have been using sex dolls—human-size dolls with carefully crafted sexual anatomy—and a variety of artificial masturbation devices for years, one might have thought that robots designed to provide sexual services would not raise new ethical issues. Regarding virtue ethics in particular, one might think that sex with robots is no different, in relation to a person’s character, from the use of masturbation aids. But that is not quite so obvious as it might have seemed at first. What distinguishes sexbots is AI. Not only are sexbots supposed to be realistic in their look and feel, the interactive experience they promise is intended to be realistic as well.
So, sexbots promise a more interactive, personalized—perhaps intimate—experience than the minimally animated sex dolls and toys of today. Why would that matter? The panelists were largely in agreement that there was nothing intrinsically wrong with people having occasional sexual experiences with robots. But they all shared some version of the worry that sex with AI-powered robots might displace other intrinsically valuable sorts of activity, as well as the character development those activities might enable and promote. Ess invoked philosopher Sara Ruddick’s distinction between complete sex and merely good sex, the former being distinguished not just by the participants’ enjoyment but also by equal, mutual desire and respect between the individuals involved. Ess’s worry is that, without a capacity for deep practical wisdom and the genuine sort of autonomy we believe ourselves to have, sexbots couldn’t possibly be participants in complete sex. If robots became a replacement for human sexual partners, then complete sex would be an important good on which we would miss out.
One part of Taddeo’s worry is quite similar. Her focus was eros—an ancient Greek conception of love discussed by Plato. As Taddeo characterized it, eros is a maddening kind of love, the experience of which shapes a person’s character. Taddeo’s concern with eros is similar to Ess’s concern with complete sex. In both cases, the worry is that, to the extent that sexbots replace human partners, a distinctively valuable sort of experience would be impossible. Taddeo adduced several other pressing worries as well. One was that female sexbots would exacerbate a problem that we already find caused by pornography—specifically, the promotion of unrealistic stereotypes about women and the misogyny that this might produce. She also noted that reliance on robots for sex might complicate our romantic relationships in unfortunate ways.
Vallor’s primary worry about sexbots is similar to Ess’s concern about complete sex and Taddeo’s concern about eros. Like Ess and Taddeo, Vallor suggested that sexbots might displace some important human good. However, instead of focusing on the intrinsically desirable forms of sex and love on which we might miss out, Vallor focused on the processes of maturation and growth that come from having sex (whether good or bad) with real humans. Our human bodies are fleshy, hairy, moist, and imperfect in a variety of ways. When people are sexually immature, they react to these features of bodies with fear and disgust. Sex with humans, Vallor suggested, is part of the process of leaving behind these immature reactions. She noted that failure to outgrow these sorts of fear and disgust is associated with vices like racism, misogyny, and self-loathing. Furthermore, the persistence of such fear and disgust can inhibit the development of those virtues—like empathy, care and courage—that have essential practical ties to our bodies. Sexbots offer the possibility of sexual gratification without engaging biological realities. Hence, use of sexbots, to the extent it replaced sex with human persons, might result in a stunted maturation process, producing persons who were more vicious and less virtuous.
The notes of caution sounded by the panelists were generally compelling. Not only does new technology absorb our time and attention, it necessarily alters and displaces activities we otherwise would have continued. This is unfortunate when the displaced activities were valuable—or, more precisely, when the old activities were more valuable than the activities that displaced them. But it was not altogether clear to all of the audience members that sex with robots was less valuable overall than the traditional option. A question along these lines was posed to the panel by Deborah Johnson, one of the conference’s keynote speakers. Her question was directed primarily at Vallor’s point about how sex facilitates the development of certain virtues. Johnson suggested that, perhaps, the elimination of traditional forms of sex would eliminate any important role for these particular virtues in sexual relations, and, perhaps, we could still develop these virtues as they applied to other contexts. And, if so, a world in which we lacked both traditional sex and the virtuous traits acquired through it might be just as good as our present situation. In response, Vallor held that the practical scope of the virtuous traits sex helps us learn is broader than just sexual activity, and so their loss would be felt in other areas, too.
Vallor’s response seems correct, though the issue ultimately depends on psychological facts about exactly what experiences the acquisition of particular character traits requires. Regardless, Johnson’s objection is exactly the sort of challenge we should take seriously. As technological change creates new sources of value at the expense of earlier sources, too often the focus is exclusively on what has been lost or exclusively on what has been gained. In contrast, a better approach looks at both, comparing the old and the new. This, of course, is not easy, and sometimes the old and the new will be incommensurable. Even so, it is vital that we bring the comparisons, the trade-offs, into clear view.
Issues about autonomous machines and sexbots bring out two aspects of the uneasiness we experience as artificial entities become more like humans. For one thing, we care how machines behave. As they become more autonomous and less subject to our direct control, we want their behavior to serve our needs without endangering us. Secondly, we care about how the behavior of machines changes us—whether enhancing or supplanting our cherished human capacities and traits.
Reflection shows that the two sets of issues are bound together in complicated ways. When we wonder what sorts of changes are good for us, this calls the very notions of harm and danger into question. The risk of an industrial robot ripping a person in half is just one sort of danger. But we might well consider the potential of sexbots to arrest our development of virtue a different, but also quite fearsome, sort of danger. Furthermore, although machine ethics must attend to how machines’ choices affect persons’ health and physical well-being, a richer machine ethics would also consider how the actions of robots affect persons’ character, psychological well-being, and overall quality of life.
As more intelligent machines are developed, no doubt, we will encounter many new situations that raise difficult questions about the relationship between machine ethics and human ethics. The philosophers of IACAP and INSEIT will have plenty of important work to do for years to come.
Owen King is a visiting instructor in the Oberlin College Department of Philosophy. He is finishing a PhD in philosophy at The Ohio State University. His research focuses on well-being and the nature of value.