Censorship versus Privacy: the implications of the “right to be forgotten”

Can a digital tattoo be temporary? A groundbreaking court case out of Europe may be redefining the boundaries shaping online identity. Google’s recent decision to respect the “right to be forgotten” has triggered a debate spotlighting the intersection of freedom of speech, censorship, and privacy rights.


The case was initially introduced by Mario Costeja Gonzalez, a Spanish businessman irked that a Google search for his name still turned up a 1998 newspaper article detailing his past financial woes, despite the fact that his debts had been paid off years before. A Spanish court referred his case to the Court of Justice of the European Union in Luxembourg, which ruled in favor of the complainant. The historic decision determined that search engines do indeed have a duty to ensure that data deemed “inadequate, irrelevant or no longer relevant” does not appear in their results. In a more sweeping ruling, the court also declared that ordinary citizens have a right to request that search engines remove links to sites that may provide excessive personal data about them.


The controversial ruling triggered a firestorm of debate, and has pitted freedom of speech advocates against champions of online privacy. For those concerned primarily with privacy, the landmark decision places unprecedented autonomy into the hands of private citizens, allowing them to manage their online reputation and protect their personal data to an unparalleled degree. However, advocates of freedom of speech have been quick to decry the decision as a potential mechanism through which free expression may be curtailed. Many have raised concerns that forcing Google to remove certain links from its search results is in fact a dangerous act of censorship, especially if groups who stand to profit from this clause are able to effectively block access to certain information.


The terms “inadequate, irrelevant or no longer relevant” are both sweeping in scope and arbitrary in their definition. For the moment, it remains unclear how Google plans to define these terms and address the takedown requests that have since flooded its servers.


What are the implications of being able to alter the search results associated with your name? Much of the debate appears to center on the terms used to define the process. While “managing” and “editing” imply granting greater freedom to individuals, terms such as “censoring” and “controlling” suggest a world in which access to information is obstructed.


Further questions arise when the “right to be forgotten” begins to infringe on “the right to know.” While the removal of some content may follow logically in terms of relevance – such as the Spanish businessman still associated with past debts long since paid off – what are the implications of erasing content that may have consequences for our safety? Allowing victims of revenge porn and cyberbullying, who are often unprotected by domestic legislation, to disassociate themselves from defamatory or harmful content would provide them with an opportunity to reclaim their identity online. However, if an individual is seeking to restrict access to information associating them with a past of violence or corruption, the lines become slightly blurred. Furthermore, when one considers the possibility that private companies or sovereign states could gain the ability to tweak search engine results, the “right to be forgotten” begins to sound like an excerpt from George Orwell’s dystopian novel 1984, where the feared “Ministry of Truth” could decide which facts were acceptable.


Given the diversity of takedown requests, Google may have to operate on a case-by-case basis when determining whether links should be removed. Considering the amount of information available about the average individual online, along with the sheer number of languages and platforms through which information is shared, processing removal requests will require a massive mobilization of resources on Google’s part. The potentially massive backlog has led some to suggest that the tech company take an “all or nothing” approach for maximum efficiency. This would mean that individuals would simply be erased completely from search results, rather than having the option to nitpick which links they would like removed.


However, given the importance of personal branding online, disappearing completely from search results may be more harmful to your image than one unflattering link. Should this option be the only one made available, individuals may be faced with the prospect of picking the lesser of two evils. At the end of the day, what is more detrimental: having an imperfect image online, or a nonexistent one?


While no comparable cases have been recorded in Canada, it is only a matter of time before the debate reaches North America. Should similar legislation be initiated domestically, there may be concrete implications for the identity of Canadians online.


What are your thoughts on the “right to be forgotten” debate? Should controlling your search results be a fundamental right, or is it an act of censorship? Share your thoughts below!

One response to “Censorship versus Privacy: the implications of the “right to be forgotten””

  1. John Harvey

    I think a ‘right to add context’ is more appropriate than a ‘right to be forgotten’. To use the example of the businessman, he should have a right to defend himself–talking about the circumstances of the debt, mentioning that he’s been debt-free for 20 years–but a ‘right to be forgotten’ seems too broad to me. Give him 500 words which will appear under the result.

    YouTube takes down a huge number of videos every day, due to copyright violation–music, gameplay footage, rebroadcasting of TV, etc.–and while a lot of those requests are legitimate, every now and again, someone’s (totally legal) video is taken down. These proven imperfections in similar systems, combined with the fact that the overwhelming majority of takedown requests are generated by large corporations, as opposed to private individuals, make me think that this would quickly become a tool used to hide unpleasant (but true!) information. The system in place doesn’t benefit some guy who put music on the internet–he doesn’t have time to write out all the takedown requests. If you can afford to drop a few thousand dollars to protect your rights, you’re fine, but if you aren’t, you’re stuck.

    Moreover, I don’t think this is an issue for Google. If you have a problem with the content someone is hosting on their website, get in touch with them: that’s typically how revenge porn/cyberbullying is dealt with. The law is clear–you’ve got to take that stuff down. Adding another avenue to remove online content doesn’t make sense to me. Google’s a search engine, and specifically asking them not to do their job (showing search results) doesn’t feel right. As has always been the case with issues of censorship, the answer to bad speech is more speech. If you don’t like what comes up when you get Googled, buy some AdWords.
