Search and Distortion

My brilliant colleague Hannah Carnegy-Arbuthnott has written a really interesting paper, 'Privacy, Publicity, and the Right to be Forgotten'.

She argues that as well as claims for privacy and against defamation, we also have legitimate moral claims against distortion, where distortion is:

the presentation of true information about someone 'in a way that suggests it would be appropriate to hold them accountable for it, when it is no longer appropriate to do so' (2).

Hannah uses this to give a potential normative justification for the Google Spain [1] ruling: the apparently paradoxical decision that Google had to take down search results pointing to newspaper articles about a particular foreclosure, while the original articles could stay online and, furthermore, searches for the person's name could still bring up newspaper coverage of the court ruling that the original results had to be removed, even though that coverage also reported the details of the past foreclosure.

Her argument is that the way Google presents search results, in particular its ranking algorithm and the user-facing explanation of how it works, creates a context in which highly ranked information about an individual is presented as something for which it is appropriate to hold that person accountable. This can amount to distortion in cases where it is inappropriate, or no longer appropriate, to hold the subject accountable.

Three Cases

Hannah's working example is a person searching ('the searcher') for information about another private individual ('the subject') in order to find out about them before making some choice or decision. This might be going on a date or engaging in a business deal. [2] Here distortion can be a real harm, though of different degrees. Let's consider three cases where the search happens and distortion follows, distinguished by how well informed the searcher is about how Google ranks its results.

In the first case the searcher is unreflective and possibly naïve. They do not consider the decisions behind the ordering of results and how that might skew their information gathering, merely looking at the first few results of any search they conduct. Moreover, they may not pay attention to the dates on reports and websites (where there are any), treating all search results as equally current.

In the second case the searcher approximates to the legal fiction of a 'reasonable person'. They are aware that Google ranks results by an algorithm, may even have glanced at or heard about Google's explanation of how this works, and take at face value that Google is using advanced technology to provide the results most relevant to what they were looking for. They neither understand the technology nor consider that Google has very little information about the specific context of their search and thus does not know their actual intentions.

In the third case the searcher is sophisticated and expert. They are aware that Google makes money by selling advertising and may even suspect that Google takes payments to manipulate search results. They certainly understand that Google search costs a lot of money and that 'if you aren't paying, you are the product'.

If we ask about the balance of responsibility - between Google and the searcher - for the consequent distortion, one might think that responsibility rests primarily with the searcher in case three, and agree with Hannah that it rests to a greater extent with Google in case two, warranting the 'right to be forgotten' judgement of Google Spain. It then seems natural to conclude that even less responsibility attaches to the searcher in case one, so that the Google Spain ruling also protects the naïve searcher from being the unwitting cause of harm - distortion - to the subject of the search.

Paternalism and Naïvety

In case one the searcher is certainly the unwitting cause of harm, but does that make them also the innocent cause of that harm?

There is a line of thought which goes like this: if Google is primarily responsible for a harm caused when a searcher has read their documentation (i.e. the judgement of Google Spain), then they are equally responsible when the searcher hasn't read it. The uninformedness of the searcher would only absolve Google of responsibility to any degree if the searcher's reading the documentation would have prevented the harm. The general principle this reasoning appeals to is that if being better informed (within reason) would not have led to someone making an ethically better decision, then there can be no ethical blame attached to their being uninformed. Harmless (to others) ignorance is not ethically culpable.

While very attractive, this reasoning is subtly paternalistic in the case of search and distortion. The subject of the search has a claim against distortion precisely because the searcher holds them (inappropriately) accountable for something. Holding someone accountable is an ethically significant choice and one that should not be made lightly. The naïve and unreflective searcher is making this choice without making the minimum required effort to ensure they are not doing it inappropriately. Even if making that effort would not have changed their decision (as in case two), their choice to hold the subject accountable without making that minimum effort is an ethical fault.

For that reason we can conclude that the subject of distortion has a greater claim against the naïve and unreflective searcher than against the more informed searcher of case two. To ignore this is to infantilise the unreflective searcher, to absolve them of responsibility as if they lacked full ethical agency. It is paternalistic.

Consumer Protection

Laws like GDPR are intended to protect consumers and citizens from exploitation and harm to their wellbeing. Hannah's argument is that the 'right to be forgotten' is intended to protect the subject of Google searches from the harm of distortion. The right to be forgotten is primarily aimed at harms caused by unreflective searchers, since they are the overwhelming majority. It has the side-effect of protecting the naïve and unreflective consumer of Google search rankings from the harm of making an ethical error, of inappropriately holding someone accountable for something.

While this is only a side-effect, we should be very careful about the state taking the role of protecting citizens from their own ethically bad decisions when the better decision would have caused no greater harm. Liberal political theory sees interventions in individual free choice as only justified in order to prevent harm to others, but a reasonably well informed decision in the case we are considering would have equally resulted in distortion. So there is no liberal ground for intervention in case one.

Surely the properly liberal response to the potential of Google search to cause distortion, as it is used by 99.99% of consumers, is not to focus on the rare case two, but instead to try to make the majority more sensitive to the ethical features of the choices they make on the basis of Google search. [3]


  1. Case C-131/12, Google Spain SL and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (2014). Henceforth 'Google Spain'. 

  2. Hannah's actual examples are dating someone met through an app and deciding to interview a job applicant. I am not sure I agree with the implication that the latter is ethically acceptable, even if it is a common practice. What seems to make such searches acceptable as part of a particular decision-making process is that the searcher is taking some sort of risk - emotional or financial - and there is no duty of impartiality or norm of procedural fairness.

  3. There is another, different, problem with Google Spain: as a remedy for distortion, the right to be forgotten looks like a mistake. The way Google summarises its ranking algorithm suggests the problem is not the inclusion of information but the weighting it is given: "To give you the most useful information, Search algorithms look at many factors and signals, including the words of your query, relevance and usability of pages, expertise of sources, and your location and settings. The weight applied to each factor varies depending on the nature of your query. For example, the freshness of the content plays a bigger role in answering queries about current news topics than it does about dictionary definitions." This clearly states that the ranking for older sources depends upon the content, so we can conclude that a news story about a foreclosure many years ago would only be ranked highly if Google's algorithm judged it to be the kind of content that doesn't have a 'use before date'. But the point about distortion is precisely that a lot of personal information about one's past does have a 'use before date', and court judgements on matters like a foreclosure fall into that category, at least when the search term is just a name and does not specify an interest in court cases. Under the assumption that the law should protect individuals against distortion created by the ranking of Google search results, the appropriate action would be to require the algorithm to be adjusted to take into account that it is no longer appropriate to hold individuals accountable for certain types of information.


More from Tom Stoneham