6 Insidious Ways Surveillance Changes the Way We Think and Act
When I moved to a Czech village in 1994 to teach English, I was fascinated by the cultural differences between Americans like me and my new community. At that time, the oppressive memory of the dreaded Communist secret police, the StB, was still fresh. (Check out a haunting series of street photos snapped by agents in their heyday.) As a brash young expat, born after the era of McCarthy and J. Edgar Hoover, I understood little of what it felt like to live under constant surveillance.
The Czechs knew better. Several decades under the watchful eyes of the StB (and before that, the spies of the Habsburg Empire) had molded their attitudes and behavior in ways that were both subtle and profound. They were on their guard with newcomers. When you got to know them, you might sense a tendency toward fatalism about the future. Signature Czech traits included a sophisticated gallows humor and a sharp sense of the absurd, honed by a lifetime of experiencing Kafka-esque political conditions (Kafka himself was a Czech).
Then there was that subversive streak. When you gained their trust, Czechs would often gleefully show you their old smuggled rock-and-roll records or describe a forbidden radio set up in some corner of the house. These proud tales of rebellious triumph over the StB were cast against stories of horror, like a student who told me of the day her daddy disappeared after “talking to the oven” where a radio was hidden. For most Czechs, the salient lesson of the police state was an us-against-them mentality. Only sometimes you didn’t know who “they” were.
1994 was the very beginning of the Information Age, and it has turned out rather differently than many expected. Instead of information made available for us, the key feature seems to be information collected about us. Rather than granting us anonymity and privacy with which to explore a world of facts and data, our own data is relentlessly and continually collected and monitored. The wondrous things that were supposed to make our lives easier—mobile devices, Gmail, Skype, GPS, and Facebook—have become tools to track us, for whatever purposes the trackers decide. We have been happily shopping for the bars to our own prisons, one product at a time.
Researchers have long known that there are serious psychological consequences to being surveilled, and you can be sure that it's changing us, both as a society and as individuals. It’s throwing us off balance, heightening some characteristics and inhibiting others, and tailoring our behavior sometimes to show what the watcher wants to see, and other times to actively rebel against a condition that feels intrusive and disempowering.
If you think about it, "Prism" is the perfect name for a secret program of extensive watching that will shift our perspective and potentially fracture our view of each other and of ourselves as citizens. Public opinion is now sorting itself out, and we don’t yet know how Americans will come to feel about the new revelations of spying on the part of the government, private contractors, and their enablers. But whether we like it or not, surveillance is now a factor in how we think and act. Here are some of the things that can happen when watching becomes the norm, a little map to the surveillance road ahead.
1. Shifting power dynamics: When an NSA agent sorts through our personal data, he makes judgments about us—what category to place us in, how to interpret and predict our behavior. He can manipulate, manage and influence us in ways we don’t even notice. He gains opportunities for discrimination, coercion and selective enforcement of laws. Because the analysis of metadata results in a high number of false positives, he may target us even if our activities are perfectly blameless from his perspective.
As Michel Foucault and other social theorists have realized, the watcher/watched scenario is chiefly about power. It amplifies and exaggerates the sense of power in the person doing the watching, and on the flip side, enhances the sense of powerlessness in the watched. Foucault knew that knowledge is linked to power in insidious ways. Each time the watcher observes, she gains new knowledge about the watched, and correspondingly increases her power. That power is then used to shape reality, and the watcher’s knowledge becomes “truth.” Other perspectives are delegitimized, or worse, criminalized.
2. Criminal activity: Every apologist for the surveillance state will make the claim that spying on citizens protects us from things like terrorism, crime and violence. That may indeed be true. What is also true is that surveillance can be used just as easily to commit a crime as to prevent it. History shows us ample cases of governments, including our own, using surveillance against their own people in unlawful ways, in some cases launching attacks just as devastating as those feared from outsiders.
Surveillance also turns citizens into criminals, either by distorting laws to criminalize behavior that was once considered lawful, or by breeding hostility and rebellion in the populace, which can lead to crime.
Today’s private contractors also have incentives to use surveillance to commit crimes outside of any political agenda. How about a little insider trading? How about stealing business ideas? How about using collected data to sexually prey upon others? To blackmail for any purpose imaginable? To sell our information to the highest bidder? For every Snowden who balks at the use of data collected for surveillance, you can bet there are two other contractors using it to enrich or empower themselves. And unlike with elected officials, we have no way even to attempt to hold them accountable.
3. Diminished citizenship: In his article, “The Dangers of Surveillance,” Neil M. Richards warns that state scrutiny can chill the exercise of our civil liberties and inhibit us from experimenting with “new, controversial, or deviant ideas.” Intellectual privacy, he argues, is key to a free society. Surveillance protects the status quo and serves as a brake on change.
We’ve begun to see this in the places where we expect intellectual freedom to be most strongly protected. Recently, Harvard University administrators were found to be spying on the email accounts of 16 deans while trying to find the source of a media leak, an action which curtails cherished academic freedom. The U.S. government was outed as spying on journalists at the Associated Press, behavior which dampens reporters’ enthusiasm for investigating the government’s secrets and analyzing its actions.
When intellectual privacy, freedom of speech, and freedom of the press are restricted through surveillance, powerful ideas about truth, values, and how we live are increasingly imposed from the top down, rather than generated by citizens from the bottom up. When Big Brother is watching, Big Brother decides what's best for us. Citizens become apathetic, disengaged, and worst of all, feel a loss of dignity in their very status as citizens.
4. Suspicious minds: Surveillance makes everyone seem suspicious. The watched become instilled with an air of criminality, and eventually begin to feel culpable. Psychological researchers have found that surveillance tends to create perceptions and expectations of dishonesty.
The growing mutual distrust between the watcher and the watched leads to hostility and paranoia. One of the key features of Jeremy Bentham’s Panopticon was the notion that the inmates of an institution based on his design, such as a prison, would never be able to tell whether they were being watched or not, creating a heightened sense of unease. The tradition of secret police operatives and informants blending in with citizens prevents the watched from knowing the identity of the watcher, as does the distance of technology firms and government entities spying through computers and communication devices. All of these can breed an unhealthy social atmosphere as well as an individual sense of discomfort and suspicion.
5. Divided society: In his book, Brain on Fire, Tim McCormack discusses the class divisions that tend to arise between the watcher and the watched. Rights, privileges and power become distributed according to who has the most access to observation. The watcher groups categorize people based on who most arouses suspicion, a criterion which may be based on various prejudices or political agendas.
A watcher class may emerge which protects its interests by more watching, and more punishment and control of the watched. It increasingly wields power over technology, financial and legal systems, the political realms, and military capabilities. Those who hold power may become invisible to all but a few insiders, a nightmare scenario Orwell imagined in 1984. (Maybe that’s why sales of Orwell’s book have skyrocketed in the wake of revelations about Prism.)
6. Unhappiness: Finally, though you will not hear many pundits talking about it, surveillance tends to make us unhappy. Bentham's Panopticon was designed to inflict pain on a few (those in prison) for the sake of the happiness of the many in the community. But when everyone is being watched, everyone is experiencing pain, even, perhaps, the watcher. The brilliant German film "The Lives of Others" depicts the mental anguish of an agent of the East German secret police as he spies on his neighbors.
Some kinds of surveillance may make us feel happier, at least initially. The presence of cameras on the street, for example, may give us a comforting sense of security (even though the cameras may actually be doing nothing to stop crime). But when we discover that we are being watched in ways we never imagined, for purposes we can hardly fathom, our happiness decreases. Bosses reading our email, technology firms tracking our Internet searches, and government agencies monitoring our communications for secret purposes make us feel anxious and resentful. Systematic surveillance may squelch our creativity as we are managed into conformity. We come to distrust each other, and our sense of unfairness rises.
The goal of using surveillance to produce a happy and stable state may well beget, perversely, the opposite: a society of edgy, unhappy beings whose sense of themselves is chronically diminished. Not exactly a recipe for Utopia.
Lynn Parramore is contributing editor at AlterNet. She is cofounder of Recessionwire, founding editor of New Deal 2.0, and author of "Reading the Sphinx: Ancient Egypt in Nineteenth-Century Literary Culture." She received her Ph.D. in English and cultural theory from NYU. Follow her on Twitter @LynnParramore.
Why Privacy Matters Even if You Have 'Nothing to Hide'
By Daniel J. Solove
When the government gathers or analyzes personal information, many people say they're not worried. "I've got nothing to hide," they declare. "Only if you're doing something wrong should you worry, and then you don't deserve to keep it private."
The nothing-to-hide argument pervades discussions about privacy. The data-security expert Bruce Schneier calls it the "most common retort against privacy advocates." The legal scholar Geoffrey Stone refers to it as an "all-too-common refrain." In its most compelling form, it is an argument that the privacy interest is generally minimal, thus making the contest with security concerns a foreordained victory for security.
The nothing-to-hide argument is everywhere. In Britain, for example, the government has installed millions of public-surveillance cameras in cities and towns, which are watched by officials via closed-circuit television. In a campaign slogan for the program, the government declares: "If you've got nothing to hide, you've got nothing to fear." Variations of nothing-to-hide arguments frequently appear in blogs, letters to the editor, television news interviews, and other forums. One blogger in the United States, in reference to profiling people for national-security purposes, declares: "I don't mind people wanting to find out things about me, I've got nothing to hide! Which is why I support [the government's] efforts to find terrorists by monitoring our phone calls!"
The argument is not of recent vintage. One of the characters in Henry James's 1888 novel, The Reverberator, muses: "If these people had done bad things they ought to be ashamed of themselves and he couldn't pity them, and if they hadn't done them there was no need of making such a rumpus about other people knowing."
I encountered the nothing-to-hide argument so frequently in news interviews, discussions, and the like that I decided to probe the issue. I asked the readers of my blog, Concurring Opinions, whether there are good responses to the nothing-to-hide argument. I received a torrent of comments:
- My response is "So do you have curtains?" or "Can I see your credit-card bills for the last year?"
- So my response to the "If you have nothing to hide ... " argument is simply, "I don't need to justify my position. You need to justify yours. Come back with a warrant."
- I don't have anything to hide. But I don't have anything I feel like showing you, either.
- If you have nothing to hide, then you don't have a life.
- Show me yours and I'll show you mine.
- It's not about having anything to hide, it's about things not being anyone else's business.
- Bottom line, Joe Stalin would [have] loved it. Why should anyone have to say more?
On the surface, it seems easy to dismiss the nothing-to-hide argument. Everybody probably has something to hide from somebody. As Aleksandr Solzhenitsyn declared, "Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find what it is." Likewise, in Friedrich Dürrenmatt's novella "Traps," which involves a seemingly innocent man put on trial by a group of retired lawyers in a mock-trial game, the man inquires what his crime shall be. "An altogether minor matter," replies the prosecutor. "A crime can always be found."
One can usually think of something that even the most open person would want to hide. As a commenter to my blog post noted, "If you have nothing to hide, then that quite literally means you are willing to let me photograph you naked? And I get full rights to that photograph—so I can show it to your neighbors?" The Canadian privacy expert David Flaherty expresses a similar idea when he argues: "There is no sentient human being in the Western world who has little or no regard for his or her personal privacy; those who would attempt such claims cannot withstand even a few minutes' questioning about intimate aspects of their lives without capitulating to the intrusiveness of certain subject matters."
But such responses attack the nothing-to-hide argument only in its most extreme form, which isn't particularly strong. In a less extreme form, the nothing-to-hide argument refers not to all personal information but only to the type of data the government is likely to collect. Retorts to the nothing-to-hide argument about exposing people's naked bodies or their deepest secrets are relevant only if the government is likely to gather this kind of information. In many instances, hardly anyone will see the information, and it won't be disclosed to the public. Thus, some might argue, the privacy interest is minimal, and the security interest in preventing terrorism is much more important. In this less extreme form, the nothing-to-hide argument is a formidable one. However, it stems from certain faulty assumptions about privacy and its value.
To evaluate the nothing-to-hide argument, we should begin by looking at how its adherents understand privacy. Nearly every law or policy involving privacy depends upon a particular understanding of what privacy is. The way problems are conceived has a tremendous impact on the legal and policy solutions used to solve them. As the philosopher John Dewey observed, "A problem well put is half-solved."
Most attempts to understand privacy do so by attempting to locate its essence—its core characteristics or the common denominator that links together the various things we classify under the rubric of "privacy." Privacy, however, is too complex a concept to be reduced to a singular essence. It is a plurality of different things that do not share any one element but nevertheless bear a resemblance to one another. For example, privacy can be invaded by the disclosure of your deepest secrets. It might also be invaded if you're watched by a peeping Tom, even if no secrets are ever revealed. With the disclosure of secrets, the harm is that your concealed information is spread to others. With the peeping Tom, the harm is that you're being watched. You'd probably find that creepy regardless of whether the peeper finds out anything sensitive or discloses any information to others. There are many other forms of invasion of privacy, such as blackmail and the improper use of your personal data. Your privacy can also be invaded if the government compiles an extensive dossier about you.
Privacy, in other words, involves so many things that it is impossible to reduce them all to one simple idea. And we need not do so.
In many cases, privacy issues never get balanced against conflicting interests, because courts, legislators, and others fail to recognize that privacy is implicated. People don't acknowledge certain problems, because those problems don't fit into a particular one-size-fits-all conception of privacy. Regardless of whether we call something a "privacy" problem, it still remains a problem, and problems shouldn't be ignored. We should pay attention to all of the different problems that spark our desire to protect privacy.
To describe the problems created by the collection and use of personal data, many commentators use a metaphor based on George Orwell's Nineteen Eighty-Four. Orwell depicted a harrowing totalitarian society ruled by a government called Big Brother that watches its citizens obsessively and demands strict discipline. The Orwell metaphor, which focuses on the harms of surveillance (such as inhibition and social control), might be apt to describe government monitoring of citizens. But much of the data gathered in computer databases, such as one's race, birth date, gender, address, or marital status, isn't particularly sensitive. Many people don't care about concealing the hotels they stay at, the cars they own, or the kind of beverages they drink. Frequently, though not always, people wouldn't be inhibited or embarrassed if others knew this information.
Another metaphor better captures the problems: Franz Kafka's The Trial. Kafka's novel centers around a man who is arrested but not informed why. He desperately tries to find out what triggered his arrest and what's in store for him. He finds out that a mysterious court system has a dossier on him and is investigating him, but he's unable to learn much more. The Trial depicts a bureaucracy with inscrutable purposes that uses people's information to make important decisions about them, yet denies the people the ability to participate in how their information is used.
The problems portrayed by the Kafkaesque metaphor are of a different sort than the problems caused by surveillance. They often do not result in inhibition. Instead they are problems of information processing—the storage, use, or analysis of data—rather than of information collection. They affect the power relationships between people and the institutions of the modern state. They not only frustrate the individual by creating a sense of helplessness and powerlessness, but also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.
Legal and policy solutions focus too much on the problems under the Orwellian metaphor—those of surveillance—and aren't adequately addressing the Kafkaesque problems—those of information processing. The difficulty is that commentators are trying to conceive of the problems caused by databases in terms of surveillance when, in fact, those problems are different.
Commentators often attempt to refute the nothing-to-hide argument by pointing to things people want to hide. But the problem with the nothing-to-hide argument is the underlying assumption that privacy is about hiding bad things. By accepting this assumption, we concede far too much ground and invite an unproductive discussion about information that people would very likely want to hide. As the computer-security specialist Schneier aptly notes, the nothing-to-hide argument stems from a faulty "premise that privacy is about hiding a wrong." Surveillance, for example, can inhibit such lawful activities as free speech, free association, and other First Amendment rights essential for democracy.
The deeper problem with the nothing-to-hide argument is that it myopically views privacy as a form of secrecy. In contrast, understanding privacy as a plurality of related issues demonstrates that the disclosure of bad things is just one among many difficulties caused by government security measures. To return to my discussion of literary metaphors, the problems are not just Orwellian but Kafkaesque. Government information-gathering programs are problematic even if no information that people want to hide is uncovered. In The Trial, the problem is not inhibited behavior but rather a suffocating powerlessness and vulnerability created by the court system's use of personal data and its denial to the protagonist of any knowledge of or participation in the process. The harms are bureaucratic ones—indifference, error, abuse, frustration, and lack of transparency and accountability.
One such harm, for example, which I call aggregation, emerges from the fusion of small bits of seemingly innocuous data. When combined, the information becomes much more telling. By joining pieces of information we might not take pains to guard, the government can glean information about us that we might indeed wish to conceal. For example, suppose you bought a book about cancer. This purchase isn't very revealing on its own, for it indicates just an interest in the disease. Suppose you bought a wig. The purchase of a wig, by itself, could be for a number of reasons. But combine those two pieces of information, and now the inference can be made that you have cancer and are undergoing chemotherapy. That might be a fact you wouldn't mind sharing, but you'd certainly want to have the choice.
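The logic of aggregation can be sketched in a few lines of code: each record is innocuous on its own, but a simple rule applied to the joined set produces a sensitive inference. The purchase data and the inference rule below are invented purely for illustration, echoing Solove's cancer-book-and-wig example; no real system is being described.

```python
# Illustrative sketch of "aggregation": individually innocuous records,
# when combined, support a sensitive inference. All data is hypothetical.

purchases = {
    "alice": {"book on cancer", "wig"},
    "bob": {"book on cancer"},   # on its own, just an interest in the disease
    "carol": {"wig"},            # on its own, could be costume or fashion
}

def infer_chemotherapy(items):
    """Neither item is revealing alone; together they suggest chemotherapy."""
    return "book on cancer" in items and "wig" in items

# Only the person with BOTH innocuous records gets flagged.
flagged = [person for person, items in purchases.items()
           if infer_chemotherapy(items)]
print(flagged)
```

The point of the sketch is that the harm lives in the join, not in any single record, which is why "that purchase isn't very revealing on its own" is no defense against aggregation.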
Another potential problem with the government's harvest of personal data is one I call exclusion. Exclusion occurs when people are prevented from having knowledge about how information about them is being used, and when they are barred from accessing and correcting errors in that data. Many government national-security measures involve maintaining a huge database of information that individuals cannot access. Indeed, because they involve national security, the very existence of these programs is often kept secret. This kind of information processing, which blocks subjects' knowledge and involvement, is a kind of due-process problem. It is a structural problem, involving the way people are treated by government institutions and creating a power imbalance between people and the government. To what extent should government officials have such a significant power over citizens? This issue isn't about what information people want to hide but about the power and the structure of government.
A related problem involves secondary use. Secondary use is the exploitation of data obtained for one purpose for an unrelated purpose without the subject's consent. How long will personal data be stored? How will the information be used? What could it be used for in the future? The potential uses of any piece of personal information are vast. Without limits on or accountability for how that information is used, it is hard for people to assess the dangers of the data's being in the government's control.
Yet another problem with government gathering and use of personal data is distortion. Although personal information can reveal quite a lot about people's personalities and activities, it often fails to reflect the whole person. It can paint a distorted picture, especially since records are reductive—they often capture information in a standardized format with many details omitted.
For example, suppose government officials learn that a person has bought a number of books on how to manufacture methamphetamine. That information makes them suspect that he's building a meth lab. What is missing from the records is the full story: The person is writing a novel about a character who makes meth. When he bought the books, he didn't consider how suspicious the purchase might appear to government officials, and his records didn't reveal the reason for the purchases. Should he have to worry about government scrutiny of all his purchases and actions? Should he have to be concerned that he'll wind up on a suspicious-persons list? Even if he isn't doing anything wrong, he may want to keep his records away from government officials who might make faulty inferences from them. He might not want to have to worry about how everything he does will be perceived by officials nervously monitoring for criminal activity. He might not want to have a computer flag him as suspicious because he has an unusual pattern of behavior.
The nothing-to-hide argument focuses on just one or two particular kinds of privacy problems—the disclosure of personal information or surveillance—while ignoring the others. It assumes a particular view about what privacy entails, to the exclusion of other perspectives.
It is important to distinguish here between two ways of justifying a national-security program that demands access to personal information. The first way is not to recognize a problem. This is how the nothing-to-hide argument works—it denies even the existence of a problem. The second is to acknowledge the problems but contend that the benefits of the program outweigh the privacy sacrifice. The first justification influences the second, because the low value given to privacy is based upon a narrow view of the problem. And the key misunderstanding is that the nothing-to-hide argument views privacy in this troublingly particular, partial way.
Investigating the nothing-to-hide argument a little more deeply, we find that it looks for a singular and visceral kind of injury. Ironically, this underlying conception of injury is sometimes shared by those advocating for greater privacy protections. For example, the University of South Carolina law professor Ann Bartow argues that in order to have a real resonance, privacy problems must "negatively impact the lives of living, breathing human beings beyond simply provoking feelings of unease." She says that privacy needs more "dead bodies," and that privacy's "lack of blood and death, or at least of broken bones and buckets of money, distances privacy harms from other [types of harm]."
Bartow's objection is actually consistent with the nothing-to-hide argument. Those advancing the nothing-to-hide argument have in mind a particular kind of appalling privacy harm, one in which privacy is violated only when something deeply embarrassing or discrediting is revealed. Like Bartow, proponents of the nothing-to-hide argument demand a dead-bodies type of harm.
Bartow is certainly right that people respond much more strongly to blood and death than to more-abstract concerns. But if this is the standard to recognize a problem, then few privacy problems will be recognized. Privacy is not a horror movie, most privacy problems don't result in dead bodies, and demanding evidence of palpable harms will be difficult in many cases.
Privacy is often threatened not by a single egregious act but by the slow accretion of a series of relatively minor acts. In this respect, privacy problems resemble certain environmental harms, which occur over time through a series of small acts by different actors. Although society is more likely to respond to a major oil spill, gradual pollution by a multitude of actors often creates worse problems.
Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone. When the government starts monitoring the phone numbers people call, many may shrug their shoulders and say, "Ah, it's just numbers, that's all." Then the government might start monitoring some phone calls. "It's just a few phone calls, nothing more." The government might install more video cameras in public places. "So what? Some more cameras watching in a few more places. No big deal." The increase in cameras might lead to a more elaborate network of video surveillance. Satellite surveillance might be added to help track people's movements. The government might start analyzing people's bank records. "It's just my deposits and some of the bills I pay—no problem." The government may then start combing through credit-card records, then expand to Internet-service providers' records, health records, employment records, and more. Each step may seem incremental, but after a while, the government will be watching and knowing everything about us.
"My life's an open book," people might say. "I've got nothing to hide." But now the government has large dossiers of everyone's activities, interests, reading habits, finances, and health. What if the government leaks the information to the public? What if the government mistakenly determines that based on your pattern of activities, you're likely to engage in a criminal act? What if it denies you the right to fly? What if the government thinks your financial transactions look odd—even if you've done nothing wrong—and freezes your accounts? What if the government doesn't protect your information with adequate security, and an identity thief obtains it and uses it to defraud you? Even if you have nothing to hide, the government can cause you a lot of harm.
"But the government doesn't want to hurt me," some might argue. In many cases, that's true, but the government can also harm people inadvertently, due to errors or carelessness.
When the nothing-to-hide argument is unpacked, and its underlying assumptions examined and challenged, we can see how it shifts the debate to its terms, then draws power from its unfair advantage. The nothing-to-hide argument speaks to some problems but not to others. It represents a singular and narrow way of conceiving of privacy, and it wins by excluding consideration of the other problems often raised with government security measures. When engaged directly, the nothing-to-hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing-to-hide argument, in the end, has nothing to say.
Daniel J. Solove is a professor of law at George Washington University. This essay is an excerpt from his new book, Nothing to Hide: The False Tradeoff Between Privacy and Security, published this month by Yale University Press.
Panopticism is a social theory originally developed by French philosopher Michel Foucault in his book, Discipline and Punish.
Jeremy Bentham proposed the panopticon as a circular building with an observation tower in the centre of an open space surrounded by an outer wall. This wall would contain cells for occupants. This design would increase security by facilitating more effective surveillance. Residing within cells flooded with light, occupants would be readily distinguishable and visible to an official invisibly positioned in the central tower. Conversely, occupants would be invisible to each other, with concrete walls dividing their cells. Due to the bright lighting emitted from the watch tower, occupants would not be able to tell if and when they are being watched, making discipline a passive rather than an active action. Although usually associated with prisons, the panoptic style of architecture might be used in other institutions with surveillance needs, such as schools, factories, or hospitals.
Foucault's Discipline and Punish
In Discipline and Punish, Michel Foucault builds on Bentham's conceptualization of the panopticon, elaborating on the function of disciplinary mechanisms in such a prison and illustrating the function of discipline as an apparatus of power. The ever-visible inmate, Foucault suggests, is always "the object of information, never a subject in communication". He adds that,
"He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection" (202-203).
Foucault offers still another explanation for the type of "anonymous power" held by the operator of the central tower, suggesting that, "We have seen that anyone may come and exercise in the central tower the functions of surveillance, and that this being the case, he can gain a clear idea of the way the surveillance is practiced". By including the anonymous "public servant" as part of the built-in "architecture" of surveillance, the disciplinary mechanism of observation is decentered and its efficacy improved.
As the architecture hints, this panoptic design can be used for any "population" that needs to be kept under observation or control, such as prisoners, schoolchildren, medical patients, or workers:
"If the inmates are convicts, there is no danger of a plot, an attempt at collective escape, the planning of new crimes for the future, bad reciprocal influences; if they are patients, there is no danger of contagion; if they are madmen there is no risk of their committing violence upon one another; if they are schoolchildren, there is no copying, no noise, no chatter, no waste of time; if they are workers, there are no disorders, no theft, no coalitions, none of those distractions that slow down the rate of work, make it less perfect or cause accidents".
By individualizing the subjects and placing them in a state of constant visibility, the efficiency of the institution is maximized. Furthermore, the design guarantees the function of power even when there is no one actually asserting it. It is in this respect that the Panopticon functions automatically. Foucault goes on to explain that the design is also applicable to a laboratory: its mechanisms of individualization and observation give it the capacity to run many experiments simultaneously. These qualities also give an authoritative figure the "ability to penetrate men’s behavior" without difficulty. All this is made possible through the ingenuity of the geometric architecture. In light of this fact, Foucault compares jails, schools, and factories in their structural similarities.
Examples in the late 20th and early 21st centuries
A central idea of Foucault’s panopticism concerns the systematic ordering and controlling of human populations through subtle and often unseen forces. Such ordering is apparent in many parts of the modernized, and now increasingly digitized, world of information. Contemporary advances in technology and surveillance techniques have perhaps made Foucault’s theories more pertinent to any scrutiny of the relationship between the state and its population.
However, while on one hand new technologies, such as CCTV and other surveillance cameras, have shown the continued utility of panoptic mechanisms in liberal democracies, it could also be argued that electronic surveillance technologies render unnecessary the original "organic" or "geometric" disciplinary mechanisms illustrated by Foucault. Foucault argues, for instance, that Jeremy Bentham's Panopticon provides us with a model in which a self-disciplined society has been able to develop. These apparatuses of behavior control are essential if we are to govern ourselves without constant surveillance and intervention by an "agency" in every aspect of our lives. The Canadian historian Robert Gellately has observed, for instance, that because of the widespread willingness of Germans to inform on each other, Germany between 1933 and 1945 was a prime example of Panopticism.
Panoptic theory has other wide-ranging impacts for surveillance in the digital era as well. Kevin Haggerty and Richard Ericson, for instance, have hinted that technological surveillance "solutions" have a particularly "strong cultural allure" in the West. Increasingly visible data, made accessible to organizations and individuals from new data-mining technologies, has led to the proliferation of “dataveillance,” which may be described as a mode of surveillance that aims to single out particular transactions through routine algorithmic production. In some cases, however, particularly in the case of mined credit card information, dataveillance has been documented to have led to a greater incidence of errors than past surveillance techniques.
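The "routine algorithmic production" behind dataveillance can be made concrete with a toy sketch. The function below is purely illustrative (the name, the z-score rule, and the threshold are my own assumptions, not drawn from Haggerty and Ericson or from any real credit-card system): it "singles out" transactions that deviate sharply from an account's own spending baseline, which is also why such systems misfire on legitimately unusual purchases.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag transactions more than `threshold` sample standard
    deviations from the account's mean spend -- a toy stand-in for
    the routine algorithmic screening the text calls dataveillance."""
    if len(amounts) < 2:
        return []                      # no baseline to compare against
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []                      # perfectly uniform history
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# A run of small purchases followed by one large one:
# flag_outliers([20, 25, 22, 18, 24, 500]) singles out the 500.
```

Note that the one anomalous purchase inflates the very mean and deviation it is judged against; that self-distorting baseline is one reason the text can report a greater incidence of errors for dataveillance than for older techniques.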
According to the tenets of Foucault's panopticism, if discursive mechanisms can be effectively employed to control and/or modify the body of discussion within a particular space (usually to the benefit of a particular governing class or organization), then there is no longer any need for an "active agent" to display a more overtly coercive power (i.e., the threat of violence). Since the beginning of the Information Age, there exists a debate over whether these mechanisms are being refined or accelerated, or on the other hand, becoming increasingly redundant, due to new and rapid technological advancements.
Panopticism and capitalism
Foucault also relates panopticism to capitalism:
"[The] peculiarity of the disciplines [elements of Panopticism] is that they try to define in relation to the multiplicities a tactics of power that fulfils three criteria: firstly, to obtain the exercise of power at the lowest possible cost (economically, by the low expenditure it involves; politically, by its discretion, its low exteriorization, its relative invisibility, the little resistance it arouses); secondly, to bring the effects of this social power to their maximum intensity and to extend them as far as possible, without either failure or interval; thirdly, to link this 'economic' growth of power with the output of the apparatuses (educational, military, industrial or medical) within which it is exercised; in short, to increase both the docility and the utility of all elements of the system" (218).
"If the economic take-off of the West began with the techniques that made possible the accumulation of capital, it might perhaps be said that the methods for administering the accumulation of men made possible a political take-off in relation to the traditional, ritual, costly, violent forms of power [i.e. torture, public executions, corporal punishment, etc. of the middle ages], which soon fell into disuse and were superseded by a subtle, calculated technology of subjection. In fact, the two processes - the accumulation of men and the accumulation of capital - cannot be separated; it would not be possible to solve the problem of the accumulation of men without the growth of an apparatus of production capable of both sustaining them and using them; conversely, the techniques that made the cumulative multiplicity of men useful accelerated the accumulation of capital ... The growth of the capitalist economy gave rise to the specific modality of disciplinary power, whose general formulas, techniques of submitting forces and bodies, in short, 'political anatomy', could be operated in the most diverse political régimes, apparatuses or institutions" (220-221).
Panopticism and Information Technology
Building on Foucault's Panopticism and Bentham's original Panopticon, Shoshana Zuboff applies panoptic theory in a technological context in her book, "In the Age of the Smart Machine." In chapter nine, Zuboff provides a vivid portrayal of the Information Panopticon as a means of surveillance, discipline and, in some cases, punishment in a work environment. The Information Panopticon embodies Bentham's idea in a very different way. Information Panopticons do not rely on physical arrangements, such as building structures and direct human supervision. Instead, a computer keeps track of a worker’s every move by assigning him or her specific tasks to perform during a shift. Everything, from the time a task is started to the time it is completed, is recorded, and workers are given a set amount of time to complete each task based on its complexity. From this data, a supervisor can monitor a worker’s performance and take action when needed.
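The record-keeping Zuboff describes can be sketched in a few lines. The class below is a hypothetical illustration, not anything from her text: tasks are assigned with a start time and a time allowance, completions are logged, and the supervisor's query over the log is what turns plain bookkeeping into a panoptic mechanism.

```python
class TaskMonitor:
    """Toy sketch of the workplace Information Panopticon:
    every assignment and completion is logged, and a supervisor
    can query the full record at any time."""

    def __init__(self):
        self.log = []  # one dict per assigned task

    def assign(self, worker, task, start, allowed):
        """Record a task assignment with its start time (seconds)
        and the time allowed for a task of this complexity."""
        self.log.append({"worker": worker, "task": task,
                         "start": start, "end": None, "allowed": allowed})

    def complete(self, worker, task, end):
        """Record the completion time of an open task."""
        for rec in self.log:
            if (rec["worker"] == worker and rec["task"] == task
                    and rec["end"] is None):
                rec["end"] = end
                return
        raise KeyError(f"no open task {task!r} for {worker!r}")

    def overruns(self):
        """The supervisor's view: tasks whose recorded duration
        exceeded the time allowed."""
        return [(r["worker"], r["task"]) for r in self.log
                if r["end"] is not None
                and r["end"] - r["start"] > r["allowed"]]
```

The point of the sketch is that no supervisor needs to watch anyone in real time; the `overruns()` query can be run at any moment, which is exactly the "visible but unverifiable" condition of Bentham's tower.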
The Information Panopticon can be defined as a form of centralized power that uses information and communication technology as observational tools and control mechanisms. Unlike the Panopticon envisioned by Bentham and Foucault, in which those under surveillance were unwilling subjects, Zuboff’s work suggests that the Information Panopticon is facilitated by the benefits it offers to willing participants.
In chapter ten of “In the Age of the Smart Machine,” Zuboff provides the example of DIALOG, a computer conferencing system used at a pharmaceutical corporation in the 1970s. The conferencing system, originally intended to facilitate communication among the corporation’s many branches, quickly became popular with employees. Users of DIALOG found that the system facilitated not only innovation and collaboration, but also relaxation, as many employees began to use the system to joke with one another and discuss non-work related topics. Employees widely reported that using the system was a positive experience because it created a culture of shared information and discussion, which transcended the corporation’s norms of formality and hierarchy that limited the spread of information between divisions and employees of different ranks. This positive culture was enabled by the privacy seemingly offered by the conferencing system, as discussion boards could be made to allow access only to those who were invited to participate. The Panoptic function of the conferencing system was revealed, however, when managers were able to gain access to the informal discussion boards where employees posted off-color jokes. Messages from the discussion were posted around the office to shame contributors, and many of DIALOG’s users, now knowing there was a possibility that their contributions could be read by managers and fearing they would face disciplinary action, stopped using the system. Some users, however, kept using the system, raising the question of whether remaining users modified their behavior under the threat of surveillance, as prisoners in Bentham’s Panopticon would, or whether they believed that the benefits offered by the system outweighed the possibility of punishment.
Zuboff’s work shows the dual nature of the Information Panopticon – participants may be under surveillance, but they may also use the system to conduct surveillance of others by monitoring or reporting other users’ contributions. This is true of many other information and communication technologies with Panoptic functions – cellphone owners may be tracked without their knowledge through the phones’ GPS capabilities, but they may also use the device to conduct surveillance of others. Thus, compared to Bentham’s Panopticon, the Information Panopticon is one in which everyone has the potential to be both a prisoner and a guard.
Foucault argues that industrial management has paved the way for a highly disciplinary society, one that values objectivity over everything else, with the aim of extracting as much productivity from workers as possible. In contrast to Bentham's model prison, workers within the Information Panopticon know they are being monitored at all times. Even if a supervisor is not physically present, the computer records their every move, and all this data is at the supervisor's fingertips at all times. The system's objectivity can have a psychological impact on the workers: they feel the need to conform to and satisfy the system rather than doing their best work or expressing concerns they might have.
The Information Panopticon diverges from Jeremy Bentham's model prison by adding more levels of control. While Bentham's model prison consists of inmates at the lowest level monitored by a guard, the Information Panopticon can have many levels. A company or firm can have several satellite locations, each monitored by a supervisor, with a regional supervisor monitoring the supervisors below him or her. Depending on the structure and size of a firm, Information Panopticons can have several levels, each monitoring all the levels beneath it.
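This multi-level structure is just a tree in which visibility runs strictly downward. The sketch below is a hypothetical illustration of that asymmetry (the class and record names are my own): each node can read every record beneath it, while no node can see upward.

```python
class Level:
    """One tier of a multi-level Information Panopticon: a worker,
    a site supervisor, a regional supervisor, and so on."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # the tiers this node monitors
        self.records = []               # this node's own activity log

    def visible_records(self):
        """Everything this node may observe: its own records plus
        all records at every level beneath it. Visibility never
        runs upward -- a worker sees only its own records."""
        out = list(self.records)
        for child in self.children:
            out.extend(child.visible_records())
        return out

# Example hierarchy: a regional supervisor watching a site
# supervisor, who in turn watches a worker.
worker = Level("worker")
site = Level("site supervisor", [worker])
region = Level("regional supervisor", [site])
```

Here `region.visible_records()` aggregates the whole tree, while `worker.visible_records()` returns only the worker's own log: the same one-way visibility as Bentham's tower, stacked recursively.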
The efficiency of the Information Panopticon is, however, open to question. Does it really lead to a better workplace and higher productivity, or does it simply put unnecessary stress on the people being monitored? A major criticism of the system is its very objectivity: it is based solely on numbers and makes no allowance for human error. According to Zuboff, some people find the system highly advantageous, while others think it deeply flawed because it does not account for the effort a worker puts into a task or for things outside a worker's control. Furthermore, the lack of direct supervision only adds to a potentially precarious situation.
Social desirability bias
In social science research, social desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad," or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports, especially questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do you masturbate?", for example, respondents may be pressured by the societal taboo against masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severe underestimates.
When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents may feel pressured to deny any drug use or rationalize it, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social desirability bias.
Other topics that are sensitive to social desirability bias:
- Self-reported personality traits will correlate strongly with social desirability bias
- Personal income and earnings, often inflated when low and deflated when high
- Feelings of low self-worth and/or powerlessness, often denied
- Excretory functions, often approached uncomfortably, if discussed at all
- Compliance with medicinal dosing schedules, often inflated
- Religion, often either avoided or uncomfortably approached
- Patriotism, either inflated or, if denied, denied with a fear of the other party's judgment
- Bigotry and intolerance, often denied, even if it exists within the responder
- Intellectual achievements, often inflated
- Physical appearance, either inflated or deflated
- Acts of real or imagined physical violence, often denied
- Indicators of charity or "benevolence," often inflated
- Illegal acts, often denied
- Voter turnout, often over-reported
Cognitive liberty
Cognitive liberty, or the "right to mental self-determination", is the freedom of an individual to control his or her own mental processes, cognition, and consciousness. It has been argued to be both an extension of, and the principle underlying, the right to freedom of thought. Though a relatively recently defined concept, many theorists see cognitive liberty as being of increasing importance as technological advances in neuroscience allow for an ever-expanding ability to directly influence consciousness. Cognitive liberty is not a recognized right in any international human rights treaties, but has gained a limited level of recognition in the United States, and is argued to be the principle underlying a number of recognized rights.
The term "cognitive liberty" was coined by neuroethicist Dr. Wrye Sententia and legal theorist and lawyer Richard Glen Boire, the founders and directors of the non-profit Center for Cognitive Liberty and Ethics (CCLE). Sententia and Boire define cognitive liberty as "the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought."
Sententia and Boire conceived of the concept of cognitive liberty as a response to the increasing ability of technology to monitor and manipulate cognitive function, and the corresponding increase in the need to ensure individual cognitive autonomy and privacy. Sententia divides the practical application of cognitive liberty into two principles:
- As long as their behavior does not endanger others, individuals should not be compelled against their will to use technologies that directly interact with the brain or be forced to take certain psychoactive drugs.
- As long as they do not subsequently engage in behavior that harms others, individuals should not be prohibited from, or criminalized for, using new mind-enhancing drugs and technologies.
These two facets of cognitive liberty are reminiscent of Timothy Leary's "Two Commandments for the Molecular Age", from his 1968 book The Politics of Ecstasy:
- Thou shalt not alter the consciousness of thy fellow man.
- Thou shalt not prevent thy fellow man from altering his own consciousness.
Supporters of cognitive liberty therefore seek to impose both a negative and a positive obligation on states: to refrain from non-consensually interfering with an individual's cognitive processes, and to allow individuals to self-determine their own "inner realm" and control their own mental functions.
Freedom from interference
This first obligation, to refrain from non-consensually interfering with an individual's cognitive processes, seeks to protect individuals from having their mental processes altered or monitored without their consent or knowledge, "setting up a defensive wall against unwanted intrusions". Ongoing improvements to neurotechnologies such as transcranial magnetic stimulation and electroencephalography (or "brain fingerprinting"); and to pharmacology in the form of selective serotonin reuptake inhibitors (SSRIs), Nootropics, Modafinil and other psychoactive drugs, are continuing to increase the ability to both monitor and directly influence human cognition. As a result, many theorists have emphasized the importance of recognizing cognitive liberty in order to protect individuals from the state using such technologies to alter those individuals’ mental processes: "states must be barred from invading the inner sphere of persons, from accessing their thoughts, modulating their emotions or manipulating their personal preferences."
This element of cognitive liberty has been raised in relation to a number of state-sanctioned interventions in individual cognition, from the mandatory psychiatric 'treatment' of homosexuals in the US before the 1970s, to the non-consensual administration of psychoactive drugs to unwitting US citizens during CIA Project MKUltra, to the forcible administration of mind-altering drugs on individuals to make them competent to stand trial. Futurist and bioethicist George Dvorsky, Chair of the Board of the Institute for Ethics and Emerging Technologies has identified this element of cognitive liberty as being of relevance to the debate around the curing of autism spectrum conditions. Duke University School of Law Professor Nita Farahany has also proposed legislative protection of cognitive liberty as a way of safeguarding the protection from self-incrimination found in the Fifth Amendment to the US Constitution, in the light of the increasing ability to access human memory.
Though this element of cognitive liberty is often defined as an individual’s freedom from state interference with human cognition, Jan Christoph Bublitz and Reinhard Merkel among others suggest that cognitive liberty should also prevent other, non-state entities from interfering with an individual’s mental "inner realm". Bublitz and Merkel propose the introduction of a new criminal offense punishing "interventions severely interfering with another’s mental integrity by undermining mental control or exploiting pre-existing mental weakness." Direct interventions that reduce or impair cognitive capacities such as memory, concentration, and willpower; alter preferences, beliefs, or behavioral dispositions; elicit inappropriate emotions; or inflict clinically identifiable mental injuries would all be prima facie impermissible and subject to criminal prosecution. Sententia and Boire have also expressed concern that corporations and other non-state entities might utilize emerging neurotechnologies to alter individuals' mental processes without their consent.
Freedom to self-determine
Where the first obligation seeks to protect individuals from interference with cognitive processes by the state, corporations or other individuals, this second obligation seeks to ensure that individuals have the freedom to alter or enhance their own consciousness. An individual who enjoys this aspect of cognitive liberty has the freedom to alter their mental processes in any way they wish to; whether through indirect methods such as meditation, yoga or prayer; or through direct cognitive intervention through psychoactive drugs or neurotechnology.
As psychotropic drugs are a powerful method of altering cognitive function, many advocates of cognitive liberty are also advocates of drug law reform; claiming that the "war on drugs" is in fact a "war on mental states". The CCLE, as well as other cognitive liberty advocacy groups such as Cognitive Liberty UK, have lobbied for the re-examination and reform of prohibited drug law; one of the CCLE's key guiding principles is that: "governments should not criminally prohibit cognitive enhancement or the experience of any mental state". Calls for reform of restrictions on the use of prescription cognitive-enhancement drugs (also called smart drugs or nootropics) such as Prozac, Ritalin and Adderall have also been made on the grounds of cognitive liberty.
This element of cognitive liberty is also of great importance to proponents of the transhumanist movement, a key tenet of which is the enhancement of human mental function. Dr Wrye Sententia has emphasized the importance of cognitive liberty in ensuring the freedom to pursue human mental enhancement, as well as the freedom to choose against enhancement. Sententia argues that the recognition of a "right to (and not to) direct, modify, or enhance one's thought processes" is vital to the free application of emerging neurotechnology to enhance human cognition; and that something beyond the current conception of freedom of thought is needed. Sententia claims that "cognitive liberty's strength is that it protects those who do want to alter their brains, but also those who do not".
Privacy and the Threat to the Self
By Michael P. Lynch
In the wake of continuing revelations of government spying programs and the recent Supreme Court ruling on DNA collection – both of which push the generally accepted boundaries against state intrusion on the person — the issue of privacy is foremost on the public mind. The frequent mantra, heard from both media commentators and government officials, is that we face a “trade-off” between safety and convenience on one hand and privacy on the other. We just need, we are told, to find the right balance.
This way of framing the issue makes sense if you understand privacy solely as a political or legal concept. And its political importance is certainly part of what makes privacy so important: what is private is what is yours alone to control, without interference from others or the state. But the concept of privacy also matters for another, deeper reason. It is intimately connected to what it is to be an autonomous person.
What makes your thoughts your thoughts? One answer is that you have what philosophers sometimes call “privileged access” to them. This means at least two things. First, you access them in a way I can’t. Even if I could walk a mile in your shoes, I can’t know what you feel in the same way you can: you see it from the inside so to speak. Second, you can, at least sometimes, control what I know about your thoughts. You can hide your true feelings from me, or let me have the key to your heart.
The idea that the mind is essentially private is a central element of the Cartesian concept of the self — a concept that has been largely abandoned, for a variety of reasons. Descartes not only held that my thoughts were private, he took them to be transparent — all thoughts were conscious. Freud cured us of that. Descartes also thought that the only way to account for my special access to my thoughts was to take thoughts to be made out of a different sort of stuff than my body — to take our minds, in short, to be non-physical, distinct from the brain. Contemporary neuroscience and psychology have convinced many of us otherwise.
But while Descartes’s overall view has been rightly rejected, there is something profoundly right about the connection between privacy and the self, something that recent events should cause us to appreciate. What is right about it, in my view, is that to be an autonomous person is to be capable of having privileged access (in the two senses defined above) to information about your psychological profile — your hopes, dreams, beliefs and fears. A capacity for privacy is a necessary condition of autonomous personhood.
To get a sense of what I mean, imagine that I could telepathically read all your conscious and unconscious thoughts and feelings — I could know about them in as much detail as you know about them yourself — and further, that you could not, in any way, control my access. You don’t, in other words, share your thoughts with me; I take them. The power I would have over you would of course be immense. Not only could you not hide from me, I would know instantly a great amount about how the outside world affects you, what scares you, what makes you act in the ways you do. And that means I could not only know what you think, I could to a large extent control what you do.
That is the political worry about the loss of privacy: it threatens a loss of freedom. And the worry, of course, is not merely theoretical. Targeted ad programs, like Google’s, which track your Internet searches for the purpose of sending you ads that reflect your interests, can create deeply complex psychological profiles, especially when one searches for emotional or personal advice: Am I gay? What is terrorism? What is atheism? If the government or some other entity were to request the identity of the person making these searches for national security purposes, we would be on the way to having a real-world version of our thought experiment.
But the loss of privacy doesn’t just threaten political freedom. Return for a moment to our thought experiment, in which I telepathically know all your thoughts whether you like it or not. From my perspective, the perspective of the knower, your existence as a distinct person would begin to shrink. Our relationship would be so lopsided that there might cease to be, at least to me, anything subjective about you. As I learn what reactions you will have to stimuli, and why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.
The connection between a loss of privacy and dehumanization is, of course, a well-known and ancient fact, one that needs no science fiction to illustrate. It is employed the world over in every prison and detention camp. It is at the root of interrogation techniques that begin by stripping a person, literally and figuratively, of everything they own. Our thought experiment merely shows us the logical endgame. Prisoners might hide their resentment, or bravely resist torture (at least for a time), but when we lose the very capacity for privileged access to our psychological information, the capacity for self-knowledge, so to speak, we literally lose our selves.
In making the connection between autonomous personhood and the privacy of thought in this way, we needn’t rely on a Cartesian view of the mind. The connection isn’t metaphysical. It is a presupposition of understanding and communicating with one another. Mutual communication — as opposed to, say, eavesdropping — is about sharing. When communicating freely in this way, we see one another as subjects, as persons whose thoughts are our own — thoughts to which we have privileged access and are attempting to communicate. This assumption might be mistaken in particular cases of course. But it is hard to make sense of mutual, open communication without it. This is not a fact that requires us to think that the mind is non-physical. But it does tell us that our concept of psychological privacy and one centrally important notion of personhood — that of an autonomous person — are deeply linked.
John Locke, who thought about all these ideas, described personhood in general as a forensic concept. By this, he meant that it was an idea with a legal purpose — and it is. We use it to decide who can be held responsible, and who has rights that the state should not violate. But the concept of an autonomous person has an additional role. It matters because it is the idea we use when we think of ourselves as just that — as developed adult selves. So while privacy, too, is a legal concept, its roots are deeply intertwined with the purposes and point of the more basic concept of having a self. And that in turn raises all sorts of questions worth asking. Some of these are philosophical and psychological: including the limits of, and underlying explanation for, the privacy of the mental. But others should get us to think about how our technologies are themselves changing our ways of thinking about the self.
However we resolve these issues, we would do well to keep the connections between self, personhood and privacy in mind as we chew over the recent revelations about governmental access to Big Data. The underlying issue is not simply a matter of balancing convenience and liberty. To the extent we risk the loss of privacy we risk, in a very real sense, the loss of our very status as subjective, autonomous persons.
Michael P. Lynch is a professor of philosophy at the University of Connecticut and the author of “In Praise of Reason” and “Truth as One and Many.” He is at work on a new book, “Prisoners of Babel: Knowledge in the Datasphere.” Twitter @Plural_truth.