Technology and New Challenges for Privacy: Journal of Social Philosophy Special Issue
The good news: The new issue of the Journal of Social Philosophy is a special issue on “Technology and New Challenges for Privacy.” The less good news is that it’s entirely behind a paywall.
There are no abstracts per se, but the first page of each of the seven articles (including the introduction by editor Leslie P. Francis) is available. I used my snipping tool to capture that text, which appears below. (Note that the emphasis has been added by me.)
What looks especially interesting here:
- The use of large-scale sets of health data raises questions of social justice that are often obscured by the way they are framed. (Privacy, Confidentiality, and Justice)
- Continuous surveillance can place individuals at risk of physical, economic, political, or other damage. Just being aware of how susceptible we are to objectification by anonymous watchers can feel belittling. (Continuous Surveillance of Persons with Disabilities: Conflicts and Compatibilities of Personal and Public Goods)
- The interests aligned against privacy are often defined in terms of their larger social value, and the protection of privacy often has lower political priority than other social interests. (Privacy and the Integrity of Liberal Politics: The Case of Governmental Internet Searches)
- Weighing the value and the harm of anonymity (The Ties That Blind: Conceptualizing Anonymity)
I’d like to thank @PhilosophersEye for a tweet that led me to this special issue.
Articles
Introduction: Technology and New Challenges for Privacy
Leslie P. Francis
Within the scope of the past half-century, privacy has been heralded as a core constitutional value and unequivocally pronounced dead. Privacy—now replaced by liberty—was discerned to lie at the core of a set of constitutional protections and thus to support rights to contraception, abortion, removal of life-sustaining treatment, and other intimate decisions such as whom to marry. But constitutional privacy’s hegemony was short-lived in the United States at least: subject initially to criticism as conceptually confused, constitutional privacy dissolved into a panoply of values ranging from physical integrity to the contents of suitcases or telephone records. Outside of the law, the rise of the Internet, search engines, social networking, and “big data” brought unprecedented abilities to collect, mine, analyze, and identify information about individuals. Commentators identified what they called the “privacy paradox” of people professing to value privacy but behaving as though they did not.
What, then, is to be made of this apparent rapid rise and fall of privacy? Perhaps privacy is at best contextually understood and supported, assuming different forms and importance depending on the norms in place in particular circumstances. Even so, perhaps important metaphysical, epistemological, and normative issues remain, albeit assuming differing forms in different contexts.
This special issue examines philosophical issues raised by contemporary technological challenges to what has been understood as privacy. If privacy is about individual control of direct access to the person, is this to be understood metaphysically, in terms of a particular conception of the person? Is this to be understood epistemologically, in terms of access to others’ persons or minds? Or, is it to be understood socially, as a matter of constructing personal space in different ways in different social contexts with different social norms? Do apparent rejections of privacy—as with willingness to share intimate details over the Internet, to allow government surveillance to intercept feared terrorist threats, or to permit physically implanted chips to halt wandering by persons with cognitive impairments—signal privacy’s demise? Or, are they harbingers of what might be characterized as a “surveillance paradox”: valuing surveillance but only because of the protections of the person that it brings? These issues lie at the center of the papers in this special issue.
The Ties That Blind: Conceptualizing Anonymity
Julie Ponesse
“It is not good to announce every truth.”
—Alexis de Tocqueville

“It is incredible what impudence these fellows will show, and what literary trickery they will venture to commit, as soon as they know they are safe under the shadow of anonymity.”
—Arthur Schopenhauer

1. Introduction: Anonymity Ambiguities
Talk of anonymity abounds in the twenty-first century. We speak of “anonymous sources” and “anonymous donations,” are comforted by “anonymity promises” and “anonymity guarantees,” and express desires to speak only “on condition of anonymity.” The growth of the Internet, alone, has in historically unprecedented ways made it possible to anonymize ourselves to both good and bad ends. It allows us not only to secure our personal information and to voice our opinions without fear of undue embarrassment or reprisal, but it also makes us more vulnerable to those whose abuses anonymity makes possible—fraudsters and identity thieves, trolls and griefers, rumor-mongers, and online stalkers. In health care, anonymity helps to ensure patient privacy (as with gamete donors) and to protect individuals with socially stigmatizing conditions (such as human immunodeficiency virus [HIV]). It also provides a unique sphere of protection to journalistic sources, whistleblowers, and those giving testimony to report crimes (e.g., to Crime-stoppers and Kids Help Phone) and to face their attackers without the threat of further harm. But anonymity can also subvert more authentic forms of communication and facilitate harms that would not be possible, or desirable, without it. Harassment and stalking, rudeness and indecency, mischief, deception, gossip-mongering, and the exploitation and homogenization of peoples all thrive better in an atmosphere of anonymity than without it.
Although most of us, in one way or another, have a sense of what it means to be anonymous and although commitments to anonymity tend to be strong or even impassioned, when we start to dig below the surface, it becomes apparent that a clear sense of the concept eludes us. A cursory look at “anonymity” language reveals some incongruities. “Anonymous” is often used not only to refer to individuals, such as “anonymous authors” and “anonymous sources,” but it can also refer to the objects or properties in virtue of which individuals are made …
Private Persons and Minimal Persons
Elijah Millgram
It’s a commonplace that privacy can now be abridged and abdicated in ways that weren’t routinely possible until very recently. I want here to draw attention to an alternative configuration of the mind that these techniques make available, which I will call the minimal person.
My explication of minimal personhood is going to take the long way around. I will have to explain what the ethical and political concept of privacy has to do with the older and very different philosophers’ notion of logical privacy: this part of the discussion will connect the recent debates over extended cognition and first-person authority to one another. To get into a position where I can do that, I will have to explain how personhood and the laws of logic are also related topics. And to do that, I will start out with an exercise in what Paul Grice and, following him, Michael Bratman have called “creature construction.”
Philosophers have a long history of treating persons as an occasion for old-school metaphysics. Within the practice of old-school metaphysics, that only you can think your own thoughts is a remarkable fact requiring an equally remarkable explanation. I will be exploring the lower-key proposal that persons are administrative devices, that logical privacy is a bureaucratic requirement rather than a remarkable fact, and that the privilege of keeping your thoughts to yourself—privacy in the layman’s understanding of it—is an aspect of the information management regime to which traditionally organized persons belong. In the more minimal alternative version of the person which I will describe, not everything you think needs to be thought by you.
I
Let’s turn first to motivating the design of the administrative device. Creature-construction arguments proceed by describing a series of progressively more ambitious organisms or robots, each of which handles a performance shortfall identified in its predecessor; the immediate point of these descriptions is to isolate the features that support the incremental improvements in performance, and thereby to justify incorporating those features into the design of an agent faced with a particular range of challenges. After we have the upcoming creature-construction argument in place, I will return to the question of what we are to make of the conclusions of such exercises.
I am going to opt for imaginary robots over imaginary organisms, and the primary dimension along which I am going to arrange my robots is how much the …
Continuous Surveillance of Persons with Disabilities: Conflicts and Compatibilities of Personal and Public Goods
Anita Ho, Anita Silvers, and Tim Stainton
I.
Enhanced technology capability now offers multiple means for automated continuous surveillance that transmits and records individuals’ locations and activities. Continuous surveillance systems are used to obtain warning of undesirable or dangerous situations or behaviors, and to track these. Sometimes the existence of surveillance suffices as a deterrent. Sometimes surveillance enables an effective early counter-measure to be deployed.
Continuous surveillance by mechanical means can deliver a collective good to the community by reducing or preventing disruptive or destructive events, whether such a result is maliciously intentional (such as breaking into cars parked in a mall parking lot) or mindlessly negligent (such as rolling through an intersection after the light turns red). But continuous surveillance also may place individuals at risk of physical, economic, political, or other damage. For example, personal information about the subject’s health, heritage, or habits may be collected and then exposed to public view, perpetuating a biased portrayal that haunts the individual’s social interactions. Moreover, just being aware of one’s susceptibility to objectification by anonymous watchers can feel belittling and thereby be damaging to people as well. Putting continuous surveillance into practice thus calls for deciding whether pursuing a particular collective benefit weighs favorably against courting the risks to which whoever is subjected to observation may be vulnerable.
Debate about the permissibility of continuous surveillance usually considers whether protecting people against the prospective harms to them as individuals should prevail over, or instead be subordinated to, realizing an expected collective good. Framing the conflict so starkly as between personal and public interests presupposes that continuous surveillance is not problematic if, but only if, individuals’ private interests are congruent with the public good. Considering continuous surveillance through the lens shaped in response to this conflict suggests that, from the standpoint of the individual, the prospect of being surveilled should be approached cautiously.
On this familiar framing of how to weigh permission versus prohibition for surveillance, authorization to decide whether being under surveillance aligns with an individual’s personal interest lies with that person. Each individual’s own …
Privacy and the Integrity of Liberal Politics: The Case of Governmental Internet Searches
Dorota Mokrosinska
Governments make extensive use of information and communication technologies to monitor, collect, store, and process personal information about individuals on the Internet. Indeed, as Daniel Solove remarked, the Internet is becoming one of “the government’s greatest information gathering tools.” Governments mine the Internet either by direct online searches or by collecting data from the records of third parties such as Internet service providers, search engines, web-based services (for example, MSN network or Hotmail), online retailers such as Amazon.com or eBay, credit card companies, video rentals, or libraries. Governmental actors justify such practices by citing the public benefit that is derived from the improved capacity to detect fraud, drug trafficking, computer crime, child pornography, and, in the aftermath of September 11, 2001, (potential) acts of terrorism. As Helen Nissenbaum observes, a detailed image of individuals’ Internet activities—knowing what individuals are searching for, what links they have clicked from query results, and what they are buying, reading or watching — is believed to be a valuable indicator that enables the authorities to identify and eliminate threats to society.
Whereas governmental surveillance raises privacy concerns, such concerns do not always lead to better privacy protection measures. When individual privacy conflicts with broader political interests such as those listed above, protecting individuals’ privacy seems to be a luxury that society can ill afford. As Solove remarks, “[t]he interests aligned against privacy—for example, efficient consumer transactions, free speech, or security—are often defined in terms of their larger social value. In this way, protecting the privacy of the individual seems extravagant when weighed against interests of society as a whole.” It takes startling privacy invasions such as those involved in the National Security Agency (NSA) surveillance programs to mobilize a political response.
The fact that privacy protection often has lower political priority than other social interests corresponds to the view of privacy that dominates public and academic discussion. The exercise of privacy, as pictured in public and academic discussion, involves a retreat from social life. This view is of liberal pedigree: Liberals construe privacy as a right asserted by individuals against the demands of a society. We find this view in Alan Westin’s now classic liberal definition of privacy: “Viewed in terms of the relation of the individual to social participation, privacy is the voluntary and temporary withdrawal of a person from the general …
Privacy and Positive Intellectual Freedom
Alan Rubel
1. Introduction
Privacy is often linked to freedom. Protection against unreasonable searches and seizures is a hallmark of a free society, and pervasive state-sponsored surveillance is generally considered to correlate closely with authoritarianism. One link between privacy and freedom is prominent in the library and information studies field and has recently been receiving attention in legal and philosophical scholarship. Specifically, scholars and professionals argue that privacy is an essential component of intellectual freedom. However, the nature of intellectual freedom and its link to privacy are not entirely clear. My aim in this article is to offer an account of intellectual freedom as a type of positive freedom. I will argue that a full account of intellectual freedom must involve more than an absence of constraints. Rather, intellectual freedom is at least partly a function of the quality of persons’ agency with respect to intellectual endeavors. Such an account best explains the relation between intellectual freedom and privacy and avoids problems with conceptions of intellectual freedom based solely on constraints.
2. Background
2.1 Intellectual Freedom
Analyzing the relationship between intellectual freedom and privacy requires first a working conception of intellectual freedom. The International Federation of Library Associations and Institutions (IFLA) bases its view of intellectual freedom on Article 19 of the United Nations Universal Declaration of Human Rights, which states:
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
The IFLA explicitly links free expression with the ability “to know,” stating that free expression demands knowledge and that “freedom of thought and freedom of expression are necessary conditions for freedom of access to information.” Based on this view, the IFLA urges libraries to offer “uninhibited access to information …
Privacy, Confidentiality, and Justice
John G. Francis and Leslie P. Francis
Large-scale sets of health data are increasingly useful in understanding health-care quality, comparative safety and efficacy of health-care treatment, disease incidence and prevalence, and the impact of public policy on health status, among many other issues. Some of these data sets have been created in a manner in which individuals were never mentioned, others have been extracted from data sets that originally contained identifiers, and still others retain at least some identifying information. In the implementation of contemporary data protection policy, the primary dividing line drawn is between data sets that do not contain information that can be linked directly to individuals and data sets that do. The more information that can be linked to individuals, it is thought, the greater the risks to them, and so the need for consent to data collection and use — but concomitantly the greater the utility of the data. On the other side, the assumption is that data that have never contained identifiers or that have been stripped of identifiers pose little risk to individuals and can be used to serve a variety of goals without individual consent—at least if there are appropriate protections against re-identification. Criticisms of the use of aggregate data in this way argue that it violates autonomy to use data drawn from individuals without their consent, or that such data uses may harm groups, such as racial or ethnic groups, depending on categorizations used, but these arguments have not gained much traction in the face of strong public health goals. In this article, we argue that it is problematic to frame the debate over data in this way, as a conflict between individual or group rights and the public good. Framing the privacy debate in terms of identifiable information focuses attention on privacy issues and individual choice in a manner that obscures important questions of social justice. But framing the debate about the use of de-identified data in terms of risks of re-identification or group harms may also obscure questions of justice. This article develops an account of the issues of social justice raised by uses of these data and responses to them. Our example is the use of racial classifications in research and public health initiatives regarding human immunodeficiency virus (HIV).
Privacy Values
Privacy is a remarkably protean concept. The extent to which it is multifaceted may call attention to important ethical values or serve as a source of confusion. In the public policy of data protection, privacy has come to cover not …