In this essay, we will use the ACM Code of Ethics to analyze the ethically dubious task of designing surveillance software (i.e., software intentionally designed to capture or analyze communications or data, and that is covert and intrusive in nature). The use of such software presupposes that the utilitarian need to acquire the private information of an individual or organization outweighs the need to protect that party's privacy. I hope to answer the question: what does the ACM Code of Ethics dictate in this situation?
Surveillance systems, by design, invade the privacy of individuals. From the start, someone working on a surveillance system project might question the ethics of doing so. Isn't there a moral imperative that states "Respect the privacy of others" 1? What takes precedence here is the greater utilitarian need of a society to protect its citizens from harm. It is probably not by accident that this is the first ACM moral imperative: "Contribute to society and human well-being" 2. This imperative sets the framework for all that follow.
The second moral imperative states "Avoid harm to others" 3. Surveillance software does not physically harm an individual; however, it could be argued that the software may lead to the capture and incarceration of an individual and thereby do harm to them. The first moral imperative takes precedence over the second in this situation. Still, this imperative applies to us in another way: we must do our best to ensure that the software does what it is intended to do and does not cause inadvertent harm to other individuals or groups.
An example of this might be queries that return incorrect results, or queries that cast too wide a net and improperly target individuals. Problems like these can be avoided by adhering to imperatives such as "Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks" 4, and "Be fair and take action not to discriminate" 5.
There are moral imperatives that remind us to examine the limitations of our software. And even though confidentiality agreements may prevent us from speaking out publicly, the owners of the software should at least have a good understanding of its limitations. This is where the imperative "Improve public understanding of computing and its consequences" 6, and the imperative "Ensure that users and those who will be affected by a system have their needs clearly articulated during the assessment and design of requirements; later the system must be validated to meet requirements" 7, can point us in the right direction.
Note: the surveillance systems that one creates may be intentionally misused. And even though we may not be in a position to affect policy on their use, we nonetheless should have a conversation with the users of the system before we walk out the door. The sections "Articulate social responsibilities of members of an organizational unit and encourage full acceptance of those responsibilities" 8, and "Articulate and support policies that protect the dignity of users and others affected by a computing system" 9, address this. It is important for our leaders to understand the implications of these last two sections. There may come a time when they are asked to appear before Congress and testify to the viability and accuracy of our systems. The code of ethics tells us we must not be passive players when it comes to ethics. It tells us that we should participate and, if need be, be a voice for those who may not be able to speak for themselves.
1) 1.7 Respect the privacy of others
2) 1.1 Contribute to society and human well-being
3) 1.2 Avoid harm to others
4) 2.5 Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks
5) 1.4 Be fair and take action not to discriminate
6) 2.7 Improve public understanding of computing and its consequences
7) 3.4 Ensure that users and those who will be affected by a system have their needs clearly articulated during the assessment and design of requirements; later the system must be validated to meet requirements
8) 3.1 Articulate social responsibilities of members of an organizational unit and encourage full acceptance of those responsibilities
9) 3.5 Articulate and support policies that protect the dignity of users and others affected by a computing system