On The Human and Ethical Ramifications of Information Systems

Do guns kill people or do people kill people?

To be clear: this is not a political blog. While I do have my opinions on the tension between Second Amendment rights and public safety, I am not particularly interested in discussing those opinions here. There are many other places on the Internet where you can engage in discussions like that. I want to use this question to engage a more fundamental ethical problem: how ethics apply to technology, and in particular whether technology/tools have inherent ethics or whether the ethics rest purely in the human or intelligent being making the decision.

By the way, I am ignoring all the other legitimate uses for guns in this post because, as I said above, this is not really about gun rights. We are also going to intentionally ignore all of the American background about the Founding Fathers, the Revolution, British occupation, militia, government overreach, law enforcement, drug cartels, suicide rights, child danger, # of guns per country, etc., etc.

So, there are two simple answers to the question about guns killing people or people killing people.

1. Guns kill people.
2. People kill people.

If we expand #1, what we are saying is that guns themselves have an ethical component that is primarily negative. The technology itself is bad or evil. There is a moral value that is inherent to a gun itself, aside from the use to which it is put. This tends to lead to discussion about gun control or even outright bans. If you start to examine this view at all, it becomes kind of suspicious, because how can an inanimate piece of wood and metal ((or plastic, or something printed on a 3D printer with sintering powder, or whatever they are doing these days)) be good or bad independent of a human being? If a gun is lying there, doing nothing, and is never used, how does using the words “good” or “bad” even make any sense? It also ignores how people are involved in making the decisions, and tends to avoid any discussion of how we could try to affect people’s behavior.

If we expand #2, what we are saying is that the technology is entirely value-neutral and that the responsibility for the usage of the technology rests solely on the human being that is using the technology. This tends to lead to discussion around removal of gun control, the rights of the individual, and how people lacking in self-control or with bad intentions are solely responsible for death via gun. If you examine this view closely it also starts to become suspect, because it does not acknowledge that the people would, at the least, have been much less likely to be killed without the gun(s) being involved. ((This, by the way, is the viewpoint that most techies seem to take about the technology that they work with in general although not necessarily about guns in particular.))

I do not agree with either of these viewpoints. They both seem unsatisfactory to me and do not address some underlying objections.

Let us generalize the question a bit to make it more relevant.

Does Technology/Tool X cause Effect(s) Y (without reference to human action or intervention), or is Effect(s) Y caused entirely by human beings, and are human beings solely responsible for Effect(s) Y without reference to Technology/Tool X?

The way the question is phrased is problematic and leads to the unsatisfactory answers. It is a logical fallacy called a bifurcation, which means that a forced, limited set of answers has been posed for a question which might have more answers. Perhaps neither of the options in the question above is accurate or helpful.

I think there is a third option, which is that technology itself does not have an independent, inherent ethical value, but that the possible uses to which the technology might be put by human beings do “slant” its ethical impact. So, a technology which would be primarily or exclusively used for actions which we would deem generally unethical or even evil (like, say, biological warfare) would be considered ethically suspect. This is because the technology amplifies the ability of human beings to do bad things. Orwell said “On the whole human beings want to be good, but not too good and not all of the time.” So there will always be somebody around part of the time who wants to do things like rob, kill, rape, etc., even though most people are basically decent.

A technology used primarily for actions that we would consider “good” ((“What is good/the good” is the subject of a whole branch of philosophy called Ethics, and I am not going to get into all of that here. This post is already long enough. I will let you, the reader, make an arbitrary decision about what you define as “good” for the purposes of our discussion)) would be something like vaccines or construction tools, and those would therefore be slanted as “good” technologies. ((“Dual-use” technologies like guns or nuclear power/bombs are trickier to categorize and manage because of the complexity of their possible applications. Human beings being what we are, we will always find tricky ways to use and misuse tools that originally had another purpose or purposes. Many technologies or tools end up being dual-use, but that discussion of use is more a detailed thing to hash out once we have agreed on the general principle that we are discussing here.))

What does this have to do with Information Technology and systems?

We implement systems all of the time as IT professionals. These systems change the way people do their work and relate to each other. Do these systems have an ethical component? The most common response to this is “the system has no inherent ethics. The ethics are entirely composed of how people treat each other and the system is incidental.” We already exposed the problems surrounding this argument above, but just to emphasize, if the system makes it much easier for one person or group of people to take some action towards another person or group of people, how can we say that it has no ethics?

A real-life example: Ashley Madison. This is a website which encourages partners in a relationship ((I would guess usually a marriage?)) to cheat on each other. Could people cheat on each other before this website? Of course; that has been going on since the beginning of time. Does this website make it easier for them to do so? YES. Have the owners of this website discussed the ethical ramifications of the technology that they are providing? Yes, see here. What did they say? They basically echoed argument #2 above: that people cheat on people and that their website is just a morally neutral tool. In fact, they went beyond that and said that they are offering a “public service.”

Let’s be clear — the folks at Ashley Madison are not just standing back while people shoot each other. Metaphorically, they are walking into a hostile situation and handing out guns, and then saying that they are in no way responsible for the resulting violence. Clearly the system lends itself to a particular use, and the ethical implications of this use are troubling, to say the least. Any claims on their part of innocence or even assistance ring hollow.

Now, should we regulate Ashley Madison out of existence or ban it or something like that? No, I don’t think so, because that starts to conflict with a lot of other legal and ethical issues around free speech, free choice, etc. We used to enforce laws against adultery in this country, and at this point I think we’ve generally agreed that those laws are not helpful, in fact are counterproductive, and are kind of silly in some ways. Some of those laws are still on the books but are generally ignored. They probably should be removed/struck down. This post is not about remedying ethical issues — that is another whole, long discussion — it is about questioning whether ethics are involved.

So when we implement a system that affects how people interact, particularly if it enables or discourages certain kinds of behavior or it fundamentally alters the balance of power between people or groups, the system itself has an ethical component independent of the people that use it. Don’t misunderstand me — the people are ethical agents too — but acting like we, as implementers or caretakers of the system, are totally morally neutral to me seems mendacious or at least thoughtless.

I would like to note that I actually wrote most of this blog post a while back, before the whole Ashley Madison hacking incident played out. Therefore my views as expressed above were not influenced by the hacking incident, although it was certainly interesting to see how AM had been duping their customers, and also how anybody’s assumption that AM (or any other cloud vendor, for that matter) takes your security as seriously as you do is probably unwise and untrue.