It has been bothering me for a while that I lacked a coherent logical framework for my views on morality. Being the logical-mathematical type, I favour an axiomatic approach to ethics, in which I choose the axioms that make sense to me and yield conclusions I find reasonable. You might claim that this actually makes me immoral (because I could very well, for example, adopt some axiom that allows me to commit genocide). However, I deny that any ethical system has any ultimate justification, so everyone else’s reasons for choosing their own ethical systems are just as bad as mine. (Often, such an ultimate justification is derived from a higher power. I do not believe in any such thing.) I am writing this post simply to express myself, and not in an attempt to preach that my ethical system is superior.
The primary conflict facing me here is that of deontological and utilitarian approaches to ethics. I hear utilitarian arguments all the time. It seems to be the default ethical principle that intelligent atheists use for resolving moral dilemmas. For example:
- Saying things that offend people is bad because it hurts their feelings and therefore entails disutility. However, if they are unreasonably sensitive to dissenting viewpoints, then the disutility associated with avoiding saying those things may exceed the disutility of offending them. This is assumed to be the case when what one is saying is scientific, artistic, or political in nature.
- Torture is justifiable when the victim knows the location of a ticking time bomb that is about to blow up New York City, because the disutility of allowing New York City to be blown up vastly exceeds the disutility of inflicting severe pain on a single individual.
- Dropping the atomic bombs on Hiroshima and Nagasaki was the right thing to do because it resulted in fewer deaths on both sides than would have resulted from any other course of action.
- The authoritarian policies of the Chinese government are justifiable because they stabilize a system that has improved the quality of life of the vast majority of its citizens, whereas the number of people who have been punished for their dissent is very small.
I actually do not agree with any of the four statements presented above, and thus I cannot accept utilitarianism as a precept for making moral judgements in these cases. This is why I have usually identified with a deontological ethical system. Thus:
- Everybody has a right to speak, but nobody has a right not to be offended.
- Torture is never justifiable under any circumstances whatsoever. People have a right not to be tortured. If I were asked to torture somebody in order to save New York City, I would refuse. It is likely that somebody else would end up doing it, but then the moral responsibility for the torture would lie with that person, and not with me.
- Killing innocent people without their informed consent is never justifiable under any circumstances whatsoever.
- There should be no laws that take away people’s fundamental human rights. The authoritarian policies of the Chinese government violate all three of the points I have made above.
The problem is that it is difficult to extend a deontological system to all cases in which one might have to make a difficult decision that affects other people. In typical everyday situations, nobody’s fundamental human rights are at stake. I would have to add more and more axioms, to the point where some of them would inevitably come into conflict, and I would likely then have to fall back on a utilitarian argument in order to resolve the conflict.
My solution has been to follow a two-tiered ethical system. Level 1, the higher tier, consists of a minimal set of negative liberties, including the rights listed above: not to be tortured, not to be killed unless guilty, and not to be barred from expressing any point of view or disseminating any information. Violating these liberties is wrong, although one may waive them to varying extents under certain circumstances. (Physician-assisted suicide comes to mind.) I can’t think of any circumstance under which two negative liberties can come into conflict.
You might claim that the ticking time bomb scenario represents a conflict between the right of one person not to be tortured and the right of New Yorkers not to be killed. However, if I make the choice not to commit torture, then I am not violating the right of New Yorkers not to be killed, because it is a negative liberty. The only person violating that right in this case would be the person who planted the bomb. Generally, inaction does not violate negative liberties.
Of course, I do have to address the fact that inaction is very wrong in some cases. For example, suppose you go in for surgery, and the doctor makes incisions and then walks away without closing them, leaving you to die. Generally, inaction may sometimes violate a level 1 obligation. Such obligations can be created only by contract. The doctor, in beginning to operate, accepts an implicit contract because you have entrusted the doctor to do their best to heal you. Opening and not closing would be a brazen violation of this contract. Negative rights overrule obligations, although one may contractually waive one’s own negative liberties to varying extents.
If I were designing a religion, I would make it so that transgressions against level 1 that are not punished in real life would result in punishment after death. (It would be finite, of course; the infinite punishment in, for example, the Christian version of hell seems completely ridiculous to me.) But that’s not the case, and I believe that real life is all that there is and ever will be; so instead I simply condemn level 1 transgressions in the strongest terms possible, and consider them “evil” acts, whatever that might mean.
I use level 2 to make all moral decisions in which level 1 does not apply. This constitutes nearly all moral decisions I have ever had to make in real life. In the sense that level 2 transgressions wouldn’t result in going to hell in my hypothetical religion, they are not “morally binding”. In level 2, being responsible for causing disutility to other people is bad, so that it is essentially utilitarian. For example, saying mean things to nice people violates level 2, so I will probably not do it, and I have some weak expectation that other people shouldn’t do that, either. (Saying mean things to mean people, on the other hand, may have various positive effects, such as making them realize that they have been hurtful, or empowering their “victims” to stand up to them.) However, I don’t believe that there is anything evil about saying mean things to nice people, no matter how hurt they may be as a result. In fact, even if the mean things you say to a nice person cause that person to commit suicide, I still won’t consider you to have done anything evil. (That doesn’t mean I won’t disapprove, of course.) Some utilitarian argument should also be used to determine whether to lie to somebody, whether to reveal something you are told in confidence, whether to be sexually unfaithful within a monogamous relationship, and so on.
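Since the two tiers amount to a decision procedure — check the inviolable level 1 constraints first, and only fall back to utilitarian weighing when none apply — it can be caricatured in code. This is a toy sketch under heavy assumptions: the `Action` fields and the `judge` function are hypothetical placeholders of my own invention, and in reality these judgements are made intuitively, not computed.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """Hypothetical summary of the morally relevant facts about an act."""
    violates_negative_liberty: bool = False  # e.g. torture, killing the innocent
    liberty_waived: bool = False             # e.g. physician-assisted suicide
    breaks_level1_contract: bool = False     # e.g. the surgeon walking away
    net_utility: float = 0.0                 # rough utilitarian balance


def judge(a: Action) -> str:
    # Level 1: negative liberties and contractual obligations are checked
    # first, and no amount of utility can override them.
    if a.violates_negative_liberty and not a.liberty_waived:
        return "evil"
    if a.breaks_level1_contract:
        return "evil"
    # Level 2: everything else is weighed on utilitarian grounds; a bad
    # outcome here is condemnable but never "evil".
    if a.net_utility < 0:
        return "bad, but not evil"
    return "permissible"


# The ticking-time-bomb case: torture stays "evil" regardless of the
# utility of saving New York, because level 1 is checked before level 2.
print(judge(Action(violates_negative_liberty=True, net_utility=1e9)))
# Saying mean things to a nice person: merely a level 2 transgression.
print(judge(Action(net_utility=-1.0)))
```

The ordering of the checks is the whole point: level 2 reasoning is only reached once every level 1 constraint has been cleared, which is exactly how the system blocks utilitarian arguments from justifying level 1 violations.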
This is the ethical system that I have provisionally adopted. By this division into level 1 and level 2, I have achieved my goal of not letting utilitarian arguments convince me of the moral justifiability of things I consider to be very wrong.
I intend to keep a log of moral decisions that I make from now on and how they fit into this ethical system I have constructed for myself. In accordance with what I said in the first paragraph, I will clarify or modify my ethical system whenever I find it to conflict with my intuitive concept of right and wrong.