Sunday, 25 October 2015

David Benatar on the Harm of Coming into Existence

In his book Better Never to Have Been: The Harm of Coming into Existence (Oxford University Press, 2006), David Benatar skilfully defends the seemingly absurd view that we would all be better off if we had never been born and that, precisely for this reason, it is a) morally wrong to bring children into existence, b) morally wrong not to abort a fetus before it comes into existence “in the morally relevant sense at around twenty-eight or thirty weeks gestation” (148), and c) morally desirable that our species (and indeed all sentient species) go extinct sooner rather than later. Even if one’s children are going to have a comparatively good life (which one can never be sure of in advance), it is still never good enough to outweigh the harm of existence, and the longer humanity carries on prolonging its existence by procreation, the more unjustifiable suffering there will be.

According to Benatar, non-existence (or more precisely not coming into existence, which is different from ceasing to exist) is always preferable to existence. This is so for the following reasons. First, even the most blissful human life is still subject to various forms of inevitable suffering: “pain, disappointment, anxiety, grief, and death” (29). No matter how lucky you are, it is simply not possible to avoid all of these harms once you have started existing. The only way to avoid them is by not coming into existence: “Only existers suffer harm.” (29) Second (and most crucially), this suffering is not outweighed by the many good things that you may enjoy when you are alive, even if those good things in your life by far outnumber the bad things. While this may be sufficient to make your existence worth continuing, it is not sufficient to make your life worth starting. The good things cannot outweigh the bad things because there is a basic asymmetry between pleasures (positive experiences, satisfied preferences, or goods of any kind) and pains (negative experiences, unsatisfied preferences, or the lack of goods), such that the absence of pain is good even if that good is not experienced by anyone, while the absence of pleasure is not bad unless that absence is experienced by someone (30). In other words, while non-existence is better than a bad existence, it is not worse than a good existence. This asymmetry explains why we tend to believe that it is a moral duty not to bring people into existence who we know are likely to have a miserable life, but not that it is a moral duty to bring people into existence who are likely to have a (comparatively) good life. If we wanted to insist on a symmetry between pleasure and pain, then we would have to claim either that there is nothing wrong with bringing people into the world who we know will have a miserable life, or that we are morally obligated to bring as many happy people into the world as possible. 
If we are not prepared to subscribe to either of those two views, then we have to accept the asymmetry between pleasure and pain. Yet if it is good to prevent the existence of a life with pain in it, but not bad to prevent the existence of a life with pleasure in it, then it follows that even “a life filled with good and containing only the most minute quantity of bad – a life of utter bliss adulterated only by the pain of a single pin-prick – is worse than no life at all.” (48)

Benatar knows very well that few people will be willing to accept his conclusion, no matter how compelling his argument may be. The world is, after all, full of “cheery optimists” (211) who stubbornly and against all logic cling to the belief that their life is, all things considered, not so bad (and much better than it actually is), that bringing children into the world is a good thing or at least not something that is generally morally wrong, and that we have a moral obligation not to endanger the continued existence of humanity. However, as Benatar argues, these deeply ingrained intuitions are not trustworthy because they are simply the psychological effect of evolutionary pressures. We only think that way because it promotes the survival of the species: “Those with pro-natal views are more likely to pass on their genes.” (8) That is why we are very good at seeing the silver lining, but not so good at seeing the cloud, whose continued existence we tend to ignore. Instead of seeing life as it really is (namely “a piece of shit when you look at it”, to quote not Benatar, but Monty Python), we are “engaged in a mass self-deception about how wonderful things are for us” (100). The fact that most people do not regret having come into existence therefore does not count against the argument, because it is not rational reflection that leads people to be happy with their existence, but their “primal” psychological biases, which have been shaped by the process of natural evolution. Benatar thus uses the same kind of evolutionary debunking argument to discredit widely held moral intuitions (in his case: that it is not morally wrong to reproduce and not morally wrong not to abort a healthy fetus, and that it is morally wrong to prevent the existence of future human life) that Peter Singer uses in “Ethics and Intuitions”[1] in order to debunk anti-utilitarian intuitions.

Now since I am a cheery optimist myself (i.e., I do not regret having come into existence and do not feel guilty about having brought others into this world), I find it difficult to agree with Benatar’s conclusion and would very much like to find fault with it. However, I do accept that while we do not have a moral duty to cause the existence of happy people, we do have a moral duty not to cause the existence of unhappy people. So it seems that I do accept the asymmetry claim: not causing the existence of happy people is not wrong, but causing the existence of unhappy people is. I also agree that we would not be worse off if we had never existed. So I guess what I do not agree with is the claim that we would have been better off if we had never existed. While existence may not be preferable to non-existence, even if that existence is rich and rewarding, neither is non-existence generally preferable to existence (though it might be in some cases). If that is correct, then we do not have a duty to procreate (at least not for the sake of those we bring into existence), but neither do we have a duty not to procreate. It seems to me that Benatar’s claim that non-existence is preferable to even the best possible human existence gains its plausibility not so much from the asymmetry claim as from the evolutionary debunking argument that suggests we vastly overestimate the quality of our lives. But for this even to be possible we need to assume that we may be mistaken in finding our lives worth living. What Benatar is saying is that even though we may be perfectly happy with our lives, we ought not to be happy, and that even though we may not regret at all having been brought into existence, we ought to regret it. Life is in fact pretty bad, but we are constitutionally unable to see it. Yet if we don’t perceive our lives as bad, how can they in fact be bad? Well, we might say that there are certain features that a human life must have in order to be called good. 
Normally we would seek to establish a list of such objective good-making features by looking at actual lives that we think go well. This, however, Benatar cannot do, because he believes that there are no such lives. What he does instead is postulate a counterfactual state of complete autonomy as the norm for a good life, which, incidentally, feeds into the transhumanist narrative that the current state of humanity is fundamentally deficient and, in comparison to what is theoretically possible, a harmed state, or a state of disability[2]: “Paraplegics may require special access to public transport, but the inability of everybody to fly or to cover long distances at great speed means that even those who can use their legs require transportation aids. Our lives surely go less well for being so dependent. Our lives also go less well because we are susceptible to hunger and thirst (that is unable to go without food or water), heat and cold, and so on. In other words, even if disability is socially constructed, the inabilities and other unfortunate features that characterize human lives are enough to make our lives go very badly – indeed much worse than we usually recognize.” (119)

In other words, our lives are in fact bad because we lack complete independence, because we need stuff, and because it is not fully under our control whether we get what we need. I don’t think that neediness is something that makes our lives on the whole bad (and worse than if we weren’t needy creatures). More importantly, I don’t think it is any more realistic to regard our various dependencies in that way. It is not in any way closer to the truth of the matter. It simply betrays a different attitude to life and to what makes it good. Transhumanists, however, should adopt Benatar’s view and argue that as long as we don’t radically enhance ourselves so that we no longer depend on food and water, suffer from heat and cold, or need transportation aids, we’d be better off dead, so that the only justification for continuing our existence as a species is a determined effort to pursue a transhumanist agenda of overcoming all our dependencies. It all fits together perfectly: the transhumanist dissatisfaction with the current human condition and Benatar’s “pro-death view”.

And Benatar’s view is even more “pro-death” than he himself cares to acknowledge. If I were convinced that Benatar was right, that it would indeed be better if the human race became extinct sooner rather than later, then I might well feel compelled to conclude that we have a moral duty to “embark on a ‘speciecide’ programme of killing humans” (196). The amount of suffering in the world could, after all, “be radically reduced if there were no more humans.” (224) For obvious reasons Benatar does not encourage this inference, saying that it would be wrong for a moral agent to kill somebody “without proper justification”, mostly because cutting a human life short adds to (rather than diminishes) the harm of their existence. But the problem is that if there is harm in killing people, we can still weigh this harm against the harm that would result from allowing the human species to continue to exist. In other words, the fact that I would be responsible for the continued suffering of the many more generations of humans that would be brought into existence if I did not kill everyone off surely does give me “proper justification”. It seems that the harm I would inflict on those who already exist would be more than outweighed by the many billions of lives that I would save from “the immense amount of suffering that this will cause between now and the ultimate demise of humanity” (208). I think I’d rather stay a cheery optimist than accept this conclusion.

[1] Peter Singer, “Ethics and Intuitions”, The Journal of Ethics 9 (2005): 331-352. Cf. my reading notes below, “Peter Singer on Ethics and Intuitions”.

[2] Cf. John Harris, “Is Gene Therapy a Form of Eugenics?”, Bioethics 7.2/3 (1993): 178-187; and John Harris, Enhancing Evolution, Princeton: Princeton University Press 2007.

Monday, 19 October 2015

Lazari-Radek and Singer on the Objectivity of Ethics and the Unity of Practical Reason

Katarzyna de Lazari-Radek and Peter Singer’s paper “The Objectivity of Ethics and the Unity of Practical Reason”, published in Ethics 123 (2012): 9-31, aims to defend the objectivity of ethics, or more precisely the objectivity of a particular ethical judgement, against the kind of evolutionary debunking argument that has been brought forward especially by Sharon Street.[1] They take as their starting point the “dualism of practical reason” that confounded Henry Sidgwick in his Methods of Ethics[2] and that prevented him from concluding that there is only one rational answer to the question of what we ought to do, namely the utilitarian one that favours impartiality and tells us to aim at the good of all. Sidgwick’s problem was that it seems just as rational to aim only at one’s own good. Thus practical reason commands us to pursue both our own best interest and the best interest of all. Although those two goals may often coincide, there are clearly also situations where they clash. In those situations, reason cannot tell us what we ought to do. Sidgwick thought that this problem could not be resolved and that, therefore, ethics cannot be completely rationalized. Lazari-Radek and Singer think it can.

Their strategy is to revisit Street’s evolutionary critique of objectivity in ethics and then to show that while the maxim of universal benevolence or impartiality survives the attack, rational egoism does not. Street has argued that our evaluative attitudes, including our moral beliefs about what is right and wrong, have been shaped by evolutionary forces, and that because our knowledge of how evolution works gives us no reason to suppose that it favours the development of evaluative attitudes that are objectively true (rather than beliefs that are conducive to our survival and to reproductive success), it would be a very unlikely coincidence if our moral beliefs actually were all true. If we were constructed in a different way (say, more like social insects), so that our survival and reproductive success depended on different evaluative attitudes, then we would think differently about what is right and wrong. Hence, we have no reason to suppose that our moral beliefs are objectively true.
However, Lazari-Radek and Singer argue that while this argument is on the whole persuasive, it does not undermine the objective truth of the ultimate principle of ethics, which, with Sidgwick, they take to be the principle that we should always do “what is best for the well-being of all” (16), precisely because such a principle does not seem to improve our chances of survival or to increase our reproductive success. On the contrary, it seems to diminish them (19-21).

Indeed, if believing in the objective truth of that principle were conducive to our survival or reproductive success, then we would have no good reason to suppose that the principle was objectively true. But if it is in fact not conducive to our survival or reproductive success, then, paradoxically, we do have a reason to regard the belief as objectively true. Why is that? Because if that belief does not stem from our evolved evaluative attitudes, then it can only be the result of the use of reason. Of course, our ability to reason is generally very useful for us, allowing us to solve problems that would otherwise have threatened our survival, so it no doubt has evolutionary value too; but it is quite possible that it also allows us to do things that are not relevant to our survival, like doing advanced physics and mathematics and grasping objective moral truths. So when we ask why we developed those particular abilities in the first place if they are not conducive to our survival, a plausible explanation for their existence is “that the ability to reason comes as a package that could not be economically divided by evolutionary pressures. Either we have a capacity to reason that includes the capacity to do advanced physics and mathematics and to grasp objective moral truths, or we have a much more limited capacity to reason that lacks not only these abilities but others that confer an overriding evolutionary advantage. If reason is a unity of this kind, having the package would have been more conducive to survival and reproduction than not having it.” (17)

The unity of reason helps us explain why we have the ability to track moral truths despite the fact that we could survive and reproduce just as well, or even better, without it, and this lack of an evolutionary explanation allows us to conclude that the principle of universal benevolence must be objectively true: “there is no plausible explanation of this principle as the direct outcome of an evolutionary process, nor is there any other obvious non-truth-tracking explanation.” (26)

For the same reason we can now also conclude that rational egoism, i.e. the maxim that we should always choose the action that produces the best outcome for ourselves, is not objectively true. We do, after all, have a perfectly good evolutionary explanation for why we should have that particular evaluative attitude. The belief that we ought primarily to promote our own good and that of our kin, rather than that of everyone, is exactly the kind of evaluative attitude that we should expect to have developed under evolutionary pressures. It is therefore not reliable and should not be seen as having any normative significance. Since we have that attitude not because we have used our ability to reason and as a result grasped the truth of an underlying principle of egoism, but merely because we have been shaped by the forces of evolution to be that way, rational egoism is in fact not rational. Sidgwick’s dualism of practical reason is thus shown to be unfounded. There is no dualism of practical reason. What practical reason commands is one thing, and one thing only: that we always seek and promote the best outcome for all (28).

Is that argument convincing? I think not. While it may make sense to distrust beliefs and evaluative attitudes that we have merely because having them increases (or at one point increased) our evolutionary fitness (so that we can assume we would also have them if they were not true), and to try to confirm them on independent grounds, this gives us no reason at all not to hold on to those beliefs and attitudes. It merely gives us reason to doubt that they are objectively true. If I have to have a healthy concern for my own good to survive, and I do have an interest in surviving, then it is perfectly rational for me to promote my own good first and foremost. It is just not rational to believe that this is what I ought to do, or more precisely that it is objectively true that this is what I ought to do. In other words, it is difficult to uphold moral realism in the face of an evolutionary explanation of our moral beliefs, but it is not difficult to continue letting ourselves be guided by certain moral or prudential principles.

Perhaps more importantly in the context of the present argument, the fact that we do not have an evolutionary explanation for some of our evaluative attitudes, e.g. universal benevolence or the belief that we should do what is best for all, does not imply that they are more reliable than those that can be thus explained. When Lazari-Radek and Singer state that “there is no plausible explanation of this principle as the direct outcome of an evolutionary process, nor is there any other obvious non-truth-tracking explanation” (26), they simply assume without further argument that those acts of reasoning that lead us (or some of us) to postulate that particular moral principle of universal benevolence are truth-tracking. But surely the fact that a belief is not directly caused by evolutionary forces does not prove that we have it because it is true. If reason does indeed come in one package, so that our ability to postulate the truth of that particular moral principle is a mere (not fitness-enhancing) by-product of our (generally fitness-enhancing) ability to reason, then we have already explained it. A further explanation – that we have it because the belief is true – is not needed. Moreover, it is difficult to see why reason, although it is generally an ability that has evolved because it increases our chances of survival and not because it leads us to the truth, should in some instances allow us to see the world as it really is. If reason is not generally truth-tracking, why should we suppose that it is when it leads us to have beliefs that are not conducive to our survival? The hypothesis of a “unity of reason” helps us explain how we could have developed an ability to grasp objective moral truths, just as it helps us explain why we have developed the ability to grasp abstract mathematical truths that have no practical value, but it does nothing to show that we have indeed developed such an ability.

[1] Sharon Street, “A Darwinian Dilemma for Realist Theories of Value”, Philosophical Studies 127 (2006): 109-166.
[2] Henry Sidgwick, The Methods of Ethics, 7th ed. London: Macmillan 1907.

Saturday, 10 October 2015

Peter Singer on Ethics and Intuitions

In his 2005 paper “Ethics and Intuitions” (The Journal of Ethics 9: 331-352), which I recently reread, the Australian philosopher and ethicist Peter Singer sets out to “argue that recent research in neuroscience gives us new and powerful reasons for taking a critical stance toward common intuitions” (332). The paper follows an argumentative pattern that, I have noticed, is increasingly used by ethicists today, especially those of a broadly utilitarian persuasion: some science or other is said to present us with indubitable facts that clearly show some of our commonly held moral convictions to be wrong, unfounded, or simply not worth holding on to. In Singer’s case, the scientific findings that he builds his case on are the results of measuring test subjects’ brain activity through functional magnetic resonance imaging (fMRI) while confronting them with trolley problems and asking them what the right thing to do in such situations would be. Unsurprisingly, it turned out that people are generally more reluctant to get involved in “personal violations” than in “impersonal violations” in order to achieve a certain morally desirable outcome. If all that needs to be done to save five people on a railroad track who are about to be crushed by an oncoming trolley is to throw a switch that redirects the trolley to a different track where it will kill only one person (who would otherwise remain unharmed), most people are prepared to say that one should do it. If, however, the five people can only be saved by pushing a stranger of sufficient mass and weight onto the track to stop the trolley, then most test subjects say that this would be wrong, even though the outcome (one life is sacrificed to save five others) is exactly the same. So in the first case they judge like good utilitarians, while in the second they don’t. The question is why. 
Neuroscience provides the answer: while in the first, “impersonal”, case those areas in the brain that are associated with the emotions show less activity than the areas associated with cognition, the opposite happens in the second, “personal”, case. Also, those few people who thought it was right to push the stranger onto the tracks showed more cognitive brain activity than emotional brain activity. It did, however, take them longer to come to a decision, indicating that they, too, first had to overcome an instinctive negative emotional reaction to the idea of personally harming people.

But what do those findings tell us about what is right and wrong, and more specifically whether it is right or wrong to kill people, either personally or impersonally, in order to save a larger number of lives? It seems to me that they don’t tell us anything at all about this. Singer, however, believes otherwise. The very fact that anti-utilitarian judgements are apparently due to strong instinctive emotional reactions rather than an emotion-free process of reasoning provides sufficient grounds for rejecting those judgements as unfounded. Although admittedly those neuroscientific findings in and of themselves “cannot prove any normative view right or wrong” (347), they should ultimately lead us to embrace utilitarianism as the best normative ethical theory if we consider them in the context of what we know about our own evolutionary history. For what we know is this: for a long time humans lived in small groups, and in these groups “violence could only be inflicted in an up-close and personal way – by hitting, pushing, strangling, or using a stick or stone as a club. To deal with such situations, we have developed immediate, emotionally based responses to questions involving close, personal interactions with others” (347-8). Knowing this, we can understand why “pushing the stranger off the footbridge elicits these emotionally based responses” (348), while merely throwing a switch to kill someone from a distance does not, namely because that kind of situation simply did not arise when those responses were developed. This means that our current emotional response to personal violations is a mere accident of our evolutionary history and hence without normative significance. We should therefore disregard those responses and conclude there is no morally relevant difference between the impersonal violation (throwing the switch and thereby causing a person’s death) and the personal violation (pushing somebody to their death). 
Moreover, we should also conclude that the way most people react to the thought of impersonal violations, judging them as justifiable when they are required to bring about a greater good (i.e., in this case, more lives being saved), is the right way to react, simply because it is clearly the rational answer: “The death of one person is a lesser tragedy than the death of five people. That reasoning leads us to throw the switch in the standard trolley case, and it should also lead us to push the stranger in the footbridge” (350). Of course believing that the death of anyone is a tragedy (and hence ought to be prevented) may be said to be based on a moral intuition, but if it is, then it is a “rational intuition” (351) which derives from an impartial, objective consideration of the situation and not from the accidental circumstances of our evolutionary past. To support this view, Singer approvingly cites Sidgwick’s third ethical axiom, according to which “the good of any one individual is of no more importance, from the point of view (if I may say so) of the Universe, than the good of any other.” (351)

Now I can see at least three problems with this argument. First, contrary to what Singer suggests, the fact that humans used to live in small groups and that because of the short range of their weapons all killing was necessarily personal does nothing at all to explain why most people in our society are reluctant to regard personally killing someone as morally permissible. Especially not when it comes to strangers. On the contrary, if it is correct that all violence had to be up-close and personal (i.e. “by hitting, pushing, strangling, or using a stick or stone as a club”), and if we can assume, which I think we can, that such violence was not uncommon, especially between those small groups who, after all, had to compete for scarce resources, then believing in the formative power of our evolutionary heritage should lead us to expect people to be rather unconcerned about inflicting mortal harm on strangers. It is hard to see what evolutionary purpose a reluctance to kill strangers could have had 50,000 years ago. So the whole evolutionary explanation of our emotional reaction in the trolley case scenario is nothing but pseudo-scientific hogwash.

Secondly, it may be true that “from the point of view of the Universe” nobody’s good is of more importance than the good of any other, but that is because from the point of view of the Universe nobody’s good is of any importance at all. This is the perfectly reasonable conclusion drawn by the psychopath, who entirely lacks the instinctive emotional responses that ordinary people have to inflicting harm on others. Singer mentions the psychopath in passing when he describes the reaction of those who judged that pushing the stranger off the bridge was the right thing to do. Even those more rational test subjects, he points out, had to struggle with their emotions and ultimately judged the case “in spite of their emotions”. Anyone would, “unless they were psychopaths” (341). But since Singer argues that those emotional inhibitions, and indeed all non-rational moral intuitions, are misplaced and ought to be ignored or discarded, it is difficult to avoid the conclusion that the ideal moral reasoner is, in their mental constitution, rather like a psychopath. Singer seems to sense this himself when, in a curious move towards the end of his paper, once again citing Sidgwick in his support, he warns us against praising “people who are capable of pushing someone off a footbridge in these circumstances” (350) because that might encourage them to “do it on other occasions when it does not save more lives than it costs”. This is clearly a form of the slippery slope argument that is occasionally employed by utilitarians, but I find it difficult to get my head around it. The slippery slope argument may have some plausibility when we consider acts that are just a bit wrong or bad and that may easily pave the way for acts that gradually become worse. You start with one little theft or lie or act of bullying, add another, and soon you have created a habit that runs out of control, especially if you receive some encouragement. 
But why on earth should we expect that being encouraged to do the right thing (and praise surely is a form of encouragement) should lead people to do bad things (such as morally unjustified killing sprees)? Unless of course there is something already deeply wrong with someone who is capable of doing that supposedly right thing.

Finally, even though Singer briefly addresses the genetic fallacy in his paper, denying that he commits one, his argument seems to me a paradigmatic case of such a fallacy. The fact that the specific way we look at the world and judge what is good and bad, right and wrong, has its origin in our human nature - which has been shaped by our evolutionary history and, more generally, by circumstances beyond our control that conceivably might have been different and, if they had been, might have left us with a different nature - in no way discredits it. If it did, then we would be left with nothing at all that we could rely on, because the way we reason is just as much rooted in what we have grown to be as the way we feel. There is no view from nowhere, and the universe couldn’t care less whether we live or die, or how many live or die. Singer claims that a normative ethical theory may reject all of our moral intuitions “and still be superior to other normative theories that better matched our moral judgments” (345). I don’t think such a theory would be desirable, or useful, or indeed possible. Ethical theory cannot ignore who and what we are. It must refer back to our nature, which includes our moral nature. For better or worse, there is no escape from our way of looking at the world. Trying to get away from this, to leave our human perspective behind, is not “a way forward” (349), but a fool’s errand. For what leads to moral scepticism is not, as Singer claims (351), the acknowledgment of our grown nature, but on the contrary its denial.