Are there objective answers to moral questions?

There have been many different proposals. Many claim that there is a necessary connection between morality and religion, such that without religion (in particular, without God or gods) there is no morality; this is the core idea of Divine Command Theory, which is widely held to have several serious flaws. Most think that right and wrong are not arbitrary -- that is, if some action is wrong, it is wrong for a reason.

Aristotle and most of the ancient Greeks really had nothing to say about moral duty. On their view, all action leads to some end, and this end is pleasure or happiness. A powerful philosophical picture of human psychology, stemming from Hume, insists that beliefs and desires are distinct existences (Hume, Treatise, Book II, part iii; Smith, 7). This means that there is always a potential problem about how reasoning, which seems to work by concatenating beliefs, links up to the motivations that desire provides.

Accordingly, philosophers who have examined moral reasoning within an essentially Humean, belief-desire psychology have sometimes accepted a constrained account of moral reasoning.

As Hume has it, the calm passions support the dual correction of perspective constitutive of morality, alluded to above. Since these calm passions are seen as competing with our other passions in essentially the same motivational coinage, as it were, our passions limit the reach of moral reasoning.

One alternative appeals to principle-dependent desires: desires whose objects cannot be characterized without reference to some rational or moral principle. Introducing principle-dependent desires thus seems to mark a departure from a Humean psychology. The introduction of principle-dependent desires bursts any would-be naturalist limit on their content; nonetheless, some philosophers hold that this notion remains too beholden to an essentially Humean picture to be able to capture the idea of a moral commitment.

Desires, it may seem, remain motivational items that compete on the basis of strength. Sartre designed his example of the student torn between staying with his mother and going to fight with the Free French so as to make it seem implausible that he ought to decide simply by determining which he more strongly wanted to do. One way to get at the idea of commitment is to emphasize our capacity to reflect about what we want.

Although this idea is evocative, it provides relatively little insight into how it is that we thus reflect. Another way to model commitment is to take it that our intentions operate at a level distinct from our desires, structuring what we are willing to reconsider at any point in our deliberations (e.g., Bratman). While this two-level approach offers some advantages, it is limited by its concession of a kind of normative primacy to the unreconstructed desires at the unreflective level.

A more integrated approach might model the psychology of commitment in a way that reconceives the nature of desire from the ground up. One attractive possibility is to return to the Aristotelian conception of desire as being for the sake of some good or apparent good (cf. Richardson). Reasoning about final ends accordingly has a distinctive character (see Richardson; Schmidtz). Whatever the best philosophical account of the notion of a commitment (for another alternative, see Tiberius), much of our moral reasoning does seem to involve expressions of and challenges to our commitments (Anderson and Pildes). Recent experimental work, employing both survey instruments and brain imaging technologies, has allowed philosophers to approach questions about the psychological basis of moral reasoning from novel angles.

The initial brain data seems to show that individuals with damage to the pre-frontal lobes tend to reason in more straightforwardly consequentialist fashion than those without such damage (Koenigs et al.). Some theorists take this finding as tending to confirm that fully competent human moral reasoning goes beyond a simple weighing of pros and cons to include assessment of moral constraints.

Others, however, have argued that the emotional responses of the prefrontal lobes interfere with the more sober and sound, consequentialist-style reasoning of the other parts of the brain (e.g., Greene).

The survey data reveals or confirms, among other things, interesting, normatively loaded asymmetries in our attribution of such concepts as responsibility and causality (Knobe). A final question about the connection between moral motivation and moral reasoning is whether someone without the right motivational commitments can reason well, morally. The vicious person could trace the causal and logical implications of acting in a certain way just as a virtuous person could.

The only difference would be practical, not rational: the two would not act in the same way. On Kant's view in the Groundwork and the Critique of Practical Reason, reasoning well, morally, does not depend on any prior motivational commitment, yet remains practical reasoning. That is because he thinks the moral law can itself generate motivation. For Aristotle, by contrast, an agent whose motivations are not virtuously constituted will systematically misperceive what is good and what is bad, and hence will be unable to reason excellently.

Moral considerations often conflict with one another. So do moral principles and moral commitments. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning.

In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in the ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them. One influential building-block for thinking about moral conflicts is W. D. Ross's notion of a prima facie duty. Although this term misleadingly suggests mere appearance (the way things seem at first glance), it has stuck.

This suggests that in each case there is, in principle, some function that generally maps from the partial contributions of each prima facie duty to some actual duty. What might that function be? Ross offered no general answer. Accordingly, a second strand in Ross simply emphasizes, following Aristotle, the need for practical judgment by those who have been brought up into virtue. How might considerations of the sort constituted by prima facie duties enter our moral reasoning?

They might do so explicitly, or only implicitly. There is also a third, still weaker possibility (Scheffler, 32): it might simply be the case that if the agent had recognized a prima facie duty, he would have acted on it unless he considered it to be overridden. This is a fact about how he would have reasoned. On this conception, if there is a conflict between two prima facie duties, the one that is strongest in the circumstances should be taken to win. Duly cautioned about the additive fallacy (see section 2), however, we should not assume that the strengths of competing prima facie duties combine in any simple, additive way.

Hence, this approach will still need to rely on intuitive judgments in many cases. But this intuitive judgment will be about which prima facie consideration is stronger in the circumstances, not simply about what ought to be done.

The thought that our moral reasoning either requires or is benefited by a virtual quantitative crutch of this kind has a long pedigree. Can we really reason well morally in a way that boils down to assessing the weights of the competing considerations?
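
To make vivid what such a quantitative crutch would amount to, here is a minimal formal sketch of what a purely additive weighing model would claim (an illustration only; the weights \(w_i\) and contribution functions \(d_i\) are schematic placeholders, not anything defended in the text):

\[
\mathrm{Weight}(a) \;=\; \sum_{i=1}^{n} w_i \, d_i(a), \qquad a^{*} \;=\; \arg\max_{a \in A} \mathrm{Weight}(a),
\]

where \(d_i(a)\) registers how strongly the \(i\)-th prima facie consideration tells for or against option \(a\), and \(w_i\) is that consideration's fixed, context-independent weight. The additive fallacy cautioned against above is, roughly, the worry that no such fixed weights exist: a consideration's force may vary with which other considerations are present.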

Addressing this question will require an excursus on the nature of moral reasons. Philosophical support for this possibility involves an idea of practical commensurability. We need to distinguish, here, two kinds of practical commensurability or incommensurability, one defined in metaphysical terms and one in deliberative terms. Each of these forms might be stated evaluatively or deontically. The first, metaphysical sort of value incommensurability is defined directly in terms of what is the case.

Thus, to state an evaluative version: two values are metaphysically incommensurable just in case neither is better than the other nor are they equally good (see Chang). Now, the metaphysical incommensurability of values, or its absence, is only loosely linked to how it would be reasonable to deliberate.
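
Stated slightly more formally, as a gloss on the definition just given (writing \(A \succ B\) for "A is better than B" and \(A \approx B\) for "A and B are equally good"):

\[
A \text{ and } B \text{ are metaphysically incommensurable} \iff \neg(A \succ B) \wedge \neg(B \succ A) \wedge \neg(A \approx B).
\]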

Hence, in thinking about the deliberative implications of incommensurable values, we would do well to think in terms of a definition tailored to the deliberative context. Start with a local, pairwise form. We may say that two options, A and B, are deliberatively commensurable just in case there is some one dimension of value in terms of which, prior to (or logically independently of) choosing between them, it is possible adequately to represent the force of the considerations bearing on the choice.

Philosophers as diverse as Immanuel Kant and John Stuart Mill have argued that unless two options are deliberatively commensurable, in this sense, it is impossible to choose rationally between them. Interestingly, Kant limited this claim to the domain of prudential considerations, recognizing moral reasoning as invoking considerations incommensurable with those of prudence.

One important assumption in this vicinity (cf. Schneewind) is the principle that conflict between distinct moral or practical considerations can be rationally resolved only on the basis of some third principle or consideration that is both more general and more firmly warranted than the two initial competitors. From this assumption, one can readily build an argument for the rational necessity not merely of local deliberative commensurability, but of a global deliberative commensurability that, as in Mill and Sidgwick, accepts just one ultimate umpire principle (cf. Richardson).

Sometimes indeed we revise our more particular judgments in light of some general principle to which we adhere; but we are also free to revise more general principles in light of some relatively concrete considered judgment.

On this picture, there is no necessary correlation between degree of generality and strength of authority or warrant. This holistic way of proceeding can be followed whether in building moral theory or in deliberating about what to do. Note that this statement, which expresses a necessary aspect of moral or practical justification, should not be taken as a definition or analysis thereof. If even the desideratum of practical coherence is subject to such re-specification, then this holistic possibility really does represent an alternative to commensuration, as the deliberator, and not some coherence standard, retains reflective sovereignty (Richardson).

The result can be one in which the originally competing considerations are not so much compared as transformed (Richardson). Suppose that we start with a set of first-order moral considerations that are all commensurable as a matter of ultimate, metaphysical fact, but that our grasp of the actual strength of these considerations is quite poor and subject to systematic distortions. Perhaps some people are much better placed than others to appreciate certain considerations, and perhaps our strategic interactions would cause us to reach suboptimal outcomes if we each pursued our own unfettered judgment of how the overall set of considerations plays out.

In such circumstances, there is a strong case for departing from maximizing reasoning without swinging all the way to the holist alternative. A simple example is that of Ann, who is tired after a long and stressful day, and hence has reason not to act on her best assessment of the reasons bearing on a particularly important investment decision that she immediately faces: her tiredness gives her what Raz calls an exclusionary reason, a second-order reason to refrain from acting on certain first-order reasons. This notion of an exclusionary reason allowed Raz to capture many of the complexities of our moral reasoning, especially as it involves principled commitments, while conceding that, at the first order, all practical reasons might be commensurable.

The broader justification of an exclusionary reason, then, can consistently be put in terms of the commensurable first-order reasons. Whether such an attempt could succeed would depend, in part, on the extent to which we have an actual grasp of first-order reasons, conflict among which can be settled solely on the basis of their comparative strength.

If that is right, then we will almost always have good exclusionary reasons to reason on some other basis than in terms of the relative strength of first-order reasons. Suppose, to take a stock example, that one can save someone's life only by breaking a relatively trivial promise, and that we judge that one ought to do so. We may take it, if we like, that this judgment implies that we consider the duty to save a life, here, to be stronger than the duty to keep the promise; but in fact this claim about relative strength adds nothing to our understanding of the situation.

The statement that this duty is here stronger is simply a way to embellish the conclusion that of the two prima facie duties that here conflict, it is the one that states the all-things-considered duty. Hence, the judgment that some duties override others can be understood just in terms of their deontic upshots and without reference to considerations of strength. Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas.

Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong suggested that a moral dilemma is a situation in which the following are true of a single agent: he ought to do A; he ought to do B; and he cannot do both A and B. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. The possibility of such a situation would seem to be ruled out by two purported principles of the logic of duties taken together: the agglomeration principle, that if one ought to do A and ought to do B then one ought to do both A and B, and the principle that 'ought' implies 'can'. If either of these purported principles is false, then moral dilemmas are possible, as the sketch below makes explicit.
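
To see why those two purported principles jointly rule out dilemmas, here is a standard deontic-logic sketch (a reconstruction offered for illustration, writing \(O\) for "ought" and \(\Diamond\) for "can"). On the definition above, a dilemma is a situation in which

\[
O(A), \qquad O(B), \qquad \neg\Diamond(A \wedge B).
\]

Agglomeration says \(O(A) \wedge O(B) \rightarrow O(A \wedge B)\); "ought implies can" says \(O(A \wedge B) \rightarrow \Diamond(A \wedge B)\). From the first two conditions and agglomeration we get \(O(A \wedge B)\), and with "ought implies can" this yields \(\Diamond(A \wedge B)\), contradicting the third condition. Hence a genuine dilemma is possible only if at least one of the two principles fails.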

Dancy argues that reasons holism supports moral particularism of the kind discussed in section 2. Taking this conclusion seriously would radically affect how we conducted our moral reasoning. Philosophers have also challenged the inference from reasons holism to particularism in various ways. Mark Lance and Margaret Olivia Little have done so by exhibiting how defeasible generalizations, in ethics and elsewhere, depend systematically on context. We can work with them, they suggest, by utilizing a skill that is similar to the skill of discerning morally salient considerations, namely the skill of discerning relevant similarities among possible worlds.

More generally, John F. Horty has developed a logical and semantic account according to which reasons are defaults and so behave holistically, but there are nonetheless general principles that explain how they behave (Horty). And Mark Schroeder has argued that our holistic views about reasons are actually better explained by supposing that there are general principles (Schroeder). This excursus on moral reasons suggests that there are a number of good reasons why reasoning about moral matters might not simply reduce to assessing the weights of competing considerations.

If we have any moral knowledge, whether concerning general moral principles or concrete moral conclusions, it is surely very imperfect. What moral knowledge we are capable of will depend, in part, on what sorts of moral reasoning we are capable of. Although some moral learning may result from the theoretical work of moral philosophers and theorists, much of what we learn with regard to morality surely arises in the practical context of deliberation about new and difficult cases.

This deliberation might be merely instrumental, concerned only with settling on means to moral ends, or it might be concerned with settling those ends. There is no special problem about learning what conduces to morally obligatory ends: that is an ordinary matter of empirical learning. But by what sorts of process can we learn which ends are morally obligatory, or which norms morally required? And, more specifically, is strictly moral learning possible via moral reasoning?

Much of what was said above with regard to moral uptake applies again in this context, with approximately the same degree of dubiousness or persuasiveness. For instance, it is conceivable that our capacity for outrage is a relatively reliable detector of wrong actions, even novel ones, or that our capacity for pleasure is a reliable detector of actions worth doing, even novel ones.

For a thorough defense of the latter possibility, which intriguingly interprets pleasure as a judgment of value, see Millgram. That is to say, perhaps our moral emotions play a crucial role in the exercise of a skill whereby we come to be able to articulate moral insights that we have never before attained. Perhaps competing moral considerations interact in contextually specific and complex ways much as competing chess considerations do.

If so, it would make sense to rely on our emotionally guided capacities of judgment to cope with complexities that we cannot model explicitly, but also to hope that, once having been so guided, we might in retrospect be able to articulate something about the lesson of a well-navigated situation. If happiness indeed has constitutive means, then perhaps we can learn by experience what some of them are (cf. Dewey).

Once we recognize that moral learning is a possibility for us, we can recognize a broader range of ways of coping with moral conflicts than was canvassed in the last section. There, moral conflicts were described in a way that assumed that the set of moral considerations, among which conflicts were arising, was to be taken as fixed. If we can learn, morally, however, then we probably can and should revise the set of moral considerations that we recognize. Often, we do this by re-interpreting some moral principle that we had started with, whether by making it more specific, making it more abstract, or in some other way (cf. Richardson).

So far, we have mainly been discussing moral reasoning as if it were a solitary endeavor. This is, at best, a convenient simplification (cf. Laden). In any case, it is clear that we often do need to reason morally with one another. Here, we are interested in how people may actually reason with one another, not in how imagined participants in an original position or ideal speech situation may be said to reason with one another, which is a concern for moral theory, proper.

There are two salient and distinct ways of thinking about people morally reasoning with one another: as members of an organized or corporate body that is capable of reaching practical decisions of its own; and as autonomous individuals working outside any such structure to figure out with each other what they ought, morally, to do.

The nature and possibility of collective reasoning within an organized collective body has recently been the subject of some discussion. Collectives can reason if they are structured as an agent.

This structure might or might not be institutionalized. As List and Pettit have shown, participants in a collective agent will unavoidably have incentives to misrepresent their own preferences in conditions involving ideologically structured disagreements, where the contending parties are oriented to achieving or avoiding certain outcomes, as is sometimes the case where serious moral disagreements arise.

Where the group in question is smaller than the set of all persons, however, such a collectively prudential focus is distinct from a moral focus and seems at odds with the kind of impartiality typically thought distinctive of the moral point of view. This does not mean that people cannot reason together, morally. It suggests, however, that such joint reasoning is best pursued as a matter of working out together, as independent moral agents, what they ought to do with regard to an issue on which they have some need to cooperate.

In the case of independent individuals reasoning morally with one another, we may expect that moral disagreement provides the occasion rather than an obstacle; Cohen has argued along these lines (Cohen). What about the possibility that the moral community as a whole (roughly, the community of all persons) can reason?

This possibility does not raise the kind of threat to impartiality that is raised by the team reasoning of a smaller group of people; but it is hard to see it working in a way that does not run afoul of the concern about whether any person can aptly defer, in a strong sense, to the moral judgments of another agent. Even so, a residual possibility remains, which is that the moral community can reason in just one way, namely by accepting or ratifying a moral conclusion that has already become shared in a sufficiently inclusive and broad way (Richardson).

To be sure, most great philosophers who have addressed the nature of moral reasoning were far from agnostic about the content of the correct moral theory, and developed their reflections about moral reasoning in support of or in derivation from their moral theory. The general philosophical questions about moral reasoning may be grouped around the following:

How do relevant considerations get taken up in moral reasoning?
Is it essential to moral reasoning for the considerations it takes up to be crystallized into, or ranged under, principles?
How do we sort out which moral considerations are most relevant?
In what ways do motivational elements shape moral reasoning?
What is the best way to model the kinds of conflicts among considerations that arise in moral reasoning?
How can we reason, morally, with one another?

Does the joy or, to use Harris' words, flourishing that those creatures experience outweigh it? Is there an objective way to measure?

Of course not, it's all subjective. Even if we can use science to determine how our subjective views are related (do I actually prefer mint ice cream to raspberry ripple?), that would not make the underlying values objective.

Nice post Brian, I agree. In fact, some similar points about Harris were made on this blog when his book came out.

Simon, thanks for sharing your earlier post.

You go into much greater detail about the sort of philosophical work that would be required to make the argument Harris pretends to make; and we are in perfect agreement that he doesn't do this work: in fact, he doesn't attempt it.

I'm not bothered so much by some of the specific claims Harris makes — such as: throwing acid in a girl's face for trying to learn how to read is not nice — but rather by his extraordinary disregard for basic reasoning, conceptual clarity, etc. Secular moral philosophy badly done is still secular moral philosophy: calling it "science" doesn't gain you an inch of objectivity, and I fear it makes progress on these important questions much harder, by muddying the waters.

Again, I appreciate your link, and your carefully reasoned critique — I realize I'm joining the parade rather late in the day!

As you've shown, different people can make essentially the same critique in quite different ways. I think it's helpful to do that … and with respect to a publicity-seeker like Harris, it's especially helpful to do it in ways accessible to non-philosophers.

So well done! I suppose I only disagree with you when you say that Harris does "plain old secular moral reasoning" in his book — I think he actually does very little moral reasoning, because as you point out, he simply cherry picks easy moral examples and then claims, without any recognizable argument, that they are supposed to teach us a general lesson.

How's that for honesty? Certainly you would find much more and much better secular moral reasoning in virtually any introductory ethics book.

Simon, would you be willing to spell out how it is that Nozick's experience machine challenges that view? I'm curious to learn! How does Nozick's experience machine challenge the view that value consists only in conscious states, such as pleasure?

As a response to the experience machine, may I suggest taking a look at a recent piece of experimental philosophy by Filipe de Brigard.

He surveyed a number of his students and found that it wasn't clear that, if they found out their current life was in an experience machine or The Matrix, they would prefer to abandon it and return to 'reality'. He suggests many people's reactions to Nozick's experience machine can be explained by a psychological 'status quo' bias: people want to stick with what they have, regardless of whether it's real or not.

Simon, just finished reading the passages in the book you sent. Beautifully written, and they very carefully capture the intuition you spoke of. Thank you again for passing this on.

Thank you Simon and Matt for those resources. Matt, I think this new work on the experience machine is crucial: the idea that the status quo bias may explain much of people's preferences in the original thought experiment seems potent.

Matt and Brian, De Brigard's experimental work on the experience machine is pretty clearly fatally flawed, I think.

He claims that status quo bias explains reluctance to unplug in his surveys, and could similarly be what dissuades us from plugging in.

But there's an extremely important disanalogy between Nozick's original plugging-in scenario and the unplugging scenario he tests in his surveys. Someone asked to unplug faces abandoning the people and projects they currently care about, since everything they are attached to lies inside the machine. By contrast, my actual desires include desires for the wellbeing of my loved ones and the fulfillment of my plans, for example, and I would look forward to these desires seeming satisfied if I entered the experience machine: in Nozick's scenario, my plugged-in life would develop organically from my present one, only better. So there's no similar disconnect here, and for that reason there are no grounds for thinking that status quo bias would dissuade me from plugging in, while it very clearly could dissuade me from unplugging.

In fact, I don't see why experimental methods actually lend anything at all to De Brigard's argument. There are no reports of any surprising findings about differences in intuitions between groups in it. This work could be conducted from the armchair by lone philosophers (though of course with input from empirical psychology identifying general psychological sources of bias).

Experimental methods like De Brigard's are only of interest when people's intuitions significantly differ.

Matt, I'm not sure what you mean by the claim that people value "the conscious state of experiencing reality".

If the machine perfectly replicates the experience of living in the real world, then there is no conscious state available in reality that would be missing inside the machine. So this can't be the explanation of their choosing not to plug in. Maybe you mean that people value a mental state with externalist content, that is, a state that by definition they can only have if they are indeed experiencing reality, like "the knowledge that snow is white", which you can only have if snow is indeed white. (Assuming, controversially, that such states count as conscious states at all!) But this seems to me a whole lot less plausible than the claim that people value both mental states of certain kinds, and realities of certain kinds, and these things together explain their reluctance to plug in to the machine.

And, anyway, you can trivially show that people value reality being a certain way, irrespective of their conscious states, by using further, similar examples.

Great post. I have to say that I had many of the same reactions while reading The Moral Landscape. I was hoping for some sort of moral revelation that proved, scientifically, beyond a reasonable doubt, that a certain world-view and sense of morality was objectively superior to another.

What I was left reading was a paragon of circular reasoning and an exercise in frustration. Okay: "When humans suffer, their brains register the suffering and we can measure it." Again, as you say, it's less science than it is common sense. And that's really nothing new. To call a western version of common sense "objective" and back it up with speculative future-science that may not even ultimately contribute to the argument is specious.

I'm not sure I would call him a "liar" per se, but I would certainly call his book inaccurate and a little misleading. There were some good, hearty re-hashings of good old-fashioned utilitarian hedonism on a macro scale, a school of thought with which I happen to sympathize. But his whole argument rested on a pillar or, as you put it, cornerstone of subjectivity on which Harris scribbled the words "objective science".

Trevor, your paragraph on fMRIs and suffering is a perfect distillation of the way I felt, too. Thanks for your input.

I totally agree about Harris, and it's a shame that he is the person associated with this claim, since there are much better attempts out there. It doesn't take philosophical genius, though, just the same sort of caveats and hedgings that any argument in philosophy (even Hume's law) ultimately depends on. I made such an attempt as part of my dissertation, "Hedonism as the Explanation of Value".

It's basically reductionist Cornell realism with a bit of scientific (some neuro-) psychology to back it up.

David, do you have your dissertation posted somewhere online?

Is there a link? I'm curious indeed. Best wishes, Brian.

No, common sense will not do fine. The "common sense" of the Taliban tells them it is fine to force women to wear burkas. Common sense is highly subjective.

Hi Scote, thanks for your comments.

If someone DOESN'T have this position to start with, there's nothing "science" can do to address the moral disagreement.

I am surprised that Harris does not properly evaluate the influence of cognitive-emotive thinking on the perception of pain, or the role that hormones and other neuro-active chemicals have on the perception of pain.

Any psychologist or neuro-scientist working in the area of mental health should be aware that what is painful or pleasurable to someone at one point in their life may be perceived differently at another point in their lives.

So which reading do we take as accurate?

Dear Rosemary, thank you for your contribution. I'm as surprised as you are. But his discussion just doesn't go there, wildly swinging from common sense to dubious meta-ethical assertions. Best, Brian.

I think the difference between me and Harris is that I have trained and worked as a clinician in a number of different settings and a number of different specialities in clinical and neuro-psychology. I suspect that Harris has had a "sheltered" clinical life, or none at all.

While I'm partially sympathetic to the criticisms, and I also realize Harris does none of this sort of investigation either, what if we determined (as I've guessed for a long time is the case) that humans are "naturally" consequentialists (not utilitarians, mind!)?

Of course, the reverse would be interesting as well, for the same reason, but I suspect it isn't true. Or more complicatedly, that there are psychological dispositions to one or the other which vary in strength. Yes, this couldn't tell us which viewpoint was "correct" in some strong moral realist way, but it could tell us how likely it was that a person or humans generally could adopt a given ethical viewpoint.

If one adopts something like the "ought implies can" principle, then it seems to follow we'd have an argument against the particular "ethical theory" in question, at least to a limited degree.

Needless to say, some people will still say we should adopt it even if it is psychologically impossible for some of us to adopt it, but ….

Hi Keith, thanks for your input. I'm not sure I agree that the data show this, but that's a different story. Either way, people are clearly able to choose options or reasons in ways which might be thought of as both deontological and utilitarian, and I'm not sure that a statistical shading of ease going one way or another gives us anything to work with to answer, "how should we live?"

Could you explain how, exactly, the evidence you're speaking of would show certain ethical theories are impossible to adopt?

The fact that some theories are more prominent or, perhaps, easier to adopt than others would not show this, nor would evidence concerning whether we were naturally predisposed towards certain theories.

Okay, I keep being told this is a duplicate comment, but haven't seen my comment appear, so will try one more time. I haven't read the book, but is it the case that Harris is very concerned with moral relativism? I mean, look at his examples: the Taliban, and the throwing of acid in a girl's face because she tries to learn to read.

Within the Taliban's own value system, forcing women to wear burqas at the threat of being beaten, and stoning gays to death, are morally okay. An answer to this would be if morality were related to something objective about the world, something that would hold true for any group of people.

I guess Harris is thinking, "What could be more objective than science?"

You're exactly right. This is Harris' approach: he points out that religions make absolutist moral claims, and secularists have nothing to respond with, because they're stuck with weak-tea relativism. Well then, Harris thinks, what could be more objective than science to ground moral judgments as against the religious folks?

But it's not. Or if it is, Harris doesn't show how. So while relativism might not be very satisfying — while it feels better to say to the Taliban, "You're WRONG, according to science" — we're left with the grayer position of criticizing them from within a philosophical framework whose cogency we have to show through argument and other forms of persuasive appeal.

I would love to read a serious negative review of The Moral Landscape. Unfortunately this isn't it. It's full of philosophical cant I learned as a freshman and does nothing to address the arguments Harris uses to support the overarching argument you sketch above.

Also, I don't know where people get the idea that Harris believes himself to be doing original philosophical work, let alone that it is anything approaching revolutionary. For heaven's sake, he cites the work of philosophers who make the same or similar arguments! Ethicists can legitimise their speculations about what 'ought' to be by aligning their claims with what the sciences have discovered about brains, psychology, social groups, etc. (the 'is').

This is profound news outside of academia! We see "common sense" leading to vastly less harmonious ways of thinking about emotions, culture, and morality. This is exactly what's under discussion. I confess your Oct '10 post (the scientistic argument) hasn't helped me to understand why the sciences cannot expand to investigate ethics. I have in mind the social sciences, like political science, not simply hard science drawing strong conclusions.

As to your question, do we not increase the legitimacy of arguments about animal treatment by invoking contemporary scientific claims about pain and stress? TML is arguing that — far from being ethically irrelevant — scientific discoveries across the disciplines are already confluent with learning how we 'ought' to behave in the world.

No, blamer, The Moral Landscape claims to argue for much more than what you suggest. That's just blatantly obvious!


