fnord888:

Singer prefers to use a shocking, attention-getting frame even when it isn’t the consistent utilitarian frame.

Take, for example, here. Consistent, boring utilitarian frame: “Sorry, but despite the utility you get from practicing your religion as you prefer, which is why we have religious freedom in the first place, it’s outweighed by the disutility of inhumane slaughter practices/increased utility of a unified healthcare law/military service system.” Frame that’s shocking and consistent but gets you dismissed as a crank rather than invited to write op-eds: “The real issue isn’t humane slaughter, it’s the fact that the resources used to produce meat could be used to produce more food to feed starving children in Africa.” Singer, grabbing attention: “This isn’t about religious freedom, I judge that your preferences are unimportant.”

If you want to argue about what the logically consistent position of utilitarianism is, I already stated it, which is that “people should donate to charity” and “donating to charity is morally praiseworthy” and “donating to charity is a moral duty” are all equally true and in fact simply different ways of framing the same fact that arises from the fundamental premise of utilitarianism (and the utility calculation about charity).

I don’t think Peter Singer sounds particularly inflammatory there (though this may be because I’m personally bad at determining what looks inflammatory to other people), or that he judges that religious people’s preferences are unimportant.

The position “Donating to charity is a moral duty” is not compatible with the position “Donating to charity is morally praiseworthy”, because if something is a duty, it can’t be praiseworthy - you don’t get praised for doing something you’re supposed to do. Given that utilitarianism, a normative ethical theory, holds that one should maximize world utility, maximizing world utility is a duty, and therefore isn’t praiseworthy.

(Source: raginrayguns)

scientiststhesis:

What does it mean to say that you “believe” in your ethical system? What counts as evidence for it, or against it? What evidence convinced you that this ethical system was the right one as opposed to some other?

I give one example here. To elaborate some, I personally reject moral externalism (and therefore utilitarianism and deontology), so morality (whatever it is) must follow from what we already want - it must follow from our already existing interests. See this SEP article, specifically the section “Hobbesian contractualism”:

The correct moral principles are those that would be produced by rational agreement because they are mutually advantageous… rational agreement does not simply show which moral principles are correct; rather, the correctness of moral principles is constituted by the fact that they would be agreed upon in the specified circumstances.

"Should" has a meaning independent of any particular moral theory, though it’s such a fundamental word that I’m not sure how to describe it in other words. Utilitarians, Kantians, egoists, etc, disagree about what people should do, but there is an ethically neutral meaning of "should" that they use when talking to each other.

"Should" has a template of a meaning. It’s supposed to mean “the thing your moral theory says is the right thing to do.”

No, “should” means more than that, it not only means “the thing your moral theory says is the right thing to do” but also “the right thing to do” simpliciter. For example, when utilitarians and Kantians debate, they’re not debating whether utilitarianism/Kantianism says that something is the right thing to do, they’re debating whether that action is actually the right thing to do.

I would like to meet this “standard utilitarian” you speak of, because every utilitarian I have ever met or heard of, including myself, is not a moral realist

Utilitarianism is a normative ethical theory. [1][2][3] Normative ethical theories presuppose moral realism, as they are concerned with how one should act*. Also see Peter Singer here.

And finally: are you sure you’re not committing the typical mind fallacy when you talk about how this Standard Utilitarian reasons? Because their reasoning looks suspiciously similar to your own, and your reasoning is not one I have ever found before, nor is it one I understand.

No, the reason the standard utilitarian’s reasoning sounds similar to mine is because both the standard utilitarian and I are moral realists, and this is how moral realism works, or at least is one approach to it.

*Not “should” in the sense of how one should act according to a particular ethical theory, but simply with how one should act. For example, for a utilitarian, “utilitarianism says you should do X” and “you should do X” are synonymous, and this is not just because there’s an unstated “according to utilitarianism” in the second statement, but because there’s an unstated “according to utilitarianism, which is the correct ethical theory”.

(Source: raginrayguns)

fnord888:

1) On the contrary, the “extreme altruism” meme is at least somewhat separate from utilitarianism, even as utilitarianism is conceived in the modern academy. Peter Unger isn’t a utilitarian (at least, not a standard utilitarian by your formulation).

Extreme altruism is separate from utilitarianism - one can be an extreme altruist but not a utilitarian - but utilitarianism in the current world implies extreme altruism, as Singer suggests.

2) As I stated upthread, it’s equally correct and in fact equivalent under the fundamental principle of utilitarianism to say that donating to charity is praiseworthy and donating to charity is a duty. You say that utilitarians only emphasize the second for strategic reasons. But consider: Singer has strategic reasons to emphasize the shocking and disconcerting framing of the utilitarian position on charity.

First, of course, is that Singer apparently likes to shock people. The way he frames the arguments about infanticide, disability, animal rights, it’s not any different. Even if the framing contradicts the precepts of preference utilitarianism, sometimes.

Second, that sort of shock can be an effective technique for persuasion. It shakes people out of the status quo. That’s particularly true when you follow it up with a fall-back that’s not only comparatively reasonable but basically minimal (as I noted, the minimum standard he actually endorses for most people in The Life You Can Save is less than the tithe required by many traditional religious ethics). It’s the moral rhetoric equivalent of a concession close.

I don’t know what Singer’s motivations are, but it makes sense for him to say what he says because it follows from the basic utilitarian principle of maximizing world utility. Singer may get more attention by being shocking - but the consistent utilitarian position is shocking, so he’s presenting it accurately.

As for Singer denying supererogation, supererogation requires an absence of duty as well as the presence of praise.

From the SEP:

Supererogation is the technical term for the class of actions that go “beyond the call of duty.” Roughly speaking, supererogatory acts are morally good although not (strictly) required.

Which implies that supererogation requires a presence of duty. You can’t go beyond the call of duty if there is no duty.

(Source: raginrayguns)

fnord888:

Yes, because moral realists are moral realists. Obviously they claim that.

But the chess analogy provides absolutely no support for that position. Which was my point in the first place.

The point of the chess analogy is not to show that moral realism is true, it’s to show that objective factual statements can be made about social constructs. It shows that a position like “You can’t say that something is true of morality, because it’s a social construct” is incorrect. Maybe you can say that nothing is true of objective morality - but not because morality is a social construct.

(Source: eccentric-opinion)

fnord888:

Arguendo, let’s accept that characterization and see where that takes us if we apply it to moral systems again.

We are always in the frame of utilitarianism in the sense that there exists a certain system called utilitarianism and that it is possible to make objective statements about it. If you say “the morally correct action is the one that maximizes the paperclip count” you are making an objectively wrong statement about the rules of utilitarianism.

It’s about utilitarianism, and about the world only insofar as utilitarianism is within the world. That’s not an affirmation that the world is within utilitarianism (as you put the utilitarian realist position).

However, when utilitarians say that we are always in the frame of utilitarianism, they mean more than that we can say that certain things are true according to utilitarianism, they mean that utilitarianism is true, so if something is true according to utilitarianism, it’s true simpliciter.

(Source: eccentric-opinion)

fnord888:

Peter Singer is, I suspect, actually the source for the one percent number Ozy refers to, in The Life You Can Save. Though his actual recommendation is more complicated than that, it falls short of even the traditional tithe for a vast majority of the population of even affluent countries. One SHOULD give more, he agrees. But when it comes to drawing the line between assigning blame for failure to meet a standard and praise for exceeding it, that’s where he drew the line.

Now, I’d be willing to stipulate that significant parts of mainstream philosophy, including John Stuart Mill and even (especially) Peter Singer, fail to reason rigorously or even consistently about ethics. But you’ve been the one appealing to “the standard utilitarian position” when people state that their conception of utilitarianism doesn’t possess the flaws you’re claiming.

In the paper I posted, Singer denies supererogation, so he can’t say that some levels of donations are morally praiseworthy.

More from him:

So how does my philosophy break down in dollars and cents? An American household with an income of $50,000 spends around $30,000 annually on necessities, according to the Conference Board, a nonprofit economic research organization. Therefore, for a household bringing in $50,000 a year, donations to help the world’s poor should be as close as possible to $20,000. The $30,000 required for necessities holds for higher incomes as well. So a household making $100,000 could cut a yearly check for $70,000. Again, the formula is simple: whatever money you’re spending on luxuries, not necessities, should be given away.

Now, evolutionary psychologists tell us that human nature just isn’t sufficiently altruistic to make it plausible that many people will sacrifice so much for strangers. On the facts of human nature, they might be right, but they would be wrong to draw a moral conclusion from those facts. If it is the case that we ought to do things that, predictably, most of us won’t do, then let’s face that fact head-on. Then, if we value the life of a child more than going to fancy restaurants, the next time we dine out we will know that we could have done something better with our money. If that makes living a morally decent life extremely arduous, well, then that is the way things are.

Something like this is the standard utilitarian position - the specific numbers may differ, but the main point is that you should give a lot.
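Singer’s formula in the quoted passage is simple arithmetic, and can be sketched in a few lines (a toy illustration only - the flat $30,000 necessities threshold is just the Conference Board figure he cites applied uniformly, and his actual recommendation in The Life You Can Save is more graduated):

```python
# Toy sketch of the formula in the quoted passage: everything spent on
# luxuries rather than necessities should be given away. The flat $30,000
# threshold is the Conference Board figure Singer cites, applied to all incomes.
NECESSITIES = 30_000  # annual household spending on necessities, per the quote

def suggested_donation(income: int) -> int:
    """All income beyond necessities, per the quoted formula."""
    return max(income - NECESSITIES, 0)

print(suggested_donation(50_000))   # 20000, the $50,000 household in the quote
print(suggested_donation(100_000))  # 70000, the $100,000 household
```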

I’m not arguing that the standard utilitarian position is correct - far from it, I think it’s wildly incorrect. But it is the standard utilitarian position, and if you don’t agree with it, don’t call yourself a utilitarian.

(Source: raginrayguns, via fnord888)

fnord888:

The point of the Tolkien analogy is to refute the argument that something must be real if objectively true statements can be made about it.

I’m not sure. It could also be said that statements like “Smaug was killed by Samwise Gamgee” are really abbreviations of statements like “‘The Hobbit’ and ‘Lord of the Rings’ describe a world in which Smaug was killed by Samwise Gamgee”, which are statements about the real world - specifically, about what’s written in the books.

As for the other argument in the chess analogy, you say elsewhere:

For [moral realist] utilitarians, it’s more than “The moral theory I subscribe to says maximizing world utility is the right thing to do”, it’s “Maximizing world utility is the right thing to do”.

But, outside the frame of chess, the rules of chess are not objectively correct. In fact, they’re objectively incorrect: in reality, nothing prevents one from moving a bishop down a file rather than diagonally. And no thoughtful chessplayer would deny that fact.

For chessplayers, it really is just “According to the rules of the game I am choosing to play/talk about, the correct way to move a bishop is diagonally.” 

We are always in the frame of chess - in the sense that there are certain rules of chess that exist even when we’re not playing or thinking about the game - and utilitarians would say that we are always in the frame of utilitarianism. The fact that it is physically possible for us to move pieces incorrectly or act in ways contrary to utilitarianism does not mean that we are ever outside the frame in which our actions are correct or incorrect. Lightning will not strike you down if you move a bishop down a file, but if you say that bishops move like rooks, you will be making an objectively wrong statement about the rules of chess.
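The chess point can even be made mechanical: the rules are a human construct, but whether a given statement about them is correct is objectively checkable. A minimal sketch (ignoring board state and other pieces, which a real move validator would need):

```python
# The rules of chess are a human construct, yet statements about them are
# objectively true or false. A legal bishop move (ignoring other pieces)
# changes file and rank by the same nonzero amount, i.e. it is diagonal.

def bishop_move_legal(src: tuple[int, int], dst: tuple[int, int]) -> bool:
    """True iff moving a bishop from src (file, rank) to dst is diagonal."""
    d_file = abs(src[0] - dst[0])
    d_rank = abs(src[1] - dst[1])
    return d_file == d_rank and d_file > 0

print(bishop_move_legal((2, 0), (4, 2)))  # True: a diagonal move
print(bishop_move_legal((2, 0), (2, 5)))  # False: "bishops move like rooks" is objectively wrong
```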

(Source: eccentric-opinion)

aboveauthority:

monolingual privilege is not accidentally saying “gracias” at an italian restaurant when they bring ur food over

fnord888:

So let’s try a different tack. What do classical utilitarians say about the subject? From John Stuart Mill’s Utilitarianism:

The objectors to utilitarianism cannot always be charged with representing it in a discreditable light. On the contrary, those among them who entertain anything like a just idea of its disinterested character, sometimes find fault with its standard as being too high for humanity. They say it is exacting too much to require that people shall always act from the inducement of promoting the general interests of society. But this is to mistake the very meaning of a standard of morals, and confound the rule of action with the motive of it. It is the business of ethics to tell us what are our duties, or by what test we may know them; but no system of ethics requires that the sole motive of all we do shall be a feeling of duty; on the contrary, ninety-nine hundredths of all our actions are done from other motives, and rightly so done, if the rule of duty does not condemn them. It is the more unjust to utilitarianism that this particular misapprehension should be made a ground of objection to it, inasmuch as utilitarian moralists have gone beyond almost all others in affirming that the motive has nothing to do with the morality of the action, though much with the worth of the agent. He who saves a fellow creature from drowning does what is morally right, whether his motive be duty, or the hope of being paid for his trouble; he who betrays the friend that trusts him, is guilty of a crime, even if his object be to serve another friend to whom he is under greater obligations.

But to speak only of actions done from the motive of duty, and in direct obedience to principle: it is a misapprehension of the utilitarian mode of thought, to conceive it as implying that people should fix their minds upon so wide a generality as the world, or society at large. The great majority of good actions are intended not for the benefit of the world, but for that of individuals, of which the good of the world is made up; and the thoughts of the most virtuous man need not on these occasions travel beyond the particular persons concerned, except so far as is necessary to assure himself that in benefiting them he is not violating the rights, that is, the legitimate and authorised expectations, of any one else. The multiplication of happiness is, according to the utilitarian ethics, the object of virtue: the occasions on which any person (except one in a thousand) has it in his power to do this on an extended scale, in other words to be a public benefactor, are but exceptional; and on these occasions alone is he called on to consider public utility; in every other case, private utility, the interest or happiness of some few persons, is all he has to attend to. Those alone the influence of whose actions extends to society in general, need concern themselves habitually about so large an object. In the case of abstinences indeed - of things which people forbear to do from moral considerations, though the consequences in the particular case might be beneficial - it would be unworthy of an intelligent agent not to be consciously aware that the action is of a class which, if practised generally, would be generally injurious, and that this is the ground of the obligation to abstain from it. The amount of regard for the public interest implied in this recognition, is no greater than is demanded by every system of morals, for they all enjoin to abstain from whatever is manifestly pernicious to society.

Emphasis added.

Which brings us back to the original point about eating cheese because you enjoy it being a morally desirable action under utilitarianism.

While Mill contributed to the development of utilitarianism, he was empirically mistaken that “the occasions on which any person…has it in his power to [multiply happiness] on an extended scale… are but exceptional” - given the opportunities to improve people’s lives with charity, this is certainly not the case today, even if it was in his time (which is also doubtful). Given that there are numerous opportunities to forgo one’s own benefit to increase world utility, I think Mill’s position would have been different if he had been aware of this fact. A more modern utilitarian, Peter Singer, writes about this in Famine, Affluence, and Morality:

[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of moral importance, we ought, morally, to do it… [I]f I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out…

The uncontroversial appearance of the principle just stated is deceptive. If it were acted upon… our lives, our society, and our world would be fundamentally changed…

The outcome of this argument is that our traditional moral categories are upset. The traditional distinction between duty and charity cannot be drawn, or at least, not in the place we normally draw it. Giving money to the Bengal Relief Fund is regarded as an act of charity in our society. The bodies which collect money are known as “charities”. These organizations see themselves in this way - if you send them a check, you will be thanked for your “generosity”. Because giving money is regarded as an act of charity, it is not thought that there is anything wrong with not giving. The charitable man may be praised, but the man who is not charitable is not condemned. People do not feel in any way ashamed or guilty about spending money on new clothes or a new car instead of giving it to famine relief. (Indeed the alternative does not occur to them.) This way of looking at the matter cannot be justified. When we buy new clothes not to keep ourselves warm but to look “well-dressed” we are not providing for any important need. We would not be sacrificing anything significant if we were to wear our old clothes, and give the money to famine relief. By doing so, we would be preventing another person from starving. It follows from what I have said earlier that we ought to give money away, rather than spend it on clothes which we do not need to keep us warm. To do so is not charitable, or generous. Nor is it the kind of act which philosophers and theologians have called “supererogatory” - an act which it would be good to do, but not wrong not to do. On the contrary, we ought to give the money away, and it is wrong not to do so.

Given the present conditions in many parts of the world, however, it does follow from my argument that we ought, morally, to be working full time to relieve great suffering of the sort that occurs as a result of famine or other disasters. Of course, mitigating circumstances can be adduced - for instance, that if we wear ourselves out through overwork, we shall be less effective than we would otherwise have been. Nevertheless, when all considerations of this sort have been taken into account, the conclusion remains: we ought to be preventing as much suffering as we can without sacrificing something else of comparable moral importance. This conclusion is one which we may be reluctant to face. I cannot see, though, why it should be regarded as a criticism of the position for which I have argued, rather than a criticism of our ordinary standards of behavior. Since most people are self-interested to some degree, very few of us are likely to do everything that we ought to do. It would, however, hardly be honest to take this as evidence that it is not the case that we ought to do it.

(Source: raginrayguns)

fnord888:

Right. But given a common framework in which to discuss them, it’s possible to make objective statements about things that clearly aren’t real.

If I’m talking about Tolkien’s canon (explicitly or implicitly), it’s objectively wrong to say that Smaug was killed by Samwise Gamgee, and objectively right to say that Smaug was not killed by Samwise Gamgee. But I think it’s pretty clear that neither Smaug nor Samwise Gamgee are objectively real things.

That’s also true, but the rules of chess are not fictional, as chess is a game that is actually played.

But this is getting away from the original point, that human constructs and objective truths about them are not mutually exclusive.

(Source: eccentric-opinion)

scientiststhesis:

eccentric-opinion:

People don’t choose what morality to believe in any more than they choose to believe in gravity. They believe that a certain ethical system is correct, and must change their beliefs about moral facts in order to change ethical systems. It’s not a matter of doing something like sitting down and thinking “I’m going to be a utilitarian today”, it’s more like thinking “World utility is all that matters morally, regardless of whether I care about it, therefore utilitarianism is correct”. People can believe in a morality that externally imposes things on them - they may not like what they consider “being moral”, but they can’t just choose to not be moral, as they believe in their ethical system.

I don’t have this experience you’re describing. I chose to become an utilitarian. After reasoning for a while, I consciously decided that maximising world utility is the thing that best approximates my moral intuitions, and that’s the best tool to deal with edge cases.

For you, wanting to maximize world utility follows from your intuitions, but I’m not surprised that you haven’t had this experience of “my ethical system is normatively correct” given that you don’t believe in moral realism. For moral realists, whether internalists (like me) or externalists (like standard utilitarians, Kantians, etc), morality is an unchosen belief in the same way that belief in gravity is an unchosen belief. They believe it to be correct, not just a matter of taste.

If a utilitarian says “you should care”, it’s not the short version of “I believe that would maximize utility”, it’s not the short version of anything, except perhaps the nearly synonymous “I believe that morality requires you to do this, because morality requires you to maximize world utility, and I believe this would maximize world utility”.

Umm… yes, yes it is? Should doesn’t have a meaning independent of your moral theory. Saying “you should do X” means “the moral theory I subscribe to says doing X is the right thing to do,” and in the case of utilitarianism, the right things to do are the ones that maximise world utility, and therefore “the moral theory I subscribe to says doing X is the right thing to do” means “I believe doing X will maximise world utility and I believe maximising world utility is the right thing to do.” That’s what moral theories are for.

"Should" has a meaning independent of any particular moral theory, though it’s such a fundamental word that I’m not sure how to describe it in other words. Utilitarians, Kantians, egoists, etc, disagree about what people should do, but there is an ethically neutral meaning of "should" that they use when talking to each other.

If a utilitarian told someone something like “Caring about world utility would maximize world utility”, and they’d respond “I don’t care about maximizing world utility”, the utilitarian would say that the person who doesn’t care about world utility is committing moral error by not caring about maximizing world utility.

And yes, indeed that utilitarian would say so, from within utilitarianism. But most utilitarians are a lot more sophisticated than that, and work on a level much more meta than that. My model utilitarian is ozymandias271 who explicitly says, when there is risk of confusion, that what zie means when zie says “you should do X” is “I believe X would maximise world utility and I believe that is the right thing to do.”

Standard utilitarians not only believe from within utilitarianism that people should do certain things, they also believe that the world is within utilitarianism, and that doing otherwise is wrong. According to standard utilitarianism, people who aren’t utilitarians are mistaken about morality. For utilitarians, it’s more than “The moral theory I subscribe to says maximizing world utility is the right thing to do”, it’s “Maximizing world utility is the right thing to do”. I know I’m repeating myself, but utilitarianism is a normative theory, not merely a descriptive one. This means that for someone who subscribes to a normative theory, the statement “My theory says you should do X” implies “You should do X”, because those who subscribe to a normative theory believe that morality is binding in some way.

And you seem to be arguing by definition here. “[T]hose who believe in one must believe that those who subscribe to other ethical systems (or to no ethical systems at all) are in moral error.” Says who? I believe in the utilitarian ethical system, and I do not believe people who belong to others are in moral error. Beliefs can’t be morally incorrect, only actions can. If a person is a virtue ethicist and they behave exactly like I would, then I’m not going to say that they’re immoral because “they did it for the wrong reasons.”

I think it’s clear by now that we mean different things by “utilitarianism”, and the term as used in LW circles is not the same as the term as used in most philosophical discussions, whether popular or academic. You may be a utilitarian by how you define that word, but you are not a standard utilitarian who believes that utilitarianism is normative and that maximization of world utility is a preference-independent duty.

As for beliefs being morally incorrect, a morally erroneous belief is not an immoral belief, but an incorrect belief about morality.

(Source: raginrayguns)

Rather than talk more about utilitarianism, which I don’t subscribe to, I’ll instead illustrate moral error using an ethical theory I subscribe to: contractarianism.

Suppose you are living in a Hobbesian state of nature, in which you can kill with impunity, but can also be killed. There is no law-enforcing entity that would do anything to you if you were to kill your neighbor, or if your neighbor were to kill you. Perhaps you don’t care about your neighbor’s life, and would have no qualms about killing them if it meant getting their stuff - but your neighbor feels the same way about you. However, you do care about two things: “stuff” (like the things you can plunder from your neighbor) and your own life, and you care about self-preservation much more than you care about stuff. Your neighbor has similar preferences. In such a situation, both you and your neighbor would have reason to be afraid of each other, and not leave your houses in case your stuff would be stolen while you’re gone, and both of you would also have to worry about being murdered. This is a suboptimal situation for both of you - while you’d like your neighbor’s stuff, you don’t like sitting at home with a shotgun in your lap, worrying about your neighbor barging in, and it’s hard for you to be productive in such a state. Again, the same holds for your neighbor. In such a situation, it would be in your and your neighbor’s self-interest to designate a third party that would deter robbery and murder if you or your neighbor were to commit it.

In other words, you and your neighbor would have a reason to restrict yourselves in your dealings with each other in exchange for the other doing the same, i.e. to recognize each other’s rights. If it would be good for you to make such an agreement, you would be violating your neighbor’s rights if you were to rob or kill him - as rights are determined by what both of you would like to agree to. Strictly speaking, no actual agreement is necessary for rights to exist, it only has to be the case that such an agreement would be good according to the agents’ own preferences.

Now, suppose someone says “People don’t have the right to not be murdered”. This person is in error, because rights are determined by the beneficial contract, and the beneficial contract says that people should restrict themselves from murdering in exchange for others doing the same, so there is a right to not be murdered. Because he is wrong about rights, and rights are a topic of morality, this person would be making a moral error. And his error would be objective, because this specific part of the contract is determined by people’s preference to not be murdered, which is a fact about the world.
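The structure of this story can be put in game-theoretic terms. A minimal sketch, with payoff numbers invented purely for illustration (only their ordering matters - each agent values self-preservation far more than stuff):

```python
# Illustrative payoffs for the state-of-nature story above. Strategies:
# "restrain" (respect the other's rights) or "aggress" (rob/kill at will).
PAYOFFS = {
    # (you, neighbor): (your payoff, neighbor's payoff)
    ("restrain", "restrain"): (3, 3),  # both safe and productive
    ("restrain", "aggress"):  (0, 4),  # you risk death; he grabs your stuff
    ("aggress",  "restrain"): (4, 0),
    ("aggress",  "aggress"):  (1, 1),  # state of nature: fear, shotgun in lap
}

# Mutual restraint is better for BOTH parties than mutual aggression, which
# is why the agreement is mutually advantageous - and why a third-party
# enforcer is needed, since each side is still tempted to defect unilaterally.
both_restrain = PAYOFFS[("restrain", "restrain")]
both_aggress = PAYOFFS[("aggress", "aggress")]
assert both_restrain[0] > both_aggress[0] and both_restrain[1] > both_aggress[1]
```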

towardsagentlerworld:

scientiststhesis:

towardsagentlerworld:

Okay, yes, that is definitely consistent with moral non-realism.

As someone who does believe in objective “goodness”, I’d be curious to learn more about what your reasons are for not believing in moral realism, if you’d be okay with elaborating.

I might be able to elaborate a bit.

First, what does it mean for objective goodness to exist? Do rocks feel objectively compelled to do good? Do plants do?

What about non-human intelligences? Aliens? Imagine a species that evolved with winnowing. They think you’re bad for not eating babies. Are they objectively wrong? Are they objectively evil? What does that mean?

(…)

First of all, by “objective”, I didn’t mean “situation-independent”, or “subject-independent”. 

Let me explain.

Human beings (and other living things, for that matter), have evolved to seek certain stimuli that help us survive, and avoid others that might lead to our death. The mechanism that helps us decide whether a stimulus is helpful or harmful is our subjective experience of “pleasure/happiness” and “pain/suffering”.

Definitionally, pleasure is good and pain is bad. I say definitionally, because “pleasure” is the subjective experience of “this stimulus is good, seek it out”, while pain is the subjective experience of “this stimulus is bad, avoid it”.

(One could argue that certain forms of pain should not be avoided, but that’s really a question of whether that specific pain is worth it — in other words, whether it provides pleasure returns that outweigh the amount of pain incurred.)

So, my claim is: objectively (and definitionally), the experience of pleasure is good, and objectively (and definitionally), the experience of pain is bad.

Of course, what specific actions lead to pleasure and pain are context-dependent, and subject-dependent. Eating grass, presumably, provides pleasure to a cow, while it would not provide pleasure to (most) humans. Having strong friendships provides pleasure to (most) humans since humans are a social species; for a solitary animal like a tiger, encountering another tiger may cause territorial stress and anxiety as opposed to happiness.

(As for the Babyeaters, a strong cultural value of theirs caused suffering to other members of their species, so I think the Superhappies’ modification of them caused an objectively better world-state: the Babyeaters remained happy with a slightly modified cultural norm, and preteen Babyeaters ceased to experience suffering and death.)

If all beings who were capable of experiencing pleasure and pain had their pleasure-experiencing maximized and their pain-experiencing minimized, this would be an objectively better universe than the one we currently live in. Yes, the specific methods involved for maximizing pleasure and minimizing pain would vary from being to being, but that doesn’t change the end goal.

There are many human ideologies that believe that suffering or pain is good and desirable (see: the just-world fallacy, and that quote about baseball bats.) My claim that “goodness” is objective is the claim that regardless of what humans believe, suffering is bad, and happiness is good. Any ideology that claims otherwise is mistaken.

(Source: raginrayguns)

scientiststhesis:

What does it mean to say that morality externally imposes anything on you? You’re the one who picks your morality. If your morality says a thing, it’s because you wanted it to say that thing!

Morality cannot be external to you. That doesn’t even make any sense. If you are a utilitarian, that means you personally actually care about and actually want to help all beings capable of experiencing utility. After all, if you chose to become a utilitarian, well, that’s what you were signing up for, wasn’t it?

People don’t choose what morality to believe in any more than they choose to believe in gravity. They believe that a certain ethical system is correct, and they must change their beliefs about moral facts in order to change ethical systems. It’s not a matter of sitting down and thinking “I’m going to be a utilitarian today”; it’s more like thinking “World utility is all that matters morally, regardless of whether I care about it, therefore utilitarianism is correct”. People can believe in a morality that externally imposes things on them: they may not like what they consider “being moral”, but they can’t simply choose not to be moral, because they believe their ethical system is true.

Again that word, “should.” If a utilitarian says “you should care,” that’s the short version of “I believe that would maximise utility.” Let’s replace label with content in that sentence:

“But if you don’t care about certain beings’ utilities, a utilitarian would tell you that caring about them would maximise utility, and that it is optimal in maximising utility to include them in your moral decisionmaking, regardless of whether you actually care about them.”

If a utilitarian says “you should care”, it’s not the short version of “I believe that would maximize utility”; it’s not the short version of anything, except perhaps the nearly synonymous “I believe that morality requires you to do this, because morality requires you to maximize world utility, and I believe this would do so”. If a utilitarian told someone “Caring about world utility would maximize world utility”, and they responded “I don’t care about maximizing world utility”, the utilitarian would say that person is committing a moral error by not caring. This is because utilitarianism is a normative ethical system, and as with all normative ethical systems, those who believe in it must hold that those who subscribe to other ethical systems (or to none at all) are in moral error.

(Source: raginrayguns)

fnord888:

eccentric-opinion:

To the people who say that morality isn’t objective because humans made it up, I ask this: are the rules of chess objective? If I said that bishops move the same as rooks, I’d be objectively wrong, even though the rules of chess are entirely a human construct.

Objectivity and being a human construct are not mutually exclusive.

If I say that white starts with more pawns than black, I’m objectively wrong if I’m talking about standard chess. If I’m talking about Dunsany’s chess, I’m objectively right. If I’m talking about xiangqi, it’s a category error.

Right. My point is that even though all of those games are human constructs, there are many objective statements that can be made about them.
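The chess point can even be made mechanically checkable: once a rule set is fixed, “bishops move the same as rooks” is a plain factual claim about that rule set. A minimal sketch (the board coordinates and helper name are mine, not from the discussion above):

```python
# Relative to a fixed rule set, "bishops move the same as rooks" has an
# objective truth value. Compare the two move sets from d4 = (3, 3) on
# an otherwise empty 8x8 board.

def reachable_from_d4(directions):
    """Squares reachable from (3, 3) by sliding along the given vectors."""
    squares = set()
    for dx, dy in directions:
        x, y = 3, 3
        while True:
            x, y = x + dx, y + dy
            if not (0 <= x < 8 and 0 <= y < 8):
                break
            squares.add((x, y))
    return squares

bishop = reachable_from_d4([(1, 1), (1, -1), (-1, 1), (-1, -1)])  # diagonals
rook = reachable_from_d4([(1, 0), (-1, 0), (0, 1), (0, -1)])      # ranks and files

# The rules are a human construct, but this comparison has an objective
# answer: from the same square, the two move sets don't even overlap.
print(bishop == rook)           # False
print(bishop & rook == set())   # True
```

The construct fixes the rules; the rules then settle the question, independently of anyone’s opinion about bishops.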