If you define “morality” to refer to something that’s not objective, the word “morality” will refer to something not objective. This is not surprising. But I don’t see why you’d want to do that. The common usage of the word “right” is nebulous and it’s not productive to try to figure out what it really means, because different people mean different things by it, and even the same person may use it differently in different contexts. However, the usage of (political) “rights” as something like “the product of mutually agreed-upon self-restriction” falls under the broad umbrella of how the word is used in general.
Indeed. But just the fact that different people mean different things by it should start suggesting that the thing is, well, subject-dependent.
It could mean that. Or it could mean that different people refer to different things by the word “morality”. (For example, if I referred to carnivores that bark as “cats” and carnivores that meow as “dogs”, I wouldn’t necessarily have any different beliefs about cats or dogs than you would; I would only be using different labels for the same concepts.) Or it could mean that most people use the word very nebulously and don’t have anything like a concrete ethical system in mind when they talk about morality.
I could talk about what I refer to as “morality” without using that word, though it would be cumbersome to do so.
Also, “figure out what it really means”? What does that mean? Nothing really means anything; words are just levers in brains that invoke certain concepts.
Imagine that people talk about “zarbles”. The language would have expressions like “as striped as a zarble”, people would describe round objects as zarblish, black-and-white objects as zarblish, etc. But few would have anything concrete in mind when they’d say “zarble”. Then someone would come along and ask, “Is a zarble a zebra-patterned marble?”, and the fog would clear from people’s heads: they’d realize that when they talk about things being zarblish, they’re saying they’re similar to a zebra-patterned marble.
That’s what “figure out what it really means” means. Unfortunately, a similar analysis of what people mean by “is moral” would not produce something concrete.
First, even if you don’t want your neighbor’s stuff, it would still be to your advantage to be a credible threat to your neighbor, so that he has a reason to make this agreement instead of plotting to murder you and steal your stuff.

Second, the benefits of the agreement in my example are not only that you and your neighbor wouldn’t steal from or murder each other, but also that the two of you would be in a state where you could cooperate (rather than live in fear of each other), and that would be to your advantage even if you don’t want to steal your neighbor’s stuff.

Third, this particular example is about a specific case, and the reasoning can be applied more generally. For example, suppose you and your neighbors drive cars, and cars produce pollution. In a pre-contract world, all of you would drive without being constrained by the amount of pollution you impose on others, and this would produce a certain amount of pollution. However, you and your neighbors could get together and agree on a certain amount of compensation for pollution produced, paid by a driver to the rest of the neighbors. In return for you paying others for the pollution you impose on them, others would pay you for the pollution they impose on you - and everyone would pollute less in general, now that polluting was no longer free. Making such an agreement would be to your and your neighbors’ advantage.
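To make the arithmetic concrete, here is a minimal sketch with made-up numbers (the benefits and pollution costs below are illustrative assumptions, not figures from anything above): without the agreement, heavy driving is everyone’s best move and each neighbor nets 4; under the agreement, each driver pays for the pollution they cause, light driving becomes the best move, and each neighbor nets 6.

```python
# A toy model of the pollution agreement with three neighbors. Each driver
# chooses HIGH or LOW driving; HIGH gives a bigger private benefit but imposes
# a bigger pollution cost on each neighbor. All numbers are made up.

PRIVATE_BENEFIT   = {"high": 10, "low": 8}
COST_PER_NEIGHBOR = {"high": 3,  "low": 1}  # pollution imposed on EACH neighbor
N = 3                                       # number of drivers

def payoff(my_choice, others, compensate):
    """One driver's net payoff, with or without the compensation agreement."""
    suffered = sum(COST_PER_NEIGHBOR[c] for c in others)  # pollution received
    result = PRIVATE_BENEFIT[my_choice] - suffered
    if compensate:
        result -= COST_PER_NEIGHBOR[my_choice] * (N - 1)  # pay for pollution caused
        result += suffered                                # get paid for pollution suffered
    return result

# Pre-contract: polluting is free, so HIGH dominates (10 > 8 whatever others do)
# and everyone drives HIGH.
print(payoff("high", ["high", "high"], compensate=False))  # -> 4

# Under the agreement each driver internalizes the cost they impose, so LOW
# dominates (8 - 2 > 10 - 6), and everyone ends up strictly better off.
print(payoff("low", ["low", "low"], compensate=True))      # -> 6
```

The compensation clause is what flips the dominant strategy: once you pay for the pollution you cause, the externality shows up in your own payoff.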
Okay? I still don’t see the point you’re trying to make with this; could you enlighten me?
My point was that even if you don’t have the specific preferences of the person in my example, you have a reason to make the agreement I described.
Okay, so you’re arguing over the definition of “utilitarianism.” Alright, then. I do believe other people ought to want to maximise world expected utility. No one is obligated to do anything; obligating people to do stuff doesn’t maximise world utility. I wish everyone wanted to do so, and if they did, that would probably maximise world utility - which is to say, one level down, that if everyone were moral then everyone would be moral, according to me.
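Spelled out, “maximise world expected utility” is the standard decision rule: pick the action whose probability-weighted sum of outcome utilities is largest. A minimal sketch, where the actions, probabilities, and utilities are hypothetical placeholders:

```python
# Expected-utility maximization: choose the action with the largest
# probability-weighted sum of outcome utilities. All numbers are hypothetical.

actions = {
    # action: list of (probability, world_utility) pairs over its possible outcomes
    "donate":     [(0.9, 5.0), (0.1, -1.0)],
    "do_nothing": [(1.0, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> donate, since 0.9*5.0 + 0.1*(-1.0) = 4.4 > 0.0
```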
Question: Do you think they ought to maximize world utility, or do you merely want them to? Do you think the two are separate?
To give what will hopefully work as an example, suppose an ethical egoist likes cheese, and wants someone to buy it for them. Nevertheless, they do not think that the other person ought to buy them cheese; they think the other person should buy cheese and use it as they see fit.
To put it more generally, “I want you to X” and “You ought to X” seem to be separate. The latter implies the former, but they aren’t synonymous.
As for obligation, saying that someone is obligated to do something means that they ought to do something regardless of whether they want to. Do you think people ought to maximize world utility even if they don’t want to?
So, what name do you give to a non-moral-realist who believes the correct moral course of action is maximising world utility, believes other people should maximise world utility, and doesn’t believe that this belief is about the territory?
Seriously, though, a non-moral-realist who believes in a “correct moral course of action” is self-contradictory. If they’re a moral non-realist, they don’t believe that there’s a correct moral course of action (or that other people should maximize world utility), and if they believe that there’s a correct moral course of action, they’re not a non-realist.
With the contradictory parts removed:

- A moral non-realist who wants to maximize world utility: an altruistic moral non-realist (no special term).
- A person who believes that the correct moral course of action is maximizing world utility and that other people should maximize world utility: a utilitarian.
The thing about all this is that you’re not using the word “true” in the “usual way.” When you say “X is true,” one would naturally take X to be a proposition about objective reality that can be tested, and about which evidence can be gathered.
Moral propositions do not have that property. If a utilitarian (in the sense you defined) and a contractarian disagree about whether an ethical proposition is “true,” there is no experiment, even in principle, that could determine which one of them was right! There are just the opinions/preferences of the agents; there is no evidence, there is no testing and falsifying, there is just what the agents have reasoned out as the most convincing arguments to them. But a thing whose “truth-condition” depends on the reasonableness of an argument to an agent could not possibly be regarded as an objective fact about the world!
It’s not a matter of defining words like your game with cats and electrons. If moral realism says that morality is a thing that exists in the territory, which can be true or false independently of an agent reasoning about it, then it has to show its work and point to the feature of the world that’s objectively a morality. A person who says “this is a cat” can point to one and describe exactly what propositions are true about cats that anyone else can test for and check. A person who says “this is morality” cannot do so.
On the contrary, people (ethical theorists) point to things in the territory and say “this is morality” all the time, as I did in my example. They may point to different things in the territory, just like people could point to different things and say “this is a cat”. People can talk about whether a system of morality is internally consistent, and appeal to moral intuitions (i.e. “Your theory implies THAT; do you really believe THAT?”, and then the philosopher has to clarify, bite the bullet, or give up the theory). It is possible to have several internally consistent and mutually exclusive ethical theories, and to have people who agree with each theory bite the necessary bullets about the counterintuitive parts. Even so, the discussion needn’t end at that point: the disagreeing philosophers can prod at the foundations of each theory, such as the nature of moral motivation, moral epistemology, etc.
To summarize my own position on this, which would be endorsed by some moral realists but not by others:
There are some things that follow from what we already want/like*, but those things are often not obvious, so people can be in error about what they are. Lost Purposes and Cached Thoughts abound, vocabulary confusions make reasoning more difficult, Blue-Green thinking inhibits thinking about certain ideas labeled as Evil Enemy Thoughts, etc. For these and other reasons, people may not always successfully follow the chain of logic from “I want X” through “X requires Y” and “Y requires Z” to either “I want Z” or “If X requires Z, I’ll do without”. They go ahead and say “X! But not Z! Saying that X requires Y or that Y requires Z is heartless and cruel!” People accumulate errors in their thinking, so a lot of what they endorse doesn’t follow from what they fundamentally want (i.e. the things they’d want if they were consistent).
Some of these errors are in the area commonly labeled as “morality”, such as whether people have rights (and if so, what rights and why), how one ought to act in certain situations, etc. There’s no external stone tablet that dictates how people should act, so the only source of shoulds is what follows from what they already want, i.e. if you want X and X requires Y, then you should want Y.
If it helps, I can rephrase my position in two different ways. If “morality” refers to things under the nebulous umbrella that covers certain acts/attitudes/etc., my position is “There are objective** (but situational and agent-relative***) truths about how one ought to act in the areas commonly labeled ‘moral questions/situations/etc.’”. If “morality” refers to what one ought to do, my position is “There are objective** (but situational and agent-relative***) things that one ought to do, which follow from what is already motivating, or would be motivating in a state of internal consistency”.
Finally, because of the psychological unity of mankind (which ethical theorists often refer to as “human nature”), there are commonalities in what people’s internally consistent preferences are. This means that to these hypothetical imperatives of the form “If X, then Y” we can add certain “X” when we’re talking about humans, and can therefore say “Y”.
I don’t believe in stone tablets, but I’m still a moral realist.
* The things that already motivate us, or the things that would motivate us if we were internally consistent.
** In the sense that I’m using it, “objective” means something like “a fact about the world and not a matter of opinion, and which may be but is not necessarily something independent of minds”.
*** They are situational because your preferences may be different in different situations. In a simplified example, “I should get in my car” is true if and only if “I want to drive somewhere” or “I want to get something out of my car”. Therefore, “I should get in my car” is situational (because it isn’t true if you don’t want anything from your car), but objective (because when you want something from your car, it follows that you should get into it, and when you don’t want anything from your car, you shouldn’t, and that’s a fact about what follows from your preferences, which exist in the world). They are agent-relative for a similar reason: if you want to drive somewhere, you should get in the car, but if being in cars is intensely unpleasant for me, I shouldn’t get into a car even though I may want to drive somewhere.
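A toy rendering of this footnote, under the simplifying assumption that the relevant wants can be treated as booleans (real preferences trade off by degree, which a boolean can’t capture):

```python
# "I should get in my car" as a hypothetical imperative: the "should" is
# conditional on what the agent wants (situational) and on facts about that
# particular agent (agent-relative). Crudely boolean on purpose.

def should_get_in_car(wants_to_drive, wants_item_from_car, finds_cars_unbearable):
    if finds_cars_unbearable:
        return False  # the unpleasantness outweighs the want, for THIS agent
    return wants_to_drive or wants_item_from_car

print(should_get_in_car(True, False, False))   # True: you want to drive somewhere
print(should_get_in_car(False, False, False))  # False: nothing you want involves the car
print(should_get_in_car(True, False, True))    # False: I want to drive, but being in
                                               # cars is intensely unpleasant for me
```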