Wednesday, February 11, 2015

Another reply to Craig's metaethical argument (v 1.1.1)

1. Introduction.

In this post, I will assess Craig's metaethical argument[a] by means of a hypothetical debate between Bob – a defender of Craig's metaethical argument[a] and of Craig's version of Divine Command Theory (henceforth, DCT) – and Alice – a non-theist who believes that the second premise of Craig's argument is true, at least as she understands the premise; more on that in the debate. (The first part of this post is almost the same as this previous post, with minor improvements.)

Alice also believes that the argument from moral evil – at least, in its evidential version, if there is a significant difference between the logical and evidential versions – is a decisive argument against the existence of an omnipotent, omniscient, morally perfect (henceforth, omnimax) being – though she holds that there are other, independent and also decisive arguments.

Still, the debate below is not about the argument from moral evil, but mostly about Craig's metaethical argument and his DCT.

However, Bob defends Craig's premises not only by relying on Craig's own defenses, but also by means of other arguments I added in order to assess more potential theistic options.

As you may imagine, I hold that Alice wins the debate, hands down. But I've been trying to find good arguments to defend Bob's position as well – to no avail, in my assessment.

Still, I reckon some readers might wonder whether I'm being fair to Bob – or rather, to Craig -, or think that even if I tried but failed to find any good arguments in support of his side, there are better arguments, and I just didn't see them.

If you are among those readers – or if you would just like to play Bob's advocate – I invite you to try to find some of those good arguments, and improve Bob's case.

On the other hand, if you think that Bob actually wins the debate based on the arguments I give, I guess we will just disagree about that, but I'd like to ask you which one(s) of Bob's arguments you found persuasive.

Before I go on, I would like to clarify that:

1. I don't claim that Craig holds all of the views defended by Bob. In fact, he doesn't. Bob is a fictional character. When I attribute a stance or claim to Craig, I will make that clear, providing relevant links and/or references if needed (e. g., I won't provide a reference or link if I say Craig claims that God exists because it's obvious that he does, but I will if I say he has some specific belief about the meaning of 'objective' in this context).

2. I don't claim that the views I express here are original. In fact, I took most of the ideas from other sources[A], and while I came up with several ideas, I have no good reason to think I'm the first one to have done so. In fact, that's very probably not the case when it comes to most of the ideas in this post.

3. It might be argued that only Craig's theory about moral duties/obligations – namely, that they are constituted by God's commands – should be called Craig's “Divine Command Theory”, whereas Craig's theory about goodness should go by another name.[-1]

Regardless of whether that's the most accurate terminology, nothing substantive hinges on that, and as a convention and for the purpose of simplifying terminology, I will use the expressions “Craig's DCT”, “Craig's Divine Command Theory”, or similar ones, to talk about his theory about moral ontology, including the theory about moral duties, goodness, values, and so on.

2. The hypothetical debate.

Alice: No omnimax being exists. There are several ways to see that, but – for example – an omnimax being surely would not create – at the very least – a universe with so much suffering – let alone undeserved suffering – or with moral agents with an imperfect moral sense. Any moral agents she created would have flawless moral knowledge.

Moreover, she would prevent at least many of the instances of moral evil that actually occur – well, actually, she wouldn't need to intervene, because she would not create beings in any way inclined to do evil in the first place; but in any event, if such beings existed despite not being created by her, she just wouldn't allow the evil.

Bob: You're mistaken. God exists. And he is the greatest conceivable being, so in particular, he is an omnimax being. In fact, the very existence of moral evil implies that God exists.

This is a very important point.

Alice: Why would the existence of moral evil imply that God exists?

Bob: For a number of reasons, as explained by William Lane Craig. He argues as follows:

P1: If God did not exist, then objective moral values and duties would not exist. [0]

P2: Objective moral values and duties do exist.

C: God exists.
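
The argument is logically valid – it's a simple modus tollens. As a minimal sketch of its form (using G and M as shorthands, introduced here only for illustration, for 'God exists' and 'objective moral values and duties do exist'):

\[
\begin{aligned}
\text{P1: } & \neg G \rightarrow \neg M\\
\text{P2: } & M\\
\text{C: } & \therefore G \quad \text{(by modus tollens)}
\end{aligned}
\]

So if both premises are true, the conclusion follows.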

Alice: I'm familiar with Craig's metaethical argument and his defenses of it, and I agree that P2 is true, but I see no good reason to think P1 is true. Why do you believe P1?

Bob: Only theism can provide an ontological grounding of moral goodness and/or of moral obligations – as Craig's DCT does – in the sense of informative identification.

For example, moral goodness is identified with some qualities of God – or resemblance to such qualities, if one is talking about moral goodness in creatures -, whereas moral obligations are identified with God's commands. And moral values like justice, forbearance, love, etc., are good because they're found in God.[2]

This is akin to the way in which, say, water is identified with H2O, heat with molecular motion, or – to use Craig's own example – the way in which a meter – in the past – was identified with the distance between two lines on a bar in Paris. [1]

Alice: Different people in different contexts mean different things by 'God'. So, let's be clear. What do you mean?

Bob: By 'God' I mean what Craig means, i. e., the greatest conceivable being. In particular, given that moral goodness is a great-making property, God is morally perfect [2][3] and that entails he is maximally morally good.

Alice: Okay, so I will grant the water and heat identifications for the sake of the argument, but let me point out that not having an informative identification account is not a problem (see, for example, this post).

That aside, the account of moral goodness given by DCT, as you describe it, is not only not informative, but it seems to be circular.

In fact, the account identifies moral goodness with resemblance to God in some respects – or with some qualities of God, or with resemblance to some qualities of God, if you prefer; this is not crucial to my objection -, defines the word 'God' as 'the greatest conceivable being', and uses 'great' in a way such that 'God is morally good' is a conceptual truth[2], and moreover, moral goodness is a great-making property[3].

As Craig says, asking why God is good is like asking why bachelors are unmarried.[2] But then, identifying moral goodness with resemblance to God seems akin to identifying being unmarried with resemblance to a being who – by definition – is a bachelor and has such-and-such properties.

Bob: 'God' is defined in terms of greatness, not in terms of moral goodness, and it seems to me that a proper informative account of bachelorhood would identify being a bachelor with being unmarried and having one or more other properties.

Alice: But that goes in the other direction. DCT seems similar not to an identification of bachelorhood with unmarriedness plus one or more other properties, but to an identification of unmarriedness in terms of bachelorhood, or more precisely, in terms of resemblance to a certain bachelor. That seems viciously circular.

Bob: I disagree. I see no circularity problem. I don't know what you're getting at.

Alice: Okay, let me raise the circularity issue from a different perspective. I would ask you what greatness is – that is, I'm asking for the ontological foundation of greatness, in the sense of informative identification.

If you reply that greatness is resemblance to God in some respects – or to some qualities of God, etc. – and then you define 'God' as 'the greatest conceivable being', then there is a circularity problem. Moreover, it seems that to be great is to be morally good and also to have such-and-such properties (for some 'such-and-such'). But according to DCT, to be morally good is to resemble the maximally great being in some respects – or to resemble some qualities of the greatest conceivable being, or something along those lines.

Do you see the problem?

To be great is to be morally good and to also have such-and-such properties, and to be morally good is to resemble the greatest conceivable being. That seems circular.

Bob: You misunderstand the account. There is no circularity. For example, one might stipulate – as was done in the past – that to be a meter long is to be as long as the distance between two lines on a certain standard bar.

If someone were to then ask what it is to be a meter long, it would be informative to tell him that to be a meter long is to be as long as the distance between two lines on the bar in question.

That is informative. The bar itself – or more precisely, the part of the bar between the two lines - has the property of being a meter long, and indeed it is the paradigm of a meter.

Similarly, God has the property of being morally good – indeed, to a maximal degree -, but also God is the paradigm of moral goodness, and that is not problematic. The account of moral goodness provided by DCT is informative.

Alice: Let's say 'oldmeter' is defined as a distance equal to the distance between the lines on the bar in question – that bar, over there, in Paris. [4]

If someone asked what an oldmeter is, it would be proper and informative to tell her that to be an oldmeter long is to be as long as the distance between the lines on that bar, over there, in Paris[4], and/or that an oldmeter is a distance equal to the distance between the lines on that bar, over there, in Paris[4], etc.

On the other hand, it would be improper, circular, and not informative to tell her that to be an oldmeter long is to be as long as the distance between two lines that are at a distance of one oldmeter, or that to be an oldmeter long is to be as long as the distance between the lines on a bar that has two lines at a distance of one oldmeter. That's what DCT looks like, as you described it.

Do you see the problem?

Bob: I see, and now I understand in which way you misunderstand the account. There is no circularity in the account of moral goodness provided by DCT. The account does not identify moral goodness with resemblance to a being that has such-and-such properties, or more precisely the property of greatness to a maximal or maximum degree. Rather, the account identifies moral goodness with resemblance to God, the maximally great being that actually exists, with respect to some relevant qualities.

In the case of the meter – or the oldmeter in your example – the way of identifying the paradigm was to say it was the distance between two lines on that bar, over there in Paris. [4]

In the case of moral goodness – or even greatness -, the way of identifying the paradigm is to say it's the greatest conceivable being, that being who actually exists.

What's the problem?

Alice: In that case, the definition of 'God' as 'the greatest conceivable being' only plays a role as a means of identifying the object that is the paradigm, and it's not essential to the account. That avoids circularity but still leaves us in the dark as to what moral goodness is.

Bob: I don't see why.

Alice: Let us assume now and for the sake of the argument that there is an omnipotent, omniscient, maximally loving and kind being. Let's call that being 'Jane'. To be clear, I just picked an – assumed – actual being by listing some of her properties - enough properties to identify her uniquely -, and then used a proper name – Jane – to name that being.

So, Jane is the only omnipotent, omniscient, maximally loving and kind being that exists.

Let us now pick the meter bar in Paris – that specific bar, which is still over there, in Paris [4] – and let's define 'oldmeter' to be a distance equal to the distance between the two lines on that bar. That is a stipulative definition. Let us call the bar 'Ted'.

I have three questions:

1. Why is Jane morally good?

2. Why is Jane maximally great?

3. Why is the distance between the two lines on Ted one oldmeter?

Bob:

1. Jane is God, 'God' is defined as the greatest conceivable being, and it's a conceptual truth that maximal greatness entails moral goodness. So, Jane is morally good.

2. Jane is God, and 'God' is defined as the greatest conceivable being. So, Jane is maximally great.

3. Ted is the bar over there, in Paris [4], because that was stipulated – i. e., 'Ted' is the name given to the bar – and 'oldmeter' is defined as the distance between the two lines on the bar over there, in Paris [4]. So, the distance between the two lines on Ted is one oldmeter.

Alice: Do you see why your reply to the third question is very different from your reply to the first and second questions?

Bob: No. Why?

Alice: Your reply to the third question actually informs me why the distance is one oldmeter, by explaining to me how 'oldmeter' was defined, and that 'Ted' is the name given to the bar, etc.

On the other hand, your first and second replies leave me in the dark as to why Jane is morally good, or great, let alone maximally great.

In fact, given your definition of 'God' as 'the greatest conceivable being', the first part of your replies – namely, “Jane is God” – actually means “Jane is the greatest conceivable being”.

But I'm asking why Jane is morally good, and why Jane is maximally great, so insisting that Jane is the greatest conceivable being leaves me in the dark as to why Jane is maximally great, or morally good. Your reply is uninformative. You just repeat that she is maximally great, but you do not explain why she is maximally great.

From a slightly different perspective, the term 'oldmeter' is stipulatively defined as a distance equal to the distance between the two lines on the bar over there, in Paris [4], so if somebody asks why the distance between the two lines on the bar over there, in Paris [4], is one oldmeter long, it is a proper reply to explain that 'oldmeter' is defined as a distance equal to the distance between the two lines on the bar over there, in Paris [4] – and of course, the distance in question is equal to itself. [b]

On the other hand, the terms 'morally good' and 'great' are not defined in any way in DCT – they are ordinary terms[c], and left undefined – so it would be improper to answer the question of why the only omnipotent, omniscient, maximally kind and loving being that exists is morally good – or why she is great – by bringing up the definition of 'God' as 'the greatest conceivable being' and saying that the being in question is God. That would not answer why Jane is morally good, or maximally great.

In his reply to Harris[2], Craig says that asking why God is good is like asking why bachelors are unmarried. Yes, granted, as long as one keeps that definition of 'God' as 'the greatest conceivable being', it would be like asking why bachelors are unmarried – assuming that it's a conceptual truth that maximal greatness entails moral goodness, but I'm granting that. On the other hand, asking why Jane is good, or why the only omnipotent entity that exists – that specific entity, assuming she exists – is morally good, is not at all like asking why bachelors are unmarried.

If DCT is to avoid vicious circularity, it can't rely on the 'greatest conceivable being' definition of the word 'God' to answer questions like the ones I asked (e. g., 'Why is Jane morally good?', 'Why is the only omnipotent being maximally great?', etc.).

Bob: I don't think that Craig made any mistakes in the context of the debate with Harris[2].

Of course, the ball has to stop somewhere, and for Craig – and for me – it stops with God.

Alice: But again, that leaves us in the dark as to why Jane is morally good. It's uninformative.

Bob: It is somewhat informative. But how much information we get is an epistemic matter. Craig's metaethical argument is an ontological metaethical argument. The account does not need to be as informative as, say, “water is H2O”.

Alice: You're the one who brought up informative identification!

Anyway, I have another question:

4. What makes it the case that Jane is maximally great? (or if you prefer, what makes the statement “Jane is maximally great” true?)

This is a question about truth-makers. What makes the statement 'Jane is maximally great' true?

Remember, I stipulated that Jane is the only omnipotent, omniscient, maximally loving and kind being that exists.

Bob: Jane is maximally loving and kind, and those are all great-making properties, which she has to a maximal degree. Jane also has power and knowledge to a maximal degree – that's all great-making.

Moreover, Jane is also maximally morally good – moral goodness is another great-making property -, and generally, Jane has all of the great-making properties to a maximal degree.

That's what makes it the case that Jane is maximally great: the fact that she has all of the great-making properties to a maximal degree.

Alice: Alright, so here's another question:

5. What makes it the case that Jane is morally good? (or if you prefer, what makes the statement “Jane is morally good” true?)

Remember, moral goodness is a great-making property, so it can't be that greatness is a morally good-making property. That would be viciously circular. So, what makes it the case that the only omnipotent, omniscient, maximally kind and loving being – i. e., Jane -, is also morally good?

Bob: Jane is the paradigm of moral goodness. To be morally good is to resemble Jane in some respects.

Alice: But you seem to be suggesting that what makes Jane morally good is the fact that Jane resembles Jane in some respects.

Bob: That might be a way to see it, though I'd rather say that as the paradigm of moral goodness, she is necessarily morally good, indeed maximally so and morally perfect.

Alice: But either way, what makes her the paradigm of moral goodness? What makes the statement 'Jane is the paradigm of moral goodness' true?

Unlike the meter and the meter bar, 'moral goodness' is not defined in terms of Jane, so it's not true by definition of the words. You can't properly make that definitional move.

Bob: It is a necessary brute fact that Jane is the paradigm of moral goodness. The ball has to stop somewhere. Jane is God. The ball stops with God.

Alice: But what makes the statement true? In virtue of what is Jane morally good?

Bob: I think you're missing the point, conflating truth-making with informative identification. Informative identification is not the same as truth-making. DCT offers an account in terms of informative identification.

Alice: Alright, but in that case, I'm going to point out that the point remains that DCT is different from the 'oldmeter' identification in that we get a proper explanation as to why the distance between the two lines on the bar[4] is one oldmeter, whereas DCT leaves us in the dark as to why Jane – the only omnipotent, omniscient, maximally loving and kind entity that exists – is the paradigm of moral goodness. It provides no explanation whatever as to why she is the paradigm of moral goodness, or what makes the statement “Jane is the paradigm of moral goodness” true.

Bob: Again, whether the account provides us with a good amount of information – like “Water is H2O” - is not the point. You're confusing ontology with epistemology.

Alice: I'm not confusing ontology with epistemology. I was just going with your explanation of what an ontological foundation is, and your examples of water, heat, and a meter. You're the one who brought up the informative identification stuff.

Bob: The point remains that theism provides an ontological grounding in the sense that there is some being that exists in the mind-independent world and that serves as a paradigm of moral goodness.

Alice: But why would there have to be a paradigm? Is there a paradigm of cruelty as well?

Bob: Without a paradigm of moral goodness, there would be nothing in the world that would make moral statements like 'Agent A is morally good' or 'Agent B has a moral obligation to do X' true, or at least objectively true.

Alice: Okay, so this is about truth-makers after all. But then again, what makes “Jane is morally good” or “Jane is the paradigm of moral goodness” true? And again, what about a paradigm of cruelty? Is there one of those as well? Does it exist necessarily too?

Bob: You're confused. The point is that without a paradigm, there would be no ontological foundation of objective moral values or duties.

Alice: Do you mean that nothing would make statements like “the Holocaust was immoral” true, or that the property 'immorality' would not be the same as a property we may describe in non-moral terms? Or something else? And again, what about cruelty?

Bob: I'm saying there is no ontological foundation. I already explained it.

Alice: But then, you failed to adequately respond to my objections above, or to explain it in a way that is clear enough for a reasonable reader to grasp. Or address the “cruelty” parallel.

Bob: I disagree. And Craig's arguments show why without God, there would be no ontological foundation of moral goodness, moral duties, moral values, and so on.

Alice: I don't think they do – not even close.

But let me raise the objection from cruelty – which you keep failing to address - in a somewhat different manner:

Craig holds that if God did not exist, then rape would still be cruel.[5] So, the ontological foundation of cruelty is not God, on his theory.

So, let's introduce two definitions:

'C-God' := an essentially maximally cruel, omnipotent being.

'c-god' := an essentially maximally cruel being.

Now, let's make a parallel argument:

P1.2: If C-God did not exist, then objective cruelty would not exist.

P2.2: Objective cruelty does exist.

C.2: C-God does exist.

Bob: But leaving aside other problems, why would the foundation of objective cruelty be essentially omnipotent?

Alice: No particular reason, but the same applies to the ontological foundation of moral goodness, if an agent were the ontological foundation of moral goodness. Why would she have to be omnipotent?

You might try an argument from contingency, but that argument fails.

Regardless, one may remove the condition of essential omnipotence from the cruelty argument, and still argue on the basis of objective cruelty for the existence of a c-god.

P1.3: If no c-god existed, then objective cruelty would not exist.

P2.3: Objective cruelty does exist.

C.3: At least one c-god does exist.

And so, there exists at least one essentially maximally cruel being – a c-god.

But it should be obvious that something is very wrong with that argument, and that – just in case – biting the bullet and suggesting that, say, perhaps Lucifer is maximally cruel wouldn't work. But I'm willing to show why biting the bullet is a terrible idea, if you like.

Bob: But why would the ontological foundation of objective cruelty have to be essentially maximally cruel?

Alice: One may mirror the question: If an agent were the ontological foundation of moral values and duties, why would she have to be essentially maximally good, or essentially maximally great, etc.?

Given that the metaethical argument concludes that an entity with such essential properties exists, it's proper to object to it by presenting a mirror argument that also posits an essentially maximally cruel agent. It would be up to the defender of the metaethical argument to show that there is a relevant difference here.

So, as long as you don't offer a good reason for the distinction, I will keep the essentialness condition in the argument from objective cruelty.

However, even if you managed to offer a good reason for the distinction and for removing the essentialness condition from the argument from objective cruelty, one may still make an argument that goes from the existence of objective cruelty to the existence of a maximally cruel being – even if not essentially so -, and that objection is still a problem for Craig's metaethical argument.

So, why should I remove the essentialness condition?

Bob: Let's leave essentialness aside; it's not the point. Why do you think that cruelty is objective, in the relevant sense of 'objective'?

Alice: Because it meets Craig's requirements, going by his own explanation of objectivity. In fact, one may mirror some of Craig's arguments on the matter of objective morality.[6] For example, the Holocaust was cruel. And it would have been cruel even if the Nazis had won the war and convinced everyone that it wasn't cruel.

Moreover, whether an action is cruel, or whether a person is a cruel person, are matters of fact, not matters of taste, or opinion.

So, cruelty is objective in the relevant sense of 'objective' – i. e., the sense in which the word 'objective' is used in Craig's metaethical argument.

Bob: While you're considering some of Craig's points in that context, Craig also identifies objectivity with mind-independence.[7] But if a person is a cruel person, that's a character trait – it seems that's a property of the person's mind – and as such, it's mind-dependent. So, on the mind-independence/mind-dependence distinction, it seems to me cruelty is not objective.

Alice: Craig explains what he means by 'objective' in the context of his defense of the metaethical argument[6], and he did not specify mind-independence.

Now, it's true that he believes also that the word 'objective' means 'mind-independent'. [6] But his belief is not a required condition in the conception of objectivity that he explains when defending his metaethical argument. [6]

Furthermore, it seems Craig's belief about the meaning of 'objective' is mistaken – remember, he makes it clear in the context of his metaethical argument that he's trying to capture [one of] the ordinary, colloquial meaning[s] of the word 'objective', rather than giving a stipulative technical definition. And the ordinary meaning of 'objective' in question is not 'mind-independent'.

In fact, while Craig says that he takes 'objective' to mean 'mind-independent', he makes other claims that are incompatible with that alleged meaning.

For example:

i. In his reply to a question about the metaethical argument, Craig says: [8]

“Objective” means “independent of people’s (including one’s own) opinion.” “Subjective” means “just a matter of personal opinion.”

ii. In his defense of the metaethical argument[9], he says that something is objective if it's not dependent on people’s opinion, while it's subjective if it is so dependent.

iii. In his reply to a question about objectivity and mind-independence, Craig explains that on Berkeley's view, the whole world would be mind-dependent, and holds that 'objective' means 'mind-independent'. [7]

Now, surely, even if Berkeley's view were correct, it would not be a matter of personal opinion whether, say, Hitler hated the Jewish people, or whether Hitler caused pain to other people, or whether I'm a Christian, or whether William Lane Craig believes that Yahweh exists. All of those matters are matters of fact, not matters of opinion, and would be matters of fact, not matters of opinion, even if Berkeley were correct. So, those matters would be objective matters in the sense of 'objective' that is relevant here, even on Berkeleyan idealism – even if the whole world were mind-dependent. So, it is not the case that 'objective' means 'mind-independent' in the context of the metaethical argument.

In fact, Craig is confusing two very different issues:

1. Whether something is mind-dependent in the sense of being generated by a mind (e. g., dreams, to use one of Craig's examples[7]).

2. Whether a matter, or subject, or issue is a matter of fact – i. e., objective, in the relevant sense -, or a matter of taste or opinion – i. e., subjective, in the relevant sense.

In any case, and for example, just as there is a fact of the matter as to whether the Holocaust was immoral – it was -, there is a fact of the matter as to whether it was cruel – it was.

Furthermore, just as whether a person is mentally ill is an objective matter – never mind mental illness is a mental property -, and whether she is a good person is an objective matter, so is whether she is a cruel person.

The point remains that cruelty is objective in the relevant sense of the matter – that is, the sense of 'objective' relevant in the context of the metaethical argument -, and so are some mental properties.

Bob: When you make assessments of objectivity “if Berkeley's view were correct”, you're making assessments in a counterpossible scenario, because it's not possible that Berkeley's view is correct. Why do you think it's proper to make such assessments in this context?

Alice: It seems intuitively clear to me. But in any case, when you make assessments of objectivity “if God did not exist”, you're making assessments in a possible scenario – because God does not exist -, but which you believe is counterpossible. So, I may also ask: Why do you think it's proper to make such assessments in this context?

Bob: God exists, but that aside, I think it's proper to make assessments like that because it's intuitively clear, and in fact it seems much philosophy is done in that manner, without controversy.

Alice: I then say the same regarding my use of a counterpossible scenario above.

Bob: Fair enough. Back to the subject of objectivity, I don't think Craig made any mistakes or confused any issues, but I actually agree that there is objective cruelty, in the relevant sense of 'objective'. I was just asking why you thought that. But why do you think that that is a problem for a defender of Craig's metaethical argument?

Alice: Because of the parallel argument I raised earlier.

Bob: But are you not making some circular argument here, by defining a c-god as essentially maximally cruel?

Alice: No, I'm merely mirroring Craig's argument, for a reductio.

In fact, Craig does argue for the existence of an essentially maximally morally good being, on the basis of the facts that there is objective moral goodness and there are objective moral duties, and his first premise. I'm making a mirror argument from a similar first premise and the fact that there is objective cruelty, to the conclusion that there is an essentially maximally cruel being.

Moreover, it's not even required to introduce and define 'c-god'. If you prefer, you may simply substitute 'an essentially maximally cruel being' for 'c-god' in the argument from objective cruelty.

Bob: Okay, so let me say that it's clear to me that something is wrong with that argument from objective cruelty. However, I'm curious, so I accept your offer. Why would biting the bullet be such a bad idea for a defender of Craig's metaethical argument, in your assessment?

Alice: There are several reasons, but for example, when Craig said that rape would still be cruel if God did not exist, he did not say anything along the lines of 'as long as a maximally cruel being existed'. He surely didn't have that in mind. That's understandable, since it should be obvious that no maximally cruel being is required for an act of rape to be cruel. Of course, it should be equally obvious that no maximally good being – let alone a command from her – is required for an act of rape to be immoral, though Craig will never realize that. But I digress. In any case, it seems clear to me that Craig would also reject the argument from objective cruelty to the existence of a maximally cruel being.

Bob: That's not good enough a reason to think it's a bad idea for a defender of Craig's metaethical argument to bite the bullet. A person may defend Craig's metaethical argument without agreeing with all of the claims he makes when defending it.

Alice: Fair enough, then I will give you another reason: if the argument from objective cruelty succeeds, that entails that either there is a necessary maximally cruel being – which is surely in conflict with the theistic view the defender of the metaethical argument is committed to -, or at least that in every possible world in which an act of cruelty is committed, there is a contingent essentially maximally cruel being. But that is clearly not true.

For example, it's clearly possible on theism that in some world W, God creates a few beings, and eventually one of them behaves cruelly, but he is not maximally cruel, let alone essentially so.

Additionally, the conclusion of the argument from objective cruelty plus theism entails that God actualized an essentially maximally cruel being. Why would God do that? (but even if one removes the essentialness condition, biting the bullet remains a bad idea, for the reasons given above and others).

Bob: Interesting arguments. In any case, as I said, it's clear to me that something is wrong with the argument from objective cruelty to a c-god, so we don't need to keep debating this point.

Alice: Okay, so what's your reply to the argument from objective cruelty?

Bob: I'll address that in a moment, but first, let me point out that if cruelty is merely the absence of kindness, then there is no need for an ontological foundation of cruelty. There is just an ontological foundation of kindness.

Alice: Leaving aside other issues, it's not the case that cruelty is merely the absence of kindness.

To see this, let A and B be two beings with no kindness whatsoever, as follows:

A is completely indifferent to the pain or suffering of others, for their own sake. For instance, if A sees a man torturing and killing children for pleasure and A can stop that easily and effortlessly, A feels no motivation whatsoever to do so in order to help, and lets it happen unless he has some other reason to intervene (e. g., if someone will give A something A wants if and only if A helps those children).

On the other hand, A has no interest whatsoever in inflicting any pain or suffering, either – not for their own sake. A might sometimes inflict pain, etc., in self-defense if needed – for example – but only as a means to an end – i. e., to defend himself – not because he cares whether someone else suffers. Agent A simply doesn't care about the suffering, happiness, etc., of other agents, at all.

As it happens, A does not inflict any pain or suffering, since A does not care about such things for their own sake, and there happens to be no further motive for A to do that.

Agent B has no kindness whatsoever, either. However, unlike A, B revels in horribly torturing other agents – human beings - just for pleasure, and he does so all the time.

So, B is surely extremely cruel. It might be debated whether A is cruel, but even if A is cruel, surely B is more cruel than A.

Yet, if cruelty were merely the absence of kindness, two agents that are equal with respect to kindness would be also equal with respect to cruelty, and A and B are equal with respect to kindness – i. e., they both have none at all -, but they are not equal with respect to cruelty.

Hence, it is not the case that cruelty is merely the absence of kindness.

So, the argument from objective cruelty to a c-god remains a problem for the defender of Craig's metaethical argument.

Bob: Alright, so the argument from objective cruelty might be a bit of a challenge after all. But there is an answer to it, which defeats the challenge rather easily. To see this, let's consider – for example – cruelty in the context of whether a person is a cruel person (cruel actions are handled similarly); i. e., as a character trait.

There is a metaphysically necessary connection between a person's feelings, memories, cognitive capacity, dispositions to act, and generally mental traits described without the word 'cruel' or any synonyms, and the issue of whether he is cruel, in the following sense:

Suppose Jack1 is in possible world W1, and Jack2 is in W2, and Jack1 has the same feelings, memories, cognitive capacity, dispositions to act and generally mental traits – all of that described without the word 'cruel' or any synonyms - in W1 as Jack2 does in W2. Then, they are equally cruel. Maybe neither of them is cruel, or maybe they both are. But it is not possible that one is cruel, but the other one is not, or that they're both cruel but to different degrees.

If you think otherwise, and you think one needs to add that they make the same choices, then let's add that condition. But that is not the point. The point is that we may describe a person's mind as sketched above without using the word 'cruel' or any synonyms (nor 'cruelly', etc.), and any two people who are equal with respect to that description, are equal with respect to cruelty.

So, my suggestion is that cruelty is informatively identified with having such-and-such traits – as described above – whatever the 'such-and-such' contains. Slightly more precisely, to be a cruel person is to have the dispositions to act and/or feel in such-and-such ways under such-and-such circumstances, and/or to have actually carried out such-and-such deeds, or something along those lines.

Granted, this is not a very informative account – not nearly as informative as, say, “water is H2O” -, but that is not the point, either. The point is that the property of being cruel is plausibly the same property as the property of having such-and-such dispositions, feelings, and/or [perhaps, though I doubt it] having already made such-and-such choices, etc.

It is a complex property, perhaps an infinite disjunction of conjunctions of other properties, but that's what it is. No further ontological foundation is required. So, that's a plausible ontological foundation of objective cruelty.

Alice: Alright, but in that case, moral goodness – as a character trait – may well be a complex property, involving having such-and-such dispositions to act, feelings, etc., described without using 'morally good' or any moral terms (i. e., without using the terms 'good', 'bad', 'wrong', 'right', 'morally permissible', 'morally obligatory', 'morally impermissible', 'goodness', 'badness', 'wrongness', etc.). Moral goodness in the sense of 'a good situation', etc., is handled similarly, identifying it with some features of the situation. And if you want a unified account – which I'm not sure is a good idea, but that aside – then a disjunction of the two can do the job.

In fact, the view that being a morally good person is the same as being disposed to be kind in such-and-such situations and to such-and-such agents, being caring and loving – or disposed to be caring and loving, etc. - towards such-and-such beings and in such-and-such situations, and so on, seems intuitively plausible.

On the other hand, the idea that to be morally good is to resemble a certain specific being in certain respects is much less plausible, not to mention the fact that we have excellent reasons – decisive ones – to conclude that such a being does not exist.

To be clear, I am not committed to the informative identification of moral goodness that I sketched above. I have no problem recognizing that I do not know what the ontological foundation of moral goodness is (see, for example, this post). The sort of identification I just suggested is just one option. My point is that perhaps moral goodness is that, and at least that identification is no less plausible than the identification contained in DCT, and no less informative. Or maybe there is no informative identification at all – the ball has to stop somewhere, right? But the point is that the alternative I sketched above is more plausible than the one provided by DCT, and no less plausible than your proposed account of cruelty.

Granted, without identifying the 'such and such', and the 'etc.', the account I suggested does not provide a very informative account, but again, there is no such burden on my part (see, for example, this post), and moreover, the account provided by DCT is not more informative, since it tells us that to be morally good is to resemble God in some respects, but it does not specify what those respects are, the extent of the resemblance, etc. Furthermore, your proposed account of cruelty also leaves the 'such-and-such' open.

Bob: I disagree of course about the plausibility of such views, and about the existence of God. But that aside, here's a problem for your suggestion: you're reducing moral goodness to non-moral properties. That just does not work.

Alice: If you mean that I'm suggesting (though I'm not committed to the identification, as I pointed out) identifying moral goodness with properties describable in non-moral [or not clearly moral] terms, like 'kind', 'loving', etc., then that is true.

But you're doing that too, by claiming that moral goodness is resemblance to some qualities of the only being who is omnipotent, omniscient, and maximally kind and loving. Remember, on pain of circularity, you can't reply to that by appealing to the definition of 'God' – or, in any case, I can ask you about the ontological foundation of greatness.

Moreover, you also are identifying cruelty with properties describable in terms not involving the word 'cruelty', or any synonyms.

Bob: Moral goodness is identified, on DCT, with some qualities of God, and even if I don't define 'God' in moral terms, he is indeed morally perfect. But you're identifying moral goodness with something that is not good, or not essentially good, like having some dispositions.

Alice: Actually, I'm not committed to the identification I proposed – I'm just saying it's more plausible than DCT – but that aside, you're begging the question here by saying that those qualities are not good, or not essentially good. I'm suggesting that maybe those qualities (including dispositions to act and/or feel in such-and-such ways, etc.) are the same as moral goodness. This is not a semantic theory, but an ontological identification, which is semantically neutral – just like what you're doing by proposing DCT. Objecting on the basis that those qualities are not morally good or not essentially morally good would be like objecting to DCT on the basis that God is not morally good or not essentially morally good, because you just identified her as the only omnipotent, omniscient, maximally kind and loving being, without using moral terms like, say, moral goodness, or greatness.

Now, if you do include greatness or moral goodness in your definition of “God”, and then you use that inclusion to attempt to make a distinction with regard to which one of us – if either – is identifying moral goodness with something allegedly non-moral or not essentially moral, you have a circularity problem again.

So, your objection fails – or else, a relevantly similar objection to DCT succeeds; take your pick.

Moreover, for that matter I may parallel your objection and say you're proposing identifying cruelty with properties that are not essentially cruel.

Bob: There is a relevant difference between the foundation of objective cruelty that I suggest and the foundation of objective moral goodness that you suggest: one is semantically closed, the other isn't. While Craig is not making a semantic but an ontological metaethical argument, in this particular case semantic closure shows that my suggestion in the case of cruelty is correct or probably correct, whereas you don't have the same semantic support for your suggestion with regard to the ontological foundation of moral goodness.

Alice: I don't see any clear and relevant semantic differences. Could you please elaborate on your objection, and defend it?

Bob: Alright, let's consider a few examples, where all of the people involved are adult human beings acting of their own free will.

S1: Jack likes torturing people just for fun, and does it every day.

Let's consider the questions:

Q1.1: Is Jack a cruel person?

Q1.2: Is Jack a morally good person?

Q1.3: Is Jack a morally bad person?

The answers are, of course, 'yes', 'no', and 'yes'.

However, there is a significant difference:

In the case of Q1.1, the question is semantically closed, in the sense that “I know S1 is true, but is Jack a cruel person?” would be similar to “I know that Jack is a bachelor, but is he unmarried?”.

On the other hand, Q1.2 and Q1.3 are both semantically open, in the sense that they're not semantically closed.

Alice: I don't see any particular reason to think that there is a difference between those questions in terms of whether they are semantically closed, or semantically open. Why do you think there is such a difference?

Bob: I reckon that there is a difference by reflecting on the meaning of the words, using my intuitive grasp of the meaning of the relevant terms.

Alice: My reflection does not lead to the same conclusion. In fact, it seems plausible to me that the cruelty vs. moral cases are equal in that regard – i. e., probably either the questions are semantically open in all cases, or in none.

Bob: That seems improbable. Moreover, most metaethicists would agree that the moral questions Q1.2 and Q1.3 are semantically open.

Alice: Most metaethicists would agree that Craig's metaethical argument fails, for that matter, but that aside, I was not taking a stance on whether the questions are open or closed. My point is that they seem to be either all closed, or all open. At least, by my intuitions. At any rate, how does it matter?

Bob: One may raise a situationist - or situationalist - empirically-based challenge to virtue ethics.

I don't believe the situationist challenge succeeds, but I don't think it can be defeated on semantic grounds alone.

So, for example, one may say “I know Jack likes torturing people just for fun, and does it every day, but is he a morally bad person? Perhaps, humans do not have traits stable enough for there to be a morally bad - or for that matter, a morally good – human being. Maybe Jack just happens to be in a very unusual situation every day.” What's the semantic problem with that?

Alice: One may raise exactly the same challenge if, instead of moral goodness or moral badness, one is talking about character traits like generosity, courageousness, greediness, or – as in our example -, cruelty, and the challenge looks no less persuasive.

So, if your challenge shows that Q1.2 and Q1.3 are semantically open, the parallel I just offered shows that so is Q1.1.

Bob: But what about immorality of actions?

I know you're focusing on moral goodness and badness for now, but for example, let's consider:

S2: Jack is torturing Jake just for fun, right now.

It seems that the question “Is Jack behaving in a cruel manner?” is semantically closed – and he is, of course -, whereas the question “Is Jack behaving immorally?” is semantically open – even though he is, of course, behaving immorally.

Alice: I don't see any good reason to think there is a difference in terms of whether they're open or closed.

Moreover, you have the following problem:

If the difference between open and closed questions is relevant and problematic for the identification of moral goodness I suggested, then that works against DCT as well, and for the same reasons.

For example, let 'Jane' name the only omniscient, omnipotent, maximally kind and loving being that exists – assuming here for the sake of the argument there is such a being.

Then, for any 'such-and-such', the question “I know Jill resembles Jane in such-and-such respects, but is Jill morally good?” surely has no better claim to semantic closure than “I know Jack enjoys torturing people just for fun, and does it every day, but is Jack morally good?” (or “...is Jack morally bad?”), or than “I know Jack is torturing a person for fun right now, but is he behaving immorally?”.

Moreover, even the question “I know Jane is maximally kind and loving, but is Jane morally good?” has no better claim to semantic closure than “I know Jack enjoys torturing people just for fun, and does it every day, but is Jack morally good?” (or “...is Jack morally bad?”), or than “I know Jack is torturing a person for fun right now, but is he behaving immorally?”.

By “no better claim to semantic closure” I mean it's not more probable, after reflecting on the concepts involved, that they are semantically closed in the relevant sense.

So, DCT does not have a better claim to semantically closing questions on moral goodness than the non-theistic alternative I suggested earlier. But again, why would semantic closure matter?

Bob: But God is by definition the greatest conceivable being, and thus it is a conceptual truth that God is morally perfect, and maximally morally good. And so, the question: “I know Jane is God, but is Jane morally good?” is semantically closed, and so is “Is God morally good?”

As Craig points out, asking why God is morally good is like asking why bachelors are unmarried. [2]

Similarly, the question: “I know Jill resembles God in the morally relevant sense, but is Jill morally good?” is plausibly semantically closed.

Alice: That reply fails, because the definition of 'God' as 'the greatest conceivable being' is at most used in the context of DCT to pick the right entity – since you can't just point your finger at God – but it would be improper to reply to a question like “Why is the only omnipotent, omniscient, maximally kind and loving being morally good?” by citing the definition of 'God' as 'the greatest conceivable being' and saying that the only omnipotent, omniscient, maximally kind and loving being is God.

Similarly, it would be improper to claim that DCT is semantically closing the question of moral goodness on the basis of a definition of the word 'God'.

Regardless, in any event, I can ask you: “Why is moral goodness resemblance to some aspects of the only omnipotent, omniscient, maximally loving and kind being that exists?”

That is not semantically closed.

Bob: I'm not persuaded.

Alice: Why?

Bob: I just find your arguments counterintuitive; there is probably something wrong with them.

Alice: You're mistaken. But still, let us assume for the sake of the argument that DCT semantically closes the questions about moral goodness discussed above because God is the greatest conceivable being by definition – which is definitely not the case; that would be viciously circular – or for some other reason – it does not, as I argued above. But let's assume it does close them.

Even then, questions about moral obligations would remain. For example, is “I know that the greatest conceivable being commands Jill not to abort, but does Jill have a moral obligation not to abort?” semantically closed?

Bob: Maybe it is. Maybe not. I'm not sure, so I take no stance.

Alice: Do you think that if it's not semantically closed, then that's a problem for DCT?

If your answer is “yes”, then why do you believe that DCT is true, given that you are unsure as to whether it's semantically closed?

If your answer is “no”, then why would lack of semantic closure be any problem for the foundation of moral goodness I suggested?

Bob: I'm not sure, but something is probably wrong with your argument, even if I can't explain the reason. It's intuitive.

Alice: That's most certainly unpersuasive. You really don't have a case based on semantic closure or lack thereof, as I've been explaining.

Bob: In any case, let me point out that Craig does not make any of those semantic arguments. His case is entirely based on moral ontology, not semantics. I was just suggesting another potential objection to some of your points.

Alice: Fair enough, but since you introduced those semantic objections, I explained some of the reasons why they all fail.

Bob: I find your replies unconvincing, as usual. But okay, let's leave those semantic issues aside, and let's move on to another problem for your suggestion: disagreement.

People may disagree about what qualities, dispositions, etc., go in the 'such-and-such' in the foundation of moral goodness that you propose. Without a paradigm of goodness, it seems there would be no fact of the matter as to whether something is morally good, just as without a paradigm of 'meter' (whether it's a meter bar or something else), there would be no fact of the matter as to whether something is a meter long.

But then, moral goodness would not be objective. So, your proposed identification fails.

Objective moral goodness requires an agent who is a paradigm of moral goodness.

Alice: I don't see why disagreement would have that effect, but one can just turn the argument around:

For example, people may even disagree about what qualities a being ought to have in order to be the paradigm of moral goodness, or even disagree that such a paradigm is even an epistemically live option. In fact, some people might say that even assuming that there is an omnipotent, omniscient, maximally kind and loving being, a moral error theory is true, because – say – moral statements have contradictory ontological commitments.

Bob: That's not the same.

Alice: It's an analogy, but how is that not relevant? You brought up disagreement, and I matched that. But let's say that you have a proper answer to that. One can make an even closer parallel with cruelty, as follows:

People may disagree about what qualities, dispositions, etc., go in the 'such-and-such' in your proposed ontological foundation of cruelty.

So, if the identification of moral goodness I suggested fails because there is disagreement as you described, then so does your proposed identification of cruelty, and then we're back with the c-god argument; i. e., objective cruelty requires an agent who is a paradigm of cruelty, and who is essentially maximally cruel.

If you claim there is a relevant difference, what is it?

Bob: I'm not persuaded.

Alice: Why not?

Bob: It's intuitively clear to me that your objection fails, but let's say for the sake of the argument that on non-theism, there could be an ontological foundation of moral goodness, or moral badness, even though I don't find that plausible at all. So, what about moral obligations? How do you resolve that problem?

Alice: I don't claim to know a correct informative identification, and that is not a problem (see, for example, this post), but that aside, once there are viable alternatives in the case of moral goodness, moral duties can properly be handled similarly. I don't need to take a stance on which alternative works or is correct, but purely for example, the following is a potential suggestion:

What is it for agent A to have a moral obligation to Y?

It's for A to be in a situation such that if A failed to Y, A would be acting immorally.

That suggestion would reduce the issue of moral obligation to that of immorality (or, if you like, moral wrongness).

Bob: Even if that worked, how do you handle moral wrongness/immorality?

Alice: Immorality can be handled more or less like moral goodness before: an action is immoral if and only if the agent has such-and-such intent, beliefs, justified beliefs, etc., and one may identify behaving immorally with behaving in such-and-such ways, or perhaps more precisely making such-and-such choices, or failing to make them, where the choices are made by some agents with such-and-such psychological makeup, having such-and-such information, etc.

Moral wrongness is the same as immorality, so that covers it.

This is only one potential suggestion, of course. Another one is to handle moral obligation in terms of rules – though the rules would in turn depend on the sort of mind in question.

I don't know which one is correct, if either one is.

But surely each of those suggestions is at least more plausible than DCT – or some other form of theistic moral ontology.

Bob: Of course, I disagree about the plausibility. But that aside, here is a problem for your proposed identification, or any other such non-theistic alternatives: it loses normativity.

Alice: What do you mean by that?

If you mean that I'm suggesting – though I'm not committed to the view, as I mentioned – identifying moral wrongness with properties describable in non-moral [or not clearly moral] terms, that is true. But you're doing just that too, by claiming that moral goodness is resemblance to some qualities of the only being who is omnipotent, omniscient, and maximally kind and loving, and moral obligations are the commands of such a being.

Moreover, you too are identifying cruelty with properties describable in terms not involving the word 'cruelty', or a synonym, etc.

Bob: No, that is not the problem. The problem is that if what you describe is all there is to moral obligations, then one may ask: “Why should one not break one's moral obligations?”

Let's suppose Jack is a brutal psychopathic dictator who does not care about others or about right and wrong, but only about his own power, pleasure, etc. Why should Jack not torture and kill peaceful political opponents just to stay in power?

On theism, such actions would have consequences, in the form of afterlife punishment.

Alice: That's not about ontological foundations anymore. Rather, you're talking about consequences, and means-to-ends rationality given certain goals or values. A Buddhist may just reply that the dictator would poison his karma, or something like that. It's a matter of consequences and means-to-ends rationality, not about ontological grounding.

Bob: Maybe a Buddhist escapes my challenge, but you're not a Buddhist, so what is your reply?

Why should he not torture and kill peaceful political opponents just to stay in power?

Alice: There are options:

1. If your 'should' is the usual moral 'should', then torturing and killing his peaceful political opponents just to stay in power would be immoral, and it's tautological in that sense of 'should' that one should not behave immorally.

2. If you're asking why it's immoral to torture and kill peaceful political opponents just to stay in power, that is a first-order ethical question, and not the point, but I'd say it's because it's immoral for a human being to torture other people just to stay in power. If you're going to keep asking “why?”, I will say the chain of reasons has to end somewhere, and this seems like a plausible place to end it.

3. If your 'should' is a means-to-ends 'should' (or m-e 'should'), then whether he m-e-should torture and kill them depends on the situation and on his goals or, more generally, the values on which he bases his goals (by 'values' I mean what he values positively or negatively after reflection, which is not necessarily the same as what he believes is morally good, bad, etc.)

But I don't see any problems for the non-theist.

Bob: There are plenty of problems, but for example, in case 2., you're failing to explain why it's immoral for a human being to torture other people just to stay in power.

Alice: Not more than you're failing to explain why it's immoral to disobey the commands of the only omnipotent being that exists – assuming she exists.

Bob: But I'm identifying moral obligations with the commands of that being – who is God -, and it's tautological that it's immoral to break one's moral obligations.

Alice: And I'm suggesting – though I need to make no commitments on this – identifying immorality with behaving in such-and-such ways, or perhaps more precisely making such-and-such choices, or failing to make them, where the choices are made by some agents with such-and-such psychological makeup, having such-and-such information, etc.

You don't have a better explanation. What you have is an unwarranted claim of identification – and a false one, by the way –, whereas I don't make any such claims: I just suggest options more plausible than the account provided by DCT.

Bob: Of course, I disagree about the truth and warrant of DCT. But that aside, there is another problem for your alternative, namely that if your suggestion were correct, there would be possible agents and situations in which it would not be irrational for those agents to behave immorally, and furthermore, in some cases, it would be irrational for them not to behave immorally.

Alice: If you're talking means-to-ends rationality, from the perspective of some goals or values (“values” in the sense of what an agent values, after reflection; he may positively value immoral things), sure. But that's actually the case. There are metaphysically possible situations like that. And if you're talking m-e-rationality from the perspective of all of an agent's values, it's also probably true, though probably infrequent. In any case, it's metaphysically possible that an agent is like that.

Bob: But here is a problem: it's irrational to have values such that, in order to maximize the expected value of his behavior – expected from the agent's own evaluative perspective –, an agent means-to-ends ought to behave immorally.

Alice: But then, you're apparently not talking m-e-rationality, but some other form of rationality.

It's not clear what sense of 'rational' you have in mind.

Bob: Now that I think about it, rationality in the sense in which I'm using the word – which is not m-e rationality – would not even exist without God, either. But such rationality does exist. So, God exists.

So, there is a successful argument from objective rationality (not m-e-rationality, and not epistemic rationality, either, though I think there might also be a successful argument based on either one of those) to the existence of God.

Alice: If that's your claim, I will ask you to make your case, please.

But before you go on, I'd like to point out that even granting for the sake of the argument that there is a usual sense of the term 'rational' that matches the way you're using it now, and also granting that there is objective rationality in that sense, I'm pretty sure I can properly reply to your arguments from objective rationality in ways that are relevantly similar to the ways in which I replied to the metaethical argument above, so pretty much the same exchange will be repeated, only talking about objective rationality instead of objective moral goodness or moral obligations.

Bob: I don't think you can defeat the argument from objective rationality, but that's a matter for another occasion. But there is a direct and decisive problem for your view: It's an extraterrestrial problem, so to speak. Craig makes that point eloquently. [10]

Let's assume unguided evolution – which does not follow from non-theism, but which you believe in anyway –, and let's suppose that an alien species – say, species#1, to give it a name – evolves with very different values, and different moral beliefs. Who would be correct? They or we? I mean, would our moral assessments be generally objectively true? Or would theirs be so? Or neither? And how do you know which ones?

Alice: I do not see a problem. This is – roughly - how I see the matter.

Let's say that another species (say, species#2) evolves with a different visual system. They have perceptions equal or at least similar to our perceptions of green, blue, red, etc. (this part is not required, but let's say so), but associated with very different frequencies. Let's say that they have words like 'green', 'blue', etc., that they use to talk about the world they see, much as we use color words in English.

Humans have color, and species#2 has #2-color. Humans generally make true color statements, and those aliens generally make true #2-color statements, etc. The truth conditions of color statements are different from the truth conditions of #2-color statements.

To be clear, it's not that our color statements have a built-in reference to species – they do not -, but they don't have built-in commitments to claims about the language and/or visual systems of other species, either. There might be species that have color vision and color language – if the universe is really big – and species that have analogues, like #2-color vision and #2-color language.

Of course, that does not change the fact that whether you ran a red light is a matter of fact, not a matter of opinion. There is a fact of the matter as to whether the light was red or not.

Also, Nazi uniforms were not red, and they wouldn't have been red even if the Nazis had won the war and for some weird reason convinced everyone given sufficient time, killings, etc., that the uniforms were red. What happens with aliens has nothing to do with it. [d]

Now, to address scenarios like the one you propose, let's suppose species#3 evolved from something like elephants or mammoths – herbivores -, species#4 evolved from something like orcas, and species#5 evolved from something like, say, octopuses.

They all evolved on different planets, they're all intelligent, capable of space travel, they have language at least as complex as ours, etc., and let's say that they all have some language that they use in social contexts that somewhat resemble those in which we humans tend to use moral terms – but not so closely; their psychology is quite different from ours.

So, for example, they can all feel emotions like guilt, remorse, outrage, etc. - perhaps not identical to human emotions, but similar; this issue is not crucial -, and they deploy their moral-like language in contexts and situations normally associated with such feelings - that's a resemblance -, and their judgments are usually action-guiding – like our moral judgments -, but on the other hand, their social structures are indeed quite alien.

In the case of species#5, for instance, we may stipulate they were solitary beings even when they had a capacity for making tools to resolve problems above the level of chimpanzees, and only after that they gradually became more social, etc. - just to give an impression of bigger differences, which one might simply stipulate and explain in greater detail if needed.

Even if that happens, I see no problem for moral objectivity, in the sense in which Craig is using 'objective', according to his own explanation of what he means by that.

So, for example, the following is also compatible with objectivity: #5-moral goodness, #4-moral goodness, #3-moral goodness and moral goodness are all different properties, as character traits, despite a significant overlap. Similarly, #5-goodness, #4-goodness, #3-goodness and goodness are different properties as properties of situations, results, etc. (e. g., “a good situation”).

I don't see any difficulty here.

What about moral obligations?

That is an interesting case. I offer a few suggestions:

1. Individuals of species#k have k-moral obligations, rather than moral obligations, and their k-moral sense is generally reliable at detecting their k-moral obligations.

2. They all have moral obligations, but they're very different. For example, sometimes humans have a moral obligation to prevent a very bad result, but #4 beings have a moral obligation to prevent a very #4-bad result, and #4-badness is not the same as badness.[e]

3. Individuals of species#k have k-moral obligations, and their k-moral sense is generally reliable at detecting their k-moral obligations. In the case of at least some species, they also have moral obligations, but they neither know nor care about that. They care about k-morality, not about morality.

As in the color case, there seems to be no problem here.

Bob: A problem is that most people would reckon that aliens capable of language as complex as ours, science, etc., and humans would not be talking past each other. I mean, if the aliens feel something similar to – or the same as – guilt, remorse, outrage, etc., and have concepts they use in social contexts usually associated with those feelings, make judgments in those contexts that are usually action-guiding, etc., then they have morality, not some alien morality, and there could be disagreement in a moral debate between those aliens and some humans, rather than miscommunication. That's what happens – for example – in most fictions involving aliens: they have morality, not some alien morality.

Alice: Fictional aliens are generally no good guide to what real aliens would be like, and meaning depends on usage. So, the meaning of the alien terms depends on how they use them.

Bob: The meaning of the aliens' terms depends on their usage, but their usage in contexts associated with guilt, remorse, outrage, etc. – or something very similar, but let's say the same, since you're not making a metaethical distinction based on that –, and the usual action-guiding property of their judgments show that they're using moral language.

Alice: I don't agree that that shows that they're using moral language.

For example, if their judgments are – after ideal reflection – associated with properties describable without such language or ours (e. g., dispositions to be kind in such-and-such situations, etc.), and our judgments are associated with some other properties after similar reflection, then they're not using moral language, and the truth conditions of their alien-moral language are different from the truth-conditions of our moral language.

In fact, this is so regardless of whether moral goodness is identified with having some dispositions, etc., or only supervenes on them – and the same for #3-moral goodness, etc.

To be clear, the 'ideal reflection' condition is an example, but I don't need to take a stance on how to best analyze the truth conditions of their language and ours. Perhaps, it's better to analyze that in terms of property-tracking, and in that case, if the aliens are tracking different properties (describable without their moral-like terms, etc.) from us, then they have alien-moral rather than moral language, etc. Or some other, similar variant. I see no problem.

The point is that in cases like that, their usage of language does not show that they have moral language and/or that they could communicate with us successfully – rather than talk past each other.

On the contrary, suppose we stipulate that even upon ideal reflection, the aliens in question and humans end up with different judgments, and/or that the aliens track properties associated with their moral-like language that are different from the properties (describable in non-moral language) we track and associate with our moral language – or, more generally, that the association between their moral-like language and some properties (describable in non-moral and non-moral-like terms) differs from the association between our moral language and some properties (describable in non-moral terms), in a way akin to the way in which alien-color language and color language differ with respect to the properties of light they're associated with. In that case, I reckon that the aliens do not have moral language, but moral-like language, with different truth conditions, etc., and if they were to talk to humans without realizing that, aliens and humans would be talking past each other.

Bob: There are plenty of atheist philosophers who would disagree with that.

Alice: True, but for that matter, there are theist philosophers who disagree with DCT, or even reject Craig's metaethical argument. Maybe that is less frequent among theists, but in any case, those are not central issues in our debate. If we're going to rely on poll numbers, I may just point out that most metaethicists reject the metaethical argument, and in fact do not believe that God exists, and that most metaphysicians hold the view that God does not exist. [11]

Bob: Okay, let's leave polls aside. It seems clear that any sufficiently intelligent aliens and humans would not be talking past each other, in situations such as the ones described above if they have the mental traits in question. As I said, if the aliens feel guilt, remorse, outrage, etc., and have concepts they use in social contexts usually associated with those feelings, make judgments in those contexts that are usually action-guiding, etc., then they have morality, not some alien morality, and there could be disagreement in a moral debate between those aliens and some humans, rather than miscommunication.

Alice: Actually, I reckon that if, say, #4 aliens and humans were to meet, they would be talking past each other if they tried to debate without realizing that moral goodness and #4-moral goodness are different properties; “Agent A is #4-morally good” and “Agent A is morally good” have different truth conditions, etc.

Bob: I disagree. After reflection on the relevant concepts, I can tell that you're mistaken.

Alice: After reflection on the relevant concepts, I can tell that I'm not mistaken – you are.

Bob: There is a Moral Twin Earth objection I'd like to make, but I'll leave it for later; first, I would like to raise a number of other issues. For example, you can't properly set up scenarios like that and say that there would be objective #3-moral goodness, etc., because even if there were moral goodness, #4-moral goodness, #5-moral goodness, etc., and those were all different properties, then moral goodness would not be objective at all, and neither would any of the others – or at most, one of them would be objective, but there would be no particular reason to believe moral goodness would be the one. Mirroring Craig's point [10], we can tell that human morality would have no better claim to be objective than #3-morality, #4-morality, #5-morality, etc.

Alice: I reckon that all of them would be objective – at least, that would be the proper assessment based on the information provided by the scenarios.

So, of course none of them would have a better claim to be objective than any of the others. In fact, just as color and #2-color can both be objective, morality, #3-morality, etc., can all be objective. I reckon they would all be so.

Bob: But that is not objectivity in the relevant sense of the word 'objective'. The relevant sense is the sense in which Craig uses the word 'objective' in the metaethical argument.

Alice: Actually, #3-morality, etc., could be objective in that relevant sense. There is an objective fact of the matter as to whether the Holocaust was immoral (it was), whether it was a bad thing (it was), and so on. Those are matters of fact, not matters of opinion. Assuming the alien scenarios, all of that remains unchanged. The same goes for #3-moral goodness: there is an objective fact of the matter as to whether some agent is #3-morally good, etc.

Bob: There are non-theist philosophers, like Sharon Street, who would not classify anything like what you describe as moral realism. [12]

Alice: True, that's not what she would call “uncompromising normative realism”. [12] Perhaps – though this is not entirely clear to me -, DCT would not qualify, either!

On the other hand, what I described above would be realism under other philosophical conceptions of the expression 'moral realism'. Purely for example, it qualifies under Geoff Sayre-McCord's conception[13], explained in the SEP entry on moral realism.

However, we're not talking about any of those definitions. What matters in this context is whether objective moral values and duties would exist, in the relevant sense of objectivity, which is the one used in Craig's metaethical argument. And they would – and they do -, for the reasons I've been giving. So, Craig is mistaken. The alien objection simply misses the mark entirely.

Bob: It's Craig's argument, and he knows what he means.

Alice: Actually, as I explained earlier, Craig is confusing two very different matters, so that might explain his mistake on the alien issue, but the alien scenario misses the point entirely, as explained.

Incidentally, in addition to the color example[14], Craig also said that moral disagreement and moral error presuppose the objectivity of moral values and duties. [6] But it is apparent that if species#3 has #3-morality, species#4 has #4-morality, etc., humans would still be able to disagree – with other humans, at the very least[f] - on whether, say, the 9-11 attacks were immoral. And humans can still make errors: for example, some people believe those attacks were not immoral. All of that remains and would remain even if zillions of different alien species had zillions of different alien moralities.

Bob: I don't agree with that. The fact is that the extraterrestrial examples show very well that, without God – or at least, assuming unguided evolution -, morality would not be objective, moral goodness, obligations, etc., would not be objective, etc., in the sense of 'objective' relevant in the second premise of the metaethical argument. Perhaps, Craig did not explain clearly what he meant by 'objective', but at least that much is clear.

Alice: No, it's not just that he was unclear – he was unclear, but it's not just that.

Craig gave the examples of green and red as examples of objectivity even when illustrating what he meant by 'objective', and also when defending the objectivity of moral values and duties[14], highlighted the clear distinction between matters of fact and matters of opinion – and made it clear that that was the relevant distinction -, explained the difference in terms of not depending on what a person believes, etc. Alien moralities may qualify as objective under that conception of objectivity.

Craig also made claims about disagreement and error, etc. [6], and clearly there can be moral disagreement between humans, etc., regardless of what the aliens do, what judgments they make, etc.

The fact is that the alien scenarios pose no threat whatsoever to objectivity, in the sense of 'objective' that is relevant in the context of Craig's metaethical argument.

Moreover, if that alien example were really an objection to moral objectivity in that context, you might as well argue:

P1.4: If unguided evolution happened, objective color would not exist.

P2.4: Objective color does exist.

C: Unguided evolution did not happen.

Of course, claiming that color is objective and then that if intelligent aliens (i. e., with language, science, etc.) evolved differently and had a different visual system (in the relevant sense), etc., there would be no objective color, would commit you to an unwarranted exobiology claim, namely that there are no such aliens.

Then again, you're already making an implicit – and also unwarranted - exobiology claim: you're apparently implying that one of the following is true:

1. There are no aliens of the relevant kind (i. e., smart, capable of talk, with complex language, with moral-like language, etc.).

2. There are aliens like that, but those aliens have a moral sense, like ours (not a #3-moral sense), etc., and make moral assessments generally matching ours, or else they are very confused about morality.

Bob: Actually, there are some non-theists who are also committed to the view that either 1. or 2. above is true.

Alice: True, and others are not. But that's not the point.

Bob: Craig said that in order to be objective in the relevant sense, moral values and duties have to be valid and binding independently of human opinion. [9] You're failing to factor that in.

For example, as Craig himself explains, if values were the product of evolution, they would not be valid and binding without previous agreement. But if the alien scenarios were true and there were moral values, then those values would be the product of evolution, and hence would not be objective.

Alice: What do you mean that they would not be binding without agreement?

Of course, the Holocaust would still be immoral regardless of agreement.

But that aside, Craig defined 'objective' in general. He did not suggest that 'objective' has one meaning in the context of morality, another in the context of color, and so on.

In fact, one may properly reckon based on Craig's points on the matter and context[8][9][15] that a matter is objective if and only if it's a matter of fact, not a matter of opinion. A property P is objective if and only if whether some object O instantiates P is a matter of fact, i. e., an objective matter. And objective P actually exists if P is an objective property and is actually instantiated.

For example, objective moral goodness actually exists just in case moral goodness is an objective property, and is actually instantiated.

In fact, here one may use one of Craig's examples again. He gives that example in order to illustrate what it means to say that something is objectively wrong [6], and goes on to explain that to say that the Holocaust was objectively wrong is to say that it was morally wrong independently of factors such as who won World War Two, whether the Nazis who carried out the Holocaust believed it was wrong, whether the Nazis managed to change the world so that everyone would believe that the Holocaust was right and good, and so on. But of course, all of the above – about the Holocaust – seems clearly compatible with there being aliens with some analogue to morality – some alien morality if you like -, and there being objective #3-moral goodness, #4-moral goodness, and so on. If you claim it's not compatible, then why is that not so?

Bob: But even if #3-moral goodness, etc., were compatible with the statements about the Holocaust, etc., and even if agreement between humans were not required, moral values and duties would not be binding for the aliens, and so they would not be objective.

Alice: Whatever “valid and binding” means in this context, if #3-moral goodness, etc., are compatible with the statements about the Holocaust, etc., which is the very same example Craig uses in order to explain what 'objectively wrong' means in this context, then the alien scenarios pose no threat to objective moral values or duties.

Now, if Craig's “valid and binding” condition demands that in order for moral goodness, wrongness, etc., to be objective, there be no #3-goodness, #4-goodness, etc. - all different properties -, then he's just using a specific concept of 'objectivity' in the moral case that is different from the colloquial sense of 'objective' that distinguishes matters of fact from matters of opinion, and which is the sense in which color is objective.

But if Craig is using the word 'objective' in this alternative sense, then he made serious mistakes and confused matters by using the color example to illustrate what he means by 'objective', and also by using the Holocaust example to illustrate the difference, and also by pointing to the difference between matters of fact and matters of opinion, etc.

In short, if Craig is using the word 'objective' in a sense that would make objective moral values and/or duties incompatible with there being alien moralities in the way you claim, then he made serious mistakes when explaining what 'objective' means in this context. Additionally, in that case, he also made obviously false claims by saying that moral error, disagreement, etc. presuppose objectivity, since – very obviously – if there were #3-moral goodness, etc., we can still disagree about whether some person or behavior is morally good, make mistakes about whether some behavior was morally wrong, etc., regardless of what any aliens might do, believe, feel, etc.

Bob: The concept of 'objective' is the same, but you seem to be missing one requirement for objectivity of a property – namely, that the property be mind-independent. In your alien scenarios, those properties – even if they existed – would be mind-dependent.

So, there would be no objective moral goodness.

Alice: You're missing the point here. The alien scenarios – and the existence of #3-moral goodness, etc. - appear clearly compatible with all of the statements about the Holocaust. Moreover, in that case, moral matters would still be matters of fact, not matters of opinion.

So, if you claim that the existence of #4-moral goodness, #3-moral goodness, etc. - all different properties -, are incompatible with objective moral values and duties in the sense in which the word 'objective' is used in the metaethical argument, then you ought to show why that is so.

Bob: As I already explained, you're missing the mind-independence condition.

Alice: Craig did not include that condition in his explanation of what it means to be objective, in the defense of his metaethical argument. Granted, he identified objectivity with mind-independence when replying to a question, but as I already explained, Craig is confusing two very different matters when he talks about mind-independence.

Bob: I don't find your arguments persuasive, as usual. But in any case, the mind-independence condition is crucial. If there were #3-moral goodness, #4-moral goodness, etc., and moral goodness, then moral goodness would not be objective.

Alice: Given your answers, I guess no matter how many times I showed that moral goodness would still meet Craig's conditions, you would not be persuaded.

So, you're saying that if there were #3-moral goodness, etc., then moral goodness would be mind-dependent. Why?

Bob: Because it would somehow be generated by human minds, or the minds of other similar beings.

Alice: What do you mean by that? Why would it be any more mind-dependent than, say, mental illness?

Bob: Without God, there would be no objective illness, either, and that includes mental illness. That's because without a designer, there is no proper function, and hence no improper function.

Alice: You're mistaken, but that's a matter for another debate. Let's leave aside mental illness in general, and focus specifically on obsessive-compulsive disorder (OCD) (or more precisely, the property of having OCD, or OCD-ness). That's objective, right? But how is OCD not generated by a human mind?

Bob: Without God, there would be no objective illness. Objective obsessive-compulsive disorder – the condition - could exist without God, but it would not be a disorder, so it would be a mistake to call it that.

Alice: That is not true, but in any case, whatever one calls it, the point is that OCD-ness would be objective without God too, in the relevant sense of 'objective'. But the condition of objectivity you give now – i. e., not being generated by a mind - would seem to render OCD-ness not objective. Hence, the condition 'not being generated by a mind' is not a condition for objectivity, in the relevant sense of 'objective'. Craig is mistaken about that, and so are you.

Bob: I don't think Craig is mistaken. And I'm not mistaken. But that aside, why do you think that there is objective OCD-ness?

Alice: Some people have OCD, and it's a matter of fact whether a person has it – not a matter of opinion. What does that have to do with whether OCD is generated by a mind? Why do you think there is objective moral goodness, objective immorality, etc.?

Bob: It's intuitively clear. For example, let's consider one of Craig's examples: [16] racism. Don't you agree racism really is immoral?

Alice: Yes, racism really is immoral, and some people really have OCD, and most stop lights really are red, and so on, in the usual, colloquial sense of the word 'really', which is the one Craig is using.

Bob: But even if moral goodness existed without God, and if the alien scenarios you suggest were true, moral goodness would somehow be generated by human minds, or the minds of other similar beings – if there were some other aliens, similar enough to humans to have morality rather than #5-morality, etc. -, and so it would not be an objective property, because it would not meet the mind-independence requirement. Also, moral wrongness would not be objective. And so on.

Alice: But again, how would moral goodness, or moral wrongness, be any more “generated by a mind” than, say, OCD-ness?

How is moral goodness any different from OCD-ness, with respect to objectivity, under the assumption that one of the alien scenarios I sketched obtains? (i. e., that there are such aliens, with such faculties, and some properties like #5-goodness, etc.).

Bob: It's difficult to say where the difference lies, but I think it's clear that you misunderstand Craig. In the way he uses 'mind-dependence', clearly, OCD-ness would not be generated by a mind, so it would be mind-independent. But #5-moral goodness, if it existed, would be mind-dependent.

Alice: Why?

Bob: It's intuitively clear, but it is difficult to explain why. One potential distinction is as follows:

If, say, #3-aliens came to Earth – and they had enough tech, etc. -, they would be able to recognize obsessive-compulsive disorder by observing and studying humans, but they would be unable to recognize moral goodness if they only had a sense that recognizes #3-moral goodness, unless they relied on human input – that is, unless they relied on the minds of humans to make moral assessments.

While that is an epistemic difference, it plausibly results from the ontological difference between OCD-ness – which would still be objective – and moral goodness – which is objective, but would be subjective if there were #3-moral goodness, #4-moral goodness, etc.

Alice: Why do you think #3-aliens would be able to recognize OCD, but not moral goodness, without input from humans?

Bob: People with OCD behave differently from people without OCD. Sufficiently advanced aliens would be able to tell the difference.

Alice: Morally good people behave differently from people who are not morally good. Sufficiently advanced aliens would be able to tell the difference.

Bob: What about a bad person who behaves like a good person because it's in his self-interest? The aliens would not be able to observe a difference.

Alice: But in that case, the aliens would not be able to observe a difference relying on human input, either.

Bob: Okay, but what if the behavioral differences are tiny, but a human can still detect them?

Alice: Then a sufficiently advanced alien can detect them too.

Bob: Alright, but even if #3-aliens were able to observe differences in behavior between good and not good humans, they would not have a classification matching those differences, while they would probably have a classification based on whether a human has OCD or not, among many other mental conditions.

In other words, the aliens would still need to rely on human input to see any relevance in the specific differences between morally good and not good humans. Not so for OCD.

Alice: I don't see why that would be so. Why do you think that they would find the differences between humans with OCD and humans without OCD salient enough to make a classification based on it – or make it part of a broader classification considering many conditions -, but not the differences between morally good and morally not good people?

Bob: OCD is easier to detect than moral goodness or moral badness, at least for a being who has a #3-moral sense that detects #3-moral goodness and #3-moral badness, but not moral goodness or moral badness.

In fact, without a moral sense, #3 aliens would have no indication whatsoever that there is some significant classification between morally good and morally not good people, unless they asked humans to make assessments.

Alice: Actually, if the aliens observed human behavior, they may find the differences between morally good and not good people salient enough to warrant a classification, for all we know. But I still don't see how that has anything to do with objectivity.

Let's say, for example, #3-aliens studying the Earth do not have a word for “fish”, and have no classification between fish and non-fish. Would that mean that there is no objective fishness?

Bob: The aliens could observe fish, and the differences between fish and non-fish.

Alice: They could observe morally good and morally not good humans, and the differences between their behavior.

Bob: As I already said, without human input they would not be able to recognize those specific differences (i. e., between morally good and morally bad people). Instead, the aliens would have a number of psychological criteria to classify humans, and some of the differences in behavior between morally good and morally not good people would be captured by some of those criteria, others by some other criteria, and so on, but there would be no specific classification matching the differences between morally good or not good humans.

Alice: Let's assume so – though I don't know about that. My point is: the same could happen in the case of, say, fish. Let's stipulate that it actually happens; i. e., #3-aliens have no classification between fish and non-fish, and then, without human input, they wouldn't even think of that category. Would that suggest fishness is subjective?

If not, then I submit that probable alien classificatory schemes are not relevant when it comes to ascertaining whether a property is objective.

Moreover, you haven't even given any particularly good reason to think they would care to include OCD-ness in a classification.

Bob: I'm not convinced by your arguments, as usual. But let's leave that aside. Even if the aliens were able to recognize moral goodness by observing humans, they wouldn't value it positively. They would value #3-moral goodness positively, but not moral goodness.

Alice: But that's not related to the issue of objectivity. For that matter, a human psychopath may well not value moral goodness positively. Whether #3-aliens would value moral goodness positively is a question about alien psychology – i. e., about the psychology of some hypothetical aliens –, not a question about whether moral goodness is an objective property.

Bob: The problem is that in the alien scenarios you propose, the value would be mind-dependent, and so objective moral values would not exist.

Alice: It's not clear to me what you mean by that. But agents value things (other agents, properties, etc.). Psychopaths do not value moral goodness positively for its own sake – i. e., as an end -, though they might sometimes value it positively as a means to an end. But that does not preclude the objectivity of moral goodness.

Similarly, #3 aliens in the scenarios I described do not value moral goodness positively for its own sake – though they do value #3-moral goodness positively for its own sake -, but that does not preclude the objectivity of moral goodness.

Bob: But the psychopath is making a mistake, failing to see that moral goodness is intrinsically valuable – i. e., valuable for its own sake. On the other hand, you're not suggesting the aliens are making any mistakes.

Alice: X is intrinsically [morally] valuable if and only if X is positively morally valued for its own sake, i. e., it's a good thing in and of itself, regardless of consequences.

Clearly – maybe even tautologically -, moral goodness as a character trait is positively morally valued, and even regardless of results – so, for its own sake, or intrinsically.

In the alien scenarios, some things, traits, results, etc., are intrinsically morally valuable, whereas others are intrinsically #3-morally valuable, etc., with some overlap.

There need be no mistake on the part of the psychopath, by the way, who may recognize that some things are intrinsically morally valuable – i. e., morally good in and of themselves. He just wouldn't care about that, for its own sake, and he may well not be incurring m-e-irrationality, given that his own evaluative function does not positively value, for its own sake, what is morally valuable for its own sake.

Bob: If something is intrinsically morally valuable, it can't be that it's not intrinsically #3-morally valuable.

Alice: I thought we were talking about objective moral values, not intrinsic value, but never mind that. Why can't there be intrinsic moral value, intrinsic #3-moral value, etc., in the sense of 'intrinsic' that is relevant in colloquial moral discourse – if any?

Bob: It's clear that there would be no intrinsic moral value. I can assess that by my intuitive grasp of the relevant terms and moral intuitions in general.

Alice: After reflection, and going by my intuitive grasp of the relevant terms and moral intuitions in general, I disagree.

Bob: Let me try a different approach, then.

Let's consider a hypothetical scenario (the F-scenario): in a distant future, some evil human scientists decide to make a small community of AI, who will be vastly intelligent, but will not care about moral goodness – either as a character trait (as in 'a good person'), or as a property of outcomes, situations, etc. (as in 'a good result') - for its own sake.

In fact, the scientists come up with a well-defined function F that they just made up. F assigns a certain value to different situations, outcomes, objects, etc. - I'll use “things” as a generic term to name situations, outcomes, objects, etc.

The F-value of a thing may be positive, negative, or zero. For example, it might be that F(torture for fun in such-and-such context) = +295; F(helping the needy in such-and-such situations) = -9283; F(God) = -99999 (!!!). Granted, that's not precise, that wouldn't work as a definition of F, etc., but you get the picture.

Also, in some cases, some things have greater or lower F-value depending on their expected consequences, whereas in some cases, they have a certain F-value regardless of consequences.

So, the community of AI, whose members are called “functonians”, is successfully built. We may assume that in terms of m-e intelligence, language skills, math, etc., functonians vastly surpass even the most intelligent humans, and that functonians care about F-value, they can have negative feelings if they bring about F-bad results, etc. (we may assume some sort of F-consequentialism is programmed, to simplify, but if needed, we may add another function to play a role akin to moral obligations, etc.)
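Purely as an illustration (and this is just a sketch; the category names and numbers are arbitrary stipulations of the scenario, taken from the examples I gave above, and the zero default and the simple handling of expected consequences are just illustrative choices, not any sort of definition of F), one might picture the scientists' function along these lines:

```python
# Illustrative sketch only: an arbitrary evaluative function F, as stipulated
# in the F-scenario. The listed values mirror the examples given above; the
# zero default and the simple sum over expected consequences are merely
# illustrative choices.

F_VALUES = {
    "torture for fun in such-and-such context": +295,
    "helping the needy in such-and-such situations": -9283,
    "God": -99999,
}

def F(thing, expected_consequences=()):
    """Return the (made-up) F-value of a thing.

    Some things have a fixed F-value regardless of consequences; for others,
    the F-value also depends on the F-values of their expected consequences.
    """
    base = F_VALUES.get(thing, 0)  # things not listed default to zero
    return base + sum(F(c) for c in expected_consequences)

# Example: the functonians evaluate things by F, not by moral goodness.
print(F("helping the needy in such-and-such situations"))  # -9283
```

Again, the details don't matter; what matters is that F is well-defined, and that the scientists simply made it up.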

Going by your previous assessments, it would seem that you're committed to the conclusion that in that scenario, F-morality is objective, some things are objectively and intrinsically F-morally-valuable, and so on, despite the fact that the function F was just made up by those evil scientists, who engineered the functonians.

Alice: Actually, I don't think 'F-morally' would be a proper term. I used '#3-moral', '#4-moral', etc., because of the similarity between those things and morality, but you seem to be setting up F-ing to be a bit too different.

Bob: That's just an irrelevant terminological issue, but let's leave the word 'moral' out of it if you like. The key point is that you seem to be committed to the conclusion that in the F-scenario, there is objective F-goodness, F-badness, objectively and intrinsically F-good things, and so on, and that the functonians – if programmed without errors, at least – would not be mistaken in their evaluations, despite the fact that F was constructed arbitrarily.

Alice: That seems to be correct, but I don't see why it's key. There is no problem.

Bob: Of course there is a problem!

And it's key because the F-scenario shows that clearly, something like that would be neither objective nor intrinsic, and in fact if the F-scenario were possible and happened, either the functonians would be massively mistaken about the objective value of things, or else we would be, or both, or there would be no objective value whatsoever. It's false that some things would be objectively and/or intrinsically good, but objectively and intrinsically F-bad, for example.

Alice: You are mistaken. There would be no mistake. Functonians would be talking about F-goodness, we would be talking about moral goodness, moral obligation, etc. Assuming the scenario, since some things are F-good, and there is a fact of the matter as to whether something is F-good, then clearly, there is objective F-goodness. And since some things have positive F-value regardless of their consequences, they are intrinsically F-valuable (i. e., for their own sake, not due to their consequences, or expected consequences, etc.).

Bob: But there is a sense in which F-value can't be intrinsic: intrinsic value has to be something ontologically different from other stuff, in some metaphysically important sense. In these scenarios, we can say things are 'F-valuable' only in the sense that the function assigns them a positive value. And similarly, if God did not exist, then nothing would be intrinsically morally valuable in the ontologically important sense I have in mind.

Alice: The concept of intrinsicality you have in mind in this objection is not a sense relevant in colloquial moral talk; it's not part of our moral language, even if some mistaken philosophical theories consider that there is such sort of thing. That's because the only relevant sense is that some things are good in and of themselves, regardless of consequences, as I reckon by conceptual analysis and moral intuitions.

Bob: You're mistaken. There is a sense of intrinsic value that is metaphysically important, and relevant in our moral talk, and is not the sense you have in mind, namely the sense that some things are good regardless of consequences.

Alice: Why do you think so?

Bob: I reckon that's the case by means of conceptual analysis and moral intuitions.

Alice: I reckon that's not the case by means of conceptual analysis and moral intuitions.

Bob: Some philosophers would agree with me on this.

Alice: Some philosophers would disagree with you on this.

Bob: Okay, I guess we'll have to disagree on this matter. By the way, our moral judgments do not have a built-in reference to species.

Alice: True, but I'm not saying they do. Rather, I'm saying that our moral judgments do not have a built-in claim about different aliens, AI, etc.

For example, analogously, our color terms do not say anything about whether aliens have color, #2-color, etc. There might be non-human very intelligent species (if the universe is large enough) with color vision (not #2-color, etc.), and/or there might be species with #2-color, etc. Our color language is neutral on this matter.

Our moral language seems neutral too, even if most people happen to believe that aliens who can talk, etc. - if they exist - have morality rather than #3-morality, etc. - just as most people believe that time is absolute, but that condition is not built into the meaning of our temporal language, and temporal relativity does not lead us to an error theory of talk about time.

So, the point stands: functonians would be talking about F-goodness, not moral goodness; judgments of F-goodness would have truth conditions very different from judgments of moral goodness, etc.

Bob: You're correct about color language and about temporal language, but mistaken about moral language. Moral language has ontological commitments that make objective moral values and/or duties and/or intrinsic moral goodness, etc., incompatible with the alien, AI, etc., scenarios we're discussing. I can tell that by reflecting on them, intuitively.

Alice: After reflection, I disagree about that sort of ontological commitment. My intuitions say otherwise.

Bob: I see. So, let me raise another issue: let's leave intrinsicality aside, and go back to the matter of objectivity. F-goodness would not be objective.

Alice: Yes, it would be.

Bob: No, it would not. In fact, if F-goodness were objective, it would be independent of what everyone believes. But surely, in the F-scenario, F-goodness is not independent in that manner. It depends on the beliefs of the humans who programmed the functonians and made up the function F.

So, F-goodness is subjective. And similarly, in the alien scenarios, #3-moral goodness, and even moral goodness is subjective.

Alice: No, F-goodness does not depend on belief. The evil scientists just made up the function, but which value the function assigns to a thing is an objective matter. Of course, after programming the AI, any of the scientists can make a mistake about which value the function assigns to some thing, just as a person can make up an evaluative function, and later miscalculate the value that the function she came up with assigns to an object.

Bob: But if the evil scientists changed the function, they would change F-goodness.

Alice: One might say that they would be making beings that care about some other thing, say F2-goodness.

Bob: That's an irrelevant terminological issue. Regardless of whether we say they changed F, or they came up with a new F2, the point is that F-goodness is not objective because those evil scientists just made it up!

Alice: You're mistaken about objectivity, but perhaps the following color scenario will convince you:

Let's say that in the future, some aliens – for whatever reason – decide to genetically modify some humans, and give them a different visual system, which keeps the usual human color perceptions, but associated with very different wavelengths – which are chosen by alien engineers.

Granted, there are limits to what the alien engineers can do when modifying human color vision, at least if one stipulates that they are constrained by nomological possibility, but for that matter, nomological possibility would also constrain the F function, since there are different possible functions that would require increasingly complex AI to compute them, and the aliens can't make arbitrarily complex computers due to a lack of sufficient energy.

Also, granted, the aliens might be more limited when it comes to the array of options they have when modifying human vision than in the AI case, but that is only a side issue. Moreover, for that matter, the aliens might make other intelligent beings that are not genetically modified humans, and get very creative when it comes to their visual system, and to their color analogues. Purely for example, even a computer with a camera and adequate software can paint any part of a video any color one wants – within a vast range -, or even associate any wavelengths a camera can detect with any color on a screen. That indicates that beings with vastly different color-like systems are nomologically possible – one just needs an adequately sensitive eye and a brain to interpret the input.

So, the point is, the aliens genetically modify those humans, and then leave GM humans on a distant planet, also suitably modified to support human life.

Bob: How do the aliens transport the GM humans so far away? The distances are huge, and FTL travel is very probably not nomologically possible. Moreover, how do they find a planet that can support human life?

Alice: It doesn't matter how they do any of that. It's an irrelevant feature of the scenario.

But for example, one may stipulate that the aliens modify a planet to make it capable of sustaining human and GM human life. That might take them millions of years, but that's not relevant. The aliens have plenty of time.

Also, the aliens might take frozen embryos on a spaceship, or whatever. None of that is important.

By the way, we're assessing the concept of objectivity, so it wouldn't even be required to restrict ourselves to nomologically possible scenarios.

Bob: Okay, so let's say the aliens make those GM humans, and leave some of them on a distant planet. Then what happens? What's your point?

Alice: After hundreds of years living there, those humans have a language including GM-color terms. Let's also stipulate that there was no significant evolution of their GM-color vision in those hundreds of years since they were left on the planet. If a longer period is required for them to develop that language, we may stipulate the aliens stealthily make sure the GM-color vision is not significantly altered while language is developed, etc., using genetic engineering, culling or whatever. The specifics are not relevant.

The point is that in the scenario, those GM-humans can and usually do make true GM-color judgments. And whether something is, say, GM-green, is a matter of fact, not a matter of opinion.

So, just as greenness is objective, so is GM-greenness, despite the fact that the aliens just invented GM-color vision.

So, if you accept the objectivity of greenness and of GM-greenness, why do you not accept that F-goodness would be objective as well?

Bob: I'm certain F-goodness could not be objective in the relevant sense, and while I'm not entirely sure, it seems to me you just gave a pretty good argument against the objectivity of color. But who cares about color? We care about morality.

Alice: No, that's the wrong conclusion. It's a matter of fact whether, say, the traffic light was green, and it's a matter of fact whether some fruit on the distant planet is GM-green, or green – in the scenario.

And there is of course the possibility of error in the case of color, and of GM-color. And two GM-humans can disagree about whether the fruit was in fact GM-green.

Bob: Why would they disagree? Didn't they see the fruit?

Alice: Maybe one of them didn't look carefully enough, or had a visual illness, etc. It's not important. For example, sometimes human eyewitnesses in a court case give conflicting reports about the color of an object. So, some humans make color mistakes. In fact, color mistakes are not even so unusual - though how unusual they are is a side issue, not relevant to the matters at hand. Similarly, in the GM scenario, GM-humans can make GM-color mistakes. And there is color disagreement, and similarly, in the GM scenario, there can be GM-color disagreement.

But more importantly, there is a fact of the matter as to whether something is GM-green, etc.

Bob: Why does that show that GM-color is objective?

Alice: Well, if you're suggesting that even though in the scenario there is GM-color disagreement, and GM-color errors, and even though GM-color matters are matters of fact – not matters of opinion -, and even though some objects are GM-green, etc., there might not be objective GM-color, then one might as well ask why you think there is objective color, and moreover, why you think there is objective moral goodness, or objective moral wrongness, etc.

Bob: Okay, fair enough. I guess GM-color is objective after all, even if the aliens just made up that particular sort of color vision.

Alice: So, the fact that those evil scientists made up the function F in the F-scenario, does not preclude objectivity of F-badness, F-goodness, etc., either.

Bob: It's very difficult to come up with a correct definition of colloquial terms like 'objective', but whatever the reason is, it seems clear to me that F-goodness could not be objective. But let me try something else: social contract theory.

Alice: In my assessment, that is a false theory.

Bob: I agree it's false, but that's not what I'm getting at. My point is that for that matter, going by your assessments of objectivity, if social contract theory were correct, morality would be objective. But that is clearly false, since different societies could come up with different social contracts.

Alice: Actually, that would not threaten objectivity, in the sense Craig uses the word, either – Craig believes it would, but as in the alien case, Craig is mistaken.

A good analogy to see this is legality – broadly speaking, including constitutionality when there is a constitution, etc.

Legal matters are – at least usually – matters of fact, not matters of opinion. So, they are objective in the relevant sense. And of course, there is plenty of legal disagreement, legal error, and so on. And some things are legal, some aren't. So, there is objective legality. But that is so even if different people in different societies can make different laws.

Bob: But that's certainly not the sort of objectivity that Craig has in mind. In fact, he says clearly that social contract theory would not be objective [9], since morality would not be independent of human opinion.

Alice: I base my assessment on Craig's own explanation of what 'objective' means in his argument[8][9][15]. He is mistaken about what would be objective under that conception of objectivity. As the example above shows, whether – say – some behavior is legal or not does not depend on anyone's opinion. Even if some humans make the laws, it is a matter of fact whether a certain behavior is legal, and there is legal disagreement, legal error, and so on. The same would be true of morality if the social contract theory Craig mentions[9] were true – it is not true, but that is another matter.

Bob: But you're leaving aside some of the things Craig says when explaining what he means by 'objective'. You're picking and choosing.

Alice: I'm considering his explanation of what he means by objective (or, if you like, what 'objective' means, in that context), and ruling out some of his claims about what would or would not be objective, in that sense of the word 'objective'.

That said, if one takes all of the claims that Craig makes when defending and/or explaining his metaethical argument as a guide to what Craig means by 'objective' in the context of his defenses of the argument, then after reflection one ought to conclude that Craig is using the word 'objective' inconsistently in the defense of his metaethical argument.

Purely for example, he claims that if social contract theory were true, then morality would be subjective, because different contracts would yield a different morality, like the morality of Nazi Germany, South Africa during the Apartheid, etc., and allegedly that would make morality subjective because humans make the contracts. But it's apparent that such contracts would not prevent plenty of moral error, moral disagreement, etc., just as there is plenty of legal error, legal disagreement, etc. Yet, Craig maintains that moral disagreement and error presuppose the objectivity of morality. [6]

Moreover, legal matters are matters of fact, not matters of opinion – at least usually -, so they are objective. The same would happen in the moral case, under social contract theory. The fact that some humans made up the law or contract has no bearing on any of that.

In some other statements, Craig confuses two very different matters, and so on.

On the other hand, if one assumes Craig is using the word 'objective' consistently in the context of his metaethical argument, and takes only some of his claims as a guide to what he means – which ones is something one may need to assess by context –, then some of the other claims he makes are false – he just made some serious errors in assessing what is or would be objective, in the sense in which he is using the word 'objective'.

Take your pick.

Bob: I think Craig made no mistakes when he explained what he meant by 'objective' and why social contract theories are not forms of objective morality – i. e., there would be no objective moral goodness, etc., if they were true.

Alice: I already showed conclusively that he made serious mistakes in that context.

Bob: I disagree. But in any case, the point is that without binding moral duties, there would be no objective moral duties. Now, how would moral duties bind species#5, given that they only have #5-morality?

Alice: There are options, like:

1. They would have #5-moral obligations but not moral obligations.

2. They would have moral obligations linked to the #5-good and #5-bad instead of the good and the bad.

3. They would have both #5-moral and moral obligations, but they would only care about the former.

There are other options, but I don't need to take a stance, since none of them seems to be a problem for objective morality, or for objective #5-morality.

Bob: Let's tackle your first option: if aliens of species #5 only have #5-moral obligations but not moral obligations, then they're not bound by morality. At most, they're bound by #5-morality. Besides, it's extremely implausible. Don't you think if they were to, say, experiment on humans causing horrible pain just to do science, they would be breaking their moral obligations?

Alice: That's not clear. Assuming 1., those aliens do not have a sense of right and wrong, but only a sense of #5-right and #5-wrong. Can beings who can't tell right from wrong (not without first studying humans or other beings with a sense of right and wrong, anyway) have moral obligations?

It's an interesting question, but not one I would have to take a stance on.

But still, if objective morality requires rejecting option 1., you haven't shown that there is objective morality. Yes, granted, the Holocaust was immoral, and it would have been so even if the Nazis had convinced everyone that it wasn't. Also, moral matters are matters of fact, not matters of opinion. And so on. But if objectivity in the sense in which Craig is using the word requires more than that, including claims about aliens, then I don't know that there are objective moral values and duties.

Bob: Well, I disagree. Let's now tackle option 2. In that case, maybe their obligations would be radically different from ours.

Alice: I'm not sure how radically. That's a matter of alien biology. Still, that particular point seems to count against option 2. But again, I needn't take a stance, so this is not a problem for my views.

Bob: Okay, how about 3?

They wouldn't care about their moral obligations. But would they be acting irrationally if they only complied with their #5-moral obligations, but not with their moral obligations?

Alice: In terms of means-to-ends rationality, generally no. And I don't see any sense of 'rational' in which they would be acting irrationally.

Bob: But I'm not talking about m-e rationality. Let 'v-rationality' stand for the sort of rationality under which there are values that it is irrational for any agent to have. Would it be v-irrational for them to [positively] value being #5-morally good, but place no value on being morally good?

Alice: I don't need to take a stance, but if there is some concept – i. e., v-rationality -, maybe it would be v-irrational for them to [positively] value being #5-morally good, but not [positively] value being morally good.

But that's not a problem. For that matter, we may stipulate that #5 aliens wouldn't care about v-rationality, but about #5-v-rationality, and maybe it would be #5-v-irrational for us humans to [positively] value being morally good, but not [positively] value being #5-morally good.

Bob: The point is that moral values and duties would not be binding.

Alice: I'm still trying to figure out what you mean by “binding”, given that moral statements would still be true or false regardless of what any person believes; moral matters would still be matters of fact, etc.

Regardless, let me put it this way: I'm committed to the view that the Holocaust was immoral, and it would have been so even if the Nazis had convinced everyone that it wasn't. I'm also committed to the view that moral matters are matters of fact, not matters of opinion. And so on.

If that is enough for moral values and duties to be objective in the sense in which you and Craig are using the words, then I accept that the second premise of the metaethical argument is true. But nothing that happens with aliens or alien moralities will help your case for premise 1 of Craig's metaethical argument.

If moral objectivity, in the sense you and Craig are using the words, is incompatible with the existence of #5-moral goodness, #3-moral goodness, etc. (all properties different from each other), then I do not accept premise 2.

Bob: It's obvious that there are objective moral values and duties, under a conception of objectivity that makes different alien moralities impossible. If those aliens made judgments very different from ours, either they would be very mistaken, or we would be.

Alice: I disagree. But let's go back to your F-scenario, or rather a variant of it. Let's suppose for the sake of the argument that DCT is true as you claim, but that someone programs the functonians. What would the functonians be mistaken about?

Bob: Actually, they would not even be conscious, since they would have no souls. They would not value anything; they would just look like they value it, etc.

Alice: That's not the issue. Let's modify the scenario, and let's consider the G-scenario. It's like the F-scenario, but instead of silicon-based AI, the scientists use genetic engineering to make beings with huge brains, and language, math, engineering, etc., skills vastly superior to those of the smartest human beings, and who evaluate things very differently from the way humans do, following a function G, which is not as horrible as F, but is very different from anything like human evaluations. Let's call these beings 'gontonians'.

Let's say gontonians would properly reckon that the existence of God is a G-bad thing – intrinsically so -, and surely wouldn't be taking orders from her even if they believed she exists.

What would you say then, about the gontonians?

Bob: I don't know, but plausibly, they would be making false moral claims, not true G-claims. They would be vastly confused about moral goodness, moral obligations, etc. They would have moral obligations if and only if God gave them commands, and if so, their obligations would be God's commands.

Alice: Whether they would have moral obligations is not the point. The point is that the gontonians would make objectively true G-good judgments, judgments of G-obligations, etc., regardless of God's orders, or resemblance to God, etc., because they – i. e. the gontonians – would be the users of the words, and the meaning of their words would be determined by their usage, not by God's words or properties.

Bob: It's their language, but actually, the gontonians would be using their terms to mean the same as we mean by moral terms. The fact that they are making evaluations like that implies as much.

Alice: It doesn't. And we may even stipulate that the gontonians themselves assess that G-goodness is not moral goodness, but they don't care about moral goodness. They care about G-goodness. There is no error on their part.

Bob: No, the gontonians would be vastly confused about morality, due to the actions of their evil makers. And it seems they would also be vastly confused about metaethics. But perhaps God would allow them to see moral truth. What do I know?

Alice: I guess we'll just continue to disagree on that.

Bob: I guess so. Back to the F-scenario, I realized that we do not need to actually make the functonians. In fact, if you were right, then for each function F that assigns a certain value to things, there would be multiple properties – many, perhaps infinitely many of them instantiated -, like, say, the property of having F-value 394.

But that gives you a massively bloated ontology.

Alice: One can turn the argument around: given that you accept that there is objective color, similarly we don't need to actually design the GM humans to have GM-color, etc. Would that bloat your ontology?

But moreover, each of those properties you bring up might be identified with some disjunction of conjunctions of simpler properties, etc., as in the moral goodness suggestion. So, it does not follow that there would be so many different properties, any more than that follows from the fact that one may consider disjunctions of conjunctions of ordinary properties – unless the identifications fail, which might or might not be the case, but you haven't shown it is.

Regardless, it seems to me that chances are there are plenty of (perhaps not instantiated) properties, so I'm not sure how an ontology with many properties would be problematic. What is vastly more problematic is your implicit commitment to certain exobiology claims.
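
(A toy sketch might make the point about F-value properties more concrete. The function and the numbers below are made up purely for illustration – they are not meant to capture the content of any actual valuation function F; the only point is that, once such a function is given, predicates like 'having F-value 394' come along for free.)

# A made-up valuation function F: it assigns a number to a toy description
# of a state of affairs.
def F(state):
    # Purely illustrative scoring, based on two made-up features.
    return 400 * state["beings_suffering"] - 6 * state["promises_kept"]

def has_F_value(target):
    # Returns the predicate 'having F-value equal to target'.
    return lambda state: F(state) == target

has_F_value_394 = has_F_value(394)

example_state = {"beings_suffering": 1, "promises_kept": 1}
print(F(example_state))                # 394
print(has_F_value_394(example_state))  # True
# Nothing needs to be built or 'made' for such predicates to be definable:
# for each function F and each value F can take, there is a corresponding
# predicate – which is what Bob worries bloats the ontology, and what Alice
# takes to be unproblematic.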

Bob: Okay, so just to be crystal clear: do you hold that scenarios like those alien scenarios – or relevantly similar – are likely to happen – at least eventually?

Alice: If you mean invasions of Earth and things like that, then no. But if you mean aliens evolving differently and, as a result, ending up with some alien morality that is different from morality, then yes, given sufficient time and space, that seems probable to me.

That's not to say they probably exist now. I don't know whether they do. But given that the universe will last for a very long time, if it hasn't happened already, then it seems probable to me that at some time in the future, some intelligent social aliens will evolve, with complex language, capacity for space travel, etc. But it's a very tentative assessment.

Now, if that happens – i. e., if such aliens evolve -, I would expect that they probably would have something like morality, though I don't know how similar their social contexts – i. e., the contexts in which they use their moral-like language - would be to ours.

In any event, a considerable similarity between their alien morality and morality would be unsurprising, but a match would be surprising in my view, even if they evolved or will evolve from something closer to our ancestors than octopuses, orcas or elephants are.

All that said, this is a very tentative assessment.

Moreover, if I'm making a mistake in my probabilistic assessments of the alien scenarios, then my mistake, but that has no bearing on the metaethical points I made, or the points about objectivity.

On the other hand, you do seem committed to some exobiology claims.

So, let's also make this clear: do you hold that smart aliens with faculties relevantly different from our moral faculty do not exist – and will not exist at any point in the future of the universe – or that, if they do, those aliens would be very mistaken about morality?

Bob: Yes, that seems very probable.

Alice: Thanks for making that stance clear. It's a clearly unwarranted exobiology claim.

Bob: No, it's warranted on theism.

Alice: Theism itself is unwarranted – at least, after sufficient reflection -, and if theism commits you to such exobiology claims, then that also provides an extra argument against theism. And if it's only the particular type of theism you believe in that is committed to those exobiology claims, that provides an extra argument against that particular type of theism. But we're getting side-tracked. Would you like to go back to discussing the metaethical argument?

Bob: I disagree with you about warrant, etc., but okay, let's go back to the metaethical argument. So, even if I granted just for the sake of the argument that a species-specific morality might be objective in the relevant sense, there are further problems for a non-theistic view. For instance, that evolved morality would be the herd morality. But why follow the herd morality, instead of self-interest?

Alice: Human ancestors never lived in herds, but leaving your derogatory term aside, the point is that humans also value not behaving immorally, doing morally praiseworthy actions, etc.

Now, it's very common to talk about 'self-interest' referring by that term to some of the interests and/or values of a human being, but not all of them, so that's fine. But one ought to keep in mind that human beings – non-psychopaths, at least – also care about other human beings. Humans tend to care the most about close family members, or a spouse, etc., then about friends, etc., and then even about strangers. So, it's not that human beings only care about self-interest.

Moreover, non-psychopathic human beings normally do value [positively] being good people, not behaving immorally, etc. - an exception might be some philosophers (e. g., error theorists) and a few other people, but that's a minuscule proportion of the human population.

So, the values of non-psychopathic humans are certainly not limited to self-interest. Hence, even in terms of m-e rationality, we normally have reasons not to behave immorally, to behave in a praiseworthy manner, etc.

Bob: But even non-psychopathic humans sometimes value things that put them in conflict with the requirements of morality. Let's say a public employee is offered a bribe. He positively values not behaving immorally, but he also positively values having more money, which he can get if he takes the bribe. Furthermore, suppose that corruption is widespread where he works, his bosses are in on it, etc., so he shouldn't expect punishment for taking the bribe. On the contrary, his job situation will become worse if he refuses – he would be seen as perhaps dangerous, a wild card who might spill the beans, etc. And he positively values being on good terms with his bosses. So, there is the question of what he values the most, given the situation, and what's the most likely result if he chooses one course of action vs. the other, etc.

Alice: What's your point?

Bob: What if, considering all of his values (i. e., what he values and how much he values it, positively or negatively), and the probability of the results, the expected value of behaving immorally – in this case, taking the bribe – outweighs that of not behaving immorally? Maybe all things considered, from the perspective of m-e rationality, he ought to take the bribe, which would be immoral.
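
(Bob's question can be illustrated with a toy expected-value calculation. All of the numbers and probabilities below are made up purely for illustration; neither Bob nor Alice is committed to them.)

# Each course of action is represented as a list of (probability, value) pairs,
# one pair per consideration the employee cares about.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

take_bribe = [
    (1.0, 10),    # extra money (positively valued)
    (1.0, -6),    # behaving immorally (negatively valued)
    (1.0, 4),     # staying on good terms with his bosses (positively valued)
    (0.05, -50),  # punishment, judged very unlikely in this scenario
]
refuse = [
    (0.9, -8),    # his job situation probably becomes worse
]

print(expected_value(take_bribe))  # 5.5
print(expected_value(refuse))      # -7.2
# With these made-up numbers, taking the bribe has the higher expected value,
# which is the kind of case Bob is describing.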

Alice: At least, the vast majority of situations people actually face are not like that, but that aside, let's say that that happens. What is the objection?

Bob: The objection is that in a situation like that, even a non-psychopath, all-things-considered ought to behave immorally.

Alice: In the m-e sense of 'ought', if the end in the 'all-things-considered' stipulation is to maximize expected value, then yes, maybe he ought to.

In the usual moral sense of 'ought', then obviously – I'd say transparently tautologically – he ought not to behave immorally. And if the usual moral sense of 'ought' reduces – but I'm not saying it does – to a m-e sense in which the end is not to behave immorally, then the answer remains the same: no – obviously, and transparently tautologically.

But that would still not challenge objective morality. So, what is the difficulty?

Bob: That would destroy the action-guidingness of moral judgments.

Alice: No, it wouldn't. In most cases, it would still be overall m-e irrational to behave immorally. Granted, in some cases that would not be so. But that's probably true. So, I see no problem.

Bob: What about psychopaths? Maybe they m-e rationally ought to do atrocious things, all things considered.

Alice: Usually, psychopaths have instrumental reasons not to behave immorally, such as punishment avoidance. But sometimes they have no such reasons. We've already been through this before. I see no challenge here.

Bob: But if theism is true, it would never be m-e rational for a human being to behave immorally, because if theism is true, people ought to believe it's true [that's an 'ought' of epistemic rationality], and ought to factor in afterlife punishment.

Alice: What a person should do in order to maximize expected value depends on the information available to them, and after reflection, people should conclude that theism is false. Not to mention all sorts of other problems. But those are issues for another debate.

Even if your claim – that on theism, it would never be m-e rational for a human being to behave immorally – is both warranted and true, that's no good reason to believe that theism is true, or to assign greater probability to the hypothesis that theism is true than before.

Bob: I disagree with your assessments about theism, of course. But that aside, there are other objections to the views your replies suggest. For example, the problem of moral disagreement. If God did not exist, our observations of moral disagreement would be a powerful argument against objective morality, even in the sense in which you understand the word 'objective'.

Alice: Craig says that moral disagreement presupposes objective morality[6]. Are you implying Craig is mistaken in thinking so?

Bob: Let me rephrase: If God did not exist, our observations of apparent moral disagreement would be a powerful argument against objective morality, even in the sense in which you understand 'objective'. They would indeed work as an argument from apparent disagreement to the conclusion that people are talking past each other, and so there is no common meaning of moral terms, etc.

Alice: Two points:

1. If our observations of apparent moral disagreement were a powerful argument against objective morality for that reason, it would be improper to deny that conclusion by assuming theism; rather, that would indeed be a powerful argument against objective morality.

2. One might mirror your argument: If a c-god did not exist, then apparent disagreement about cruelty would be a powerful argument against objective cruelty.

At any rate, if you think you do have an argument that supports theism based on disagreement or apparent disagreement, I would invite you to make it, but I will point out that all of this is already far removed from Craig's metaethical argument.

Bob: I disagree with your points 1. and 2., but I reckon that's a matter for another debate too. Now, let's go back to the alien scenarios, because there is an objection I haven't raised yet: Moral Twin Earth. Horgan and Timmons construct “Moral Twin Earth” scenarios[17][18], which are relevantly similar to your alien scenarios, and in Moral Twin Earth there is actual disagreement between Earthers and Twin Earthers, rather than miscommunication.

To be clear, I do not endorse the authors' metaethical views, and in particular – of course – I hold that there are moral properties and moral facts, but still, their Moral Twin Earth scenarios are useful in this context because it is intuitively clear that Earthers and Twin Earthers would actually disagree, rather than talk past each other. Your alien scenarios may obscure that fact for some people by introducing aliens that are more different from humans in a number of ways, but the point remains that the aliens and humans would disagree – rather than talk past each other.

Alice: The fact that the aliens are different from humans in several respects other than morality does not obscure any facts. If anything, the fact that t-humans look so similar to humans in many respects might obscure the fact that in the Twin Earth Scenario, they are in fact talking past each other - at least, if one may set some worries aside.

By the way, I would rather call the Twin Earthers “t-humans” to make it clear I don't think they would be human, at least given the psychological differences required for convergence to such different moralities.

Bob: What we call them is not the point. What worries are you talking about? And how would Moral Twin Earth be any [relevantly] different from your alien scenarios?

Alice: I do not claim that they are relevantly different. But I'm not certain that they are not. It depends on how one construes the Moral Twin Earth scenario.

On that note, a first worry is that the Twin Earth scenario in a paper[18] stipulates that Earthers converge to a consequentialist folk morality – which I think is false, but not the point -, and Twin Earthers converge to a non-consequentialist folk morality.

But that only seems to tell us about convergence in regard to obligations, not with regard to moral goodness or t-moral goodness, which might still be converging to the same place so to speak, and so at least Earthers and Twin Earthers might not be talking past each other with regard to goodness – and that would be different from the alien scenarios.

Still, I think one may properly stipulate that Earthers and Twin Earthers also converge differently in that regard.

A second worry is that the Twin Earth scenario in question[18] does not specify whether Twin Earthers would converge to their deontological morality only on their own – i. e., without Earthers -, or even if Earthers are present. That stipulation may not be required in the case of Horgan and Timmons's paper – given that they were replying to a specific sort of moral realist theory -, or perhaps it is implicit, but it seems relevant in this context, since a convergence that is dependent on whether they meet Earthers – or whether they study and understand the causes of the differences in their psychology, etc. -, raises a number of other issues, and that might make the scenarios relevantly dissimilar from the alien scenarios.

In the alien scenarios, it seems clearly implied that whether they meet humans does not affect where their morality converges to, but just in case, one may also add the stipulation that the aliens converge to different folk moralities under ideal reflection – assuming there is such reflection - even if they meet humans, or each other. Given that, one would need to add that condition to the Twin Earth scenario too, in order to make sure it's relevantly similar.

A third worry is how similar the moralities – or analogues to morality - are. For example, if different people in the US use the word 'car' very slightly differently – a difference imperceptible in daily life – we do not say they're talking past each other. There is some tolerance in our colloquial language.

So, if the differences between Twin Earth folk morality, and Earth folk morality were so minuscule, maybe – with the usual tolerance in colloquial language – it would be proper to say they're not talking past each other.

I think the alien scenarios clearly indicated a much greater difference than that, given the fact that the psychological differences between any of those aliens and humans (or between each other) appear considerably greater than between Earthers and Twin Earthers, and in particular their social structures are in fact quite different from ours. But one may also add the stipulation that the differences in judgments and patterns of judgments are clearly noticeable, even after reflection, etc.

As I understand the Twin Earth scenarios, the differences between Earthers and Twin Earthers are big enough for them to be talking past each other – especially given that it's stipulated that after reflection they converge to different moralities, and the deontological vs. consequentialist stipulation -, but if that is not so and I didn't interpret the Twin Earth scenario properly, that would not affect the alien scenarios or the conclusions based on them.

A fourth worry is that the Twin Earth scenarios were constructed in order to deal with some specific types of moral realism, and so they may assume some of the implications of those forms of realism in order to use them against them. But the alien scenarios do not make such assumptions.

There are other worries, but I think those are the most salient ones.

Now, even though I do think in the Moral Twin Earth scenarios – as I understand them -, Earthers and Twin Earthers are indeed talking past each other – in fact, after reflection, that seems clear to me -, even if I'm mistaken about that – because, say, I missed a relevant feature of the scenarios and misinterpreted them -, that would not affect my points about the alien scenarios.

Bob: Suppose you may set those worries aside – e. g., the differences in the Twin Earth case are not so minuscule, etc. Then, it's still clear that there is real disagreement between Earthers and Twin Earthers, not just miscommunication. Both Earthers and Twin Earthers would still disagree about what to do, since morality is action guiding.

Alice: There may well be conflict due to the fact that some of the actions of Twin Earthers might be – depending on what they are – properly regarded as a bad thing by Earthers and some Earthers may well be motivated to prevent those bad things, and conversely some actions by some Earthers might be properly regarded as a t-bad thing by Twin Earthers, and so on.

But for that matter, there may well be conflict between an Earther - a human, of course -, and, say, a lion that is killing other humans, or a Humboldt squid, etc.

This is not to say that there is a disagreement about any facts of the matter, between the lion and the person, or between the squid and the person.

Granted, Twin Earthers are vastly more intelligent than lions or squid and can talk, etc., but the point is that conflict does not entail disagreement about any facts, even if the word 'disagreement' may also be used in some contexts to mean 'conflict', or something along those lines.

However, there is no disagreement about facts, and in particular no disagreement about whether something is morally good.

Bob: But that is counterintuitive.

Alice: Not to me, after reflection.

Bob: I think you're biting a big bullet.

Alice: I don't think I'm biting any bullets. But I think you are, and your commitment to certain exobiology claim(s) is a particularly big one.

Bob: I disagree, of course. But let me raise a different challenge: even if there could be objective morality without God, morality would still be arbitrary. Some aliens would have a different morality, and morality would be just a byproduct of evolution.

Alice: The derogatory expression “just a byproduct” is clear, but the rest is not. What do you mean by “arbitrary”? By the way, why would you say “byproduct”, rather than “product”? - other than as a derogatory word, that is?

If you're suggesting it would be a side effect of some adaptations, that does not follow. It may well be that some or even most aspects of our moral faculty are adaptations. It might even be that all of them are.

Moreover, we are the result of unguided evolution, so it's hardly surprising that our moral faculty is also the result of unguided evolution. In which sense would that be arbitrary?

Bob: It's not just our moral faculty, but rather, evolution decides what is morally good or bad.

Alice: That's like saying that evolution decides what's red or green.

Bob: Well, in a sense, it does, if some aliens could just evolve some alien-color vision.

Alice: Of course, some aliens could evolve some alien-color vision. In fact, even here on Earth, other animals have very different other-animal-color vision. If that's all you mean by saying that evolution decides what's red or green, or what's good or bad, I would say that you're not using the word 'decides' in a standard manner.

At any rate, if all you're saying is that if God did not exist, some aliens could evolve some alien moral sense, then that's true. But so what? In what sense would that make morality arbitrary?

Bob: Let me put it this way: if we had evolved differently, maybe slavery just for profit would not be immoral. That makes morality totally arbitrary.

Alice: That's at best imprecise. If evolution had been different, maybe we would not exist, and while it would still be immoral for any possible human being to engage in slavery just for profit, maybe the beings who would live on Earth instead of us (say, alternates) would not have any alternate-morality such that it would be alternate-immoral to engage in slavery for profit.

That seems improbable, though. If they are intelligent social animals like us, probably, their alternate-morality would have an injunction against slavery just for profit. Alternate morality would not be morality, but there would probably be a large area of overlap, where they are similar.

Still, this is not central. What if slavery just for profit were not alternate-immoral?

That has nothing to do with what is immoral for us human beings to do.

Bob: But that shows morality would be arbitrary.

Alice: I don't see how. In which sense would morality be arbitrary?

Bob: In the sense it would depend on the contingent facts of evolution.

Alice: No, it wouldn't. It does not depend on evolution that it's immoral for any human being to engage in slavery just for profit. If evolution had been different, that fact would remain true. Maybe there would be no agents who would know that fact, or care at all about it, but what's the problem?

For that matter, if evolution had been different, but orange trees were like they actually are, their fruits would still be mostly orange, even if the alternates had a different visual system and didn't see or care about orangeness.

Bob: But then it would just be a byproduct or a product of evolution that there are agents who care about right and wrong.

Alice: Of course, as I said, we are the result of evolution. Why the negatively loaded word “byproduct”?

Bob: The point is that that would make morality arbitrary!

Alice: I don't see how. In which sense would morality be arbitrary?

Bob: I already explained that.

Alice: I see nothing in your statements supporting your claim that morality would be arbitrary in any interesting sense.

Bob: I disagree, of course, but given I will not be able to persuade you, let me raise another challenge: free will. If you believe God does not exist, how do you know that causal determinism is not true?

Alice: I don't know whether causal determinism is true or how that is related to God's existence, but at any rate, I'm a compatibilist, so I see no problem in that regard.

Bob: I see. I disagree, but a debate on free will is a matter for another time, and it seems we've covered the main topics, so let's call it a day.


3. Conclusion.

As I mentioned before, I reckon Alice wins the debate, hands down. But I invite any interested readers to try to improve Bob's arguments.


Acknowledgement.

Some of the main ideas in this post are from a poster that goes by the pseudonym “Bomb#20” at talkfreethought.org. I make no claim that he agrees with most or all of my points, though, since I also took some ideas from several other sources, and added several more I came up with.


Notes:

[a] I posted another reply to Craig's metaethical argument elsewhere, in a more traditional format – i. e., not as a hypothetical debate. Some parts of it are outdated, in the sense that I would write them somewhat differently if I did it now, but for the most part, I would make the same arguments today.

Still, some of the arguments are improved and/or more thoroughly developed in the dialogue above.

[b] One might use a more precise definition in order to deal with questions like 'What if the bar is heated? Does an oldmeter remain the same?', but that is not necessary in this context. In any case, one may just stipulate for the purposes of the example that 'oldmeter' is the distance between the lines in question even if the bar is heated.

[c] It is debatable whether there is an ordinary term 'great' precise enough to do the philosophical work required in that context, but I'm granting that for the sake of the argument, to simplify.

[d] There are differences between human languages when it comes to color, and some languages distinguish colors differently from others. Also, there are some differences between the color vision of different humans with normal color vision. But the point I'm trying to illustrate does not require that I address those issues, and it would make the example too long. At any rate, Craig actually considers red and green as examples of objectivity, even in the context of explaining what he means by 'objective' in the metaethical argument, and in the context of defending it. [13]

[e] I'm not a consequentialist, but I think it's plausible sometimes humans have a moral obligation to act in a certain way because that is required to prevent a bad result, or more precisely, a moral obligation to choose to act in a certain way because the agent reckons and/or should reckon that that is required to prevent a bad result, given the information available to her – even if given more information, perhaps she would have a different obligation.

This is not crucial to the reply to the alien objection, though. The basic point is that humans have morality, #3-aliens would have #3-morality, etc.

[f] Or even with some of the aliens, if – for instance - some of the aliens learned about [human] morality, made a computer based on human brains that can properly assess morality – not #3-morality, etc. -, and then used the computer to debate with humans, just for fun. But that's a side point.


References:

[-1] As argued by Matthew Flannagan in the comments thread on a post at the Secular Outpost.

Link to the post: http://www.patheos.com/blogs/secularoutpost/2015/09/21/divine-commands-and-informative-identity/

[0] http://www.reasonablefaith.org/does-god-exist-the-craig-law-debate

[1]

William Lane Craig, “The Most Gruesome of Guests”, in “Is Goodness Without God Good Enough? A Debate on Faith, Secularism, and Ethics”, edited by Robert K. Garcia and Nathan L. King.

Mark Murphy, “Theism, Atheism, and the Explanation of Moral Value”, in “Is Goodness Without God Good Enough? A Debate on Faith, Secularism, and Ethics”, edited by Robert K. Garcia and Nathan L. King.

Also, Morriston explains what Craig means by 'ontological foundation' or 'grounding' in “God and the Ontological Foundation of Morality”, Religious Studies (2012) 48, 15–34. Cambridge University Press, 2011.

Link: http://spot.colorado.edu/~morristo/DoesGodGround.html

[2] http://www.reasonablefaith.org/is-the-foundation-of-morality-natural-or-supernatural-the-craig-harris

http://www.reasonablefaith.org/how-are-morals-objectively-grounded-in-god

http://www.reasonablefaith.org/the-euthyphro-dilemma-once-again

[3] http://www.reasonablefaith.org/defining-god

[4] http://physics.nist.gov/cuu/Units/meter.html

[5] http://www.reasonablefaith.org/the-euthyphro-dilemma-once-more

[6] http://www.reasonablefaith.org/defenders-2-podcast/transcript/s4-19

[7] http://www.reasonablefaith.org/theistic-ethics-and-mind-dependence

[8] http://www.reasonablefaith.org/objective-or-absolute-moral-values

[9] http://www.reasonablefaith.org/defenders-2-podcast/transcript/s4-20

[10] http://www.reasonablefaith.org/defenders-2-podcast/transcript/s4-21

[11] http://philpapers.org/surveys

[12] Street, Sharon, “A Darwinian Dilemma for Realist Theories of Value,” Philosophical Studies 127, no. 1 (January 2006): 109-166.

“Reply to Copp: Naturalism, Normativity, and the Varieties of Realism Worth Worrying About,” Philosophical Issues (Nous), vol. 18 on “Interdisciplinary Core Philosophy,” ed. Walter Sinnott-Armstrong, 2008, pp. 207-228.

Source: https://files.nyu.edu/ss194/public/sharonstreet/Writing.html

[13] Sayre-McCord, Geoff, "Moral Realism", The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/win2014/entries/moral-realism/

[14] http://www.rfmedia.org/RF_audio_video/Defender_podcast/20040801MoralArgumentPart3.mp3

http://www.reasonablefaith.org/Those-Who-Deny-Objective-Moral-Values

[15] http://www.reasonablefaith.org/a-christian-perspective-on-homosexuality

[16] http://www.reasonablefaith.org/warrant-for-the-moral-arguments-second-premiss

[17] Horgan and Timmons, “New Wave Moral Realism Meets Moral Twin Earth”; Journal of Philosophical Research 16:447-465 (1991)

http://philpapers.org/rec/HORNWM

[18] Horgan and Timmons, “Analytical Moral Functionalism Meets Moral Twin Earth”, in “Minds, Ethics, and Conditionals: Themes from the Philosophy of Frank Jackson”, edited by Ian Ravenscroft. Oxford University Press (2009).

Link:

http://thorgan.faculty.arizona.edu/sites/thorgan.faculty.arizona.edu/files/Analytical%20Moral%20Functionalism%20Meets%20Moral%20Twin%20Earth.pdf

http://philpapers.org/rec/HORAMF

3 comments:

Fox ITK said...

Hi, I'm a big fan of your blog. It is, in my opinion, the best blog on these arguments available.

I have a question regarding Craig and infinites. Something strange seems to arise for me at least with the notion of atemporal simultaneous causation and an all powerful God.

You speak of nothing counter-intuitive about an infinite number of galaxies.

Presumably Craig's God could create atemporally an infinite number of stars, one for every natural number, in one singular act of power.


In one sense Craig allows for one universe to be created atemporally – why not two, though? It seems logically possible. Why not three, then? And so on. This still takes no time at all, so there is no maximal limit; there is a potential infinity that could be created with no impediment.

And yet Craig would say that, once created, there could 'always be one more'. So on one hand there seems to be no issue, no logical limitation, and yet Craig would not allow it.

Does this mean there is, at some atemporal state, a maximal limited number of stars God could create? Or a potential infinite that, once actualised, instantly becomes finite?

I understand how, when creating in time, Craig holds this problem off – there's only ever a potential infinite, because it takes time to create stars, say, naturally. But what if that potential infinite took no time at all?

Any thoughts? Is this just a Tristram Shandy-type scenario, easily refuted?

Thanks.

Angra Mainyu said...

Hi Fox in the Know, and thanks.

I think this discussion would be more suitable for a post on the Kalam argument, but briefly, Craig's position is that God cannot create infinitely many stars, galaxies, etc.; on the other hand, Craig does not say there is a maximum number n1 such that it's impossible that God creates more than n1 stars, and he apparently accepts that, possibly, God creates any arbitrary natural number of them.

So, on his view, it seems that for any natural number n there is at least one possible world W(n) where there are at least n stars, but no possible world W where there are infinitely many stars.

That seems to require that every possible world has a present time, else it seems there would be a possible world at which there are in total (i.e., including the future) infinitely many stars (or people, etc.).

Craig does say that if it's possible that God creates an infinity (e.g., infinitely many stars), it's not one at a time, but all at once, regardless of whether he does so in time or as the first creation. But he argues against that possibility on the basis of the Hilbert Hotel arguments and similar ones, not arguments from finite addition.

I don't think that the Hilbert Hotel argument (or similar ones) is persuasive at all. I addressed it in my reply to the KCA.