Thursday, February 9, 2012

Metaethical Arguments Provide no Support for Theism

Download in .pdf format

Alternative link


Metaethical Arguments Provide no Support for Theism



0) Introduction

1) Terminology

2) Linville's epistemic argument, and Darwinian counterfactuals

2.1) Color, truth, and extraterrestrials

2.2) Morality, truth and extraterrestrials

2.2.1) A side note on contact

2.3) Moral realism, or species-relativism?

2.4) A semantic challenge: morality for all sufficiently intelligent beings

2.5) Supervenience

2.6) The ancestral environment: a difference between color and morality

3) The road so far

4) Ontology

4.1) Ontology, property identity and semantics

4.2) Color ontology

4.2.1) Color and supervenience

4.2.2) Color antirealism

4.3) Moral ontology

5) The Open Question Argument

5.1) General considerations

5.2) Science, color and morality

5.3) Filling the gap

5.3.1) Moral goodness

5.3.2) Moral badness, immorality and moral wrongness

5.3.3) Moral obligations, and 'ought'

6) Prescription and description, 'is' and 'ought', and related matters

6.1) Science and description

6.2) Is and ought

6.3) Is, ought, and moral obligation

6.4) Morality, description, and prescription

6.5) Science and morality, part 2

7) Motivation

7.1) Psychology, psychopathy, and morality

7.2) Aliens again

7.3) Moral phenomenology and moral judgments

7.4) A potential evolutionary account

8) The road so far - II

9) Linville's argument from personal dignity

9.1) "Why is that immoral?"

9.2) Bayoneting alien cyborgs for fun

9.3) Delinquent mathematicians and alien robots

9.4) Someone has been wronged

9.5) Tracking mental properties: direct tracking, indirect tracking, and ontology

9.6) Attempted crimes and punishments

9.7) Moral obligations

9.8) Moral rights

9.9) Alternatives under evolutionary naturalism

9.10) Mind-independent value?

9.11) Darker and darker

9.12) Where is the bottom?

9.13) Persons and evolutionary naturalism

9.14) Values and perceptions

10) Disagreement

10.1) The Fall

10.2) A potential evolutionary hypothesis

11) Too many beliefs?

12) Heroism

13) Psychology, not ontology

14) Materialism

15) Freedom, libertarian 'freedom', and determinism

16) A practical diversion

16.1) Accountability

16.2) Motivation again

17) The failure of Divine Command Theories (DCT)

17.1) Ontological Divine Command Theories

17.1.1) Metaphysical possibilities

17.1.2) The moral obligations of a personal creator

17.1.3) Rebuttal to a potential theistic objection

17.2) Semantic DCT

17.2.1) Obligations and commanders

18) Copan's metaethical arguments against evolutionary naturalism

18.1) Moral truths and valuing

18.2) Valuing instrumentally and valuing finally

18.3) Arbitrary morality?

18.4) Explanatory power and bloated ontologies

18.4.1) Illness and moral badness

18.4.1.1) Language

18.4.1.2) Facts

18.4.2) Redness and moral badness

19) Conclusion

Notes and references



0) Introduction

1) In this article, I will make a case against metaethical arguments for theism.

I will mostly use Linville's[1], Craig's[2], and Copan's[3] arguments as examples, but this article is not limited to those metaethical arguments: instead, my aim is to show that no theistic metaethical argument provides any support for theism.

2) Metaethical arguments for theism essentially intend to show that some actual feature of morality is incompatible with non-theism, or at least with what Linville calls "evolutionary naturalism" (EN):

Linville defines EN as "the combination of naturalism and an overall Darwinian account of the origin of species".

Theistic claims may be epistemic (e.g., if EN is true, then there is no moral knowledge, or moral knowledge is undermined), or ontological (e.g., if EN is true, then there are no moral facts).

Now, I have some doubts about the coherence of the natural/non-natural distinction, but there is no need to get into that:

We may understand EN to mean that:

a) There are no souls or similar beings,

b) There are no Platonic realms, or generally Platonic objects, and

c) Human faculties – i.e., faculties shared by our species – are the result of an evolutionary process not guided by any designer.

I'm assuming that the concept of a soul is coherent, but it's very improbable that a theist could object to that without contradicting some of his theological views.

Still, if the concept of a soul is not coherent, let's understand EN as before, only removing the reference to souls or similar beings.

Similarly, if the concept of Platonic realms or objects is incoherent, let's assume EN as before, but removing the reference to Platonic realms or objects.

Showing that no theistic metaethical argument shows that, plausibly, if EN – so understood – is true, then there is no moral knowledge, or moral truths, etc., suffices to defeat all theistic metaethical arguments.

It is true that Linville's conception of evolutionary naturalism might be slightly different from the one I just defined, so if I say – for instance – that Linville argues that there is no moral knowledge under EN, I do so with the understanding and recognition that his claim might have been slightly different from what the previous understanding of EN would entail.

However, that is not a problem in this context, because:

a) His concept of evolutionary naturalism and the one I defined are probably very similar, even if not a perfect match, and

b) In any case, showing that Linville's metaethical epistemic arguments – and any similar ones – fail to show that there is no moral knowledge if EN is true, and under the understanding of EN I just defined, is enough to show that such arguments provide no support for theism.

3) Generally, I will not assess arguments against moral ontology, or against moral knowledge, that might be raised by different moral anti-realists, but are incompatible with theism.

That would be unnecessary in this context: since any defender of a metaethical argument for theism is committed to the failure of all such arguments, we may safely leave those arguments aside, without failing to address any arguments a theist might make.

4) Before I go on, I'd like to say that the main ideas on which I base this argument against metaethical arguments are from others, not from me.

In particular, I'd like to acknowledge an anonymous poster who goes by the nickname 'Bomb#20' at www.freeratio.org as the source of several of the key ideas I've used in this article.

Other ideas are quite common and come from a number of other sources, though I do not know where they originated, nor recall where I saw them first.

I've also used some ideas I came up with on my own, but I have no good reason to think I'm the first one to come up with them, so there is no claim of novelty on my part.

1) Terminology

1) By a 'moral agent' I mean a being who has moral properties (e.g., she may be morally good, morally bad, etc.), and/or some of whose actions have moral properties.

To be clear, moral agency is not about whether it's morally good or bad to treat a being in some way.

For instance, it's immoral for humans to torture cats for fun, but that does not mean that cats are moral agents. They are not.

2) I will use the word 'argument' loosely, to refer to both the formal argument, and the informal arguments used to support the premises of the formal argument. I think this is a common way of speaking, and context should prevent any ambiguity despite some notational abuse.

3) I have doubts about the coherence of the natural/non-natural distinction, so I will avoid those terms whenever possible. When it comes to supervenience, for instance, I will prefer to talk about whether moral properties supervene on non-moral properties, rather than whether they supervene on natural properties. I will address the matter in more detail later.

4) As stated in the introduction, by "evolutionary naturalism" or "EN" I mean the view that:

a) There are no souls or similar beings,

b) There are no Platonic realms, or generally Platonic objects, and

c) Human faculties – i.e., faculties shared by our species – are the result of an evolutionary process not guided by any designer.

I'm assuming that the concept of a soul is coherent, but it's very improbable that a theist could object to that without contradicting some of his theological views. Still, if the concept of a soul is not coherent, let's understand EN as defined above, only removing the reference to souls or similar beings.

Similarly, if the concept of Platonic realms or objects is incoherent, let's understand EN as defined above, but removing the reference to Platonic realms or objects.

5) By "strong intelligence" I mean an average human IQ, or greater.

6) When I say "mental state" I mean it in a general sense, including intentions, beliefs, etc.

7) I will use words like 'behavior' and 'behave' in a general sense, including omissions, unless context indicates otherwise.

8) I only refer to different parts of this article as "sections" or "subsections" - i.e., no sub-subsections, etc.; I will use links between the relevant parts of the document as required, to prevent any ambiguity.

2) Linville's epistemic argument, and Darwinian counterfactuals

So, I will begin my counterargument by addressing an epistemic objection that Linville raises, in his "Argument From Evolutionary Naturalism" (AEN), as a means to introduce most of the issues under discussion:

Linville: (p. 409)

Had the circumstances of human evolution been more like those of hive bees or Galapagos boobies or wolves, then the directives of conscience may have led us to judge and behave in ways that are quite foreign to our actual moral sense.

First, it should not be assumed that any kind of social organization is compatible, given the way our universe works, with the evolution of social beings with strong intelligence.

While we have no good reason to assume that our moral sense would have to be shared by every social species with strong intelligence, we should not assume that everything is an evolutionary possibility in our universe, either. That is an empirical matter that needs to be assessed.
Even though some environmental conditions may be very different – e.g., different planets – others are common to any social animal that is evolving and becoming increasingly intelligent – for example, the very fact that they're living in social groups of increasingly intelligent individuals – and that might constrain the kind of social organization strongly intelligent social animals might have, as a result of the evolutionary process.

Second, leaving that aside, it is not true that a different evolutionary history would have led us to behave in such ways, quite foreign to our moral sense.

In fact, we would not have behaved in any way at all, because we would not have existed at all.

Third – and this goes to the heart of the matter -, it seems that those entities who would have evolved in a different environment would not have made moral claims at all.

But this is a point I will address in much greater detail below.

Fourth, for all we know, scenarios involving different evolutionary histories and senses other than a moral sense may very well be actual:

In other words, for all we know, something like that may have already happened.

On that note, let's suppose that at least one social species s with strong intelligence and with a kind of sense more or less akin to our moral sense – say, an s-moral sense – evolves, on average, for every ten million galaxies.

In that case, there would be over a million such species.
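A quick back-of-the-envelope note on that figure (the assumption that the universe as a whole contains N galaxies is mine, not part of the original argument):

number of such species ≥ N / 10^7,

so the 'over a million' figure corresponds to assuming N > 10^13 – i.e., considerably more galaxies than the roughly 10^11 often estimated for the observable universe alone.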

Now, it seems that someone raising Linville's AEN, or any argument that resembles it, is committed to the claim that one of the following is true:

P1: It's not the case that at least one such social species with strong intelligence and an s-moral sense evolves, on average, for every ten million galaxies.

P2: There are over a million such species, and all of the s-moralities are actually morality: their s-moral sense picks the same properties as the moral sense.

So, for each such species s, s-morality is the same as morality, s-moral properties are moral properties, and so on.

In other words, all of them, on every single planet on which they evolved, got a system that tracks just the same properties as our own moral sense.

P3: There are over a million such species, but only humans and perhaps some of the others are special ones and have morality, whereas the others have their respective s-moral sense, which fails to track moral properties, so they're vastly confused.

It's even worse, though, because if the defender of the AEN did not reject P3 as very implausible, then that would undermine their belief that we're among the select ones. In other words, if other species ended up with unwarranted moral beliefs, why should we believe that we're not among those?

So, it seems they're even committed to the view that (P1 v P2) is true, which is indeed a very bold claim about exobiology, with no good evidence to back it up. Moreover, even if we take the threefold disjunction – i.e., (P1 v P2 v P3) – that's still a very bold claim about exobiology, without any good evidence to back it up.
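Schematically – this is a restatement of the commitments just described, not Linville's own formulation:

a) The defender of the AEN is committed to the disjunction (P1 v P2 v P3).

b) Treating P3 as a live option would undermine the defender's own belief that our moral beliefs are warranted, so P3 should be rejected as very implausible.

c) Hence, the defender is committed to (P1 v P2).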

Thus, the theistic position is unwarranted.

In other words, it's the theist defender of an AEN who is espousing an unwarranted view.

To be clear, I'm not making a claim that the s-moralities would be variable. In other words, I'm not denying the disjunction. I'm merely pointing out that such a position is unwarranted.

It might be that the constraints on the evolutionary possibilities of social entities with strong intelligence are such that every such entity gets morality – i.e., all the s-moralities are in fact morality -, but that's surely not an assumption that we should make.

In fact, it seems animals with similar IQ may have rather different evolved social propensities, so that level of constraint seems rather implausible to me, but there is no need to take a stance on that here, and in any case it's a matter for biologists to study.

In any event, if the constraints are so strong that no such variation would occur, then that alone blocks Linville's AEN, or any other similar epistemic metaethical argument.

However, since that variation would – contrary to Linville's claim – not be a problem for moral knowledge under EN, let's assume, from now on, that on EN such variation happens, and let's see why theistic metaethical arguments fail regardless.

2.1) Color, truth, and extraterrestrials

Another defender of a metaethical argument for theism – William Lane Craig – brings up an analogy between color and morality. I will extensively use that analogy to make a number of points, not only when addressing some of Craig's metaethical arguments, but Linville's arguments as well, and generally when addressing a number of metaethical issues that some theists raise or might raise.

William Lane Craig: (from his podcast) [2]

We could imagine a world of color blind people, where everybody was colorblind, so that nobody saw there was a difference between red and green, but that wouldn't mean that there isn't any such thing.

Or imagine that a world where everybody, say, was deaf: that wouldn't mean that there were no sounds or something like that; that wouldn't mean that sound waves weren't therefore produced.

So, don't think that moral values are something that just sort of exist in your head. I'd say they're external to the body. They're "out there".

Okay, so there are differences between red and green, and differences between right and wrong.

A person who accepts EN may of course grant all of that, without any difficulty.

Now, we already know that there are animals that don't see the world in the same kind of colors we do.

It's entirely possible that there are social beings with strong intelligence on other planets in other galaxies who have a different visual system.

Let's say that zurkovians are one such species.

They experience something similar to color vision, but they don't see quite the same things.

In particular, let's say an object that we see as red and green looks monochromatic to a zurkovian with normal z-color vision. Also, they see no difference between a red traffic light and a green traffic light.

On the other hand, zurkovians perceive differences in objects that we see as monochromatic, which actually have different reflective properties in part of the ultraviolet spectrum which is visible to them.

So, are the zurkovians making false color statements all the time?

Is their color vision not truth-aimed, and only ours is?

Are humans the special ones, and those poor zurkovians would have to accept a z-color error theory?

The answer is clearly no.

Just as humans make (generally) true color statements, and our color vision is truth-aimed, zurkovians make (generally) true z-color statements, and their z-color vision is truth-aimed as well.

There is a difference between red and green, as Craig points out.

However, zurkovians cannot see it.

On the other hand, there is also a difference between, say, z-red and z-green – two colors that zurkovians talk about -, but humans cannot see it.

Furthermore, it may even be that z-green looks to zurkovians as green looks to humans, and z-red looks to zurkovians as red looks to humans – assuming similar perceptions among different humans, which is at the very least conceivable -, but it remains the case that z-red is not red, z-green is not green, and there are real differences between red and green that zurkovians can't see, and real differences between z-red and z-green that humans can't see.

Humans and zurkovians, then, have different visual systems, and both can see real differences in the world around them, even if not the same differences.

If, in a distant future, humans and zurkovians were to make contact, then if they thought that they could correctly translate z-color statements into color statements and vice versa, they would be making a mistake and talking past each other.

But plausibly, those scientifically advanced star-faring humans and their zurkovian counterparts will easily realize that they're just not talking about the same properties.

So, a solution would be to just "translate" color statements – and z-color statements – into something both sides can understand. Perhaps, their physics theories are close enough to turn z-color statements into statements that humans understand, and color statements into something that zurkovians understand, preserving referent if not meaning.

But obviously, none of the above would in any way undermine our warrant in assessing that, say, a traffic light was red, and not green.

2.2) Morality, truth and extraterrestrials

Let's say that, on their planet, zurkovians have evolved differently, and they have a social organization and some kind of sense analogous but different from our moral sense that allows them to navigate their social world. Let's call that sense a 'z-moral sense'.

Now, just as zurkovians have z-color vision that allows them to make (generally) true z-color assessments, they have a z-moral sense, which allows them to make generally true z-moral assessments.

On the other hand, humans have color vision, which allows us to make generally true color assessments, and a moral sense, which allows us to make generally true moral assessments.

None of the senses, either human or zurkovian, is infallible.

Now, Linville and some other theists maintain or imply that the possibility, on EN, that beings such as zurkovians might evolve, would undermine the warrant of our moral assessments, even to the point of rendering such assessments unwarranted. But they do not seem to explain why, or how.

It's apparent that our color statements would not be affected in such a way, so it's not the case that different evolutionary histories and different senses would always undermine the warrant of our assessments in some domain. So, when a theist claims that our moral assessments would be so undermined, they ought to argue their case. The burden is on them, and it's a burden they've not discharged – not even close.

Let's take a look at the matter from another perspective.

Let us suppose that, in the future, we or our descendants in fact do make contact with an alien civilization, and the aliens happen to have not a moral sense, but something different, like a z-moral sense.

Should we, or our descendants, conclude that our assessment that the Holocaust was immoral, is unwarranted, just because some aliens on another planet happened to evolve differently? Why?

Linville and defenders of similar arguments are committed to a 'yes' answer to the first question above, but give no good reason to think that that is the case.

Instead, they merely claim that, under EN, such different evolutionary histories would be a possibility, and then jump to the conclusion that that would be a problem for EN, perhaps quoting some non-theist moral error theorists who make similar claims. But that does not go anywhere near meeting their burden.

Again, let's suppose some aliens evolved differently, like the hypothetical zurkovians.

How or why would that have anything to do with the warrant of our assessment that the Holocaust was immoral?

The unmet burden is very obvious at this point.

2.2.1) A side note on contact

There are of course differences between morality and color when it comes to, say, the consequences of encountering beings with different evolutionary histories and which may have a z-moral sense and z-color vision instead of a moral sense and color vision, and perhaps also when it comes to whether such beings would have moral properties vs. color properties.

But those matters are not relevant to the question of whether our moral assessments are warranted. If someone claims otherwise, they ought to defend their claim.

That aside – and though this is a side note – with respect to the different consequences: if humans made contact with zurkovians, and neither side realized that morality is not the same thing as z-morality, they might well end up talking past each other. That's no different from color and z-color.

On the other hand, if – for instance -, both human moral psychology and zurkovian z-moral psychology are developed enough for both sides to realize that they're not talking about the same thing, conflict may still arise, which does not happen in the case of color and z-color.

After all, morality and z-morality are motivational in a way that color and z-color are not – let us stipulate that z-morality is as motivational to zurkovians as morality is to humans.

So, even understanding that the other species is not talking about the same thing they're talking about might not be, on its own, enough to avert conflict.

For example, some humans might say:

Okay, so we realize that zurkovians aren't being z-immoral, but they're still being immoral in a number of ways, and they won't change their ways because they don't even care about morality!

They just care about z-morality! Let's punish the immoral zurkovians!

Similarly, some zurkovians might say:

Okay, so we realize that humans aren't being immoral, but they're still being z-immoral in a number of ways, and they won't change their ways because they don't even care about z-morality!

They just care about morality! Let's punish the z-immoral humans!

That raises some questions, such as:

Can zurkovians be immoral, or behave immorally, etc.?

In general, can an entity that does not have a moral sense be a moral agent?

Would zurkovians not more properly be characterized as non-moral agents, like a lion or a dolphin, even if much more intelligent than those animals?

Perhaps, zurkovians are not moral agents at all. Perhaps, humans are not z-moral agents at all.

Perhaps, it depends on how different the z-moral sense is from the moral sense.

Perhaps, zurkovians could have some moral properties, but not others, depending on how similar zurkovians are to humans, psychologically.

Those questions do not arise in the case of color, and are difficult ethical matters, but do not present a problem for the non-theist. The color analogy is an analogy, not a perfect match.

The point of the color analogy in this context is that just because some beings may evolve differently, make z-color assessments and the like, that does not undermine our color assessments at all.

So, it's not the case that merely the fact that evolution may have taken a different path somewhere else would make our assessments on a specific subject (e.g., color) unwarranted. If a theist claims that morality is different and that, in the specific case of morality, such different evolutionary paths – or even the possibility of them – would make our moral assessments unwarranted, they would have to argue for that. The burden would be on them, and as I pointed out, they've not met it.

As for the questions mentioned above, we may point out the following:

First, a non-theist need not take a stance on how similar z-morality and morality would actually be.

In fact, given insufficient information, we shouldn't take a stance at all. We shouldn't be committed to claims about exobiology for which we don't have enough evidence.

Second, a non-theist need not take a stance on how similar those beings would have to be in order for them to be moral agents. That's a difficult ethical question, but not one related to the issue of the general warrant of our moral assessments, as far as one can tell. If a theist claimed otherwise, they would have to argue their case.

Third, even if morality evolved with humans, and even if zurkovians would not be moral agents, that does not imply or suggest that only humans can possibly be moral agents:

For example, at least nearly all of the fictional extraterrestrials in movies, TV shows, novels, and the like, would be moral agents if they existed, because their minds are very similar to human minds – which is unsurprising, given that they were invented by humans.

Similarly, entities posited by different religions, and who in English are usually called 'gods', 'spirits', 'monsters', 'demons', etc., would also be moral agents, at least in most cases – which is unsurprising, given that they were invented by humans, though arguing that point is beyond the scope of this article.

Someone might say: but who imposed moral obligations on them?

That, however, would be confused: there is no need for any Supreme Commander in order for moral obligations to exist. It's enough that some actions of those entities would be immoral, and in order for that to be the case, all that's needed is that the minds of such entities are sufficiently similar to human minds.

The above is enough to conclude the main point of this side note. Still, before moving on to the next subsection, I will add a side note within the side note, and speculate about what we might expect in case of such contact – though the following isn't relevant to the metaethical argument:

Even if it turns out that zurkovians can be and usually are immoral, it would plausibly be immoral for some humans to start an interplanetary war that would likely result in massive suffering and death for millions or even billions of humans, for no reason other than punishing immoral zurkovians.

Perhaps, it would similarly be z-immoral for some zurkovians to start an interplanetary war that would likely result in massive suffering and death for millions or even billions of zurkovians, for no reason other than punishing z-immoral humans. But if that is not so, then such is life.

What if, say, one side is way too advanced for the other to be a threat?

In that case, I think it would be immoral for humans to annihilate the defenseless zurkovians just to punish them, even if they generally behaved immorally. There might be other reasons, such as preventing them from annihilating a third species, if that cannot be averted in a different way.

If it's the other way around, perhaps it would be z-immoral for zurkovians to annihilate the defenseless humans. If not, such is life.

In any event, the chances of any kind of contact with an advanced extraterrestrial species in the near future do not appear to be high (i.e., they're extremely low, in my assessment), and if contact happened in, say, millions of years, humans or post-humans would plausibly be advanced enough not to be defenseless. If not, again, such is life.

We may speculate based on the information available to us, but what we shouldn't do is engage in wishful thinking and just assume that every social species with strong intelligence will have a moral sense, as we do. That would be an unwarranted claim about exobiology.

2.3) Moral realism, or species-relativism?

A theist philosopher might claim that the previous account would only give us some kind of species-relativism, rather than true moral realism. Alternatively, or additionally, they might claim that that would not be objective morality, and/or that it wouldn't be absolute morality.

It's not clear to me that any such claims would be true, but in any event, whether or not the technical philosophical terms 'realism', 'objective', or 'absolute' apply to views of morality like the evolutionary account sketched in this article is irrelevant to the matters at hand: under such an account, the situation of morality would be exactly the same as that of color with regard to the issues of realism, objectivism, absolutism, etc., and we have color truths, knowledge, and so on.

In other words, under an account like the one sketched in this article, it remains the case – to use something like Craig's Holocaust example[2] - that the Holocaust was immoral, regardless of what anyone believed about it, just as, say, Nazi uniforms were not red regardless of what the Nazis or anyone else may have believed.

Moreover, under such an account, the Holocaust would still have been immoral even if the Nazis had brainwashed everyone and convinced them otherwise, just as Nazi uniforms would not have been red even if the Nazis had brainwashed everyone, convincing them that their uniforms were red – given the same uniforms, of course.

Furthermore, how the Nazis felt about any of those matters, or how other people who assessed the matters felt or feel about them, or how they perceived them, etc., also does not affect the truth that the Holocaust was immoral, and Nazi uniforms were not red.

The fact that the Nazis had no intention of making anyone believe that their uniforms were red, or that we ascertain moral truth by means different from those by which we ascertain color truth, is of course beside the point here. The point is this: suppose words such as 'realism', 'objective', and 'absolute' are used in philosophy in such a way that there is no objective color, no absolute color, and color realism is not true, just because another species might have a different visual system and see things differently. Even so, we still have color facts – like the fact that Nazi uniforms weren't red – which do not depend on who's making the assessment, how some people feel about it, or what they believe; we still have color knowledge, and so on.

Given the above, claiming that there would be no moral realism, objectivism, absolutism, etc., on EN, would be a moot point: regardless of whether such technical terms apply to this account, our moral assessments would be warranted just as our color assessments are.

That aside, incidentally, the name "species-relativism" would probably not be adequate, since – among other reasons – it might give the impression that only humans are possible moral agents, which is not the case, as I explained earlier.

2.4) A semantic challenge: morality for all sufficiently intelligent beings

A theist might claim that somehow our moral language is committed to the impossibility of zurkovians and their z-morality:

According to this objection, our moral language commits us to the claim that (at least) all social beings with strong intelligence would have a moral sense – not z-moral sense, or anything like that -, since that is logically entailed by statements ascribing moral properties (such as 'X is morally wrong').

However, that would have to be argued for, and the burden would be on the theist.

On the face of it, it seems implausible that that's a semantic requirement of moral statements, since zurkovians are conceivable, and intuitively don't seem to lead us to any conflict with the idea of humans making true moral claims.

In other words, it's conceivable that social beings with strong intelligence and with some sense different from our moral sense exist elsewhere in the universe. It's also conceivable that they make generally true z-moral judgments; none of that seems to interfere with our moral judgments, though. [4]

But we may look at this from a different perspective, as before:

Let us suppose that, in the future, we or our descendants in fact do make contact with an advanced alien civilization, and the aliens – which/who are strongly intelligent – happen to have not a moral sense, but something different, like a z-moral sense.

Should we, or our descendants, conclude that – for instance – our assessment that the Holocaust was immoral, is unwarranted, just because some smart aliens on another planet happened to evolve differently, without a moral sense?

Should we withdraw the clear assessments that, say, Hitler and Ted Bundy were bad people, merely because some intelligent aliens orbiting a distant star do not have a moral sense, but something only somewhat similar instead?

As before, if a theist claims so, the burden would be on them.

Linville and others seem to assume a 'yes' answer to all such questions, but they don't give any good reason for that answer.

2.5) Supervenience

According to Linville, if moral properties supervene on "natural properties" - as Sturgeon maintains -, then Darwinian counterfactuals are a serious problem, since the smart wolves would have a moral sense that would give them false beliefs.

So – the objection goes – why think that the human moral sense does any better?

As I mentioned, there is no good reason to think that any kind of social organization could result, in our universe, in beings with human-like intelligence, so we don't know whether such smart wolves would actually evolve: it may very well be that, if wolves were to evolve into animals with strong intelligence, their social structure would also change during that evolutionary process.

Still, leaving that aside, the relevant point here is that the conclusion that the wolves would have false moral beliefs is unwarranted.

In fact, it seems that they would have true w-moral beliefs, like the zurkovians would have true z-moral beliefs, and humans have true moral beliefs.

However, the wolves would not have moral beliefs – true or false -, just as the zurkovians wouldn't.

Given that, supervenience of moral properties on non-moral properties is unproblematic, as is supervenience of color properties on non-color properties.

2.6) The ancestral environment: a difference between color and morality

In his argument, Linville quotes Richard Joyce to raise another point:

Richard Joyce (quoted from Linville's argument)

"It was no background assumption of that explanation that any actual moral rightness or wrongness existed in the ancestral environment" (Joyce 2006, p. 183).

So, a theist could say: 'well, okay, color vision evolved because there were green and red things in the ancestral environment, but there was no morally wrong behavior.'

That's a more interesting objection, but it doesn't work, either: Let's take a look first at the color case:

There were actual differences in the environment – differences in wavelengths reflected or emitted by different objects -, and so our ancestors evolved color vision, which allows us to see objects with certain different reflective properties, differently.

In the case of morality, and going by a possible evolutionary account, the relevant environment was the social environment in which our ancestors lived, and the differences their protomoral sense (so to call it) picked were differences in one another's minds.

As our ancestors evolved, they gradually changed; in particular, their minds and behavior gradually became more complex, and so did the mental properties picked by what we might call their protomoral sense.

As a result, moral properties are the ones picked by the human moral sense – or a sufficiently similar sense -, and those are properties of humans and similar entities.

It's debatable whether other extant animals have them. Can chimpanzees behave immorally?

Regardless, the point is that there is no reason for the properties that are picked by our moral sense today to be the same properties picked by the protomoral sense of an animal that lived 20 million years ago, when none of their behavior was immoral or morally right – if indeed it wasn't; the non-theist does not need to take a stance on which animals can be morally good or bad. The point is that the evolutionary account is compatible with the existence of morally good behavior, morally bad behavior, etc.

This is similar to the way in which many other biological systems evolve, like – purely for example – recognition of conspecific individuals or sexual attraction: trivially, there were no humans in the ancestral environment before humans appeared, and yet humans have the capacity for recognizing the faces of other humans better than those of any other primate.

Also, humans normally feel sexually attracted to humans, not to something that looks like our non-human ancestors – or much more than to them, at least.

So, the point is that, when it comes to extant animals, many of the properties that their faculties actually pick in their conspecifics did not exist in their ancestral environment several million years ago, but there were similar properties, which were picked by the faculties of their ancestors.

As those animals evolved, so did the properties their faculties were picking.

So, this objection, while more interesting, does not work, either.

There is a possible semantic objection, though: someone might claim that our moral language commits us to a claim of existence of a clear cut-off point – a first moral agent -, while EN would result in gradual changes: no first sparrow, no first human, no first tiger, and no first moral agent.

While EN plausibly yields that result – i.e., no first moral agent, but gradual changes -, it would be up to the theist making the claim to show that somehow our moral language commits us to such a claim.

In fact, that would make moral language different from language in most other cases, when we talk about the world around us.

For instance, plausibly there is no "first nanosecond" at which a person is an adult – our language is not so precise -, there is no "first lion", or "first parrot", and so on.

Furthermore, if we gradually change the image of a car into that of an SUV on a computer – which involves a finite number of steps -, plausibly there is no first step at which the image is not the image of a car (i.e., no first step at which that would not be a car if it existed): the word 'car' isn't so precise.

Given that none of the above results in error theories about cars, adults, lions, parrots, sparrows, tigers, etc., nor does it imply we lack knowledge about those things, nor does it cause any problems, the burden is squarely on the theist if he claims that the moral case is exceptional in a relevant sense.

3) The road so far

In his AEN, Linville claims that if EN is true, then morality is a by-product of natural selection.

Given the previous arguments, it seems a non-theist objectivist can safely grant that if EN is true, then morality would be the product of natural selection, in the same sense in which, say, color vision is the product of natural selection.

On the other hand, the term "by-product" is negatively loaded, and also might be interpreted as a claim that morality would not be an adaptation but a side effect of other adaptations, which would be an unwarranted assumption.

Whether our moral sense is constituted partially or entirely of adaptations is an empirical matter on which we shouldn't make assumptions.

But other than that, granting that claim does not seem to be problematic.

That aside, a theist might argue that if unguided evolution has happened, then our faculties are generally unreliable. That kind of argument fails as well, but it would be beyond the scope of this article to show that. Instead, I will just point out that that would no longer be a metaethical argument for theism.

4) Ontology

Another common theistic claim is that under non-theism – and thus, under EN – there is, somehow, no 'ontological foundation' of moral properties – or values, obligations, etc.

In other words, EN would allegedly have ontological commitments that make it incompatible with the existence of moral properties, or at least would make it very improbable that moral properties would exist, and/or that moral facts would exist, or some similar variant.

As an introduction, let's quote a passage from Hume.

Hume (from Linville's argument)

Here is a matter of fact; but 'tis the object of feeling, not of reason. It lies in yourself, not in the object. So that when you pronounce any action or character to be vicious, you mean nothing, but that from the constitution of your nature you have a feeling or sentiment of blame from the contemplation of it. Vice and virtue, therefore, may be compar'd to sounds, colours, heat and cold, which, according to modern philosophy, are not qualities in objects, but perceptions in the mind. (Hume 1978, p. 469)

Yet, Hume mentions vice and virtue alongside colors, sounds, heat and cold.

In fact, Hume does not provide any good reason for distinguishing, say, morality and color, in a sense that would be relevant in his argument, so if a good metaethical case for theism could be based on something like this, it's hard to see why one couldn't just make a metachromatic argument for theism.

Linville does not try to use Hume's points as part of the argument, but he, Craig, and others make other metaethical ontological arguments.

So, in this section, I will take a look at some relevant ontological issues, and consider the matter of ontology in more detail:

4.1) Ontology, property identity and semantics

Can two terms, say 'property1' and 'property2', have different meanings, yet pick the same property?

In other words, does property identity require semantic equivalence of the words picking a property – even if, perhaps, a non-transparent semantic identity?

The answer to that question is relevant to the matter of what could reasonably be expected from an ontology of color properties, moral properties, etc.

A related and also relevant question would be:

Do ontological accounts of properties require semantic closure of the relevant questions?

For instance, if a correct ontological account of color holds that the property 'redness' is identical to the property of 'having property a1-ness or a2-ness or a3-ness ... or ak-ness', does that imply that 'X is red' follows from 'X is a1 or X is a2 or ... or X is ak', just by the meaning of the words?

Perhaps, there is more than one usage of 'property' in philosophy; if so, the answer to some or all of the previous questions in this subsection might be different depending on the meaning of 'property'.

In any event, I will not try to answer the aforementioned questions in this article.

Instead, I will assess different possibilities, concluding that regardless of what the answers are – within the main possibilities considered in present-day philosophy -, theistic arguments trying to exploit the Open Question Argument, or generally raise ontological issues, fail.

4.2) Color ontology

An ontological account of color would try to answer questions such as, 'What's greenness?', 'What's redness?', and so on.

Before trying to address the matter, one might wonder: 'What kind of answer should we be looking for?'

In other words, the question itself is not clear at all, though what is clear is that there is no object 'redness' floating 'out there', so to speak, but red objects. But once we've included objects with certain reflective properties and visual systems/minds that can react in certain ways to them, it seems nothing else needs to be added.

Still, I will consider different alternatives, and then make a parallel between the color and the moral cases, using the lack of a relevant difference as a means to show why the theistic arguments are confused.

But let's start with a potential ontological hypothesis about redness. For instance, let's say that a proposed account is:

B1: The property 'redness' is the property of emitting light in r(l) wavelengths, or reflecting light in r(l) wavelengths under conditions n, etc. [of course, the wavelengths r(l) and the 'such and such' conditions would have to be specified; let's assume that they have been specified, to simplify the matter].

Would a semantic challenge defeat such an account, simply by pointing out that the question remains open?

In other words, a competent English speaker may well rationally say 'I know that an object reflects light in r(l) wavelengths, under conditions n, but is it red?'.

No amount of conceptual analysis will resolve the matter.

If that is enough to debunk such a proposed ontology, it seems that one may just point out that color terms apparently cannot be reduced to non-color terms without loss of meaning, and so, it seems – on this understanding of 'property' -, that color properties can't be put in terms of non-color properties.

But then again, if that's the case, it seems plausible that no further color ontology is required, apart from positing properties such as blueness, greenness, etc.

If so, then it seems that the non-theist can just provide a similar reply to a question about moral ontology, thus blocking any metaethical ontological arguments for theism. As long as the answer works in the case of color without being a problem for EN, then it seems it works in the case of morality as well, and moral statements cannot be reduced to statements in non-moral terms without losing meaning.

So, the non-theist may simply posit – for instance – that the term 'moral goodness' is not definable in non-moral terms, and leave it at that – just as she can in the case of color. She might even add the suggestion that moral goodness is some mental property, which is also not a problem for EN.

Perhaps, someone might demand semantic closure while using other terms: for instance, 'horse' might mean the same as 'being with such-and-such properties', where that involves a long description.

Similarly, they might demand a description like that for moral or color terms.

However, even if 'horse' can be described in that manner, we can then take the terms in the description of 'horse' and repeat the procedure. In a finite number of steps (there are finitely many words in any of our languages), we're going to reach a point at which we can go no further, and some terms would remain undefined, or defined in terms of semantically equivalent terms.
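To spell out that counting argument a bit more (the formulation is mine, but it's just the point made above): if a language contains n words, then any chain of definitions in which each step must define at least one word not defined earlier in the chain can have at most n steps. So every such chain must eventually stop at terms left undefined, or else fall back on terms semantically equivalent to terms already used.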

So, it seems that moral terms like 'moral goodness', or color terms like 'redness', can only be put – at best – in terms of other moral or color terms respectively. That fact, however, is not at all a problem for EN – it does not even have anything to do with whether EN is true.

So, it seems that if semantic closure is needed, theistic metaethical ontological arguments are blocked. The non-theist might simply point out that moral terms can only be put in terms of other moral terms, and that's it.

What if no semantic closure is required of an ontology of properties?

Then, an account like B1 may or may not be true, but in any case, there are a few important points to be made:

First, an account like B1 was not available for most of the history of our civilization, and it only became doable after considerable advances in physics, and the introduction of technical terms.

Second, if B1 – or something like it – is correct, it's clear that we do not need to posit any entity above and beyond our visual system, brain, objects with certain reflective properties, and light – we surely do not need a Supreme Color Commander, or any other weird entity above and beyond what I've just described.

Third, if nothing similar to B1 is correct, there is still no good reason to suspect that one requires any entity above and beyond our visual system, brain, and the light that reaches our eyes from other objects.

Fourth, in any event, in order to test such an account, we would need to rely on human color vision, even if under controlled conditions: for instance, in the case of B1, we would need to rely on the human color vision in order to test whether the proposed wavelengths r(l) actually match red.

4.2.1) Color and supervenience

Someone might posit that color properties, such as 'redness', are not properties that can be described in terms of wavelengths and the like, but rather, they supervene on them.

So, on this account, reflective (or emitting, etc.) properties aren't the same as color properties, but, say, two objects with the same reflective properties necessarily have the same color properties.

However, on this account, there is also no need to posit any entity above and beyond our visual system, brain, objects with certain reflective properties, and light.

So, it's difficult for me to see what ontological difference there is between this account and the account in terms of wavelengths, etc.

In fact, it seems to me that someone proposing the account in terms of wavelengths, etc., and someone proposing the account that states that color properties supervene on those properties described in terms of wavelengths, etc., may well agree on the physics of light, of the objects around us – moreover they may accept the same physics theory, for that matter -, and the biology of our visual system.

Moreover, they're not positing entities above and beyond that to account for color.

So, it's not clear what they would disagree about; perhaps, the difference lies in the way the word 'property' is being used, making a difference between requiring semantic identity and not requiring it?

Regardless, we don't need to resolve any of that here:

Given that – in any event – no entity needs to be posited beyond the objects around us with some reflective properties, light, and generally properties and/or entities EN can handle, whether color properties are the same as, or supervene on, properties expressed in terms of wavelengths, etc., is not a matter that concerns us here.

Also, the analogies with morality work as well on the supervenience account as on the 'properties expressed in terms of wavelengths, etc.' account.

For example, with regard to the Open Question Argument, the question 'I know X has a property that supervenes on the property of reflecting such-and-such wavelengths under such-and-such conditions, but is X red?" is just as semantically open as 'I know X has the property of reflecting such-and-such wavelengths under such-and-such conditions, but is X red?'.

So, from this point on, and just to simplify, I will take the example of the account in terms of wavelengths, etc., as an example of color ontology (unless otherwise specified), in order to compare it with any proposed moral ontology, but keeping in mind that we could always use the supervenience account instead, or some alternative ontology that would still be compatible with EN.

4.2.2) Color antirealism

Someone might posit an antirealist color theory, and reject any ontology of color, objecting to the analogy between color and morality.

According to some of those theories, we can make true color judgments, we have color knowledge, there is no error theory, etc., but there is still no ontology of color to be found.

However, if a theist accepted a theory like that, a non-theist might suggest the possibility of a similar moral theory, which the theist would need to refute to make an ontological metaethical argument.

To be clear, I'm not espousing any such theories.

Rather, I'm addressing a potential objection by pointing out that a theist espousing one of them in the case of color would have to also show that no similar theory can work in the case of morality – else, why assume there is a moral ontology at all? -, and it's hard to see what kind of an argument they might make.

If the bottle cap on my desk is blue without any color ontology to be found, then why can't the Holocaust be evil without a moral ontology to be found?

So, those color antirealist theories do not appear to be a viable option for a theist defender of a metaethical argument.

There are other antirealist theories, such as error theories.

According to a color error theory, all statements like 'that cap is blue', 'that banana is yellow', etc., are false. But that would be difficult to believe – not to mention, on theism, the problem of a deceitful creator.

So, it seems that those aren't good options for a theist defender of a metaethical argument.

There may be other color antirealist views, but I will just point out that if we can have color truths, knowledge, etc., without any color ontology, then the question can be raised about morality too.

If, on the other hand, there is a correct color ontology, then even if we don't exactly know what it is, it remains the case that demanding semantic closure isn't a viable option.

So, whatever the correct ontology of color might be, all the considerations I will make about Open Questions and related matters remain.

4.3) Moral ontology

As a result of the evolutionary process, humans acquired the ability to pick some mental properties; for instance, we can tell, in many cases, when someone is angry, in pain, happy, etc.

On a possible evolutionary account of morality, humans also evolved a faculty (or a combination of faculties) to pick some other mental properties – which were relevant in social life – as well as, perhaps, some consequences of actions, and/or some other social relations. As our ancestors evolved, so did the properties that they were picking. As a result, moral properties would be some of the properties that one or some of our faculties pick – which we may call our 'moral sense' – in the same sense in which color properties are some of the properties our color vision picks.

So, when it comes to moral ontology, we can point out the following:

First, given the complexity of our social environment, morality is likely to be complicated, and we shouldn't expect that moral properties would just be the same as properties picked by a non-moral term in non-technical language, even if semantic closure is not required.

In other words, even assuming that semantic closure is not required, we shouldn't expect a simple account in terms of properties described by non-moral, non-technical terms, like human happiness, or anything like that.

Second, given that psychology is much less developed than physics, we shouldn't assume that a correct account in non-moral terms will be available any time soon, even assuming that semantic closure is not required.

In fact, it's not even clear that we have such an account in the case of color; even if we do, it may well take centuries or more to develop an ontology of moral properties, and it might take the development of technical terminology, as in the case of color, to make the account manageable in terms of length.

So, clearly, there is no burden on the non-theist to produce anything like that.

Granted, a theist might argue that no account will do, if EN is true. But the burden is squarely on him.

Of course, if semantic closure is required, then it seems no correct account could ever be given in non-moral terms, so unless new moral terms are invented, no correct account beyond positing moral goodness, moral wrongness, etc. - or, perhaps, equivalents in already existing moral terms -, can be given.

Third, even if a correct ontology were given in technical terms, we shouldn't demand that it semantically close the question, if we don't demand such closure in the case of color.

On the other hand, if semantic closure is required in the case of morality and color, then it seems moral properties can't be described in non-moral terms, and similarly color properties can't be described in non-color terms. However, in that case, someone who accepts EN may posit that, say, moral goodness is some mental property that can't be described in non-moral terms, and that would be it. There appears to be no further question, if that's all one is asking.

Fourth, in order to test any proposed ontological account, we would need to rely on the human moral sense, even if under controlled conditions: otherwise, we wouldn't know whether whatever matches the description provided by the proposed ontology also is, say, morally good.

In case no semantic closure is required, there is a difference between color and morality in that regard:

In the case of accounts such as B1, we need to measure wavelengths in order to test whether the proposed wavelengths r(l) actually match red, whereas in the case of a proposed ontology of, say, goodness, we would only have to test by our own moral sense that there are no scenarios that present exceptions to the proposed ontological account.

In other words, we would only have to use our moral sense – even if, perhaps, in some controlled conditions – to test whether in any hypothetical scenarios in which A matches the description provided in the proposed account of moral goodness, A is morally good. To do that, we wouldn't need to create real scenarios, but just test many hypothetical ones.

However, that difference is not significant with regard to the matters at hand, since it's only a difference in the way we test the proposed account.

5) The Open Question Argument

A common argument against metaethical views that are often called "moral naturalism" is Moore's Open Question Argument.[5]

The expression "moral naturalism" is very misleading; in any case, the Open Question is raised against any view that attempts to reduce statements using moral terms to statements using non-moral terms without loss of meaning.

However, as explained earlier, demanding semantic closure would allow the non-theist who accepts EN to simply point out that, say, 'moral goodness' is not a term definable in terms of non-moral terms; she might also posit that moral goodness is a mental property.

Whether 'moral goodness' can be defined in terms of other moral terms is a matter of moral semantics, but irrelevant to the Open Question Argument.

On the other hand, if no semantic closure is required, the Open Question Argument fails just because of that.

So, either way, the Open Question Argument fails to present any difficulty for a person who accepts EN.

Still, just in case, I will make a parallel with color, showing that the situation is similar in regard to the Open Question Argument. I will use an ontological account of color based on reflective properties, wavelengths, etc., but that's only an example: the parallel I will make would similarly work on any ontological account without semantic closure.

5.1) General considerations

Thomas Hurka[5]

Especially in Principia Ethica, Moore spent much more time defending his other non-naturalist thesis, of the autonomy of ethics, which he expressed by saying the property of goodness is simple and unanalyzable, and in particular is unanalyzable in non-moral terms. This meant the property is “non-natural,” which means that it is distinct from any of the natural properties studied by science.

However, with that criterion, someone might say that color properties are "non-natural" because color language is not analyzable in non-color terms.

As I explained before, a question like "I know that an object reflects light in r(l) wavelengths, under conditions n, but is it red?", is open in the same manner as a question about goodness is.

Are we talking about reducing moral language to non-moral language with no loss of meaning?
If so, it seems that that's not doable, but the same can be said about – for example – color language.

So, should we accept that we need to posit a color ontology over and above light, reflective properties, and all other properties that EN can handle?

It seems clear that that would be confused. The confusion would only increase if we were to also use the obscure term "natural" to say that color properties are non-natural properties.

In any case, the point is this: a claim that there is a moral ontology over and above humans (or similar beings) and their minds, just because non-moral statements do not entail moral ones by the meaning of the words alone, would be as confused as a claim that there is a color ontology over and above light, reflective properties, and perhaps some other properties that EN can handle, just because non-color statements do not entail color statements by the meaning of the words alone.

5.2) Science, color and morality

Someone might say that color properties are "natural" properties because they're the kind of properties studied by science, but moral properties are somehow different.

Such a claim would be very obscure at best: if "natural" properties are just those described in non-moral terms, then why not just call them that – instead of "natural"?

But let's let that pass.

The crucial point is that this sense of 'natural' is not related to science at all:

With regard to science and morality, once again the situation is very similar to that of color, even if there is a considerable difference in complexity – morality is, of course, considerably more complex.

It seems that humans often can ascertain whether a certain behavior is morally good, morally bad, etc., and they do that in an obviously finite number of steps.

If future scientists come up with a more precise description of moral goodness, etc. – or of a property on which moral goodness supervenes – without semantic reduction (something akin to color and wavelengths), and manage to develop an algorithm that allows them to ascertain whether a behavior is morally good, etc., then a future supercomputer would probably be able to ascertain what's morally good, bad, etc., much faster than any human could, and without the difficulties associated with human weaknesses and propensities, which the computer would not share.

Of course, it's also conceivable that no future scientists will ever figure that out, but that should not be assumed.

Moreover, the point is that apart from the difficulty resulting from the complexity of the human mind, there does not appear to be anything particularly salient in the case of morality (or moral goodness, etc.) that would place moral goodness beyond scientific understanding and/or detection.

By the way, the previous scenario (i.e., the supercomputer, etc.) does not even seem to be incompatible with theism.[6]

Granted, scientists would need to trust the human moral sense, at least under some kind of controlled conditions, in order to try to describe moral properties – or the properties on which moral properties supervene – in some technical terms, figure out an algorithm, etc.

However, the same is true if they want to make a machine that can distinguish colors – they would have to rely on human color vision -, though the task would be much simpler in that case.

Granted, also, moral disagreement would be a difficulty for finding an algorithm, since it's a lot more common than color disagreement.

However, as long as there is a species-wide moral sense, the task is not an impossible one, even if sometimes it's difficult to make moral assessments:

Since reaching moral truth is doable in many cases, that can be used to study moral properties, as the human visual system can be used in the case of color properties.

That would be a lot of work, which might take centuries if not more. They would need to use controlled conditions in many cases in which people normally agree, use computers to process enormous amounts of data, find causes of disagreement, etc.

So, that would be a truly daunting task, but given a species-wide moral sense, again there is nothing here but a difference in degree of difficulty between color and morality, when it comes to the possibility of scientific study.
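
To illustrate the structure of such a project – and only as a minimal sketch, where every name and the 'learning' step are hypothetical placeholders, not a claim about how such a study would actually be carried out – one might picture it as follows; the color case would be structurally identical, with human color vision as the calibration standard:

from dataclasses import dataclass

@dataclass
class Case:
    description: dict  # features of a (possibly hypothetical) scenario: intentions, expected outcomes, etc.
    verdicts: list     # judgments gathered from many humans under controlled conditions

def calibration_set(cases):
    # Keep only the cases where the species-wide moral sense clearly converges;
    # the causes of disagreement would be studied separately.
    return [(c.description, c.verdicts[0]) for c in cases if len(set(c.verdicts)) == 1]

def build_detector(agreed):
    # Placeholder 'algorithm': memorize the agreed verdicts. A real project would
    # instead search for necessary and/or sufficient conditions – or a property on
    # which the moral property supervenes – that predict the agreed verdicts on new cases.
    table = {frozenset(d.items()): v for d, v in agreed}
    def detector(description):
        return table.get(frozenset(description.items()))
    return detector

The point of the sketch is only the shape of the procedure: human verdicts under controlled conditions serve as the calibration data, and the resulting detector is then tested against further human verdicts, just as a color-measuring machine is calibrated and tested against human color vision.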

Further, someone might claim that the properties that the computer picks, in any case, would not be moral properties; at most, moral properties would supervene on the properties picked by the computer.

However, with that criterion, the same might be said about color detected by some machine.

In any case, the fact remains that there seems to be no relevant difference at all when it comes to the possibility of scientific study of moral vs. color properties: both are equally possible, even if the moral study is much more complex.

Also, it would be a mistake to believe that, just because we need to use the human visual system to study color, a color ontology needs to posit some extra entities or properties apart from light, objects with reflective properties, and generally properties EN can handle.

Similarly, it would be a mistake to believe that, just because we need to use the human moral sense to study moral properties, a moral ontology needs to posit extra entities or properties apart from humans (or, potentially, entities with similar minds), their minds, and generally beings and/or properties EN can handle.

Finally, a theist might quote scientists saying that science can tell us how things are, not how they ought to be.

However, aside from what scientists with some metaethical commitments may say about it, if science can tell us what's red, then with a lot more work, it will plausibly tell our successors what's immoral, and thus what they have a moral obligation to do, and so what they ought to do.

So, it seems clear that there is no difference between color properties and moral properties with regard to what science can or cannot tell us about them – at least, given sufficient scientific development – except for degree of difficulty.

I will assess the matter of science and morality once again later.

5.3) Filling the gap

Perhaps, a theist might claim that there is a relevant difference between color and morality in terms of "filling the gaps", as Hurka illustrates in the SEP[5] in the case of water and H2O.

Thomas Hurka[5]

Again, however, Moore could respond. The property of being water is that of having the underlying structure, whatever that is, of the stuff found in lakes, rivers, and so on; when this structure turns out to be H2O, the latter property “fills a gap” in the former and makes the two identical. But this explanation does not extend to the case of goodness, which is not a higher-level property with any gap needing filling: to be good is not to have whatever other property plays some functional role. If goodness is analytically distinct from all natural properties, it is metaphysically distinct as well.

In the case of the color red – for instance – there is no structure of any stuff, either, so there seems to be no difference in that regard.

Someone might suggest, then, that there is some property that our color vision tracks, and which under normal conditions elicits our judgment 'red', and that property is the property of being red, or the property of redness.

Let's say that that's the case.

Then, exactly the same may be said about our moral sense and goodness.

Someone might say that redness is a physical property, whereas goodness is not.

I'm not sure about the coherence of the physical/non-physical distinction, but leaving that aside, that would be no problem, either, since in the case of goodness, the property might be a mental (or mental/consequential, etc.) one.

We don't need to get into the issue of whether mental properties are physical ones here – whatever that means – since the relevant point is that a property would still be filling the gap: whatever property our moral sense is tracking, and which under normal conditions elicits our judgment 'morally good', would be the property of moral goodness.

Granted, that is not at all elucidating. But then again, the same applies to the case of redness.

A theist might object that that still does not close the gap, in the case of moral goodness, from a semantic perspective.

According to this objection, someone might say "I know that behavior X has the property our moral sense is tracking, and which under normal conditions elicits our judgment 'morally good', but is X morally good?"

I suppose someone pondering a moral error theory might raise such a question, but then again, the same can be said about someone pondering a color error theory and the corresponding question about redness.

So, it seems that if that means that there is no semantic closure in the color case, then the same applies to the moral case.

Perhaps, someone might suggest that even someone who is not considering an error theory might ponder the question in the moral case, but not in the color case; they would have to make a case for it, but as it stands, and given the previous considerations, it seems clear that there is no difference in that regard.

In the next subsections, I will argue that moral goodness plausibly is a mental property, but I will not claim semantic closure in terms of non-moral terms, of course. I will also make some suggestions about other moral properties.

However, before I go on, I'd like to reiterate that the non-theist has no burden to do any of that. So, even if the hypotheses posited below in this subsection were incorrect, that would not affect the overall conclusion of this case against metaethical arguments for theism, or the specific conclusion of this section – namely, that the Open Question Argument does not provide any support for a metaethical case for theism.

5.3.1) Moral goodness

If we're told that a person is morally good, we learn something about the person's character.

In other words, we learn about her mind, we get an idea of how she tends to act and what kind of dispositions to act she has, etc. That may not give us a lot of detailed information about her mind, but it gives us some.

Also, it seems that the same goes for individual morally good actions: the claim that a person's action is morally good is plausibly a claim about the mind of the person carrying it out.

So, it seems to me that moral goodness is a mental property, perhaps involving attitudes towards others, care in choosing the means to act, etc.

So, a good person would be one with a character such that she's predisposed to generally do good actions and not bad ones, or something along those lines – the details are difficult and not important for the purposes required here.

At least, it seems clear that to say that an action is morally good is to say something about the mind of the agent.
On the other hand, actual consequences do not seem to enter the equation: for instance, if agents A1 and A2 carry out an action with the exact same amount of information, the same intentions, the same expected results, etc., and the action of agent A1 is morally good, then so is the action of agent A2 – even if, for some unexpected reason, the action of A2 resulted in harm to third parties, which the action of A1 did not.

Of course, that's just a hypothesis I posit, but whether or not actual consequences enter the equation is irrelevant from the perspective of EN:

If it turns out that moral goodness is not a mental property but a complex property including a mental component and a consequential component – for example -, that's perfectly okay with EN as well.

Granted, again, a theist might claim that all mental properties are a problem for EN, but that would no longer be a metaethical argument for theism.

5.3.2) Moral badness, moral wrongness, and immorality

The case of moral badness seems similar to that of moral goodness: the best candidate is a mental property.

The reasons for that are the same as in the case of moral goodness, so I won't repeat them.

Also, as in the case of moral goodness, if it turns out that moral badness is not a mental property but a complex property including a mental component and a consequential component, that's perfectly okay with EN as well, as long as mental properties are no problem.

As for moral wrongness, it seems that the property of moral wrongness is the same as moral badness, for any behavior, omission, or attitude: the term 'morally wrong' applies to behaviors, omissions, attitudes, etc., but not to people (unlike 'morally bad', which applies to both). Still, it seems clear that, when it comes to behaviors, etc., 'morally bad' and 'morally wrong' mean the same, so the properties are the same.

The same goes for immorality: it seems that, both for behaviors, etc., and people, the property of being immoral is the same as the property of being morally bad.

What about, say, a morally bad political regime?

In those cases, perhaps more than one thing may be meant, depending on context.
For instance, the person making that assessment might be saying that:

a) The leaders are behaving immorally by imposing such a regime on the rest of the population, or

b) The system results in morally worse actions, in terms of seriousness and number (or some combination of those), or

c) Both a) and b)

There may be other possibilities, of course, and those are difficult matters, but in any event, none of that would be problematic for EN.

5.3.3) Moral obligations, and 'ought'

As we saw above, moral goodness, moral badness, moral wrongness, and immorality do not seem to present a problem for EN.

Someone might suggest that 'ought' and moral obligations are a different matter, and that there is a problem for EN in that case. But it would be up to them to show that mental properties – or, perhaps, combinations of mental properties and relations, etc. – wouldn't be enough, and that something else, over and above that, is required.

Moreover, it seems that even talk of (moral) 'ought' can be reduced to talk in terms of moral goodness, moral wrongness, etc.; since EN can handle moral goodness, moral wrongness, etc., and no further property is needed for a moral ontology, EN can handle it. I will make a more detailed case for such semantic reduction later.

As for moral obligation, 'A has a moral obligation to do X' seems to be equivalent to 'A ought to do X', so it seems that that would not create any problems for EN, either, as long as 'ought' does not create them.

Still, even if talk of moral obligation couldn't be reduced to talk in terms of moral goodness, moral wrongness, etc., the burden would remain on the theist to show that something other than mental properties or complex properties involving mental properties and relations, etc., are required in a moral ontology.

6) Prescription and description, 'is' and 'ought', and related matters

Somewhat related to but different from the Open Question Argument, a theist might raise issues like is/ought, or description/prescriptions, and make claims like the following:

C1: Science only deals with descriptive 'is', but not with prescriptive 'ought', so science cannot tell us what we ought to do.

C2: You can't derive an 'ought' from an 'is'.

Some theists give a number of different reasons why, allegedly, C1 or C2 would be a problem for EN.

In this section, I will take a look at the issues of is and ought, prescription and description, and what science does, in order to clarify the matters at hand, showing that there is no particular difficulty involved – leaving aside practical difficulties due to the complexity of human psychology, which aren't relevant in this context, since accepting that human psychology is very complex is not a problem for EN.

6.1) Science and description

A more or less common claim is that science can't tell us what we ought to do, but can only deal with what is. I've already addressed the matter of science and morality earlier, and will do so again later in this section, but in this subsection, I will make other points about what science describes, or what scientific descriptions entail.

In particular, I'm interested in the fact that science doesn't just deal with what is, but also with what was, will be, would be, will happen, would happen, happened, etc., and with what we or other animals feel, and so on.

That can easily be seen by looking at a few statements that can be made by science, or based on science.

So, let's consider the following statements (I will leave the 'such-and-such' conditions unspecified for simplicity):

ST1: [pointing at a glass of water] If we heat up that water to 100 degrees Celsius, it will boil.

ST2: [pointing at a glass of water] If we heated up that water to 100 degrees Celsius, it would boil.

ST3: About 65 million years ago, a large asteroid hit the Earth, and caused the extinction of many species.

ST4: On such-and-such day, there was a solar eclipse, visible from such-and-such regions. In other words, on such-and-such day, a solar eclipse happened, and a person with normal human vision looking at the sky from such-and-such regions would have been in a position to see it.

ST5: On such-and-such day, there will be a solar eclipse, visible from such-and-such regions. In other words, on such-and-such day, a solar eclipse will happen, and any person with normal human vision in such-and-such regions will be in a position to see it.

ST6: A billion years ago, there were no humans on Earth. Moreover, there were no other primates, either.

ST7: [pointing at Joe] If someone stimulated such-and-such parts of Joe's brain in such-and-such manner, he would feel pain.

ST8: [pointing at Joe] If someone stimulated such-and-such parts of Joe's brain in such-and-such manner, he would have a headache.

ST9: [pointing at Timmy, a capuchin monkey] If someone stimulated such-and-such parts of Timmy's brain in such-and-such manner, he would have a headache.

It's apparent that all of these statements can be made based on sufficiently advanced science. Someone might object to ST9, but they would have to make a case for it, since it seems clearly unproblematic: biologists usually do deal with what's painful to non-human animals. In any case, ST9 isn't even needed to make the main points of this section.

That aside, I'll point out that at least some of those statements should be understood as having implicit conditions: for instance, ST1 has the implicit condition that we do not, say, increase the pressure to prevent boiling. That kind of talk with implicit conditions is normal both in science, and in daily life.

For instance, we may say 'if you put the cheese in the freezer, it will last for x days', with the implicit condition (among others) that the freezer will not stop working.

All or most of this is probably rather obvious, but I included this subsection in order to more clearly set the stage for later subsections of this section.

6.2) Is and ought

In this subsection, I will argue that any moral 'ought' or 'should' statement is semantically equivalent to moral statements using only 'is', 'was', 'would be', etc.

To show that, I will consider some statements, in which the 'ought' and 'should' are moral ones (not, say, means-to-end ones), and compare them with moral statements that do not contain 'should', or 'ought'.

O1: Hitler ought not to have ordered the Holocaust.

SH1: Hitler should not have ordered the Holocaust.

I1.0: Hitler behaved immorally by ordering the Holocaust.

I1.1: Hitler's order to carry out the Holocaust was immoral.

I1.2: Hitler's behavior consisting in ordering to carry out the Holocaust, was immoral.

I1.3: The action 'Hitler ordered the Holocaust' was immoral.

It seems apparent that each statement entails all of the others, just by the meaning of the words.

In other words, statements like 'O1 entails I1.2', or 'I1.2 entails O1', are analytical; they're true just by the meaning of the words – of course, defining 'O1' and 'I1.2' as before. Readers will, as always, use their own grasp of moral terms to assess the matter, but it seems very clear.

Other statements about what someone ought to have done, or should have done, are handled similarly.

Now, let's consider the following statements:

O2: Bob ought to pay the rent.

SH2: Bob should pay the rent.

I2: If Bob fails to pay the rent, he is acting immorally.

As before, it seems that 'O2 entails SH2' is analytical, and so are 'I2 entails O2', 'I2 entails SH2', etc.

Someone might raise an objection like the following.

But let's say that O2 is true, and then someone kidnaps Bob's family, credibly threatening to kill them if he pays the rent. Then, Bob fails to pay, but he's not acting immorally, so I2 isn't true.

The problem with that kind of objection is that it fails to take into consideration the implicit conditions in statements such as O2, I2, and SH2, and generally most moral statements.

More precisely, if someone says that I2 is false in the alternative scenario, they are interpreting I2 without an implicit 'ceteris paribus' clause that prevents that kind of alternative scenario.

In other words, they're interpreting I2 without the condition that his family is not so threatened.

But under such an unqualified interpretation, O2 is false as well. What might be true is something like 'Bob ought to pay the rent, provided that (among other implicit conditions) his family is not credibly threatened if he pays', but not an unqualified 'Bob ought to pay the rent no matter what'.

In particular, in the alternative scenario meant to show that I2 is false, the statement 'Bob ought to pay the rent even if he has good reason to believe that his family will be killed if he does pay the rent' is not true, either.

In yet other words, the alleged counterexample draws a distinction between O2 and I2 by including different implicit conditions in each of them. But that's not a counterexample: the objector is merely contrasting one possible interpretation of O2, with some implicit conditions, with one possible interpretation of I2 that has different implicit conditions, or none at all.

For that matter, we might as well distinguish between different interpretations of O2 itself, just by including different implicit conditions in the two different interpretations.

So, the objection misses the point. As long as the implicit conditions are the same, then one is true if and only if the other one is, and we can tell that – I claim – by the meaning of the words alone.[4]

That means that moral 'ought' and 'should' statements can be reduced to statements that do not contain those terms, but 'is', 'was', 'would be', 'will', etc., plus terms such as 'immoral', 'morally wrong', etc.
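
Schematically – and only as an illustrative sketch; the notation is mine and purely illustrative, not a claim about the logical form of English sentences – the point might be put as follows, where C stands for the shared implicit conditions:

\[
\mathrm{Ought}(A, X \mid C) \;\leftrightarrow\; \mathrm{Should}(A, X \mid C) \;\leftrightarrow\; \big(\, A \text{ fails to do } X \text{ under } C \;\rightarrow\; A \text{ acts immorally} \,\big)
\]

On this sketch, the alleged counterexample compares Ought(A, X | C) with the immorality conditional evaluated under different conditions C', so it trades on an equivocation rather than refuting the equivalence.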

Someone might say that this is not much of a gain, or even that it's obvious.

However, I'm interested in making this point as a means to set the stage for later subsections in this section.

6.3) Is, ought, and moral obligation

In this subsection, I will argue that any moral statement like 'Agent A has a moral obligation to X' is semantically equivalent to moral statements using only 'is', 'was', 'would be', etc.

Given the result of the previous subsection, it's enough to show that statements about moral obligation are semantically equivalent to moral 'ought' statements, though we may add 'is' or similar statements as well. [4]

For instance, let's consider the following statements:

MO1: Bob has a moral obligation to pay the rent.

O3: Bob ought to pay the rent.

I3: If Bob fails to pay the rent, he is acting immorally.

The three seem clearly equivalent.

Also, we might consider the following:

MO2: Bob has a moral obligation to pay $4000 to Alice.

O4: Bob ought to pay $4000 to Alice.

I4: If Bob fails to pay $4000 to Alice, he is acting immorally.

Once again, the equivalence seems pretty straightforward. [4]

Other statements including the term 'moral obligation' can be handled similarly.
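
Extending the earlier schematic sketch – again, the notation is mine and purely illustrative – the chain would run:

\[
\mathrm{Oblig}(A, X \mid C) \;\leftrightarrow\; \mathrm{Ought}(A, X \mid C) \;\leftrightarrow\; \big(\, A \text{ fails to do } X \text{ under } C \;\rightarrow\; A \text{ acts immorally} \,\big)
\]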

6.4) Morality, description, and prescription

Let's tackle the matter of the distinction between prescription and description.

In which sense of 'prescriptive' are moral judgments prescriptive?

Are they not descriptive?

A more or less common claim is that moral 'ought' statements are prescriptive, whereas, say, color 'is' statements are descriptive.

Now, as we saw before, moral 'ought' statements can be reduced to moral statements including only 'is', 'was', 'will be', etc., plus terms like 'immoral', 'morally wrong', etc.

So, this would entail that statements like 'Hitler was a morally evil person', or 'It is immoral for any moral agent to torture people for fun', etc., are also prescriptive.

But they look descriptive.

By saying that, for instance, Hitler was a morally evil person, I seem to be describing his character, even if I'm not providing particular details as to why he was morally evil. So, it's not a detailed description, but it looks like description nonetheless.

What about 'Bob ought to pay the rent'?

As I argued before, that statement is semantically equivalent to 'If Bob fails to pay the rent, he's acting immorally', which also looks like a description.

Someone might claim that the 'is' form gives the wrong impression, but that would have to be argued for.

But let's take a look at the matter from another perspective, even without considering the is/ought equivalence: Commands, like 'Pay the rent', are not true or false, but judgments like 'Hitler was morally bad', 'You ought to pay the rent', and generally moral judgments, are.

Some philosophers might object to that, and claim that moral judgments aren't true or false, and/or that they're not statements, but that seems very implausible, at least given how people use the words.

Someone might suggest that a moral judgment is a combination of a command and a description of a state of affairs. But that would entail that only part of a moral judgment is true or false, which is also implausible.

The same goes for, say, views that suggest that moral judgments are partially or totally suggestions, pleas, invitations, etc., instead of commands.

In addition to the previous points, it's clear that if moral judgments were commands, or combinations of a command plus a description, etc., that would be of no help for theistic arguments, either. In fact, it would be a problem for them.

Some antirealist metaethical views hold that even though moral judgments are true or false, they do not describe states of affairs or facts, but instead they're some kind of expression of desires, hopes and/or commitments to certain rules. Such views usually posit a deflationary theory of truth, according to which even some judgments that do not describe states of affairs can be either true or false.

While it would be beyond the scope of this article to analyze such views, it seems clear that they would be of no help for any theist defender of a metaethical argument. At least, no such defender has ever posited any such views. Quite the opposite, actually, since such views tend to deny moral facts, which would be incompatible with theism.

So, at least for the purpose of this article, we may rule out such views as well.

In short, plausibly, we may rule out the views according to which moral judgments are, totally or partially, commands, pleas, suggestions, invitations, etc. Moreover, we may also rule out other anti-realist views that posit that moral judgments don't describe states of affairs, at least for the purpose of this article, on the basis that a theist defender of a metaethical argument is committed to rejecting such views.

But then, it seems that moral judgments, like 'Hitler was morally evil', or 'Bob ought to pay the rent', etc., describe states of affairs, and do not involve commands, suggestions, etc.

So, the question remains: in which sense of 'prescriptive' are moral judgments prescriptive?

When we make moral judgments, normally we are motivated not to behave in the way we judge to be immoral, even if such motivation is defeasible. When we do behave in a way we deem immoral, or when we deem some of our past actions immoral, we feel guilt, regret, and so on. Some other judgments, like, say, color judgments, do not normally motivate us like that.

However, that moral judgments motivate us appears to be a matter of human psychology. For that matter, an assessment that, say, a certain behavior will cause us pain, or suffering, also normally motivates us to avoid it, even though the motivation may be defeasible.

Such matters of human psychology and motivation do not seem to be a problem for EN. If a theist claims otherwise, he would have to argue for it, so the burden would be on the theist. Still, in the next section, I will give further arguments against a claim that moral motivation would be a problem for EN.

That aside, a theist might posit that moral 'ought' judgments are prescriptive in the sense that they're about commands given to us by God. However, if he posits such a metaethical theory, he would have the burden to show it's true. Moreover, I will later show that such theories are not true.

Alternatively, a theist might say that even if moral 'ought' judgments aren't about commands, moral obligations are in some sense constituted by God's commands. That would also be a burden on the theist, though. Moreover, I will later show that such theories are not true, either.

6.5) Science and morality, part 2

Earlier, I've tackled the issue of science and morality, in the context of the color analogy, and the Open Question Argument.

Here, I'd like to further explain how confused the claim is that morality is somehow outside the realm of science, at least if we accept – as theists do – that there is moral knowledge, moral truths, etc.

Indeed, moral assessments like 'Hitler was a morally evil person', or 'Bob ought to pay the rent', describe some state of affairs, as do, say, 'Nazi uniforms were not red', 'Hitler was ill', or 'Hitler had syphilis', and so on. If moral assessments are also prescriptive, that is a matter of human psychology, and thus can be tackled by science as well.

It is true that, in order to make a machine capable of assessing whether someone is behaving immorally, or to find necessary and/or sufficient conditions for immoral behavior, scientists would need to trust the human moral sense, even if under certain controlled conditions, and that would make the matter difficult.

However, that's not different from other cases, like color or illness, except perhaps in terms of degree of difficulty, which is not a relevant difference in this context.

How could we, say, find out what wavelengths correspond to green light, without trusting the human visual system in our experiments, even if under controlled conditions?

How could we make a machine that ascertains the color of an object, without trusting the human visual system in our experiments, even if under controlled conditions?

How could we, say, ascertain that a person is ill, without trusting human intuitions about illness, at least under controlled conditions?

Someone might suggest that we can ascertain illness by looking at certain patterns of behavior, or at how the person looks, or by looking for a virus, or generally by figuring out whether some traits of that person match the conditions established in a book or manual issued by a professional association of physicians. However, regardless of how we go about that, in the end we would rely on human intuitions to figure out which conditions are illnesses. For instance, if we have a manual that tells us that such-and-such conditions are illnesses, we – i.e., some humans – need to write the manual first, and in order to do that, we need not only to study such conditions, but also to intuitively apprehend that they're illnesses.

But it's not only color, illness and morality. In fact, even if we want to ascertain whether there is a supermassive black hole in the center of the galaxy, we need to make some observations, trust our observations or the observations of others, and rely on theories that were posited by other humans and tested in conditions in which they had to rely on human faculties as well (e.g., to read the result of the experiments).

Granted, humans do not have a 'black hole detection mechanism', while we do have the capability to intuitively apprehend color, immorality or illness under some conditions, so there is a difference with regard to how direct the connection between our intuitions and what we're trying to figure out is, but my point is that more or less directly or indirectly, trusting human faculties in science is inevitable, and there appear to be no good reasons to exclude color, illness or morality from the scope of science.

In brief, the claims that moral 'ought' statements are somehow beyond the scope of science, and/or that normative or prescriptive 'ought' statements are somehow a problem for EN, are both baseless and false.

As a side note to finish this section, I'd like to point out that even under Divine Command Theories – which are all false, but leaving that aside for the moment, and for the sake of the argument -, there would be no reason why science wouldn't be able to tackle morality, or to make a machine that can figure out moral truth.

After all, moral assessments would still describe certain states of affairs, and we would still have the means to ascertain moral truth, in finitely many steps.

So, scientists could – for instance – study how humans normally make those assessments, use controlled conditions to find out when something is interfering with the normal system, and so on, and then try to make a machine that would be able to do the same.

To be clear, I'm not claiming that such a project would succeed. It depends on how complex the matter is, but the point is that even DCT do not entail the failure of such projects – a side note, of course, since DCT are all false.

7) Motivation

Another issue that the theist might raise – perhaps, but not always, in the context of the is/ought issue – is that of motivation.

That's an issue usually raised by expressivists, quasi-realists, etc. Theists are committed to the rejection of such anti-realist views, so they're not in a position to make all of the arguments anti-realists make.

Still, a theist might make some arguments, so I will consider the matter to some extent:

7.1) Psychology, psychopathy, and morality

First, plausibly, as a matter of human psychology, humans normally feel inclined to avoid behaving immorally, just as they're inclined to avoid, say, pain, hunger, thirst, social shunning and/or ridicule, predators, acting irrationally, eating rotten meat, and so on.

Some of those aversions might be shared by all other species (e.g., aversion to pain), whereas others might not (e.g., aversion to eating rotten meat), but in any event, those are matters of human psychology.

But none of that is a problem for EN: it's a matter of human psychology.

For that matter, we can easily conceive that, as a matter of zurkovian psychology, a zurkovian who assesses that she has a z-moral obligation to do Y, normally is also motivated to do Y.

Second, a potential exception to the moral motivation in humans would be the case of psychopaths.

If it's true that some humans with abnormal minds do not feel motivated at all to do X when they sincerely judge that they have an obligation to do X, that shows that the motivation is not always present, even in humans.

On the other hand, it might be that any human who makes a sincere moral judgment that he has a moral obligation to do X, feels a motivation (even if very weak) to do X.

If so, then that's a motivation that persists to some extent even in the case of seriously abnormal minds, such as those of psychopathic serial killers, etc.

That would have to be argued for, but it would be an irrelevant point in the context of metaethical arguments for theism, given that EN can handle it, either way.

7.2) Aliens again

Perhaps, someone might say that any entity – human or not – who makes a sincere moral judgment that she has an obligation to do X, feels motivated to do X, at least to some extent.

So, let's consider the following scenario:

Millions of years into the future, humans make contact with zurkovians.

Things go smoothly for the most part, despite some difficulties, and after a few centuries of communication, on a joint scientific mission, some zurkovian scientists tell some human scientists that all humans behave z-immorally when they do X – for some X that humans do not have any moral obligation not to do –, but it's not a serious case of z-immorality; it's a minor z-immorality.

Now, the zurkovians in question are the experts in z-morality.

Given the evidence, and since they find no good reason to believe that those zurkovians are lying, the humans in question conclude that, in fact, all humans act z-immorally when they do X.

Human scientists also come to know, by similar means, that human beings are z-color 1, 2, and 3, or z-color 1, 2, and 5, or z-color 1, 3, and 5.

Thousands of light years away, and thousands of years later, some human colonies receive the information with curiosity. Those humans rationally come to believe that they are, in fact, z-color 1, 2, and 3, or z-color 1, 2, and 5, or z-color 1, 3, and 5, and that they act z-immorally when they do X.

Yet, they feel no motivation at all to refrain from doing X.

Let's now reverse the roles of humans and zurkovians.

It might be that some zurkovians come to believe that some of their actions are mildly immoral.

That belief may or may not be true, of course.

In fact, I'm not even taking a stance on whether zurkovians could be moral agents; if required, we can just stipulate that some humans deceived some zurkovians about that, for some reason, and the zurkovians didn't figure it out.

In any case, the point is that those zurkovians feel no motivation whatsoever to stop doing the actions they believe to be immoral.

Someone might raise the issue of the sincerity of the judgments.

However, just as the humans were making sincere z-color and z-moral judgments, even though humans can't see z-colors, and have no z-moral sense, the zurkovians were making sincere color judgments – even though they can't see colors, but z-colors -, and they were making sincere moral judgments as well.

In short, there was no lack of sincerity on anyone's part.

True, zurkovians are just imaginary characters I made up. However, that is not relevant.

The point I'm making here is that the situation is conceivable, so plausibly there is no semantic requirement of motivation.

It seems, then, that EN can handle moral motivation without a problem, as a matter of human psychology.

Granted, the theist may well deny that moral motivation is a matter of human psychology.

However, there appears to be nothing in our moral language requiring otherwise, so simply denying that it is a matter of human psychology fails to meet the burden of showing that EN even has any difficulty handling moral motivation.

Perhaps, someone might object that the zurkovians weren't making moral judgments at all.

According to this objection, in order to make a moral judgment, it's necessary to have a certain phenomenology associated with it, and zurkovians do not have that phenomenology – or they do, but associated with z-morality, not with morality.

I will consider that objection, as well as another one, in the following subsection.

7.3) Moral phenomenology and moral judgments

Someone might claim:

O1:

a) A moral sense, with a certain phenomenology associated with moral judgments, is required for making moral judgments.

b) zurkovians may have that phenomenology – or a similar one -, but associated with z-moral judgments, not with moral judgments.

So, zurkovians are unable to make moral judgments.

c) Any agent, human or not, who (sincerely) judges that she has a moral obligation to do X, feels motivated to do X, at least to some extent.

However, that objection would seem to lead to moral anti-realism, which blocks any theistic metaethical arguments, independently of other considerations.

That is because, if moral judgments assert propositions, and asserting propositions is enough for making moral judgments – i.e., they do not also involve a certain attitude, feeling, etc. -, then it seems clear that no specific associated phenomenology is required.

For that matter, a blind person can assert 'my shoes are brown', and that is a color statement.

Moreover – though not required here -, the statement may be justified and true – let's say he got the information from reliable sources.

Similarly, a colorblind person can make the judgment 'that apple is red', and that's a color judgment (and it may be true, justified, etc.). Likewise, zurkovians can make color judgments, and humans can make z-color judgments, and so on.

The same happens in the case of moral judgments, z-moral judgments, and so on.

Still, in any event, the color examples, zurkovians, etc., show that this is not a particular characteristic of moral statements.

So, let's consider a somewhat similar objection that might not lead to antirealism:

O2:

a) A moral sense, with a certain phenomenology associated with moral judgments, is required for grasping the meaning of moral terms.

b) zurkovians may have that phenomenology – or a similar one -, but associated with z-moral judgments, not with moral judgments.

So, zurkovians are unable to grasp the meaning of moral terms.

c) Any agent, human or not, who grasps the meaning of moral terms and sincerely judges that she has a moral obligation to do X, feels motivated to do X, at least to some extent.

Regardless of whether that is true, the crucial point here is that O2 links the phenomenology of a moral sense with grasping the meaning of moral terms, and the latter with motivation, but that particular phenomenology is a feature fixed by human psychology, based on the human moral sense.

In other words, human psychology would be fixing the referent, in the following sense: if O2 is true, then any entity who grasps the meaning of moral terms has a psychological makeup similar to that of humans in the relevant sense: namely, she has a moral sense with a certain phenomenology.

If that is the case, then of course if that phenomenology involves motivation in humans, it would involve them in any agent with the relevantly similar psychology.

For that matter, a zurkovian might say the same about z-morality, which would have a similar phenomenology.

But none of this is problematic for EN, since it's again a matter of the psychology of the entity making the judgment.

7.4) A potential evolutionary account

A basic sketch of a potential psychological, epistemic and ontological – not semantic – account, which handles motivation and is compatible with EN, might be as follows:

a) Humans have an evolved sense that allows them to track some properties in other humans.

It would also work on sufficiently similar beings, precisely due to similarity; how sufficient they have to be is a matter for future research.

b) When triggered, that sense provides some motivation to act in certain ways, more specifically to avoid some actions and perform others.

c) The properties in question are plausibly mental properties involving matters such as concern for other humans and – perhaps, by similarity – other beings.

In other words, those properties are plausibly other-regarding mental properties, and maybe some related properties and/or relations as well.

d) The causes of the evolution of a sense picking those particular properties and not others are the conditions in the evolutionary environment.

e) As language developed, people invented words that they may use when that sense is triggered; those are moral terms.

So, moral properties would be those properties, or – maybe depending on how we use the word 'property' - properties that supervene on them, but in any case we would not have to add any further entity to an ontological account.

In other words, an ontological account would not have to contain anything but beings with certain kinds of minds – humans suffice -, just as an ontological account of color needs nothing but light, some objects with reflective properties, and generally properties and entities that EN can handle.

Conceivably, other entities, such as zurkovians, may have evolved a sense that is equally motivating to them and maybe even feels just the same way when triggered, and that tracks similar properties, but not quite the same properties.

Also, clearly moral knowledge would not be a problem in that case, just as color knowledge is not.

As for semantics, a non-theist does not need to provide a theory, just as a theist wouldn't have to, if the theist were making no metaethical argument.

Furthermore, a non-theist does not need to provide any psychological or ontological theory, either, but I'm providing a sketch of a potential one as a means of showing the challenges faced by the theist.

On the other hand, if a theist is making a metaethical argument for theism and claims that there is something in the meaning of moral terms that requires properties and/or beings other than the ones the previous hypothesis could account for, he would of course have the burden of showing that.

So, the theist would have to make a semantic/ontological metaethical argument, in the sense that they would have to show that moral language is such that assertions such as 'X is immoral', 'Agent A ought to Y', etc., entail the existence of properties and/or beings that the previous hypothesis cannot handle.

Moreover, the above is not the only account compatible with EN, so the theist defender of a metaethical argument would have to argue against all of them.

8) The road so far - II

So far, we've established the following points:

1) Purely epistemic challenges to moral knowledge under EN fail.

Indeed, there are accounts of moral knowledge compatible with EN.

Granted, a theist might argue that such accounts would not really account for moral ontology – e.g., that those accounts would entail, plausibly, no moral properties.

However, that would no longer be an epistemic challenge, but an ontological one.

Granted, also, a theist might make an argument against all knowledge under EN, but that would no longer be a metaethical argument.

2) The Open Question Argument fails to provide any support for any theistic metaethical argument.

3) Accounts of moral motivation compatible with EN are available.
Granted, a theist might raise a semantic-ontological challenge to such accounts, but the burden is on them.

4) Generally, it seems, the following options might be available to the theist:

a) A semantic-ontological challenge.

In those cases, the theist would argue that some moral sentences such as 'Agent A ought to do X', 'Y is immoral', etc., entail, by the meaning of the moral terms, the existence of entities and/or properties that accounts such as the one I suggested earlier, or any other account compatible with EN, cannot handle.

The kind of entities and/or properties the theist might want to use might be some kind of 'mind-independent value' (whatever that might mean), or souls, or libertarian free will.

In the following sections of this article, I will address such attempts. In particular, I will focus on Linville's 'Argument From Personal Dignity' and some of Craig's arguments as examples, but I will make general points that apply to any potential variants of their arguments, and to a number of other arguments claiming that moral language commits us to the existence of similar properties and/or entities.

While I can't of course address all possible arguments theists might ever come up with, my objective is to present counterarguments that – perhaps, with some minor adjustments – will deal with all present-day ones, or any similar variants, and a few more possibilities as well.

b) An empirical challenge:

In this case, a theist might make one of the following arguments:

i) If EN were true, something like a species-wide moral sense would not have evolved.

ii) If EN were true, even if a species-wide sense tracking some mental properties relevant in social life evolved, that sense would be different from what the moral sense actually is.

iii) As a matter of fact, if EN is true, then humans just don't have such a sense, and the properties that our respective apparently moral senses are tracking vary wildly from human to human.

iv) The disjunction of i), ii) and iii) is true.

Arguments like those are not at all common among theists, but still, I will address the matter of heroism, and – just in case – disagreement.

Granted, in the future, some theists might raise alternative variants I've not addressed, but at least, this article should cover present-day arguments, and a bit more.

9) Linville's argument from personal dignity

Back to Linville's arguments, he makes an ontological metaethical argument based on "personal dignity".

I will address his argument in this section, but I will also make a number of general considerations that apply to other theistic metaethical arguments based on moral ontology.

9.1) "Why is that immoral?"

A question like that is common in moral discussions.

For instance:

Tom: Same-gender sex is immoral.

Alice: Why is that immoral?

In asking that question, Alice – and generally, the person asking the question – is asking for reasons as to why it's immoral.[7]

So, the discussion or debate goes on, and the person trying to persuade someone, tries to give reasons that would be accepted by the other person.

However, that reason-giving cannot continue indefinitely, since the discussion has a finite duration.

It might be that the disagreement will persist, but the idea is to manage to present reasons for one's moral assessment that the other side will find persuasive, appealing to commonly shared intuitions.

Theists are not in a better position to do that, of course.

For instance, if a theist said 'That is immoral because Yahweh says it is', his interlocutor might question both the existence and the trustworthiness of Yahweh.

If a theist says 'That is immoral because God says it is', meaning 'That is immoral because an omnipotent, omniscient, morally perfect being says it is', then a question would be: 'Why do you think so?'. The person making the claim would not only have the burden of showing that some omnipotent, omniscient entity exists and makes such a claim, but – among other things – the burden of showing that such an entity is morally good (furthermore, that he's plausibly morally perfect), and, even then, that God plausibly has no mysterious reasons to lie – even though he somehow mysteriously created a world with pain, suffering, moral evil, etc.

In any case, and leaving that aside, actual moral debates are about reasons, not about ontology – at least, when they're not confused – and they don't try to reach some ontological bottom, so to speak.

So, that's as far as daily life goes.

On the other hand, a theist defender of a metaethical argument for theism might demand an ontological account of morality from the non-theist.

He might ask, for instance:

Q1: Why are actions in category C immoral?

Here, 'category C' is, say, instances of people torturing other people for fun.

Outside ontological debates – in daily life -, that would be a very odd thing to say.

What people normally try to do in the case of moral debates in daily life is to frame the matter in terms that would allow their interlocutors to ascertain, by their own lights, that a certain specific, concrete behavior is or was immoral, or that a certain category of behaviors only contains immoral ones, etc.

In daily life, in the case of category C, we already reached a clear moral truth, so the question would be puzzling.

A perfectly fine reply would be 'it's obvious; what else do you need?'

There is no need to keep delving any further, or try to develop an ontology.

Yet, the theist in this case is not asking a usual, daily life question, but demanding some ontological account from the non-theist, which is puzzling. However, there is no burden on the non-theist to provide any ontology of morality – or of color, for that matter -, as I explained earlier.

Moreover, there is a serious problem with the insistence on questions like Q1, which we can see by taking a look at the color case. Let's consider the following question:

Q2: Why is that object over there, that looks obviously red to us, red?

What kind of answer would be expected?

Let's said someone replied something like:

R2: That object over there, that looks obviously red to us, is red because under such-and-such conditions, it would reflect such-and-such wavelengths (of course, they might also clarify both 'such-and-such' in great detail).

But then, for that matter, someone might ask:

Q3: Why is that object over there, which under such-and-such conditions, would reflect such-and-such wavelengths, red?

R3: Any object which, under such-and-such conditions, would reflect such-and-such wavelengths, is red.

Q4: Why is it that any object which, under such-and-such conditions, would reflect such-and-such wavelengths, is red?

And so on...

It seems that, even in the case of color, and despite considerably advanced science, there is no bottom to be reached.

Moreover, it's not clear that any end would be reached, ever, if someone just keeps asking.

Perhaps that's not the kind of answer the theist is looking for; perhaps he wants one that semantically closes the matter, in the moral case. But there is no burden on the non-theist to provide one.

Moreover, in the case of color, we may ask: What kind of account would help in the case of, say, Q2?

Q2: Why is that object over there, that looks obviously red to us, red?

Again, an issue is: what kind of answer is the theist even looking for?

I'll come back to these matters later, after some considerations on Linville's "personal dignity" case:

9.2) Bayoneting alien cyborgs for fun

As part of his case in support of a 'mind-independent value', Linville uses the hypothetical example of soldiers bayoneting babies for fun, defending the hypothesis that an ontological account of the moral wrongness of the actions of the soldiers requires positing some property in the babies.

Linville: (p. 419, 420)

Thomas E. Hill (1991) offers a potentially usable model here. He asks, if we do not think that, say, natural environments or works of art enjoy moral standing in their own right, might we explain our “moral unease” on contemplating their destruction by asking the question, “What sort of person would do a thing like that?” Our attention is thus shifted from a question of rights or direct duties owed anyone or anything, to an assessment of character. Surely, an even harsher judgment is appropriate regarding Dostoevsky’s soldiers. Perhaps some combination of those mentioned can work together to arrive at the conclusion that infanticide is impermissible. But such answers, even taken together, seem altogether unsatisfactory. Surely, if bayoneting babies for fun is morally wrong, the wrongness must be explained chiefly in terms of what is done to the baby.

The word 'character' seems to imply a more or less permanent or enduring set of dispositions in the perpetrator, and a claim that a particular action is immoral does not seem to entail that. In the case of soldiers bayoneting babies for fun, it does seem that those soldiers have that enduring set of dispositions – i.e., they're evil – but even people who aren't generally bad may sometimes carry out immoral actions.

So, I don't think that a general ontological account in terms of character would be correct – though even if it is, character is something that EN can account for, so there is no need to deny it, either.

On the other hand, I will argue that there is no need to posit some property in the baby in an ontological account of the moral wrongness of the soldiers' actions. Furthermore, I will argue that such an ontological account would be erroneous.

More precisely, it's the minds of the soldiers during their actions – and, perhaps, leading up to them – that any correct ontological account of the moral wrongness of those actions has to point to.

That is not to say that I'm objecting here to the claim that babies have the right not to be bayoneted for fun by soldiers. I'm not; I'm only objecting to including 'rights of the baby' in any ontological account of the immorality of the soldiers' actions.

More specifically, the position I'm suggesting here, as an example of an account compatible with EN, is the following: [8]

a) The judgment 'Babies have the right not to be bayoneted for fun by adult humans' follows from 'If an adult human bayonets a baby for fun, then the adult human is behaving immorally', by the meaning of the words.

b) The property we are identifying when we say – for instance – 'It's immoral for an adult human to bayonet babies for fun' is not a property of the babies at all, but a property of the soldiers – of their minds, more specifically. In other words, the immorality of the soldiers' actions is, from an ontological perspective, a mental property of the soldiers who are acting: the claim that their behavior is immoral identifies a property of the soldiers' minds, not a property of the babies.

Let's consider the two claims:

Point a) seems to be clear enough.[4]

As for b), the kind of beings the soldiers are harming, as far as the soldiers can tell, does matter from a moral perspective in this context, but that's because of the mindset of the perpetrator. To see that, let's consider the following scenario:

Some very advanced aliens have reached Earth, and are studying Earth's biology.

They're particularly interested in humans, so they send some cyborgs to take a closer look.

In particular, cyborgs that look like women and cyborgs that look like babies are there, instead of women and their babies.

The cyborgs are made mostly of human tissue, including blood, organs, etc. However, they have no human brains; each has a computer instead.

They mimic babies' reactions well enough to fool the soldiers, who have never encountered alien cyborgs, and don't even have a concept of such a thing.

However, the cyborgs feel no pain, no fear, etc., and do not suffer at all when bayoneted, and the computer that works as a brain is well-protected, encased in armor – but the soldiers don't try to bayonet the heads anyway.

So, the soldiers have no way of telling that those aren't babies, based on the information available to them at the moment.

After the soldiers leave, the aliens retrieve their cyborgs, discard the biological tissue, and then grow new tissue. No entity suffered, but the soldiers enjoyed themselves very much, since they really liked bayoneting babies for fun, and they had the same experiences as if they had been bayoneting babies for fun, down to the belief that they were, in fact, doing so.

No babies were harmed in the previous scenario. No being suffered at all at the hand of the soldiers.

However, it seems clear that the actions of the soldiers were just as evil as the actions of those who actually bayoneted babies for fun.

So, it seems clear that the immorality is in the mind of the soldiers, not in some 'mind-independent value' (whatever that means) of the baby.

Of course, it's not the case that the wrongness is in the harm the soldiers do to themselves – if any.

That would be a false theory.

However, whatever property our moral sense tracks, and which normally elicits our assessments of 'morally wrong', it seems to be a mental property of the perpetrator.

When doing ontology, we shouldn't confuse other-regarding mental properties of the minds of the perpetrators with actual properties of other minds.

In particular, the fact that, as far as the soldiers can tell, they're bayoneting babies – and not alien cyborgs that cannot suffer – of course makes a big moral difference.

However, from an ontological perspective, that's still a difference in the minds of the perpetrators – even if it's a difference in their beliefs about the targets of their actions; as long as the perpetrators' mental states are the same, the moral wrongness is the same.

While the attitude that the perpetrators have towards entities that, as far as the perpetrators could tell based on the information available to them, have such-and-such properties, relations, etc., is morally relevant in many cases, such entities do not need to be present or even to exist for the actions to be immoral. Moreover, the degree of immorality depends only on the perpetrators' minds, not on the actual presence of any other entity.

That said, if that is not the case and the previous analysis is mistaken, then that is not necessarily a problem for EN, either: for instance, if it turns out that the moral wrongness of an action is not a mental property of the actor but a complex property including a mental component and a consequential component, and/or involves relations, etc., that's fine with EN as well, since those properties can be accommodated just as easily as any complex mental property.

Of course, a theist might argue that all mental properties are a problem for EN, but that would no longer be a metaethical argument.

Also, a theist might argue – as Linville does, when he posits mind-independent value (whatever that means, if anything) – that some property other than any of the ones I mentioned above is required, but no good reason to suspect that that is the case has been provided.

9.3) Delinquent mathematicians and alien robots

Let's consider another example; Linville constructs a scenario in which you parked your car near the Mathematics Department, and when you come back, it turns out that some people left the car on blocks and painted theorems on it with graffiti, and so he asks whom the delinquents wronged.

Linville (p. 419/420)

But, this side of the Bay area, we are not likely to find people suggesting that they have wronged the car, done it an injustice or violated its rights. Cars are not plausibly thought to have moral standing – not even Bentleys. Rather, we might suppose that the wrongness of such vandalism stems from the violation of a direct duty to you to respect your property rights or the like. And that direct duty carries with it an indirect duty regarding the car.

First, let's suppose that, before the mathematicians vandalize the car, the owner dies of a heart attack, leaving no descendants.

Furthermore, let's suppose that it's less expensive for the city to destroy that old car than to bother selling it. Hence, nobody was harmed.

Yet, the mathematicians did just the same.

Of course, readers will make their own assessments, but I maintain, once again, that their actions are just as immoral as if the owner had been alive.

Someone might raise issues about an afterlife, and duties to the dead owner, or duties to the city or state.

So, let's consider the following variant:

Let's suppose that the apparent car is no car at all.

Instead, it's an alien robot that looks like a car, left there by an alien robot that looks like a human, and which is studying humans.

The fake car's skin is just for show, and can go back to its initial form without a problem – i.e., there was no damage.

Moreover, the attack on the fake car actually helps the aliens gather information about human behavior.

By the way, the aliens that sent the robots do not find the actions bothersome at all.

To them, it's just more data, and they're content with that.

No one was harmed. No one suffered. Nothing suffered. No car was damaged. No robot was damaged.

It's true that the situation of those humans fooled by aliens might elicit some sympathy from some of us, but I would say that the conclusion that the mathematicians acted just as immorally as in Linville's example is clear.

At any rate, we can always hypothetically have the mathematicians abducted by the aliens and put in a holodeck without their knowing, and reach the same conclusion: in that case, the holodeck does not need to be perfect, but just good enough so that the mathematicians with the same mental states – including attention to details, of course – as in Linville's scenario, can't tell the difference.

Generally, the point here is essentially the same as in the 'alien cyborgs' case – namely, that the moral wrongness is in the mind of the perpetrator, and no other entity needs to be posited.

9.4) Someone has been wronged

Sometimes we can truthfully claim that A has wronged B.

Someone might ask: 'What to make of that, if it seems that actions would be just as immoral if no one had been wronged, but A had not been in a position to tell the difference (e.g., holodecks, cyborgs, etc.)?'

But then, an adequate reply would be: 'It's a complex claim, but why would that be a problem?'

The claim that A wronged B is a complex claim, but we don't need to introduce any further entities in any ontological account.

The properties required for the claim to be true seem to be:

1) A complex mental property of A: he acted immorally, and this particular case of immorality involved a deliberate attempt to harm B and/or a failure to care about her in some way.

2) B is a moral agent. Again, that's a property of B's mind.

3) It might also require certain effects on B, or social effects at least.

It's not entirely clear to me, though.

Let's say that A intends to kill B, just for fun, but fails utterly, does no damage to anyone, and neither B nor any other human ever finds out.

Surely, A acted profoundly immorally. But did A wrong B, or did A try to wrong B but fail?

Regardless, there seems to be no difficulty accommodating any of those properties under EN.

9.5) Tracking mental properties: direct tracking, indirect tracking, and ontology

At this point, someone might raise the following questions:

1) Isn't it clear that, in our moral experience, we need to keep track of complex networks of social relations, obligations of one person to another, and so on?

2) Doesn't that show that the correct ontology of moral properties such as 'moral badness', 'moral goodness', or 'moral wrongness' does not exclusively involve mental properties of the people who are morally good or bad, or who carry out actions that are morally good, wrong, etc., but at least relations, consequences, etc.?

The answer to the first question is yes, but to the second one, it's no.

We track mental properties indirectly, by means of behavior.

For instance, in order to ascertain that someone is in pain, or afraid, or that they have certain beliefs, we generally take a look at how that person behaves – including, but not limited to, statements about herself.

Granted, there are a few other ways in which we might ascertain or try to ascertain that a being has a certain mental property, in some cases:

For instance, A might tell us that B has mental property P.

However, that generally is also an indirect way of looking at behavior. [9]

Generally, the fact is that we track mental properties that may regularly differ from human to human by means of observing behavior.

So, if the property 'moral goodness' is a mental property, then plausibly the way to track it is by observing behavior.

And while we can learn that certain behaviors are morally good – for instance – after observations of what other behaviors they're regularly connected to, there plausibly are behavioral cues that normal humans can all intuitively track – else, it's difficult to see how we would be able to learn more.

However, when it comes to moral ontology, what we need to look for is not the properties that we track as a means of indirectly tracking yet other properties, but the latter – the ultimate ones, in a way.

What the examples in the previous subsections show is that, plausibly, moral wrongness – or moral badness, immorality, etc. - is a mental property of the perpetrator, and shouldn't be explained, from an ontological standpoint, in terms of rights of the victims, or any other properties of the victims.

In other words, moral badness is in the mind.

Similar considerations can be used in the case of the property 'moral goodness'.

Other properties such as justice, or fairness, may require further analysis, so I'm not making a claim that all moral properties are like that.

But that's unproblematic, since EN does not have a problem with n-ary properties like social relations, complex properties involving mental properties of many people, and so on. Although I'm not sure why those would need to be included in an ontology, that would not be a problem, either.

9.6) Attempted crimes and punishments

Someone might suggest that if moral wrongness is in the mind of the perpetrator, then the law should punish attempted crimes just as much as committed crimes.

Similar considerations might be raised in cases of negligence.

That actually sounds rather plausible, though I suppose there might be alternative possibilities.

For instance, generally, we track mental properties – including moral properties – by behavior.

On average, failed crimes plausibly show less commitment to committing such crimes than successfully committed ones – i.e., the perpetrator tried, but on average, he probably tried less hard.

I guess it might be argued that that might justify somewhat lesser punishments in some cases.

However, plausibly the level of commitment is usually better ascertained on other grounds, so I'm not sure that that's a significant factor.

With regard to negligence, acts of negligence that do not result in victims are plausibly, on average, acts of less negligence than those that do.

However, as before, I'm not sure that that's a significant factor.

In any case, those considerations are about lesser punishments because of different mindsets in the perpetrators, so that does not affect any of the considerations in the previous subsections.

It remains clear to me that the cyborg-bayoneting soldiers would be acting as immorally as the baby-bayoneting soldiers, and the same goes for other cases.

In addition to the above, when it comes to punishment, there is another matter to consider:

Punishment is costly.

Prior to organized judiciary systems, plausibly it would not have been immoral for a victim of serious negligence – for instance – to impose some limited punishment on the perpetrator.

Moreover, perhaps some other people would have had a moral obligation to assist in that.

On the other hand, with no victims of the negligent act, it's more difficult to see that others would have had the obligation to engage the perpetrator, at least in many cases. There are always some risks.

Translated to this day, a judiciary system substitutes for private retaliation, but a tendency like that might remain.

However, it's not clear that it's justified now that the risks of punishment are usually less severe.

That's a highly speculative hypothesis about human psychology, and I'm in no way claiming it is true. I'm just speculating about some possibilities, but in any case, they're not related to the matter at hand, which is that given the same mental state of the perpetrator, the immorality is the same.

That much seems clear.

9.7) Moral obligations

With regard to moral obligations, it seems to me that they too can be accounted for in terms of mental properties of the person having the obligation. I already argued for a semantic reduction to other moral statements earlier.

9.8) Moral rights

In the case of moral rights, a possible ontological account would be that rights are some mental properties of the agent having them, and that would be fine under EN.

On the other hand, there is an alternative: it may well be that a right of an agent should not be included in a correct ontological account of moral properties, but rather, it can be accounted for in terms of certain moral obligations of other agents, and so no further property needs to be included in an ontology.

9.9) Alternatives under evolutionary naturalism

An important point here is that even if my counterarguments so far failed to show that, say, 'moral wrongness' is a mental property of the perpetrator, that would not necessarily be a problem for morality under EN.

Instead of a complex mental property, the property or properties our moral sense is tracking, and which usually elicit our judgments 'immoral', 'morally wrong', etc., might be a combination of complex mental properties of the actors and mental properties of their victims, and perhaps some relations between them. That would be fine under EN too.

So, it would be up to the theist to show that none of those properties or relations can plausibly account for moral properties.

That aside, let's assess a crucial claim about something Linville and other theists call 'mind-independent value'.

9.10) Mind-independent value?

As explained earlier, there is no good reason to suspect that there is a moral ontology just around the corner, or that the simple hypotheses presented so far are true.
So, even if the non-theist has no ontological account of morality, that's on its own not a problem: the non-theist may simply point out that, plausibly, it would take centuries to figure things out.

Moreover, even if the theist presents a coherent ontological account, they would have to refute something like the evolutionary-mental properties account I sketched so far, and even then, they would have to refute many other possibilities before they have a case.

Still, in this subsection and the following one, I will assess Linville's ontological account:

Linville defines 'intrinsic property' of a thing as one that does not involve any essential reference to any other thing, which means the same (according to Linville's usage) as being a non-relational property, and which entails (according to Linville) mind-independence.

He gives the example of market value to illustrate the point: a property that is determined by what others are willing to pay is not an intrinsic property.

Then, Linville's claim is that humans have what he calls 'dignity'.

While I would say that humans in many cases have dignity, I would also say that 'dignity' does not mean what Linville claims: rather, it also seems to be a mental property, and it does not seem to have to do with anything like 'intrinsic value'.

As I also explained earlier, an ontological account of why an action is morally wrong need not – and should not – include any references to properties of the victim, in case there is one.

Moreover, even if that point about moral wrongness were incorrect, mental properties of the victims would qualify, it seems, as intrinsic properties; so if the correct ontological account involved mental properties of the perpetrators and of the victims, that still would not require anything beyond mental properties, and the matter would still be a psychological one.

In addition to all of the above, the claim of 'intrinsic value' is actually difficult to make heads or tails of.

What would that even mean?

a) A claim that person A values person B is unproblematic, but is not a moral claim.

b) A claim that every person ought to value person B is a moral claim that appears to be equivalent to saying that, for every person P, if P does not value person B, then P is being immoral.

c) A claim that person B ought to be valued is somewhat more difficult to understand.

Perhaps, on a charitable interpretation – which may well be correct – they mean the same as in b), and that is fine.

On the other hand, the person making the claim might be basing it on some odd and mistaken ontology; if so, asking for clarification would be in order.

d) But a claim that humans have mind-independent value is frankly difficult for me to make heads or tails of.

If the claim only meant that every moral agent ought to value humans, that's a claim I can grasp.

However, that does not appear to be what Linville is saying – he seems to be making some ontological claim; but what does that mean?

Linville (p. 432)

And to be told that one ought to value persons intrinsically would seem to imply that persons just are of intrinsic moral worth.

Let's consider the statements.

T1: One ought to value persons intrinsically.

A person making that claim may well mean that one ought to value persons as ends, not only as means to other ends.

We can easily distinguish between valuing some thing or entity as a means to an end, and having ends, which we value for their own sake.

On this point, it is crucial not to confuse non-instrumental value, or final value, with some sort of obscure 'mind-independent value' – a key issue that I will address again in a later section.

In other words, it's very important not to confuse the idea of valuing some being, action, etc., intrinsically – in the comprehensible sense of valuing it for its own sake – with the obscure and perhaps incomprehensible claim of "intrinsic value" in some sense of non-relational, and in particular mind-independent, value.

We value a being B non-instrumentally, finally, or intrinsically, just in case we value it as an end, and we value B instrumentally if we value B as a means to an end.

Of course, someone might value a being B both instrumentally and finally/intrinsically, since she might value B as a means to obtain C, but also for its own sake – so, in particular, she'd value B even if she already had C, or even if she couldn't obtain C through B.

That is all understandable, and in particular, so is talk about valuing some being intrinsically.

Also, in that sense, there is no need to posit any extra entity in order for people to value other people as ends/intrinsically, and not merely as means. In particular, then, there is no need for anything like mind-independent value (whatever 'mind-independent value' might mean, if anything at all).

And so, the claim T1 may very well mean that we ought to value persons as ends, and not just as means. No further ontology is required, and no obscure claims are required.

Let's consider the other claim:

T2: Persons are of intrinsic moral worth.

That claim is very odd, but someone might use it just to mean, for instance:

T2': Every moral agent has a moral obligation to value every person.

That is unproblematic when it comes to understanding the claim, even if it's obviously false.

Maybe we should add the condition that the agent has to know about the person's existence, and perhaps some extra conditions.

However, that's not what Linville seems to mean by T2 at all.

Rather, he's making an at best obscure ontological claim.

In fact, even if the word 'value' is used in his claims, that claim seems unrelated to what in daily life we mean by "value".

We might as well call it not 'intrinsic value', but instead 'property Z', and ask the theist why we should value beings with property Z.

In any case, even assuming that the claim of mind-independent value is coherent, the points made earlier in this section show that the previous claims he makes in his account are not true, and that even if they were, that would still be unproblematic for EN.

So, given all of that, the conclusion is that the argument from 'personal dignity' presents no challenge to moral facts under EN.

In particular, it would do nothing to refute evolutionary accounts of morality like the one outlined earlier in this article.

9.11) Darker and darker

While the previous subsections show that Linville's "personal dignity" metaethical case fails, independently of other reasons, there are other issues I'd like to address, on the subject of certain ontological questions: Linville makes other very obscure claims about them.

Linville (p. 429)

The reason rape is wrong, and, indeed, the reason that it is committed only by bad people, is that persons ought never to be treated in that way.

The claim is worded in a very odd manner, but let's assess it carefully:

L1: rape is wrong.

More specifically, the claim is about a moral agent engaging in rape, and there is a victim who is a person, given the claim that people ought never to be treated that way.

So, L1 might mean something like:

L1': For all A, B, if A is a moral agent, and B is a person, and A rapes B, then A acts immorally.

If that's not what Linville means by 'rape is wrong', in that context, then what is it? [10]

But let's take a look at L2:

L2: persons ought never to be treated in that way.

That's very odd, because the use of the passive voice makes it unclear who the moral agent who ought never to treat persons that way actually is.

However, it may be a claim about all moral agents, so let's try to put L2 in clearer terms:

L2': For all A, B, if A is a moral agent, and B is a person, then A ought not to rape B.

Let's now combine L1' and L2', in the context of explaining why rape is wrong, and assert (with the corresponding substitutions):

L3: The reason why L1' is that L2'.

But that would not be an example of reason-giving!

How would that explain anything?

For that matter, one might assert:

L4: The reason why L2' is that L1'.

But that does not lead us anywhere, either.

We can go in circles, for that matter.
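
To make the circularity explicit, here is a minimal formal sketch – in my own notation, not Linville's: write M(A) for 'A is a moral agent', P(B) for 'B is a person', R(A,B) for 'A rapes B', and W(A) for 'A acts immorally'. If, as suggested earlier in the discussion of moral obligations, 'A ought not to X' reduces semantically to 'if A Xs, then A acts immorally', we get:

$$
\begin{aligned}
\text{L1}' &: \forall A\,\forall B\,\big[(M(A)\land P(B)\land R(A,B))\rightarrow W(A)\big]\\
\text{L2}' &: \forall A\,\forall B\,\big[(M(A)\land P(B))\rightarrow O(A,\lnot R(A,B))\big]\\
\text{Reduction} &: O(A,\lnot X)\;\leftrightarrow\;\big(X\rightarrow W(A)\big)
\end{aligned}
$$

Substituting the reduction into L2' yields $\forall A\,\forall B\,\big[(M(A)\land P(B))\rightarrow (R(A,B)\rightarrow W(A))\big]$, which is logically equivalent to L1'. On that reading, offering either claim as 'the reason' for the other is mere restatement.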

If that's not what Linville meant, then what is it?

Perhaps, it's some other obscure claim about mind-independent value.

9.12) Where is the bottom?

Let's leave aside issues of obscurity and apparent circularity, or the question of whether 'mind-independent value' is meaningful, and let's take a look at the matter from another angle – which shows yet another problem.

L5: Rape is immoral because people ought not to be treated that way.

Now, assuming that that is coherent and even true, someone may turn the tables on the theist defender of the metaethical argument and ask:

Q5: Why is it that people ought not to be treated that way?

Of course, a possible reply is that it's obvious, but that's of no help for the theist, since, for that matter, the non-theist may reply in the same manner to Q1.

Q1: Why are actions in category C immoral?

Here, 'category C' is, say, people torturing people for fun.

If the theist prefers not to accept the answer 'it's obvious' from the non-theist, the non-theist, turning the tables on the theist, may keep asking such questions, and 'it's evident' seems to be blocked for the theist, else the non-theist may use it just as well.

So, if the theist wants to prevent the endless questions that result when the non-theist turns the tables on him, the theist might try to say that some of his answers are tautological, so there are no more questions to be asked.

However, as I explained before, that alone would seem to block the theistic argument – even without counting all of the other problems I've been explaining, which make it untenable on other, independent grounds as well – since the non-theist may just say that 'moral goodness' can't be put in terms of non-moral words, and that's it. There is no need to go further, as there isn't in the case of 'redness' (still, the non-theist might posit, if she likes, that moral goodness is plausibly a mental property).

9.13) Persons and evolutionary naturalism

Let's say that EN is true.

How would that possibly be a problem for the existence of persons, one might wonder?

Let's see what Goetz and Taliaferro have to say:

Goetz and Taliaferro (from Linville's argument)

The Astonishing Hypothesis is that ‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice may have phrased it: ‘You’re nothing but a pack of neurons.’ This hypothesis is so alien to the ideas of most people alive today that it can be truly called astonishing. (Goetz & Taliaferro 2008, p. 22)

The 'nothing but' and 'no more than', and so on, clearly may seriously confuse people, somehow suggesting that they have no sorrows, memories, ambitions, free will or sense of identity, etc.

Of course, humans have their memories, sense of identity, sorrows, freedom, etc.

Most people claim that there is an entity, a 'soul', behind them. That appears not to be true, but that's beyond the scope of this article.

The point here is that, if the ontological claim is false and there is no soul behind them, people still have their memories, ambitions, desires, sorrows, and freedom.

Given that all of that exists, and since that does not seem to be a problem for EN, this kind of claim goes nowhere – except to the extent that claims like 'nothing more', etc., may confuse people.

That aside, Linville seems to demand an explanation of minds in terms of the non-mental.

It should go without saying, but still, let's say it: resolving the hard problem of consciousness is not a burden the non-theist bears – not even close – but a project for future research.

The theist, of course, has no hypothesis whatsoever about how a soul interacts with particles, either, but in any case, he would have to show that there is some problem for consciousness under EN, not just that he has a coherent account.

Also, clearly, all of this no longer bears any resemblance to a metaethical argument for theism, so I will leave the matter at that, concluding that Linville's arguments do not present any problem for moral facts or moral knowledge under EN, and neither do any similar metaethical epistemic or ontological arguments, or any others covered by the considerations made so far.

Before moving on to the next section, I will address another potential objection about values:

9.14) Values and perceptions

Earlier, and throughout this article, I've suggested that moral properties like moral goodness plausibly are mental properties. However, we shouldn't expect to find any description in non-technical terms any time soon; our successors, perhaps centuries from now, might come up with a description of said mental properties in technical terms, but we shouldn't demand semantic closure if we don't demand it in the color case.

Here, I'd like to consider the following potential objection, which fits in with the previous discussion about the so-called 'mind-independent value'.

The objection would go as follows:

O3: Moral properties, like moral goodness, cannot be some complex mental properties even potentially describable in non-moral, perhaps technical terms, since moral properties are valued properties, and some mental properties, like certain complex combinations of beliefs, attitudes, etc., are not valued.

So, they're ontologically distinct.

Let's begin by presenting an alternative:

CO3: Color properties, like redness, cannot be some complex properties involving wavelengths, etc., since color properties are perceived properties, and wavelengths, etc., are not.

It seems that, with that criterion, color properties also aren't the same as those other properties.

Should we posit some extra color ontology?

That would be very confused.

What happens is that there are some things around us – light of certain wavelengths, etc. – and they interact with our eyes and brain, resulting in our perception of color, and normally eliciting assessments like 'red', 'green', and so on.

Whether the property 'redness' is the same as the property of reflecting or emitting light in such-and-such wavelengths, etc., or supervenes on them, seems to depend on issues related to the meaning of 'property', but what should be clear is that there is no need to posit any entity beyond the light in question.

Similarly, we have a sense that tracks some mental properties indirectly, and we experience characteristic mental states, which normally involve valuing some mental states in other people positively, and some negatively. No further entity or queer stuff is required for morality – at the very least, no good reason to believe otherwise has been given.

Granted, the analogy is not a perfect match, but there appears to be no difference that would be relevant in this case.

Still, based on some of the potential differences, someone might raise an objection as follows:

O4: Color properties supervene on but aren't the same as properties that can be described in terms of wavelengths, etc. So, the fact that the latter aren't perceived properties but color properties are, is not a problem.

However, if moral properties were just some mental properties we track – rather than supervening on them -, then those mental properties we track would have to be mind-independently valued properties, regardless of our perceptions or valuations.

That objection fails for a number of reasons, mainly:

First, the 'idea' of 'mind-independent value' seems as incoherent as 'mind-independent' perception. Agents value other beings and/or properties, but what would it even mean for a property and/or entity to be mind-independently valued?

Such a claim appears to be incoherent. [4]

Second, even if that idea of 'mind-independent value' were coherent, it would not follow that we would need anything of the sort:

If color properties supervene on but are distinct from any properties that can be described in terms of wavelengths, etc., then color properties are still some properties that our visual system tracks, and which are perceived by us (or relevantly similar beings).

Similarly, moral properties may well be mental properties that our moral sense tracks, and which are valued by us (or relevantly similar beings).

No need to posit anything like mind-independent value, even if we assume for the sake of the argument that 'mind-independent value' is coherent.

10) Disagreement

Another issue a theist might bring up is that of moral disagreement.

They might claim that, if EN is true and we have a shared, species-wide moral sense, we should expect to see much less moral disagreement.

Of course, a non-theist has an easy reply.

After all, why should we expect that evolution would give us a much more reliable moral sense than a designer?

Further, if the designer is omnipotent, omniscient, and morally perfect, it seems clear – not to theists, of course – that at the very least, he would not create any moral agents with an imperfect moral sense, since he could instead make beings who will always know right from wrong, with no errors.

So, moral disagreement actually works as a good objection against theism.

In any case, if the theist is a Christian, he might claim that, if we have an evolved moral sense like we have evolved color vision, we shouldn't expect much disagreement – color disagreement exists, but it is much rarer than moral disagreement.

On the other hand – this objection would go -, if Christianity is true, the Fall explains the flawed moral sense.

To be fair, I haven't seen any philosophers raise an objection of this sort, and I wouldn't expect them to do so, but I've seen Christians use the Fall in a number of arguments, and I can't rule out that some Christians might raise such an objection, so I'm considering this possibility just for the sake of thoroughness:

10.1) The Fall

Even if we leave aside the Problem of Suffering and the Problem of Evil, the fact that Yahweh is not morally good [11], the fact that, in the real world, preachers who claim superpowers do not have them, and so on, the account of the Fall doesn't make sense.

First, it never happened:

While some of today's Christians claim that it was an allegory in the first place, the fact is that traditionally, the claim was taken literally.
Moreover, if it was an allegory, then how would that explain a flawed moral sense?

Second, how is it that the moral fault of Adam and Eve would affect the moral sense of all of their descendants?

Yahweh is the one creating new souls, and he is creating them, it seems, with faulty moral senses.

So, it's not Adam or Eve causing it; it's Yahweh.

Third, even assuming that, in some mysterious way, Adam and Eve managed to damage the moral sense of future souls, Yahweh could have – at least – easily fixed the moral sense of all of the descendants.

The 'free will' defense would fail:

Let's suppose someone, deliberately or perhaps negligently, infects an entire population with a virus that affects their genes.

As a result, their descendants are born with some genetic illness.

When some people find a cure, they just choose not to make it available to the population, even though they could do so at no cost, to somehow respect the free will of the long dead original perpetrators of the infection.

It just does not make sense to say that those people who found the cure but chose not to use it are morally good.

So, in short, the Christian has no reasonable account for disagreement, and essentially neither does the theist, even if not a Christian.

Why would a morally perfect, omniscient and omnipotent creator decide to create beings with a flawed moral sense?

Even if the moral sense is usually reliable, surely it isn't perfect.

So, it seems clear that the theist has no account for this, so the non-theist can hardly be worse equipped to handle disagreement.

At worst, she might say she has no account, either.

But let's take a look at a possible suggestion to see that, whatever its merits, it's at least something.

10.2) A potential evolutionary hypothesis

So, how might an evolutionary account explain moral disagreement?

The following reasons can be given, which provide a sketch of a (partial) explanation compatible with EN.

a) Complex social life, fitness costs and limited brain power:

It might be useful to have, say, a thousand times better memory, or to be able to reason much better – faster, more reliably, etc.

But, of course, the brain can't grow indefinitely; bigger brains have advantages, but also costs, from energy consumption to seriously dangerous deliveries to more frequent mental illness.

In a complex social world like ours, it's sometimes very difficult to accurately ascertain other individuals' intentions, as well as things such as how much care they put into figuring out rational ways of achieving their ends, etc.

It's much more difficult than it was in the much simpler ancestral environment, given the size and complexity of today's societies, the number of individuals to keep track of, etc. – though even then, it was not trivial.

As a result, moral judgments are in some cases very difficult to make.

Even if some mental faculties can improve through efficiency rather than brain size, that takes time, is also limited, and there are other, competing selection pressures: being better at making moral assessments may be one advantage, but other social skills are also advantageous, and so given limited brain size and power, improvements in a moral sense are limited as well.

b) Other motivations and emotional commitments may impair moral judgment

For instance, in-group loyalty is a very powerful motivation, and it may bias people significantly.

As a result, they might demonize outsiders and see group-members as better – being hostile to outsiders, in many cases at least, might be another evolved tendency.

In short, since there is no designer, our minds result from an assortment of adaptations that sometimes tug in different directions – and, of course, environmental factors that reinforce some of them.

In some cases, that results in false moral judgments, just as it results in false judgments on other matters.

c) Religion

As a specific example of environmental interference with proper moral judgment, people are usually indoctrinated to believe that the claims of some religion are true.

Now, strong perceived links between group-membership, group identity and group loyalty – among other factors – may predispose people to defend those beliefs, even at high cost, rather than using their moral sense to make moral assessments.

In other words, in many cases, someone may have to make a moral assessment of a certain matter. However, instead of using their own moral sense, they accept an assessment that follows from a religious tradition or book that is at least centuries old, and which was developed by people who of course did not have access to the present-day situation to be assessed – not to mention that they had false beliefs resulting from false views of origins, little time to assess matters carefully, etc., and their own biases.

Of course, that hypothesis is just a preliminary suggestion, and it may be challenged.

However, the fact is that even a quick and partial suggestion is a lot better than anything that the theist has to offer.

11) Too many beliefs?

Another potential objection would contend that evolution would not give us a full-blown morality: that would be too many beliefs to code into DNA, since there are infinitely many possible situations.

A reply is that there is no need to code all those beliefs: only a finite number of patterns have to be detectable, and then the difficulty is to classify complex behaviors into those categories.

In any event, the burden would be on the arguer.

12) Heroism

Someone might object to evolutionary accounts on the ground of acts of heroism, sometimes resulting in the death of the person making the sacrifice.

A theist might raise issues such as:

Why would self-sacrificial actions take place?

Why would some of them be considered heroic – and hence, morally good?

Wouldn't an evolved moral sense be in conflict with that?

However, that's not how evolution works:

The predispositions to act have to lead to reproductive success on average in the ancestral environment, and given an evolved moral sense, acts in which the agent chooses to help other group members are plausibly adaptive in that way.

Also, valuing such actions positively when performed by others may well be adaptive as well.

Of course, in a case in which someone actually has to make a decision that would involve self-sacrifice, other, non-moral motivations would be present as well, as a result of predispositions that evolved not in response to a social environment, but simply to keep the agent from harm, which is generally negative for reproductive success.

Additionally, of course, it's not all in the genes; environmental factors, including a person's previous decisions, rearing, etc., shape a person's mind as well.

So, as a result of all of that, it's unsurprising that decisions may go one way or another, though in most cases, people actually do not perform such lethal heroic acts.

A moral obligation to carry out a self-sacrificial act to help other people is implausible in nearly all realistic cases, but when such an act is carried out, it may well be often morally good: we may assess that using our moral sense, and it's not against what we know about evolution.

On that note, there are plenty of actions, some carried out by humans and some by other animals, that do not result in reproductive success. In fact, some of them have a negative effect on reproduction.

Agents have competing motivations, resulting from both predispositions and environmental factors; the predispositions usually evolved as adaptations because they were on average conducive to reproductive success in the evolutionary environment, but there is no way they would always result in behavior that is so conducive.

But whether they are conducive to reproductive success is not the point; the point is that some self-sacrificial actions are morally good.

Of course, some cases of self-sacrifice are also very immoral: suicide bombers who murder innocent people are an obvious example.

Neither those actions nor the morally good ones seem to be problematic for EN.

13) Psychology, not ontology

Let's consider the following argument:

William Lane Craig: [12]

On the naturalistic view, human beings are just animals, and animals have no moral obligation to one another. When a lion kills a zebra, it kills the zebra, but it doesn’t murder the zebra. When a great white shark forcibly copulates with a female, it forcibly copulates with her but it doesn’t rape her–for none of these actions is forbidden or obligatory. There is no moral dimension to these actions.

It's not clear that the shark doesn't rape the female, though it's clear that he does not have any moral obligations, so he does not behave immorally.

But that aside, a brief reply is as follows:

When a lioness kills a zebra, she doesn't fly.

And when a zebra escapes from a lioness, he doesn't fly, either.

Zebras, lionesses, gazelles, rats, cats – none of them flies.

So, mammals do not fly.

On a biologist's view, bats are just mammals.

But mammals do not fly, and many bats do.

That refutes biologists' claim that bats are just mammals...

Parodies aside, and leaving also aside Craig's contemptuous "just" in "just animals", the fact that some animals are not moral agents is no good reason to believe that no animals are moral agents.

Of course, there are obvious psychological differences between different species, and in particular, between humans and great whites, or any other species for that matter. [13]

What actually matters, when it comes to the question of whether a being is a moral agent, is not the ontology of the agent's mind, but the psychology of her mind.

The ontology could only have an indirect effect, to the extent to which it conditions the psychology.

I think that, after reflection, that much should be clear.

Still, in case someone is not persuaded, let's consider the following scenario:

First, let's assume theism for the moment, and let's suppose that God creates a universe in which there are no souls.

Instead, he creates some sort of panpsychist universe, where there are some basic, essential particles with some sort of basic phenomenal consciousness, but no intelligence, no pain, etc., essentially much simpler than a mosquito's mind – they shouldn't even be properly called 'minds', I think.

Then, in that universe, through theistic evolution, complex beings arise.

They do not have souls, but they do have minds, with the full range of emotions, knowledge, etc., of the animals we're familiar with.

In fact, one of the species God creates has a psychological makeup similar to that of humans, and is capable of making moral assessments, just like we can.

Let's suppose that one of those beings engages in torturing other such beings for fun, and – of course – others assess that he's behaving in a very immoral manner.

Would he not be acting immorally?

Would the others be in error?

It seems clear that they would not be mistaken, regardless of the fact that their minds are made up of the same kind of basic stuff as the minds of the other animals in that universe – including all of those without a moral sense – and even the same kind of basic stuff as tables, chairs, and the like.

Of course, the tables, etc., do not have table-minds: the basic particles that make them up do have the most basic phenomenal consciousness, but they're not combined in a way that makes up any less basic consciousness. They're only connected externally, so to speak.

So, chairs, tables, etc., are not moral beings – obviously -, and neither are mosquitoes or similar entities, but on the other hand, entities with human-like minds are moral beings, just as humans are, and regardless of what their minds are made of.

We may also consider an alternative scenario, just like the above but in which there is no phenomenal consciousness in the basic particles – i.e., no panpsychism – and minds emerge when certain combinations of particles happen: God just made the particles with properties such that, when they combine in a certain manner, the combination acquires awareness.

The conclusion is the same: beings with human-like minds are moral agents, regardless of the ontology of the mind.

A theist might say that those scenarios are impossible, and that God would not do that.

But for that matter, if we're going to use our moral sense to assess what God would or would not do, the Problem of Evil and Problem of Suffering against theism would be decisive – though, of course, theists wouldn't accept that.

Still, let's modify the previous scenarios so that, instead of God, the creator is another powerful unembodied intelligent being – assuming that unembodied intelligent beings are coherent.

Once again, it seems intuitively clear that those created beings would be moral agents.

Now, if we remove all creators, the intuitive moral assessment remains the same. [14]
True, a theist might insist that such scenarios are, for that reason, metaphysically impossible.

However, that issue would be beside the point in this context.

What the scenarios would still show is that the meaning of moral terms is not such that it ontologically commits us to the existence of souls, or to any particular ontology of minds.

In other words, the meaning of moral terms is not such that if we say, "Agent A acted immorally", we're implying that A has a soul, or in any case that the mind of agent A is made up of some kind of basic substance that is different from the kind of basic substances that chairs, tables, and mosquitoes are made of.

Given that there is no semantic requirement of souls or any such ontology, this matter is no problem for EN at all.

A theist might also claim that the meaning of moral terms is such that any moral judgment assigning moral properties entails that there are souls, by the meaning of the words, but the meaning is non-transparent, so competent users of moral terms will not notice it.

However, for that matter, someone might say that the meaning of moral terms is such that any moral judgment assigning moral properties entails that there are no souls, by the meaning of the words, but the meaning is non-transparent, so competent users of moral terms will not notice it.

The point here is that there is nothing as far as one can tell that would even suggest that our moral language is such that souls are a requirement for moral agency, or a problem for it, or generally that some ontological difference between the kind of basic stuff that chairs, mosquitoes, lions, and people are made of, is required for moral agency to exist.

Given that there is no semantic requirement for souls or any other such entity, then it seems that what would be left to the theist here would be an empirical challenge: namely, they might argue that if EN is true, then the kind of mind that we have would not exist.

However, that would have to be argued for, and it seems to me that that would no longer be a metaethical argument at all.

Before moving on to the next point, I'd like to further clarify a matter about ontology and psychology.

According to Craig, without souls, there is no qualitative difference between humans and other animals.

However, humans and sharks, say, clearly are qualitatively different, in the sense that their minds are very different.

That qualifies as a qualitative difference in usual terms, if anything does.

In fact, they're very different kinds of entities – precisely because of the different minds -, so maybe we can say that they're ontologically different as well.

What is not ontologically different is the basic stuff they're made of.

However, if no ontological difference in the basic stuff they're made of means that they're not ontologically different, then so be it:

It's just a matter of terminology that presents no problem for the person who accepts EN, since the fact remains that – as shown earlier in this section – what matters is psychology, not ontology of the basic stuff they're made of.

14) Materialism[15]

Another argument for theism that Craig defends is based on the claim that determinism is somehow a problem for morality:

William Lane Craig: [16]

Secondly, if there is no mind distinct from the brain, then everything we think and do is determined by the input of our five senses and our genetic make-up. There is no personal agent who freely decides to do something. But without freedom, none of our choices is morally significant.

While it is true that freedom seems to be required for moral responsibility, it's not true that determinism is a problem for freedom.

I will address that in the following section, but in this one, I will argue that even assuming that libertarian free will is coherent and actually is a correct understanding of freedom, there is no good reason to think that that would be a problem for EN.

In fact, there is nothing in EN that entails determinism, and some interpretations of quantum mechanics are non-deterministic.

Someone might suggest that quantum non-determinism does not provide the adequate kind of non-determinism to allow libertarian freedom.

In reality, non-determinism might hinder but never help freedom, and libertarian 'freedom' is no freedom at all, so there is nothing to this point.

But leaving that aside and assuming otherwise for the sake of the argument, there is no good reason to think that quantum mechanics prevents brains from having whatever kind of freedom souls are supposed to have.

To see this, let's say that there are particles, not souls, and the universe is indeterministic.

Now, the fact remains that humans have minds. We can love, believe, feel, and so on.

So, the conclusion is that particles can interact with each other in ways that result in minds.

Hence, it remains the case that not all the properties of particles are those described by present-day physics, since present-day physics says nothing about either forming minds or interacting with them.

In addition to that, it remains the case that not all the properties of particles that are causally effective are described by present-day physics – particles have the property of being capable of forming minds, for instance.

Moreover, any arguments and/or evidence that we – by assumption – have in support of libertarian freedom would remain, since modern physics says nothing about minds, or what kind of freedom they may or may not have.

Someone might claim that only non-mental properties of particles, or of combinations of particles, would be causally effective in that scenario.

But that would have to be argued for:

Why should we suspect that causal efficacy is limited to non-mental properties – some described by present-day physics, some not – of particles and of certain configurations of particles, but mental properties have no effects?

15) Freedom, libertarian 'freedom', and determinism

As I explained in the previous section, Craig's arguments – or similar ones – fail to show that EN, materialism, or physicalism entail determinism and/or a lack of freedom, even assuming that a libertarian free will hypothesis is a correct account of human freedom.

In this section I will show that libertarian free will is not freedom, and should more properly be called 'random will'.

Let's consider the following scenario:

Alice has been a good police officer for ten years.

She's kind, committed to her job, good to her children, and so on.

Now, one morning, Alice goes to work as usual.

The police get a call about a domestic disturbance, and Alice and another officer are sent to the address they're given.

When they arrive there, they encounter Harry, a thirteen-year-old kid high on drugs, acting completely irrationally.

He tells Alice: 'You're a police officer, so you're evil. Why don't you shoot me?'

Alice has no reason at all to shoot Harry.

He poses no threat to her, and can be easily arrested if needed.

However, it's clear that she has the power to shoot him, and is free to choose whether to shoot him.

All she'd have to do is pull her gun, point it at Harry, and shoot.

No one would see that coming, so no one could stop her if she did that – no human, anyway; the point is that she wouldn't be stopped.

But Alice – of course – feels no inclination whatsoever to shoot Harry, does not shoot him, and follows procedure.

The point is that saying that Alice can shoot Harry, that she has the power to shoot him, that she is free to choose whether to shoot him, etc., means that she would shoot him if she chose to do so, that she's not being coerced, etc.

It does not at all mean that, even given Alice's mental state at the time she chose to follow procedure, and even given all the conditions of the world at that time and previous times – including Alice's goals, beliefs, character, etc. -, it was still possible that Alice would shoot Harry.

On the contrary, if, given all those previous states, it was possible that Alice shot Harry, then it seems that there is a possible world W with the exact same past as ours prior to Alice's decision to follow procedure, at which Alice shot Harry instead.

But that is not an exercise of freedom, in the usual sense of the words. Rather, it's an unfortunate event that happens to Alice.

To see this, let's consider Alice's mental processes at W, leading to her 'decision' to shoot Harry – say, decision D. Alice had never considered shooting him, and had no desire, intention, etc., to do so before decision D happened.

However, at some point, earlier states of the world – including her earlier mental processes – did not determine her later mental processes. There is an event "Alice decides to shoot Harry" that happens irrespective of any previous states of Alice's mind, and no matter how much Alice would loathe being a murderer.

All of Alice's previous reasoning, desires, behavior, intentions, etc., are incapable of stopping 'decision' D from happening. But how is that Alice's decision?

It seems D is not a decision Alice made, but rather, it's something that happened to Alice.

It's not something Alice could have anticipated, or prevented: at some point her mental processes changed from normal to 'shoot Harry', without forewarning, and without any cause in previous mental processes.

Someone might claim that, necessarily, there is always some hidden reason to shoot people, or to do anything one can do. But that would have to be argued for, and even then, it would not change the fact that, in that case, Alice could not have prevented her mental processes from changing at some point from normal to 'shoot Harry', no matter what she did before – and that change could not reasonably be said to be her decision, since she had never considered it before, and it took her by surprise.

Those considerations show that that kind of thing should not be called 'freedom'; it is more like an unfortunate kind of randomness.

That does not mean that indeterminism in human minds isn't real. But indeterminism is surely not required for freedom, and in fact, it might undermine it, as the previous scenario shows.

Perhaps there are situations in which, after assessing the pros and cons, a human remains undecided between A and ¬A; if so, maybe there is a truly random outcome generator for such cases (which might also involve several mutually exclusive options: A1, A2, A3, ...).

However, if that is the case, that is not required for free will: a random generator that delivers 'decisions' in cases in which the mind remains undecided clearly does not result in more freedom than a mind that actually makes decisions.

So, if there is such indeterminism, as long as the indeterministic events happen when a person is undecided (based on her previous feelings, desires, reasoning, etc., she is undecided and does not cause any outcome), maybe that randomness is compatible with free will, but that's all.

On the other hand, if there is an indeterministic feature of human behavior that happens to be like Alice's case above – i.e., if it happens against everything that the person stood for, her previous considerations, etc. –, then, as the previous example shows, that kind of indeterminism – at least, when it happens – would actually preclude free will; rather, the 'decision' would be an unfortunate exercise of random will.

There is another way to see this, taking into account that, on the libertarian account, different choices remain possible even under the exact same preexisting conditions – including, of course, the previous mental states of the libertarian-free agent.

So, let's consider the following scenario (relativizing time as required):

Alice is a libertarian-free human, and at t(s), the state of worlds W and W' is exactly the same – that includes, of course, Alice's mental processes.

Later, Alice libertarian-freely chooses A at W, and B at W', even though the states of the worlds prior to Alice's decision were the same (A is different from B).

In other words, W and W' are exactly the same until Alice's mental processes diverge.

Now, let p be the Planck time, and let n be a non-negative integer, starting with 0.

Let's consider the times t(s)+n*p, and the states of W and W' at those times, W(n) and W'(n) respectively.

Let n(l) be the last n such that W(n) = W'(n).

Since the 'decision' was made even given the exact same prior conditions, it seems that the 'decision' happened between t(s)+n(l)*p and t(s)+(n(l)+1)*p = t(s)+n(l)*p+p; in other words, the 'decision' was made in no more than a Planck time.
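In case a compact restatement helps, the timing computation above can be written out as follows, writing t_s for t(s) and n_l for n(l); this is only a sketch of the same reasoning in standard notation, not an addition to it:

```latex
% Requires \usepackage{amsmath}.
% t_s: the starting time; p: the Planck time; W(n), W'(n): the states of
% worlds W and W' at time t_s + n*p; n_l: the last n with W(n) = W'(n).
\begin{align*}
  t_n &:= t_s + n\,p, \qquad n = 0, 1, 2, \dots \\
  n_l &:= \max\{\, n \ge 0 : W(n) = W'(n) \,\} \\
  &\Rightarrow\; W(n_l) = W'(n_l) \quad\text{and}\quad W(n_l + 1) \neq W'(n_l + 1) \\
  &\Rightarrow\; \text{the divergence occurs within } \bigl(t_{n_l},\, t_{n_l + 1}\bigr],
      \ \text{an interval of length } p.
\end{align*}
```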

That's way too fast for any human conscious decision, though. So, it becomes clearer that the first indeterministic event E that distinguishes W from W' is some random alteration of Alice's mental processes.

Someone might suggest that previous processes in her mind made E probable, but weren't sufficient to bring it about – something still altered her mind randomly. Let's assume that that involves a coherent interpretation of probability (else, this objection fails already).

Even then, the fact would remain that her mind was altered without a sufficient cause, and with nothing she could do earlier to stop it; moreover, in some cases, the improbable 'decision' might happen. And in those cases in which the improbable 'decision' happens – i.e., the decision that her previous mental processes made improbable – we're back with something like the unfortunate case of the libertarian-free police officer.

Someone might still object that, if such a random change in her mind happened, she still could have changed her mind, and refrained from carrying out the decision – in the case of the first example, the shooting.

The problem is, though, that if you can have such a random event between t(s)+n(l)*p and t(s)+(n(l)+1)*p, it seems you can have another one at every single Planck time that follows, until the 'decision' that was completely against everything the person previously stood for actually happens.

But let's suppose someone introduces some fuzziness in some way – which they would have to explain, of course; else, the previous reasoning stands. Even then, the fact would remain that the agent would have a random component – a change in her mind that she does not bring about and cannot prevent, because it happens no matter what she tried previously; it's just that we wouldn't be able to see that by analyzing the process step by step, but all of the other reasons I've given above remain.

A theist might say that that's 'actually the agent acting', or something like that, but – whatever that means – the fact would remain that that would be a partially random agent acting, not one whose mental processes are sufficient to bring about behavior; it would be an agent with a randomly altered mind – i.e., a mind that suffers some alterations that have no sufficient causes. It's akin to dice-throwing, and in some cases, it might go against everything the agent had stood for up till then.

So, for all of the previous reasons, the claim that non-determinism is required for freedom ought to be rejected.

That does not mean we can't act of our own accord, of course. We can and sometimes do have freedom; it's just that indeterminism is not required for that.

Now, there is an objection available to the theist, which seems to be Craig's position: namely, that it is lack of causal determination that is required for freedom to exist, not lack of determinism.

However, if an event is determined by previous conditions, then it seems it's causally determined too, since some of the previous conditions would be causes. How would it be otherwise?

It seems puzzling.

But regardless, we can make a case against the requirement of causal indeterminism independently. If causal indeterminism is true, then no matter what Alice does up to some time t, all of her thought processes, intentions, desires, memories, and reasoning are insufficient to bring about her decision. So, it seems that the "decision" might just happen to her, and she might still shoot Harry.

Again, the theist might say that that's the agent acting. But how can she act so quickly – indeed, instantaneously?

In any case, as I explained in the previous section, whatever the correct account of freedom is, if a soul can have it, there is no good reason to think that souls are required and that particles can't have it.

16) A practical diversion

While they are not actually metaethical arguments, Craig gives 'practical' arguments for the existence of God in the context of his metaethical ones.

I will assess them in this section, and make a general point against all similar 'practical' arguments.

16.1) Accountability

Craig raises the issue that someone might get away with evil, without theism.

According to Craig, if there is no God, there is no moral accountability, and somehow it doesn't matter how one lives. [16]

However, even without any kind of afterlife, Craig's claim is false: our choices affect our future, and the future of others; they can cause happiness, suffering, etc., to us and/or to other people, and of course that normally matters to us and to others.

Moreover, there is in many cases moral accountability even if God does not exist, like bank robbers going to prison.

So, there does not need to be an afterlife for justice to be done in many cases.

There may not always be accountability, but there is in many cases.

That said, the previous considerations, while correct, are actually minor in this context, since we're talking about a 'practical argument' for belief in God, and that's epistemically disastrous.

It would be irrational for a person to come to believe that God exists just because he or she realizes that some bad people would get away with murder under EN, and that makes them feel sad – or however it makes them feel.

That would be some kind of wishful thinking.

The fact that having belief B would make a person feel better does not warrant having belief B.

In fact, it's not clear to me how this is even psychologically doable.

How would someone trying to engage in that kind of wishful thinking go about it?

Bob: Let's see: I do not have the belief that God exists, or enough reasons that would convince me of that. However, without God some people will probably get away with evil, and that is disheartening. So, from now on, I will believe that God exists.

That's just not doable – i.e., I don't think that that would actually result in belief.

Someone might suggest some kind of Pascal's Wager-style conversion, in which people practice the rituals of a religion in order to somehow gradually convince themselves that said religion is true.

I'm not sure that that would be doable.

At least, I'm pretty sure that for many of us, it wouldn't be, though it might be for others.

However, in any event, that would be another irrational course of action.

To be fair, Craig does not attempt to use the practical argument alone.

Instead, he proposes to use practical arguments to "back up or motivate" the acceptance of what he believes are sound theoretical arguments.

However, that would be irrational as well: if the person has not been persuaded by the theoretical arguments, they would still be engaging in wishful thinking to come to believe in God.

In short, it's still an epistemic nightmare.

16.2) Motivation again

Another 'practical argument' Craig gives is based on the issue of motivations for doing the right thing.

Of course, as in the case of the previous 'practical argument', it would be irrational to believe on account of this.

That aside, Craig contends that sometimes self-interest is in conflict with morality. [16]

Of course, Craig is using 'self-interest' in a way that excludes a person's interest in doing what's right, simply because it's the right thing to do.

That usage is common, so it's not a problem, but we need to keep in mind that that's what's meant by 'self-interest'; it's not the only interest people have, of course:

Because of their own psychological makeup, human beings are motivated not to act immorally; that's also one of our interests, even if not covered under the label 'self-interest'.

That motivation may be defeasible, but it's there, with the possible exception of some psychopaths.

Moreover, it seems to me that in order for an action to be morally good, motivation counts.

For instance, it seems clear that helping people out of fear of damnation would not be morally good. It wouldn't always be morally wrong, either. But it wouldn't be morally good – the behavior might just be morally neutral.

It is true, though, that fear can prevent some people from behaving immorally.

On the other hand, if we're engaging in practical considerations – which have nothing to do with moral ontology, moral semantics, moral knowledge, or whether God exists -, let's also assess some of the potential consequences of coming to believe that God exists.

The fact is that, in addition to the irrationality of adopting a belief for practical reasons, usually such beliefs are not just some kind of unspecified theism, but some version of Christianity or Islam, with all the baggage of false beliefs – including false moral beliefs -, attached to them.

False moral beliefs tend to cause people to behave immorally, believing that they're doing the right thing.

Someone might point out that some non-theists have engaged in terrible behavior, perhaps in the name of communism or some other ideology.

That is true, but the point remains that adopting false moral beliefs generally results in more immoral behavior, regardless of whether the false ideology the false moral beliefs come from is religious or not.

In practical terms, people who become non-theists are not likely to engage in such actions, whereas people who become theists are likely to become Christians or Muslims. Those converts aren't likely to kill in the name of their religion, either, but they are likely to follow some of its false moral teachings, so conversions do tend to have such negative consequences.

Still, none of that is the main point here.

The main point is the irrationality of using so-called 'practical arguments' for belief, regardless of what the belief is about.

17) The failure of Divine Command Theories (DCT)

There are several theories, about moral ontology and/or moral semantics, that may be called 'Divine Command Theory'.

A semantic DCT would posit that 'Agent A has a moral obligation to do X' means 'God commands agent A to do X', or something like it.

I'm not sure many philosophers would defend semantic DCT, but refuting them will be useful – though not required – as a means of showing why one of Craig's arguments fails: even if his metaethical argument is an ontological one, he seems to make some (false) semantic assumptions.

An ontological DCT claims that moral obligations/duties are – in some sense of 'constituted' – constituted by the commands of God, without necessarily making a semantic claim like the one made by semantic DCT. [17]

For instance, William Lane Craig made such a claim in his debate with Sam Harris.[2]

It's not entirely clear what 'constituted' means in this context, since moral obligations aren't some entity that may have a certain composition, and the claim is not one of semantic equivalence, either. Still, I will let that pass and present an objection that succeeds in spite of the obscurity of the claim, since it works under any plausible understanding of the word 'constituted'.

Before I get to that, I would like to clarify that there is no need to refute DCT in the context of a case against theistic metaethical arguments: while a theist might present a DCT in the context of such an argument and demand that the non-theist offer an alternative, there is no burden on the non-theist to present another metaethical hypothesis, or to refute the theistic one, as I explained in earlier sections.

In fact, even if the non-theist has no metaethical theory and the theist does, that does not, on its own, place the theist in a better epistemic position, just as someone who posits an account of human origins (e.g., Young Earth Creationism) is in no better position than someone who has no hypothesis about the origin of humans just because of it.

Furthermore, it would be up to the theist proponent of a DCT to defend his hypothesis, and if he has no good reasons to believe it's true, then his position would of course be worse than that of a non-theist who had no alternative hypothesis to offer.

Still, DCT are false, and I will now proceed to show that they are:

17.1) Ontological Divine Command Theories

Before I address the heart of this matter, I will address a specific issue about metaphysical possibility, in order to preempt certain potential objections.

17.1.1) Metaphysical possibilities

Let's consider the following hypothetical dialogue:

Alice: Water is H2O.

Tom: I don't believe it.

I believe that that's a scientific conspiracy.

Water is not H2O, but Hg2Po.

Alice: What? Hg2Po? That's absurd!

Tom: That's easy to say, but do you have any evidence?

And don't tell me to look at papers or textbooks. They're all in on the conspiracy.

If you want to persuade me that water is not Hg2Po, then show me that it is not, and then maybe you can try to convince me that it's H2O.

Alice: Hmm...let's see: Do you know what the composition of sulfuric acid is?

Tom: Yes, that one is real. Sulfuric acid is H2SO4.

Alice: Good. Let's see:

If water were Hg2Po, then the molecule of water would be heavier than the molecule of sulfuric acid.

Maybe we can use that to test the theory.

Bob: I'm sorry, Alice, but that's impossible.

I agree with you, of course, that water is H2O.

However, given that water is H2O, it's metaphysically impossible for water to be Hg2Po, so your conditional has a metaphysically impossible antecedent.

So, I'm afraid that you're constructing a metaphysically impossible scenario.

Tom: Well, water isn't H2O, but if it were, then Alice would indeed be constructing a metaphysically impossible scenario. So, Alice, your suggestion fails. Try again.

Alice: What are you two even talking about?

I'm not suggesting that Alice's test is a good one, or that entertaining Tom's absurdities is a good idea, either, but my point here is that Alice's claim 'If water were Hg2Po, then the molecule of water would be heavier than the molecule of sulfuric acid' is clearly a true claim.

The objections raised by Bob and Tom are very confused.

That it is metaphysically impossible for water to be anything but H2O has nothing to do with the truth value of Alice's conditional.

Incidentally, as a side note, if someone is a theist and shares Bob's confusion, he might object to any argument that has a premise stating 'If God does not exist...', since theists usually hold that God exists necessarily.

17.1.2) The moral obligations of a personal creator

If there is no personal creator of all other personal beings, then God does not exist. Then, it is not the case that our moral obligations are constituted by God's commands, and so DCT are not true.

So, let's assume in the rest of this subsection, and for the sake of the argument, that there is a personal being, creator of all other personal beings.

Let's name that being 'Alex'.

In other words, by 'Alex' I mean 'The personal being who created all other personal beings.'

I don't mean anything else by 'Alex'.

Now, here Alex is a personal being, and not a baby.

So, let's see that Alex actually has moral obligations.[18]

To show that, my strategy is in a sense similar to Craig's strategy in support of the second premise of his metaethical argument, which appeals to people's intuitive assessment that, say, the Holocaust was morally wrong, that torturing a child just for fun is immoral, and so on.

So, Alex is a person who created all other personal beings.

My claim – which I would ask readers to please assess by their own sense of right and wrong – is the following:

S1: If Alex were to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity, then Alex would be acting immorally.

It seems to me that S1 is obviously true.

I'm not even talking about eternal punishment in Hell – I maintain that that would also be immoral, but that's a matter for another article.

In the case under consideration in S1, there is no punishment for any sin; we're talking about a person creating personal beings with the specific, deliberate and exclusive purpose of torturing them for all eternity.

So, if Alex created other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity, then Alex would be acting immorally.

Someone might object that if Alex is God, then the antecedent of the conditional S1 is metaphysically impossible.

However, that would be a very confused objection, as explained in the previous subsection.

Alternatively, someone might suggest that my argument is circular, because – allegedly – I would be somehow assuming that there is moral knowledge without God.

But that is not the case:

First, the previous sections already show that metaethical arguments for theism fail to show that there is any problem for moral knowledge under EN.

So, as long as I'm justified in holding that there is moral knowledge, it seems I'm also justified in holding that God is not required for that, and a theist is not in a position to challenge a claim that I'm justified in holding that there is moral knowledge.

Second, even leaving all of the previous arguments aside, here I don't need to hold that there is moral knowledge without God. The arguments that I'm making in this section are against DCT, not against theism.

So, I don't even need to assume that Alex is not God. I'm not making any assumptions on that matter, one way or another. Rather, what I'm doing is:

a) Accepting that there is moral knowledge.

b) Assuming that Alex exists.

In other words, I'm assuming that a personal creator of all other personal beings exists.

c) I'm not making further assumptions about Alex; in particular, I'm neither assuming that Alex is God, nor that he or she is not God.

d) I'm using my sense of right and wrong to conclude that S1 is true – and asking readers to use their own moral sense, of course.

In other words, I'm concluding, using my moral sense, and some assumptions that are entailed by theism, that if Alex were to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity, then Alex would be acting immorally.

So, this objection would fail as well.

There is no improper assumption or circularity on my part.

Now, I will appeal to the reader's grasp of moral terms, and claim that – just by the meaning of the words – S1 entails:

S2: Alex has a moral obligation not to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity.

Readers will use their own grasp of moral terms to make their own assessment, of course, but I contend that S2 follows from S1 just as 'Barack Obama is not a bachelor' follows from 'Barack Obama is married'.

So, Alex has a moral obligation.

But that moral obligation is not constituted by one of Alex's commands – hopefully, that is clear.

In other words, Alex's moral obligation not to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity, is not constituted by Alex's 'command to Alex' not to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity – there is no command from Alex to Alex.

Hence, it is not true that moral obligations are constituted by Alex's commands.

Now, if God exists, then God and Alex are the same person, since God is the creator of all other personal beings, and Alex is the creator of all other personal beings.

Therefore, it is not true that moral obligations are constituted by God's commands.

Therefore, Divine Command Theories are not true.
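To sum up, the structure of the argument just given can be laid out schematically as follows; this is merely a premise-conclusion sketch of the reasoning above, with numbering added for convenience:

```latex
% A premise-conclusion sketch of the argument against ontological DCT.
\begin{enumerate}
  \item $S1$: If Alex were to create other personal beings for the specific,
        deliberate and exclusive purpose of torturing them for all eternity,
        Alex would be acting immorally. \hfill [moral sense]
  \item $S1$ entails $S2$: Alex has a moral obligation not to do so.
        \hfill [meaning of moral terms]
  \item Hence, Alex has at least one moral obligation. \hfill [1, 2]
  \item That obligation is not constituted by any command of Alex, since
        there is no command from Alex to Alex. \hfill [as argued above]
  \item Hence, it is not true that moral obligations are constituted by
        Alex's commands. \hfill [3, 4]
  \item If God exists, then God is Alex. \hfill [each is the personal creator
        of all other personal beings]
  \item Hence, it is not true that moral obligations are constituted by
        God's commands; ontological DCT is not true. \hfill [5, 6]
\end{enumerate}
```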

17.1.3) Rebuttal to a potential theistic objection

A theist might claim that Alex is a person but not a moral agent, so it's not true that it would be immoral of Alex to create other personal beings for the specific, deliberate and exclusive purpose of torturing those beings for all eternity.

However, if this objection held, then if Alex is God, Alex would not be morally good in the sense in which we use the words, but only in some analogous sense.

However, when theist non-philosophers say that God is morally good, there is no reason at all to suspect that they're not using the term "morally good" in the usual sense of the words.

In fact, the same seems to apply to most philosophers:

For instance, they try to come up with explanations as to why God creates a world with suffering, allows moral evil, etc., based on some arguments about how a morally good creator would act.

If they mean something else, their claims would be puzzling: What would they be talking about?

So, if a philosopher claims that the above is not true, and that "morally good" is usually used in some analogous way in the case of God, they have the burden to show that that is the case.

Alternatively, if they just want to use the term "morally good" in a non-standard manner, they ought to define it, but that would not block the case against DCT.

Finally, a theist might suggest that I'm being circular, because God may have moral properties, but has no moral obligations. However, as I explained in the previous subsection, there is no circularity: I'm assessing – not assuming – that he has moral obligations.

17.2) Semantic DCT

Any semantic DCT can be refuted more easily, by taking a look at how theists themselves use moral terms:

First, there are many theists who believe that Yahweh had or has a moral obligation to honor his covenant with the ancient Hebrews.

It is apparent that their belief that Yahweh has a moral obligation to honor his covenant with the ancient Hebrews is not a belief that God commanded Yahweh to honor his covenant with the ancient Hebrews.

Second, there are also theist philosophers who say that God has moral obligations; for instance:

Richard Swinburne: [18]

God has a moral obligation to make himself known

Swinburne is most certainly not saying that God commanded God to make himself known, or anything of the sort.

Third, there were plenty of people, in different societies, who did not believe in God, or didn't even have the concept of God, and yet they made moral claims without a problem.

A theist might claim that they had the concept, even if not the belief.

That's not true – in many cases, the entities they believed in didn't remotely resemble God, and they had not even considered such a being –, but there is no need to address that, since their lack of belief suffices to make the following point:

If 'Agent A has a moral obligation to do X' means 'God commands agent A to do X', or something like it, unbelievers would plausibly realize immediately that they are affirming the existence of God all the time.

Someone might object that some semantic identities aren't transparent.

That may be true, but it is not plausible in this particular case: if someone is actually making a claim that a person issued a command forbidding some behavior, it is difficult to see how they would all fail to realize that they're talking about that person.

To make the matter more concrete, let us consider a specific example: Japan.

Japan is a country in which there is no tradition of belief in God: the main religions – both traditionally, for a long time, and in the present – are Buddhism and Shinto, not Christianity, or Islam, or any other religion that posits the existence of God, even if some of them posit other odd entities.

In fact, today as in the past, the vast majority of people do not believe in God – while different polling methods yield different results, all of them agree that it's a significant majority. [19]

Yet, clearly, and with the exception of cases of severe mental illness, Japanese adults grasp the meaning of moral terms, can and do use them competently, etc., and would realize it if they were making claims about God issuing commands.

So, they're not making claims about commands or prohibitions issued by God, or anything of the sort.

Hence, semantic DCT are not true.

17.2.1) Obligations and commanders

Even though, in the context of his metaethical argument, Craig is concerned with moral ontology rather than moral semantics, he contends that, on atheism, there are no moral obligations or prohibitions because there is no competent authority to issue moral commands or prohibitions:

William Lane Craig: [12]

Moral obligations or prohibitions arise in response to imperatives from a competent authority. For example, if a policeman tells you to pull over, then because of his authority, who he is, you are legally obligated to pull over. But if some random stranger tells you to pull over, you’re not legally obligated to do so.

Now, in the absence of God, what authority is there to issue moral commands or prohibitions? There is none on atheism, and therefore there are no moral imperatives for us to obey.

Yet, Craig does not provide any good reasons at all to even suspect that having an obligation would require having an authority issuing commands.

Moreover, the fact is that there is no semantic requirement.

As I explained earlier in this subsection, semantic DCT are not true.

Someone might suggest another semantic connection.

They would have to explain themselves, but it seems the same kind of considerations would rule that out.

Now, Craig's argument is an ontological one, not a semantic one.

However, the non-theist objectivist is not challenged in the least by an ontological challenge in absence of a successful semantic challenge.

Why should we even suspect that without such a competent authority issuing commands, moral obligations would not exist, if the semantics of the words do not require it?

Moreover, as I showed earlier, ontological DCT aren't true, either, so clearly no such requirement exists.

Finally, Craig's police officer analogy does not work, either:

First, if the police officer analogy is an attempt to introduce a semantic challenge, suggesting that moral obligations entail a Supreme Commander by the meaning of the words, then the challenge fails, for the previously explained reasons.

Second, if it's not meant to suggest any semantic requirement, then Craig provides no good reason to suspect that morality and legality are indeed analogous in the case under consideration.

Furthermore, the case against ontological DCT still applies.

So, in brief, Craig's argument about a "competent authority" provides no good reason to even suspect that a lack of a Supreme Commander would be in any way a problem for morality.

18) Copan's metaethical arguments against evolutionary naturalism

With slightly different terminology, Paul Copan also makes a metaethical argument against EN[3].

Given that Copan's claims are similar to those of some of the previous arguments, this section is not strictly needed. However, I decided to include it in order to address some specific points in greater detail.

18.1) Moral truths and valuing

Copan begins by arguing that humans possess a built-in moral sense, and that there are some moral truths that we can't fail to know.

Copan:[3] (p. 142)

Likewise, despite flawed moral judgments, there still are certain moral truths that we can’t not know—unless we suppress our conscience or engage in self-deception. We possess an in-built “yuck factor”—basic moral intuitions about the wrongness of torturing babies for fun, of raping, mur­dering, or abusing children. We can also recognize the virtue of kindness or selflessness, the obligation to treat others as we would want to be treated, and the moral difference between Mother Teresa and Josef Stalin.

I would agree that some of those assessments are true in all possible cases – e.g., it's always immoral to torture babies for fun -, but others do not appear to be so.

For instance, plausibly there is not always a moral obligation to treat others in the way we would like to be treated.

In fact, that 'we' applies to all humans, but clearly, a murderer has no moral obligation to help other murderers escape justice, even if he would want to be helped by others in that fashion. Someone might say that the murderer has a moral obligation not to try to escape justice. But that's not the point. Rather, the point is that it seems not all humans always have a moral obligation to treat other humans the way they would like to be treated.

Also, let's suppose that Bob does not want to be treated the same way Alice wants to be treated. Does Bob always, under all circumstances, have a moral obligation to treat Alice the way he would want to be treated, regardless of how she wants to be treated?

That seems very implausible. Perhaps, someone might say that the proper way of understanding treating others the way we would want to be treated includes treating others the way they want to be treated, because we would want to be treated in that fashion as well. But that is not always true, either, since we do not always have a moral obligation to treat others in the way they would want to be treated. For instance, a murderer might want to be allowed to go free, but we have no moral obligation to let him do so.

So, it seems that treating others the way we would want to be treated is not always a moral obligation, though sometimes it may well be.

As for Stalin and Mother Teresa, we can of course recognize moral differences, but Copan seems to be suggesting that Mother Teresa was a particularly good person, which is at the very least very debatable. Of course, there is no doubt that Stalin was far, far worse.

Still, those are side issues in this context.

The key points here are the issue of a built-in 'moral sense', and that morality is not invented but recognized.

I've already explained this matter throughout this article, so I won't get into it any further. The important point here is that a non-theist who accepts EN may well accept Copan's claims that we have a built-in moral sense, that we do not invent morality, etc., without any complications.

The moral sense is a species-wide human trait; on a potential account compatible with EN, just as zurkovians might have their z-moral conscience, humans have a moral conscience, as a result of the evolutionary process.

Of course, someone who accepts EN also has no difficulty accepting that if the members of a tribe believe that it's morally acceptable for them to sacrifice their firstborns – to use one of Copan's examples –, they would be very mistaken.

She may go further and point out that religion often perpetuates terrible moral mistakes, like sacrificing newborns to alleged deities or, say, burning a woman to death just because she's the daughter of a priest and engaged in prostitution, as Mosaic Law monstrously commands.

Incidentally, Copan also insists that without the Law of Moses, Gentiles would still have a conscience "written in their hearts". But apart from the fact that the Law of Moses was profoundly immoral[11], the claim of a conscience 'written in their hearts' is not problematic at all, as explained earlier.

18.2) Valuing instrumentally and valuing finally

We can distinguish between valuing some being, action, etc., instrumentally, or finally/as a goal.

Roughly, that can be characterized as follows:

a) Agent A values X instrumentally if and only if A values X as a means to obtaining some other thing Y.

b) Agent A values X finally, or as a goal, if A values X for its own sake, even if X does not help A obtain any other thing Y.

For instance, it may be that a chimpanzee – say, Jack – values a makeshift spear as a means of getting bushbabies (i.e., food). In that case, Jack instrumentally values the makeshift spear.

It may be that Jack values bushbaby meat for its own sake and not only as a means to some end, so he values the meat finally, though perhaps it's more accurate to say he values meat instrumentally, as a means of feeling better by eating it. But Jack may well value his mother finally, and be willing to defend her even if she's old and he would get nothing out of it.

That aside, so far in this subsection I've been talking about agents positively valuing things, though I've left the 'positive' implicit (which is standard usage); now, let's consider the case of agents negatively valuing things.

Generally, as with positive valuing, agent A may value X negatively because of X itself – i.e., regardless of X's consequences -, or because of some of X's consequences.

Human minds are more complex than chimpanzee minds, but the previous classification works just as well.

For instance, a human, Alice, might value her car instrumentally, value a business associate both finally and instrumentally, and value her mother exclusively finally. She might also value a hurricane negatively because of its consequences, and not value a similar storm on an uninhabited planet either positively or negatively, and she may well value immoral behavior negatively for its own sake, not only for its consequences.

On the other hand, a hypothetical zurkovian might value humans instrumentally – for instance -, but value other zurkovians exclusively finally, or both instrumentally and finally. Or she might – for instance – value humans negatively because of the consequences of human presence.

So, in brief, agents might value things positively or negatively, and might value things for themselves, or because of some of their consequences.

As for valuing something intrinsically, that term may be used to mean the same as valuing something finally, and that's understandable. But there is also considerable confusion in some metaethical arguments when the word 'intrinsic' is misused. Now, let's see what Copan claims:

Copan:[3](p. 143)

Such an affirmation of human dignity, rights, and duties is something we would readily expect if God exists—but not if humans have emerged from valueless, mindless processes (more below).

Clearly, our ancestors valued some things positively and others negatively long before there were any humans. But of course, on EN, there was no intelligence directing the process and making evaluations.

If there were some problem with entities capable of valuing under EN, then a metaethical argument would be moot. For that matter, a tigress is not a moral agent, but she may well value a good steak, so if valuers were a problem for EN, a tigress would be a problem for EN as well.

Of course, I do not think that valuers are a problem for EN, and anyone who claims that they are would have the burden of showing it. But as I mentioned, that would no longer be a metaethical argument, and Copan is not even trying to argue for it, other than his general claim that consciousness is a problem for EN – which isn't a metaethical argument, either.

So, let's consider another one of Copan's claims:

Copan: [3](p. 146)

Why think impersonal/physical, valueless processes will produce valuable, rights-bearing persons?

On a potential account of morality compatible with EN, what the evolutionary process produced is beings with a moral sense, and who value some things negatively and others positively.

There is no good reason to think that such a process wouldn't produce such agents, and on the other hand, there are plenty of good reasons to believe that such a process can produce and has produced many valuers, with species-wide and species-specific senses.

So, the person who accepts EN is not at a disadvantage. On the contrary, it's the theist who has to engage in serious mental gymnastics to persist in his belief in an omnipotent, omniscient, morally perfect creator despite the amount of suffering in the world, the existence of beings with imperfect moral senses, etc. Worse even, the theist is usually not a generic theist, but a Christian, or a Muslim, etc., so he also has to deal with the specific actions of the alleged creator described in the Bible or the Quran, many of which are profoundly evil (I argued that elsewhere, in the case of the Bible[11]; the Quran is relevantly similar), whereas none of those problems exist under EN.

But leaving ethical challenges to theism and to specific religions aside, the fact remains that Copan's arguments fail to present any difficulty for morality under EN, as explained, and seem to be based on a confused understanding of valuing; on that note, let's assess another claim:

Copan [3] (p. 146)

So if humans have intrinsic, rather than instrumental (or no) value, the deeper, more natural context offering a smoother transition is a personal, supremely valuable God as the source of goodness and creator of morally responsible agents.

Here, Copan is talking about some mysterious 'intrinsic value', which he does not define or explain.

But regardless, the point is that agents value things (using 'things' broadly, encompassing other agents, behaviors, etc.). Sometimes, they value those things instrumentally, sometimes finally/intrinsically, sometimes both; sometimes, they value things positively, and sometimes negatively. But there is no entity 'value' floating around so to speak; that would be a confusion: we're just nominalizing and using a noun instead of a verb, but that's certainly no reason to add things to our ontology.

That aside, some humans may well value some other humans intrinsically, of course.

Moreover, it may be that it's immoral for a human to value another human only instrumentally.

But none of that is a problem for EN, or requires positing some mysterious entity 'value' that somehow emerges from the evolutionary process. What emerges are entities that value different things; in other words, valuers emerge, or more precisely evolve.

Valuers aside, Copan claims that consciousness is a problem for naturalism. Now, it is true that consciousness emerges from beings without minds on EN, unless panpsychism is true and the basic phenomenal consciousness counts as 'consciousness' in the relevant sense. But in any case, an argument to the conclusion that consciousness is a problem for EN is no longer a metaethical argument, so I will not address that issue.

So, leaving consciousness aside, the point is there is no need to argue for any kind of weird supervenience.

All that is needed is that valuers – i.e., entities who value other things – evolve from non-valuers, which does not seem to present any particular difficulty for EN if minds do not present a difficulty for EN. I do not see any good reason to believe that minds present any difficulty for EN, but there is no need to address that here, since – as I mentioned – that would not be a metaethical argument anymore.

18.3) Arbitrary morality?

Another part of Copan's metaethical argument against EN is based on a charge of 'arbitrariness'.

However, Copan does not explain why the possibility that evolution could have taken a different path would make morality arbitrary under EN. For instance, we don't say that color is arbitrary just because other species have different visual systems, or because evolution could have taken a different path.

So, we shouldn't say that morality would be arbitrary on EN just because of that.

A better description would be 'specific to the human species and any relevantly similar being, real or hypothetical', rather than 'arbitrary'.

That aside, let's assess Copan's arguments.

Copan [3] (p. 152)

Given naturalism, it appears that humans could have evolved differently and inherited rather contrary moral beliefs (“rules”) for the “chess game” of survival. Whatever those rules, they would still direct us toward surviving and reproducing. Ruse (with E. O. Wilson) gives an example: instead of evolving from “savannah-dwelling primates,” we, like termites, could have evolved needing “to dwell in darkness, eat each other’s faeces, and can­nibalise the dead.”

Actually, we couldn't have evolved like that, since those beings wouldn't be us. In that scenario, we wouldn't have evolved at all.

Also, Copan seems to make an assumption that goes too far, for the reasons I explained earlier: whether that kind of social organization would evolve among strongly intelligent social beings in our universe is an empirical matter, and should not be assumed.

On the other hand, it should not be assumed that any strongly intelligent social beings will have exactly the same sense as our moral sense. In fact, it seems that different social species with similar IQ may evolve rather different propensities for social behavior, so the assumption in question seems implausible to me, but it's a matter for biologists to tackle.

All that aside, if such variation does exist, that's not a problem for EN, either, as I explained earlier, when I analyzed Linville's argument. Copan seems to assume that that would be a problem, but gives no good reason to believe that that would be the case.

But let's look at the matter from another point of view:

As before, let us suppose that, in the future, we or our descendants in fact do make contact with an advanced alien civilization, and the aliens – which/who are strongly intelligent – happen to have not a moral sense, but something different, like a z-moral sense.

Should we, or our descendants, conclude that – for instance – our assessment that the Holocaust was immoral, is unwarranted, just because some smart aliens on another planet happened to evolve differently, without a moral sense?

Should we withdraw the clear assessments that, say, Hitler and Ted Bundy were bad people, merely because some intelligent aliens orbiting a distant star do not have a moral sense, but something only somewhat similar instead?

Oddly, Copan seems to assume that the answer to those and any similar question is always 'yes', but without giving any good reasons to believe so.

Ironically, Copan, Linville and other theists believe that potential variation from species to species is an advantage that theism has over EN – indeed, maybe even a defeater for EN! –, whereas in reality that potential variation is a very serious disadvantage for theism, since it shows that the theist is committed to a bold claim about exobiology without any good evidence in support of it, as I explained earlier.

That aside, Copan reaches yet another unwarranted conclusion: He suggests that practices such as so-called 'honor killings', or 'female circumcision' somehow can't be rationally condemned if EN is true. Yet, Copan provides no good reason whatsoever to believe any of that.

In fact, and while he's somewhat unclear on the subject, Copan appears to believe that, under EN, whatever helps maximize survival – or, perhaps, reproductive success -, would be morally acceptable. But that clearly does not follow, and he once again provides no good reason in support of such a hypothesis.

On a potential account of morality compatible with EN, humans have some moral sense that tracks some mental properties, and humans also normally value some of those properties positively, and others negatively. That sense and propensities evolved because they were on average conducive to reproductive success in the ancestral environment, if the moral sense is entirely the product of adaptation. However, even that would not mean that doing the right thing is always conducive to reproductive success, or even that it was always so in the ancestral environment.

Moreover, on EN, our moral sense does not even have to be a sum of adaptations. It may be a combination of adaptations and side-effects of other adaptations, in which case the requirement of being on average conducive to reproductive success is not even needed in all cases.

Also, the propensity to act immorally in some cases is also not surprising, since our mind is the product of a process involving multiple selection pressures, sometimes going in different directions at different times in our past.

In brief, the claim of 'arbitrariness' fails to make a dent in the hypothesis that EN is true and compatible with moral truth, moral knowledge, moral progress, etc.

On the other hand, Copan's claims only highlight one of the theist's unwarranted commitments – in this case, a bold unwarranted claim about exobiology.

18.4) Explanatory power and bloated ontologies

Another objection that Copan raises[3] maintains that, on EN, adding what he calls 'objective moral values' results in a bloated ontology, since there is no explanatory need for 'ought', and 'is' can do the job just as well. As an example, he claims that Hitler's actions can be explained by describing that he was angry and bitter, had false beliefs about the Jews, etc., without using any moral terms.

Now, 'ought' statements seem to follow from 'is' statements of the form 'is immoral', etc., but leaving that aside, if we can explain Hitler's behavior without using moral terms, that is not at all a good reason to even suspect that he wasn't morally evil, or that somehow morality bloats our ontology on EN, and Copan does not explain why that would be so.

To see this, we can use two analogies, independent from each other, and each of them sufficient to show that Copan's argument is misguided: illness and redness.

18.4.1) Illness and moral badness

18.4.1.1) Language

Let's say we observed that Joe was screaming, moving erratically, etc.

We may account for Joe's behavior by explaining that, say, a virus caused such-and-such effects on such-and-such organs, etc., which caused pain, and so on. We do not need to say that Joe was ill in order to account for his behavior.

Also, let's say that we observed that Ahmed was behaving in an uncharacteristic manner lately. We can explain that by pointing out that some of the cells in his brain developed in such-and-such way – which we may also call a 'brain tumor' –, and that that had such-and-such effect on certain parts of the brain that are associated with certain behaviors, so the alteration affected his behavior in such-and-such manner, and so on. Once again, we do not need to point out that Ahmed was ill in order to account for his behavior.

Obviously, none of the above suggests that Joe or Ahmed wasn't ill, or that somehow illness bloats our ontology if EN is true.

In fact, it seems that we do not need to add anything at all to an ontology just because Joe and Ahmed were ill:

We may simply posit – for instance – that humans evolved a sense that allows us to detect some of the things around us (including what we now may call 'ill organisms'). As language developed, our ancestors came up with words that they used when perceiving and/or contemplating some of them, etc. As a result, today we have words such as 'ill'. In addition, as a result also of the evolutionary process, we're predisposed to feel and generally respond in certain ways when finding such organisms.

A similar sense may be posited in the case of morality, and also certain psychological reactions to finding morally bad organisms.

If a theist claims there is a difference that is relevant in this context, they would have the burden to explain why that is so, and why we would need to add anything to an ontology in the moral case, always keeping in mind the following:

First, we can explain Joe's and Ahmed's behaviors without resorting to illness language, but that does not entail that they were not ill, or that our assessments about illness are unjustified or false, or that illness bloats our ontology. Generally, if we don't need to use illness language, that is no problem for our assessments about illness.

Second, and as a consequence, it is not always the case that, for all X, and for all Y, if we can explain phenomenon X without using Y-language, then our Y-assessments are unjustified, or would commit us to a bloated ontology, etc.

So, if the theist claims that in the moral case, if we can explain all behavior without using moral language under EN, that would be a problem, then it's up to the theist to explain why that is so. Merely claiming that we can explain Hitler's actions without using moral language is not enough.

Note that an attempt to make an Open Question Argument fails here, as such an argument always does, and for the reasons explained before. We may additionally point out that a question like, say, 'I know that Ahmed has a tumor in his brain, but is he ill?' does not appear to be any less open than, say, 'I know that Lex likes to torture children for fun, but is he a bad person?'.

Someone might say that illness statements are descriptive, not prescriptive. In fact, Copan raises that issue. I've already dealt with that matter sufficiently in an earlier section, so I will refer readers to that part of this article. Here, I will only point out that the issue is beside the point in this context: since it is not always the case that, for all X and for all Y, if we can explain phenomenon X without using Y-language, then our Y-assessments are unjustified, or commit us to a bloated ontology, etc., it is up to the theist to explain why, if we can explain behavior without using moral terms under EN, that would be a problem for EN. Copan fails to provide any good reason for believing that it would be.

18.4.1.2) Facts

At this point, a theist might say that the problem is not that we can explain his actions without using moral language, but that we do not need to posit moral facts. But what are they saying? What distinction are they making?

In any case, we can make the previous analogy anyway, in the following manner:

The only reason Copan gives for the claim that we can explain Hitler's behavior without positing moral facts is that we can describe his behavior in non-moral terms, and in doing so, explain why he acted as he did. Thus, and for the same reason, we can similarly explain Joe's and Ahmed's behaviors without positing illness facts.

Yet, that does not entail that illness facts bloat our ontology: Indeed, it seems we do not need to add anything at all to our ontology:

We may simply posit – for instance – that humans evolved a sense that allows us to detect some of the things around us. As language developed, our ancestors came up with words that they used when perceiving and/or contemplating some of them, etc. As a result, today we have words such as 'ill', which we can properly apply when detecting such things (e.g., ill people).

Would that mean that we need to posit some entity 'illness' above and beyond, say, animals and other agents infected with viruses, or having cancers, etc.?

It seems clear that no further entity is required.

In other words, the fact that there are illness facts seems not to require any extra entity, so our ontology is not bloated.

Why would moral facts be any different, then?

On these accounts, we also developed psychological responses to finding ill organisms, or morally bad ones, but that's also unproblematic under EN.

The theist might claim that both illness and moral badness are a problem under EN, or try to make a relevant distinction, explaining why the moral case is problematic. But in any case, they would have to explain why, and so the burden would be on them.

18.4.2) Redness and moral badness

According to Copan, Hitler's behavior can be explained using only a non-moral description, and somehow that would mean that moral facts would bloat our ontology on EN. As we saw in the previous subsection, Copan's claims fail to present a challenge for the person who accepts that EN is true.

But now, let's take a look at the matter from another perspective.

Presumably, what he's saying is that all behavior can be explained with a non-moral description under EN, without positing moral facts. Otherwise, his claim that moral facts lack explanatory power would not hold.

But let's consider our assessments that, say, a man who tortures children purely for fun is a morally evil person, or that Hitler was a morally evil person.

Under EN, we may posit – for instance – that we may well have a sense that allows us to ascertain moral facts, thus explaining our behavior in making said assessments. So, it seems moral facts can play such an explanatory role on EN, as color facts or illness facts do.[20]

Now, someone might say that even if moral facts can play such a role, we do not need them, and we may provide an explanation of our moral assessments that excludes them. But what would that explanation be?

Someone might offer an alternative explanation such as the following: we developed a sense that makes us falsely believe that, say, Hitler was a morally evil person, etc., because that was advantageous in the ancestral environment, in the sense that it was on average conducive to reproductive success to have such false beliefs.

But similarly, we may offer the following explanation: we developed a sense that makes us falsely believe that, say, some fruits are red and others green, because that was advantageous in the ancestral environment, in the sense that it was on average conducive to reproductive success to have such false beliefs.

The same could be said, of course, in the case of illness.

The question is not whether such explanations are logically compatible with EN, but whether they would be plausibly correct explanations, accepting that EN is true, or at least, whether they would not be improbable.

Now, Copan seems to imply that some explanations that do not posit moral facts would be not only compatible with EN, but also preferable, or better explanations for our observations, etc.

But why would such explanations be preferable? Why would, say, a moral error theory account for our observations – including the fact that we make moral assessments – better under EN, but not a color error theory, or an illness error theory?

Copan provides no good reason even to suspect that this is so.

Perhaps Copan believes that color is also a problem for EN, and/or that illness is, but then the alleged problem is not related to metaethics in particular; rather, it's allegedly a general problem under EN – a problem Copan would have the burden to argue for, of course.

Granted, there are theists who argue that all of our beliefs – or most of them – would at least be suspect if EN is true. However, that would no longer be a metaethical argument or anything related to it.

19) Conclusion

Metaethical arguments for theism try to use ontological, epistemic and/or semantic considerations to make their case. However, they contain numerous confusions – from semantic confusions to implicit commitments to bold and unwarranted claims about exobiology – and they fail to provide any good reasons to believe that theism might be true, or that evolutionary naturalism might be false or incompatible with moral knowledge, moral facts, moral truth, etc.


Notes and references

[1]

Source: Linville, Mark D., "The Moral Argument", in "The Blackwell Companion to Natural Theology", edited by William Lane Craig and J. P. Moreland, © 2009 Blackwell Publishing Ltd. ISBN: 978-1-405-17657-6. Pages 391-448.

[2]

Podcast: http://www.reasonablefaith.org/site/PageServer?pagename=podcasting_main

Debate between William Lane Craig and Sam Harris: http://www.mandm.org.nz/2011/05/transcript-sam-harris-v-william-lane-craig-debate-%E2%80%9Cis-good-from-god%E2%80%9D.html

On-line argument, "The Indispensability of Theological Meta-Ethical Foundations for Morality": http://www.leaderu.com/offices/billcraig/docs/meta-eth.html

[3]

Copan, Paul, "God, Naturalism, and the Foundations of Morality".

Source: http://www.paulcopan.com/articles/

[4]

Readers will of course use their own grasp of the terms to assess the matter, as always.

[5]

Hurka, Thomas, "Moore's Moral Philosophy", in the Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/moore-moral/

[6]

At least, it does not seem to bring any new problems if we assume that generally the evidential Problems of Evil and Suffering fail to show, beyond a reasonable doubt, that theism is not true. My position is that they succeed, but that's beyond the scope of this article.

[7]

Tom's claim is not true, by the way, but that's not the point here.

[8]

I clarify that they're adult humans because, say, a case of chimpanzees bayoneting babies for fun might raise issues that would be a distraction at this point.

[9]

There might be exceptions: for instance, we might make the assessment believing that A is a telepath, or got the information from a chain leading to a telepath. But that's not what normally happens, and it's not exactly a reasonable way for us to try to ascertain truth.

Also, we might assess that a being has a belief – perhaps, in her own existence – only on account of the way she looks (i.e., she looks like a living adult human); however, that might also more or less indirectly be based on behavioral observations. In any case, at the very least, in cases in which some mental property varies from human to human, it's clear that observations of behavior are the way to track such a property.

[10]

It is possible to construct hypothetical scenarios that show that the claim is false, so another condition – like stipulating that the rape in question is for fun, or just for power – is required.

However, that's only a secondary issue, so we may as well assume for the sake of the argument that the claim about rape is true.

[11]

That's explained in another article:

On line: http://angramainyusblog.blogspot.com/2011/12/moral-case-against-christianity.html

For download: http://www.4shared.com/document/cVCYWgqD/A_Moral_Case_Against_Christian.html

Briefly, I'm not suggesting that, say, every precept of the Law of Moses was immoral. However, it contains many horrendously evil commands, and many more that are less evil but still quite immoral, so we can justly say that, overall, the Law of Moses was a profoundly immoral law. In any case, we can give more details by considering the commands on a case-by-case basis.

Similarly, not every one of Yahweh's actions was immoral, but then again, the same can be said about the actions of a serial killer. Given the atrocities he engaged in, he is definitely not morally good.

Also, to be clear, I'm not suggesting he exists; just as we can say that Darth Vader isn't morally good without making a claim that Darth Vader exists, we can make a similar assessment about Yahweh, or other hypothetical characters, based on the description of them in some story. The details of all of these matters are beyond the scope of this article, and are given in the other article, so I will refer readers to it.

[12]

http://www.mandm.org.nz/2011/05/transcript-sam-harris-v-william-lane-craig-debate-%E2%80%9Cis-good-from-god%E2%80%9D.html

[13]

That does not mean that humans are the only animals that are moral agents.

We know enough about evolution to know that humans evolved gradually from other species, and that there is no clear-cut line.

Whether some extant non-human animals – such as bonobos or chimpanzees – are moral agents as well is more debatable, but there is no need to get into that – we may as well assume here that they are not, and that's no problem for EN, either.

[14]

Of course, readers will make their own intuitive assessments of these issues. But that's always the case.

[15]

I have doubts about the coherence of the material/non-material distinction, and to some extent the physical/non-physical distinction.

However, I will leave that aside for the sake of the argument, and show that in any event, Craig's points or similar ones do not pose any challenge for the non-theist objectivist.

[16]

http://www.leaderu.com/offices/billcraig/docs/meta-eth.html

[17]

Someone might also use the expression 'Divine Command Theory' to denote other theistic metaethical frameworks, even if commands do not play a central role. That's of course a matter of notation. In this article, the term is limited to the kinds of semantic or ontological theories that I've described.

[18]

Someone might wonder whether zurkovians are moral agents at all, and suggest that Alex might not be one, either.

However, it seems clear to me that a personal being is a moral agent except, perhaps, in the case of babies and maybe a few other exceptions involving mental limitations, but that is not the case here.

Alternatively, we can argue as follows: if Alex is not a moral agent, then the creator has no moral properties, and so theism is not true. Hence, we may assume that Alex is a moral agent.

[18]

Swinburne, Richard, "The Existence of God", Second Edition. Clarendon Press, Oxford. Page 130.

[19]

https://secure.wikimedia.org/wikipedia/en/wiki/Religion_in_Japan

http://www.adherents.com/largecom/com_atheist.html

[20]

Of course, if we do posit – for instance – that we may well have a sense that allows us to ascertain moral facts, thus explaining our behavior in making moral assessments, and conclude that moral facts can play such an explanatory role on EN, as color facts or illness facts do, what we're saying is that we have a sense that allows us to detect and track certain things. We're not saying that there is some mysterious entity called 'fact' over and above those things.

But the explanatory power of 'facts' should not be understood as positing any mysterious entities in the case of color, illness, or for that matter planets, cars or mathematics; so anyone claiming that morality is somehow exceptional when it comes to facts, and that that's a problem for EN, should defend their claims.
