Wednesday, November 30, 2011

A case against "The Existence of God" - a reply to Swinburne's case for theism

Download in .pdf format (alternative link)

Download in .html format

Download in .odt format



A case against "The Existence of God": a reply to Swinburne's case for theism


0. Introduction

1. Simplicity, scope, metaethics, and the probability of God

1.1. Simplicity, scope and intrinsic probability

1.1.1. Simplicity

1.1.2. Scope

1.2. The concept of God

1.2.1. Freedom

1.2.2. Moral perfection

1.2.2.1. Some implicit assumptions

1.2.2.2. Swinburne's account of morality

1.2.2.2.1. An alien threat

1.2.2.2.2. More aliens

1.2.2.2.3. Omniscience and perfect freedom without moral perfection

1.2.2.2.4. Morality, objective values, and reasons

1.3. The intrinsic improbability of theism

1.3.1. Simplicity

1.3.2. Scope

1.3.3. Consequentialist deities

1.3.4. Estimating the intrinsic improbability of God

1.4. The final improbability of theism

1.4.1. More alternative deities

1.4.1.1. Simplicity

1.4.1.2. Probabilistic assessment

1.4.2. Choice

1.4.3. Gambling deities

1.4.4. Simplicity, revisited

1.5. Conclusion of Part 1

2. Power, moral assessments and probability

2.1. What can God do?

2.2. What would God likely create, or refrain from creating?

2.3. Conclusion of Part 2

3. More probabilistic assessments, and more theistic improbability

3.1. The improbability of God's creations: possible worlds as an example

3.2. Theism and the probability of humans

3.2.1. What's a 'type' or 'kind' of action?

3.3. 'Humanly free' agents, physicality, and dimensionality

3.4. Dimensionality and some probabilistic assessments

3.5. Conclusion of Part 3



Notes and references


0. Introduction

In this article, I will argue that the case Swinburne defends in "The Existence of God" [1] fails to provide any support for theism.

The article is divided into three main parts, each of which constitutes a case against Swinburne's case for theism, and each of which is independent of the other two.

In the first part, my strategy will be to focus on the issues of simplicity, probability, and some metaethical claims he makes, arguing that either the case for theism is blocked by a division by zero, or the conclusion would be that the probability that theism is false is either 1 or arbitrarily close to 1.

In the second part, I will challenge Swinburne's take on power, most of his moral assessments, and part of the reasoning he uses to try to back some probabilistic assessments.

In the third part, I will concede some of Swinburne's claims for the sake of the argument, and argue from there that God does not exist.

In the course of my arguments, and unless otherwise specified, I will assume for the sake of the argument that the properties Swinburne posits (e.g., omniscience, "perfect freedom") are coherent. That does not mean I'm taking a stance on that.

Also, I will not challenge all of the controversial claims Swinburne makes. For instance, I will not challenge the distinction between personal and scientific explanations.

On a terminological note:

a) I will only refer to different parts of this article as "sections" or "subsections" - i.e., no sub-subsections, etc.

I think links between the relevant parts of the article will prevent any ambiguity.

b) All references to page numbers are references to Swinburne's book. [1]

c) If I talk about dimensionality and dimensions without further specification, I'm talking about spatial dimensions; in other words, I'm not counting time.

Finally, and as usual, I don't claim there is any novelty in the ideas on which I base these arguments.

1. Simplicity, scope, metaethics, and the probability of God

1.1. Simplicity, scope, and intrinsic probability

According to Swinburne, "the simpler a theory, the more probable it is"[2].

Given the context, it's clear that he means that that's the case all other things equal, since greater scope may reduce a theory's probability, as Swinburne asserts [3].

However, he also claims that scope is much less relevant than the other criterion, since – he alleges – reducing scope also reduces simplicity, due to the introduction of arbitrary restrictions.

Yet, less scope does not reduce simplicity in all cases, all other things equal.

In fact, it may increase it. For instance, let's consider the following two hypotheses:

P1: There exist at least 1000000000 rational agents (or substances, etc.)

P2: There exist at least 1000000000000000000000 rational agents (or substances, etc.)

It seems clear that P2 has greater scope – you're asserting more, so you're more likely to be mistaken, following Swinburne's reasoning -, but it's also less simple, since P1 postulates fewer entities.

In any case, Swinburne for the most part leaves aside the criterion of small scope because – he claims – typically a theory that loses scope loses simplicity [4]. But even if that is true, leaving aside the criterion of small scope requires that one already accept simplicity as much more important.

In defense of his claim that simplicity is much more important, Swinburne presents Newton's theory as an example of a theory that, despite its enormous scope, was judged enormously probable.

However, the fact that it was judged enormously probable as a theory that applies to everything is not a good test of that claim. In fact, it turned out that it didn't apply to everything – i.e., some of the predictions it makes are mistaken.

Moreover, Newton's theory, or any theory in physics, does not seem to have nearly as much scope as theism, which posits an entity that is a creator of everything, which is omnipotent, etc.

So, even if simplicity were more important than scope, that does not mean that any scope, no matter how massive, may be ignored.

That said, let's take a closer look at Swinburne's criteria for assessing simplicity and scope.

1.1.1. Simplicity

In the case of scientific explanations, Swinburne claims that the criterion for simplicity is that the hypothesis postulates few entities, few kinds of entities, and "few and simple kinds of powers and liabilities". [5]

He offers as an example powers describable by simple mathematical formulas.

His criterion for simplicity in the case of personal explanations is based on the following conditions: few entities, few types of entities, few properties, few constant intentions, "continuing basic powers", and "simple laws – constant predictable ways in which persons acquire beliefs from their surroundings." [6]

Yet, nowhere in that criterion is there any accounting for the complexity of a mathematical description involved in the mental properties of the substances posited in personal explanations.

To illustrate the point, let's suppose that we have a hypothesis H(1) that asserts the existence of a universe U(1), which is described by some mathematical model M(H(1)).

Similarly, let's consider hypotheses H(n), for all natural n, and the corresponding mathematical models.

As the models increase in complexity, it seems, so do the hypotheses.

Let's now consider the following hypotheses:

AH(G(n)): There exists an entity who fully understands M(H(m)), for all m≤n, and who is the creator of all things.

Now, clearly, for all n, AH(G(n)) is no more complex than theism, since AH(G(n)) does not postulate more entities, kinds of entities, or more, or more complex, powers and liabilities.

Moreover, the scope of AH(G(n)) is much less than that of theism.

Also, obviously it's not less probable than theism – it's entailed by theism.

Now, let's consider the following hypotheses:

H(n): There exists a universe describable by M(H(n)).

For all n > 1, it's clear that the mathematical description in H(n) is simpler than the mathematical description in AH(G(n)).

Yet, AH(G(n)) is considered much simpler than H(n) – since AH(G(n)) is no more complex than "God exists" – in spite of the fact that the former would include a more complex mathematical description.

In the case of mental properties, it appears that, for some reason, Swinburne's position is that no unpacking of mathematical models is needed when considering simplicity.

If we needed to unpack, that would make theism more complex than a claim of the existence of any universe, no matter how complex its description might be – since an omniscient being would fully understand every such mathematical model – and that would be enough to block Swinburne's case for theism.

But let's grant Swinburne's criterion for the sake of the argument.

1.1.2. Scope

Unlike the case of simplicity, Swinburne does not say much about how to assess scope, perhaps because he rejects that criterion as important when it comes to assessing the intrinsic probability of theism and its rivals, which – he maintains – all have the same scope as theism, as they are all "worldviews". [7]

This is not clear, since – for instance – Christianity makes a lot more claims about the world than theism. Wouldn't Christianity have greater scope?

Apparently, Swinburne counts that as greater complexity, not scope. For instance, he claims that adding fallen angels to the hypothesis that theism is true complicates it – not that it increases its scope. [8]

So, let's grant that that would increase complexity.

Even then, the rivals of theism do not have to be physicalism, or naturalism, etc. - even assuming that such hypotheses have the same scope as theism.

Someone might posit the universe as a brute fact [9], and make no claims whatsoever about whether or not there are other realms – maybe other universes, causally disconnected from ours and also brute facts, etc.

On the other hand, theism posits a creator of everything, making a claim about an entity that controls the whole of reality.

So, it seems to me that theism has greater scope than some alternatives.

1.2. The concept of God

In this section, I will take a look at some aspects of the concept of God as proposed by Swinburne – in particular, at moral perfection.

According to Swinburne, theism can be defined as follows: [10]

Theism: "There exists necessarily a person without a body (i.e. a spirit) who necessarily is eternal, perfectly free, omnipotent, omniscient, perfectly good, and the creator of all things"

That being is called "God", and the hypothesis that theism is true is called "h" in Swinburne's case – I will call it either "h", or simply "theism".

By, "moral perfection", Swinburne means that "God always does a morally best action (when there is one), and never a bad one. [10]

Read literally, that definition might be taken to leave open whether God would ever do a bad action when there is no morally best one, but in context, it should be understood as saying that God never does a bad action.

As for "perfectly free", that means that his choices aren't causally influenced[11].

1.2.1. Freedom

While I don't agree with Swinburne's conception of freedom, I'm not going to challenge it in the main part of the argument. I will make some brief comments in the first appendix, but for now, I will just point out the following:

According to Swinburne, a person with "an inbuilt detailed specification of how to act" is "a much more complex person than one whose actions are determined only by his uncaused choice at the moment of choice". [12]

Yet, Swinburne posits a person who is morally perfect, which amounts to a detailed specification of how to act in infinitely many specific situations.

In fact, if an inbuilt detailed specification as to how to act is incompatible with "perfect freedom", then moral perfection and "perfect freedom" are mutually incompatible, and thus theism is logically impossible.

Still, let's leave that aside, and assess Swinburne's argument from omniscience and perfect freedom to moral perfection.

1.2.2. Moral perfection

In order to argue for the claim that omniscience and "perfect freedom" entail moral perfection, Swinburne makes the assumption that moral judgments are propositions that are true or false.

I will grant that metaethical assumption, but there are implicit assumptions that require further consideration.

1.2.2.1. Some implicit assumptions

Apart from the explicit assumption I just granted, Swinburne makes other assumptions, which aren't trivial but which he does not state. In this subsection, I will consider three of them.

a) He assumes not only that non-cognitivism is not true, but also that speaker relativism and culture-relativism aren't true. I will grant these implicit assumptions as well.

b) He implicitly rejects a moral error theory; for some reason, he addresses and rejects the claim that moral judgments aren't true or false, but he does not mention the claim that all moral judgments like "X is immoral", "X is morally obligatory", etc., express propositions but are false.

I will grant this assumption too.

c) He assumes that any omniscient, "perfectly free" agent would be a moral agent or moral being – i.e., a being with moral properties, and/or whose actions have moral properties.

Moreover, given his conception of moral goodness, and in context, he seems to assume that any rational agent – or at least any sufficiently intelligent rational agent – is a moral agent.

Even under the previous assumptions, this is not at all clear.

For instance:

Let's suppose an alien species evolved on another planet, and instead of morality, they have other intuitions and standards – let's call that "morality*".

They're free in whatever sense in which humans are free, but they have no sense of right and wrong, even though they have a sense of right* and wrong* - which yields different results from a sense of right and wrong, in some cases.

For instance, a behavior might be morally bad, but morally* neutral, or morally* bad, but morally neutral.

If they decided to hunt and kill humans for sport, would they be acting immorally, assuming that they're not acting immorally*? Or would their actions not be morally anything, like those of a lion or a tiger?

Or let's consider an alternative:

On yet another planet, some aliens design a smarter organism that ends up overtaking their whole planet, and becomes nearly their entire biosphere.

This being – let's call it "Guk" – sends probes to Earth, abducts humans and conducts all sorts of experiments on them, some horribly painful.

After it's done experimenting on a human, it discards her – i.e., it kills her.

Guk has a mental makeup that is very different from that of its makers, and from that of humans.

In particular, it has no concern whatsoever for the well-being of other beings, no sense of right or wrong (nor some analogue), no sense of guilt (nor some analogue), etc.

However, it is extremely intelligent; it can design and make computers, starships, genetically engineered organisms – even some smarter than humans -, and so on, and is free to act as it pleases – at least, as free as humans, and as free as its makers.

Would this being, Guk, be evil?

Or would it be a non-moral entity, so that its actions aren't morally anything (like those of a lion or a tiger)?

It's unclear to me.

1.2.2.2. Swinburne's account of morality

Swinburne's argument from omniscience and perfect freedom to moral perfection is based on his conception of moral goodness and reasons, which the following passages briefly explain:

Swinburne: [13]

Having a reason for an action consists in regarding some state of affairs as a good thing, and the doing of the action as a means to forwarding that state.

Swinburne: [14]

I understand by an action being morally good that it is overall better to do it than to refrain, that there are overriding reasons for doing it; and by an action being morally bad that it is overall better to refrain from doing it than to do it, that there are overriding reasons for refraining.

Swinburne: [15]

But the suggestion that someone might see refraining from A as overall better than doing A, be subject to no non-rational influences inclining him in the direction of doing A, and nevertheless do A is incoherent.

However, whether a behavior is better than another one – even "overall better" - depends on the standards of evaluation. The same goes for refraining from a behavior.

For instance, behavior A might be better than behavior B as a means to achieve certain goal G.

Or it might be that A is overall better than B, in the sense that there are a number of goals G(1),..., G(n), and some ordering among the goals – some standard S – and that, according to standard S, A is overall better than B.

Also, it may be the case that the standards are implicit. In particular, it may be that the implicit standard is morality, so in that context, "overall better" would mean "morally better". And it might even be that that's what humans usually mean by that.

However, there is no guarantee that other rational beings would care about moral standards; for instance, they might care about some other standards, say moral* standards.

There would be no irrationality on the part of said beings.

So, let's consider some possibilities:

1.2.2.2.1. An alien threat

For instance, let's consider the aliens I introduced above, in some more detail.

Some aliens evolved on a different planet.

Instead of morality, as a result of their evolutionary process, they have other intuitions and standards – let's call that "morality*".

They're free in whatever sense in which humans are free, but they have no sense of right and wrong, even though they have a sense of right* and wrong* - which yields different results from a sense of right and wrong, in some cases.

For instance, a behavior might be morally bad, but morally* neutral, or morally* bad, but morally neutral.

So, they find Earth, and decide to hunt humans for sport. They don't see anything immoral* about their behavior, and in fact, moral* standards are such that it's not immoral* for those aliens to hunt humans for sport.

Whether it's immoral for them to do so is another matter, but the point is that they do not care. They care about morality*, not about the standards that humans care about; in particular, they do not care about morality.

In fact, even if the aliens, by means of studying their human prey, come to the conclusion that there are some standards that humans care about, and that according to said standards, their actions – i.e., the aliens' actions – are very negatively regarded, the aliens just won't care.

It seems clear to me that there is nothing irrational about the actions of these aliens.

They might be evil – or not; maybe they're not moral beings at all.

But that's beside the point.

Moreover, the conceivable aliens do not even have to be social beings:

1.2.2.2.2. More aliens

Let's consider the extraterrestrial being that I posited earlier, and make the story a little more detailed:

Guk sends probes to other planetary systems, and studies a number of species, including humans.

In particular, by studying the humans his probes abduct, Guk gains knowledge about the human species, including some specific psychological traits, etc.

So, Guk takes some humans to its planet for further study, and comes to know that humans tend to describe it as "gray, blue and red".

It also concludes that those are judgments based on the normal human visual system, and that, given the light conditions, the judgments are very likely true, since the system tends to be reliable.

So, Guk learns it's (very probably) gray, blue, and red – though Guk can't experience the perceptual sensation of seeing those colors: its way of perceiving the world around it is different from, and vastly more complex than, that of humans.

By studying humans, and due to its knowledge of biology in general, Guk also reckons that basic human intuitions are usually reliable.

Also, due to its studies of human psychology, Guk finds out that humans tend to make judgments saying that behaviors are "morally good", "morally evil", etc. – in different languages, but in the same contexts – and that those judgments appear to be based on basic human intuitions.

Guk also learns that all of the humans it studies – abducting them, sometimes subjecting them to horrible experiments, in the end always killing them – consistently state that it – i.e., Guk – is morally evil and/or that its actions are evil, etc.

So, Guk comes to the conclusion that the judgments are probably true, so that it probably is evil.

However, that does not motivate it to stop any more than learning it's gray, blue and red does – i.e., that does not motivate Guk at all.

It also comes to realize that some of the actions it performs on the humans motivate those judgments.

Guk then tests kinds of torments, and also rewards, etc., and puts humans into a variety of complex situations, in order to study what moral judgments its human guinea pigs will make – and it continues studying them.

Eventually, Guk has satisfied its curiosity with regard to humans, and it kills the subjects of its study that are still alive.

Also, it reckons that, if humans are left alone, in millions of years they or their successors might advance enough to be a serious nuisance.

So, it designs some biological weapons, and sends robots to attack the Earth.

The biological weapons kill nearly all of the human population, and then the robots launch a more traditional assault against the survivors to finish the extermination – and succeed, wiping out the human species.

The fact that Guk has learned that its actions are probably evil, however, fails to motivate it in the least to stop them. It does not care about moral evil, or good.

Clearly, Guk does have reasons for acting – studying those other species entertains it.

Moreover, Guk does not have any reasons whatsoever for refraining from acting as it does.

Someone might object that Guk can't really grasp the meaning of moral terms.

But the point is that Guk does not care about the standards humans care about – whatever those are – except to the extent that it's curious about humans and wants to know more about them.

Guk cares about its own entertainment, not about the well-being of anyone else.

1.2.2.2.3. Omniscience and perfect freedom without moral perfection

Earlier in this section, I posited a conceivable entity, Guk, who does not care at all about the suffering of humans or other beings, and subjects some humans and other beings to a lot of suffering in order to learn about them, because that entertains it.

Also, I posited some other aliens who have a different set of standards, other than morality – namely, morality*.

Now, clearly Guk is not omniscient or perfectly free, and neither are the other aliens.

However, that would not change the relevant point: omniscience and perfect freedom would guarantee that a being would know what the best course of action to achieve its goals is, but it does not guarantee that it will have a particular set of goals; in particular, it does not guarantee which standards it cares about.

If the being is also omnipotent, that will guarantee that its goals – if compatible with each other – will be accomplished, but that too does not tell us what those goals are.

For instance, conceivably, an omnipotent, omniscient, perfectly free being could create humans and other intelligent physical beings because it enjoys watching them struggle. In other words, it wants to have enjoyment – that's its goal -, and it enjoys the show.

So, the creator would create a universe with painful parasites, deadly earthquakes, etc., because it enjoys watching lions, dinosaurs, humans, etc., fight, suffer, succeed or fail, etc.

There is nothing contradictory about such a being.

Such a being would surely know what's immoral – and what's immoral* as well, if those aliens were also created.

But – once again – it just wouldn't care – not beyond the entertainment value of watching those beings in such a situation.

Whether such a being would be a moral agent at all is debatable, but one way or another, what's apparent is that it wouldn't be morally perfect.

1.2.2.2.4. Morality, objective values, and reasons

Swinburne claims that there are objective values [14], so perhaps someone might raise the objection that Guk and the other alien beings in question would fail to see the true, objective value, which is given by moral standards – not by moral* standards, or by whatever standards Guk may have.

While I've granted a number of Swinburne's metaethical assumptions, they do not entail anything about what agents value, and what an agent values depends on its particular mental makeup.

Moreover, it's not at all clear what it would mean for "values" to be objective in that sense.

If by "objective values" Swinburne means moral judgments that are true or false independently of who makes them, etc., I've granted that already, but it has no bearing on my points.

If Swinburne means that there are things (actions, agents, properties, etc.) such that it would be immoral for any sufficiently intelligent, knowledgeable and free agent not to value them, whether or not that's the case has no bearing on my points.

On the other hand, if Swinburne is saying that there are things (actions, agents, properties, etc.) such that it would be irrational for any sufficiently intelligent, knowledgeable and free agent not to value them, and that moral goodness is among them, then I don't see any reason to grant that assumption.

Moreover, the entities I posited – say, those behaving according to moral* standards – would not appear to be irrational in any way, or to be making any mistakes about the properties of anyone or of any behavior.

Moreover, even if it turned out that the word "rational" actually picked out some specific final or terminal values – i.e., the things, behaviors, etc., that an entity values for their own sake – that would only mean that "rational" also increases the complexity of the claim that theism is true, since it would be giving a description of the terminal values of the entity that is not contained in the descriptions "omniscient" or "perfectly free".

In other words, in any case, from "omniscient and perfectly free", nothing follows about what the terminal values of the entity in question are.

For that matter, an entity might value its own fun for its own sake, might find the torture of humans fun, and might not have any stronger terminal value. So, that entity might torture humans for fun.

There is nothing in "omniscient" or "perfectly free" that would imply or even suggest otherwise.

1.3. The intrinsic improbability of theism

In this section, I will grant for the sake of the argument that there is such a thing as an objective intrinsic probability of theism – which may or may not have a precise numerical value – and argue that theism is very improbable.

1.3.1. Simplicity

As I explained earlier, Swinburne's criterion for simplicity in the case of personal explanations is based on the following conditions: few entities, few types of entities, few properties, few constant intentions, "continuing basic powers", and "simple laws – constant predictable ways in which persons acquire beliefs from their surroundings." [6]

My hypotheses will each posit only one entity and one type of entity, no more properties than the theistic hypothesis, no ways of acquiring beliefs beyond those posited by theism (I will only stipulate omniscience), and the same basic powers as God – i.e., omnipotence.

Thus, given Swinburne's criterion for simplicity, my hypotheses will be no more complex than theism.

Granted, someone might posit an alternative criterion for simplicity, or for when to unpack, etc., but picking an arbitrary criterion won't do. It would be up to the claimant to justify the choice of the criterion.

1.3.2. Scope

Another criterion that Swinburne considers in order to assess the intrinsic probability of a hypothesis is scope – but Swinburne claims it's far less important than simplicity. [7]

However, Swinburne says the scope is always the same for all "world views", so we can just ignore it when comparing the intrinsic probability of the hypotheses in question.

1.3.3. Consequentialist deities [16]

In this section, I will propose some alternatives to God.

For any entity EN that I propose, by "AH(EN)" I mean the hypothesis that EN exists.

So, let's propose first a type of consequentialist deity that I will call "Con(1)".

Con(1)'s definition is like God's, minus moral perfection, plus the property that it is consequentialistically perfect, in the sense that it always does the consequentialistically best action when there is one, and never a consequentialistically bad one – though it might do a morally bad one – according to some system that rates actions depending on some of their consequences.

More precisely, on this consequentialist system, an action X is consequentialistically better than an action Y if it satisfies more desires.

If the number of desires they satisfy is the same, then X is consequentialistically better if X satisfies desires that are more intense, on average.

If they satisfy the same number of desires, and neither of them satisfies desires that are more intense on average, then neither action is consequentialistically better than the other – they're consequentialistically equivalent.
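
To make the ordering concrete, here is a minimal Python sketch (my own illustration, not part of Swinburne's apparatus or of the definition above; the field names and numbers are hypothetical stand-ins, summarizing each action by the number of desires it satisfies and their average intensity):

    # Sketch of Con(1)'s comparison rule: first compare how many desires each
    # action satisfies; on a tie, compare the average intensity of the desires
    # satisfied; a tie on both makes the actions equivalent.
    def better(x, y):
        """True if action x is consequentialistically better than action y."""
        if x["satisfied"] != y["satisfied"]:
            return x["satisfied"] > y["satisfied"]
        return x["mean_intensity"] > y["mean_intensity"]

    X = {"satisfied": 12, "mean_intensity": 0.4}
    Y = {"satisfied": 9, "mean_intensity": 0.9}
    assert better(X, Y) and not better(Y, X)      # X satisfies more desires
    Z = dict(X)                                   # ties on both counts:
    assert not better(X, Z) and not better(Z, X)  # X and Z are equivalent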

So, the hypothesis AH(Con(1)) can be defined as follows:

AH(Con(1)): There exists necessarily a being who necessarily is eternal, perfectly free, omnipotent, omniscient, consequentialistically perfect, and the creator of all other things.

Given Swinburne's account of simplicity, we can tell that AH(Con(1)) is no less simple than theism. Since the scope is the same, the prior probability of AH(Con(1)) is no less than that of theism.

Next, let's also introduce consequentialist deities that focus on some types of beings.

Let T(n) be types of possible beings who can have desires, and such that T(n) ≠ T(m) if n ≠ m.

A being is n-consequentialist if it will always maximize the satisfaction of desires of beings of type T(n), in the same way explained in the previous case – in other words, equal numbers of desires and/or intensity are handled as before.

AH(Con(1,n)): There exists necessarily a being who necessarily is eternal, perfectly free, omnipotent, omniscient, n-consequentialistically perfect, and the creator of all other things.

Given that there cannot exist more than one creator of all other things, clearly the hypotheses AH(Con(1,n)) are pairwise disjoint.

Also, given Swinburne's account of simplicity, we can tell that AH(Con(1,n)) is no more complex than theism, and if k is only tautological evidence, we can tell:

P(AH(Con(1,n))│k) ≥ P(Theism│k) [17]

Now, considering that the hypotheses are pairwise disjoint, if there were some integer r such that P(h│k) ≥ 1/r, then, taking the first r+1 of these hypotheses, we would obtain:

P((AH(Con(1,1)) or AH(Con(1,2)) or... or AH(Con(1,r+1)))│k) = P(AH(Con(1,1))│k) + P(AH(Con(1,2))│k) +... + P(AH(Con(1,r+1))│k) ≥ (r+1)/r > 1.

Since that is impossible, it follows that there is no r such that P(h│k) ≥ 1/r.
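
To see the arithmetic of the contradiction at a glance, here is a minimal numeric sketch (my illustration, not Swinburne's):

    # Suppose P(h|k) >= 1/r for some natural r. Each of the r+1 pairwise
    # disjoint hypotheses AH(Con(1,1)), ..., AH(Con(1,r+1)) has a prior
    # at least as great, so the total probability would exceed 1.
    r = 10                            # any natural number would do
    lower_bounds = [1 / r] * (r + 1)  # one bound per disjoint hypothesis
    print(sum(lower_bounds))          # (r+1)/r = 1.1 > 1 -- impossible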

Moreover, we can further simplify the hypotheses, by introducing fewer claims about the properties of the entities:

AH(Con(2,n)): There exists a being who is the creator of all other things, and who is n-consequentialistically perfect.

Since AH(Con(2,n)) is entailed by AH(Con(1,n)), it's no less probable, and given that we're leaving aside the conditions of omnipotence, omniscience, etc., that makes the alternative simpler.

At this point, someone might suggest that n-consequentialistic perfection is a condition on the person's inclinations, and thus precludes perfect freedom.

However, with that criterion, the same would apply to moral perfection, making moral perfection and perfect freedom incompatible, and thus making God logically impossible.

Also, someone might say that behaving in that way is an imperfection, not a perfection.

However, that objection would miss the point: I'm simply positing beings who always behave in the way I described above, and who always would behave that way.

If there is some conception of perfection that those beings would not match, that's beside the point.

If someone prefers not to call it "n-consequentialistically perfect" but something else, that's a matter of taste, but it does not affect the point I'm making.

1.3.4. Estimating the intrinsic improbability of God

Given the results of the previous subsections, we've concluded that P(Theism│k) is no greater than 1/r, for any natural number r.

That leaves us with the following options:

a) P(Theism│k) = 0

b) P(Theism│k) = i

Where i is a nonzero infinitesimal hyperreal, assuming probabilities can have hyperreal values.

c) P(Theism│k) > 0, but somehow, mysteriously, does not have a numeric value, even though it's no greater than 1/r, for all natural numbers r.

Even then, n* P(Theism│k) is also no greater than 1/r, for any natural numbers r and n, so P(Theism│k) still is some kind of infinitesimal – even if a mysterious one.

d) P(Theism│k) = i

Where i is some sort of nonzero infinitesimal – whatever that is – but the point is that n*P(Theism│k) is also no greater than 1/r, for any natural numbers r and n.
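
Whichever of these options one picks, the common core can be put in one line (a restatement of the bound already derived, not a further assumption; in LaTeX notation):

    \forall r \in \mathbb{N}:\ P(\mathrm{Theism} \mid k) \le \frac{1}{r}
    \quad \Longrightarrow \quad
    P(\mathrm{Theism} \mid k) \le \inf_{r \in \mathbb{N}} \frac{1}{r} = 0 \ \text{(within the reals)}

So, within the real numbers, the only value left is 0; any nonzero value would have to be an infinitesimal in some extended number system.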

Moreover, the alternative hypotheses I introduced are just a few – well, infinitely many, but the point is that there are many more.

So, the above shows that even if one assumes that, for some reason, theism is somehow a simple hypothesis, the math still shows that theism is intrinsically extremely improbable.

1.4. The final improbability of theism

In the previous section, I concluded that the prior probability of theism is no greater than 1/r, for any natural number r.

In this section, I will show that the final probability of theism is also no greater than that, under an assumption that Swinburne requires to make his case.

So, if that assumption fails, Swinburne's case is blocked. And if that assumption is true, then the probability of non-theism, given our evidence, is greater than x, for any real x < 1, and so we can conclude beyond a reasonable doubt that non-theism is true.

The assumption in question is P(e&k) > 0, where e is – following Swinburne [18] – the conjunction of e1, e2,..., en, which are different propositions that people bring forward in arguments for or against theism, and k is mere tautological evidence.

Without P(e&k) > 0, it would not be possible to apply Bayes' theorem, since it's impossible to divide by zero; and while Swinburne divides his argument into several steps, in the end he has to estimate P(theism│e&k) using Bayes' theorem, and that requires dividing by P(e&k).
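
For reference, the instance of Bayes' theorem in play is the following (in LaTeX notation; since k is tautological evidence, the denominator P(e│k) equals P(e&k)):

    P(h \mid e \& k) = \frac{P(e \mid h \& k)\, P(h \mid k)}{P(e \mid k)}

and the right-hand side is undefined when the denominator is 0.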

So, let's assume P(e&k) > 0, and introduce some alternative hypotheses:

1.4.1. More alternative deities

Let's introduce a class of universes:

U(1): A universe where e obtains.

In other words, U(1) is a universe where all the propositions in question are true (i.e., the universe is ordered, there are physical beings who are moral agents, etc.)

Then, let's introduce U(n), for all natural n > 1, which are possible alternate physical universes, which behave in accordance with rules described by some mathematical model M(n), such that the rules are different for any two distinct numbers n and m.

No rational being lives in any of those alternate universes.

Now, let's consider a class of standards:

V(n):

a) Creating at least one universe of each of the types U(1), U(2),..., U(n) is V(n)-good.

b) Creating a universe of type U(n+m) is V(n)-bad, for any natural m.

c) All other actions are V(n)-neutral.

d) A being is V(n)-good if and only if it does at least one V(n)-good action, and never does any V(n)-bad action.

Finally, let's consider hypotheses asserting the existence of certain deities, called "Alternative Deity(n)":

AH(Alternative Deity(n)): There exists a being who is omnipotent, omniscient, perfectly free, V(n)-good, and the creator of all other beings.

Let "AH(AD(n))" stand for the hypothesis that Alternative Deity(n) exists; as usual, "h" stands for theism, so I use "h" and "theism" interchangeably; "k" is tautological evidence.

1.4.1.1. Simplicity

Given Swinburne's account of simplicity, it seems that AH(AD(n)) is no less simple than theism:

It does not posit any more entities, kinds of entities, properties, or constant intentions.

As for the powers, they're just as "simple" as in the case of theism – i.e., omnipotence -, and they do not have liabilities – since they are omnipotent.

Also, while Swinburne claims that a detailed specification of how to act makes a person more complex [12] than one that is perfectly free, in this case I've stipulated that the entities in question are perfectly free.

The fact that there is a specification given by the fact that Alternative Deity(n) is V(n)-good would not seem to preclude perfect freedom, just as the specification that God is morally perfect does not preclude perfect freedom, either – and if someone claimed otherwise, the burden would be upon them: why would morality be the specification that just does not preclude perfect freedom? [19]

That aside, someone might object that AH(AD(n)) are more complex than theism since they entail, with probability 1, that there will be a physical universe of a certain kind.

However, if that counted as complexity, then, if Swinburne were correct that God would create other beings with probability 1, that would make theism less simple than positing just one being.

Also, Swinburne claims that theism entails a probability of a physical universe greater than 1/2, so if that counted as reducing simplicity, it should count in Swinburne's argument too.

Still, I will present an alternative in the following subsection.

1.4.1.2. Probabilistic assessment

For the reasons given above, theism is no simpler than AH(AD(n)), for any natural n. [20]

Also, given that they are "world views", it seems that AH(AD(n)), for any natural n, has the same scope as theism.

Hence:

a) P(AH(AD(n))│k) ≥ P(Theism│k)

b) P(e│k&AH(AD(n))) = 1.

c) P(AH(AD(n))│e&k) = P(e│k&AH(AD(n)))*P(AH(AD(n))│k) / P(e&k) ≥ P(e│k&Theism)*P(Theism│k) / P(e&k) = P(Theism│e&k)

Given that AH(AD(n)) and AH(AD(m)) are disjoint if n ≠ m, if there is some natural r such that P(Theism│e&k) ≥ 1/r, a contradiction follows as before.

So, the result is established.

1.4.2. Choice

Someone might object that I haven't defined the U(n) explicitly, and that maybe having infinitely many choices is then somehow a problem.

I don't see how that would be a problem, but just in case, one may as well proceed as follows: first, let's assume that there is some natural r such that P(Theism│e&k) ≥ 1/r; then, let's consider U(1), U(2),..., U(r+1), and the corresponding hypotheses, and a contradiction follows as above.

1.4.3. Gambling deities

First, let's define a class of actions.

A(n): A coin-toss-like action that might create a universe of type U(k), for some k ≤ n, each with probability 0.5/n, or no universe at all.

For instance, if the omnipotent being in question were to carry out an A(n)-type action, he would choose to leave it at random which universe to create with his action, if any – there would be a 0.5 chance that no universe is created by that action.
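
Here is a minimal simulation sketch (my illustration; the function name and the sample sizes are made up) showing that e – i.e., a U(1)-type universe – results from an A(n)-type action with probability 0.5/n:

    import random

    def act_A(n):
        """One A(n)-type action: with probability 0.5, create no universe;
        otherwise create one of U(1), ..., U(n), each with probability 0.5/n."""
        if random.random() < 0.5:
            return None                # no universe created
        return random.randint(1, n)    # the k of the universe U(k) created

    # e obtains just in case a U(1)-type universe is created.
    n, trials = 4, 100000
    hits = sum(act_A(n) == 1 for _ in range(trials))
    print(hits / trials)               # roughly 0.5/n = 0.125 for n = 4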

Now, let's define a class of standards W(n):

a) An A(n)-type action is W(n)-good.

b) An A(m)-type action is W(n)-bad, if n ≠ m

c) All other actions are W(n)-neutral.

d) A being is W(n)-good if and only if it does at least one W(n)-good action, and never does any W(n)-bad action.

Finally, let's consider hypotheses asserting the existence of certain deities, called "Gambling God(n)":

AH(Gambling God(n)): There exists a being who is omnipotent, omniscient, perfectly free, W(n)-good, and the creator of all other beings.

The new conditions do not entail that, with probability 1, there will be a physical universe.

They do guarantee, however, that P(e│k&AH(Gambling God(n))) ≥ 0.5/n.

Given Swinburne's account of simplicity, we can tell that AH(Gambling God(n)) is no less simple than theism, for all n. The details are the same as in the previous case. [21]

Also, given that they are all "world views", it seems they all have the same scope as theism.

Let "AH(GG(n))" stand for AH(Gambling God(n)) - as usual, "h" stands for theism, so I use "theism" and "h" interchangeably.

Hence:

a) P(AH(GG(n))│k) ≥ P(h│k)

b) P(e│k&AH(GG(n))) ≥ 0.5/n.

c) P(AH(GG(n))│e&k) = P(e│k&AH(GG(n)))*P(AH(GG(n))│k) / P(e&k) ≥ 0.5*P(AH(GG(n))│k) / (n*P(e&k)) ≥ 0.5*P(e│k&h) * P(h│k) / (n*P(e&k)) = 0.5*P(h│e&k)/n

Given that AH(GG(n)) and AH(GG(m)) are disjoint if n ≠ m, if there is some natural r such that P(theism│e&k) ≥ 1/r, a contradiction follows because the harmonic series is divergent.

An objection regarding infinitely many choices is handled essentially as before, with the only difference that, assuming P(theism│e&k) ≥ 1/r for some fixed r, we derive a contradiction by considering U(1), U(2),..., U(m) – and the corresponding hypotheses – taking, for example, m = 3^((2*r)+1).
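
To check that m = 3^((2*r)+1) is large enough: the disjoint hypotheses AH(GG(1)),..., AH(GG(m)) would jointly get probability at least 0.5*H(m)/r, where H(m) is the m-th harmonic number, and H(m) > ln(m) = (2r+1)*ln(3) > 2r, so the total would exceed 1. A quick numeric sketch (my illustration):

    import math

    # Lower bound on the joint probability of the m disjoint hypotheses,
    # using H(m) > ln(m); the result is > 1 for every r, which is impossible.
    for r in range(1, 6):
        m = 3 ** (2 * r + 1)
        total_lower_bound = 0.5 * math.log(m) / r
        print(r, m, round(total_lower_bound, 3))   # 1.648, 1.373, 1.282, ...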

1.4.4. Simplicity, revisited

Someone might raise the following objection to the previous results: they might claim that, even though specifications as to how to act do not preclude perfect freedom, it still remains the case that including specifications as to how to act in specific situations does make a hypothesis more complex, and that the more detailed the specifications, the more complex the hypothesis becomes.

Moreover, they might insist that, while morality is somewhat complex, it's less complex than some of the different alternatives I posited.

But why would that be?

It's not because of how long it would take to, say, write out the number n if unpacked, or how long the descriptions of the mathematical models in my alternatives would be: as we saw before, Swinburne's criterion for simplicity does not require unpacking numbers or mathematical models when they're in the minds of the entities posited as personal explanations – and rejecting that criterion would require unpacking omniscience, making theism an enormously complex hypothesis.

Why would then the specifications of behavior in the different alternatives I posited be more complex than the specification of behavior in the theistic hypothesis?

Someone might say that unpacking is sometimes needed, and that that affects my alternatives, but not theism.

It would be up to the claimant to defend that claim, though.

In addition to that, by admitting that specifications of behavior increase the complexity of a hypothesis, they would have to concede that that applies to the theistic hypothesis.

Perhaps, someone will come up with a system of assessing simplicity that just happens to make only theism very simple, but not any of the alternatives I presented. But as usual, they would have to defend their choice of criterion.

So, at this point, I can rest my case.

1.5. Conclusion of Part 1

Given the previous considerations, the conclusion is that Swinburne's arguments in "The Existence of God" do not provide any support for the hypothesis that theism is true, even without considering a number of other claims – e.g., about freedom and causality, or about personal vs. scientific explanations – that are controversial to say the least.

A theist might modify Swinburne's case in order to block some of the objections I raised, or defend Swinburne's account of morality, but in any case, the burden would be on them.

2. Power, moral assessments and probability

Leaving aside the previous objections for the sake of the argument, Swinburne's case rests, to a considerable extent, on his moral assessments of certain actions, plus some reasons to back them up.

My moral assessments of the behaviors in question, however, are in most cases radically different from Swinburne's; in what follows, I will explain my take on some of those behaviors, using the same method he uses (i.e., I will use my sense of right and wrong, give reasons, etc., as is usual in moral discussions), thus presenting a different perspective to readers – who can of course make their own assessments on these matters.

In addition, I will challenge much of Swinburne's reasoning in support of some probabilistic assessments, and also challenge Swinburne's claims about God's power. The issue of power, however, is not required for the rest of the arguments in this section to succeed.

2.1. What can God do?

Swinburne defines God as omnipotent, yet he claims that God cannot do evil.

Also, according to Swinburne, God does not have the freedom to choose between good and evil. [22]

However, Swinburne's definition of omnipotence is that "he is able to do whatever it is logically possible (i.e. coherent to suppose) that he can do". [10]

Now, it's perfectly coherent to suppose that he can, for instance, torture everyone else for eternity, and that he is able to do it.

What is not coherent is to suppose that he would do it.

If, however, we accepted Swinburne's conception of freedom and power for the sake of the argument, then it seems that God indeed would not have the power to bring about evil, because having that power would entail that it's logically possible that an entity E is God and does evil, and that is a contradiction.

But that would be very odd, and it would have other odd consequences, such as the following one:

Let's consider the following hypothesis:

AH(Gin): There exists a being who is omnipotent and who has the property that he will never communicate with other intelligent beings.

Let "Gin" be the being whose existence is asserted by AH(Gin).

Then, by Swinburne's rationale for concluding that God cannot do evil, it follows that Gin is omnipotent, but cannot communicate with other intelligent beings. How would that be omnipotence, in any interesting sense of "omnipotence"?

Note that the definition is not contradictory if that of God isn't, since I merely stated some propensity that Gin will always follow – i.e., never to communicate with other intelligent beings – just as there is a propensity – i.e., always doing what's morally good – built into the definition of "God".

2.2. What would God likely create, or refrain from creating?

I will now examine Swinburne's moral assessments, both by challenging his arguments backing some of those assessments, and by presenting my moral intuitions in opposition to his.

Let's consider some of Swinburne's claims, then:

a) There is no world W such that no other world W' is better. [23]

Since we're making moral assessments, that would be the claim that, for any world W, some other world W' is morally better. Normally, we assess the moral goodness of people and/or their behaviors, so this is rather odd.

Perhaps one could judge the moral goodness of a world by counting how much moral goodness there is in it – or how much evil.

But Swinburne argues that the moral goodness of a world consists in having a finite or infinite number of conscious beings who will enjoy it, so the more, the better, and so there is no world W such that no other world is better. But why would that be so?

I see no good reason to believe so.

Regardless of more or less obscure metaphysical issues, the only information about God's motivations provided in the definition of 'God' is that he's perfectly [morally] good [10], which means he'll never do a morally bad action, and he'll do a morally best action when there is one.

That's the only definition Swinburne provides.

If we go by that, whenever an action is neither morally best nor morally bad, it appears that we simply have no means whatsoever of assessing whether God would do it, or assigning any probability to it.

On the other hand, if we extend moral perfection to include a propensity to do morally good actions even when there is no morally best action, it might be argued that that would give us, in some cases, a means of rationally assigning a probability that God would carry out some morally good action.

But the question is: how?

How do we make such an assessment, when there are infinitely many possibilities?

Moreover, even granting for the sake of the argument that, sometimes, we can make rational probabilistic assessments about God's potential actions even if such actions are only morally good and not morally best, we still would have no means of making any assessment when it comes to morally neutral actions, since we do not know anything else about the character of God that would allow us to make such an assessment. Indeed, the only mental trait of God that tells us anything about his motivations is perfect moral goodness, so only moral motivations can be used (at best) for probabilistic assessments.

That aside, this particular claim, combined with other claims Swinburne makes, allows us to raise another objection, which I will present in the next part. But for now, let's get back to some of Swinburne's moral assessments.

b) God does not have the right to permit or impose unlimited suffering on anyone against that agent's choice. [24]

I agree.

Moreover, it seems to me that unless it's a sacrifice to prevent a greater evil that might come to others, it would clearly be irrational for an agent to choose to suffer for all eternity when she can choose otherwise.
Also, it is obvious – I hope – that making an irrational decision to suffer forever does not make a person deserving of infinite punishment, and that it would be morally bad for a moral agent to impose such infinite suffering on someone who makes such an absurd choice.

c) It would be better to create lions and tigers than just lions; generally, the more species, the better. [24]

I strongly disagree. Frankly, that's just puzzling.

Swinburne gives no argument other than calling it "plausible". I would rather call it "extremely implausible".

Intuitively, there is no moral advantage in doing so as far as I can tell.

In fact, I'd say that creating any such predators creates unnecessary suffering, and so God would not do so. But leaving aside lions and tigers, and considering other beings if required: if it's not immoral for an omniscient, omnipotent moral agent to create such beings, then making more or fewer kinds would appear to be morally equivalent, all other things (e.g., suffering) being equal.

In fact, under the assumption that in some cases it wouldn't be immoral for an omnipotent, omniscient moral agent to bring about predators that cause horrible suffering to other beings (an implausible assumption), it would seem to me that that's not a moral choice, but more like choosing between eating an apple or a banana. The choice of creating them in any such circumstances is neither morally good nor morally bad. It's morally neutral – of course, assuming that there aren't any other factors involved that have moral relevance. [25]

However, if it's not morally good or bad but neutral, we cannot rationally make any assessments about the prior probability that God would do it, since moral perfection is the only information about the motivational aspect of God's mind that the definition of 'God' provides, and so only moral reasons ought to be used to assess how he would behave.

In other words, if there is a category C of potential actions by God such that any action in C would be morally neutral, then it seems that we do not have the means to assess the probability that God would carry out an action in that category.

d) Plausibly, it's better for God to create than not to. [24]

Swinburne claims that a perfectly morally good being would inevitably try to make other good things. However, intuitively, I don't see how not being inclined to make anything would mean a being is not morally good, or any less good. There appears to be no immorality in not creating other beings, as far as I can tell.

Perhaps, it could be argued that making other morally good beings is morally good, even if not morally obligatory. That is not at all clear if there already is an omnipotent, omniscient morally perfect being. Still, even granting for the sake of the argument that, in some cases, an omnipotent, omniscient moral agent would be doing something morally good (even though not obligatory) by creating other moral agents, clearly in other cases it would be immoral (e.g., it would be immoral to create a moral agent who would suffer horribly forever no matter what she chooses), so this at best would lead us to the question of whether it's morally good for an omnipotent, omniscient moral agent to create the kind of moral agents that actually exist. I will later argue against that idea.

e) Swinburne: [26]

"Animate substances are substances of a better type than inanimate ones".

That is very obscure. What does Swinburne even mean?

What is a substance of a 'better type'?

Swinburne oddly classifies substances in types, according to which ones are allegedly better. But that seems obscure at best, and in any case not related to moral intuitions and assessments.

In any case, what is relevant to the matters at hand is what would be morally good or even morally acceptable for an omnipotent being to do, not some obscure statements about 'better type' of stuff, even if we assume them to be coherent.

Still, if Swinburne means that it's always morally better to create animate substances than inanimate ones, that's surely false. For instance, it's morally better for an omnipotent being to create a rock than an intelligent being (even a moral agent) that always suffers horrible pain she cannot stop, no matter what she does.

But if he does not mean that, then what does Swinburne mean?

A lion is not morally better than a rock. There is no moral dimension on which to compare them, as far as I can tell. Nor is it generally morally better to create lions than rocks.

f) Swinburne: [26]

"Humanly free agents are substances of a better type than animals".

Again, that's very obscure.

If Swinburne is saying that it's somehow always morally better to create humanly free agents – essentially, that it's morally better to create entities with limited moral awareness than entities with no such awareness at all – I can see no particular reason to believe so.

In fact, under certain circumstances – which I maintain apply to the case of an omnipotent, omniscient moral agent -, it would be immoral to create humans or similar beings, but creating beings with no moral awareness is not always immoral.

So, it's not always morally better to create beings with moral awareness than without it.

But if he means something else, what is it?

In any case, and as before, what is relevant to the matters at hand is what would be morally good or even morally acceptable for an omnipotent being to do, not some obscure statements about 'better type' of stuff. But let's consider the following category of actions:

C1: Moral agent A brings moral agent B into existence.

Some actions in such category are immoral. For instance, it would be immoral for an omnipotent, omniscient moral agent to carry out any action in the subcategory 'moral agent A brings into existence a moral agent B who will suffer horrible pain no matter what she does'.

On the other hand, it's usually not immoral for humans to have children, so some actions in category C1 aren't immoral.

Plausibly, also, some actions in C1 are morally good.

Now, let's consider the subcategories:

C2: An omnipotent, omniscient moral agent A who exists in a world devoid of any other moral agents, brings about moral agent B, who is a morally flawed agent who will behave immorally sometimes (a behavior that A predicts).

C3: An omnipotent, omniscient moral agent A who exists in a world devoid of any other moral agents, brings about moral agent B, who is a moral agent who will endure suffering she does not deserve.

Using the same method used by Swinburne – who makes his claims by means of his own moral intuitions-, I would say that any actions in subcategories C2 and C3 appear intuitively morally bad, and so God would not carry them out. I will address this matter again later.

Note that claiming that there might be a morally sufficient reason is not enough to block an assessment that all actions in categories C2 and C3 are morally bad. For that matter, the following category contains only morally bad actions:

C4: An omnipotent, omniscient moral agent A who exists in a world devoid of any other moral agents, brings about moral agent B, who is a moral agent who will endure immense suffering for all eternity, regardless of how B behaves or tries to behave.

Surely, any action in category C4 would be immoral. And saying that God might have morally sufficient reasons to bring about an action in C4 would not provide any reasons to undermine our intuitive moral assessment, for the simple reason that there is no good reason whatsoever to even suspect that God might have morally sufficient reasons to bring about an action in C4.

But then again, there appears to be no good reason to suspect our intuitive assessment of categories C2 or C3: while the degree of immorality of many actions in either of those categories is much less than the degree of immorality of any action in category C4, they still appear clearly immoral, going by my sense of right and wrong.

Swinburne assesses otherwise (though he also appears confused, by introducing the matter of the 'better type' of agents); I suggest that readers make their own assessments of these matters.

g) A world with God alone would be a bad state of affairs. [22]

That is puzzling. What would it mean for it to be a 'bad state of affairs'?

A state of affairs where there is a lot of immoral behavior, perhaps?

But in this scenario (i.e., God alone), there surely is no morally wrong action involved, and the state would be one in which the only substance in existence is morally perfect.

If he means something else, what is it?

h) God needs to share, love, interact, etc. [22]

Actually, that is not entailed by the concept "God", which gives no details about the inclinations of the agent other than moral perfection.

Instead, one would need to include more claims about the psychological makeup of God to reach that conclusion, making the theistic hypothesis even more complex than it already is.

As it stands, we have no information on whether God needs to share, interact, etc., or even love.

Perhaps someone might claim that a morally good agent has a need to love, share, and interact. However, after a brief analysis, we can tell that that is not plausible. To see this, let's consider the following two scenarios:

Scenario SC1:

During an expedition to colonize another planetary system, a human being, Bob, who has always been a good person, ends up alone, stranded on a spaceship adrift.

Bob has no way and will never have a way of contacting anyone else. But he has some frozen GM dog eggs and sperm, and a machine that can use that to make GM dogs using artificial wombs, etc. (GM dogs are just like regular dogs, but sterile; they can only reproduce by artificial means).

The ship's reactor and recycling system are still operational, and can sustain Bob for the rest of his life; in fact, they were designed to sustain a much larger community for thousands of years, so that won't be a problem.

The ship AI can also sustain a good number of dogs, if Bob is dead and can't do it himself.

Knowing that, Bob makes a dozen dogs to keep him company, so that he can share, interact, etc., at least to some extent. Dogs, of course, aren't the most suitable animal for Bob; humans would do much better. But Bob does what he can.

Scenario SC2:

Similar to scenario SC1, but instead of Bob, the person who gets stuck on the ship is Alice, who also has always been a morally good person. However, Alice is not human. She's a Vulcan-like alien who does not have the same propensities as Bob. Alice might make the dogs as Bob did (originally, there were both humans and aliens of Alice's species on board), but she does not find that option appealing, and chooses to spend the rest of her life meditating instead. So, she does exactly that.

The point here is that, going by scenarios SC1 and SC2, we should not conclude that Bob was a morally better person than Alice, or that his choice of action was a morally better one – nor the other way around. Nor are we entitled to conclude that Bob was probably a morally better person, or probably took a morally better course of action.

More generally, there is nothing in the concept of moral goodness that suggests that a morally good agent would need to share, interact, love, etc.; moreover, even if it's true that a morally good agent would love at least some other agents that already exist, there is no way of telling, from the meaning of the concepts alone, that a morally good agent would have an inclination to create other beings to share with, interact with, etc.

Still, perhaps someone might say that a morally good agent would have an inclination to create other morally good agents to share, interact, etc., and dogs do not count since they're not moral agents.

Even if that were true, that would only lead us to the issue of creating other moral agents, and not non-moral agents, or things that aren't agents.

So, much of Swinburne's reasoning would be blocked anyway, since he claims that it would be morally good for God to make all sorts of things other than moral agents.

Moreover, it's not at all clear to me that we can tell a morally good agent would have an inclination to create other morally good agents, regardless of any other features of the morally good agent's psychological makeup.

In fact, that looks counterintuitive to me.

To see why, we can modify the previous scenarios, as follows:

Scenario SC3:

During an expedition to colonize another planetary system, a human being, Bob, who has always been a good person, ends up alone on a spaceship, as a result of a serious accident.

Many ships already reached their destination, and more are on their way.

Bob can use the tech on board the ship to clone some of the dead humans and make more humans, whom he would raise.

Now, regardless of whether or not Bob makes other humans, the ship will reach its destination, with its cargo. Moreover, colonization will go on one way or another. So, Bob decides to make more humans, raises them, and some of his descendants reach the target planetary system.

Scenario SC4:

Similar to scenario SC3, but instead of Bob, the person who gets stuck on the ship is Alice, who also has always been a morally good person.

Alice is not human but an alien (though we might as well pick a human here), who decides not to clone anyone, since she does not feel like reproducing and raising children. The ship gets to its destination regardless, and colonization goes on unhampered.

As in the case of scenarios SC1 and SC2, it seems to me that scenarios SC3 and SC4 do not entitle us to conclude that Bob was, or probably was, a better person than Alice, or that his course of action was morally better.

If so, then there is nothing in the concept of 'morally good' that suggests that a person would need to share, interact, or love. Sometimes a moral agent has a moral obligation to interact with other existing moral agents; maybe, sometimes, she would have a moral obligation to love them.

However, the question in the context of this particular part of Swinburne's argument is whether we can tell, from the assumption that an agent is morally good, that he would need or at least feel inclined to create other morally good agents in order to interact with them, share things with them, love them, etc., and the answer appears to me to be negative.

Still, even if we were entitled to infer such propensities, the question would still be what kind of moral agents, and my assessment would still be that an omnipotent, omniscient, perfectly morally good agent would not create the kind of moral agents that we observe.

i) Significant free choice is evidently good. [22]

Swinburne claims that what he calls 'significant free choice' is evidently good, and from that he goes on to say that it's something God would want to do.

The issues at the heart of the matter here are what Swinburne means by "significant free choice", and whether it would be morally good for an omnipotent, omniscient creator to bring about such beings in some cases.

So, a category to consider here is:

C5: A moral agent brings about the existence of one or more entities with what Swinburne calls 'significant free choice' into a world previously devoid of such agents.

A subcategory is:

C6: An omniscient, omnipotent moral agent brings about the existence of one or more entities with what Swinburne calls 'significant free choice' into a world previously devoid of such agents.

The questions are whether at least some actions in the subcategory would be morally good, and whether some of them would be morally neutral if none of them would be morally good.

If all actions in the subcategory would be morally bad, then God would not do any of them, and then we can tell that God does not exist.

If some actions in the subcategory would be morally neutral but none morally good, then we have no means of telling whether God would do any of them.

If some actions in the subcategory would be morally good, it's still not clear that we have the means to assess whether God would do them.

Now, Swinburne argues that what he calls "humanly free agents" do have what he calls "significant free choice", but – surprisingly – God, angels and some other beings do not.

Swinburne seems to be saying that an entity that has a fixed morally good character does not have significant free will. That is really odd, for the following reasons:

First, if there is an omnipotent moral agent, he can do evil, even if he wouldn't because he's morally perfect; I already made that point earlier.

Second, even leaving the previous point aside, Swinburne himself claims that God could create a universe without what he calls 'humanly free agents', and thus without beings with what he calls 'significant free choice'[27], but he also claims that God could create a universe with humanly free agents.

So, by Swinburne's own account, God could create a universe that will have moral evil, or refrain from doing so; that seems to suggest that God can make choices that make a difference for good or ill, which in turn seems to imply that God has significant moral freedom.

In order to avoid that, it seems our conclusion here should be that, by Swinburne's stipulation, a being's having what he calls 'significant free choice' means that said being would, under some circumstances, behave immorally.

But if having 'significant free choice' is having a character such that, under some circumstances, one would behave immorally, then it seems intuitive that an omnipotent, omniscient moral agent would be behaving immorally if he were to create such beings when he can instead create beings without such flaw, and without any propensity to make beings with such flaw.

Regardless of what Swinburne meant to say, if someone with a fixed morally good character does not have what Swinburne calls "significant free will", because they do not and would not act immorally, then it seems to me that creating beings with that feature – which shouldn't be called 'significant free will', but rather something like 'limited depravity' – would be a morally bad thing, all other things equal, at least for an omnipotent, omniscient moral agent who could choose to make morally perfect beings instead, as I said earlier.

Swinburne also claims that significant free choices are those that can make real differences for good or ill, but oddly claims that humans have such choices because of non-rational influences and temptations to do what's not good... which make them fall short of what Swinburne calls 'perfect freedom'.

So, in brief, what Swinburne's claim entails is that even when one could just create morally perfect beings, it's morally good to create beings with limited moral knowledge and temptations to act immorally, in order to achieve the morally good end of...having entities with limited moral knowledge and a temptation to act immorally, and who would on occasion behave immorally!

Calling those limitations and temptations "significant free choice" might make the act of creation sound morally good, but once the obscurity is removed to a sufficient extent, it should be clear – I hope – that the claim ought to be rejected.

Swinburne's claim entails that, in a world of moral perfection, it's morally good to introduce moral imperfection, even if one could choose not to do so, and indeed to introduce more morally perfect beings instead, if one so chose.

Granted, many, plausibly most humans often do morally good actions, and I wouldn't say that making humans or similar creatures would always be morally bad if the creator were limited – but we're talking about an omnipotent, omniscient one, who has the power to create beings who do not have any inclination to do evil, and whose moral beliefs are always true.

On top of that, in order for the choices to be "significant", it seems – according to Swinburne's idea – that it's also a good thing to give these morally imperfect beings a certain even if limited amount of power to do evil, and to inflict suffering on the innocent.

How can it be a good thing not only to introduce evil temptations into the world, but even to allow people who act on them to make good people, or non-moral agents, suffer?

What would be the greater good? That someone resisted the temptation?

How would bringing about a world full of beings of imperfect moral knowledge and tempted to do evil – who often do good and sometimes do a lot of evil – be morally good, or even morally acceptable, if one can instead bring about a world of morally perfect beings who would never do any evil?

After carefully considering Swinburne's claim, I would say – for the aforementioned reasons – that that claim not only isn't evidently true, but it appears intuitively evidently false – to me, at least.

Now, Swinburne defends this claim by saying that it's a good thing that our children make their own choices for good or ill, and that their choices influence whether they're good or bad.

It's true that humans usually want their children to learn and become good people in the way limited beings like humans do learn, but generally human parents do not want their children to be morally bad, or to make immoral choices.

Still, let's suppose we could choose to have children who will always make morally good choices, or children who might make morally good or morally bad choices, for all we know. Would it be okay to choose the latter?

Moreover, the instinctive human desire to have normal offspring may affect our first impressions on the matter, but that does not affect the previous considerations.

But let's consider the following scenario, as an analogy:

Let's suppose we can make a strong AI: an artificial being with far greater intelligence than a human being – or even than all humans put together. We can do the following:

1) We can make sure the AI is going to be friendly, and in particular, it's going to be similar enough to humans to be a moral agent (even if a vastly more intelligent one), and will always do a morally good action.

2) We can make sure that the AI will be a moral agent, but one that a certain portion of the time (say, at least 10% of the time) will perform a morally bad action.

3) We can make part of the programming random, so that we won't know in advance whether the AI will always do a morally good action, or sometimes or even often morally bad ones.

4) We can refrain from making any such AI.

Isn't it clear that both 2) and 3) would be immoral courses of action for us to take?

Of course, this is only an analogy, and hence not a perfect match.

So, someone might point to differences between that and the situation of an omnipotent, omniscient moral agent in a world devoid of imperfect moral agents – from the differences in power and knowledge, on one hand, to the fact that we don't live in a universe devoid of imperfect moral beings in the first place – and suggest that they make a relevant moral difference. But that was an analogy, given in reply to Swinburne's analogy about parents.

In the end, what matters is the actual case, not the analogies, and for the reasons I've been explaining and my intuitive assessment of the situation, I would say that even if we could tell that God would in fact create other morally good beings, he would not create what Swinburne calls 'humanly free agents'; in particular, God would not create humans.

Granted, someone might say that there might be some hidden morally sufficient reason to create humans or humanly free agents that I have missed, that we're not omniscient but God is, and so on.

However, as I argued earlier, simply claiming that there might be sufficient reasons is not enough to block my point. For that matter, someone might claim that, perhaps, God had sufficient reasons to bring about the actions of, say, Pol Pot, who might have been – for all we know – rightfully following God's orders. Of course, such a claim is preposterous, and we can tell that Pol Pot was a very immoral agent.

The point of the analogy is not to imply anything about the degree of immorality involved in the creation of imperfect moral agents by a moral agent that is omnipotent and omniscient – which would actually depend on the specific action in the category -, but to stress the point that if someone objects to a moral assessment by saying that God might have sufficient moral reasons, they actually ought to provide plausible reasons, otherwise the clear moral assessment stands.

All that said, even if the previous considerations were insufficient to conclude that God would not create humans or what Swinburne calls beings with 'significant free will', a weaker result is that no good reasons have been provided to suspect that he would do so, and that weaker result is enough, on its own, to block Swinburne's case for the existence of God, independently of other considerations.

So, while my position is that the arguments I've given above show that God would not create what Swinburne calls 'humanly free agents', even if I'm wrong about that, I would say that at least they show that Swinburne has not provided good reasons to suspect that God would create such beings.

In particular, if one were to accept the objection to my previous argumentation based on the claim that an omnipotent, omniscient moral agent just might have morally sufficient reasons to bring about what Swinburne calls 'humanly free agents', even if we are unable to ascertain what those reasons might plausibly be, then a similar objection would work in the other direction: for that matter, an omnipotent, omniscient moral agent just might have morally sufficient reasons to never, ever create what Swinburne calls 'humanly free agents', even if we are unable to ascertain what those reasons might plausibly be (though we actually are able to ascertain what those reasons plausibly are – which is what I've been arguing above – but let's leave that aside just for the sake of the argument).

So, if that particular objection were to be accepted, it seems we wouldn't be able to tell whether God would have sufficient reasons one way or another. That would block probabilistic assignments – a 0.5 assignment in such cases would not work since, for that matter, the same "reasoning" (i.e., mysterious sufficient moral reasons) would yield the same assignment for, say, humans (not just what Swinburne calls 'humanly free agents'), or a number of other beings, as well as any sort of behavior, such as the chance that, perhaps, Pol Pot was God behaving like that for mysterious reasons.

We may also add the problem of creating moral agents who endure suffering they do not deserve – another kind of action that, in my assessment, God would not carry out.

In addition to the previous objections, there is yet another problem for a theist defending Swinburne's argument, which is the following:

While I do not think that what Swinburne calls 'perfect freedom' has anything to do with the actual concept of freedom, the fact is that Swinburne says that God is 'perfectly free' and is not limited by irrational desires, and then goes on to say that what he calls 'significant free will' is 'a good thing' (apparently implying that it would be morally good for God to create beings with it), where that so-called 'significant free will' would be a kind of freedom that a 'perfectly free' being would not have, since he's not limited by irrational desires...

So, once again, the choice of the term 'significant free will' might make the idea look appealing at first glance, and even give the impression that it would be morally good for an omnipotent, omniscient moral agent to create beings with such 'significant free will', but once the obscurity of Swinburne's words is removed to a sufficient extent, it's clear that this so-called 'significant free will' amounts – based on Swinburne's own account of freedom – to a kind of imperfect freedom, which falls short of perfect freedom due to irrational impulses. How would it be morally good for an omnipotent, omniscient moral agent to create that, when he can create other perfectly free agents?

j) It would not be worse for God to create humanly free agents than to create other divine or semi-divine beings. [27]

For all of the reasons given above, I disagree.

It would be much better for an omnipotent, omniscient moral being to create beings with full moral knowledge and no inclination whatsoever to do evil (i.e., morally perfect ones), than to create "humanly free" agents (i.e., beings with limited freedom and moral awareness, and apparently with inclinations to do evil and such that at least some of them will surely do some moral evil).

k) God has reason to create a beautiful physical universe, independently of any reasons to create 'humanly free' agents, and even if there were no one but God himself to see that world. [28]

While Swinburne says that "of course" God has reasons to make a physical universe, Swinburne actually does not have any good reason whatsoever for even suspecting that God would have such reasons, since whether God would be inclined to make such a thing would depend on features of his mental makeup other than moral perfection, or anything else that is part of the definition of "God" or follows from it.

Therefore, an even more complex hypothesis than theism – such as some particular version of theism – would be required if we are to make an assessment as to whether God would create a material world.

That extends beyond what Swinburne apparently calls 'the physical universe', since it applies to lifeforms as well.

But purely as an example, and to illustrate my point here, let's consider the case of a beautiful rose, which Swinburne apparently considers part of the inanimate world, as his claim above implies (though whether a rose counts is beside the point here; we could pick anything else in what Swinburne calls the 'inanimate world').

Do we have any good reason to suspect, a priori, that God would have any motivation whatsoever to create such a thing?

Let's consider the following scenario:

On a distant planet, some intelligent aliens (say, kelorans) evolve. As a result of a different evolutionary past, any normal keloran would find anything that looks like a rose nauseating, and some alien plant-like organism (let's call it a 'k-rose') very appealing, while any normal human would find a k-rose nauseating as well.

Assuming that kelorans also have a moral sense and assess the prior probability of God (I see no good reason to think they would, but let's leave that aside), we could ask:

Would kelorans be justified in assessing that, say, God would be likely to make k-roses, or more likely to make k-roses than something as nauseating to a keloran as a rose?

Clearly, the answer is no, kelorans would not be so justified.

But then, and for the same reasons, humans are not justified in assessing that roses are more likely created by God than k-roses. We have no information about God's predispositions to behave other than his moral perfection.

Note that the previous example does not require 'assuming' unguided evolution. It's enough to point out that the kelorans are conceivable, and conceivably rational, even if they would find a rose nauseating.

Of course, the species 'rose' is not really the point here.

The point is that, as far as we can tell by the meaning of the words:

1) There might be rational and moral agents (and even morally good ones) who find roses appealing and k-roses nauseating.

2) There might be rational and moral agents (and even morally good ones) who find k-roses appealing and roses nauseating.

3) There is no good reason to suspect that agents such as those described in 1) are more or less probable than those described in 2), given that words like 'rational', 'moral', 'morally good', etc., have no implications whatsoever regarding how appealing an agent would find either roses or k-roses.

Of course, the previous example of roses is merely that: an example.

We may well also conceive of rational, moral and morally good beings that would look at what Swinburne calls the 'inanimate world' and find it nauseating.

The crucial point here is that there is nothing in the concept of moral goodness, and/or the concept of rationality, that would provide any information as to whether an entity would find the 'inanimate world' (or, for that matter, roses) appealing, nauseating, or would just be indifferent to them.

So, Swinburne's claim that God has good reason to create a 'beautiful inanimate world' is unfounded. In fact, we ought not to make that assessment; rather, and for the aforementioned reasons, we ought to conclude that we cannot tell or even probabilistically assess whether God would have reasons to create an inanimate universe that looks at all like the inanimate parts of our universe (I'm limiting this to the inanimate parts to assess the matter without contamination from the problem of suffering, pain, etc.).

Granted, Swinburne also claims that God has moral reasons to create such a universe, as that would be required to make what he calls 'humanly free agents', and it seems Swinburne believes it would be morally good for God to bring about such agents, at least under certain conditions.

However, that's another matter, which I have already addressed.

l) In general, it's better to create more humanly free agents, more animals, etc. In other words, in general (though not always), the greater the number, the better. [29]

There appears to be no good reason to think there is any moral dimension to the numbers, even if we assumed that creating such entities when one can make morally perfect ones instead would be morally acceptable.

However, in the case of "humanly free" agents, it seems to be immoral for an omnipotent, omniscient moral agent to create them, for the reasons I explained above, so God would not create them. Given that such entities exist, we can tell that God does not exist.

m) The probability that there will be other beings, if God exists, is 1. [30]

For the reasons explained previously in this section, it appears to me that there is no good reason to assume so – at least, no good reason provided by Swinburne.

n) The probability that there will be "humanly free" agents, if God exists, is 1/2. [30]

For all the reasons I gave above, in my assessment the probability that God, who is omnipotent and morally perfect, would create what Swinburne calls 'humanly free' agents, is 0, or at least nearly 0.

2.3. Conclusion of Part 2

Based on the previous arguments in this section, it seems we're justified in concluding that God would not create humans, and so he does not exist.

However, that conclusion is not required: a far weaker one suffices, namely that the aforementioned arguments at least show that Swinburne fails to give good reasons for his assessments about what God would likely create.

3. More probabilistic assessments, and more theistic improbability

In this part, I will tackle Swinburne's probabilistic assessments from a different perspective.

I will not challenge Swinburne's moral assessments.

In fact, I will even grant some of them for the sake of the argument, and reason from there. Moreover, I will also grant some of his probabilistic assessments for the sake of the argument. Of course, those concessions are limited to this part of my reply to Swinburne's case, unless otherwise specified.

So, let's consider some of Swinburne's claims:

In order to make his assessments about what kind of worlds God would be likely to bring about, Swinburne makes the following claim:

P1: There is no best possible world. [31]

According to Swinburne, if there were a best possible world, God would bring it about, since bringing about such a world would be the unique morally best action, and God always performs the morally best action when there is one (that follows from the definition of 'God' [10]).

However, Swinburne claims that there is no such unique best world, and supports that claim by saying that a world with another Swinburne that does exactly what he does, all other things equal, would not be a better or worse world, and the same would happen for other worlds.

So, Swinburne concludes that there is no unique best act of creation of a world.

Let's grant that for the sake of the argument; Swinburne also makes the following, stronger claim:

P2: There is no world W such that no other world W' is better. [23]

Swinburne says that even if there is no unique best world, if there were worlds W such that no world is better (even if others are just as good), then it would be an equal best act to create such a world, and allegedly, God would carry out at least one of those acts of creation.

However, he concludes that there is no such world W: for every world W, there is a better one W'. Let's also grant that for the sake of the argument, and let's consider two more claims Swinburne makes:

P3: If God faces a choice among n possible, mutually exclusive, and equally best actions, the probability that he will carry out any specific one of them is 1/n. [30]

P4: If God faces a choice among n possible, mutually exclusive, and equally best kinds of actions, the probability that he will carry out at least one action of any specific one of those kinds is 1/n. [30]

For the sake of the argument, let us grant P3 and P4 as well.

3.1. The improbability of God's creations: possible worlds as an example

Let's suppose that {W(n,1)} is a finite set of equally good worlds, that the cardinality of the set is m, and that God faces a choice among the m equally morally best actions of bringing about one of those worlds.

What can we say about the probability that God would bring about W(j,1), for a specific j?

Given that it's equally good to bring about W(j,1), for any j, then – mirroring the reasoning that apparently led Swinburne to P3 – it seems that the probability of God's bringing about W(j,1) is no greater than the probability of his bringing about any other W(i,1) in the set.

Hence, since those m probabilities can add up to no more than 1, the probability of each one is no greater than 1/m.

Let's introduce a variant: {W(n,2)} is a finite sequence of worlds such that W(j+1,2) is better than W(j,2), and the length of the sequence is m.

Then, for the same reasons, it seems that the probability that God would bring about W(j,2) is no greater than the probability that God would bring about the better world W(j+1,2).

In particular, since the probability of W(1,2) is no greater than that of any of the other m-1 worlds in the sequence, the probability that God would bring about W(1,2) is no greater than 1/m.

Combining this with the claim P2 that, for every world W, there is a better one W' – which allows us to build, starting from any world W, an ascending sequence of any finite length m – it follows at once that, for every world W, the probability that God would bring about W is no greater than 1/m, for any natural number m.
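
To illustrate the arithmetic behind this bound, here is a minimal sketch in Python; it merely restates the reasoning above (the chain of worlds is the hypothetical one licensed by P2, and the function name is mine), not anything from Swinburne's book:

    from fractions import Fraction

    def bound_on_first_world(m):
        # In a chain of m mutually exclusive options where no option is more
        # probable than the next-better one, p(1) <= p(2) <= ... <= p(m), and
        # the probabilities sum to at most 1. Hence m * p(1) <= 1, so p(1) <= 1/m.
        return Fraction(1, m)

    for m in (2, 10, 10**6):
        print(m, bound_on_first_world(m))  # the bound shrinks toward 0 as m grows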

This result is merely an example, but the kind of reasoning used to establish it will be useful as a means to establish more relevant results.

3.2. Theism and the probability of humans

As Swinburne claims [35], if there is a best action to bring about, God will bring it about.

But Swinburne also maintains that if there is a best kind of action to carry out, then God will carry out an action of that kind. Based on that, Swinburne separates actions in different kinds, in order to make his probabilistic assessments.

In particular, Swinburne concludes that God will create something else, because – Swinburne says – the kind of action consisting in creating something is a better kind than the kind consisting in not creating anything, and given that they are mutually exclusive, the kind of action of creating something is a best kind of action.

Also, Swinburne makes a case in support of his contention that the category (or type, or kind) of action of bringing about 'humanly free' agents (henceforth, HFA) is neither a better nor a worse type than bringing about other kinds of worlds. In other words, Swinburne maintains that to create and not to create HFA are equally best kinds of actions. [27]

Based on that, he assigns probability 0.5 to the hypothesis that God will create HFA.

Now, let P(k) be the intrinsic probability of theism, let P(HFA│k) be the probability that there would be some 'humanly free' agents given theism, and let P(Hu│k) be the corresponding probability for humans.

Going by the same reasoning, someone might say that to create some humans and not to create some humans (in addition to whatever else he brings about) would be acts of equally best kinds, resulting in a 0.5 probabilistic assessment for humans. In other words, P(Hu│k) = 0.5.

Granted, humans are a subcategory of HFA, but that's not the point: I'm merely defining categories or types or kinds of actions that aren't the same as those defined by Swinburne, and then making probabilistic assessments about them by his own reasoning.

Alternatively, there is another way to see that Swinburne's reasoning and claims entail that P(Hu│k) = 0.5. Given that the existence of humans entails the existence of HFA, P(Hu│k) ≤ P(HFA│k) = 0.5.

Hence, P(Hu│k) ≤ 0.5.

So, the kind of action of 'to create humans' is either of equally best or of worse kind than the kind of action 'not to create humans'.

Now, if the kind of action 'not to create humans' were a better kind of action than the kind of action 'to create humans', then 'not to create humans' would be a best kind of action, and thus God would never create humans. But Swinburne claims that God created humans. So, on Swinburne's account, the two kinds must be equally best, and P(Hu│k) = 0.5.

Then, given that humans are HFA (so Hu&HFA is equivalent to Hu), we have P(Hu│HFA&k) = P(Hu&HFA&k) / P(HFA&k) = P(Hu&k) / P(HFA&k) = (P(Hu│k) * P(k)) / (P(HFA│k) * P(k)) = 0.5/0.5 = 1.
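
To make the bookkeeping concrete, here is a minimal sketch in Python; the value assigned to P(k) is a hypothetical placeholder of my own – only the two 0.5 assessments come from the reasoning above – and the result does not depend on it:

    from fractions import Fraction

    p_k = Fraction(1, 1000)         # hypothetical placeholder for P(k)
    p_hu_given_k = Fraction(1, 2)   # P(Hu|k), by the reasoning above
    p_hfa_given_k = Fraction(1, 2)  # P(HFA|k), Swinburne's assessment

    # Since humans are HFA, the event Hu & HFA & k is just Hu & k.
    p_hu_and_hfa_and_k = p_hu_given_k * p_k
    p_hfa_and_k = p_hfa_given_k * p_k

    print(p_hu_and_hfa_and_k / p_hfa_and_k)  # 1, whatever value P(k) takes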

That equation would mean that the probability that, on theism, God would create humans given that he creates HFA is 1. Moreover, picking humans was just one choice among infinitely many possibilities for HFA; the point here is that this would mean that the probability of any specific kind of HFA is 1 on theism, given that there are at least some HFA – at least, as long as the powers of the specific HFA in question are as limited as those of humans. Now, let H1 be some conceivable kind of HFA, with such limited powers.

Then, as in the case of humans, P(H1│k) = 0.5, and P(H1│HFA&k) = 1.

Then, P(H1│HFA) ≥ P(H1&k│HFA) = P(H1&k&HFA) / P(HFA) = P(H1│HFA&k) * P(HFA&k) / P(HFA) = P(k│HFA).

In other words, that implies that the probability that theism is true given that there are HFA (as there are) is no greater than the probability of any conceivable species H1 of HFA given that there are HFA, as long as the powers of H1 are sufficiently limited.
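
Again, purely as an illustration, the following minimal sketch plugs in hypothetical placeholder values of my own for P(k) and for the probability of HFA on alternatives to theism – neither figure is Swinburne's – and checks that the bound above equals P(k│HFA):

    from fractions import Fraction

    p_k = Fraction(1, 1000)              # hypothetical placeholder for P(k)
    p_hfa_given_k = Fraction(1, 2)       # Swinburne's P(HFA|k)
    p_hfa_given_not_k = Fraction(1, 10)  # hypothetical, for illustration only

    # Total probability of HFA, and the probability of theism given HFA:
    p_hfa = p_hfa_given_k * p_k + p_hfa_given_not_k * (1 - p_k)
    p_k_given_hfa = p_hfa_given_k * p_k / p_hfa

    # Since P(H1|HFA&k) = 1, P(H1|HFA) >= P(H1&k|HFA) = P(HFA&k)/P(HFA):
    p_h1_bound = p_hfa_given_k * p_k / p_hfa
    assert p_h1_bound == p_k_given_hfa  # the bound is exactly P(k|HFA)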

Granted, the theist might argue that other pieces of evidence (apart from the existence of HFA) increase the probability of theism, but still, this result is very problematic regardless.

Moreover, there is another problem: since P(H1│HFA&k) = 1, that would commit a theist to accepting the existence of H1, for any such conceivable beings, as long as they accept Swinburne's reasoning to the conclusion that P(HFA│k) = 0.5, since the same kind of reasoning leads to P(H1│k) = 0.5.

There is yet another problem, which I will address below.

3.2.1. What's a 'type' or 'kind' of action?

Someone might raise the following objection to the previous reasoning: they might suggest that 'to create humans' and 'not to create humans' aren't kinds of actions in the sense Swinburne uses the term. But how would that be so?

Swinburne picks categories, or kinds of actions, as he considers them useful to make his points. For instance, he considers the types of action of creating only members of particular species, such as lions or tigers, even if in that case only to conclude that none of them constitutes a best kind. [24]

So, it seems that Swinburne defines a category of actions as the actions that have a certain property, and then assesses whether that category is a best kind, which is no more and no less than what I'm doing here by dividing actions into the categories 'to create humans' and 'not to create humans'.

The previous conclusion can be supported also by Swinburne's reasoning on page 116, and what follows:

In particular, he contends that in order to be able to assess what God must bring about, we would need to separate substances into finitely many kinds such that creating substances of each of those kinds beyond a certain amount of goodness is either better or just as good as creating members of any level of goodness of a class that does not include those types.

But that's exactly what I'm doing in the case of humans:

First, I divide substances in two types, kinds or categories: humans and non-humans.

Second, I ask whether creating substances of type 'non-humans' beyond a certain level of goodness is better than creating any number of substances in the category 'humans', which is pretty much what Swinburne is doing in the case of HFA, only defining a different category.

If it is better, then not creating humans is a best kind of action, it seems, so God would do it with probability 1.

Otherwise, since not creating humans is also not of a worse kind than creating them (else, God would create humans with probability 1, contradicting the result that P(Hu│k) ≤ 0.5), it follows that creating humans and not creating them are actions of equally best kinds, and so, going by Swinburne's own method for assigning probabilities in such cases, the intrinsic probability that God would create humans seems to be 0.5.

Granted, Swinburne claims he does not rely on exact values, so perhaps the probability isn't exactly 0.5, but very close to it. That does not affect my reasoning in any significant manner: the problem for theism that I raised earlier would only be very slightly modified, and would remain a serious problem, and the problem I will raise later would not be affected in the least, since it only requires the conclusion that the intrinsic probability that God would create humans is greater than 1/m for some natural number m – any m, not necessarily 2 – and that conclusion is reachable by the reasoning given in this section, regardless of whether 1/2 is the exact number.

It is true that Swinburne focuses on four types of substances, which he considers "important": namely, inanimate substances, animate substances without free will or moral awareness, animate substances with limited knowledge, free will and power, plus moral awareness, and animate substances with unlimited knowledge, free will and power. [26]

Oddly, Swinburne also claims that a substance with unlimited free will and power would lack what he calls 'significant free will'. But I've dealt with that in part 2, so, leaving that aside, the point here is that Swinburne chooses to focus on those types of substances that he considers important in order to make his probabilistic assessments. That's fair enough, but I'm just as entitled to focus on types of substances that I consider important as a means to refute his case for theism.

Still, if someone objected to my choice of types or categories and claimed that only some specific categories of actions can be used to assess the probability that God would engage in one of them, the burden would be on them. If we have the means to make probabilistic assessments in the case of those categories of actions, why not in other mutually exclusive categories?

As a side note, I will point out that Swinburne's types do not appear to be exhaustive of all conceivable substances, and Swinburne provides no good reason to suspect that they're exhaustive of all possible ones.

In particular, a substance of limited power but unlimited knowledge would not fall in any of Swinburne's categories or types. Yet, such a substance appears to be conceivable if an omniscient one is, at least prima facie, and Swinburne has not made a case against either the conceivability or the possibility of such substances.

3.3. 'Humanly free' agents, physicality, and dimensionality

According to Swinburne, HFA need to have physical bodies. [28][30]

Let's say, for the sake of the argument, that Swinburne is right about that, and so HFA need to have physical bodies.

That does not tell us anything about the number of spatial dimensions of those physical bodies. We have 3-dimensional physical bodies[32], but surely if God exists, he would not be limited to creating 3-dimensional bodies.

While our limited cognitive capabilities prevent us from imagining, say, 4-dimensional physical bodies, we can still conceive of them; and God, being omnipotent and omniscient, could make 4-dimensional physical bodies – or (3+k)-dimensional ones, for that matter, for any natural number k.

3.4. Dimensionality and some probabilistic assessments

Let us now consider the following categories or types of worlds, and corresponding types of actions, for each natural number k. If HFA require at least k(0) dimensions for some k(0), then let's stipulate that k is no less than k(0).

Cl(k): A world W is of type or category Cl(k) if and only if there are k-dimensional HFA in W, but there are no j-dimensional HFA for any j<k.

Tl(k): An action is of type Tl(k) if it's the action of bringing about a world of type Cl(k).

Clearly, the types Cl(k) are mutually disjoint, and the corresponding types of actions are mutually incompatible.

It seems that Cl(k) is neither a better nor a worse type of worlds than Cl(k+1), and the corresponding types of actions are not actions of a better or worse type, for the following reason:

If a world W is of category Cl(k), we may consider another world W', as follows.

Let a j-dimensional realm be any maximal j-dimensional part of W (though maximality is not required for my argument, it simplifies matters a little):

i) If there are HFA in a j-dimensional realm in W, there is a (j+1)-dimensional realm that corresponds to it in W' and which will also have HFA, in similar or greater numbers (whatever is better), and with their power to do evil restricted just as much (i.e., as much as is best).

ii) If there are no HFA in a j-dimensional realm in W, then there is either a similar realm in W' as well, or a (j+1)-dimensional realm with no HFA either, but with much greater numbers of things. Assuming with Swinburne that non-moral agents can somehow be categorized as 'good', in a sense that's relevant to Swinburne's case[34], we also stipulate that W' has many more good things that aren't moral agents in those realms.

iii) In addition to the above, W' also has many (say, 1000000000) extra k-dimensional physical universes without HFA, full of good things, if it makes sense to count such things and/or their existence as 'good'.[34] If only one such universe is possible, then W' has one extra such universe.[34]

So, W' is no worse than W.

Still, if the above reasoning is not enough to show that Cl(k+1) is no worse than Cl(k)[34], then at least it shows that there is a subcategory C'l(k+1) of Cl(k+1) that is no worse than Cl(k) – namely, the subcategory constituted by the worlds W' that correspond to each W in Cl(k) –, and a corresponding subtype of actions, and the following probabilistic result obtains as well:

The intrinsic probability that God would carry out an action of type Tl(k) is no greater than 1/m, for any natural number m, since actions of type Tl(k) and Tl(p) are mutually exclusive if p≠k, and neither of them is more probable than the other.

Someone might object to the previous reasoning and say that it's not possible for there to be beings of different dimensionality in the same world. I do not see why that would be impossible, but if it is impossible, then the construction is even simpler, because in that case any world W in Cl(k) only contains k-dimensional physical beings, and we may consider a world W' with only (k+1)-dimensional ones, also with similarly limited HFA, but many more good things, just with one more dimension. Then, at the very least, W' is no worse than W, and the previous conclusion follows as before.

An alternative objection might claim that a world W of type Cl(k) may have some good of which a world of type Cl(k+1) is deprived – like k-dimensional HFA – and that that makes the world W better than any alternative W'.

However, that second objection does not succeed, either, for the following reasons:

First, assuming with Swinburne (against good reasons, as argued in part 2) that there is something good about an omnipotent, omniscient moral agent bringing about HFA into a world devoid of them, and that that goodness consists in what Swinburne (improperly) calls 'significant free choice', then the alternative world W' contains that as well, and in similar or greater numbers (whatever is better).

Second, any kind of non-moral goodness (whatever that is) that those k-dimensional moral agents may have in W can be matched or surpassed either by some (k+1)-dimensional non-moral goodness in W', or by the goodness of some non-moral k-dimensional things in W', which we may add as required, since we're talking about some non-moral goodness – assuming that somehow that is relevant to the argument at hand. [34]

So, the conclusion remains: the intrinsic probability that God would carry out an action of type Tl(k) is no greater than 1/m, for any natural number m.

In particular, this implies that the intrinsic probability that God would create humans is no greater than 1/m, for any natural number m, since the category of worlds with humans is contained in the union of the types Cl(j), for all natural numbers j equal to or less than the dimensionality of humans, and so the probability that God would create humans is no greater than the probability that he would carry out an action of some type Tl(j), for some natural number j equal to or less than the dimensionality of humans.

Since the probability of the latter is no greater than 1/m, for any natural number m, it follows that the probability that God would create humans is no greater than 1/m, for any natural number m. But as I argued above, Swinburne's own reasoning leads us to the conclusion that the probability that God would create humans is P(Hu│k) = 0.5, contradicting the conclusion that P(Hu│k) ≤ 1/m for all natural numbers m.
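
The clash can be checked in a few lines; in this minimal sketch, the 0.5 is the figure derived from Swinburne's own method in section 3.2:

    from fractions import Fraction

    p_hu_given_k = Fraction(1, 2)  # from Swinburne's method (section 3.2)

    # The dimensionality argument requires P(Hu|k) <= 1/m for every natural m:
    for m in (2, 3, 100):
        print(m, p_hu_given_k <= Fraction(1, m))  # False from m = 3 onward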

3.5. Conclusion of Part 3

Given the arguments presented in this part of the article, we ought to reject Swinburne's case for theism, on the grounds that the reasoning leading to his probabilistic assessments contains decisive errors.


Notes and references

[1] Swinburne, Richard, "The Existence of God", Second Edition, Clarendon Press, Oxford.

[2] Page 53.

[3] Page 55.

[4] Page 56.

[5] Page 86.

[6] Page 65.

[7] Page 72.

[8] Page 239.

[9] Of course, we don't need to claim it's a brute fact, or that it is not. We may remain undecided on the subject.

[10] Page 7.

[11] Page 94.

I would rather call that "perfectly random", but that would be beyond the scope of this article.

[12] Page 98.

[13] Page 100.

[14] Page 101.

[15] Page 104.

[16] I will not define "deity", even though the concept is very vague at best.

If that's a problem, we can call these entities "other beings" - for instance -, and the arguments are not affected.

I just thought "deities" was more fitting, since it stresses the fact that I'm providing alternatives to theism.

[17] In fact, one needs much less than P(AN(Con(1,n))│k) ≥ P(Theism│k).

It would be enough to take – for instance – something like:

P(Theism│k) ≤ P(AN(Con(1,n))│k) * n * (log(n+20) + 10^100)

[18] Page 17.

[19] Actually, Swinburne does claim that an "inbuilt detailed specification of how to act" would make a hypothesis more complex than one stipulating perfect freedom. However, moral perfection is precisely such a specification, so we have to assume that such a specification does not preclude perfect freedom – else, God would be logically impossible.

[20] We can even further simplify the hypotheses, if we so choose:

AH(Alternative Deity(n,1)): There exists a being who is V(n)-perfect and the creator of all other beings.

Since there cannot be two different creators of all other beings, AH(Alternative Deity(n,1)) and AH(Alternative Deity(m,1)) are disjoint if n≠m, and the rest of the argument follows as before.

[21]

We can even further simplify the hypotheses, if we so choose:

AH(Gambling God(n,1)): There exists a being who is the creator of all other beings and W(n)-perfect.

Since there cannot be two creators of all other beings, AH(Gambling God(n,1)) and AH(Gambling God(m,1)) are disjoint if n≠m, and the rest of the argument follows as before.

[22] Page 119.

[23] Page 115.

[24] Page 116.

[25] Someone might object that, under some circumstances, it's not morally neutral to choose an apple instead of a banana – there might be a disease that spreads through apples, etc.

However, I'm talking about a case in which all other things are equal.

In other words, my point is that the choice of – say – an apple over a banana, when there are no other factors that might be morally relevant, is morally neutral.

The same applies to other cases in which I say that some actions are morally equivalent, or neutral.

[26] Page 118.

[27] Page 120.

[28] Page 121.

[29] Page 122.

[30] Page 123.

[31] Page 114.

[32] Someone might raise the issue of multiple dimensions in our universe and suggest that, perhaps, our physical bodies are not 3-dimensional after all. However, that issue is not relevant to the points I'm trying to make. If our bodies have m dimensions, for any natural number m, for that matter God could create physical beings with (m+1)-dimensional bodies.

[33] In this and other cases, unless otherwise specified, I'm considering all the physical beings at that world across time. For instance, if at world W there are only 3-dimensional physical beings before time t1 and only 4-dimensional ones after t1, that world would not be in category C(3) or C(4).

[34] In reality, as I argued in part 2, I think all this talk about better types of action and that conflation of morality and things completely unrelated to morality is a confusion.

Moreover, the talk of value seems confused as well.

Swinburne seems to have conflated the issue of whether moral statements state true propositions with the existence of some mind-independent value (whatever that might be), and then to have conflated moral and non-moral 'value'.

However, at this point in my argument I'm following Swinburne just for the sake of the argument.

[35] Page 113.

