God vs The Multiverse


Wednesday, June 27, 2012

God vs The Multiverse (Part 5: The Origin of Life)

We are going to make a short digression into biology and the problem (and attempted solution) of the origin of life.  We want to make it very clear that we are not using the problem of the origin of life in our proof of God.  We are relying upon the fine tuning of the constants of nature and the initial conditions of the big bang.  The reason we are introducing the proposed solution to the origin of life is because multiverse physicists attempt to extend this type of solution to explain the phenomenon of fine tuning in the universe.

The 'origin of life' problem can be roughly expressed as an 'origin of something as complex and special as DNA' problem.  A short historical and scientific background on the theory of evolution will help explain why the unsupplemented theory of evolution cannot explain the origin of life itself.

The key development that enabled the theory of evolution to emerge was the discovery that the Earth is far older than scientists had previously had evidence for.  The expansion of the known age of the Earth based upon geological evidence, first by James Hutton in 1785 and then further developed by Sir Charles Lyell in his book Principles of Geology (1830), opened up the possibility for a much deeper understanding of the complexity of life.

Charles Darwin supplied this understanding with the theory of evolution, in his famous book On The Origin of Species (1859).  The modern version of Darwin's theory which incorporates genetics, called Neo-Darwinism, was developed in the mid-twentieth century.  It is a very elegant, simple theory that explains the wonderful diversity of life, and gives you an appreciation for how an amazingly complex cell can emerge from the information encoded in DNA.  (We are not taking a stance on whether or not Neo-Darwinism is entirely sufficient and complete to explain all the facts about life.  That being said, there is definitely something right about it.)

The essential element of biological evolution is the self-replicator (DNA), something that makes near-perfect copies of itself.  The replicated copy is the next generation of replicator, which continues the process of nearly perfect replication.

It is necessary for the functioning of natural selection that the process of replication not be perfect.  Slight variations in each generation, which arise from the "failure" to reproduce an exact replica (because of the occurrence of a mutation), are what allow the process of natural selection to act on those differences and select the fittest organisms for survival.

The key point is that it is intrinsically impossible to explain the existence of the first replicator itself (the first DNA molecule) through the theory of evolution.  This is because evolution and natural selection only operate once a replicator exists.  In a sense, the science of biology begins after the first replicating molecule comes about, given the proper properties of the environment.  (See the first comment below for an elaboration of this point.)

Many biologists speculate that there was another, long forgotten, yet simpler replicator that was the ancestor to the first DNA.  This only pushes the problem back to how that first replicator emerged, as any replicator sufficient for evolution to operate on would probably be a highly complex entity.

This problem is known as the origin of life problem.  Any solution to it bridges the gap between chemistry and biology (between the inanimate and the animate).  The biological theory of evolution cannot solve this problem.  Where did the first replicator come from?


The main approach to resolving this problem is by invoking luck (chance).  Since you only need to get lucky once (after you have the first replicator, biological evolution takes over), it becomes more reasonable to speculate that perhaps it all started by a lucky break.  While this might initially sound like a very forced answer, the weak anthropic principle (which we'll explain) elucidates why it might be a fairly reasonable solution.

It is important to clearly understand the difference between the strong anthropic principle, which we used to refer to a teleological explanation (in post 3), and the weak anthropic principle, which is a very different type of causal explanation.  Once again, labels are not as important as concepts.

It is speculated that perhaps some inanimate thing accidentally combined with some other inanimate thing to produce the first living replicator (an ancestor of DNA).  Once we have DNA, the theory of evolution claims that the rest is just details.  While the emergence of DNA by chance might seem highly improbable, since there are many, many planets in the universe which are in theory hospitable to life, even something very unlikely may become probable given such a large number of possible tries.

A simple analogy makes this reasoning clear.  If your odds of winning a lottery are one in a million when you buy only one ticket, then your odds increase dramatically if you buy trillions of tickets.  In fact, given enough tickets, winning becomes highly likely.  If you win, you're not really as lucky as you may feel.  The laws of probability operate very efficiently when big numbers are involved.
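
To make the arithmetic concrete, here is a minimal sketch of our own (not from any source we cite) that treats each ticket as an independent one-in-a-million try, the same way each planet is an independent "try" for life:

```python
# A minimal sketch (our illustration): if each ticket is an independent
# one-in-a-million try, the chance of at least one win is 1 - (1 - p)^n.
p = 1e-6  # odds of winning on any single try

for n in (1, 10**6, 10**12):  # one ticket, a million, a trillion
    p_at_least_one = 1 - (1 - p) ** n
    print(f"{n:>16,} tickets: P(at least one win) = {p_at_least_one:.6g}")
```

With a million tickets the chance of a win is already about 63%; with a trillion it is indistinguishable from certainty, which is all this reasoning requires.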

Someone may ask: "Maybe it is likely for life to randomly occur once, but what are the odds that it would occur here on Earth?"  To that, the weak anthropic principle is invoked.  Essentially, it says that there is an easily overlooked, causal relationship between an intelligent observer and the development of life.  Namely, life is a necessary condition for having an observer who can even ask the question of why life is here on Earth.  Only on planets where life exists is it even possible to have observers, and therefore we should not be surprised to find ourselves on a planet with life.  There aren't any intelligent beings on planets without life.  By this line of reasoning, it is superfluous to invoke a teleological explanation (i.e., that the Earth was designed in order to produce the first DNA) in order to explain life on Earth.

It is not necessary to know the precise number of planets or the exact odds of a DNA molecule emerging by chance.  You just need to match them to roughly the same order of magnitude (basically, that they're "closely" matched).  Should those odds be close to the number of planets, we would have a good explanation for how life started.  The fact that we are on one of the planets on which it did occur is obviously not a question, as the existence of life is a necessary condition for us observing life and asking the very question in the first place.

As of yet, it is still unclear whether the number of hospitable planets suffices, given that science does not currently have a well-established theory for a chain of progressively more complex replicators leading to DNA.  The odds against a DNA molecule emerging directly appear to exceed the number of hospitable planets in the observable universe; but it is conceivable that we may find evidence of a simpler replicator that will allow us to compare its odds against the estimated number of planets, which is known to be a very big number.  (By the way, the multiverse theory solves this problem too, as it posits a nearly infinite number of hospitable planets.)

The key conceptual point to take away from this for the next post is that this type of reasoning only works because there are known to be many planets that are hospitable to life.  Therefore, even though it is highly improbable on any particular planet for life to spontaneously generate by chance alone, it can become likely if there are enough possible planets for it to occur on.

This line of reasoning is inapplicable if there is only one known planet. It is not a good explanation to say that a highly improbable event occurred, given that there was only one try.  Before scientists observed the many, hospitable planets, it was not reasonable to say that life originated from inanimate matter through chance alone.  That is too much of a coincidence to accept!

(This reasoning is also implicitly contingent on the very reasonable assumption that whatever happens on one planet does not affect the results of a different planet.  If the results on all the planets were correlated to each other in a way that whatever happened on one planet also occurred on all the others, it would be equivalent to having a trillion copies of one lottery ticket.  This point is very obvious and we only mention it because it will be important in the next post.)

The next 10-minute video about the origin of life is by biologist Richard Dawkins.  (It is part 3 of 5.  Click here for Part 1, Part 2, Part 3, Part 4, and Part 5.)  We will only be embedding Part 3, as it nicely transitions into stage two of our posts about the multiverse.  Dawkins first summarizes the contents of this post.  He then distinguishes between the Many Worlds Interpretation of Quantum Mechanics, which is not relevant to the fine tuning of the constants, and multiverse theory, which is relevant to the fine tuning.  He then discusses how physicists try to explain the fine tuning with the weak anthropic principle and the multiverse, though he acknowledges that it is only a satisfying solution for fine tuning if there are other independent reasons for postulating the multiverse (which Dawkins believes there are).



(We will answer Dawkins' question of "Who designed God?" in stage three of the proof, when we explain the concept that God is One.)

Tuesday, June 26, 2012

God vs The Multiverse (Part 4.5: Intelligent Design)

In order to give more clarity and precision to the proof, we would like to define how we are using design, order, fine tuning, and intelligence.  We will also elaborate on the various areas in which modern physics indicates design, order, and fine tuning.  We will thereby show why the fine tuning in the universe truly points to an Intelligent Designer.

The word 'intelligence' derives its meaning from the Latin verb inter-legere, which means to "pick out" or discern.  An intelligent creative act is one that selects one possibility from among many, for the purpose of producing its intended object.  
We define an object as designed when the qualitative nature of its parts is such that the emergent entity is significantly greater than the sum of its parts.  When this is due to a specific arrangement of its parts, we say that the object is ordered.  When discussing the specific quantitative components of an object that allow the greater entity to emerge, we refer to the fine tuning of its design.

When an object is designed (ordered or fine tuned), one must find the cause that "picked" these parts and their specific arrangement (assuming a situation where randomness doesn't suffice as an explanation).   Since intelligent means "to pick out", we say that the cause of a designed object is an intelligent designer.  If the nature of the design is quantitative, then we can also say that the cause of the fine tuning is an intelligent fine tuner.  For convenience, throughout this proof we will often use 'design' to refer to all the various concepts, as it is also a general term that includes all these ideas.

In order to fully grasp what Intelligent Designer really means when applied to the constants and initial conditions, it is beneficial to have a good idea of how the constants and initial conditions fit into the current framework of fundamental physics.  The more you understand the different components of fundamental physics and their relationship to one another, the greater will be your sense of what the fine tuning really means.  We know this might seem like a daunting task, but we think that a simple analogy can help you appreciate the different components in a very general way.

For our analogy, we are going to extend Richard Feynman's comparison of the laws of nature to the rules of a chess game.  It is an oversimplified analogy, but it will serve to help you conceptually place the fine tuning of the constants and initial conditions.  The video also mentions the concept of simplicity, an important notion in science which we will come back to in Stage Three of the proof.


In chess, there are three things that all have to be fine tuned just right in order to have an official game of chess:

(1) The rules for the movement of the pieces, which can be divided into qualitative and quantitative components:
  • (a) The qualitative rules for how each piece moves. For example, a king or queen can move in any direction; a castle can only move horizontally or vertically; a bishop can only move diagonally.  This naturally includes the idea that there are different fundamental chess pieces.
  • (b) The quantity of each piece's movement.  For example, while a king and queen can each move in any direction, the king can only move one space at a time, while a queen can move any number of vacant spaces.
(2) The properties of the chessboard, which also have qualitative and quantitative components: 
  • (a) The qualitative properties of the chessboard.  The board is in the shape of a square, which consists of a finite number of spaces (individual squares).
  • (b) The quantitative properties of the chessboard.  It has a specific number of squares (eight by eight, for a total of 64 spaces).
(3) The initial setup of the chess pieces that begins the game.  There is only one specific arrangement of the chess pieces that can begin an official game.

The fine tuning in chess can be seen by realizing that all three major components need to have the correct quantities and proper order, or the game won't be played properly.  If the quantities are too far off, the game won't make sense at all.

An example of the fine tuning required in the rules (1b) can be seen by imagining what the game would look like if we changed some of the quantities that govern how the pieces move.  For example, if pawns were required to move 6 spaces in a single move, the game would quickly break down: each pawn would be prevented from moving by the opposing pawn 5 spaces away.  Similarly, if castles and bishops could only move 2 spaces at a time, the depth and complexity of the game would be compromised, as this restriction would curtail many interesting strategies.

An example of fine tuning of the chessboard (2b) can be seen by imagining what the game would look like if we changed the quantity of spaces on the board.  If the board was a trillion by a trillion spaces, the pieces would never interact, and we would have a very boring game.  Likewise, if the board had too few spaces to fit the number of pieces, we wouldn't even be able to play a boring game, but would be left with no game at all.

An example of special ordering of the initial setup (3) can be seen by imagining other random arrangements of the pieces to begin the game.  Almost every arrangement will not allow an official game to start (which demands one very specific setup). Most other arrangements will certainly not allow a competitive game, which is the essence of the game to begin with, as one side will have an unfair advantage.  Some arrangements will lead to no game at all, such as the game beginning with one king in checkmate or stalemate position.

In an analogous way, fundamental physics can be broken down along these same three categories:

(1) The laws which describe how energy acts, which can be divided into qualitative and quantitative components:
  • (a) The qualitative laws of Quantum Mechanics describe the way that energy acts.  Energy comes in specific forms, which are analogous to the specific pieces of chess.  Instead of the basic chess pieces like castles or bishops, physics has fundamental particles like electrons and quarks.
  • Quantum Mechanics has been beautifully unified in the Standard Model of particle physics.  The Standard Model unifies three of the four fundamental forces of physics (the electromagnetic force, the strong nuclear force, and the weak nuclear force) that govern the behavior and interaction of matter.  The idea of these forces is similar to the rules for how each chess piece moves and interacts with the other pieces.

  • (b)  The constants of the Standard Model (which we mentioned in post 2) determine the quantities of the fundamental particles and forces.  This is (crudely) analogous to determining the quantity of spaces that each chess piece can move, like the rule that a pawn can move 2 spaces on its first turn, but only 1 space thereafter.  Just like in the chess example, if these constants were different, the universe would be completely different in a very boring and uninteresting way.  It would lack any hierarchy of complex structures greater than particles.

(2) The properties of the space-time framework, which also has qualitative and quantitative components:

  • (a)  The qualitative laws of Einstein's Relativity determine the space-time framework within which the different particles interact.  The space-time framework is roughly analogous to a board.  The space-time of Special Relativity is very much like a chessboard with a fixed background for the events to occur in.
  • Special Relativity is superseded by General Relativity, in which gravity (the fourth fundamental force) is seen to change the geometric shape of the framework itself.  This would be like the spaces of the chessboard changing during play, which you can imagine as adding or subtracting spaces in between pieces as they move around the board.  The space-time framework changing is an abstract concept, but it's important to have some idea of it for the next point.

  • (b)  The free constant of General Relativity is called the cosmological constant (mentioned in post 3).  This determines how fast space itself expands.  This is analogous to a rule that says that the spaces between pieces double after every 100 turns.  If the cosmological constant was bigger, space would expand too fast (say 10 squares a turn) for there to be any interactions between particles, and hence nothing interesting would be produced.  If the cosmological constant was negative, space would contract too quickly (say losing half the squares every turn) and there wouldn't be enough space for anything to exist.

(3) The initial conditions of the big bang are analogous to the initial setup of the chess pieces.  Just as we wouldn't expect a random arrangement of the pieces to produce a fair and competitive game, we wouldn't expect a random arrangement of the initial state of the big bang to lead to an ordered universe.  We describe the highly ordered initial arrangement by saying it has low entropy.  In post 4, we mentioned that this has a probability of occurring by chance alone of approximately 1 out of 10^(10^123).

Based upon these three categories, modern science has spoken in the clearest possible language that everything in our universe is fine tuned.  Energy, space-time, and the initial conditions are all fine tuned to mind-boggling degrees.  In fact, there is no fundamental aspect of the physical universe which is not fine tuned.  Additionally, because of the precision of mathematics, the language of nature, we can clearly see how slight changes to these quantities would result in a chaotic, meaningless, boring universe (i.e., nothing more than many fundamental particles in a state of chaos, a mere sum of its parts).  This is something which might have been intuitively recognized in previous generations, but it can now be stated in a much more rigorous manner because of modern science.

Based upon these insights from modern physics, we can clearly see that the various components of the fundamental laws of nature truly merge to form a whole entity which is much greater and richer than the sum of its parts.  Novel existences form a hierarchy of complex structures rising from the level of fundamental particles, to complex life, all the way to the galactic scale of the cosmos.  There is order and beauty in our universe everywhere we look, in every way that we look.

We therefore conclude that the universe is designed.  Since the design we are focusing on in this proof largely concerns the quantitative components of the universe, we also say that the universe is fine tuned.  (See the first comment on post 5 for an example of qualitative design in the laws themselves.)  The design and fine tuning of the universe point to the existence of an Intelligent Designer (an Intelligent Fine Tuner) who picked out the features of the various qualitative laws and particles of the universe, set the proper quantities, and ordered the initial conditions so that the universe evolved to form a whole which is much greater than the sum of its parts.

Sunday, June 24, 2012

God vs The Multiverse (Part 4: The Initial Conditions)

There is another example of fine tuning in the universe we want to highlight because it is of a very different conceptual nature than the constants, and provides an independent proof of an Intelligent Designer.  (For an elaboration of this point, see the first comment below.)  This is regarding the initial conditions of the universe, which were set at the big bang.

We've never seen anyone (which doesn't mean they don't exist) propose either the Master Mathematical Equation theory or the Necessary Existences theory to explain the fine tuning of the initial conditions.  It's not even clear how such an explanation would be formulated, as it seems of a qualitatively different character than our current understanding of physical law.  (It would seem at this point that the only alternative explanation to an Intelligent Agent is the multiverse.)

The big bang is the widely accepted model for the emergence and evolution of the universe as we know it. The arrangement of the matter and other conditions at the big bang were perfectly tuned so that the universe we see today would naturally emerge. This arrangement was highly specialized, in the sense that variations in the initial conditions would have resulted in disorder (a universe filled with black holes) instead of the ordered universe we witness today. The probability of obtaining such a state by random chance is staggeringly low.

(For those afraid of the physics, you can skip to the paragraphs after the video below and you will still follow the main point of this post.  This post will be our last post that contains this much physics and math.  For those interested, the following will provide a good opportunity to review or learn some physics and mathematics, and thereby have a deeper appreciation for the uniqueness of this proof.)

Someone may ask: although it is highly unlikely that the arrangement of matter at the big bang would be exactly as it was, any one arrangement of matter would have an equally low probability.  After all, it had to have some arrangement.  How do you know the initial conditions were so special?

The critical distinction we need to make in order to understand this question is between: 
1) the specific arrangement of the individual parts of a system (a collection of particles);
2) the state of the system as a whole.

The relationship between the whole and its parts is the key concept.  Some states of the whole object are contingent on a unique arrangement.  For example, the meaning of this very sentence (we're treating this whole sentence as a system, with the letters as its parts) is contingent on all the letters and spaces being arranged in approximately this order.  If we jumble up all the letters, the sentence as a whole loses this state (of making intelligible sense).  Other states, like a meaningless jumble of letters, are independent of how the letters are arranged.  Almost every random ordering of the letters will be in this state of meaninglessness.

If we randomly scramble an object's parts, entropy is the measure of how probable a particular state of the whole object is.  A state that can come about through many different arrangements is called a state of high entropy.  A state that can only come about through very few different arrangements is called a state of low entropy.  Entropy is thus a number which measures the likelihood of any particular state of the whole object if we randomly shuffle its individual parts.  (The fact that a state of lower entropy is less probable is a direct consequence of the fundamental postulate in statistical mechanics.) We'll illustrate with an example.
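
For readers who want the standard formula behind this verbal definition (our addition; the discussion in this post stays qualitative), this is just Boltzmann's entropy:

```latex
% Boltzmann's entropy: W is the number of arrangements (microstates)
% of the parts that realize a given state of the whole (macrostate),
% and k_B is Boltzmann's constant.
S = k_B \ln W
```

A state with few arrangements (small W) has low entropy, and under random shuffling its probability is proportional to W, exactly as described above.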

If we toss 2 individual coins, we consider all the possible ways they could land (H - heads, T - tails):
(1) HH  (2) TT  (3) HT  (4) TH.

The probability of each of these 4 outcomes is 1/4.  Upon consideration we notice that outcomes (3) and (4) will appear exactly the same in terms of the whole system: 1 head and 1 tail.  Thus a better way to describe the probabilities is as follows: P(0 heads) = 1/4, P(1 head) = 2/4, P(2 heads) = 1/4.  One head is more likely to occur than 0 or 2 heads because it can happen in 2 ways, while 0 or 2 heads can each occur in only one way.

We can generalize this idea to flipping 10 coins.  In total, there are 2^10 = 1024 possible outcomes.  Thus, the probability of obtaining any particular outcome (say, HHHHHHHHHH or HTHTHTHTHT) is 1/1024.  However, there is only 1 way to get 10 heads, while there are 252 (for those mathematically inclined, 10 choose 5) ways of getting 5 heads (some examples are HHHHHTTTTT, TTTTTHHHHH, THTHTHTHTH, HTHHHTTTHT).  Thus the probability of obtaining 10 heads is 1/1024, while the probability of obtaining 5 heads is the much more likely value of 252/1024, which is approximately 1/4.
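
If you would like to check these numbers yourself, here is a small Python sketch of ours that counts the arrangements using the standard binomial coefficient:

```python
from math import comb  # binomial coefficient "n choose k"

n = 10
total = 2 ** n  # 1024 equally likely outcomes for 10 coins

for k in (10, 8, 5):  # number of heads
    ways = comb(n, k)  # arrangements giving exactly k heads
    print(f"{k} heads: {ways:>3} ways, probability {ways}/{total} = {ways / total:.4f}")
```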

Because it can only occur in 1 way, we consider the outcome of 10 heads to be highly unlikely (which counter-intuitively is called a low entropy state).  Conversely, since 5 heads can occur in many ways, we consider it to be fairly probable (or a high entropy state).  The outcome of eight heads would be somewhere in between in terms of likelihood and entropy. 

In general, one can think of a low entropy state as being highly ordered and a high entropy state as being disordered.  This is because there are many ways to randomly bring about a state of disordered nonsense, but only a few ways to bring about a state of meaning and order.

The second law of thermodynamics states that physical processes move an object from lower states of entropy to higher states of entropy.  This means that over time, all objects end up in the state that can be brought about by the highest number of arrangements.  That is, if you start with 8 out of the 10 coins on heads, and you give them enough time and let them interact (e.g., you shake the container), you'll end up with a state of about 5 heads.  While it is not theoretically impossible for the second law (which is essentially a statistical law based on probabilities) to be violated in a particular instance (e.g., the Red Sea splitting in half for a few hours), a violation of this law has never been observed (without the observers claiming they have witnessed a miracle).
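
Here is a toy simulation of ours of "shaking the container": start with 8 of 10 coins on heads and repeatedly re-flip randomly chosen coins.  Whatever the starting point, the count drifts toward the high entropy neighborhood of 5 heads and stays there; this is the second law in miniature:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible
coins = [1] * 8 + [0] * 2  # 1 = heads: a low entropy starting state

for _ in range(10_000):  # "shake": re-flip one randomly chosen coin
    coins[random.randrange(10)] = random.randint(0, 1)

print("heads after shaking:", sum(coins))  # typically near 5
```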

When you apply this reasoning to the universe going forward in time (towards the future), you reach the conclusion that the universe will, at a point far in the future, end up in its most likely state (which is a very boring, meaningless state).  This is known as the heat death of the universe: the state of highest entropy and the least amount of order.

The universe is currently in a state of much lower entropy than heat death.  We have things in this universe with a lot of order, such as galaxies, stars, planets, life, etc.; things that are very unlikely to be attained by a random arrangement.  If we extrapolate backwards in time to the big bang, we realize that based on the second law of thermodynamics, the universe must have been in an even lower state of entropy (an even more ordered, highly improbable, state than it is now).

Another way to see this point is based on the idea of meaningful states.  The number of possible arrangements of all of the particles in the universe at the big bang was very, very high.  Therefore, the probability of any particular arrangement occurring by chance is very, very low.  However, we can divide all arrangements into two distinguishable classes: (a) those which eventually unfold to an ordered universe; (b) those which eventually unfold to a universe of total nonsense.  There are very, very few arrangements in (a), and therefore these states have a low entropy and a very low probability of occurring by chance.  There are many, many arrangements in (b), and therefore these states have a high entropy and a very high probability of occurring by chance.

The fact that at the big bang the universe had such a low state of entropy is like tossing up trillions of letters and having them randomly fall in the arrangement of all the Wikipedia articles.  If the universe did not start off in this special, highly unlikely, low entropy state, then even if we had the same qualitative laws of physics and the same fine tuned constants of nature, we would never get a beautiful, ordered, complex universe.  This is what is meant by the fine tuning of the initial conditions of the big bang.

As an aside, this is why the infinitely cyclic universe model of big bang/big crunch was rejected in 1934, as entropy would be infinitely increasing.  There is an arrow of time and it had a beginning.  There are a few modern day approaches that attempt to reincarnate the theory, but as of yet they are still entirely speculative with no experimental support.  In any event, the essential point (that the initial state of our current universe had an incredibly low entropy) is independent of the cyclic universe issue. (This is an old problem, first recognized by Ludwig Boltzmann in 1895, that even a genuine multiverse theory has great difficulties solving. More on this in later posts.)

Roger Penrose derives the probability for this initial state in his book The Emperor's New Mind (1989).  We highly encourage the more advanced reader to try to read through his basic derivation, which is only a few pages and is mostly English.  We have included the 4 minute video below, where Penrose briefly explains just how special the initial conditions were.


The likelihood of the initial conditions of the universe (the arrangement of matter for the big bang) occurring by chance alone is the biggest number (or smallest probability) we have ever seen with regard to fine tuning: less than 1 out of 10^(10^123).  It is a double exponent.  For those who are mathematically inclined, try to fathom how big this number really is.  It makes the cosmological constant ("trillion, trillion, trillion....") seem minuscule.  If you tried to write the number using every single particle in the universe to represent a zero, you would run out of particles!  It's not even close.
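
To see why you would run out of particles, compare exponents.  This back-of-the-envelope comparison is our own addition and uses the standard estimate of roughly 10^80 particles in the observable universe:

```latex
% Writing out 10^{10^{123}} in full requires 10^{123} zeros, but there
% are only about 10^{80} particles available to write them on:
\frac{10^{123}\ \text{zeros needed}}{10^{80}\ \text{particles}} = 10^{43}
```

Even at one zero per particle, you fall short by a factor of about 10^43.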

There are a few amazing things about this result.  First, that physics, mathematics, and computer science have come to the point where we can actually calculate such a probability.  Second, that the probability here is so amazingly small.  Lastly, that such a fine tuned arrangement was "built in" to the big bang in order to naturally unfold into our universe.  It's astounding!

We are going to be moving forward in these posts with the assumption that we have sufficiently established the fact of fine tuning, both in the constants of nature and the initial conditions of the big bang.  We want to expressly mention that there is a very small minority of scientists who deny the fact of fine tuning altogether.  Their view is largely rejected by the scientific community as a whole, and the mistakes in their thinking are fairly easy to see.

We encourage you to look at this 76-page article by Luke Barnes that thoroughly examines and rejects the opinion of Victor Stenger.  It also does an excellent job of explaining a lot of the details of fine tuning.  (See pages 23-26 in particular for this post, where the author exposes the fallacies in Stenger's attack on Roger Penrose, and concludes "that Stenger has not only failed to solve the entropy problem; he has failed to comprehend it.  He has presented the problem itself as its solution.")

Tuesday, June 19, 2012

God vs The Multiverse (Part 3: The Solution)

The major breakthrough in our understanding of the constants became widespread in 1986 with the publication of Barrow and Tipler's landmark book, The Anthropic Cosmological Principle.  In it, they explained the constants using the strong anthropic principle.  (It comes in a weak form and a strong form, as well as many other misused forms.  Different authors use it in different ways, which has led to much confusion.  The key thing is not the labels, but rather an understanding of the different logical arguments employed.  See the Hawking article from the introduction for a specific example.)

The significant advance in our knowledge was the recognition that the constants were not arbitrary.  Rather, the constants were fine tuned in a way that only these specific values, within a very small range of variation, result in a universe with order, structure, complex life, etc.  Even slightly different values of the constants would lead to a random, chaotic, meaningless universe.

Some particular examples, among many, deal with stars.  Stars produce energy by fusing hydrogen nuclei into helium.  In that reaction, 0.007 of the mass of the hydrogen (0.7 percent) is converted into energy.  If that fraction were 0.006, the universe would be filled only with hydrogen.  If it were 0.008, the universe would have no hydrogen, and therefore no water and no stars like the sun.

Another example is the fine tuning of the fine structure constant of the previous post.  Barrow showed that if the constant was greater or smaller by 4%, the nuclear fusion in stars would not produce carbon, thereby making carbon-based life impossible.  (Max Born was actually the first physicist to recognize the key role this constant played in determining atomic structure in 1935 when he gave a lecture called The Mysterious Number 137.  It was only after 1986 however, that this type of explanation for many of the constants became widely understood.) 

One of the deeper ways to look at it: if the fundamental laws of physics stayed the same but the values of the different constants changed, we would still have physics, but we wouldn't have cosmology, astronomy, chemistry, or biology.  Change one number, and right after the big bang the universe either collapses in on itself or blows up too quickly to produce galaxies.  Change a different constant and stars don't form.  Change a different number and there are no atoms or periodic table.  Change another one and life never evolves.  Yet all the constants are perfectly fine tuned just right, so that we have these complex phenomena, and areas of beauty and wisdom in addition to physics.

We want to make it clear that we are not saying that the constants of nature were set exclusively for human existence (as the terminology 'anthropic principle' implies).  On the contrary, we believe that man should draw a very different conclusion about a human being's significance in the vast cosmos.  (We will develop this idea more in a later post.)  Rather, we are arguing that the constants were fine tuned to produce all the myriads of wondrous creations in the cosmos on all orders of magnitude, i.e., galaxies, nebulae, stars, quasars, pulsars, solar systems, planets, carbon-based DNA, the possibility of non-carbon-based life, intelligence, molecules, atoms, etc.

It is important to realize how this teleological explanation (which is one usage of the anthropic principle) removes the difficulty presented by Feynman in the prior post.  The mystery of the constants was how seemingly arbitrary numbers could be fundamental.  What was discovered was that these numbers were neither arbitrary nor fundamental, as they had seemed at first.  Rather, they were fine tuned in the sense that only these numbers, in conjunction with the qualitative laws of relativity and quantum mechanics, would lead to the universe we observe.

A teleological explanation is an explanation of something based upon a final cause or purpose.  For example, we could explain why a salt shaker has little holes on its top based upon its purpose of sprinkling salt on people's food.  That doesn't tell us what made the little holes, but it does explain why they are there, based upon the concept that the salt shaker was made to serve a certain purpose.

Similarly, the reason why the constants and the laws are designed the way they are is in order for the universe to result from them.  Were they even slightly different, all that would exist would be chaotic nonsense.  These particular numbers for the constants were chosen because the purpose of the laws and constants of physics is to produce a meaningful universe.

This explanation only became possible once science had an understanding of the laws of physics and the critical role that these quantities play in them.  Prior to this understanding, it would have been totally speculative to posit any type of teleological explanation.

The solution to the mystery is that the constants are not ultimately fundamental.  The Fundamental of the 'fundamental constants' is an Intelligent Agent who selected the specific values.   It is important to understand why this solution is not beset by the problem of having to determine the values of the constants to the 120th decimal place.  The demand to explain every last decimal place is only upon the Master Mathematical Equation theory which speculates that there exists some unique mathematical equation which precisely determines the numbers.  A unique equation does not determine a range of values.  (In fact, the Necessary Existence theory fails, not because it doesn't explain the number to precision, but because it fails to explain why it's even in the range.)

An Intelligent Agent is able to choose among a range of numbers (e.g., between 130 and 150), all of which yield the same result.  We can explain and understand why He didn't choose 129 or 151: since they are outside the range of values, He wouldn't have accomplished His purpose.  Unless we have more knowledge, we can't explain why He picked the exact number 137.03597.  If we discover in the future that it mattered more (meaning the range is only 136-138), then we will know why He didn't choose 135.  And if it didn't matter which value He chose so long as it was within the range, an Intelligent Agent is capable of choosing one value among many choices that all serve His purpose.  (You do it all the time.)

Explaining the constants with a final cause was unacceptable to many scientists.  'Purpose' is something we attribute to an Intelligent Agent.  While most physicists were willing to accept eternal, non-physical, non-intelligent laws as the cause of the universe, they were unable to consider that the cause of the universe was an Intelligent Agent who works with a final cause.  An Agent that was able to understand the result of His own actions was simply unacceptable.

Nevertheless, the point was clear.  The tie between the fine tuning of the constants and the order in the universe was undeniable.   It was incumbent upon scientists to either accept a teleological explanation and the clear inference to an Intelligent Cause, or to explain why the universe seemed like it was designed. The fine tuning directly pointed to an Intelligent Designer, and the burden of proof was on those who denied intelligent design to explain the illusion of design based upon some unintelligent mechanism.

The theories mentioned in the first post, that of the constants being necessary existences and that of the Master Mathematical Equation of the Universe, were no longer sufficient in any sense at all. They were developed when the conceptual problem of the constants was one of arbitrariness.  Given our new knowledge of the connection between the values for the constants and the resultant order and complexity in the universe, these theories rapidly fell even further out of favor. It is too coincidental to assume that the values determined by the hypothesized necessary existences or the Master Mathematical Equation of the Universe happen to be those which result in order and complexity many years later.

To illustrate the point, consider the following hypothetical example.  After years of unsuccessfully looking for life on Mars, scientists discover "something" which they cannot quite figure out. After years of analysis of its various parts, they realize that it is a one million year old spaceship which is perfectly suited for travelling on and around Mars.  Despite the fact that we have not as of yet found life on Mars, the perfect design of the spaceship is clear evidence that it was designed by some intelligent being (which we would know nothing about, other than the fact that it was intelligent).  If someone wanted to deny this and claim that it emerged by random chance or some master mathematical equation that necessitates spaceships on Mars, the burden of proof would be on them to develop a compelling theory of how this could have happened.

It should be clear that the spaceship on Mars is a crude analogy.  The wisdom and design found in our universe is much more profound, deep, and extensive than that of any advanced spaceship.  All of man's scientific endeavors lead him to realize that he is but scratching the surface of the deep wisdom that abounds in our universe.

We have included a short video about the cosmological constant and fine tuning with Leonard Susskind (one of the fathers of string theory and an advocate of the multiverse).  The cosmological constant (also known as dark energy) is recognized as one of the most striking examples of fine tuning, and also plays a critical role in big bang cosmology.  It is an excellent example of fine tuning as it is exceedingly simple to comprehend the consequences of varying it, on a basic level.  As a natural dimensionless value, the cosmological constant is on the order of 10^(-122) (a decimal point, followed by 121 zeroes, then a 1).  If it had a few more zeroes, the universe would have collapsed in on itself.  A few less zeroes and the universe blows up without forming any structure at all.  It is an excellent video that will blow your mind.


Sunday, June 17, 2012

God vs The Multiverse (Part 2: The Mystery)

Science tries to explain things through a process of simplification.  This means explaining one thing in terms of something else more basic.  Simplification generally means unifying different phenomena by explaining them in terms of fewer things.  For example, Newton's theory of gravity unified the phenomenon of things falling to the ground on Earth with the phenomenon of planets orbiting the sun.  Both things were explained in terms of one principle (gravity) which is more fundamental.

The most basic things are called 'fundamental'.  The most basic laws are called the 'fundamental laws of physics'.  The concept of 'fundamental' is of utmost importance in science.  Science is seeking to explain the most fundamental reality.  Science is seeking to explain everything in terms of one (ideally) fundamental theory.  This "theory of everything" will be the fundamental law of physics, in the sense that all other laws can be derived from it, but it cannot be explained in terms of anything simpler.

The most basic particles, 'fundamental particles', are those that can combine to make everything else that is more 'complex'.  These fundamental particles have intrinsic properties like mass.  The more mass something has, the more it weighs.  Every single electron in the universe has the exact same amount of mass.  We can quantify the amount of mass in an electron by comparing it to any proton.  Every proton is always 1,836.15267245 times more massive than any electron.  It is constantly that amount.  Hence, we call the mass of an electron a 'constant.'

The term 'constant' is used in physics to refer to a particular number that doesn't change, and tells us how big something is.  It could be how heavy an electron is, how fast light moves, how strong gravity is, etc.  All these things are finite quantities, which have particular, unchanging values that we only know through measurements and observations. These quantities are called constants.

How can science explain the value of the above mentioned constant in terms of something more fundamental?  What determines this number?  Why isn't it 2000 or 7.6453 or .000001?  Why aren't electrons more massive than protons?  Can science go any further?  How do you explain a number?

Richard Feynman expresses this difficulty in his book QED (page 129), with regard to one of these constants, the fine structure constant.  (He refers to it as the coupling constant.  Don't get scared if you don't understand what the fine structure constant is.  It's not essential to the proof.  Think about the mass of the electron if it is easier to relate to.)
"There is a most profound and beautiful question associated with the observed coupling constant...It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!"
What was the mystery that all good theoretical physicists worried about for 50 years? 

In our current conception of the fundamental laws of physics, there are 25 or so physical constants (specific quantities like the mass or charge of an electron), some of which are dimensionless physical constants (a pure number with no units.  This is not as abstract a concept as it sounds.  It basically just means a ratio between two quantities with the same units.)  One of these dimensionless constants is 0.08542455, which characterizes the strength of the electromagnetic force and is directly related to the charge of an electron.  (The bigger the number, the stronger the repulsive force between two electrons would be.)  The essential mystery is not tied to the fine structure constant in particular.  It is just one of the 25 examples.  When Feynman wrote this in 1985, all these constants were shrouded in this tremendous mystery.  What sense is there to specific numbers being fundamental?
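
As a quick arithmetic check of Feynman's parenthetical remark (our own calculation, not part of the quote), the number physicists "like to remember" really is the inverse of the square of the measured coupling:

```latex
\frac{1}{(0.08542455)^2} \approx \frac{1}{0.0072974} \approx 137.036
```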

In order to understand Feynman's question, you have to realize what he is assuming.  He is assuming that a number cannot be fundamental.  This is because it makes very little sense to say that the most basic existences in reality are 25 arbitrary numbers.  What Feynman is asking is that if these numbers are not fundamental, how can science possibly explain these constants in terms of something more fundamental?

An appreciation of this problem is necessary before we can move forward in the story.  Specific fundamental numerical values seem to defy any possible form of explanation.  It doesn't seem reasonable to believe that any qualitative physical theory will ever spit out a number like 137.03597 (and some of the other numbers are even worse).  They seem totally arbitrary. (It would be a different story if the numbers we were trying to produce were 1, 3, or the square root of 2 pi;  if it were numbers like these, maybe we could stand a chance at deriving them from some qualitative concept. For instance, if it involved pi, we would look for a qualitative law involving circles...) This was one of the biggest difficulties in modern physics.  We had absolutely no understanding about these fundamental constants, yet they were essential parts of our equations.

Two solutions were proposed (and still are by a minority of scientists) to try to explain where these arbitrary numbers came from.  The first theory simply stated that these 25 numbers were Necessary Existences (this is the theory Feynman is implicitly rejecting).  Needless to say, this did not satisfy most physicists.  While it is obvious that you will ultimately arrive at an idea which is irreducible and not explainable in terms of simpler concepts, it is one thing when your axiomatic ideas are nice theories such as general relativity and quantum mechanics (or maybe a grand unified theory if you prefer one eternal existence); it is altogether a different thing to have a pantheon filled by general relativity, quantum mechanics, and 25 arbitrary numbers, all necessarily coexisting.

A second theory speculated that perhaps these 25 numbers were necessary results of some qualitative Master Mathematical Equation that had yet to be discovered.  This too did not satisfy most physicists, as it does not seem plausible that any qualitative law would naturally generate the specificity of numbers required by observation.  There was a general state of discontent with these forced explanations, as they did not provide very much understanding or insight into the values of the constants.

What was the mystery that all good theoretical physicists worried about for 50 years?

Feynman's mystery is an abstract point.  (Notice that we haven't mentioned anything about probability or fine tuning.)

How could arbitrary numbers be a fundamental part of reality?  And if they weren't fundamental, what could possibly have caused the constants?

Wednesday, June 13, 2012

God vs The Multiverse (Part 1: Introduction)

We want it to be clear that we are not making any claims based upon religion, as the rational argument for God's Existence is not contingent on the belief in religion.  Try to keep this in mind when you read the comments on this first post, as our regular readers who do believe in the Torah, discuss the issue of the religious implications of a rational proof from science for God's Existence.

The common impression that modern day scientists convey to the world is that God is unnecessary, and that science has demonstrated that only religious fools still believe in Him.  The purpose of these posts is to show that the opposite is true.  Modern science has in fact supplied compelling evidence for the rational belief in the existence of God.

We want to make it clear from the outset that we are not seeking to prove Divine Providence from any of these arguments.  While it is obvious that the idea of an Intelligent Cause for the universe is consistent with a belief in Providence, the proof from the constants only establishes that there is an Intelligent Designer, not that He relates to mankind in a unique manner.

For someone whose belief in God is rooted in blind faith, scientific knowledge is not relevant.  But there are many others who seek a genuine knowledge that God exists, and are perplexed by the "conclusion" reached by atheistic scientists who deny God.  They realize the difference between knowledge and faith; that you can "will" yourself to have faith, but you cannot will yourself to have knowledge.  Knowledge demands an investigation into the nature of reality with an open mind, searching for the truth.  It is for these people that we wrote these posts.

We will present a proof for One Intelligent Cause for the universe.  This is the basic concept that the term 'God' will refer to in the context of these posts, until we elaborate upon it more in Stage Three.  We want to stress that the argument is not contingent upon any prior religious beliefs.  This proof is contingent only upon scientific knowledge (which we will support by citing major scientists) and philosophical reasoning.

In the proof, we will use inductive reasoning from the fine tuning of the constants in nature and the initial conditions of the big bang, to infer an Intelligent Designer of the universe.  What we mean by 'proof' is that a reasonable person would logically draw the same conclusion after understanding the arguments.  We do not mean 'proof' in the sense of a mathematical proof or deductive reasoning, but rather in the sense of a rational argument.

No proof from current science is absolute. Any proof from science is subject to the radical doubt that one's current model of reality is totally wrong.  Nevertheless, it is rational to use your mind to the best of your ability to establish what you believe to be true.  We are not trying to disprove skepticism.  We are assuming the reader accepts the human mind, its ability to reason correctly about reality, and the validity of the scientific method in reaching true conclusions.  To say it succinctly, if you do not think that science has proven that the sun will rise tomorrow in the east, then you will not think this is a proof.  This proof will not exceed the reality you grant to scientific knowledge in general.

Our main objectives are to show a path in studying the deep wisdom in the creation as revealed by modern science, and also to present a proof of God from the constants and initial conditions.  Because of this dual objective, we will be including many interesting ideas from modern science that are related to the proof, even though it is not contingent upon them.  We will try to be clear about what is necessary for the proof, and what is only to provide a direction to understanding the great wisdom in the universe, as revealed through modern scientific knowledge.

Unfortunately, it is anathema to most scientists to recognize a non-physical, intelligent cause.  So they deny it.  The prevalent trend in explaining away the proof is the theory of the multiverse.  Reading Stephen Hawking's article in the Wall Street Journal entitled Why God Did Not Create the Universe, as well as an article in Discover magazine entitled Science's Alternative to an Intelligent Creator, will be helpful in gaining background for some of the issues that we will be discussing.  Many top physicists and cosmologists believe in some version of the multiverse, and it seems that every year, more and more join the ranks of believers.  By some accounts, most physicists currently have faith in it.

In general, the proofs that scientists use for the multiverse are, in fact, the best proofs for God.  There is a part of a person which initially doubts that there is a proof from science, simply based upon the fact that most scientists don't believe in God.  However, one's conviction in the reality of the true God can be qualitatively increased when he sees what many scientists are compelled to believe in an effort to deny an Intelligent Cause.  The greatest scientific minds of our generation would not posit something as wildly speculative as the multiverse, were it not for the fact that the necessary alternative is an Intelligent Designer.

The proof is predicated upon a person recognizing that the universe that we observe is special, in the sense that it is highly structured and ordered on all scales of magnitude and complexity; that it has incredible beauty, symmetry, and simplicity from its most fundamental laws to the complex organisms that inhabit it.  We have never heard any scientist dispute this point, and we think everyone who has basic scientific knowledge understands it.  This amazing interactive site helps convey an appreciation of this idea.

We have embedded videos in many of the posts, as well as linked to scientific articles.  While it is not necessary to watch and read everything, we highly recommend the videos in particular, as they will greatly enhance your appreciation for the main points in the posts.  We will also include many links to Wikipedia articles that further elaborate on the points and provide some background information.  Don't get overwhelmed by the Wiki links (the first paragraph generally has the relevant information needed to follow the post), as the videos and articles are the main sources.

We will try to keep these posts as short and clear as possible, and we encourage you to click on the more significant links in order to deepen your understanding of the issues involved.  We will not be able to take up every point in the articles and videos we link to.  However, we will try to answer specific questions you have in the comments section of each particular post.  If you have any questions on what we say, or if you want to add any points that we missed, feel free to do so in the comments.  We hope that an active discussion about the ideas of these posts, with us and between the readers themselves, will help illuminate the many nuances of the proof.

We will only mention a few of the many constants that science knows are fine tuned.  You can find a more detailed explanation of the fine tuning of specific constants in the book Just Six Numbers by Martin Rees (who also happens to believe in the multiverse), which is intended for the general reader.  There are many other good sources on the web and YouTube, should you choose to pursue the matter further.

There will be three stages to the proof.  In Stage One, we will be following along with the scientists.  We are well aware that we are not world renowned physicists, and we do not expect you to accept the facts of modern science based on our authority. (While one of us does have a PhD in mathematics, and the other has a Bachelor's degree in physics, we know that does not make us experts by any stretch of the imagination.)  We will be presenting well established science throughout Stage One, using the scientists themselves in videos, articles, and Wiki links.  We will also provide explanations of the material, so that you can understand the arguments first hand, to the best of your ability.

We will depart from the scientists in Stage Two of the proof, when they argue that the evidence points to a multiverse.  We will argue clearly and persuasively (we hope), that the multiverse is not a viable alternative explanation to an Intelligent Designer.  In Stage Three we will present a formulation for the Intelligent Designer of the universe, that stands up to all the numerous questions that atheistic scientists lodge against God.